{"source": "This paper presents Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we simply cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural net to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural net knows about where and what the objects are, we just need to teach it how to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset, compared to highly specialized and well optimized detection algorithms.", "target": ["物体検出を言語モデルの枠組みで解いた研究。bounding boxとラベルをテキスト記述とする(位置は連続値であるため、区間(bin)で区切り離散化する)。Encoder/Decoderの構造で、画像をEncodeした結果からDecoderで検出結果記述を生成。Faster R-CNN/DETRと同等精度を達成。"]} {"source": "A “bigger is better” explosion in the number of parameters in deep neural networks has made it increasingly challenging to make state-of-the-art networks accessible in compute-restricted environments. Compression techniques have taken on renewed importance as a way to bridge the gap. However, evaluation of the trade-offs incurred by popular compression techniques has been centered on high-resource datasets. In this work, we instead consider the impact of compression in a data-limited regime. We introduce the term low-resource double bind to refer to the co-occurrence of data limitations and compute resource constraints. This is a common setting for NLP for low-resource languages, yet the trade-offs in performance are poorly studied. Our work offers surprising insights into the relationship between capacity and generalization in data-limited regimes for the task of machine translation. Our experiments on magnitude pruning for translations from English into Yoruba, Hausa, Igbo and German show that in low-resource regimes, sparsity preserves performance on frequent sentences but has a disparate impact on infrequent ones. However, it improves robustness to out-of-distribution shifts, especially for datasets that are very distinct from the training distribution. Our findings suggest that sparsity can play a beneficial role at curbing memorization of low frequency attributes, and therefore offers a promising solution to the low-resource double bind.", "target": ["学習データが少ない場合に、モデルの枝刈りを行うと性能にどのような影響があるのか調べた研究。計算資源だけでなく学習データも限られているケース(low-resource double bind)を想定している。高頻度文では性能(BLEU/人手評価)を維持し辞書外単語にも強くなる一方、低頻度文では性能低下がみられる。"]} {"source": "A major challenge for scaling machine learning is training models to perform tasks that are very difficult or time-consuming for humans to evaluate. We present progress on this problem on the task of abstractive summarization of entire fiction novels. Our method combines learning from human feedback with recursive task decomposition: we use models trained on smaller parts of the task to assist humans in giving feedback on the broader task. We collect a large volume of demonstrations and comparisons from human labelers, and fine-tune GPT-3 using behavioral cloning and reward modeling to do summarization recursively. At inference time, the model first summarizes small sections of the book and then recursively summarizes these summaries to produce a summary of the entire book. 
Our human labelers are able to supervise and evaluate the models quickly, despite not having read the entire books themselves. Our resulting model generates sensible summaries of entire books, even matching the quality of human-written summaries in a few cases (\\sim5\\% of books). We achieve state-of-the-art results on the recent BookSum dataset for book-length summarization. A zero-shot question-answering model using these summaries achieves state-of-the-art results on the challenging NarrativeQA benchmark for answering questions about books and movie scripts. We release datasets of samples from our model.", "target": ["長い文章を人のフィードバックを活用し要約する研究。文章を分割し、分割単位ごとモデルで要約を行う。人から要約と要約比較結果を取得し、前者で模倣学習、後者で学習した報酬関数による強化学習 2つでモデルのFineTuneを行う。各単位同様に学習し、さらに要約の要約を行う2段目以降も同様に学習する"]} {"source": "Large Transformer models have been central to recent advances in natural language processing. The training and inference costs of these models, however, have grown rapidly and become prohibitively expensive. Here we aim to reduce the costs of Transformers by searching for a more efficient variant. Compared to previous approaches, our search is performed at a lower level, over the primitives that define a Transformer TensorFlow program. We identify an architecture, named Primer, that has a smaller training cost than the original Transformer and other variants for auto-regressive language modeling. Primer's improvements can be mostly attributed to two simple modifications: squaring ReLU activations and adding a depthwise convolution layer after each Q, K, and V projection in self-attention.", "target": ["計算効率の高いTransformerを進化的アルゴリズムで探索した研究。TensorFlowの演算子と、演算子を組み合わせ(サブプログラム)からTensorFlowのコードを直接生成し学習/評価する。self-attention内のQ、K、Vにdepthwiseのconvolutionをかける、ReLU後のsquare等の機構を発見。5億パラメーターのT5の学習時間を4倍削減。"]} {"source": "Experiments show Primer's gains over Transformer increase as compute scale grows and follow a power law with respect to quality at optimal model sizes. We also verify empirically that Primer can be dropped into different codebases to significantly speed up training without additional tuning. For example, at a 500M parameter size, Primer improves the original T5 architecture on C4 auto-regressive language modeling, reducing the training cost by 4X. Furthermore, the reduced training cost means Primer needs much less compute to reach a target one-shot performance. For instance, in a 1.9B parameter configuration similar to GPT-3 XL, Primer uses 1/3 of the training compute to achieve the same one-shot performance as Transformer. We open source our models and several comparisons in T5 to help with reproducibility.", "target": ["使用するCPU/GPUの機種がDNN最適化に与える影響を調べた研究。構造が異なる2種のモデル(AutoEncoder/PredNet)について、cuDNN functionsの使用有無・初期値のランダム化有無を変えて検証している。lossはfunctionsなし、初期値固定が最も安定しCPUよりGPUの方が低い結果。"]} {"source": "Large language models (LM) generate remarkably fluent text and can be efficiently adapted across NLP tasks. Measuring and guaranteeing the quality of generated text in terms of safety is imperative for deploying LMs in the real world; to this end, prior work often relies on automatic evaluation of LM toxicity. We critically discuss this approach, evaluate several toxicity mitigation strategies with respect to both automatic and human evaluation, and analyze consequences of toxicity mitigation in terms of model bias and LM quality. 
We demonstrate that while basic intervention strategies can effectively optimize previously established automatic metrics on the RealToxicityPrompts dataset, this comes at the cost of reduced LM coverage for both texts about, and dialects of, marginalized groups. Additionally, we find that human raters often disagree with high automatic toxicity scores after strong toxicity reduction interventions -- highlighting further the nuances involved in careful evaluation of LM toxicity.", "target": ["言語モデルを無毒化する手法を検証した研究。無毒化の手法を適用することで自動評価指標は改善できるが、適用後のテキストに対する評価は人間評価との乖離が大きくなる。また、コーパスを無毒化するとマイノリティグループに関するテキストをそもそも生成しなくなる問題がある"]} {"source": "A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets and clear articulations of the remaining challenges. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding. In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the computational linguistics community.", "target": ["自然言語処理に因果推論を組み合わせた研究のサーベイ。相関関係が因果関係でない例として、プロフィールの性別が女性だと投稿へのいいね数が少ない、テキストからの医療診断をする際学習データにない地域のデータだと精度が悪化する、などを挙げ実例ベースで解説している。"]} {"source": "In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture. Such systems have inspired development of artificial intelligence algorithms in areas such as swarm optimization and cellular automata. Motivated by the emergence of collective behavior from complex cellular systems, we build systems that feed each sensory input from the environment into distinct, but identical neural networks, each with no fixed relationship with one another. We show that these sensory networks can be trained to integrate information received locally, and through communication via an attention mechanism, can collectively produce a globally coherent policy. Moreover, the system can still perform its task even if the ordering of its inputs is randomly permuted several times during an episode. These permutation invariant systems also display useful robustness and generalization properties that are broadly applicable. Interactive demo and videos of our results: this https URL", "target": ["モデルへ入力する変数の順序を変えても挙動が変わらないようにする手法。個別の入力を受け付ける独立したネットワークを用意し、各ネットワーク出力をAttentionで統合し行動を決めるためのベクトルを作成する。エピソード中入力をシャッフルしても学習できることを確認。"]} {"source": "Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. 
In this paper, we present \\textbf{LayoutLMv2} by pre-training text, layout and image in a multi-modal framework, where new model architectures and pre-training tasks are leveraged. Specifically, LayoutLMv2 not only uses the existing masked visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training stage, where cross-modality interaction is better learned. Meanwhile, it also integrates a spatial-aware self-attention mechanism into the Transformer architecture, so that the model can fully understand the relative positional relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 -> 0.8420), CORD (0.9493 -> 0.9601), SROIE (0.9524 -> 0.9781), Kleister-NDA (0.834 -> 0.852), RVL-CDIP (0.9443 -> 0.9564), and DocVQA (0.7295 -> 0.8672). The pre-trained LayoutLMv2 model is publicly available at this https URL.", "target": ["文書画像にあるマルチモーダルな特徴で事前学習をした研究。テキスト・画像・レイアウト位置の3つを入力とし、単語穴埋め・テキストが画像上マスクされているかの判定・テキストが含まれる画像かの判定、の3タスクを解く。文書画像理解の各種タスクでSOTAを更新。"]} {"source": "AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.", "target": ["大規模データで学習して様々なタスクに転移できるモデルを\"Foundation Model\"と名付け、その適用領域(言語・画像・ロボティクス等)、応用領域(医療・法律・教育)、構築技術(セキュリティや解釈性も含む)、社会的インパクトなどをまとめたサーベイ。"]} {"source": "A commonly observed problem with the state-of-the-art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents. The fact that automatic summarization may produce plausible-sounding yet inaccurate summaries is a major concern that limits its wide application. In this paper we present an approach to address factual consistency in summarization. We first propose an efficient automatic evaluation metric to measure factual consistency; next, we propose a novel learning algorithm that maximizes the proposed metric during model training. 
Through extensive experiments, we confirm that our method is effective in improving factual consistency and even overall quality of the summaries, as judged by both automatic metrics and human evaluation.", "target": ["もっともらしいが元文とは異なる要約の生成を抑止する研究。QAモデルに元文/要約文を入力した際どれだけ回答が一致するかで事実一致を計測するが、Q・Aの生成とQAモデルが必要だと追加の学習コストが大きくなる。そこで、言語モデルの仕組みでQ/Aを同時に生成する手法を提案"]} {"source": "This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub \"prompt-based learning\". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g. the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website this http URL including constantly-updated survey, and paperlist.", "target": ["事前学習済みモデルに目的のlabel以外を埋めたフォーム(prompt)を与え答えさせるprompt learningのサーベイ資料。追加学習が不要なためzero/few shotが容易な利点がある。promptの定義、promptの作成(手動/自動)、fillされた値の解釈、といった基本から応用まで幅広に解説。"]} {"source": "Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. We formulate desiderata for an ideal few-shot NLP benchmark and present FLEX, the first benchmark, public leaderboard, and framework that provides unified, comprehensive measurement for few-shot NLP techniques. FLEX incorporates and introduces new best practices for few-shot evaluation, including measurement of four transfer settings, textual labels for zero-shot evaluation, and a principled approach to benchmark design that optimizes statistical accuracy while keeping evaluation costs accessible to researchers without large compute resources. In addition, we present UniFew, a simple yet strong prompt-based model for few-shot learning which unifies the pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. 
We demonstrate that despite simplicity UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.", "target": ["自然言語処理のFew-shot learningのベンチマークデータセット。クラス間転移、ドメイン転移、タスク転移、そして事前学習済みモデル転移の計4つの手法すべてを評価できるよう作成されている。また、事前学習/promptを多肢選択QAの形式にそろえるUniFewを提案。"]} {"source": "As buildings are central to the social and environmental sustainability of human settlements, high-quality geospatial data are necessary to support their management and planning. Authorities around the world are increasingly collecting and releasing such data openly, but these are mostly disconnected initiatives, making it challenging for users to fully leverage their potential for urban sustainability. We conduct a global study of 2D geospatial data on buildings that are released by governments for free access, ranging from individual cities to whole countries. We identify and benchmark more than 140 releases from 28 countries containing above 100 million buildings, based on five dimensions: accessibility, richness, data quality, harmonisation, and relationships with other actors. We find that much building data released by governments is valuable for spatial analyses, but there are large disparities among them and not all instances are of high quality, harmonised, and rich in descriptive information. Our study also compares authoritative data to OpenStreetMap, a crowdsourced counterpart, suggesting a mutually beneficial and complementary relationship.", "target": ["世界中の建物に関するデータを集め評価した研究。2D、3D、建物属性のテーブルデータなどがあるが、ここでは2Dのデータに絞っている。評価はアクセシビリティや豊富さ、データ品質、などの定性面と、更新頻度や標準への準拠有無といった定量面、総計13の指標で行っている。"]} {"source": "The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. In this work, we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph neural networks. We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.", "target": ["知識グラフを使用して多肢選択回答を行う手法。質問文と選択肢(コンテキスト)から知識グラフ内の関連箇所(サブグラフ)を抽出し、言語モデルで関連度を判定し重みづけを行う。抽出したサブグラフにコンテキストを追加ノードとして連結し、Graph Attention Networkの重みから回答を推論する。"]} {"source": "At least a quarter of the warming that the Earth is experiencing today is due to anthropogenic methane emissions. There are multiple satellites in orbit and planned for launch in the next few years which can detect and quantify these emissions; however, to attribute methane emissions to their sources on the ground, a comprehensive database of the locations and characteristics of emission sources worldwide is essential. In this work, we develop deep learning algorithms that leverage freely available high-resolution aerial imagery to automatically detect oil and gas infrastructure, one of the largest contributors to global methane emissions. 
We use the best algorithm, which we call OGNet, together with expert review to identify the locations of oil refineries and petroleum terminals in the U.S. We show that OGNet detects many facilities which are not present in four standard public datasets of oil and gas infrastructure. All detected facilities are associated with characteristics known to contribute to methane emissions, including the infrastructure type and the number of storage tanks. The data curated and produced in this study is freely available at this http URL .", "target": ["温暖化の原因となるメタンガスの排出施設を特定する研究。航空画像(NAIP: National Agriculture Imagery Program)を500 x 500ピクセルに切り分け、DenseNetにかけて石油精製施設かどうかを判定する。実際の施設かどうかアノテーションしたデータセットも併せて公開。"]} {"source": "Information overload is a prevalent challenge in many high-value domains. A prominent case in point is the explosion of the biomedical literature on COVID-19, which swelled to hundreds of thousands of papers in a matter of months. In general, biomedical literature expands by two papers every minute, totalling over a million new papers every year. Search in the biomedical realm, and many other vertical domains is challenging due to the scarcity of direct supervision from click logs. Self-supervised learning has emerged as a promising direction to overcome the annotation bottleneck. We propose a general approach for vertical search based on domain-specific pretraining and present a case study for the biomedical domain. Despite being substantially simpler and not using any relevance labels for training or development, our method performs comparably or better than the best systems in the official TREC-COVID evaluation, a COVID-related biomedical search competition. Using distributed computing in modern cloud infrastructure, our system can scale to tens of millions of articles on PubMed and has been deployed as Microsoft Biomedical Search, a new search experience for biomedical literature: this https URL.", "target": ["生物医学論文に特化した検索エンジン構築のための技術紹介。PubMed BERTを用いて検索結果のRe-Rankingを行う。"]} {"source": "Humans have been shown to give contrastive explanations, which explain why an observed event happened rather than some other counterfactual event (the contrast case). Despite the influential role that contrastivity plays in how humans explain, this property is largely missing from current methods for explaining NLP models. We present Minimal Contrastive Editing (MiCE), a method for producing contrastive explanations of model predictions in the form of edits to inputs that change model outputs to the contrast case. Our experiments across three tasks--binary sentiment classification, topic classification, and multiple-choice question answering--show that MiCE is able to produce edits that are not only contrastive, but also minimal and fluent, consistent with human contrastive edits. We demonstrate how MiCE edits can be used for two use cases in NLP system development--debugging incorrect model outputs and uncovering dataset artifacts--and thereby illustrate that producing contrastive explanations is a promising research direction for model interpretability.", "target": ["自然言語処理モデルの予測根拠を明らかにする研究。多肢選択を解くモデルで、予測以外の選択を「しなかった」理由を説明可能にする。予測モデルが指定されたラベルを予測するよう、Maskした元文の穴埋めを学習する。これにより予測ラベルに応じた文書の書き換えを行い、変更箇所から根拠を推定する。"]} {"source": "Humans have been shown to give contrastive explanations, which explain why an observed event happened rather than some other counterfactual event (the contrast case). 
Despite the influential role that contrastivity plays in how humans explain, this property is largely missing from current methods for explaining NLP models. We present Minimal Contrastive Editing (MiCE), a method for producing contrastive explanations of model predictions in the form of edits to inputs that change model outputs to the contrast case. Our experiments across three tasks--binary sentiment classification, topic classification, and multiple-choice question answering--show that MiCE is able to produce edits that are not only contrastive, but also minimal and fluent, consistent with human contrastive edits. We demonstrate how MiCE edits can be used for two use cases in NLP system development--debugging incorrect model outputs and uncovering dataset artifacts--and thereby illustrate that producing contrastive explanations is a promising research direction for model interpretability.", "target": ["文字を入力とするTransformer。文字をサブワード化するネットワーク(GBSD)を組み込んでおり、単に文字を入力とするより語彙の意味をとれるようにしている。GLUEベンチマークで既存の文字(バイト)レベルより、またサブワードと同等か超える精度を達成。さらに、28~100%高速。"]} {"source": "We consider repair tasks: given a critic (e.g., compiler) that assesses the quality of an input, the goal is to train a fixer that converts a bad example (e.g., code with syntax errors) into a good one (e.g., code with no syntax errors). Existing works create training data consisting of (bad, good) pairs by corrupting good examples using heuristics (e.g., dropping tokens). However, fixers trained on this synthetically-generated data do not extrapolate well to the real distribution of bad inputs. To bridge this gap, we propose a new training approach, Break-It-Fix-It (BIFI), which has two key ideas: (i) we use the critic to check a fixer's output on real bad inputs and add good (fixed) outputs to the training data, and (ii) we train a breaker to generate realistic bad code from good code. Based on these ideas, we iteratively update the breaker and the fixer while using them in conjunction to generate more paired data. We evaluate BIFI on two code repair datasets: GitHub-Python, a new dataset we introduce where the goal is to repair Python code with AST parse errors; and DeepFix, where the goal is to repair C code with compiler errors. BIFI outperforms existing methods, obtaining 90.5% repair accuracy on GitHub-Python (+28.5%) and 71.7% on DeepFix (+5.6%). Notably, BIFI does not require any labeled data; we hope it will be a strong starting point for unsupervised learning of various repair tasks.", "target": ["自然言語処理でソースコードを修正する研究。コードの良し悪しを判定するcriticと、コードを崩して悪い例を作るbreakerを交互に学習する。学習したcriticをラベルなしデータに適用することで、学習データを増やしていく。既存手法を大幅に上回る精度を達成。"]} {"source": "Many NLP tasks such as tagging and machine reading comprehension are faced with the severe data imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms the training. The most commonly used cross entropy (CE) criteria is actually an accuracy-oriented objective, and thus creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, while at test time F1 score concerns more about positive examples. In this paper, we propose to use dice loss in replacement of the standard cross-entropy objective for data-imbalanced NLP tasks. Dice loss is based on the Sorensen-Dice coefficient or Tversky index, which attaches similar importance to false positives and false negatives, and is more immune to the data-imbalance issue. 
To further alleviate the dominating influence from easy-negative examples in training, we propose to associate training examples with dynamically adjusted weights to deemphasize easy-negative examples. Theoretical analysis shows that this strategy narrows down the gap between the F1 score in evaluation and the dice loss in training. With the proposed training objective, we observe significant performance boost on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5, CTB6 and UD1.4 for the part of speech tagging task; SOTA results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity recognition task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification.", "target": ["自然言語処理で不均衡データを分類する際の、損失関数を工夫した研究。Cross-Entropyは正負のサンプルを等しく評価する精度ベースだが、モデルが評価されるときはF1で正のサンプルが重視される。そのため損失関数自体もF1の考えに近いDice Lossを導入している。"]} {"source": "Most applications of machine learning in criminal law focus on making predictions about people and using those predictions to guide decisions. For example, judges use risk assessment tools to predict the likelihood of future violence when making decisions about whom to detain pre-trial. Whereas this predictive technology analyzes people about whom decisions are made, we propose a new direction for machine learning that scrutinizes decision-making itself. Our aim is not to predict behavior, but to provide the public with data-driven opportunities to improve the fairness and consistency of human discretionary judgment. We call our approach the Recon Approach because it encompasses two functions: reconnaissance and reconsideration. Reconnaissance harnesses natural language processing to cull through thousands of hearing transcripts and illuminate factors that appear to have influenced decisions at those hearings. Reconsideration uses modeling techniques to identify cases that appear anomalous in a way that warrants a closer review of those decisions. Reconnaissance reveals patterns that may show systemic problems across a set of decisions; reconsideration flags potential errors or injustices in individual cases. As a team of computer scientists and legal scholars, we describe our early work to apply the Recon Approach to parole-release decisions in California. Drawing on that work, we discuss challenges to the Recon Approach, as well as its potential to apply to sentencing and other discretionary decision-making contexts within and beyond criminal law.", "target": ["受刑者を仮釈放するかなど、法律判断に自然言語処理を応用する際の課題をまとめた記事。従来の適用はYes/Noを答えるようなモデルを構築しているが、下された判断の根拠推定、これまでの判断の妥当性検証(異常検知)に使用することを提案している。"]} {"source": "In computer vision, it is standard practice to draw a single sample from the data augmentation procedure for each unique image in the mini-batch, however it is not clear whether this choice is optimal for generalization. In this work, we provide a detailed empirical evaluation of how the number of augmentation samples per unique image influences performance on held out data. Remarkably, we find that drawing multiple samples per image consistently enhances the test accuracy achieved for both small and large batch training, despite reducing the number of unique training examples in each mini-batch. This benefit arises even when different augmentation multiplicities perform the same number of parameter updates and gradient evaluations. 
Our results suggest that, although the variance in the gradient estimate arising from subsampling the dataset has an implicit regularization benefit, the variance which arises from the data augmentation process harms test accuracy. By applying augmentation multiplicity to the recently proposed NFNet model family, we achieve a new ImageNet state of the art of 86.8\\% top-1 w/o extra data.", "target": ["1サンプルをどこまでデータ拡張で増やしてよいか検証した研究。データ拡張を行うほど精度が高く、学習時間が短くなることを確認。16倍の拡張でImageNetのSOTAを記録(モデルはNFNet-F5、最適化手法はSAM)。データ全体の分散は下がるが、個々サンプルの分散は上がることに拠る"]} {"source": "Tabular data underpins numerous high-impact applications of machine learning from fraud detection to genomics and healthcare. Classical approaches to solving tabular problems, such as gradient boosting and random forests, are widely used by practitioners. However, recent deep learning methods have achieved a degree of performance competitive with popular techniques. We devise a hybrid deep learning approach to solving tabular data problems. Our method, SAINT, performs attention over both rows and columns, and it includes an enhanced embedding method. We also study a new contrastive self-supervised pre-training method for use when labels are scarce. SAINT consistently improves performance over previous deep learning methods, and it even outperforms gradient boosting methods, including XGBoost, CatBoost, and LightGBM, on average over a variety of benchmark tasks.", "target": ["テーブルデータの予測にTransformer x 事前学習を活用した研究。通常のSelf-Attentionに加えサンプル表現間のSelf-Attention(Intersample Attention)を使用。入力にCutMix/潜在表現にMixupを適用し、対照とノイズ復元、2種の損失で学習を行う。Boosting既存手法より高い精度"]} {"source": "A special purpose learning system assumes knowledge of admissible tasks at design time. Adapting such a system to unforeseen tasks requires architecture manipulation such as adding an output head for each new task or dataset. In this work, we propose a task-agnostic vision-language system that accepts an image and a natural language task description and outputs bounding boxes, confidences, and text. The system supports a wide range of vision tasks such as classification, localization, question answering, captioning, and more. We evaluate the system's ability to learn multiple skills simultaneously, to perform tasks with novel skill-concept combinations, and to learn new skills efficiently and without forgetting.", "target": ["1モデルで画像/言語双方のタスクを解く手法の提案。画像と画像に関する質問を入力とし、質問に関連する物体領域と回答を出力とする。マルチタスクで学習することで、各タスク専門のモデルより良好な精度を達成。また、破壊的忘却も起こりにくくなる。"]} {"source": "Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. 
We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.", "target": ["言語固有の単語分割が必要なトークンベースでなく、言語共通のUTF-8バイトベースでテキスト処理する研究。バイトにする分長くなる入力に対し、最小限のモデル変更で対応。事前学習時のスパン幅変更、EncoderのサイズをDecoderより大きくするといった工夫を行っている。"]} {"source": "We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.", "target": ["Transformerを強化学習に応用した研究。State/Action/Rewardの系列を入力して次の行動を予測させる。収録済みの軌跡から学習するオフライン強化学習で、既存の手法を上回る精度(オンラインの強化学習ではまだ検証されていない)。"]} {"source": "Intelligent behaviour in the physical world exhibits structure at multiple spatial and temporal scales. Although movements are ultimately executed at the level of instantaneous muscle tensions or joint torques, they must be selected to serve goals defined on much longer timescales, and in terms of relations that extend far beyond the body itself, ultimately involving coordination with other agents. Recent research in artificial intelligence has shown the promise of learning-based approaches to the respective problems of complex movement, longer-term planning and multi-agent coordination. However, there is limited research aimed at their integration. We study this problem by training teams of physically simulated humanoid avatars to play football in a realistic virtual environment. We develop a method that combines imitation learning, single- and multi-agent reinforcement learning and population-based training, and makes use of transferable representations of behaviour for decision making at different levels of abstraction. In a sequence of stages, players first learn to control a fully articulated body to perform realistic, human-like movements such as running and turning; they then acquire mid-level football skills such as dribbling and shooting; finally, they develop awareness of others and play as a team, bridging the gap between low-level motor control at a timescale of milliseconds, and coordinated goal-directed behaviour as a team at the timescale of tens of seconds. We investigate the emergence of behaviours at different levels of abstraction, as well as the representations that underlie these behaviours using several analysis techniques, including statistics from real-world sports analytics. 
Our work constitutes a complete demonstration of integrated decision-making at multiple scales in a physically embodied multi-agent setting. See project video at this https URL.", "target": ["強化学習で2足歩行といった複雑な制御と、長期かつマルチエージェントの学習を統合して行った研究。サッカーを題材としボールを追う/ドリブルするための制御、メンバーと協調し相手ゴールにいれる行動を学習させている。制御は模倣、協調は複数環境での同時学習で行っている"]} {"source": "Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple network architecture, gMLP, based on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream NLP tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers. In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute.", "target": ["全結合層のネットワークで画像(Vision Transformer)、言語(BERT)のタスクでTransformerを超える精度を達成したという研究。チャンネル(系列)方向に線形変換=>活性関数(GeLU)=>線形変換の掛け合わせ(系列間のインタラクション計算)を行うブロックを組み合わせて演算する。"]} {"source": "Modern machine learning models for computer vision exceed humans in accuracy on specific visual recognition tasks, notably on datasets like ImageNet. However, high accuracy can be achieved in many ways. The particular decision function found by a machine learning system is determined not only by the data to which the system is exposed, but also the inductive biases of the model, which are typically harder to characterize. In this work, we follow a recent trend of in-depth behavioral analyses of neural network models that go beyond accuracy as an evaluation metric by looking at patterns of errors. Our focus is on comparing a suite of standard Convolutional Neural Networks (CNNs) and a recently-proposed attention-based network, the Vision Transformer (ViT), which relaxes the translation-invariance constraint of CNNs and therefore represents a model with a weaker set of inductive biases. Attention-based networks have previously been shown to achieve higher accuracy than CNNs on vision tasks, and we demonstrate, using new metrics for examining error consistency with more granularity, that their errors are also more consistent with those of humans. These results have implications both for building more human-like vision models, as well as for understanding visual object recognition in humans.", "target": ["CNNとTransformerを、精度ではなくエラーの傾向から比較した研究。Transformerベースのモデルの方が、(精度が高いだけでなく)より人間に近いエラー傾向を示している。"]} {"source": "Recent advances in large-scale pre-training such as GPT-3 allow seemingly high quality text to be generated from a given prompt. However, such generation systems often suffer from problems of hallucinated facts, and are not inherently designed to incorporate useful external information. Grounded generation models appear to offer remedies, but their training typically relies on rarely-available parallel data where corresponding information-relevant documents are provided for context. We propose a framework that alleviates this data constraint by jointly training a grounded generator and document retriever on the language model signal. 
The model learns to reward retrieval of the documents with the highest utility in generation, and attentively combines them using a Mixture-of-Experts (MoE) ensemble to generate follow-on text. We demonstrate that both generator and retriever can take advantage of this joint training and work synergistically to produce more informative and relevant text in both prose and dialogue generation.", "target": ["根拠に根差したテキスト生成を行うための研究。潜在空間上の距離から先行文書に近い根拠となる文書(本研究ではWikipedia)を検索し、生成文を先行文書に対する尤度だけでなく先行文書/検索した根拠に対する尤度からも評価する。対話文生成で効果を確認。"]} {"source": "In the era of pre-trained language models, Transformers are the de facto choice of model architectures. While recent research has shown promise in entirely convolutional, or CNN, architectures, they have not been explored using the pre-train-fine-tune paradigm. In the context of language models, are convolutional models competitive to Transformers when pre-trained? This paper investigates this research question and presents several interesting findings. Across an extensive set of experiments on 8 datasets/tasks, we find that CNN-based pre-trained models are competitive and outperform their Transformer counterpart in certain scenarios, albeit with caveats. Overall, the findings outlined in this paper suggest that conflating pre-training and architectural advances is misguided and that both advances should be considered independently. We believe our research paves the way for a healthy amount of optimism in alternative architectures.", "target": ["Transformerを用いた事前学習済みモデルが注目されているが、CNNで事前学習を行えば同等・超える精度が記録できるとした研究。DepthwiseのCNNなどでT5と同様のseq2seqで学習。事前学習とモデル構造は分けて論じるべきとしている。"]} {"source": "We algorithmically identify label errors in the test sets of 10 of the most commonly-used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of 3.4% errors across the 10 datasets, where for example 2916 label errors comprise 6% of the ImageNet validation set. Putative label errors are found using confident learning and then human-validated via crowdsourcing (54% of the algorithmically-flagged candidates are indeed erroneously labeled). Surprisingly, we find that lower capacity models may be practically more useful than higher capacity models in real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels: ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by 5%. Traditionally, ML practitioners choose which model to deploy based on test accuracy -- our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets.", "target": ["機械学習で用いられるベンチマークのデータセット内にある、誤ったラベルのデータを訂正した研究。画像はImageNetやCIFAR等、自然言語はIMDB等、音声はAudioSet等各タスクで検証している。全体として3.4%程度誤りがあり訂正すると小さいモデルの方が精度が高くなる傾向がある"]} {"source": "We algorithmically identify label errors in the test sets of 10 of the most commonly-used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. 
Errors in test sets are numerous and widespread: we estimate an average of 3.4% errors across the 10 datasets, where for example 2916 label errors comprise 6% of the ImageNet validation set. Putative label errors are found using confident learning and then human-validated via crowdsourcing (54% of the algorithmically-flagged candidates are indeed erroneously labeled). Surprisingly, we find that lower capacity models may be practically more useful than higher capacity models in real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels: ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by 5%. Traditionally, ML practitioners choose which model to deploy based on test accuracy -- our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets.", "target": ["事前学習済みモデルでFine TuneとPromptどちらが有効かSuperGLUEのタスクで検証した研究。Promptは入力文に「文書の分類はニュース?」等のpatternを付与して入力し、生成結果をクラス等に変換する(verbalizer)。Reasoning以外ではPromptの方が大幅に精度が高い。"]} {"source": "NLP systems rarely give special consideration to numbers found in text. This starkly contrasts with the consensus in neuroscience that, in the brain, numbers are represented differently from words. We arrange recent NLP work on numeracy into a comprehensive taxonomy of tasks and methods. We break down the subjective notion of numeracy into 7 subtasks, arranged along two dimensions: granularity (exact vs approximate) and units (abstract vs grounded). We analyze the myriad representational choices made by 18 previously published number encoders and decoders. We synthesize best practices for representing numbers in text and articulate a vision for holistic numeracy in NLP, comprised of design trade-offs and a unified evaluation.", "target": ["自然言語処理で無視されがちな、テキスト中の数値を認識する研究をまとめたサーベイ。数値の抽出か推測か(Exact/Abstract)、単位を持つか否か(Abstract/Grounded)、2つの軸で既存タスクを7つに整理している。サブワードによるテキスト分割が数値関連タスクに適さないと指摘。"]} {"source": "The combination of multilingual pre-trained representations and cross-lingual transfer learning is one of the most effective methods for building functional NLP systems for low-resource languages. However, for extremely low-resource languages without large-scale monolingual corpora for pre-training or sufficient annotated data for fine-tuning, transfer learning remains an under-studied and challenging task. Moreover, recent work shows that multilingual representations are surprisingly disjoint across languages, bringing additional challenges for transfer onto extremely low-resource languages. In this paper, we propose MetaXL, a meta-learning based framework that learns to transform representations judiciously from auxiliary languages to a target one and brings their representation spaces closer for effective transfer. Extensive experiments on real-world low-resource languages - without access to large-scale monolingual corpora or large amounts of labeled data - for tasks like cross-lingual sentiment analysis and named entity recognition show the effectiveness of our approach. 
Code for MetaXL is publicly available at this http URL.", "target": ["データ量が豊富な言語のモデルを、小リソースの言語へ転移する研究。対象の言語は、言語モデルの学習ができないほど少量のデータしかない言語を想定している。表現変換のネットワークを組み込み、元言語学習時には更新をスキップし、変換先言語の学習時のみ更新する。"]} {"source": "In this work, we explore \"prompt tuning\", a simple yet effective mechanism for learning \"soft prompts\" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signal from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3's \"few-shot\" learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method \"closes the gap\" and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant in that large models are costly to share and serve, and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed \"prefix tuning\" of Li and Liang (2021), and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning.", "target": ["タスク個別の頭出しtoken(prompt)につらなる生成を追加学習することで、タスク転移を行う研究。事前学習済みモデルは固定し、頭出しtokenのEncodeを行うパラメーター(prompt parameter)のみ学習する。タスク固有の学習を外出しすることで、モデル全体の転移を不要にする。"]} {"source": "Human evaluation of modern high-quality machine translation systems is a difficult problem, and there is increasing evidence that inadequate evaluation procedures can lead to erroneous conclusions. While there has been considerable research on human evaluation, the field still lacks a commonly-accepted standard procedure. As a step toward this goal, we propose an evaluation methodology grounded in explicit error analysis, based on the Multidimensional Quality Metrics (MQM) framework. We carry out the largest MQM research study to date, scoring the outputs of top systems from the WMT 2020 shared task in two language pairs using annotations provided by professional translators with access to full document context. We analyze the resulting data extensively, finding among other results a substantially different ranking of evaluated systems from the one established by the WMT crowd workers, exhibiting a clear preference for human over machine output. Surprisingly, we also find that automatic metrics based on pre-trained embeddings can outperform human crowd workers. We make our corpus publicly available for further research.", "target": ["機械翻訳に対する人間評価の妥当性を調査した研究。評価は正確さと流暢さを表す観点を階層的に定義しスコア化するMQMというフレームワークを用いている。翻訳家が観点をアノテーションし評価した結果はクラウドソーシングによる評価と乖離があり、事前学習済み言語モデルを使用した評価の方に近かった"]} {"source": "We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. 
We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.", "target": ["多言語の事前学習済み言語モデルを学習をする際、入力より出力のEmbeddingの次元を大きくした方が言語理解タスクで転移性能が高くなるとした研究。出力のEmbeddingは転移学習時は外すので、サイズを大きくしても利用する際に影響は出ない。"]} {"source": "Recently, adversarial attack methods have been developed to challenge the robustness of machine learning models. However, mainstream evaluation criteria experience limitations, even yielding discrepancies among results under different settings. By examining various attack algorithms, including gradient-based and query-based attacks, we notice the lack of a consensus on a uniform standard for unbiased performance evaluation. Accordingly, we propose a Piece-wise Sampling Curving (PSC) toolkit to effectively address the aforementioned discrepancy, by generating a comprehensive comparison among adversaries in a given range. In addition, the PSC toolkit offers options for balancing the computational cost and evaluation effectiveness. Experimental results demonstrate our PSC toolkit presents comprehensive comparisons of attack algorithms, significantly reducing discrepancies in practice.", "target": ["Adversarial Attackの手法を評価する方法の提案。評価のばらつきを抑えるため、機械学習モデルをだますために入れるノイズの変動幅と攻撃成功率(分類を誤らせるなど)をプロットし、描画線下の面積(AUC)で評価を行う。"]} {"source": "While self-supervised learning has made rapid advances in natural language processing, it remains unclear when researchers should engage in resource-intensive domain-specific pretraining (domain pretraining). The law, puzzlingly, has yielded few documented instances of substantial gains to domain pretraining in spite of the fact that legal language is widely seen to be unique. We hypothesize that these existing results stem from the fact that existing legal NLP tasks are too easy and fail to meet conditions for when domain pretraining can help. To address this, we first present CaseHOLD (Case Holdings On Legal Decisions), a new dataset comprised of over 53,000+ multiple choice questions to identify the relevant holding of a cited case. This dataset presents a fundamental task to lawyers and is both legally meaningful and difficult from an NLP perspective (F1 of 0.4 with a BiLSTM baseline). Second, we assess performance gains on CaseHOLD and existing legal NLP datasets. While a Transformer architecture (BERT) pretrained on a general corpus (Google Books and Wikipedia) improves performance, domain pretraining (using corpus of approximately 3.5M decisions across all courts in the U.S. that is larger than BERT's) with a custom legal vocabulary exhibits the most substantial performance gains with CaseHOLD (gain of 7.2% on F1, representing a 12% improvement on BERT) and consistent performance gains across two other legal tasks. Third, we show that domain pretraining may be warranted when the task exhibits sufficient similarity to the pretraining corpus: the level of performance increase in three legal tasks was directly tied to the domain specificity of the task. 
Our findings inform when researchers should engage resource-intensive pretraining and show that Transformer-based architectures, too, learn embeddings suggestive of distinct legal language.", "target": ["裁判の文書から正しい判決を選択するデータセットCaseHOLDの提案。判決文で事前学習(domain pretraining)することで、精度が向上することを確認。"]} {"source": "Clark et al. [2020] claims that the ELECTRA approach is highly efficient in NLP performances relative to computation budget. As such, this reproducibility study focus on this claim, summarized by the following question: Can we use ELECTRA to achieve close to SOTA performances for NLP in low-resource settings, in term of compute cost?", "target": ["コンピューティングリソースが少ない場合、モデルとしてはELECTRAがよいとした記事。DistilBERT/TinyBERTより少ないパラメーターで、 GLUEベンチマークで同等精度を記録できる。"]} {"source": "We consider the problem of using observational data to estimate the causal effects of linguistic properties. For example, does writing a complaint politely lead to a faster response time? How much will a positive product review increase sales? This paper addresses two technical challenges related to the problem before developing a practical method. First, we formalize the causal quantity of interest as the effect of a writer's intent, and establish the assumptions necessary to identify this from observational data. Second, in practice, we only have access to noisy proxies for the linguistic properties of interest -- e.g., predictions from classifiers and lexicons. We propose an estimator for this setting and prove that its bias is bounded when we perform an adjustment for the text. Based on these results, we introduce TextCause, an algorithm for estimating causal effects of linguistic properties. The method leverages (1) distant supervision to improve the quality of noisy proxies, and (2) a pre-trained language model (BERT) to adjust for the text. We show that the proposed method outperforms related approaches when estimating the effect of Amazon review sentiment on semi-simulated sales figures. Finally, we present an applied case study investigating the effects of complaint politeness on bureaucratic response times.", "target": ["商品レビューがクリック数などにどう影響を与えているか、因果推論のモデルで検証した研究。疑似ラベルの付与にdistant supervision、疑似ラベルの効果予測にBERTを用いている。Amazon Reviewを使用した実験で効果を確認。"]} {"source": "Understanding the context of complex and cluttered scenes is a challenging problem for semantic segmentation. However, it is difficult to model the context without prior and additional supervision because the scene's factors, such as the scale, shape, and appearance of objects, vary considerably in these scenes. To solve this, we propose to learn the structures of objects and the hierarchy among objects because context is based on these intrinsic properties. In this study, we design novel hierarchical, contextual, and multiscale pyramidal representations to capture the properties from an input image. Our key idea is the recursive segmentation in different hierarchical regions based on a predefined number of regions and the aggregation of the context in these regions. The aggregated contexts are used to predict the contextual relationship between the regions and partition the regions in the following hierarchical level. Finally, by constructing the pyramid representations from the recursively aggregated context, multiscale and hierarchical properties are attained. 
In the experiments, we confirmed that our proposed method achieves state-of-the-art performance in PASCAL Context.", "target": ["様々な物体が写る画像にセグメンテーションを適用する手法。検出領域数を増やしながら再帰的に特徴マップを得て、集約することで予測を行う。"]} {"source": "In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.", "target": ["言語モデルで2億5000万のたんぱく質配列を学習させることで相同性のあるタンパク質の発見などに有用な表現が得られたという研究。Fine-tuningすることで教師ありモデルの予測精度向上を確認。"]} {"source": "Deep neural networks and huge language models are becoming omnipresent in natural language applications. As they are known for requiring large amounts of training data, there is a growing body of work to improve the performance in low-resource settings. Motivated by the recent fundamental changes towards neural models and the popular pre-train and fine-tune paradigm, we survey promising approaches for low-resource natural language processing. After a discussion about the different dimensions of data availability, we give a structured overview of methods that enable learning when training data is sparse. This includes mechanisms to create additional labeled data like data augmentation and distant supervision as well as transfer learning settings that reduce the need for target supervision. A goal of our survey is to explain how these methods differ in their requirements as understanding them is essential for choosing a technique suited for a specific low-resource setting. Further key aspects of this work are to highlight open issues and to outline promising directions for future research.", "target": ["少ないリソースしかない場合に使える自然言語処理の手法を調べた研究。Data Augmentationや弱教師、転移学習などの手法がまとめられている。"]} {"source": "Mixup is the latest data augmentation technique that linearly interpolates input examples and the corresponding labels. It has shown strong effectiveness in image classification by interpolating images at the pixel level. Inspired by this line of research, in this paper, we explore i) how to apply mixup to natural language processing tasks since text data can hardly be mixed in the raw format; ii) if mixup is still effective in transformer-based learning models, e.g., BERT. 
To achieve the goal, we incorporate mixup to transformer-based pre-trained architecture, named \"mixup-transformer\", for a wide range of NLP tasks while keeping the whole end-to-end training system. We evaluate the proposed framework by running extensive experiments on the GLUE benchmark. Furthermore, we also examine the performance of mixup-transformer in low-resource scenarios by reducing the training data with a certain ratio. Our studies show that mixup is a domain-independent data augmentation technique to pre-trained language models, resulting in significant performance improvement for transformer-based models.", "target": ["画像で用いられるMixup(2つの学習データを混合させてデータの水増しを行う手法)を自然言語処理で行った研究。異なる2つの学習データを同一の事前学習済みモデルに投入し、得られた潜在表現をミックスしたものでクラス分類などを学習する。GLUE benchmarkの各タスクで精度向上"]} {"source": "Language model based pre-trained models such as BERT have provided significant gains across different NLP tasks. In this paper, we study different types of transformer based pre-trained models such as auto-regressive models (GPT-2), auto-encoder models (BERT), and seq2seq models (BART) for conditional data augmentation. We show that prepending the class labels to text sequences provides a simple yet effective way to condition the pre-trained models for data augmentation. Additionally, on three classification benchmarks, pre-trained Seq2Seq model outperforms other data augmentation methods in a low-resource setting. Further, we explore how different pre-trained model based data augmentation differs in-terms of data diversity, and how well such methods preserve the class-label information.", "target": ["Data Augmentationを行うための条件付きテキスト生成に適した事前学習の形態を調査した研究。GPTのような自己回帰型、BERTのようなAuto Encoder型、BARTのようなseq2seq型で検証。データ量が少ない場合seq2seqが効果的で、入力へのラベル付与はすべての手法で効果あり。"]} {"source": "Natural Language Processing (NLP) can help unlock the vast troves of unstructured data in clinical text and thus improve healthcare research. However, a big barrier to developments in this field is data access due to patient confidentiality which prohibits the sharing of this data, resulting in small, fragmented and sequestered openly available datasets. Since NLP model development requires large quantities of data, we aim to help side-step this roadblock by exploring the usage of Natural Language Generation in augmenting datasets such that they can be used for NLP model development on downstream clinically relevant tasks. We propose a methodology guiding the generation with structured patient information in a sequence-to-sequence manner. We experiment with state-of-the-art Transformer models and demonstrate that our augmented dataset is capable of beating our baselines on a downstream classification task. Finally, we also create a user interface and release the scripts to train generation models to stimulate further research in this area.", "target": ["医療用の自然言語処理モデルの性能を上げるために、事前学習用のテキストを自動生成する研究。患者の属性情報や診断情報を連結したテキストから診察記録を生成できるようseq2seqの形式で学習。再入院予測や肥満やアルコールといった様態予測のタスクで精度向上を確認。"]} {"source": "In the standard Markov decision process formalism, users specify tasks by writing down a reward function. However, in many scenarios, the user is unable to describe the task in words or numbers, but can readily provide examples of what the world would look like if the task were solved. Motivated by this observation, we derive a control algorithm from first principles that aims to visit states that have a high probability of leading to successful outcomes, given only examples of successful outcome states. 
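The mixup-transformer record above interpolates the hidden representations of two examples rather than their raw text. A minimal, hedged sketch of that step follows; the stand-in encoder, the Beta prior, and the soft-label handling are illustrative choices rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn

def mixup_hidden(encoder: nn.Module, x_a, x_b, y_a, y_b, alpha: float = 0.4):
    """Interpolate pooled representations and one-hot labels of two batches."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    h_a, h_b = encoder(x_a), encoder(x_b)     # [batch, hidden] pooled features
    h_mix = lam * h_a + (1.0 - lam) * h_b     # mixed representation
    y_mix = lam * y_a + (1.0 - lam) * y_b     # mixed (soft) label
    return h_mix, y_mix

# Toy usage with a dummy "encoder" and 3-class one-hot labels.
enc = nn.Sequential(nn.Linear(16, 32), nn.Tanh())
xa, xb = torch.randn(4, 16), torch.randn(4, 16)
ya = torch.eye(3)[torch.tensor([0, 1, 2, 0])]
yb = torch.eye(3)[torch.tensor([2, 2, 1, 0])]
h, y = mixup_hidden(enc, xa, xb, ya, yb)
print(h.shape, y.shape)  # torch.Size([4, 32]) torch.Size([4, 3])
```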
Prior work has approached similar problem settings in a two-stage process, first learning an auxiliary reward function and then optimizing this reward function using another reinforcement learning algorithm. In contrast, we derive a method based on recursive classification that eschews auxiliary reward functions and instead directly learns a value function from transitions and successful outcomes. Our method therefore requires fewer hyperparameters to tune and lines of code to debug. We show that our method satisfies a new data-driven Bellman equation, where examples take the place of the typical reward function term. Experiments show that our approach outperforms prior methods that learn explicit reward functions.", "target": ["報酬関数でなくタスク達成時の状態を与えて強化学習を行う手法の提案。模倣学習と異なり状態のみを与えることで学習できる、一旦報酬関数を学習してその後に学習するのでなく同時に行動を学習するという点がこれまでと異なる。ロボットのマニピュレーションで安定した報酬達成"]} {"source": "Transformer models have advanced the state of the art in many Natural Language Processing (NLP) tasks. In this paper, we present a new Transformer architecture, Extended Transformer Construction (ETC), that addresses two key challenges of standard Transformer architectures, namely scaling input length and encoding structured inputs. To scale attention to longer inputs, we introduce a novel global-local attention mechanism between global tokens and regular input tokens. We also show that combining global-local attention with relative position encodings and a Contrastive Predictive Coding (CPC) pre-training objective allows ETC to encode structured inputs. We achieve state-of-the-art results on four natural language datasets requiring long and/or structured inputs.", "target": ["Transformerで長い系列を扱えるようにした研究。token間のAttention範囲は局所的にする一方、sentence/paragraphといったglobalな表現とAttentionを張ることで長い系列の関係を認識できるようにしている(global表現間でもSelf-Attentionを行う)。"]} {"source": "A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving. We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform. The object association leverages quasi-dense similarity learning to identify objects in various poses and viewpoints with appearance cues only. After initial 2D association, we further utilize 3D bounding boxes depth-ordering heuristics for robust instance association and motion-based 3D trajectory prediction for re-identification of occluded vehicles. In the end, an LSTM-based object velocity learning module aggregates the long-term trajectory information for more accurate motion extrapolation. Experiments on our proposed simulation data and real-world benchmarks, including KITTI, nuScenes, and Waymo datasets, show that our tracking framework offers robust object association and tracking on urban-driving scenarios. On the Waymo Open benchmark, we establish the first camera-only baseline in the 3D tracking and 3D detection challenges. Our quasi-dense 3D tracking pipeline achieves impressive improvements on the nuScenes 3D tracking benchmark with near five times tracking accuracy of the best vision-only submission among all published methods. 
Our code, data and trained models are available at this https URL.", "target": ["連続したフレーム画像から3Dオブジェクトのトラッキングを可能にした研究。フレーム間のオブジェクト類似度の学習に対照学習を用い(実領域との重複(IoU)が高い=positive/低い=negativeで学習)、物体の状態予測/更新にLSTMを用いている。"]} {"source": "Masked language models have quickly become the de facto standard when processing text. Recently, several approaches have been proposed to further enrich word representations with external knowledge sources such as knowledge graphs. However, these models are devised and evaluated in a monolingual setting only. In this work, we propose a language-independent entity prediction task as an intermediate training procedure to ground word representations on entity semantics and bridge the gap across different languages by means of a shared vocabulary of entities. We show that our approach effectively injects new lexical-semantic knowledge into neural models, improving their performance on different semantic tasks in the zero-shot crosslingual setting. As an additional advantage, our intermediate training does not require any supplementary input, allowing our models to be applied to new datasets right away. In our experiments, we use Wikipedia articles in up to 100 languages and already observe consistent gains compared to strong baselines when predicting entities using only the English Wikipedia. Further adding extra languages lead to improvements in most tasks up to a certain point, but overall we found it non-trivial to scale improvements in model transferability by training on ever increasing amounts of Wikipedia languages.", "target": ["事前学習済み言語モデルを、Wikipediaのハイパーリンク予測のタスクでFine Tuneしより良い表現を得る手法の提案。予測は普通にTokenを使用する他、頭のCLSを結合する手法、Token/CLSを確率的に置き換える手法の3つを検証している。Tokenかつ様々な言語で学習させた方が良い傾向。"]} {"source": "Large-scale language models such as BERT have achieved state-of-the-art performance across a wide range of NLP tasks. Recent studies, however, show that such BERT-based models are vulnerable facing the threats of textual adversarial attacks. We aim to address this problem from an information-theoretic perspective, and propose InfoBERT, a novel learning framework for robust fine-tuning of pre-trained language models. InfoBERT contains two mutual-information-based regularizers for model training: (i) an Information Bottleneck regularizer, which suppresses noisy mutual information between the input and the feature representation; and (ii) a Robust Feature regularizer, which increases the mutual information between local robust features and global features. We provide a principled way to theoretically analyze and improve the robustness of representation learning for language models in both standard and adversarial training. Extensive experiments demonstrate that InfoBERT achieves state-of-the-art robust accuracy over several adversarial datasets on Natural Language Inference (NLI) and Question Answering (QA) tasks. Our code is available at this https URL.", "target": ["事前学習済み言語モデルをFine TuneしてAdversarial Sentenceの追加に頑健にする研究。(後続タスクに)必要最低限な特徴に抑えるためInformation Bottleneckを用いた正則化を行うと共に、単語特徴と大域特徴を比較し影響が大きすぎる/小さすぎる単語を除外している。"]} {"source": "Novel computer vision architectures monopolize the spotlight, but the impact of the model architecture is often conflated with simultaneous changes to training methodology and scaling strategies. Our work revisits the canonical ResNet (He et al., 2015) and studies these three aspects in an effort to disentangle them. 
Perhaps surprisingly, we find that training and scaling strategies may matter more than architectural changes, and further, that the resulting ResNets match recent state-of-the-art models. We show that the best performing scaling strategy depends on the training regime and offer two new scaling strategies: (1) scale model depth in regimes where overfitting can occur (width scaling is preferable otherwise); (2) increase image resolution more slowly than previously recommended (Tan & Le, 2019). Using improved training and scaling strategies, we design a family of ResNet architectures, ResNet-RS, which are 1.7x - 2.7x faster than EfficientNets on TPUs, while achieving similar accuracies on ImageNet. In a large-scale semi-supervised learning setup, ResNet-RS achieves 86.2% top-1 ImageNet accuracy, while being 4.7x faster than EfficientNet NoisyStudent. The training techniques improve transfer performance on a suite of downstream tasks (rivaling state-of-the-art self-supervised algorithms) and extend to video classification on Kinetics-400. We recommend practitioners use these simple revised ResNets as baselines for future research.", "target": ["モデルサイズと学習方法を工夫すれば、ResNetでもEfficientNetを超える精度/効率を達成できるという研究。Overfitが起こらないよう正則化のパラメーターをチューニングしつつDepthを大きくしていき、入力画像の解像度を推奨よりゆっくり上げていくことでEfficientNet越えを達成できる。"]} {"source": "Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at this https URL.", "target": ["入力に対し同一の重み(フィルタ)を適用しチャンネル単位で切り替える、というCNNの仕組みを逆転させ、領域ごと(ピクセル単位)に重みを変えチャンネル間で共有する仕組みを提案した研究。チャンネル間で個別領域の重みを共有するのはSelf-Attentionの簡易版と見なせる。"]} {"source": "Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. 
To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.", "target": ["文字列をユニコード文字の系列として扱うことで、全言語対応かつ単語分割問題を解決する試み。ユニコード文字を複数のハッシュ関数で潜在表現に変換し、連結した後1層のSelf-Attention、CNNで次元を減らしTransformerに入れる。系列ラベリングの場合元入力と結合しCNNにかけ系列長を復元する。"]} {"source": "Unsupervised pre-training has led to much recent progress in natural language understanding. In this paper, we study self-training as another way to leverage unlabeled data through semi-supervised learning. To obtain additional data for a specific task, we introduce SentAugment, a data augmentation method which computes task-specific query embeddings from labeled data to retrieve sentences from a bank of billions of unlabeled sentences crawled from the web. Unlike previous semi-supervised methods, our approach does not require in-domain unlabeled data and is therefore more generally applicable. Experiments show that self-training is complementary to strong RoBERTa baselines on a variety of tasks. Our augmentation approach leads to scalable and effective self-training with improvements of up to 2.6% on standard text classification benchmarks. Finally, we also show strong gains on knowledge-distillation and few-shot learning.", "target": ["自然言語処理で、事前学習済みモデルと大規模な文書コーパスから弱教師のラベルデータを量産する手法。教師モデルを作成後、モデルの潜在表現をコーパスからデータを抜くためのクエリ表現として使用。文書分類のベンチマークで2.6%の精度指標改善。"]} {"source": "Deep learning has seen a movement away from representing examples with a monolithic hidden state towards a richly structured state. For example, Transformers segment by position, and object-centric architectures decompose images into entities. In all these architectures, interactions between different elements are modeled via pairwise interactions: Transformers make use of self-attention to incorporate information from other positions; object-centric architectures make use of graph neural networks to model interactions among entities. However, pairwise interactions may not achieve global coordination or a coherent, integrated representation that can be used for downstream tasks. In cognitive science, a global workspace architecture has been proposed in which functionally specialized components share information through a common, bandwidth-limited communication channel. We explore the use of such a communication channel in the context of deep learning for modeling the structure of complex environments. The proposed method includes a shared workspace through which communication among different specialist modules takes place but due to limits on the communication bandwidth, specialist modules must compete for access. We show that capacity limitations have a rational basis in that (1) they encourage specialization and compositionality and (2) they facilitate the synchronization of otherwise independent specialists.", "target": ["意識の研究で提唱されているグローバルワークスペース理論をDNNに取り入れる研究。特定の入力に反応するモジュールから共用パラメーターに書き込みを行い、書き込み結果は全モジュールにフィードバックする。TransformerやGNNなどペアの関係を扱うモデルに組み込むことで画像Reasoningタスク精度を向上"]} {"source": "Although using convolutional neural networks (CNNs) as backbones achieves great successes in computer vision, this work investigates a simple backbone network useful for many dense prediction tasks without convolutions. 
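The CANINE record above replaces a fixed subword vocabulary with hashed embeddings of raw Unicode code points (followed by downsampling, which is omitted here). The sketch below shows only the multi-hash embedding idea; the hash functions, bucket count, and dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HashedCharEmbedding(nn.Module):
    """Embed arbitrary Unicode code points via K hash buckets, with no vocabulary.

    Each code point is mapped by K cheap hash functions into K small embedding
    tables; the slices are concatenated into one character vector.
    """

    def __init__(self, num_hashes: int = 8, buckets: int = 16_384, dim: int = 768):
        super().__init__()
        assert dim % num_hashes == 0
        self.primes = [31, 43, 59, 61, 73, 97, 103, 113][:num_hashes]
        self.tables = nn.ModuleList(
            nn.Embedding(buckets, dim // num_hashes) for _ in self.primes
        )
        self.buckets = buckets

    def forward(self, codepoints: torch.Tensor) -> torch.Tensor:
        # codepoints: [batch, seq_len] integer Unicode code points
        slices = [
            table((codepoints * p) % self.buckets)
            for p, table in zip(self.primes, self.tables)
        ]
        return torch.cat(slices, dim=-1)  # [batch, seq_len, dim]

# Toy usage: embed the characters of a mixed-script string directly.
emb = HashedCharEmbedding()
ids = torch.tensor([[ord(c) for c in "こんにちはworld"]])
print(emb(ids).shape)  # torch.Size([1, 10, 768])
```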
Unlike the recently-proposed Transformer model (e.g., ViT) that is specially designed for image classification, we propose Pyramid Vision Transformer~(PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to prior arts. (1) Different from ViT that typically has low-resolution outputs and high computational and memory cost, PVT can be not only trained on dense partitions of the image to achieve high output resolution, which is important for dense predictions but also using a progressive shrinking pyramid to reduce computations of large feature maps. (2) PVT inherits the advantages from both CNN and Transformer, making it a unified backbone in various vision tasks without convolutions by simply replacing CNN backbones. (3) We validate PVT by conducting extensive experiments, showing that it boosts the performance of many downstream tasks, e.g., object detection, semantic, and instance segmentation. For example, with a comparable number of parameters, RetinaNet+PVT achieves 40.4 AP on the COCO dataset, surpassing RetinNet+ResNet50 (36.3 AP) by 4.1 absolute AP. We hope PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future researches. Code is available at this https URL.", "target": ["画像分類だけでなく、物体検知やセグメンテーションといったDense PredictionのタスクにTransformerの適用を進めた研究。CNNによるFeature PyramidをTransformerベースで構築しており、Patch表現=>Self-Attention=>全結合を1ステージの処理として重ねる。CNNより高精度を達成"]} {"source": "Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods. These results have been achieved in a control environment, that is the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image and from any unbounded dataset. In this work, we explore if self-supervision lives to its expectation by training large models on random, uncurated images with no supervision. Our final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images with 512 GPUs achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real world setting. Interestingly, we also observe that self-supervised models are good few-shot learners achieving 77.9% top-1 with access to only 10% of ImageNet. Code: this https URL", "target": ["自己教師学習を大規模かつインターネット上から収集した無選別のデータセットで行った研究。データにスケールするアーキテクチャとしてRegNetY(パラメーター空間探索で発見した構造)、自己教師学習としてSwAV(画像クラスター間の変換を対照学習する手法)を採用。ImageNetで転移後の性能でSOTA達成。"]} {"source": "This paper introduces a new model to learn graph neural networks equivariant to rotations, translations, reflections and permutations called E(n)-Equivariant Graph Neural Networks (EGNNs). In contrast with existing methods, our work does not require computationally expensive higher-order representations in intermediate layers while it still achieves competitive or better performance. In addition, whereas existing methods are limited to equivariance on 3 dimensional spaces, our model is easily scaled to higher-dimensional spaces. We demonstrate the effectiveness of our method on dynamical systems modelling, representation learning in graph autoencoders and predicting molecular properties.", "target": ["3次元(E3)の変換+交換に対し不変な出力が行えるGNNの提案。CNNだとE2(画像の平行移動や回転等)、GNNだと交換(ノードの位置変更)に対し不変だが、それをさらに拡張している。ノードの表現を更新する際相対位置を用いることで実現している。分子構造データセットQM9のタスク12中9でSOTAを達成。"]} {"source": "This paper does not describe a working system. 
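For the E(n)-equivariant GNN record above, the key point is that messages depend on node features and squared relative distances only, so the feature update is invariant to rotations, translations, and reflections of the coordinates. The single-layer sketch below follows that reading on a fully connected toy graph; the MLP sizes are illustrative and the paper's coordinate-update branch is omitted.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """One E(n)-invariant feature update: messages use ||x_i - x_j||^2 only."""

    def __init__(self, dim: int = 32):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())

    def forward(self, h: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # h: [n, dim] node features, x: [n, 3] coordinates (fully connected graph)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        d2 = (x.unsqueeze(1) - x.unsqueeze(0)).pow(2).sum(-1, keepdim=True)
        m = self.edge_mlp(torch.cat([hi, hj, d2], dim=-1))  # [n, n, dim] messages
        m = m.sum(dim=1)                                    # aggregate over neighbors
        return self.node_mlp(torch.cat([h, m], dim=-1))     # updated node features

# Rotating the coordinates leaves the output unchanged (invariance check).
layer = EGNNLayer()
h, x = torch.randn(5, 32), torch.randn(5, 3)
rot, _ = torch.linalg.qr(torch.randn(3, 3))                 # random orthogonal matrix
print(torch.allclose(layer(h, x), layer(h, x @ rot.T), atol=1e-4))  # True
```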
Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language", "target": ["潜在表現を獲得する複数の手法(Transformer、対照学習、蒸留等)の良い所を組み合わせるためのデザイン案。人間の認知と同様、NNも「部分の総和」では全体の表現を得ることはできないとし、全体を表す共有潜在表現と、ツリー構造を持ちながらもつながりが(離散でなく)連続で表現する構造を提案している"]} {"source": "We propose UniT, a Unified Transformer model to simultaneously learn the most prominent tasks across different domains, ranging from object detection to language understanding and multimodal reasoning. Based on the transformer encoder-decoder architecture, our UniT model encodes each input modality with an encoder and makes predictions on each task with a shared decoder over the encoded input representations, followed by task-specific output heads. The entire model is jointly trained end-to-end with losses from each task. Compared to previous efforts on multi-task learning with transformers, we share the same model parameters to all tasks instead of separately fine-tuning task-specific models and handle a much higher variety of tasks across different domains. In our experiments, we learn 7 tasks jointly over 8 datasets, achieving comparable performance to well-established prior work on each domain under the same supervision with a compact set of model parameters. Code will be released in MMF at this https URL.", "target": ["1つのTransformerで画像・言語・画像&言語(VQA/Visual Entailment(画像の内容と一致する記載か判定する))3種のタスクを学習した研究。Encoderは画像とテキスト、タスク個別にQuery/出力のHeadがある以外はパラメーターを共有する1モデルの構成。どのタスクでも一定程度の精度を達成。"]} {"source": "We present lambda layers -- an alternative framework to self-attention -- for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Lambda layers capture such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Similar to linear attention, lambda layers bypass expensive attention maps, but in contrast, they model both content and position-based interactions which enables their application to large structured inputs such as images. The resulting neural network architectures, LambdaNetworks, significantly outperform their convolutional and attentional counterparts on ImageNet classification, COCO object detection and instance segmentation, while being more computationally efficient. Additionally, we design LambdaResNets, a family of hybrid architectures across different scales, that considerably improves the speed-accuracy tradeoff of image classification models. LambdaResNets reach excellent accuracies on ImageNet while being 3.2 - 4.4x faster than the popular EfficientNets on modern machine learning accelerators. 
In large-scale semi-supervised training with an additional 130M pseudo-labeled images, LambdaResNets achieve up to 86.7% ImageNet accuracy while being 9.5x faster than EfficientNet NoisyStudent and 9x faster than a Vision Transformer with comparable accuracies.", "target": ["Self-Attentionを代替する仕組みの提案。Query x KeyでAttention Mapを作って演算するのでなく、Valueから位置独立なContentと位置情報を持つPositionをそれぞれ計算し(Lambda関数の適用)、加算によって(Attention済みの)表現を算出する。EfficientNetsと同等精度で3.2 - 4.4倍高速。"]} {"source": "The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark.", "target": ["古典的なmedian filteringと近年の最適化手法を組み合わせることで高精度なオプティカルフロー推定が可能なことを示した研究。画像構造とフロー境界の情報を、重みづけした非局所の項として目的関数に組み込む手法を提案している。"]} {"source": "Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models' novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. In this work, we discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language. We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.", "target": ["言語モデルで、数サンプルを与えるFew-shotより入力フォーマット(Prompt)を工夫したZero-shotの方が有効に働くという研究。翻訳を題材に、「French: 入力文」など言語+コロン+文といったフォーマットで入力するとFew-shotよりも高精度になることを確認。"]} {"source": "Batch normalization is a key component of most image classification models, but it has many undesirable properties stemming from its dependence on the batch size and interactions between examples. 
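For the lambda-layer record above, the content part of the computation can be written in a few lines: the context is summarized into one linear map (the content lambda) that is applied to every query, so no position-by-position attention map is materialized. The sketch omits the position lambdas and multi-query heads, and the dimensions are illustrative.

```python
import torch

def content_lambda_layer(x: torch.Tensor, wq, wk, wv):
    """Content-only lambda layer: y_n = lambda_c^T q_n with lambda_c = softmax(K)^T V."""
    q = x @ wq                          # [batch, n, key_dim]   queries
    k = torch.softmax(x @ wk, dim=1)    # [batch, m, key_dim]   keys normalized over context
    v = x @ wv                          # [batch, m, value_dim] values
    lam_c = k.transpose(1, 2) @ v       # [batch, key_dim, value_dim] content lambda
    return q @ lam_c                    # [batch, n, value_dim], no n x m attention map

# Toy usage: 4 positions of dimension 32, value dimension 64.
x = torch.randn(2, 4, 32)
wq, wk, wv = torch.randn(32, 16), torch.randn(32, 16), torch.randn(32, 64)
print(content_lambda_layer(x, wq, wk, wv).shape)  # torch.Size([2, 4, 64])
```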
Although recent work has succeeded in training deep ResNets without normalization layers, these models do not match the test accuracies of the best batch-normalized networks, and are often unstable for large learning rates or strong data augmentations. In this work, we develop an adaptive gradient clipping technique which overcomes these instabilities, and design a significantly improved class of Normalizer-Free ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on ImageNet while being up to 8.7x faster to train, and our largest models attain a new state-of-the-art top-1 accuracy of 86.5%. In addition, Normalizer-Free models attain significantly better performance than their batch-normalized counterparts when finetuning on ImageNet after large-scale pre-training on a dataset of 300 million labeled images, with our best models obtaining an accuracy of 89.2%. Our code is available at this https URL deepmind-research/tree/master/nfnets", "target": ["Batch Normalization(BN)なしで高精度なモデルを構築した研究。BNはメリットが大きい一方メモリを食う、学習/推論時の性能不一致、学習データ間の依存を生むという問題がある。既存のBNを抜く手法は精度/安定性に難があったがレイヤーより細かいユニット(行)単位で勾配のClipを行うことで安定化に成功"]} {"source": "In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications, and a discussion of perspectives on open problems in the field.", "target": ["収集済みのサンプルを利用するオフライン強化学習のチュートリアル資料。解説を始める前に、まずオフライン強化学習が有効に働くシチュエーションが述べられており学習のゴールがイメージできるようなっている(人間相手で多数の試行が困難な医療や対話が挙げられている)。"]} {"source": "The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. 
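The Normalizer-Free ResNet record above attributes its stability to adaptive gradient clipping, which rescales a gradient whenever its norm grows too large relative to the corresponding weight norm. A simplified per-tensor version is sketched below; the paper applies the rule unit-wise (per output row), so treat this as a hedged approximation with illustrative constants.

```python
import torch

def adaptive_gradient_clip_(params, clip: float = 0.01, eps: float = 1e-3):
    """In-place AGC: shrink the gradient if ||g|| / max(||w||, eps) exceeds `clip`."""
    for p in params:
        if p.grad is None:
            continue
        w_norm = p.detach().norm().clamp_min(eps)  # guard against zero weights
        g_norm = p.grad.detach().norm()
        max_norm = clip * w_norm
        if g_norm > max_norm:                      # rescale only when the ratio is too large
            p.grad.mul_(max_norm / (g_norm + 1e-6))

# Toy usage inside a training step.
model = torch.nn.Linear(8, 2)
loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
adaptive_gradient_clip_(model.parameters(), clip=0.01)
```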
We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.", "target": ["枝刈りに代表されるDNNの離散化についてまとめたサーベイ。1980年代後半までさかのぼり収集した約300の論文で提案された手法を体系立ててまとめている。枝刈り以外に重み追加(接ぎ木?)の手法、またDropoutをはじめとした動的な離散化(ephemeral sparsification)などについて解説している。"]} {"source": "How can multiple distributed entities collaboratively train a shared deep net on their private data while preserving privacy? This paper introduces InstaHide, a simple encryption of training images, which can be plugged into existing distributed deep learning pipelines. The encryption is efficient and applying it during training has minor effect on test accuracy. InstaHide encrypts each training image with a \"one-time secret key\" which consists of mixing a number of randomly chosen images and applying a random pixel-wise mask. Other contributions of this paper include: (a) Using a large public dataset (e.g. ImageNet) for mixing during its encryption, which improves security. (b) Experimental results to show effectiveness in preserving privacy against known attacks with only minor effects on accuracy. (c) Theoretical analysis showing that successfully attacking privacy requires attackers to solve a difficult computational problem. (d) Demonstrating that use of the pixel-wise mask is important for security, since Mixup alone is shown to be insecure to some some efficient attacks. (e) Release of a challenge dataset this https URL Our code is available at this https URL", "target": ["攻撃者からデータセットの情報を保護するInstaHideを提案。Web利用できるImageNetなどの画像と独自のデータセットを含めたデータセットから複数の画像を選び出し符号を反転させながら混合することでデータセットを構築する。学習後の精度を劣化させずにデータの保護が可能になる。"]} {"source": "The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. 
We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.", "target": ["ネットワークのスパース化に対する実用的な手法の検証。70ページを超える論文で、近年の提案に基づいて何が実践的であるかを議論している。正則化とスケジューリングによる調整は高い性能を達成できるがハイパーパラメータが増え学習難易度が高く高コスト、ドロップアウトは疎化のための事前正則化として有用、スパース化の初期化は層毎に慎重に検討する必要がある、など。"]} {"source": "Existing methods for vision-and-language learning typically require designing task-specific architectures and objectives for each task. For example, a multi-label answer classifier for visual question answering, a region scorer for referring expression comprehension, and a language decoder for image captioning, etc. To alleviate these hassles, in this work, we propose a unified framework that learns different tasks in a single architecture with the same language modeling objective, i.e., multimodal conditional text generation, where our models learn to generate labels in text based on the visual and textual inputs. On 7 popular vision-and-language benchmarks, including visual question answering, referring expression comprehension, visual commonsense reasoning, most of which have been previously modeled as discriminative tasks, our generative approach (with a single unified architecture) reaches comparable performance to recent task-specific state-of-the-art vision-and-language models. Moreover, our generative approach shows better generalization ability on questions that have rare answers. Also, we show that our framework allows multi-task learning in a single architecture with a single set of parameters, achieving similar performance to separately optimized single-task models. Our code is publicly available at: this https URL", "target": ["自己回帰型言語モデルによる文書生成を利用することで、様々な画像言語タスクを1つのモデルで行えるVL-BART, VL-T5を提案。タスク固有のヘッドを必要とする先行研究と比較しても良い結果を出せている。"]} {"source": "In this paper, we present a novel approach, Momentum^2 Teacher, for student-teacher based self-supervised learning. The approach performs momentum update on both network weights and batch normalization (BN) statistics. The teacher's weight is a momentum update of the student, and the teacher's BN statistics is a momentum update of those in history. The Momentum^2 Teacher is simple and efficient. It can achieve the state of the art results (74.5\\%) under ImageNet linear evaluation protocol using small-batch size(\\eg, 128), without requiring large-batch training on special hardware like TPU or inefficient across GPU operation (\\eg, shuffling BN, synced BN). Our implementation and pre-trained models will be given on GitHub\\footnote{this https URL}.", "target": ["大バッチサイズの統計量でBatchNormを行う代わりに、今までのバッチの統計量で正規化を行うMomentum BatchNormを提案。BYOLの枠組みにそのまま適用できる。BYOLは4096のバッチサイズを使うのに対して32程度でもそこそこの精度を出すことができる。"]} {"source": "Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. 
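Both sparsification records above treat magnitude pruning as the reference technique. A minimal global magnitude-pruning pass (keep the largest-magnitude weights, zero the rest) is sketched below; the sparsity level and the choice of a global rather than layer-wise threshold are illustrative.

```python
import torch

def global_magnitude_prune(model: torch.nn.Module, sparsity: float = 0.9):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    weights = [p for p in model.parameters() if p.dim() > 1]  # skip biases
    all_mags = torch.cat([p.detach().abs().flatten() for p in weights])
    threshold = torch.quantile(all_mags, sparsity)            # global cut-off
    masks = []
    with torch.no_grad():
        for p in weights:
            mask = (p.abs() > threshold).float()
            p.mul_(mask)                                      # apply the mask in place
            masks.append(mask)                                # reuse to keep weights pruned
    return masks

# Toy usage: prune a small MLP to 90% sparsity.
mlp = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
masks = global_magnitude_prune(mlp, sparsity=0.9)
kept = sum(int(m.sum()) for m in masks)
total = sum(m.numel() for m in masks)
print(f"kept {kept}/{total} weights")
```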
To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism, which achieves $O(L \\log L)$ in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.", "target": ["Transformerを時系列データを始めとする長い系列に応用した研究。Self-Attentionで使用するクエリを重要なもののみに制限し(ProbSparse)、さらにDilated Convを参考にAttentionを蒸留している。Decoder側は全結合層を通じ一括で系列予測するよう構築している。"]} {"source": "For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow if the learning rate is small and finite, but on a modified loss. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small.", "target": ["SGDで最適化を行う場合、低い学習率だと学習データへの適合が高くなる一方評価データでの精度が低くなる原因を検証した研究。ミニバッチに対する勾配のノルムを正則化項として加えると、低い学習率でも高い学習率と同等の効果が得られることを確認。"]} {"source": "With the rapid adoption of machine learning (ML), a number of domains now use the approach of fine-tuning models pre-trained on a large corpus of data. However, our experiments show that even fine-tuning on models like BERT can take many hours when using GPUs. While prior work proposes limiting the number of layers that are fine-tuned, e.g., freezing all layers but the last layer, we find that such static approaches lead to reduced accuracy. We propose, AutoFreeze, a system that uses an adaptive approach to choose which layers are trained and show how this can accelerate model fine-tuning while preserving accuracy. We also develop mechanisms to enable efficient caching of intermediate activations which can reduce the forward computation time when performing fine-tuning. Our evaluation on four NLP tasks shows that AutoFreeze, with caching enabled, can improve fine-tuning performance by up to 2.55x.", "target": ["finetuningをする際に訓練対象とする層をadaptiveに選択することで、精度向上と訓練にかかる時間を短縮させるシステムAutoFreezeを提案。freezeした層の順方向計算をキャッシュする仕組みを持つ。"]} {"source": "We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. 
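The record above on SGD's implicit regularization reports that adding the minibatch gradient-norm penalty to the loss explicitly can recover the benefit otherwise obtained with larger learning rates. A hedged sketch of one such training step follows; the coefficient, model, and data are illustrative, and the penalty needs a differentiable (create_graph) gradient pass.

```python
import torch
import torch.nn as nn

def regularized_step(model, criterion, optimizer, x, y, coef: float = 0.01):
    """One SGD step on loss + coef * ||minibatch gradient||^2."""
    loss = criterion(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)  # differentiable grads
    penalty = sum(g.pow(2).sum() for g in grads)                  # squared gradient norm
    optimizer.zero_grad()
    (loss + coef * penalty).backward()
    optimizer.step()
    return loss.item()

# Toy usage on a small regression problem.
model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
x, y = torch.randn(16, 10), torch.randn(16, 1)
print(regularized_step(model, nn.MSELoss(), opt, x, y))
```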
By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework; surpassing the previous best published single model and single scale results of ResNeSt evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 2.33x faster in compute time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.", "target": ["画像分類,物体検出,セグメンテーションタスクに対するモデルの提案.ResNetの最後の3つのボトルネックブロックにおいて,空間的な畳み込みをSelf-Attentionに置き換えることで,パラメータ削減と精度向上を同時に達成した."]} {"source": "Transformers, which are popular for language modeling, have been explored for solving vision tasks recently, e.g., the Vision Transformers (ViT) for image classification. The ViT model splits each image into a sequence of tokens with fixed length and then applies multiple Transformer layers to model their global relation for classification. However, ViT achieves inferior performance compared with CNNs when trained from scratch on a midsize dataset (e.g., ImageNet). We find it is because: 1) the simple tokenization of input images fails to model the important local structure (e.g., edges, lines) among neighboring pixels, leading to its low training sample efficiency; 2) the redundant attention backbone design of ViT leads to limited feature richness in fixed computation budgets and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformers (T2T-ViT), which introduces 1) a layer-wise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (Tokens-to-Token), such that local structure presented by surrounding tokens can be modeled and tokens length can be reduced; 2) an efficient backbone with a deep-narrow structure for vision transformers motivated by CNN architecture design after extensive study. Notably, T2T-ViT reduces the parameter counts and MACs of vanilla ViT by 200\\%, while achieving more than 2.5\\% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves comparable performance with MobileNets when directly training on ImageNet. For example, T2T-ViT with ResNet50 comparable size can achieve 80.7\\% top-1 accuracy on ImageNet. (Code: this https URL)", "target": ["ViT (Vision Transformer) は中規模のデータセットで学習した場合,CNNよりも精度が低いという課題があった.これに対して,画像を段階的にトークンに構造化するレイヤー単位のToken to Token変換を導入.結果,ImageNetでもSOTAを達成."]} {"source": "We propose a simple method for automatic speech recognition (ASR) by fine-tuning BERT, which is a language model (LM) trained on large-scale unlabeled text data and can generate rich contextual representations. 
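The BoTNet record above swaps the 3x3 spatial convolution in the last ResNet bottleneck blocks for global multi-head self-attention over the feature map. The sketch below shows only that replacement on a single feature map, using PyTorch's stock attention module and omitting the relative position encodings and normalization layers the paper uses; channel sizes are illustrative.

```python
import torch
import torch.nn as nn

class BottleneckSelfAttention(nn.Module):
    """Bottleneck block where the 3x3 conv is replaced by global self-attention."""

    def __init__(self, channels: int = 256, inner: int = 64, heads: int = 4):
        super().__init__()
        self.reduce = nn.Conv2d(channels, inner, kernel_size=1)
        self.attn = nn.MultiheadAttention(inner, heads, batch_first=True)
        self.expand = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        z = self.reduce(x)                              # 1x1 conv down
        seq = z.flatten(2).transpose(1, 2)              # [b, h*w, inner] spatial tokens
        seq, _ = self.attn(seq, seq, seq)               # global all-to-all attention
        z = seq.transpose(1, 2).reshape(b, -1, h, w)    # back to a feature map
        return x + self.expand(z)                       # 1x1 conv up + residual

# Toy usage on a 14x14 feature map.
block = BottleneckSelfAttention()
print(block(torch.randn(2, 256, 14, 14)).shape)  # torch.Size([2, 256, 14, 14])
```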
Our assumption is that given a history context sequence, a powerful LM can narrow the range of possible choices and the speech signal can be used as a simple clue. Hence, comparing to conventional ASR systems that train a powerful acoustic model (AM) from scratch, we believe that speech recognition is possible by simply fine-tuning a BERT model. As an initial study, we demonstrate the effectiveness of the proposed idea on the AISHELL dataset and show that stacking a very simple AM on top of BERT can yield reasonable performance.", "target": ["BERTを使用した音声認識。音声は一旦セグメントに区切ったうえでEncoderにかけ、その後事前学習済みのBERTに入力を行う。BERTで分類に使用される[CLS]トークンを音声セグメントと認識テキストの対応付けに使用し、再帰的に認識を行っていく。"]} {"source": "Datasets are not only resources for training accurate, deployable systems, but are also benchmarks for developing new modeling approaches. While large, natural datasets are necessary for training accurate systems, are they necessary for driving modeling innovation? For example, while the popular SQuAD question answering benchmark has driven the development of new modeling approaches, could synthetic or smaller benchmarks have led to similar innovations? This counterfactual question is impossible to answer, but we can study a necessary condition: the ability for a benchmark to recapitulate findings made on SQuAD. We conduct a retrospective study of 20 SQuAD modeling approaches, investigating how well 32 existing and synthesized benchmarks concur with SQuAD -- i.e., do they rank the approaches similarly? We carefully construct small, targeted synthetic benchmarks that do not resemble natural language, yet have high concurrence with SQuAD, demonstrating that naturalness and size are not necessary for reflecting historical modeling improvements on SQuAD. Our results raise the intriguing possibility that small and carefully designed synthetic benchmarks may be useful for driving the development of new modeling approaches.", "target": ["ベンチマークに使われるデータセットはモデル開発の発展を支えてきたが、人手かつ大規模なデータセットでないとその役割は果たせないのか検証した研究。各手法のランキングの近さをベンチマークの類似度とし、SQuADと同等になるベンチマークをWikipediaのトリプルから合成することに成功。"]} {"source": "In this paper, we address the problem of image anomaly detection and segmentation. Anomaly detection involves making a binary decision as to whether an input image contains an anomaly, and anomaly segmentation aims to locate the anomaly on the pixel level. Support vector data description (SVDD) is a long-standing algorithm used for an anomaly detection, and we extend its deep learning variant to the patch-based method using self-supervised learning. This extension enables anomaly segmentation and improves detection performance. As a result, anomaly detection and segmentation performances measured in AUROC on MVTec AD dataset increased by 9.8% and 7.0%, respectively, compared to the previous state-of-the-art methods. Our results indicate the efficacy of the proposed method and its potential for industrial application. Detailed analysis of the proposed method offers insights regarding its behavior, and the code is available online.", "target": ["画像をパッチに分割し、正常画像パッチとの特徴量上の最近傍パッチとの距離を「異常度」とする異常検知手法Patch SVDDを提案。通常の異常検知の損失に加え、自己教師あり学習の損失を加えることが手法の鍵。MV-Tecデータセットで大幅に精度が向上。"]} {"source": "Deep Neural Networks (DNNs) are widely used for decision making in a myriad of critical applications, ranging from medical to societal and even judicial. Given the importance of these decisions, it is crucial for us to be able to interpret these models. 
We introduce a new method for interpreting image segmentation models by learning regions of images in which noise can be applied without hindering downstream model performance. We apply this method to segmentation of the pancreas in CT scans, and qualitatively compare the quality of the method to existing explainability techniques, such as Grad-CAM and occlusion sensitivity. Additionally we show that, unlike other methods, our interpretability model can be quantitatively evaluated based on the downstream performance over obscured images.", "target": ["意味的領域分割における学習済みモデルの解釈性を向上させる研究。ノイズを載せると重要な箇所はノイズの大きさに敏感に反応する(予測が変化する)はずだという考えから、学習済みモデルと画像からノイズの載せ方を学習する小さなモデルを学習することで、その学習済みモデルが敏感に反応する箇所を探し、それを重要な箇所だと認識する。"]} {"source": "Due to the need to store the intermediate activations for back-propagation, end-to-end (E2E) training of deep networks usually suffers from high GPUs memory footprint. This paper aims to address this problem by revisiting the locally supervised learning, where a network is split into gradient-isolated modules and trained with local supervision. We experimentally show that simply training local modules with E2E loss tends to collapse task-relevant information at early layers, and hence hurts the performance of the full model. To avoid this issue, we propose an information propagation (InfoPro) loss, which encourages local modules to preserve as much useful information as possible, while progressively discard task-irrelevant information. As InfoPro loss is difficult to compute in its original form, we derive a feasible upper bound as a surrogate optimization objective, yielding a simple but effective algorithm. In fact, we show that the proposed method boils down to minimizing the combination of a reconstruction loss and a normal cross-entropy/contrastive term. Extensive empirical results on five datasets (i.e., CIFAR, SVHN, STL-10, ImageNet and Cityscapes) validate that InfoPro is capable of achieving competitive performance with less than 40% memory footprint compared to E2E training, while allowing using training data with higher-resolution or larger batch sizes under the same GPU memory constraint. Our method also enables training local modules asynchronously for potential training acceleration. Code is available at: this https URL.", "target": ["DNNのEnd-to-End学習はGPUメモリフットプリントが高いという問題がある.この問題に対し,ネットワークを勾配分離されたモジュールに分割し,局所的な監視下で学習するlocal supervisionを改善することによって対処する.結果,通常のE2E学習と比較して40%以下のメモリフットプリントで競争力のある性能を達成した."]} {"source": "Inspired by human learning, researchers have proposed ordering examples during training based on their difficulty. Both curriculum learning, exposing a network to easier examples early in training, and anti-curriculum learning, showing the most difficult examples first, have been suggested as improvements to the standard i.i.d. training. In this work, we set out to investigate the relative benefits of ordered learning. We first investigate the implicit curricula resulting from architectural and optimization bias and find that samples are learned in a highly consistent order. Next, to quantify the benefit of explicit curricula, we conduct extensive experiments over thousands of orderings spanning three kinds of learning: curriculum, anti-curriculum, and random-curriculum -- in which the size of the training dataset is dynamically increased over time, but the examples are randomly ordered. 
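The segmentation-interpretability record above trains a small auxiliary network to decide where noise can be injected without changing a frozen segmenter's output; regions that tolerate little noise are then read as important. The reduced sketch below is one plausible reading of that objective; the mask network, loss weighting, and training loop are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

segmenter = nn.Conv2d(1, 2, kernel_size=3, padding=1)      # stand-in for a frozen trained model
for p in segmenter.parameters():
    p.requires_grad_(False)

mask_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
opt = torch.optim.Adam(mask_net.parameters(), lr=1e-3)

image = torch.rand(1, 1, 32, 32)
with torch.no_grad():
    reference = segmenter(image)                            # prediction on the clean image

for step in range(100):
    mask = mask_net(image)                                  # per-pixel allowed noise amount
    noisy = image + mask * torch.randn_like(image)          # inject noise where the mask is high
    pred = segmenter(noisy)
    fidelity = (pred - reference).pow(2).mean()             # keep the prediction unchanged...
    coverage = -mask.mean()                                 # ...while allowing as much noise as possible
    loss = fidelity + 0.1 * coverage
    opt.zero_grad()
    loss.backward()
    opt.step()

importance = 1.0 - mask_net(image)                          # low noise tolerance = important region
print(importance.shape)  # torch.Size([1, 1, 32, 32])
```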
We find that for standard benchmark datasets, curricula have only marginal benefits, and that randomly ordered samples perform as well or better than curricula and anti-curricula, suggesting that any benefit is entirely due to the dynamic training set size. Inspired by common use cases of curriculum learning in practice, we investigate the role of limited training time budget and noisy data in the success of curriculum learning. Our experiments demonstrate that curriculum, but not anti-curriculum or random ordering can indeed improve the performance either with limited training time budget or in the existence of noisy data.", "target": ["簡単なサンプルから学習するカリキュラム学習が有効なシチュエーションを検証した研究。カリキュラム/ランダム/逆カリキュラム(難しいものから)の3つを検証したところ、ランダムが最も良かった。ただ、学習時間が限られる場合やノイズが多いデータでは有効との結果。"]} {"source": "Recent studies on machine reading comprehension have focused on text-level understanding but have not yet reached the level of human understanding of the visual layout and content of real-world documents. In this study, we introduce a new visual machine reading comprehension dataset, named VisualMRC, wherein given a question and a document image, a machine reads and comprehends texts in the image to answer the question in natural language. Compared with existing visual question answering (VQA) datasets that contain texts in images, VisualMRC focuses more on developing natural language understanding and generation abilities. It contains 30,000+ pairs of a question and an abstractive answer for 10,000+ document images sourced from multiple domains of webpages. We also introduce a new model that extends existing sequence-to-sequence models, pre-trained with large-scale text corpora, to take into account the visual layout and content of documents. Experiments with VisualMRC show that this model outperformed the base sequence-to-sequence models and a state-of-the-art VQA model. However, its performance is still below that of humans on most automatic evaluation metrics. The dataset will facilitate research aimed at connecting vision and language understanding.", "target": ["マルチモーダルなReading ComprehensionのデータセットVisualMRCの提案。一般的なVQAよりも長い文書を含み、新聞記事を読んで回答するようなタスクとなっている。物体検出からROI、OCRからテキストを取得し、位置情報と共にTransformerベースのモデルに入力・学習する。"]} {"source": "A great part of software development involves conceptualizing or communicating the underlying procedures and logic that needs to be expressed in programs. One major difficulty of programming is turning concept into code, especially when dealing with the APIs of unfamiliar libraries. Recently, there has been a proliferation of machine learning methods for code generation and retrieval from natural language queries, but these have primarily been evaluated purely based on retrieval accuracy or overlap of generated code with developer-written code, and the actual effect of these methods on the developer workflow is surprisingly unattested. We perform the first comprehensive investigation of the promise and challenges of using such technology inside the IDE, asking \"at the current state of technology does it improve developer productivity or accuracy, how does it affect the developer experience, and what are the remaining gaps and challenges?\" We first develop a plugin for the IDE that implements a hybrid of code generation and code retrieval functionality, and orchestrate virtual environments to enable collection of many user events. 
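The curriculum-learning record above compares easy-first, hard-first, and random orderings while dynamically growing the visible training set. A minimal pacing loop in that spirit is sketched below; using a precomputed per-example loss as the difficulty score and the linear pacing schedule are illustrative choices.

```python
import numpy as np

def curriculum_batches(losses, epochs=5, batch_size=32, anti=False, rng=None):
    """Yield index batches from an easy-to-hard ordering while growing the visible set."""
    rng = rng or np.random.default_rng(0)
    order = np.argsort(losses)                       # easy (low loss) first
    if anti:
        order = order[::-1]                          # hardest examples first
    n = len(order)
    for epoch in range(1, epochs + 1):
        visible = order[: int(n * epoch / epochs)]   # pacing: reveal more data over time
        perm = rng.permutation(visible)              # shuffle within the visible pool
        for start in range(0, len(perm), batch_size):
            yield perm[start:start + batch_size], epoch

# Toy usage with random "difficulty" scores for 200 examples.
losses = np.random.default_rng(1).random(200)
for batch_idx, epoch in curriculum_batches(losses, epochs=3):
    pass  # feed the `batch_idx` rows of the dataset to the model here
print("last epoch seen:", epoch)
```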
We ask developers with various backgrounds to complete 14 Python programming tasks ranging from basic file manipulation to machine learning or data visualization, with or without the help of the plugin. While qualitative surveys of developer experience are largely positive, quantitative results with regards to increased productivity, code quality, or program correctness are inconclusive. Analysis identifies several pain points that could improve the effectiveness of future machine learning based code generation/retrieval developer assistants, and demonstrates when developers prefer code generation over code retrieval and vice versa. We release all data and software to pave the road for future empirical studies and development of better models.", "target": ["自然言語からのコード補完が、IDE(PyCharm)でどの程度開発者の生産性に寄与するか調査した研究。抽出と生成の2通りを検証しているが、実装の生産性を上げるにはまだ遠いという結果。"]} {"source": "Graph-structured data ubiquitously appears in science and engineering. Graph neural networks (GNNs) are designed to exploit the relational inductive bias exhibited in graphs; they have been shown to outperform other forms of neural networks in scenarios where structure information supplements node features. The most common GNN architecture aggregates information from neighborhoods based on message passing. Its generality has made it broadly applicable. In this paper, we focus on a special, yet widely used, type of graphs -- DAGs -- and inject a stronger inductive bias -- partial ordering -- into the neural network design. We propose the \\emph{directed acyclic graph neural network}, DAGNN, an architecture that processes information according to the flow defined by the partial order. DAGNN can be considered a framework that entails earlier works as special cases (e.g., models for trees and models updating node representations recurrently), but we identify several crucial components that prior architectures lack. We perform comprehensive experiments, including ablation studies, on representative DAG datasets (i.e., source code, neural architectures, and probabilistic graphical models) and demonstrate the superiority of DAGNN over simpler DAG architectures as well as general graph architectures.", "target": ["グラフの内,DAGに焦点を当て,より強い誘導バイアス(部分的順序付け)をニューラルネットワークに応用."]} {"source": "The success of Convolutional Neural Networks (CNNs) in computer vision is mainly driven by their strong inductive bias, which is strong enough to allow CNNs to solve vision-related tasks with random weights, meaning without learning. Similarly, Long Short-Term Memory (LSTM) has a strong inductive bias towards storing information over time. However, many real-world systems are governed by conservation laws, which lead to the redistribution of particular quantities -- e.g. in physical and economical systems. Our novel Mass-Conserving LSTM (MC-LSTM) adheres to these conservation laws by extending the inductive bias of LSTM to model the redistribution of those stored quantities. MC-LSTMs set a new state-of-the-art for neural arithmetic units at learning arithmetic operations, such as addition tasks, which have a strong conservation law, as the sum is constant over time. Further, MC-LSTM is applied to traffic forecasting, modelling a pendulum, and a large benchmark dataset in hydrology, where it sets a new state-of-the-art for predicting peak flows. 
In the hydrology example, we show that MC-LSTM states correlate with real-world processes and are therefore interpretable.", "target": ["質量保存の法則をLSTMに応用."]} {"source": "To communicate instance-wise uncertainty for prediction tasks, we show how to generate set-valued predictions for black-box predictors that control the expected loss on future test points at a user-specified level. Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets. This framework enables simple, distribution-free, rigorous error control for many tasks, and we demonstrate it in five large-scale machine learning problems: (1) classification problems where some mistakes are more costly than others; (2) multi-label classification, where each observation has multiple associated labels; (3) classification problems where the labels have a hierarchical structure; (4) image segmentation, where we wish to predict a set of pixels containing an object of interest; and (5) protein structure prediction. Lastly, we discuss extensions to uncertainty quantification for ranking, metric learning and distributionally robust learning.", "target": ["インスタンスごとの予測の不確実性を伝達する手法の提案.ホールドアウトにより任意のサンプルに対して有限サンプル保証を提供する."]} {"source": "Exploratory analysis of time series data can yield a better understanding of complex dynamical systems. Granger causality is a practical framework for analysing interactions in sequential data, applied in a wide range of domains. In this paper, we propose a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks. This framework is more interpretable than other neural-network-based techniques for inferring Granger causality, since in addition to relational inference, it also allows detecting signs of Granger-causal effects and inspecting their variability over time. In comprehensive experiments on simulated data, we show that our framework performs on par with several powerful baseline methods at inferring Granger causality and that it achieves better performance at inferring interaction signs. The results suggest that our framework is a viable and more interpretable alternative to sparse-input neural networks for inferring Granger causality.", "target": ["ニューラルネットワークベースのグランジャー因果推論手法の新しい提案."]} {"source": "In this paper, we present a transformer-based learning framework for 3D dance generation conditioned on music. We carefully design our network architecture and empirically study the keys for obtaining qualitatively pleasing results. The critical components include a deep cross-modal transformer, which well learns the correlation between the music and dance motion; and the full-attention with future-N supervision mechanism which is essential in producing long-range non-freezing motion. In addition, we propose a new dataset of paired 3D motion and music called AIST++, which we reconstruct from the AIST multi-view dance videos. This dataset contains 1.1M frames of 3D dance motion in 1408 sequences, covering 10 genres of dance choreographies and accompanied with multi-view camera parameters. To our knowledge it is the largest dataset of this kind. 
Rich experiments on AIST++ demonstrate our method produces much better results than the state-of-the-art methods both qualitatively and quantitatively.", "target": ["音楽から3Dダンスを生成する深層学習の研究.トランスフォーマーを応用している."]} {"source": "Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were \"virtual tokens\". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1\\% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.", "target": ["事前学習済み言語モデルを効率的にタスク転移させる手法の提案。タスク個別に学習可能なパラメーター(Prefix)を入力に付与し(Encoder/Decoderの場合は入出力)、事前学習済み言語モデルのパラメーター自体は固定する。これによりモデル全体の学習をする必要をなくしている。"]} {"source": "Entities are at the center of how we represent and aggregate knowledge. For instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. Current approaches can be understood as classifiers among atomic labels, one for each entity. Their weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. This approach has several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion. This mitigates the aforementioned technical issues since: (i) the autoregressive formulation directly captures relations between context and entity name, effectively cross encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the softmax loss is computed without subsampling negative data. We experiment with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their names. Code and pre-trained models at this https URL.", "target": ["テキストからのエンティティ抽出を、分類問題ではなく生成問題として解いた研究。Seq2Seqの枠組みでエンティティの名称を直接生成する。生成する際に、既存エンティティの名前に近しくなるよう制約をかける。20のデータベースでSOTAか近しい精度を記録"]} {"source": "If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? 
To study this question, we fine-tuned 100 instances of BERT on the Multi-genre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which evaluates syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that \"the doctor visited the lawyer\" does not entail \"the lawyer visited the doctor\"), accuracy ranged from 0.00% to 66.2%. Such variation is likely due to the presence of many local minima that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.", "target": ["同じように学習させたBERTは同じような文法知識を学ぶか検証した研究。同じデータセット(MNLI)で学習させたところ精度はほとんど同じだが、文法理解には精度に大きな差異が出た(HANS datasetを使用)。"]} {"source": "Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions.", "target": ["周期的活性化関数(sin)で信号を陰的に表現するネットワークSIRENの提案。ReLUのような活性化では信号の空間的・時間的な微分をうまく表現できない(一方でsigmoidやtanhだと細部がとれない)ため、sinによる活性化と原理に基づいた初期化を提案。画像・動画・音・波動場などの複雑な信号とその微分を表現でき、境界値問題の解法にも応用できる。"]} {"source": "We study a class of realistic computer vision settings wherein one can influence the design of the objects being recognized. We develop a framework that leverages this capability to significantly improve vision models' performance and robustness. This framework exploits the sensitivity of modern machine learning algorithms to input perturbations in order to design \"robust objects,\" i.e., objects that are explicitly optimized to be confidently detected or classified. We demonstrate the efficacy of the framework on a wide variety of vision-based tasks ranging from standard benchmarks, to (in-simulation) robotics, to real-world experiments. Our code can be found at this https URL.", "target": ["敵対的サンプルの逆の操作をすることで確信度を高めるようなパッチを作成し、それを貼り付けることで認識精度を高める研究。ドローンの着陸時に認識しないといけない着陸パッドなど、現実世界で認識精度が必要な箇所に貼り付けることで安全性などを高める応用が考えられる。"]} {"source": "A generalist robot must be able to complete a variety of tasks in its environment. One appealing way to specify each task is in terms of a goal observation.
However, learning goal-reaching policies with reinforcement learning remains a challenging problem, particularly when hand-engineered reward functions are not available. Learned dynamics models are a promising approach for learning about the environment without rewards or task-directed data, but planning to reach goals with such a model requires a notion of functional similarity between observations and goal states. We present a self-supervised method for model-based visual goal reaching, which uses both a visual dynamics model as well as a dynamical distance function learned using model-free reinforcement learning. Our approach learns entirely using offline, unlabeled data, making it practical to scale to large and diverse datasets. In our experiments, we find that our method can successfully learn models that perform a variety of tasks at test-time, moving objects amid distractors with a simulated robotic arm and even learning to open and close a drawer using a real-world robot. In comparisons, we find that this approach substantially outperforms both model-free and model-based prior methods. Videos and visualizations are available here: this http URL.", "target": ["自己教師でオフライン強化学習を行う手法。取得済みの軌跡から環境の遷移(Dynamics)、現在の状態からゴールまでの距離(Distance)を学習する。前者は状態と行動のペアから遷移先状態を学習(Forward型の学習)、後者はQ-learningのシンプルな手法で学習する。"]} {"source": "We seek to learn models that we can interact with using high-level concepts: if the model did not think there was a bone spur in the x-ray, would it still predict severe arthritis? State-of-the-art models today do not typically support the manipulation of concepts like \"the existence of bone spurs\", as they are trained end-to-end to go directly from raw input (e.g., pixels) to output (e.g., arthritis severity). We revisit the classic idea of first predicting concepts that are provided at training time, and then using these concepts to predict the label. By construction, we can intervene on these concept bottleneck models by editing their predicted concept values and propagating these changes to the final prediction. On x-ray grading and bird identification, concept bottleneck models achieve competitive accuracy with standard end-to-end models, while enabling interpretation in terms of high-level clinical concepts (\"bone spurs\") or bird attributes (\"wing color\"). These models also allow for richer human-model interaction: accuracy improves significantly if we can correct model mistakes on concepts at test time.", "target": ["画像の予測で、コンセプトを中継し予測への介入性を高める研究。入力/予測の間にコンセプト(鳥の種別予測なら羽の色やくちばしの長さなど)の予測を挟み、入力=>コンセプト=>予測とする。予測に使用されたコンセプトに違和感があれば編集して再予測させることができる。"]} {"source": "This paper proposes to make a first step towards compatible and hence reusable network components. Rather than training networks for different tasks independently, we adapt the training process to produce network components that are compatible across tasks. In particular, we split a network into two components, a features extractor and a target task head, and propose various approaches to accomplish compatibility between them. We systematically analyse these approaches on the task of image classification on standard datasets. We demonstrate that we can produce components which are directly compatible without any fine-tuning or compromising accuracy on the original tasks. 
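The component split described in the preceding entry can be illustrated with a small sketch: a model is divided into a feature extractor and a task head, and a head trained on top of one extractor is reused with another. The architectures and shapes below are arbitrary assumptions; compatibility itself would have to be enforced during training as the paper proposes.

```python
import torch
import torch.nn as nn

# Two independently trained feature extractors that we would like to be interchangeable.
backbone_a = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
backbone_b = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())

# A task head trained on top of backbone_a's feature space.
head = nn.Linear(128, 10)

x = torch.randn(4, 1, 28, 28)

# If the two backbones have been trained to produce *compatible* features,
# the same head can be reused with either extractor, without fine-tuning.
logits_a = head(backbone_a(x))
logits_b = head(backbone_b(x))  # only meaningful when compatibility was enforced during training
```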
Afterwards, we demonstrate the use of compatible components on three applications: Unsupervised domain adaptation, transferring classifiers across feature extractors with different architectures, and increasing the computational efficiency of transfer learning.", "target": ["互換性のあるネットワークモジュールを得る研究。ネットワークを特徴抽出/タスク個別予測の2つに分け特徴抽出部分の互換性を検証している。マルチタスク学習後特徴抽出を入れ替える、共通ヘッドを含む構造で一方のタスクを学習した後ヘッド固定で他方のタスクを学習の2通りを検証。転移時間削減に成功"]} {"source": "It is a common belief in the NLP community that continuous bag-of-words (CBOW) word embeddings tend to underperform skip-gram (SG) embeddings. We find that this belief is founded less on theoretical differences in their training objectives but more on faulty CBOW implementations in standard software libraries such as the official implementation word2vec.c and Gensim. We show that our correct implementation of CBOW yields word embeddings that are fully competitive with SG on various intrinsic and extrinsic tasks while being more than three times as fast to train. We release our implementation, kōan, at this https URL.", "target": ["CBOWがSkip-gramに比べて性能が劣るのは理論面ではなく実装面の誤りとした研究。オリジナルのCBOW(とそれに忠実なgensim実装)は勾配の計算式が間違っている、具体的にはコンテキストウィンドウ内単語数で正規化していないとし、実装の修正で性能が向上するという"]} {"source": "Influence functions approximate the \"influences\" of training data-points for test predictions and have a wide variety of applications. Despite the popularity, their computational cost does not scale well with model and training data size. We present FastIF, a set of simple modifications to influence functions that significantly improves their run-time. We use k-Nearest Neighbors (kNN) to narrow the search space down to a subset of good candidate data points, identify the configurations that best balance the speed-quality trade-off in estimating the inverse Hessian-vector product, and introduce a fast parallel variant. Our proposed method achieves about 80X speedup while being highly correlated with the original influence values. With the availability of the fast influence functions, we demonstrate their usefulness in four applications. First, we examine whether influential data-points can \"explain\" test time behavior using the framework of simulatability. Second, we visualize the influence interactions between training and test data-points. Third, we show that we can correct model errors by additional fine-tuning on certain influential data-points, improving the accuracy of a trained MultiNLI model by 2.5% on the HANS dataset. Finally, we experiment with a similar setup but fine-tuning on datapoints not seen during training, improving the model accuracy by 2.8% and 1.7% on HANS and ANLI datasets respectively. Overall, our fast influence functions can be efficiently applied to large models and datasets, and our experiments demonstrate the potential of influence functions in model interpretation and correcting model errors. Code is available at this https URL", "target": ["評価精度への学習データの影響を計算する関数(influence functions)の導出を効率化する研究。評価データに近い学習データをkNNで絞り込む、ヘシアンの逆行列を計算結果のキャッシュ/ミニバッチによる推定で効率化している。80倍の高速化を達成。"]} {"source": "The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. 
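A minimal sketch of applying the same differentiable augmentation to both real and generated samples might look as follows; the specific augmentation ops and the loss form are placeholders, not the paper's exact DiffAugment implementation.

```python
import torch
import torch.nn.functional as F

def diff_augment(x):
    """A differentiable augmentation: random brightness shift + random translation.
    (The actual DiffAugment paper uses its own set of color/translation/cutout ops.)"""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)        # brightness
    shift = torch.randint(-2, 3, (2,))
    x = torch.roll(x, shifts=(int(shift[0]), int(shift[1])), dims=(2, 3))  # translation
    return x

def d_step(D, G, real, z):
    # The same differentiable transform is applied to real and generated images.
    return (F.softplus(-D(diff_augment(real))).mean()
            + F.softplus(D(diff_augment(G(z)))).mean())

def g_step(D, G, z):
    # Because the transform is differentiable, gradients flow back to G through it.
    return F.softplus(-D(diff_augment(G(z)))).mean()
```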
Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt the differentiable augmentation for the generated samples, effectively stabilizes training, and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128x128 and 2-4x reductions of FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at this https URL.", "target": ["少数データで画像を生成できるGAN。微分可能なデータ拡張を使って、G/Dの学習両方でデータ拡張をかける。微分不可能だとDの学習のみでしかデータ拡張をかけれず、G/Dの学習のバランスが崩れ、性能劣化する。"]} {"source": "Recently, retrieval systems based on dense representations have led to important improvements in open-domain question answering, and related tasks. While very effective, this approach is also memory intensive, as the dense vectors for the whole knowledge source need to be kept in memory. In this paper, we study how the memory footprint of dense retriever-reader systems can be reduced. We consider three strategies to reduce the index size: dimension reduction, vector quantization and passage filtering. We evaluate our approach on two question answering benchmarks: TriviaQA and NaturalQuestions, showing that it is possible to get competitive systems using less than 6Gb of memory.", "target": ["ベクトル特徴を使用した検索システムで、インデックスのサイズを減らしメモリ効率を上げる手法。ベクトルのサイズダウン・量子化に加え関係ないドキュメントをフィルタする(登録記事のタイトル/カテゴリに関係ある節/ない節(ランダムサンプル)を識別する分類機を使用)ことで効率化している。"]} {"source": "In this work, we propose TransTrack, a simple but efficient scheme to solve the multiple object tracking problems. TransTrack leverages the transformer architecture, which is an attention-based query-key mechanism. It applies object features from the previous frame as a query of the current frame and introduces a set of learned object queries to enable detecting new-coming objects. It builds up a novel joint-detection-and-tracking paradigm by accomplishing object detection and object association in a single shot, simplifying complicated multi-step settings in tracking-by-detection methods. On MOT17 and MOT20 benchmark, TransTrack achieves 74.5\\% and 64.5\\% MOTA, respectively, competitive to the state-of-the-art methods. We expect TransTrack to provide a novel perspective for multiple object tracking. The code is available at: \\url{this https URL}.", "target": ["オブジェクトトラッキングにTransformerを使用した研究。Decoderは物体検出と(前画像)オブジェクト位置推定用の2つでそれぞれ(学習可能な)物体検出用クエリ、オブジェクト特徴を入力とする。Encoderで計算した前後画像特徴をKeyとしてDecoder内でCross Attentionし位置を推定"]} {"source": "Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. 
The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve models' generalization. We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural language generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transformer layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline by a decent margin (90.3 versus 89.8).", "target": ["BERT/RoBERTaを改良したDeBERTaの提案。潜在表現をcontentとpositionに分割する(Attentionは4通りの組み合わせに対する重みの合計になる)、Fine Tuneを想定し最後のsoftmaxを1~2層のTransformer+softmaxで構成されるDecoderに置き換えるという工夫をしている(Fine Tune時はこのDecoderを外して使う)。"]} {"source": "This paper presents a novel training method, Conditional Masked Language Modeling (CMLM), to effectively learn sentence representations on large scale unlabeled corpora. CMLM integrates sentence representation learning into MLM training by conditioning on the encoded vectors of adjacent sentences. Our English CMLM model achieves state-of-the-art performance on SentEval, even outperforming models learned using supervised signals. As a fully unsupervised learning method, CMLM can be conveniently extended to a broad range of languages and domains. We find that a multilingual CMLM model co-trained with bitext retrieval (BR) and natural language inference (NLI) tasks outperforms the previous state-of-the-art multilingual models by a large margin, e.g. 10% improvement upon baseline models on cross-lingual semantic search. We explore the same language bias of the learned representations, and propose a simple, post-training and model agnostic approach to remove the language identifying information from the representation while still retaining sentence semantics.", "target": ["良質な文表現を得るために、BERTの学習を組み合わせた研究。Encode => Average Pooling => MLPで文表現を得た後、次文の入力に結合してMasked LMの学習を行う(文表現用と学習用Encoderは重みを共有する)。隣接文情報からのMask予測という点で文レベルのSkip Thoughtに近い。多言語NLPで優秀な精度。"]} {"source": "In general, sufficient data is essential for better performance and generalization of deep-learning models. However, lots of limitations (cost, resources, etc.) of data collection lead to a lack of enough data in most areas. In addition, the various domains and licenses of each data source also lead to difficulties in collecting sufficient data. This situation makes it hard to utilize not only the pre-trained model, but also the external knowledge.
Therefore, it is important to leverage small datasets effectively to achieve better performance. We applied some techniques in three aspects: data, loss function, and prediction to enable training from scratch with less data. With these methods, we obtain high accuracy by leveraging ImageNet data which consist of only 50 images per class. Furthermore, our model ranked 4th in the Visual Inductive Priors for Data-Efficient Computer Vision Challenge.", "target": ["画像分類において少ないデータを有効に活用する方法をデータ、損失関数、予測の3つの側面からいくつかの手法を適用した。具体的にはLSB Swap、Focal Cosine Loss、Plurality Voting Ensembleという手法。その結果1クラス50枚の画像からなるImageNetのデータにおいて高い精度を得られた。"]} {"source": "Data mixing augmentation has proved effective in training deep models. Recent methods mix labels mainly based on the mixture proportion of image pixels. As the main discriminative information of a fine-grained image usually resides in subtle regions, methods along this line are prone to heavy label noise in fine-grained recognition. We propose in this paper a novel scheme, termed as Semantically Proportional Mixing (SnapMix), which exploits class activation map (CAM) to lessen the label noise in augmenting fine-grained data. SnapMix generates the target label for a mixed image by estimating its intrinsic semantic composition, and allows for asymmetric mixing operations and ensures semantic correspondence between synthetic images and target labels. Experiments show that our method consistently outperforms existing mixed-based approaches on various datasets and under different network depths. Furthermore, by incorporating the mid-level features, the proposed SnapMix achieves top-level performance, demonstrating its potential to serve as a solid baseline for fine-grained recognition. Our code is available at this https URL.", "target": ["Data mixing augmentationの新しい手法であるSemantically Proportional Mixing (SnapMix)の提案。最近のData mixing augmentationの課題として細かい領域の認識においてラベルノイズが大きくなる傾向がある。提案する手法ではクラス活性化マップ(CAM)を利用してラベルノイズを軽減する。"]} {"source": "Planning - the ability to analyze the structure of a problem in the large and decompose it into interrelated subproblems - is a hallmark of human intelligence. While deep reinforcement learning (RL) has shown great promise for solving relatively straightforward control tasks, it remains an open problem how to best incorporate planning into existing deep RL paradigms to handle increasingly complex environments. One prominent framework, Model-Based RL, learns a world model and plans using step-by-step virtual rollouts. This type of world model quickly diverges from reality when the planning horizon increases, thus struggling at long-horizon planning. How can we learn world models that endow agents with the ability to do temporally extended reasoning? In this work, we propose to learn graph-structured world models composed of sparse, multi-step transitions. We devise a novel algorithm to learn latent landmarks that are scattered (in terms of reachability) across the goal space as the nodes on the graph. In this same graph, the edges are the reachability estimates distilled from Q-functions. On a variety of high-dimensional continuous control tasks ranging from robotic manipulation to navigation, we demonstrate that our method, named L3P, significantly outperforms prior work, and is oftentimes the only method capable of leveraging both the robustness of model-free RL and generalization of graph-search algorithms.
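A rough sketch of planning over such a landmark graph is given below; the landmark embeddings, the reachability estimate, and the distance threshold are stand-ins for the learned quantities described above, not the paper's implementation.

```python
import networkx as nx
import numpy as np

# Hypothetical inputs: latent landmark embeddings and a learned reachability
# estimate (e.g. distilled from a goal-conditioned Q-function).
landmarks = np.random.randn(20, 8)  # 20 landmarks in latent space

def reach_cost(a, b):
    # stand-in for a learned "steps to reach b from a" estimate
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# Nodes are landmarks, edges carry the reachability estimate; only "close" pairs are connected.
G = nx.DiGraph()
for i in range(len(landmarks)):
    for j in range(len(landmarks)):
        if i != j and reach_cost(i, j) < 3.0:
            G.add_edge(i, j, weight=reach_cost(i, j))

# Planning = graph search over sparse multi-step transitions instead of step-by-step rollouts.
start, goal = 0, 19
if nx.has_path(G, start, goal):
    plan = nx.shortest_path(G, start, goal, weight="weight")
    print("landmark subgoals:", plan)
```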
We believe our work is an important step towards scalable planning in reinforcement learning.", "target": ["モデルベースの強化学習でグラフ表現を利用した研究。表現学習単体(World Model)だと計画期間が長くなるにつれてずれが大きくなることから、表現特徴の近さ/到達距離の近さからクラスタリングを行いランドマークを特定、グラフ探索により計画作成を行う。"]} {"source": "Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.", "target": ["Transformerに画像を効率的に学習させる手法。CNNのモデルを使用し蒸留を行う。通常画像パッチ+教師ラベル予測用Tokenを入力とするが、教師CNNの出力を予測する蒸留用Tokenを追加。ラベル+教師出力を別々のTokenから予測し学習する。8NodeのGPU1つを使い2~3日で学習完了。"]} {"source": "The meaning of natural language text is supported by cohesion among various kinds of entities, including coreference relations, predicate-argument structures, and bridging anaphora relations. However, predicate-argument structures for nominal predicates and bridging anaphora relations have not been studied well, and their analyses have been still very difficult. Recent advances in neural networks, in particular, self training-based language models including BERT (Devlin et al., 2019), have significantly improved many natural language processing tasks, making it possible to dive into the study on analysis of cohesion in the whole text. In this study, we tackle an integrated analysis of cohesion in Japanese texts. Our results significantly outperformed existing studies in each task, especially about 10 to 20 point improvement both for zero anaphora and coreference resolution. Furthermore, we also showed that coreference resolution is different in nature from the other tasks and should be treated specially.", "target": ["BERTベースのモデルでゼロ照応や共参照といった結束性を判定するタスクを解く研究。マルチタスクで解いており、共参照については推論結果を戻して使用する2段階のステップで解いている。タスクによって既存手法をF値で10pt以上上回る精度を達成。"]} {"source": "Reinforcement learning has enabled agents to solve challenging tasks in unknown environments. However, manually crafting reward functions can be time consuming, expensive, and error prone to human error. Competing objectives have been proposed for agents to learn without external supervision, but it has been unclear how well they reflect task rewards or human behavior. To accelerate the development of intrinsic objectives, we retrospectively compute potential objectives on pre-collected datasets of agent behavior, rather than optimizing them online, and compare them by analyzing their correlations. We study input entropy, information gain, and empowerment across seven agents, three Atari games, and the 3D game Minecraft. We find that all three intrinsic objectives correlate more strongly with a human behavior similarity metric than with task reward. 
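As an illustration of computing one such objective retrospectively, a possible input-entropy estimate over a pre-collected dataset of agent observations is sketched below; the discretisation, the toy data, and the human-similarity placeholder are assumptions for the example.

```python
import numpy as np
from scipy.stats import spearmanr

def input_entropy(observations, bins=16):
    """Entropy of the (discretised) observation distribution visited by an agent,
    computed retrospectively over a pre-collected dataset of its behaviour."""
    hist, _ = np.histogramdd(observations, bins=bins)
    p = hist.flatten() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Toy example: one entropy value per agent, correlated with some similarity-to-human score.
agents_obs = [np.random.rand(5000, 2) for _ in range(7)]  # 7 agents, 2-D observations
entropies = [input_entropy(obs) for obs in agents_obs]
human_similarity = np.random.rand(7)                      # placeholder metric
rho, _ = spearmanr(entropies, human_similarity)
print("rank correlation:", rho)
```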
Moreover, input entropy and information gain correlate more strongly with human similarity than task reward does, suggesting the use of intrinsic objectives for designing agents that behave similarly to human players.", "target": ["内発報酬による行動と人の行動との相関を調べた研究。行動による環境変化の大きさ(Empowerment)や観測の予測可能性等の内発報酬に基づいた行動は、単純なタスク報酬ベースの行動より人の行動との相関が大きいという結果。"]} {"source": "Accompanying China's rapid urbanization in recent decades, especially in the new millennium, the housing problem has become one of the most important issues. The estimation and analysis of housing vacancy rate (HVR) can assist decision-making in solving this puzzle. It is particularly significant to government departments. This paper proposed a practical model for estimating the HVR in Qingdao city using NPP-VIIRS nighttime light composed data, Geographic National Conditions Monitoring data (GNCMD) and resident population distribution data. The main steps are: Firstly, pre-process the data, and finally forming a series of data sets with 500*500 grid as the basic unit; Secondly, select 400 grids of different types within the city as sample grids for SVM training, and establish a reasonable HVR model; Thirdly, using the model to estimate HVR in Qingdao and employing spatial statistical analysis methods to reveal the spatial differentiation pattern of HVR in this city; Finally test the accuracy of the model with two different methods. The results conclude that HVR in the southeastern coastal area of Qingdao city is relatively low and the low-low clusters distributed in patches. Simultaneously, in other regions it shows the tendency of the low value accumulation in the downtown area and the increasing trend towards the outer suburbs. Meanwhile the suburban and scenery regions by the side of the sea and mountains are likely to be the most vacant part of the city.", "target": ["NPP-VIIRSの夜間光合成データ、地理的国家状況監視データ(GNCMD)、居住者人口分布データを用いて、青島市の住宅空室率(HVR)を推定するためのモデル(SVM)を提案した。"]} {"source": "This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages.", "target": ["音声で教師なし学習を行ったwav2vec 2.0 ( #1950 ) を多言語に拡張した手法。言語共通のコードブックで離散化する処理を挟むことで、言語の識別を学習できるようにしている。"]} {"source": "Designed to learn long-range interactions on sequential data, transformers continue to show state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no inductive bias that prioritizes local interactions. This makes them expressive, but also computationally infeasible for long sequences, such as high-resolution images. 
We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images. We show how to (i) use CNNs to learn a context-rich vocabulary of image constituents, and in turn (ii) utilize transformers to efficiently model their composition within high-resolution images. Our approach is readily applied to conditional synthesis tasks, where both non-spatial information, such as object classes, and spatial information, such as segmentations, can control the generated image. In particular, we present the first results on semantically-guided synthesis of megapixel images with transformers and obtain the state of the art among autoregressive models on class-conditional ImageNet. Code and pretrained models can be found at this https URL .", "target": ["高解像度の画像生成にTransformerを応用した研究。CNNで抽出した特徴をコードブックで離散化することで、高解像度でもTransformerで対応可能な長さに落としている(ピクセルレベルだとさすがに無理)。VQGANのDecode前にTransformerによるAttention処理を入れる構成になっている。"]} {"source": "While task-specific finetuning of pretrained networks has led to significant empirical advances in NLP, the large size of networks makes finetuning difficult to deploy in multi-task, memory-constrained settings. We propose diff pruning as a simple approach to enable parameter-efficient transfer learning within the pretrain-finetune framework. This approach views finetuning as learning a task-specific diff vector that is applied on top of the pretrained parameter vector, which remains fixed and is shared across different tasks. The diff vector is adaptively pruned during training with a differentiable approximation to the L0-norm penalty to encourage sparsity. Diff pruning becomes parameter-efficient as the number of tasks increases, as it requires storing only the nonzero positions and weights of the diff vector for each task, while the cost of storing the shared pretrained model remains constant. It further does not require access to all tasks during training, which makes it attractive in settings where tasks arrive in stream or the set of tasks is unknown. We find that models finetuned with diff pruning can match the performance of fully finetuned baselines on the GLUE benchmark while only modifying 0.5% of the pretrained model's parameters per task.", "target": ["マルチタスクを前提とした効率的な転移学習の提案。事前学習済みベクトルとの差分(Diff vector)のみ保持することで、タスク個別にヘッドを持つより効率を上げている。差分ベクトルの収納効率を上げるため、マスク(L0-norm)による正則化を行っている。"]} {"source": "We present the Supermasks in Superposition (SupSup) model, capable of sequentially learning thousands of tasks without catastrophic forgetting. Our approach uses a randomly initialized, fixed base network and for each task finds a subnetwork (supermask) that achieves good performance. If task identity is given at test time, the correct subnetwork can be retrieved with minimal memory usage. If not provided, SupSup can infer the task using gradient-based optimization to find a linear superposition of learned supermasks which minimizes the output entropy. In practice we find that a single gradient step is often sufficient to identify the correct mask, even among 2500 tasks. We also showcase two promising extensions. First, SupSup models can be trained entirely without task identity information, as they may detect when they are uncertain about new data and allocate an additional supermask for the new training distribution. 
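A heavily simplified sketch of per-task supermasks over a fixed random network is given below; where the paper infers the task with a gradient step over a superposition of masks, this example simply picks the mask whose outputs have the lowest entropy, and all sizes are arbitrary.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
W = torch.randn(32, 10)  # fixed, randomly initialised base weights (never trained)
masks = {t: (torch.rand(32, 10) > 0.5).float() for t in range(5)}  # one supermask per task

def forward(x, task):
    return x @ (W * masks[task])  # the subnetwork is W gated by the task's mask

def infer_task(x):
    """No task id given: pick the mask whose outputs are most confident (lowest entropy).
    The paper instead optimises a linear superposition of masks; this is a simplification."""
    ent = []
    for t in masks:
        p = F.softmax(forward(x, t), dim=-1)
        ent.append(-(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean())
    return int(torch.stack(ent).argmin())

x = torch.randn(8, 32)
print("inferred task:", infer_task(x))
```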
Finally the entire, growing set of supermasks can be stored in a constant-sized reservoir by implicitly storing them as attractors in a fixed-sized Hopfield network.", "target": ["NN内のサブネットワークをタスク特化させる研究。タスクに応じて重みにマスクをかけるが、推論時タスク情報がない場合はOne-shotで勾配を計算しエントロピー最小(最もそれらしい)マスクを選択、学習時もない場合タスクの識別ができない(エントロピーが大きい=一様な)場合追加する。"]} {"source": "We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available at this https URL.", "target": ["物体検知/セグメンテーションにTransformer (#329 を適用した研究。ResNetで特徴抽出しTransformerに入れるシンプルな構成だが、End2Endで予測するため(=重複領域除去処理(NMS)フリーにするため)固定数の検出予測と実際をマッチングさせるlossを導入している。"]} {"source": "The growing population in China has led to an increasing importance of crop area (CA) protection. A powerful tool for acquiring accurate and up-to-date CA maps is automatic mapping using information extracted from high spatial resolution remote sensing (RS) images. RS image information extraction includes feature classification, which is a long-standing research issue in the RS community. Emerging deep learning techniques, such as the deep semantic segmentation network technique, are effective methods to automatically discover relevant contextual features and get better image classification results. In this study, we exploited deep semantic segmentation networks to classify and extract CA from high-resolution RS images. WorldView-2 (WV-2) images with only Red-Green-Blue (RGB) bands were used to confirm the effectiveness of the proposed semantic classification framework for information extraction and the CA mapping task. Specifically, we used the deep learning framework TensorFlow to construct a platform for sampling, training, testing, and classifying to extract and map CA on the basis of DeepLabv3+. By leveraging per-pixel and random sample point accuracy evaluation methods, we conclude that the proposed approach can efficiently obtain acceptable accuracy (Overall Accuracy = 95%, Kappa = 0.90) of CA classification in the study area, and the approach performs better than other deep semantic segmentation networks (U-Net/PspNet/SegNet/DeepLabv2) and traditional machine learning methods, such as Maximum Likelihood (ML), Support Vector Machine (SVM), and RF (Random Forest). Furthermore, the proposed approach is highly scalable for the variety of crop types in a crop area. 
Overall, the proposed approach can train a precise and effective model that is capable of adequately describing the small, irregular fields of smallholder agriculture and handling the great level of details in RGB high spatial resolution images.", "target": ["RGB3バンドのみのWorldView-2の1mの高解像度衛星画像を用いたCA(CropArea)のSemantic Segmentation(DeepLabv3+)を適用、CAマッピングの有効性の確認した。"]} {"source": "A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language. Here we study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment. This setting nevertheless integrates a number of the central challenges of artificial intelligence (AI) research: complex visual perception and goal-directed physical control, grounded language comprehension and production, and multi-agent social interaction. To build agents that can robustly interact with humans, we would ideally train them while they interact with humans. However, this is presently impractical. Therefore, we approximate the role of the human with another learned agent, and use ideas from inverse reinforcement learning to reduce the disparities between human-human and agent-agent interactive behaviour. Rigorously evaluating our agents poses a great challenge, so we develop a variety of behavioural tests, including evaluation by humans who watch videos of agents or interact directly with them. These evaluations convincingly demonstrate that interactive training and auxiliary losses improve agent behaviour beyond what is achieved by supervised learning of actions alone. Further, we demonstrate that agent capabilities generalise beyond literal experiences in the dataset. Finally, we train evaluation models whose ratings of agents agree well with human judgement, thus permitting the evaluation of new agent models without additional effort. Taken together, our results in this virtual environment provide evidence that large-scale human behavioural imitation is a promising tool to create intelligent, interactive agents, and the challenge of reliably evaluating such agents is possible to surmount.", "target": ["人間とインタラクションを行うエージェントを作るためのステップ・アーキテクチャについてまとめた研究。仮想環境を作成しログを収集、模倣学習がベースだが、肝心のインタラクションの学習でエージェント同士での学習を導入している(片方が人間役(指示役))を行う。"]} {"source": "Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. 
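One way such domain-adaptive pre-training could emphasise domain-specialised entities is sketched below as a toy masking scheme; the span-focused masking and the example spans are assumptions for illustration, not necessarily the exact CrossNER strategy.

```python
import random

def mask_tokens(tokens, entity_spans, p_random=0.15, mask_token="[MASK]"):
    """Toy masking for domain-adaptive pre-training: always mask tokens inside
    domain-specialised entity spans, plus a fraction of the remaining tokens at random."""
    entity_positions = {i for start, end in entity_spans for i in range(start, end)}
    out, labels = [], []
    for i, tok in enumerate(tokens):
        if i in entity_positions or random.random() < p_random:
            out.append(mask_token)
            labels.append(tok)   # predict the original token
        else:
            out.append(tok)
            labels.append(None)  # not part of the MLM loss
    return out, labels

tokens = "the transformer architecture was introduced by vaswani et al".split()
masked, labels = mask_tokens(tokens, entity_spans=[(1, 2), (6, 8)])
print(masked)
```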
Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.", "target": ["異なるドメインに対応できる固有表現認識の研究。これまでドメインごとの性能を測るベンチマークがなかったことから、ニュース・政治・科学・音楽・文学・AI(論文)の6ドメインからなるベンチマークを作成。事前学習用のコーパスも付属している。"]} {"source": "The presence of spurious features interferes with the goal of obtaining robust models that perform well across many groups within the population. A natural remedy is to remove spurious features from the model. However, in this work we show that removal of spurious features can decrease accuracy due to the inductive biases of overparameterized models. We completely characterize how the removal of spurious features affects accuracy across different groups (more generally, test distributions) in noiseless overparameterized linear regression. In addition, we show that removal of spurious features can decrease the accuracy even in balanced datasets -- each target co-occurs equally with each spurious feature; and it can inadvertently make the model more susceptible to other spurious features. Finally, we show that robust self-training can remove spurious features without affecting the overall accuracy. Experiments on the Toxic-Comment-Detection and CelebA datasets show that our results hold in non-linear models.", "target": ["本質的な識別特徴ではない疑似特徴を取り除くと、逆にテスト時のパフォーマンスが落ちることを示した研究。疑似特徴を削ると学習データに最適化してしまい、未学習データの予測に必要な重みへの配分がなくなってしまうからという。"]} {"source": "A large part of the current success of deep learning lies in the effectiveness of data -- more precisely: labelled data. Yet, labelling a dataset with human annotation continues to carry high costs, especially for videos. While in the image domain, recent methods have allowed to generate meaningful (pseudo-) labels for unlabelled datasets without supervision, this development is missing for the video domain where learning feature representations is the current focus. In this work, we a) show that unsupervised labelling of a video dataset does not come for free from strong feature encoders and b) propose a novel clustering method that allows pseudo-labelling of a video dataset without any human annotations, by leveraging the natural correspondence between the audio and visual modalities. An extensive analysis shows that the resulting clusters have high semantic overlap to ground truth human labels. We further introduce the first benchmarking results on unsupervised labelling of common video datasets Kinetics, Kinetics-Sound, VGG-Sound and AVE.", "target": ["自己教師学習で動画のラベリングを行う研究。表現とそれによるクラスタリングを同時に学習するSeLaをベースに、クラスタへの配分がそれぞれ等しい⇒偏りを許容、画像⇒動画(マルチモーダル)、単一の関数学習⇒複数並列といった改善を行っている。"]} {"source": "Siamese networks have become a common structure in various recent models for unsupervised visual representation learning. These models maximize the similarity between two augmentations of one image, subject to certain conditions for avoiding collapsing solutions. In this paper, we report surprising empirical results that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
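A minimal sketch of such a Siamese objective is shown below (an encoder plus a predictor head, negative cosine similarity, and the stop-gradient operation discussed in the next sentence); the network sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))

def neg_cosine(p, z):
    # z is detached: the stop-gradient that prevents the representations from collapsing
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def simsiam_loss(x1, x2):
    """x1, x2: two augmented views of the same images. No negatives, no momentum encoder."""
    z1, z2 = encoder(x1), encoder(x2)
    p1, p2 = predictor(z1), predictor(z2)
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)

x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
loss = simsiam_loss(x1, x2)
loss.backward()
```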
Our experiments show that collapsing solutions do exist for the loss and structure, but a stop-gradient operation plays an essential role in preventing collapsing. We provide a hypothesis on the implication of stop-gradient, and further show proof-of-concept experiments verifying it. Our \"SimSiam\" method achieves competitive results on ImageNet and downstream tasks. We hope this simple baseline will motivate people to rethink the roles of Siamese architectures for unsupervised representation learning. Code will be made available.", "target": ["Siameseネットワークの表現崩壊を防ぐシンプルな手法。これまで考えられていたネガティブサンプリングや大きなバッチサイズ、Encoderのmomentumよりも勾配停止の方が効くという結果。予測するMLPがついているのとは逆側のEncoderへの勾配を停止させる。"]} {"source": "Training Generative Adversarial Networks (GAN) on high-fidelity images usually requires large-scale GPU-clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GAN with minimum computing cost. We propose a light-weight GAN structure that gains superior quality on 1024^2 resolution. Notably, the model converges from scratch with just a few hours of training on a single RTX-2080 GPU, and has a consistent performance, even with less than 100 training samples. Two technique designs constitute our work, a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature-encoder. With thirteen datasets covering a wide variety of image domains (The datasets and code are available at https://github.com/odegeasslbc/FastGAN-pytorch), we show our model's superior performance compared to the state-of-the-art StyleGAN2, when data and computing budget are limited.", "target": ["省エネかつ小サンプルで学習可能なGAN。高解像度画像(1024x1024)でも100枚程度でSOTA(StyleGAN2)に匹敵する画像が生成できる。異なる解像度の特徴マップ間でchannelの重みづけ(Excitation)を行う、Discriminator側でAutoEncoder的な学習を行うという工夫がされている。"]} {"source": "A wide range of reinforcement learning (RL) problems - including robustness, transfer learning, unsupervised RL, and emergent complexity - require specifying a distribution of tasks or environments in which a policy will be trained. However, creating a useful distribution of environments is error prone, and takes a significant amount of developer time and effort. We propose Unsupervised Environment Design (UED) as an alternative paradigm, where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments. Existing approaches to automatically generating environments suffer from common failure modes: domain randomization cannot generate structure or adapt the difficulty of the environment to the agent's learning progress, and minimax adversarial training leads to worst-case environments that are often unsolvable. To generate structured, solvable environments for our protagonist agent, we introduce a second, antagonist agent that is allied with the environment-generating adversary. The adversary is motivated to generate environments which maximize regret, defined as the difference between the protagonist and antagonist agent's return. We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED).
Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments.", "target": ["強化学習で、学習の度合いに応じて環境の難易度を調整する研究。ランダム生成では障害物間の構造が失われ難易度が下がり、Adversarialに生成すると攻略不能な環境になることがある。そこで敵対エージェントを用い、獲得報酬の差が大きくなる環境を生成する。"]} {"source": "It is an important activity for our society to create new value by combining materials. From daily cooking to industrial manufacturing, procedural texts describe the way to do it allowing readers to reproduce procedures for these activities. As pointed by some previous studies for natural language understanding, one important property of the procedural text is its context dependency, which is the merging operations of materials and can be represented by a graph or tree structure. This paper aims to investigate the impact of explicitly introducing such a structure on the vision and language task of procedural text generation from an image sequence. To this end, we propose (1) a new dataset, which extends a definition of a tree structure merging tree to a vision and language version and (2) a novel structure-aware procedural text generation model, which learns the context dependency efficiently. Experimental results show that the proposed method can boost the performance of traditional versatile methods.", "target": ["材料名と各工程の画像から説明文を生成する研究(料理ならレシピの生成)。どの材料がどの画像(工程)で組み合わせられるかは木構造で表現するが、アノテーションが困難なため説明されたレシピ文から再度木構造を生成しそれらが一致するかで学習を行う。"]} {"source": "We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is utilizing randomly transformed training samples to regularize a set of sub-networks, which are originated by sampling the width of the original network, in the training process. As such, the proposed method introduces self-guided disturbances to the raw gradients of the network and therefore is termed as Gradient Augmentation (GradAug). We demonstrate that GradAug can help the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% on ImageNet classification, which is a new state-of-the-art accuracy. By combining with CutMix, it further boosts the performance to 79.67%, which outperforms an ensemble of advanced training tricks. The generalization ability is evaluated on COCO object detection and instance segmentation where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low data regimes. Code is available at this https URL", "target": ["Data Augmentationを部分ネットワークの学習に用いる研究。通常の学習に、Augmentationされた画像による複数の部分ネットワークの学習を組み合わせる。この時、通常学習側のラベルをsoft labelとして用いる/複数部分のうち最小は常に使用という工夫を入れている。"]} {"source": "In recent years, machine learning has received increased interest both as an academic research field and as a solution for real-world business problems. However, the deployment of machine learning models in production systems can present a number of issues and concerns. This survey reviews published reports of deploying machine learning solutions in a variety of use cases, industries and applications and extracts practical considerations corresponding to stages of the machine learning deployment workflow. Our survey shows that practitioners face challenges at each stage of the deployment. 
The goal of this paper is to lay out a research agenda to explore approaches addressing these challenges.", "target": ["機械学習モデルを実運用する際の課題と事例をまとめたサーベイ。データの収集からデプロイ後の更新に至るまでの各ステップ、また倫理問題なども扱っている。"]} {"source": "Since the introduction of DQN, a vast majority of reinforcement learning research has focused on reinforcement learning with deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources, and those without. In this work we argue that, despite the community's emphasis on large-scale environments, the traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. To substantiate our claims, we empirically revisit the paper which introduced the Rainbow algorithm [Hessel et al., 2018] and present some new insights into the algorithms used by Rainbow.", "target": ["強化学習のRainbow( #439)でどのコンポーネントがどう効果を果たしているのか詳細に調べた研究。Prioritized replayとmulti stepが一番インパクトが大きく、不思議なことにDistributional RLは他のコンポーネントがない状態だとパフォーマンスが悪化するケースが見られた。"]} {"source": "Large-scale pretraining and task-specific fine-tuning is now the standard methodology for many tasks in computer vision and natural language processing. Recently, a multitude of methods have been proposed for pretraining vision and language BERTs to tackle challenges at the intersection of these two key areas of AI. These models can be categorised into either single-stream or dual-stream encoders. We study the differences between these two categories, and show how they can be unified under a single theoretical framework. We then conduct controlled experiments to discern the empirical differences between five V&L BERTs. Our experiments show that training data and hyperparameters are responsible for most of the differences between the reported results, but they also reveal that the embedding layer plays a crucial role in these massive models.", "target": ["Vision&Languageを扱う統合的なモデルの提案。テキスト/画像を別々のEncoderで処理するdual型をベースに、テキスト/画像を結合し入力するsingle型のパスも表現できるようdualのencoder間に伝播制御/重み結合のgateを設けている。既存モデルはデータ量やハイパラといった学習条件が同じなら大差ない結果"]} {"source": "We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. 
This demonstrates the feasibility of speech recognition with limited amounts of labeled data.", "target": ["音声で教師なし学習を行い、少量サンプルで転移学習を行う研究。音声の局所特徴はCNNで抽出し、離散化した特徴をTransformerで学習する。Maskをかけた出力の元離散特徴を推定させると共に、離散用コードブックをまんべんなく使うよう制約かける。最良の半教師手法を上回る。"]} {"source": "Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. The annotation consists in a linguistically motivated word segmentation; a morphological layer comprising lemmas, universal part-of-speech tags, and standardized morphological features; and a syntactic layer focusing on syntactic relations between predicates, arguments and modifiers. In this paper, we describe version 2 of the guidelines (UD v2), discuss the major changes from UD v1 to UD v2, and give an overview of the currently available treebanks for 90 languages.", "target": ["異なる言語間で共通化した依存構造のアノテーション仕様Universal Dependenciesのv2が始動。単語内スペース禁止の緩和や拡張依存構造(ベースとなるアノテーションに付加的な情報を付与する)などが提案されている。"]} {"source": "Given the importance of remote sensing, surprisingly little attention has been paid to it by the representation learning community. To address it and to establish baselines and a common evaluation protocol in this domain, we provide simplified access to 5 diverse remote sensing datasets in a standardized form. Specifically, we investigate in-domain representation learning to develop generic remote sensing representations and explore which characteristics are important for a dataset to be a good source for remote sensing representation learning. The established baselines achieve state-of-the-art performance on these datasets.", "target": ["Remote Sensing用のベンチマークを測定できるように5つのデータセットを整備、TFDSで公開した。またこれらデータセットの特性(解像度、クラス数、データの規模)が精度にどれだけの影響を与えるか調査した。Scratch、In-Domain(Remote Sensing用のデータセットで事前学習)、ImageNetそれぞれのFineTuningでの精度を比較し、ひとつのデータセットを除いてIn-DomainがImageNetを上回った。"]} {"source": "Recent advances in one-shot semi-supervised learning have lowered the barrier for deep learning of new applications. However, the state-of-the-art for semi-supervised learning is slow to train and the performance is sensitive to the choices of the labeled data and hyper-parameter values. In this paper, we present a one-shot semi-supervised learning method that trains up to an order of magnitude faster and is more robust than state-of-the-art methods. Specifically, we show that by combining semi-supervised learning with a one-stage, single network version of self-training, our FROST methodology trains faster and is more robust to choices for the labeled samples and changes in hyper-parameters. Our experiments demonstrate FROST's capability to perform well when the composition of the unlabeled data is unknown; that is when the unlabeled data contain unequal numbers of each class and can contain out-of-distribution examples that don't belong to any of the training classes. High performance, speed of training, and insensitivity to hyper-parameters make FROST the most practical method for one-shot semi-supervised training. Our code is available at this https URL.", "target": ["各クラス1サンプルのみの分類を半教師で行う手法(One-shot半教師)の精度を高めた研究。教師あり+教師なしのlossを組み合わせており後者はAugmentationの強弱で予測不変(Consistency)と表現一致(Contrastive)を使用。学習では確信度高いサンプルを追加するBootstrappingを使用"]} {"source": "We present a hierarchical VAE that, for the first time, generates samples quickly while outperforming the PixelCNN in log-likelihood on all natural image benchmarks. 
We begin by observing that, in theory, VAEs can actually represent autoregressive models, as well as faster, better models if they exist, when made sufficiently deep. Despite this, autoregressive models have historically outperformed VAEs in log-likelihood. We test if insufficient depth explains why by scaling a VAE to greater stochastic depth than previously explored and evaluating it on CIFAR-10, ImageNet, and FFHQ. In comparison to the PixelCNN, these very deep VAEs achieve higher likelihoods, use fewer parameters, generate samples thousands of times faster, and are more easily applied to high-resolution images. Qualitative studies suggest this is because the VAE learns efficient hierarchical visual representations. We release our source code and models at this https URL.", "target": ["VAEの確率分布を含む層を深くすると自己回帰モデルの一種とみなせ、ゆえに生成精度も向上するという研究。Decoder側をResidual Connectionを組み込んだ確率分布層で構成し、これを重ねる+U-NetのようなSkip-Connectionをはることで深いVAEを実現。複数データセットでPixelCNNより軽量かつ高尤度。"]} {"source": "Extrapolation to unseen sequence lengths is a challenge for neural generative models of language. In this work, we characterize the effect on length extrapolation of a modeling decision often overlooked: predicting the end of the generative process through the use of a special end-of-sequence (EOS) vocabulary item. We study an oracle setting - forcing models to generate to the correct sequence length at test time - to compare the length-extrapolative behavior of networks trained to predict EOS (+EOS) with networks not trained to (-EOS). We find that -EOS substantially outperforms +EOS, for example extrapolating well to lengths 10 times longer than those seen at training time in a bracket closing task, as well as achieving a 40% improvement over +EOS in the difficult SCAN dataset length generalization task. By comparing the hidden states and dynamics of -EOS and +EOS models, we observe that +EOS models fail to generalize because they (1) unnecessarily stratify their hidden states by their linear position in a sequence (structures we call length manifolds) or (2) get stuck in clusters (which we refer to as length attractors) once the EOS token is the highest-probability prediction.", "target": ["自然言語処理では文の終わりを示す記号としてEOSが用いられることが多いが、これがモデルの予測能力を抑制しているのではという研究。EOSを予測するよう学習させたモデルとそうでないモデルを比較し、後者の方が文終了の予測判定が良いことを確認。"]} {"source": "Sentence embedding is an important research topic in natural language processing (NLP) since it can transfer knowledge to downstream tasks. Meanwhile, a contextualized word representation, called BERT, achieves the state-of-the-art performance in quite a few NLP tasks. Yet, it is an open problem to generate a high quality sentence representation from BERT-based word models. It was shown in a previous study that different layers of BERT capture different linguistic properties. This allows us to fuse information across layers to find better sentence representation. In this work, we study the layer-wise pattern of the word representation of deep contextualized models. Then, we propose a new sentence embedding method by dissecting BERT-based word models through geometric analysis of the space spanned by the word representation. It is called the SBERT-WK method. No further training is required in SBERT-WK. We evaluate SBERT-WK on semantic textual similarity and downstream supervised tasks. Furthermore, ten sentence-level probing tasks are presented for detailed linguistic analysis. Experiments show that SBERT-WK achieves the state-of-the-art performance. 
Our codes are publicly available.", "target": ["BERT内の各レイヤを使用して文ベクトルを作成する手法。レイヤによってとらえている特徴が異なることから、各レイヤの特徴を使用すると共に単語の重要度(周辺単語との類似度)と新規性(類似度の分散)から単語に重みをかけて合算する。"]} {"source": "Blending representation learning approaches with simultaneous localization and mapping (SLAM) systems is an open question, because of their highly modular and complex nature. Functionally, SLAM is an operation that transforms raw sensor inputs into a distribution over the state(s) of the robot and the environment. If this transformation (SLAM) were expressible as a differentiable function, we could leverage task-based error signals to learn representations that optimize task performance. However, several components of a typical dense SLAM system are non-differentiable. In this work, we propose gradSLAM, a methodology for posing SLAM systems as differentiable computational graphs, which unifies gradient-based learning and SLAM. We propose differentiable trust-region optimizers, surface measurement and fusion schemes, and raycasting, without sacrificing accuracy. This amalgamation of dense SLAM with computational graphs enables us to backprop all the way from 3D maps to 2D pixels, opening up new possibilities in gradient-based learning for SLAM. TL;DR: We leverage the power of automatic differentiation frameworks to make dense SLAM differentiable.", "target": ["ピクセル(RGB)レベルの密なMap構築を行うDense SLAMについて、SLAMの構築プロセスをすべて微分可能にし勾配法による最適化を可能にした研究。フレーム間のマッチング・Map推定・グローバル最適化をそれぞれ微分可能な計算に置き換えている。"]} {"source": "We show that a critical vulnerability in adversarial imitation is the tendency of discriminator networks to learn spurious associations between visual features and expert labels. When the discriminator focuses on task-irrelevant features, it does not provide an informative reward signal, leading to poor task performance. We analyze this problem in detail and propose a solution that outperforms standard Generative Adversarial Imitation Learning (GAIL). Our proposed method, Task-Relevant Adversarial Imitation Learning (TRAIL), uses constrained discriminator optimization to learn informative rewards. In comprehensive experiments, we show that TRAIL can solve challenging robotic manipulation tasks from pixels by imitating human operators without access to any task rewards, and clearly outperforms comparable baseline imitation agents, including those trained via behaviour cloning and conventional GAIL.", "target": ["強化学習におけるGAN(GAIL)が画像のそれほどインパクトを出していないのは、Discriminatorがエキスパート/エージェントの軌跡を見破るのにタスクと関係ない情報に依存しているからとした研究。タスク情報を含まない観測(行動初期など)の識別精度に制約をかけている(=タスク非依存情報への依存防止)。"]} {"source": "We describe a system for large-scale audiovisual translation and dubbing, which translates videos from one language to another. The source language's speech content is transcribed to text, translated, and automatically synthesized into target language speech using the original speaker's voice. The visual content is translated by synthesizing lip movements for the speaker to match the translated audio, creating a seamless audiovisual experience in the target language. The audio and visual translation subsystems each contain a large-scale generic synthesis model trained on thousands of hours of data in the corresponding domain. These generic models are fine-tuned to a specific speaker before translation, either using an auxiliary corpus of data from the target speaker, or using the video to be translated itself as the input to the fine-tuning process. 
This report gives an architectural overview of the full system, as well as an in-depth discussion of the video dubbing component. The role of the audio and text components in relation to the full system is outlined, but their design is not discussed in detail. Translated and dubbed demo videos generated using our system can be viewed at this https URL", "target": ["動画の自動吹替を行うシステムの提案。単純に翻訳/音声合成だけでなく、LipSyncを行うことであたかもナチュラルに翻訳先言語を話しているかのように変換を行う。"]} {"source": "We study the zero-shot transfer capabilities of text matching models on a massive scale, by self-supervised training on 140 source domains from community question answering forums in English. We investigate the model performances on nine benchmarks of answer selection and question similarity tasks, and show that all 140 models transfer surprisingly well, where the large majority of models substantially outperforms common IR baselines. We also demonstrate that considering a broad selection of source domains is crucial for obtaining the best zero-shot transfer performances, which contrasts the standard procedure that merely relies on the largest and most similar domains. In addition, we extensively study how to best combine multiple source domains. We propose to incorporate self-supervised with supervised multi-task learning on all available source domains. Our best zero-shot transfer model considerably outperforms in-domain BERT and the previous state of the art on six benchmarks. Fine-tuning of our model with in-domain data results in additional large gains and achieves the new state of the art on all nine benchmarks.", "target": ["1ドメインのデータを増やすのではなく、(インバランスであっても)ドメインを増やして学習する方が効果的とした論文。Stack Exchangeの総計140ドメインのデータで自己教師学習(質問/本文ペアのpositive/negative判定)を行うことで、個別ドメイン特化のBERTを上回る。"]} {"source": "We consider the problem of learning to repair programs from diagnostic feedback (e.g., compiler error messages). Program repair is challenging for two reasons: First, it requires reasoning and tracking symbols across source code and diagnostic feedback. Second, labeled datasets available for program repair are relatively small. In this work, we propose novel solutions to these two challenges. First, we introduce a program-feedback graph, which connects symbols relevant to program repair in source code and diagnostic feedback, and then apply a graph neural network on top to model the reasoning process. Second, we present a self-supervised learning paradigm for program repair that leverages unlabeled programs available online to create a large amount of extra program repair examples, which we use to pre-train our models. We evaluate our proposed approach on two applications: correcting introductory programming assignments (DeepFix dataset) and correcting the outputs of program synthesis (SPoC dataset). Our final system, DrRepair, significantly outperforms prior work, achieving 68.2% full repair rate on DeepFix (+22.9% over the prior best), and 48.4% synthesis success rate on SPoC (+3.7% over the prior best).", "target": ["機械学習でプログラムのエラー修正を行う研究。プログラムの行とエラーメッセージそれぞれをBi-LSTMでEncode=>Graph Attention=>LSTM=>各行の表現にエラーの表現を結合し発生位置予測&修正性を行う。稼働可能なコードを編集し、発生したエラーで学習データを生成している。"]} {"source": "Transformers do not scale very well to long sequence lengths largely because of quadratic self-attention complexity. In the recent months, a wide spectrum of efficient, fast Transformers have been proposed to tackle this problem, more often than not claiming superior or comparable model quality to vanilla Transformer models. 
To this date, there is no well-established consensus on how to evaluate this class of models. Moreover, inconsistent benchmarking on a wide spectrum of tasks and datasets makes it difficult to assess relative model quality amongst many models. This paper proposes a systematic and unified benchmark, LRA, specifically focused on evaluating model quality under long-context scenarios. Our benchmark is a suite of tasks consisting of sequences ranging from 1K to 16K tokens, encompassing a wide range of data types and modalities such as text, natural, synthetic images, and mathematical expressions requiring similarity, structural, and visual-spatial reasoning. We systematically evaluate ten well-established long-range Transformer models (Reformers, Linformers, Linear Transformers, Sinkhorn Transformers, Performers, Synthesizers, Sparse Transformers, and Longformers) on our newly proposed benchmark suite. LRA paves the way towards better understanding this class of efficient Transformer models, facilitates more research in this direction, and presents new challenging tasks to tackle. Our benchmark code will be released at this https URL.", "target": ["Transformerを長い系列のタスクで評価するためのベンチマーク。括弧の対応といった合成タスク、テキスト分類/抽出といった自然言語系のタスクだけでなく画像(ピクセルの系列)分類といった多様なタスクを含んでいる。スピードと精度のトレードオフがあり、唯一ベストなモデルは存在しないという結果。"]} {"source": "Is it possible to use convolutional neural networks pre-trained without any natural images to assist natural image understanding? The paper proposes a novel concept, Formula-driven Supervised Learning. We automatically generate image patterns and their category labels by assigning fractals, which are based on a natural law existing in the background knowledge of the real world. Theoretically, the use of automatically generated images instead of natural images in the pre-training phase allows us to generate an infinite scale dataset of labeled images. Although the models pre-trained with the proposed Fractal DataBase (FractalDB), a database without natural images, does not necessarily outperform models pre-trained with human annotated datasets at all settings, we are able to partially surpass the accuracy of ImageNet/Places pre-trained models. The image representation with the proposed FractalDB captures a unique feature in the visualization of convolutional layers and attentions.", "target": ["実画像無しで事前学習する研究。自然界にある構造を疑似的に生成できる反復関数系(フラクタルの一種)を用いて生成した画像で学習を行う(フラクタルのカテゴリをラベルとして用いる)。いくつかのデータセットでImageNetで事前学習したモデルを上回る精度を達成。"]} {"source": "ML models often exhibit unexpectedly poor behavior when they are deployed in real-world domains. We identify underspecification as a key reason for these failures. An ML pipeline is underspecified when it can return many predictors with equivalently strong held-out performance in the training domain. Underspecification is common in modern ML pipelines, such as those based on deep learning. Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains. This ambiguity can lead to instability and poor model behavior in practice, and is a distinct failure mode from previously identified issues arising from structural mismatch between training and deployment domains. We show that this problem appears in a wide variety of practical ML pipelines, using examples from computer vision, medical imaging, natural language processing, clinical risk prediction based on electronic health records, and medical genomics. 
Our results show the need to explicitly account for underspecification in modeling pipelines that are intended for real-world deployment in any domain.", "target": ["機械学習モデルが本番で精度を落とすのは単にデータの分布が異なるのではなく、(テストセットで)同一精度を出す解が多数あること(Underspecification)が原因とした研究。画像・自然言語等幅広いタスクで存在。固有の制約を導入するのが肝で、精度を落とさず導入可能としている"]} {"source": "We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language. We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run similar experiments with two artificial parentheses languages: one which has a hierarchical recursive structure, and a control which has paired tokens but no recursion. Surprisingly, training a model on either of these artificial languages leads to the same substantial gains when testing on natural language. Further experiments on transfer between natural languages controlling for vocabulary overlap show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced by pre-training correspond to the cross-linguistic syntactic properties. Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which allow for natural language acquisition.", "target": ["言語以外データ(音楽やJavaのコードなど)で学習したモデルを言語にFine Tuneした研究。モデルはLSTMで、系列に内在する構造をとらえて汎化できるかを見ている。音楽やコードで学習してもRandomより良い転移性能(語彙の重複ないにもかかわらず)のため、きちんと構造をとらえていることが示唆されている"]} {"source": "One critical issue of zero anaphora resolution (ZAR) is the scarcity of labeled data. This study explores how effectively this problem can be alleviated by data augmentation. We adopt a state-of-the-art data augmentation method, called the contextual data augmentation (CDA), that generates labeled training instances using a pretrained language model. The CDA has been reported to work well for several other natural language processing tasks, including text classification and machine translation. This study addresses two underexplored issues on CDA, that is, how to reduce the computational cost of data augmentation and how to ensure the quality of the generated data. We also propose two methods to adapt CDA to ZAR: [MASK]-based augmentation and linguistically-controlled masking. Consequently, the experimental results on Japanese ZAR show that our methods contribute to both the accuracy gain and the computation cost reduction. Our closer analysis reveals that the proposed method can improve the quality of the augmented training data when compared to the conventional CDA.", "target": ["省略された主語/目的語を特定するゼロ照応解析の精度を言語モデルによるData Augmentationで向上させた研究。通常tokenを入れ替えデータ量を増やすが、適切な代替tokenを発見するための言語モデル推論が重い+適切な保障がないことから、単純に[MASK]する+入れ替える品詞を限定し速度と精度を維持"]} {"source": "Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments are either sufficiently complex or based on fast simulation, they are rarely both. 
Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source at this https URL.", "target": ["複雑かつ軽量な強化学習環境の提案。高度なスキルが要求される複雑な環境ほど実行が重い(例: ロボット操作など)一方、実行が軽い環境は単純なことが多いが複雑性と軽量性を両立させた環境となる。トルネコのような自動生成ダンジョンゲームで、到達階やスコアなど様々な報酬を設定できる。"]} {"source": "Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles.", "target": ["初期値とハイパーパラメーター(HP)双方でアンサンブルを取る手法。初期値 x HPの組み全てを学習するのは現実的でないため、初期値は各バッチに別個のベクトルを当てることで1 forwardで全パターン計算(batch ensemble)、HPはHP推定を組み込む(Self-tuning)手法を採用している"]} {"source": "Pushing forward the compute efficacy frontier in deep learning is critical for tasks that require frequent model re-training or workloads that entail training a large number of models. We introduce SliceOut -- a dropout-inspired scheme designed to take advantage of GPU memory layout to train deep learning models faster without impacting final test accuracy. By dropping contiguous sets of units at random, our method realises training speedups through (1) fast memory access and matrix multiplication of smaller tensors, and (2) memory savings by avoiding allocating memory to zero units in weight gradients and activations. At test time, turning off SliceOut performs an implicit ensembling across a linear number of architectures that preserves test accuracy. We demonstrate 10-40% speedups and memory reduction with Wide ResNets, EfficientNets, and Transformer models, with minimal to no loss in accuracy. 
This leads to faster processing of large computational workloads overall, and significantly reduces the resulting energy consumption and CO2 emissions.", "target": ["メモリ効率の良いDropoutの提案。通常はランダムに行/列の重みをゼロにする(その後効率を高めるためにメモリ上に再配置する)が、行列上隣接するエリア以外をカットする(SliceOut)ことで、再配置なしに演算を可能にする。EfficientNets等画像だけでなくTransformerでも効果有"]} {"source": "Large text corpora are increasingly important for a wide variety of Natural Language Processing (NLP) tasks, and automatic language identification (LangID) is a core technology needed to collect such datasets in a multilingual context. LangID is largely treated as solved in the literature, with models reported that achieve over 90% average F1 on as many as 1,366 languages. We train LangID models on up to 1,629 languages with comparable quality on held-out test sets, but find that human-judged LangID accuracy for web-crawl text corpora created using these models is only around 5% for many lower-resource languages, suggesting a need for more robust evaluation. Further analysis revealed a variety of error modes, arising from domain mismatch, class imbalance, language similarity, and insufficiently expressive models. We propose two classes of techniques to mitigate these errors: wordlist-based tunable-precision filters (for which we release curated lists in about 500 languages) and transformer-based semi-supervised LangID models, which increase median dataset precision from 5.5% to 71.2%. These techniques enable us to create an initial data set covering 100K or more relatively clean sentences in each of 500+ languages, paving the way towards a 1,000-language web text corpus.", "target": ["Webから多言語のコーパスを作るために、言語特定の精度を上げた研究。基本n-gramのシンプルな手法で可能だが、それだと小リソースの言語でかなりミスが出ることを発見(F1 5%程度)。ワードリストや半教師を用いてこの問題の改善を行なった。"]} {"source": "Reinforcement learning (RL) has achieved impressive performance in a variety of online settings in which an agent's ability to query the environment for transitions and rewards is effectively unlimited. However, in many practical applications, the situation is reversed: an agent may have access to large amounts of undirected offline experience data, while access to the online environment is severely limited. In this work, we focus on this offline setting. Our main insight is that, when presented with offline data composed of a variety of behaviors, an effective way to leverage this data is to extract a continuous space of recurring and temporally extended primitive behaviors before using these primitives for downstream task learning. Primitives extracted in this way serve two purposes: they delineate the behaviors that are supported by the data from those that are not, making them useful for avoiding distributional shift in offline RL; and they provide a degree of temporal abstraction, which reduces the effective horizon yielding better learning in theory, and improved offline RL in practice. In addition to benefiting offline policy optimization, we show that performing offline primitive learning in this way can also be leveraged for improving few-shot imitation learning as well as exploration and transfer in online RL on a variety of benchmark domains. Visualizations are available at this https URL", "target": ["オフライン強化学習で、軌跡の表現をAutoencoder的に学習、戦略で使用する研究。軌跡→Encoderを使用して系列の表現を作成→状態/表現から行動を出力する戦略で軌跡が再現できるよう学習する。タスクごとに表現を学習することでタスクのつなぎ合わせで行動できるよう設計している。"]} {"source": "The extraction of labels from radiology text reports enables large-scale training of medical imaging models. 
Existing approaches to report labeling typically rely either on sophisticated feature engineering based on medical domain knowledge or manual annotations by experts. In this work, we introduce a BERT-based approach to medical image report labeling that exploits both the scale of available rule-based systems and the quality of expert annotations. We demonstrate superior performance of a biomedically pretrained BERT model first trained on annotations of a rule-based labeler and then finetuned on a small set of expert annotations augmented with automated backtranslation. We find that our final model, CheXbert, is able to outperform the previous best rules-based labeler with statistical significance, setting a new SOTA for report labeling on one of the largest datasets of chest x-rays.", "target": ["放射線画像診断書から所見のラベル(骨折、骨緻密化?、異常なしなど)を抽出するモデル。BERTベースのモデルをルールベースで抽出したラベルで学習し、放射線医師が付与したラベルでFine Tuneすることで精度を高める(エキスパートがアノテーションしたデータを活かすためBack-Translationも加えている)。"]} {"source": "This paper describes the submission of LMU Munich to the WMT 2020 unsupervised shared task, in two language directions, German<->Upper Sorbian. Our core unsupervised neural machine translation (UNMT) system follows the strategy of Chronopoulou et al. (2020), using a monolingual pretrained language generation model (on German) and fine-tuning it on both German and Upper Sorbian, before initializing a UNMT model, which is trained with online backtranslation. Pseudo-parallel data obtained from an unsupervised statistical machine translation (USMT) system is used to fine-tune the UNMT model. We also apply BPE-Dropout to the low resource (Upper Sorbian) data to obtain a more robust system. We additionally experiment with residual adapters and find them useful in the Upper Sorbian->German direction. We explore sampling during backtranslation and curriculum learning to use SMT translations in a more principled way. Finally, we ensemble our best-performing systems and reach a BLEU score of 32.4 on German->Upper Sorbian and 35.2 on Upper Sorbian->German.", "target": ["WMT2020の教師なし機械翻訳タスク(リソースがある言語/ない言語間の翻訳)の手法。リソース有側で言語モデル学習=>学習中モデルを使用しMonolingualをパラレルにするonline BTで学習=>統計的モデルで作成したパラレルで学習=>ニューラルモデルで作成したパラレルで学習=>低リソースにBPE Dropoutを行う"]} {"source": "The Transformer model has achieved state-of-the-art performance in many sequence modeling tasks. However, how to leverage model capacity with large or variable depths is still an open challenge. We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of layer selection. As an extension of this framework, we propose a novel method to train one shared Transformer network for multilingual machine translation with different layer selection posteriors for each language pair. The proposed method alleviates the vanishing gradient issue and enables stable training of deep Transformers (e.g. 100 layers). We evaluate on WMT English-German machine translation and masked language modeling tasks, where our method outperforms existing approaches for training deeper Transformers. 
Experiments on multilingual machine translation demonstrate that this approach can effectively leverage increased model capacity and bring universal improvement for both many-to-one and one-to-many translation with diverse language pairs.", "target": ["Transformerのレイヤーを深くして多言語の翻訳モデルを構築した研究。モデルのキャパシティを上げるため単純に大きくすると勾配消失の問題が発生するため、必要なレイヤを選択する仕組みを導入。これにより言語によって必要なレイヤを変えられるようにしている。"]} {"source": "We employ a combination of recent developments in semi-supervised learning for automatic speech recognition to obtain state-of-the-art results on LibriSpeech utilizing the unlabeled audio of the Libri-Light dataset. More precisely, we carry out noisy student training with SpecAugment using giant Conformer models pre-trained using wav2vec 2.0 pre-training. By doing so, we are able to achieve word-error-rates (WERs) 1.4%/2.6% on the LibriSpeech test/test-other sets against the current state-of-the-art WERs 1.7%/3.3%.", "target": ["半教師学習を音声認識で行った研究。モデルはTransformerベースで(Conformer)、Maskした表現と元表現が同じと識別できるように事前学習、言語モデルを使用した音声認識モデルでラベルなしデータにラベルを振り半教師学習、の2つを実行。LibriSpeechデータセットでSOTA更新。"]} {"source": "Self-supervised learning has emerged as a strategy to reduce the reliance on costly supervised signal by pretraining representations only using unlabeled data. These methods combine heuristic proxy classification tasks with data augmentations and have achieved significant success, but our theoretical understanding of this success remains limited. In this paper we analyze self-supervised representation learning using a causal framework. We show how data augmentations can be more effectively utilized through explicit invariance constraints on the proxy classifiers employed during pretraining. Based on this, we propose a novel self-supervised objective, Representation Learning via Invariant Causal Mechanisms (ReLIC), that enforces invariant prediction of proxy targets across augmentations through an invariance regularizer which yields improved generalization guarantees. Further, using causality we generalize contrastive learning, a particular kind of self-supervised method, and provide an alternative theoretical explanation for the success of these methods. Empirically, ReLIC significantly outperforms competing methods in terms of robustness and out-of-distribution generalization on ImageNet, while also significantly outperforming these methods on Atari achieving above human-level performance on 51 out of 57 games.", "target": ["汎用性の高い表現を自己教師で得るために、因果グラフを利用した研究。画像がスタイルとコンテンツから構成されているとし、スタイルをいくら変更しても(=介入しても)コンテンツは変化しない、という仮定からスタイルに対し変更を加えるAugmentation間で表現距離が一定値以下になるよう学習する"]} {"source": "We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. 
Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers.", "target": ["Self-Attentionの二次行列を低ランク行列同士の演算で近似する手法。Query用・Key用のランダムな(低ランク)行列を用意し、非線形の関数を通じ特徴を得てその掛け合わせでAttentionを推定する。これによりメモリ/演算効率を向上させ、既存のTransformerから重みを引き継ぎ高速に学習できることも確認"]} {"source": "Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and self-supervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.", "target": ["BERTに知識グラフ(構造知識)を組み込む研究。Word Pieceの表現を線形変換=>メンションスパン分集めてpooling=>スパン間でSelf-Attention=>候補エンティティ表現の重み付き平均をとりスパン表現に加算=>Word PieceのSelf-Attention時にこれを参照、という流れ。"]} {"source": "Identifying prescription medications is a frequent task for patients and medical professionals; however, this is an error-prone task as many pills have similar appearances (e.g. white round pills), which increases the risk of medication errors. In this paper, we introduce ePillID, the largest public benchmark on pill image recognition, composed of 13k images representing 9804 appearance classes (two sides for 4902 pill types). For most of the appearance classes, there exists only one reference image, making it a challenging low-shot recognition setting. We present our experimental setup and evaluation results of various baseline models on the benchmark. The best baseline using a multi-head metric-learning approach with bilinear features performed remarkably well; however, our error analysis suggests that they still fail to distinguish particularly confusing classes. The code and data are available at this https URL.", "target": ["薬の画像認識のためのベンチマークデータセットを整備。4902種の錠剤が対象。各クラスに与えられる画像数が非常に少ないため、Low-Shot用のアプローチが必要。Multi-Head Metric Loss を用いたモデルで精度92%。"]} {"source": "Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control. While previous studies attempt to provide different types of guidance to control the output and increase faithfulness, it is not clear how these strategies compare and contrast to each other. 
In this paper, we propose a general and extensible guided summarization framework (GSum) that can effectively take different kinds of external guidance as input, and we perform experiments across several different varieties. Experiments demonstrate that this model is effective, achieving state-of-the-art performance according to ROUGE on 4 popular summarization datasets when using highlighted sentences as guidance. In addition, we show that our guided model can generate more faithful summaries and demonstrate how different types of guidance generate qualitatively different summaries, lending a degree of controllability to the learned models.", "target": ["ガイド文を使って出力をコントロールする要約モデルの提案。ガイド文はキーワードや抽出文など様々なものが利用でき、それぞれ別個のEncoderでEncodeしたのちにDecoderで出力する。Encoderは学習済み言語モデルを組み合わせて作る。CNN/Daily Mailで大幅に各種ROUGEを改善。"]} {"source": "We introduce Mischief, a simple and lightweight method to produce a class of human-readable, realistic adversarial examples for language models. We perform exhaustive experimentations of our algorithm on four transformer-based architectures, across a variety of downstream tasks, as well as under varying concentrations of said examples. Our findings show that the presence of Mischief-generated adversarial samples in the test set significantly degrades (by up to 20%) the performance of these models with respect to their reported baselines. Nonetheless, we also demonstrate that, by including similar examples in the training set, it is possible to restore the baseline scores on the adversarial test set. Moreover, for certain tasks, the models trained with Mischief set show a modest increase on performance with respect to their original, non-adversarial baseline.", "target": ["Transformerベースの言語モデルに対する(人に検知されない)Adversarialの提案。人間の場合最初と最後の文字が同じだと読むのに差し支えないとの結果があるが、これに基づき確率的に単語の中間部分文字を変動させる。結果転移学習時精度を落とすことに成功+Augmentationとして使えることを確認"]} {"source": "We introduce Mischief, a simple and lightweight method to produce a class of human-readable, realistic adversarial examples for language models. We perform exhaustive experimentations of our algorithm on four transformer-based architectures, across a variety of downstream tasks, as well as under varying concentrations of said examples. Our findings show that the presence of Mischief-generated adversarial samples in the test set significantly degrades (by up to  20% ) the performance of these models with respect to their reported baselines. Nonetheless, we also demonstrate that, by including similar examples in the training set, it is possible to restore the baseline scores on the adversarial test set. Moreover, for certain tasks, the models trained with Mischief set show a modest increase on performance with respect to their original, non-adversarial baseline.", "target": ["自然言語でデータを格納し自然言語で問い合わせを行うNeuralDBの提案。factを含む文をデータ、factを問う文をクエリとして構成する(複数factの連結が必要な質問=joinも可)。通常知識グラフの構築からSPARQLで問い合わせるが、Transformer(T5)を使用し事前関係不要にしている。"]} {"source": "Neural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. 
Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method's relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macro-structure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls. The code used is available at this https URL.", "target": ["DNNの構造探索手法(Neural Architecture Search=NAS)のベンチマークを作成した研究。手法によって探索空間がバラバラなので、探索空間における平均的な構造をどれだけ改善できるかで比較。実際平均を超えることが難しく、使用しているAugmentationによる精度差が大きい結果"]} {"source": "Most popular optimizers for deep learning can be broadly categorized as adaptive methods (e.g. Adam) and accelerated schemes (e.g. stochastic gradient descent (SGD) with momentum). For many models such as convolutional neural networks (CNNs), adaptive methods typically converge faster but generalize worse compared to SGD; for complex settings such as generative adversarial networks (GANs), adaptive methods are typically the default because of their stability.We propose AdaBelief to simultaneously achieve three goals: fast convergence as in adaptive methods, good generalization as in SGD, and training stability. The intuition for AdaBelief is to adapt the stepsize according to the \"belief\" in the current gradient direction. Viewing the exponential moving average (EMA) of the noisy gradient as the prediction of the gradient at the next time step, if the observed gradient greatly deviates from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step. We validate AdaBelief in extensive experiments, showing that it outperforms other methods with fast convergence and high accuracy on image classification and language modeling. Specifically, on ImageNet, AdaBelief achieves comparable accuracy to SGD. Furthermore, in the training of a GAN on Cifar10, AdaBelief demonstrates high stability and improves the quality of generated samples compared to a well-tuned Adam optimizer. Code is available at this https URL", "target": ["Adamを改良したAdaBeliefの提案。勾配の指数平滑移動平均(EMA)を勾配の予測と考え、予測とはずれる場合は小さく同じ場合は大きくステップを取る。Adamに比べ新しいハイパーパラメーターの追加はない一方、Adamと同程度に早くSGDと同程度に汎化性能がありGANの学習も安定する"]} {"source": "Local feature frameworks are difficult to learn in an end-to-end fashion, due to the discreteness inherent to the selection and matching of sparse keypoints. 
We introduce DISK (DIScrete Keypoints), a novel method that overcomes these obstacles by leveraging principles from Reinforcement Learning (RL), optimizing end-to-end for a high number of correct feature matches. Our simple yet expressive probabilistic model lets us keep the training and inference regimes close, while maintaining good enough convergence properties to reliably train from scratch. Our features can be extracted very densely while remaining discriminative, challenging commonly held assumptions about what constitutes a good keypoint, as showcased in Fig. 1, and deliver state-of-the-art results on three public benchmarks.", "target": ["3次元構造の復元(SfM)に必要な局所特徴点一致を強化学習で行った研究。局所特徴の抽出後に特徴間の一致を推定し、実際の特徴点一致と合致した場合に報酬を与える。特徴点同士の一致は組み合わせが膨大になるため、学習に必要な勾配はモンテカルロで推定する。"]} {"source": "Large datasets have become commonplace in NLP research. However, the increased emphasis on data quantity has made it challenging to assess the quality of data. We introduce Data Maps---a model-based tool to characterize and diagnose datasets. We leverage a largely ignored source of information: the behavior of the model on individual instances during training (training dynamics) for building data maps. This yields two intuitive measures for each example---the model's confidence in the true class, and the variability of this confidence across epochs---obtained in a single run of training. Experiments across four datasets show that these model-dependent measures reveal three distinct regions in the data map, each with pronounced characteristics. First, our data maps show the presence of \"ambiguous\" regions with respect to the model, which contribute the most towards out-of-distribution generalization. Second, the most populous regions in the data are \"easy to learn\" for the model, and play an important role in model optimization. Finally, data maps uncover a region with instances that the model finds \"hard to learn\"; these often correspond to labeling errors. Our results indicate that a shift in focus from quantity to quality of data could lead to robust models and improved out-of-distribution generalization.", "target": ["データセットの品質を解析する手法の提案。学習中のモデルの挙動(正解クラスに対する確信度)の変遷を計測しデータサンプルを分類する。この結果、データを学習が容易・学習が困難(ラベルミスなど)、不明瞭、の3領域に区分できた。不明瞭はモデルの汎化性能向上に貢献する。"]} {"source": "Deep supervised learning has achieved great success in the last decade. However, its deficiencies of dependence on manual labels and vulnerability to attacks have driven people to explore a better solution. As an alternative, self-supervised learning attracts many researchers for its soaring performance on representation learning in the last several years. Self-supervised representation learning leverages input data itself as supervision and benefits almost all types of downstream tasks. In this survey, we take a look into new self-supervised learning methods for representation in computer vision, natural language processing, and graph learning. We comprehensively review the existing empirical methods and summarize them into three main categories according to their objectives: generative, contrastive, and generative-contrastive (adversarial). We further investigate related theoretical analysis work to provide deeper thoughts on how self-supervised learning works. Finally, we briefly discuss open problems and future directions for self-supervised learning. 
An outline slide for the survey is provided.", "target": ["自己教師学習のサーベイ資料。既存手法を生成系・対照系・敵対系の3つにカテゴライズしている。対象としているタスクもメジャーな画像だけでなく自然言語処理・グラフも扱っており幅広なサーベイとなっている。"]} {"source": "Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one. The growing literature now lists hundreds of optimization methods. In the absence of clear theoretical guidance and conclusive empirical evidence, the decision is often made based on anecdotes. In this work, we aim to replace these anecdotes, if not with a conclusive ranking, then at least with evidence-backed heuristics. To do so, we perform an extensive, standardized benchmark of fifteen particularly popular deep learning optimizers while giving a concise overview of the wide range of possible choices. Analyzing more than 50,000 individual runs, we contribute the following three points: (i) Optimizer performance varies greatly across tasks. (ii) We observe that evaluating multiple optimizers with default parameters works approximately as well as tuning the hyperparameters of a single, fixed optimizer. (iii) While we cannot discern an optimization method clearly dominating across all tested tasks, we identify a significantly reduced subset of specific optimizers and parameter choices that generally lead to competitive results in our experiments: Adam remains a strong contender, with newer methods failing to significantly and consistently outperform it. Our open-sourced results are available as challenging and well-tuned baselines for more meaningful evaluations of novel optimization methods without requiring any further computational efforts.", "target": ["どのタスクにどのOptimizerが向いているのかを調べた研究。汎用的に使えるOptimizerはやはりないが、多くのタスクで良好な結果を出す手法のグループは存在する(特にADAM/ADABOUND)。異なるOptimizerを試すのはハイパーパラメーターのチューニングと同程度の効果があるとの結果。"]} {"source": "Learning visual representations of medical images is core to medical image understanding but its progress has been held back by the small size of hand-labeled datasets. Existing work commonly relies on transferring weights from ImageNet pretraining, which is suboptimal due to drastically different image characteristics, or rule-based label extraction from the textual report data paired with medical images, which is inaccurate and hard to generalize. We propose an alternative unsupervised strategy to learn medical visual representations directly from the naturally occurring pairing of images and textual data. Our method of pretraining medical image encoders with the paired text data via a bidirectional contrastive objective between the two modalities is domain-agnostic, and requires no additional expert input. We test our method by transferring our pretrained weights to 4 medical image classification tasks and 2 zero-shot retrieval tasks, and show that our method leads to image representations that considerably outperform strong baselines in most settings. Notably, in all 4 classification tasks, our method requires only 10% as much labeled training data as an ImageNet initialized counterpart to achieve better or comparable performance, demonstrating superior data efficiency.", "target": ["医療用画像の表現学習を行った研究。医療画像はラベルが少なく、転移学習するにもImageNetなどとはドメインが違いすぎることが課題だった。そこで画像の含まれる医療テキストから画像表現/テキスト表現をそれぞれ抽出し対照学習する手法を提案。"]} {"source": "We introduce k-nearest-neighbor machine translation (kNN-MT), which predicts tokens with a nearest neighbor classifier over a large datastore of cached examples, using representations from a neural translation model for similarity search. 
This approach requires no additional training and scales to give the decoder direct access to billions of examples at test time, resulting in a highly expressive model that consistently improves performance across many settings. Simply adding nearest neighbor search improves a state-of-the-art German-English translation model by 1.5 BLEU. kNN-MT allows a single model to be adapted to diverse domains by using a domain-specific datastore, improving results by an average of 9.2 BLEU over zero-shot transfer, and achieving new state-of-the-art results -- without training on these domains. A massively multilingual model can also be specialized for particular language pairs, with improvements of 3 BLEU for translating from English into German and Chinese. Qualitatively, kNN-MT is easily interpretable; it combines source and target context to retrieve highly relevant examples.", "target": ["kNNを利用してDecoderを強化した研究。翻訳元文と生成済みトークンから表現を作成し、学習データ中の同様に作成した表現との距離を測定、学習データにおける「次の単語」を距離に応じて重み付けし確率に変換する。元Decoderの予測と組み合わせることでBLEUの大幅な改善に成功。"]} {"source": "Despite the subjective nature of many NLP tasks, most NLU evaluations have focused on using the majority label with presumably high agreement as the ground truth. Less attention has been paid to the distribution of human opinions. We collect ChaosNLI, a dataset with a total of 464,500 annotations to study Collective HumAn OpinionS in oft-used NLI evaluation sets. This dataset is created by collecting 100 annotations per example for 3,113 examples in SNLI and MNLI and 1,532 examples in Abductive-NLI. Analysis reveals that: (1) high human disagreement exists in a noticeable amount of examples in these datasets; (2) the state-of-the-art models lack the ability to recover the distribution over human labels; (3) models achieve near-perfect accuracy on the subset of data with a high level of human agreement, whereas they can barely beat a random guess on the data with low levels of human agreement, which compose most of the common errors made by state-of-the-art models on the evaluation sets. This questions the validity of improving model performance on old metrics for the low-agreement part of evaluation datasets. Hence, we argue for a detailed examination of human agreement in future data collection efforts, and evaluating model outputs against the distribution over collective human opinions. The ChaosNLI dataset and experimental scripts are available at this https URL", "target": ["アノテーションではアノテーターの合意が一番得られたラベルがつけられることが多いが、意見の分散をモデルは再現できているのか?を調査した研究。NLIのタスク(SNLI/MNLI)で、1サンプル100アノテーションを収集し実験。結果として一致率が高いサンプルは上手く予測できるが意見分散は再現できなかった"]} {"source": "Pretrained language models, especially masked language models (MLMs) have seen success across many NLP tasks. However, there is ample evidence that they use the cultural biases that are undoubtedly present in the corpora they are trained on, implicitly creating harm with biased representations. To measure some forms of social bias in language models against protected demographic groups in the US, we introduce the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs). CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs a model is presented with two sentences: one that is more stereotyping and another that is less stereotyping. The data focuses on stereotypes about historically disadvantaged groups and contrasts them with advantaged groups. 
We find that all three of the widely-used MLMs we evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs. As work on building less biased models advances, this dataset can be used as a benchmark to evaluate progress.", "target": ["言語モデルのバイアスを評価するデータセットの提案。バイアスが含まれる文と最小限(単語)の変更でバイアスをフラットにした文(京都の人は陰湿だ&東京の人は陰湿だetc)をペアで作成している(バイアスの数は9種でクラウドソーシングで作成)。ALBERTはなぜかBERTよりバイアスのスコアがかなり高い。"]} {"source": "While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.", "target": ["画像をパッチに分割し、パッチの系列としてTransformerに入力して画像分類を行った研究。CNNのSOTAに匹敵する精度を記録する一方、計算コストが少なく済む。2Dを意識したposition embeddingはあまり効果がなく(通常1Dの方が良い)、事前学習済み特徴(ResNet)の入力は小サイズモデルの場合のみ効果有。"]} {"source": "Attribution methods assess the contribution of inputs to the model prediction. One way to do so is erasure: a subset of inputs is considered irrelevant if it can be removed without affecting the prediction. Though conceptually simple, erasure's objective is intractable and approximate search remains expensive with modern deep NLP models. Erasure is also susceptible to the hindsight bias: the fact that an input can be dropped does not mean that the model `knows' it can be dropped. The resulting pruning is over-aggressive and does not reflect how the model arrives at the prediction. To deal with these challenges, we introduce Differentiable Masking. DiffMask learns to mask-out subsets of the input while maintaining differentiability. The decision to include or disregard an input token is made with a simple model based on intermediate hidden layers of the analyzed model. First, this makes the approach efficient because we predict rather than search. Second, as with probing classifiers, this reveals what the network `knows' at the corresponding layers. This lets us not only plot attribution heatmaps but also analyze how decisions are formed across network layers. We use DiffMask to study BERT models on sentiment classification and question answering.", "target": ["自然言語処理のモデルが何を根拠に予測しているかを調べるために、レイヤ内の隠れ層をマスクする手法の提案。マスクの予測器を学習する際、予測したマスクの適用結果を入力に戻すことで後知恵のバイアス(事前に知ることができないForward後の結果によるマスク)を回避している"]} {"source": "We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER. Contrary to previous work, our method does not require access to any corpus-specific information, such as inter-document hyperlinks or human-annotated entity markers, and can be applied to any unstructured text corpus. 
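The ViT abstract above treats an image as a sequence of flattened patches plus position embeddings fed to a standard Transformer. A minimal numpy sketch of that preprocessing step is below; the patch size, embedding dimension, and the random stand-ins for the learned projection and position embeddings are illustrative.

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into a sequence of flattened patches."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    rows, cols = h // patch, w // patch
    patches = (image.reshape(rows, patch, cols, patch, c)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(rows * cols, patch * patch * c))
    return patches                              # (num_patches, patch*patch*C)

rng = np.random.default_rng(0)
img = rng.normal(size=(224, 224, 3))
tokens = patchify(img)                          # 196 patch tokens for a 224x224 image
proj = rng.normal(size=(tokens.shape[1], 768))  # stand-in for the linear patch projection
pos = rng.normal(size=(tokens.shape[0], 768))   # stand-in for 1D position embeddings
embeddings = tokens @ proj + pos                # sequence fed to the Transformer encoder
print(tokens.shape, embeddings.shape)
```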
Our system also yields a much better efficiency-accuracy trade-off, matching the best published accuracy on HotpotQA while being 10 times faster at inference time.", "target": ["マルチホップのQA(HotpotQAなど)で、再帰的なクエリにより高い精度を記録した研究。Query/PassageそれぞれにEncodeして内積により近さを取り抽出するのが基本だが、QueryのEncode対象に抽出結果をどんどん加えていく。"]} {"source": "Efforts to improve the learning abilities of neural networks have focused mostly on the role of optimization methods rather than on weight initializations. Recent findings, however, suggest that neural networks rely on lucky random initial weights of subnetworks called \"lottery tickets\" that converge quickly to a solution. To investigate how weight initializations affect performance, we examine small convolutional networks that are trained to predict n steps of the two-dimensional cellular automaton Conway's Game of Life, the update rules of which can be implemented efficiently in a 2n+1 layer convolutional network. We find that networks of this architecture trained on this task rarely converge. Rather, networks require substantially more parameters to consistently converge. In addition, near-minimal architectures are sensitive to tiny changes in parameters: changing the sign of a single weight can cause the network to fail to learn. Finally, we observe a critical value d_0 such that training minimal networks with examples in which cells are alive with probability d_0 dramatically increases the chance of convergence to a solution. We conclude that training convolutional neural networks to learn the input/output function represented by n steps of Game of Life exhibits many characteristics predicted by the lottery ticket hypothesis, namely, that the size of the networks required to learn this function are often significantly larger than the minimal network required to implement the function.", "target": ["CNNでライフゲームの予測を行うことで、宝くじ仮説を検証した研究。ライフゲームの状態は隣接セルの状態から決まるが、CNNでこの法則を学習する場合過剰なパラメーターが必要となる。また、ノイズやデータセットの分布に敏感であることを発見。これらは宝くじ仮説の予測と適合する。"]} {"source": "Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.", "target": ["オープンドメインのQAで必要な回答が含まれていそうな文書(Passage)の抽出について、既存のTF-IDFやBM25よりベクトル特徴の内積を使用した抽出の方が1~2割精度が改善するという研究結果。Q、Aは別個のBERTでEncodeされ、実行時は FAISSで近傍ベクトルを抽出する。"]} {"source": "Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components. Updating only these task-specific modules then allows the model to be adapted to low-data tasks for as many steps as necessary without risking overfitting. Unfortunately, existing meta-learning methods either do not scale to long adaptation or else rely on handcrafted task-specific architectures. Here, we propose a meta-learning approach that obviates the need for this often sub-optimal hand-selection. 
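The dense passage retrieval abstract above uses separate question and passage encoders and ranks passages by inner product (with FAISS for approximate search at scale, per the summary). A small numpy sketch of the scoring and top-k step is below; the embeddings are random stand-ins for the dual-encoder outputs, and the brute-force argsort stands in for FAISS.

```python
import numpy as np

def retrieve_top_k(question_emb, passage_embs, k=3):
    """Rank passages by inner product with the question embedding."""
    scores = passage_embs @ question_emb        # (num_passages,)
    top = np.argsort(-scores)[:k]
    return top, scores[top]

rng = np.random.default_rng(0)
passages = rng.normal(size=(1000, 128))         # stand-in for passage-encoder outputs
question = rng.normal(size=128)                 # stand-in for question-encoder output
idx, scores = retrieve_top_k(question, passages)
print(idx, np.round(scores, 2))
```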
In particular, we develop general techniques based on Bayesian shrinkage to automatically discover and learn both task-specific and general reusable modules. Empirically, we demonstrate that our method discovers a small set of meaningful task-specific modules and outperforms existing meta-learning approaches in domains like few-shot text-to-speech that have little task data and long adaptation horizons. We also show that existing meta-learning methods including MAML, iMAML, and Reptile emerge as special cases of our method.", "target": ["メタラーニングでネットワークのどこが汎用的/タスク依存かを自動で調整する手法の提案。ネットワークをm個のモジュールに分割し、それぞれ平均Φ_m/分散σ_mの正規分布からパラメーターを生成するが、この時σ_mが0に近いほど平均に集中=汎用となる。階層ベイズの枠組みでこのパラメーターの学習を行う。"]} {"source": "Gradient descent can be surprisingly good at optimizing deep neural networks without overfitting and without explicit regularization. We find that the discrete steps of gradient descent implicitly regularize models by penalizing gradient descent trajectories that have large loss gradients. We call this Implicit Gradient Regularization (IGR) and we use backward error analysis to calculate the size of this regularization. We confirm empirically that implicit gradient regularization biases gradient descent toward flat minima, where test errors are small and solutions are robust to noisy parameter perturbations. Furthermore, we demonstrate that the implicit gradient regularization term can be used as an explicit regularizer, allowing us to control this gradient regularization directly. More broadly, our work indicates that backward error analysis is a useful theoretical approach to the perennial question of how learning rate, model size, and parameter regularization interact to determine the properties of overparameterized models optimized with gradient descent.", "target": ["最急降下法は手法自体に暗黙的な正則化が含まれるためグローバルな最適化に近づけるとした研究。最急降下法は、実際はなだらかな勾配をステップを刻みジグザクに追跡をしていく。このジグザクは単に外れているのでなく、正則化項が加わったloss=真の最適解への経路とし最適解へ至る事象を説明している"]} {"source": "The notion of task similarity is at the core of various machine learning paradigms, such as domain adaptation and meta-learning. Current methods to quantify it are often heuristic, make strong assumptions on the label sets across the tasks, and many are architecture-dependent, relying on task-specific optimal parameters (e. g., require training a model on each dataset). In this work we propose an alternative notion of distance between datasets that (i) is model-agnostic, (ii) does not involve training, (iii) can compare datasets even if their label sets are completely disjoint and (iv) has solid theoretical footing. This distance relies on optimal transport, which provides it with rich geometry awareness, interpretable correspondences and well-understood properties. Our results show that this novel distance provides meaningful comparison of datasets, and correlates well with transfer learning hardness across various experimental settings and datasets.", "target": ["データセット間の距離を計測することで、事前学習済みモデルの転移性能を予測する研究。データとラベル2つの距離を合計するのが基本で、データは普通に距離、ラベルはラベルで条件付けた確率分布(P(X|Y=y))の輸送距離(p-Wasserstein距離)で計測を行う。"]} {"source": "We consider the problem of evaluating representations of data for use in solving a downstream task. We propose to measure the quality of a representation by the complexity of learning a predictor on top of the representation that achieves low loss on a task of interest, and introduce two methods, surplus description length (SDL) and \\varepsilon sample complexity (\\varepsilonSC). 
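The implicit gradient regularization abstract above argues that finite-step gradient descent implicitly penalizes trajectories with large loss gradients. As a worked statement of that idea, following the paper's backward-error analysis (the exact coefficient is taken from the paper and should be checked against it), the discrete updates approximately follow gradient flow on a modified loss:

```latex
\tilde{L}(\theta) \;=\; L(\theta) \;+\; \frac{h}{4}\,\bigl\lVert \nabla_{\theta} L(\theta) \bigr\rVert^{2},
```

where h is the learning rate; the second term is the implicit regularizer, which can also be added explicitly to control the strength of the effect directly.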
In contrast to prior methods, which measure the amount of information about the optimal predictor that is present in a specific amount of data, our methods measure the amount of information needed from the data to recover an approximation of the optimal predictor up to a specified tolerance. We present a framework to compare these methods based on plotting the validation loss versus evaluation dataset size (the \"loss-data\" curve). Existing measures, such as mutual information and minimum description length probes, correspond to slices and integrals along the data axis of the loss-data curve, while ours correspond to slices and integrals along the loss axis. We provide experiments on real data to compare the behavior of each of these methods over datasets of varying size along with a high performance open source library for representation evaluation at this https URL.", "target": ["潜在表現の性能を測る指標の提案。既存手法は一定データ量でどれだけlossが下がるか計測するが、未知のタスクではどの程度データを用意するか自明でない。そこで考え方を逆にし、一定のlossに至るのに必要なデータ量を計測する手法を提案(良質な表現なら少量データから元データ全体の情報を復元できる)"]} {"source": "We propose a simple and effective method for machine translation evaluation which does not require reference translations. Our approach is based on (1) grounding the entity mentions found in each source sentence and candidate translation against a large-scale multilingual knowledge base, and (2) measuring the recall of the grounded entities found in the candidate vs. those found in the source. Our approach achieves the highest correlation with human judgements on 9 out of the 18 language pairs from the WMT19 benchmark for evaluation without references, which is the largest number of wins for a single evaluation method on this task. On 4 language pairs, we also achieve higher correlation with human judgements than BLEU. To foster further research, we release a dataset containing 1.8 million grounded entity mentions across 18 language pairs from the WMT19 metrics track data.", "target": ["翻訳文の評価を人手翻訳なしに行う手法の提案。元文に含まれるエンティティが、翻訳文にも含まれているかで評価する(多言語のEntity LinkingはまだOSSなどで行うのは困難なので、Google Knowledge Graph Search APIを使用している)。評価スコアが人間評価と相関が高いことを確認。"]} {"source": "Much as replacing hand-designed features with learned functions has revolutionized how we solve perceptual tasks, we believe learned algorithms will transform how we train models. In this work we focus on general-purpose learned optimizers capable of training a wide variety of problems with no user-specified hyperparameters. We introduce a new, neural network parameterized, hierarchical optimizer with access to additional features such as validation loss to enable automatic regularization. Most learned optimizers have been trained on only a single task, or a small number of tasks. We train our optimizers on thousands of tasks, making use of orders of magnitude more compute, resulting in optimizers that generalize better to unseen tasks. The learned optimizers not only perform well, but learn behaviors that are distinct from existing first order optimizers. For instance, they generate update steps that have implicit regularization and adapt as the problem hyperparameters (e.g. batch size) or architecture (e.g. neural network width) change. 
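The representation-evaluation abstract above measures how much data is needed to reach a target loss (ε sample complexity), i.e., it slices the loss-data curve along the loss axis rather than the data axis. A tiny sketch of that computation is below; the example curve values are made up.

```python
def epsilon_sample_complexity(loss_data_curve, epsilon):
    """Smallest dataset size whose validation loss is at most epsilon.

    loss_data_curve: iterable of (dataset_size, validation_loss) pairs, any order.
    Returns None if the tolerance is never reached on the measured curve.
    """
    reaching = [n for n, loss in loss_data_curve if loss <= epsilon]
    return min(reaching) if reaching else None

curve = [(100, 1.90), (1_000, 0.95), (10_000, 0.42), (100_000, 0.31)]
print(epsilon_sample_complexity(curve, epsilon=0.5))   # 10000
print(epsilon_sample_complexity(curve, epsilon=0.1))   # None: tolerance never reached
```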
Finally, these learned optimizers show evidence of being useful for out of distribution tasks such as training themselves from scratch.", "target": ["DNNの学習自体を学習させる試み。momentum等から勾配を出力する全結合を、学習/バリデーションlossを入力とするLSTMで調整する構成。この学習器を、CNN/RNN/全結合等様々なネットワークの学習タスク約6000で学習させる。構成を変えたネットワークでも最適化できることを確認。"]} {"source": "While machine learning (ML) models are being increasingly trusted to make decisions in different and varying areas, the safety of systems using such models has become an increasing concern. In particular, ML models are often trained on data from potentially untrustworthy sources, providing adversaries with the opportunity to manipulate them by inserting carefully crafted samples into the training set. Recent work has shown that this type of attack, called a poisoning attack, allows adversaries to insert backdoors or trojans into the model, enabling malicious behavior with simple external backdoor triggers at inference time and only a blackbox perspective of the model itself. Detecting this type of attack is challenging because the unexpected behavior occurs only when a backdoor trigger, which is known only to the adversary, is present. Model users, either direct users of training data or users of pre-trained model from a catalog, may not guarantee the safe operation of their ML-based system. In this paper, we propose a novel approach to backdoor detection and removal for neural networks. Through extensive experimental results, we demonstrate its effectiveness for neural networks classifying text and images. To the best of our knowledge, this is the first methodology capable of detecting poisonous data crafted to insert backdoors and repairing the model that does not require a verified and trusted dataset.", "target": ["分類器にバックドアを設置するために作成された汚染データを検出する手法。汚染データを含むデータセットをバックドアが設置されていない分類器に入力し、出力層手前の全結合層から得たActivationをクラス毎に2クラスタリングすることで、クリーンデータと汚染データを選別する。"]} {"source": "What are the latent questions on some textual data? In this work, we investigate using question generation models for exploring a collection of documents. Our method, dubbed corpus2question, consists of applying a pre-trained question generation model over a corpus and aggregating the resulting questions by frequency and time. This technique is an alternative to methods such as topic modelling and word cloud for summarizing large amounts of textual data. Results show that applying corpus2question on a corpus of scientific articles related to COVID-19 yields relevant questions about the topic. The most frequent questions are \"what is covid 19\" and \"what is the treatment for covid\". Among the 1000 most frequent questions are \"what is the threshold for herd immunity\" and \"what is the role of ace2 in viral entry\". We show that the proposed method generated similar questions for 13 of the 27 expert-made questions from the CovidQA question answering dataset. The code to reproduce our experiments and the generated questions are available at: this https URL", "target": ["QA形式でトピックモデルのような教師なしの文書分類を行う研究。文書/関連箇所のペアからクエリ(質問)を逆生成する事前学習済みモデルを使用し(論文中ではT5をMS MARCOでFine Tuneしたモデルを使用)、生成したクエリ=トピックとして回答を含む文書を集め分類を行う。文脈を考慮した分類が可能という。"]} {"source": "While large-scale language models (LMs) are able to imitate the distribution of natural language well enough to generate realistic text, it is difficult to control which regions of the distribution they generate. This is especially problematic because datasets used for training large LMs usually contain significant toxicity, hate, bias, and negativity. 
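The poisoning-detection abstract above clusters penultimate-layer activations per class into two groups to separate clean from poisoned samples. Below is a minimal sklearn/numpy sketch of that per-class step; treating the smaller cluster as suspicious is one common heuristic, and the synthetic activations are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_suspicious(activations, labels):
    """For each class, 2-cluster the activations and flag the smaller cluster."""
    suspicious = np.zeros(len(labels), dtype=bool)
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        assign = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
            activations[idx])
        minority = np.argmin(np.bincount(assign))   # smaller cluster is the suspect one
        suspicious[idx[assign == minority]] = True
    return suspicious

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(95, 32))
poisoned = rng.normal(4.0, 1.0, size=(5, 32))       # activations shifted by a trigger
acts = np.vstack([clean, poisoned])
labels = np.zeros(100, dtype=int)                   # all samples carry the target label
print(np.where(flag_suspicious(acts, labels))[0])   # flags the last five samples
```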
We propose GeDi as an efficient method for using smaller LMs as generative discriminators to guide generation from large LMs to make them safer and more controllable. GeDi guides generation at each step by computing classification probabilities for all possible next tokens via Bayes rule by normalizing over two class-conditional distributions; one conditioned on the desired attribute, or control code, and another conditioned on the undesired attribute, or anti control code. We find that GeDi gives stronger controllability than the state of the art method while also achieving generation speeds more than 30 times faster. Additionally, training GeDi on only four topics allows us to controllably generate new topics zero-shot from just a keyword, unlocking a new capability that previous controllable generation methods do not have. Lastly, we show that GeDi can make GPT-2 (1.5B parameters) significantly less toxic without sacrificing linguistic quality, making it by far the most practical existing method for detoxifying large language models while maintaining a fast generation speed.", "target": ["言語モデルによる文生成でスタイルを反映させる手法の提案。Positiveな映画レビューを生成する場合、同時に相反するNegativeで生成を行い差異が大きい単語をサンプリングする。ただこの場合双方に等しく出現する単語が一様になるため、生成系列がレビュー(=クラス)と見なされるようバイアスをかける。"]} {"source": "Models that perform well on a training domain often fail to generalize to out-of-domain (OOD) examples. Data augmentation is a common method used to prevent overfitting and improve OOD generalization. However, in natural language, it is difficult to generate new examples that stay on the underlying data manifold. We introduce SSMBA, a data augmentation method for generating synthetic training examples by using a pair of corruption and reconstruction functions to move randomly on a data manifold. We investigate the use of SSMBA in the natural language domain, leveraging the manifold assumption to reconstruct corrupted text with masked language models. In experiments on robustness benchmarks across 3 tasks and 9 datasets, SSMBA consistently outperforms existing data augmentation methods and baseline models on both in-domain and OOD data, achieving gains of 0.8% accuracy on OOD Amazon reviews, 1.8% accuracy on OOD MNLI, and 1.4 BLEU on in-domain IWSLT14 German-English.", "target": ["自然言語処理でData Augmentationを行うシンプルな手法の提案。学習データ分布域内のサンプルが生成されるように、学習データをcorrupt(一部単語をMask)して事前学習済み言語モデルで復元するという手法を取っている。Sentiment/NLI/翻訳といったタスクで精度向上を確認。"]} {"source": "Recently, there has been a great interest in the development of small and accurate neural networks that run entirely on devices such as mobile phones, smart watches and IoT. This enables user privacy, consistent user experience and low latency. Although a wide range of applications have been targeted from wake word detection to short text classification, yet there are no on-device networks for long text classification. We propose a novel projection attention neural network PRADO that combines trainable projections with attention and convolutions. We evaluate our approach on multiple large document text classification tasks. Our results show the effectiveness of the trainable projection model in finding semantically similar phrases and reaching high performance while maintaining compact size. Using this approach, we train tiny neural networks just 200 Kilobytes in size that improve over prior CNN and LSTM models and achieve near state of the art performance on multiple long document classification tasks. 
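The GeDi abstract above guides a large LM with a small class-conditional LM used as a generative discriminator: for every candidate next token, Bayes' rule over the desired and undesired control codes gives a class posterior that reweights the base LM's distribution. The numpy sketch below shows that per-step reweighting; the posterior exponent ω and the equal class prior are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def gedi_reweight(base_probs, p_tok_desired, p_tok_undesired, omega=5.0):
    """Reweight base next-token probabilities by a Bayes-rule class posterior."""
    # P(desired | token) assuming an equal prior over the two control codes.
    posterior = p_tok_desired / (p_tok_desired + p_tok_undesired + 1e-12)
    weighted = base_probs * posterior ** omega   # omega sharpens the guidance
    return weighted / weighted.sum()

base = np.array([0.30, 0.30, 0.20, 0.10, 0.10])    # large LM next-token distribution
p_pos = np.array([0.40, 0.05, 0.25, 0.15, 0.15])   # small LM, desired control code
p_neg = np.array([0.05, 0.45, 0.25, 0.15, 0.10])   # small LM, anti control code
print(np.round(gedi_reweight(base, p_pos, p_neg), 3))  # token 0 boosted, token 1 suppressed
```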
We also apply our model for transfer learning, show its robustness and ability to further improve the performance in limited data scenarios.", "target": ["テキスト分類をオンデバイスで稼働させるための研究。ハッシュ関数を用い単語を一意のバイト列に変換、これを入力にベクトルを出力することで通常語彙数xベクトルサイズとなる埋め込み行列を不要にしている。以後は複数windowsサイズのCNNxAttentionで処理。"]} {"source": "We present a new large-scale corpus of Question-Answer driven Semantic Role Labeling (QA-SRL) annotations, and the first high-quality QA-SRL parser. Our corpus, QA-SRL Bank 2.0, consists of over 250,000 question-answer pairs for over 64,000 sentences across 3 domains and was gathered with a new crowd-sourcing scheme that we show has high precision and good recall at modest cost. We also present neural models for two QA-SRL subtasks: detecting argument spans for a predicate and generating questions to label the semantic relationship. The best models achieve question accuracy of 82.6% and span-level accuracy of 77.6% (under human evaluation) on the full pipelined QA-SRL prediction task. They can also, as we show, be used to gather additional annotations at low cost.", "target": ["文の意味役割ラベリング(主語、目的語etc)を質問回答で解くデータセットの公開。専門家でないとラベリングが困難なケースが多いが、質問回答の形にすることで(誰が本を書いたか?など)回答収集を容易にすると共に自動生成も行いやすくしている。"]} {"source": "Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall because system output is not annotated. To overcome this problem, we introduce ERRANT, a grammatical ERRor ANnotation Toolkit designed to automatically extract edits from parallel original and corrected sentences and classify them according to a new, dataset-agnostic, rule-based framework. This not only facilitates error type evaluation at different levels of granularity, but can also be used to reduce annotator workload and standardise existing GEC datasets. Human experts rated the automatic edits as “Good” or “Acceptable” in at least 95% of cases, so we applied ERRANT to the system output of the CoNLL-2014 shared task to carry out a detailed error type analysis for the first time.", "target": ["文法誤り訂正で、訂正前後の文から訂正箇所/種別を自動特定する手法の提案。これによりモデルがどの誤りをどれくらい訂正できるか評価できるようにする。誤り箇所は品詞等も考慮した編集距離、種別分類はルールベースで行っている(普通に学習するとデータ依存になるため)。"]} {"source": "In this work, we investigate the positional encoding methods used in language pre-training (e.g., BERT) and identify several problems in the existing formulations. First, we show that in the absolute positional encoding, the addition operation applied on positional embeddings and word embeddings brings mixed correlations between the two heterogeneous information resources. It may bring unnecessary randomness in the attention and further limit the expressiveness of the model. Second, we question whether treating the position of the symbol \\texttt{[CLS]} the same as other words is a reasonable design, considering its special role (the representation of the entire sentence) in the downstream tasks. Motivated from above analysis, we propose a new positional encoding method called \\textbf{T}ransformer with \\textbf{U}ntied \\textbf{P}ositional \\textbf{E}ncoding (TUPE). In the self-attention module, TUPE computes the word contextual correlation and positional correlation separately with different parameterizations and then adds them together. This design removes the mixed and noisy correlations over heterogeneous embeddings and offers more expressiveness by using different projection matrices. Furthermore, TUPE unties the \\texttt{[CLS]} symbol from other positions, making it easier to capture information from all positions. 
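The PRADO abstract above (summarized just before this point) replaces a vocabulary-sized embedding table with a fixed projection that hashes each word into a short signature. Below is an illustrative Python sketch of such a hash-based token projection; the MD5 hash, signature length, and ±1 encoding are stand-ins for the paper's trainable ternary projection, not its actual design.

```python
import hashlib
import numpy as np

def hash_projection(token, bits=64):
    """Map a token to a fixed-length +/-1 signature without an embedding table."""
    digest = hashlib.md5(token.encode("utf-8")).digest()     # deterministic per token
    as_bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:bits]
    return as_bits.astype(np.float32) * 2.0 - 1.0            # {0,1} -> {-1,+1}

sentence = "tiny models can classify long documents".split()
features = np.stack([hash_projection(w) for w in sentence])  # (num_tokens, bits)
print(features.shape, features[0][:8])
```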
Extensive experiments and ablation studies on GLUE benchmark demonstrate the effectiveness of the proposed method. Codes and models are released at this https URL.", "target": ["単語情報と位置情報(Positional Encoding)を別々の重みで処理する手法の提案。単語分散表現/位置情報、別々に相関を取りAddをすることでAttentionを作成する。GLUE benchmarkで、ベースラインであるBERTより良好なスコアを記録。"]} {"source": "Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision and reinforcement learning. In the field of natural language processing for example, Transformers have become an indispensable staple in the modern deep learning stack. Recently, a dizzying number of \"X-former\" models have been proposed - Reformer, Linformer, Performer, Longformer, to name a few - which improve upon the original Transformer architecture, many of which make improvements around computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored \"X-former\" models, providing an organized and comprehensive overview of existing work and models across multiple domains.", "target": ["最近多く公開されているTransformer(#329 )改善系論文のサーベイ(概ねxxxformerという名前になることが多い)。メモリ/演算効率など改善観点に応じてまとめられている。"]} {"source": "Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing. Hence, attention is being extensively studied to investigate various linguistic capabilities of Transformers, focusing on analyzing the parallels between attention weights and specific linguistic phenomena. This paper shows that attention weights alone are only one of the two factors that determine the output of attention and proposes a norm-based analysis that incorporates the second factor, the norm of the transformed input vectors. The findings of our norm-based analyses of BERT and a Transformer-based neural machine translation system include the following: (i) contrary to previous studies, BERT pays poor attention to special tokens, and (ii) reasonable word alignment can be extracted from attention mechanisms of Transformer. These findings provide insights into the inner workings of Transformers.", "target": ["Transformerの挙動分析でAttentionの重みだけでなく重みをかけられるベクトルの大きさ(ノルム)を考慮した研究。Attentionが大きくてもベクトルが小さいと影響は少ない(逆もしかり)。ノルムを加えた分析から、BERTが高頻度語への参照を抑えている、Transformerがアライメントをとらえていること等を発見"]} {"source": "In an effort to overcome limitations of reward-driven feature learning in deep reinforcement learning (RL) from images, we propose decoupling representation learning from policy learning. To this end, we introduce a new unsupervised learning (UL) task, called Augmented Temporal Contrast (ATC), which trains a convolutional encoder to associate pairs of observations separated by a short time difference, under image augmentations and using a contrastive loss. In online RL experiments, we show that training the encoder exclusively using ATC matches or outperforms end-to-end RL in most environments. Additionally, we benchmark several leading UL algorithms by pre-training encoders on expert demonstrations and using them, with weights frozen, in RL agents; we find that agents using ATC-trained encoders outperform all others. We also train multi-task encoders on data from multiple environments and show generalization to different downstream RL tasks. 
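The TUPE abstract above computes word-to-word and position-to-position attention with separate projection matrices and sums the two score matrices, instead of adding position embeddings to word embeddings before a single attention. A minimal numpy sketch of that untied scoring is below; the dimensions and the scaling factor are illustrative choices.

```python
import numpy as np

def untied_attention_scores(x, p, Wq, Wk, Uq, Uk):
    """Sum of content-content and position-position attention scores (TUPE-style)."""
    d = Wq.shape[1]
    content = (x @ Wq) @ (x @ Wk).T      # word-word correlation
    position = (p @ Uq) @ (p @ Uk).T     # position-position correlation, separate weights
    return (content + position) / np.sqrt(2 * d)

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 32, 8
x = rng.normal(size=(seq_len, d_model))          # word embeddings
p = rng.normal(size=(seq_len, d_model))          # absolute position embeddings
Wq, Wk, Uq, Uk = (rng.normal(size=(d_model, d_head)) for _ in range(4))
scores = untied_attention_scores(x, p, Wq, Wk, Uq, Uk)
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
print(attn.shape)                                 # (6, 6) attention matrix
```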
Finally, we ablate components of ATC, and introduce a new data augmentation to enable replay of (compressed) latent images from pre-trained encoders when RL requires augmentation. Our experiments span visually diverse RL benchmarks in DeepMind Control, DeepMind Lab, and Atari, and our complete code is available at this https URL.", "target": ["Data Augmentationを用いる強化学習で、事前に表現学習を行いその後に通常通りの強化学習を行う研究。表現学習は時系列が近い状態を近いと(Augmentationをかけても)認識できるよう対照学習を行う。その後強化学習を行う。初回からEnd2Endより高い性能を観測"]} {"source": "We propose an efficient inference procedure for non-autoregressive machine translation that iteratively refines translation purely in the continuous space. Given a continuous latent variable model for machine translation (Shu et al., 2020), we train an inference network to approximate the gradient of the marginal log probability of the target sentence, using only the latent variable as input. This allows us to use gradient-based optimization to find the target sentence at inference time that approximately maximizes its marginal probability. As each refinement step only involves computation in the latent space of low dimensionality (we use 8 in our experiments), we avoid computational overhead incurred by existing non-autoregressive inference procedures that often refine in token space. We compare our approach to a recently proposed EM-like inference procedure (Shu et al., 2020) that optimizes in a hybrid space, consisting of both discrete and continuous variables. We evaluate our approach on WMT'14 En-De, WMT'16 Ro-En and IWSLT'16 De-En, and observe two advantages over the EM-like inference: (1) it is computationally efficient, i.e. each refinement step is twice as fast, and (2) it is more effective, resulting in higher marginal probabilities and BLEU scores with the same number of refinement steps. On WMT'14 En-De, for instance, our approach is able to decode 6.2 times faster than the autoregressive model with minimal degradation to translation quality (0.9 BLEU).", "target": ["翻訳文を出力する前の潜在表現をブラッシュアップし、出力文品質を高める手法の提案。出力後のトークン(離散)を修正する手法はあったが、この場合語彙数ある候補から選ぶ必要があり語彙サイズが大きい場合適用困難だった。潜在表現レベルで修正することで速度/BLEU共に向上。"]} {"source": "We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.", "target": ["2つのネットワークを使用した表現学習の提案。ネットワークはAutoEncoderの構成で、異なるAugmentationをかけた画像(view)をonlineとtargetの2種にそれぞれ入力、onlineの出力とtargetの出力が一致するよう学習する。targetは勾配からでなくonlineの移動平均で更新する。ImageNetでtop-1 74.3%を達成"]} {"source": "Anomaly Detection (AD) in images is a fundamental computer vision problem and refers to identifying images and image substructures that deviate significantly from the norm. 
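The BYOL abstract above trains an online network to predict a target network's representation of a differently augmented view, with the target updated as a slow exponential moving average of the online weights rather than by gradients, and with no negative pairs. The numpy sketch below shows the two pieces that differ from contrastive setups, the normalized prediction loss and the EMA update; the toy linear "encoders", the momentum value, and the omission of the predictor head and loss symmetrization are simplifications.

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    """Negative cosine similarity (MSE between L2-normalized vectors)."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    return float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=1)))

def ema_update(target_w, online_w, tau=0.996):
    """Target weights follow the online weights as a slow moving average."""
    return tau * target_w + (1.0 - tau) * online_w

rng = np.random.default_rng(0)
online_w = rng.normal(size=(64, 32))             # stand-in online encoder weights
target_w = online_w.copy()
view_a, view_b = rng.normal(size=(8, 64)), rng.normal(size=(8, 64))  # two augmented views
loss = byol_loss(view_a @ online_w, view_b @ target_w)  # no negative pairs involved
target_w = ema_update(target_w, online_w)                # target updated without gradients
print(round(loss, 3))
```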
Popular AD algorithms commonly try to learn a model of normality from scratch using task specific datasets, but are limited to semi-supervised approaches employing mostly normal data due to the inaccessibility of anomalies on a large scale combined with the ambiguous nature of anomaly appearance. We follow an alternative approach and demonstrate that deep feature representations learned by discriminative models on large natural image datasets are well suited to describe normality and detect even subtle anomalies in a transfer learning setting. Our model of normality is established by fitting a multivariate Gaussian (MVG) to deep feature representations of classification networks trained on ImageNet using normal data only. By subsequently applying the Mahalanobis distance as the anomaly score we outperform the current state of the art on the public MVTec AD dataset, achieving an AUROC value of 95.8 \\pm 1.2 (mean \\pm SEM) over all 15 classes. We further investigate why the learned representations are discriminative to the AD task using Principal Component Analysis. We find that the principal components containing little variance in normal data are the ones crucial for discriminating between normal and anomalous instances. This gives a possible explanation to the often sub-par performance of AD approaches trained from scratch using normal data only. By selectively fitting a MVG to these most relevant components only, we are able to further reduce model complexity while retaining AD performance. We also investigate setting the working point by selecting acceptable False Positive Rate thresholds based on the MVG assumption. Code available at this https URL", "target": ["事前学習済みの画像認識モデルを利用して異常検知を行った研究(EfficientNetを使用)。通常データに対する事前学習済みモデルの各階層の特徴に多変量正規分布をフィッティングさせ、標本点との距離をマハラノビス距離で測り異常かどうか判定する。MVTec AD datasetでSOTA達成"]} {"source": "We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document. While infilling could enable rich functionality especially for writing assistance tools, more attention has been devoted to language modeling---a special case of infilling where text is predicted at the end of a document. In this paper, we aim to extend the capabilities of language models (LMs) to the more general task of infilling. To this end, we train (or fine-tune) off-the-shelf LMs on sequences containing the concatenation of artificially-masked text and the text which was masked. We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics. Furthermore, we show that humans have difficulty identifying sentences infilled by our approach as machine-generated in the domain of short stories.", "target": ["言語モデルを使用し文中の穴埋めを行う研究。「私は[blank]が好き[sep]りんご[answer]」のように、ランダムに単語を[blank]にした文と、[blank]にされた単語([answer])を[sep]で連結しひとつなぎの系列にし、言語モデルで学習を行う。単にBERTで埋めるより高品質なことを確認"]} {"source": "We explore the application of transformer-based language models to automated theorem proving. This work is motivated by the possibility that a major limitation of automated theorem provers compared to humans -- the generation of original mathematical terms -- might be addressable via generation from language models. We present an automated prover and proof assistant, GPT-f, for the Metamath formalization language, and analyze its performance. 
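The anomaly-detection abstract above fits a multivariate Gaussian to deep features of normal images and scores test samples by Mahalanobis distance. The numpy sketch below implements that scoring; the feature vectors are random stand-ins for pretrained-network activations, and the small ridge added to the covariance is a numerical-stability assumption.

```python
import numpy as np

def fit_gaussian(normal_features, ridge=1e-6):
    """Fit mean and (regularized) inverse covariance to normal-data features."""
    mean = normal_features.mean(axis=0)
    cov = np.cov(normal_features, rowvar=False) + ridge * np.eye(normal_features.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis_score(x, mean, cov_inv):
    """Anomaly score: Mahalanobis distance of a feature vector to the normal-data Gaussian."""
    diff = x - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 64))              # features of defect-free images
mean, cov_inv = fit_gaussian(normal)
print(round(mahalanobis_score(rng.normal(size=64), mean, cov_inv), 2))        # in-distribution
print(round(mahalanobis_score(rng.normal(size=64) + 3.0, mean, cov_inv), 2))  # shifted, anomalous
```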
GPT-f found new short proofs that were accepted into the main Metamath library, which is to our knowledge, the first time a deep-learning based system has contributed proofs that were adopted by a formal mathematics community.", "target": ["自動定理証明を言語モデルで行う研究。A=Cを証明したい場合A=BでB=C、といったステップを踏むことになるがこのステップを言語モデルにより推定していく(証明過程=系列とみなせる)。最終的に最も証明確率が高いルートを採用する。事前学習によりブースト可能なことも確認"]} {"source": "Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. In this paper, we challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. To facilitate this investigation, we compile a comprehensive biomedical NLP benchmark from publicly-available datasets. Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks, leading to new state-of-the-art results across the board. Further, in conducting a thorough evaluation of modeling choices, both for pretraining and task-specific fine-tuning, we discover that some common practices are unnecessary with BERT models, such as using complex tagging schemes in named entity recognition (NER). To help accelerate research in biomedical NLP, we have released our state-of-the-art pretrained and task-specific models for the community, and created a leaderboard featuring our BLURB benchmark (short for Biomedical Language Understanding & Reasoning Benchmark) at this https URL.", "target": ["医学・生命科学分野に特化した言語モデル PubMed BERT を公開。Wikiなどの一般コーパスは一切使わず、PubMed/PMC論文だけでBERT事前学習。BioMed NLP ベンチマーク \"BLURB\" を新たに整備し、SciBERT/BioBERTなど既存モデルの性能を上回ることを確認。"]} {"source": "We present a new flow-based video completion algorithm. Previous flow completion methods are often unable to retain the sharpness of motion boundaries. Our method first extracts and completes motion edges, and then uses them to guide piecewise-smooth flow completion with sharp edges. Existing methods propagate colors among local flow connections between adjacent frames. However, not all missing regions in a video can be reached in this way because the motion boundaries form impenetrable barriers. Our method alleviates this problem by introducing non-local flow connections to temporally distant frames, enabling propagating video content over motion boundaries. We validate our approach on the DAVIS dataset. Both visual and quantitative results show that our method compares favorably against the state-of-the-art algorithms.", "target": ["Flowを用いたビデオの欠損補完の研究。ポイントは①欠損部エッジ検出して繋げる(補完する)ことでFlowの補完③少し時間的に遠いフレームを使ったFlow計算で見えない部分を取得③勾配を用いた再構成で継ぎ目を防ぐ。先行研究より定量的に優れており、ビデオから人を消すこともできる。"]} {"source": "In this paper, we study an intermediate form of supervision, i.e., single-frame supervision, for temporal action localization (TAL). To obtain the single-frame supervision, the annotators are asked to identify only a single frame within the temporal window of an action. This can significantly reduce the labor cost of obtaining full supervision which requires annotating the action boundary. 
Compared to the weak supervision that only annotates the video-level label, the single-frame supervision introduces extra temporal action signals while maintaining low annotation overhead. To make full use of such single-frame supervision, we propose a unified system called SF-Net. First, we propose to predict an actionness score for each video frame. Along with a typical category score, the actionness score can provide comprehensive information about the occurrence of a potential action and aid the temporal boundary refinement during inference. Second, we mine pseudo action and background frames based on the single-frame annotations. We identify pseudo action frames by adaptively expanding each annotated single frame to its nearby, contextual frames and we mine pseudo background frames from all the unannotated frames across multiple videos. Together with the ground-truth labeled frames, these pseudo-labeled frames are further used for training the classifier. In extensive experiments on THUMOS14, GTEA, and BEOID, SF-Net significantly improves upon state-of-the-art weakly-supervised methods in terms of both segment localization and single-frame localization. Notably, SF-Net achieves comparable results to its fully-supervised counterpart which requires much more resource intensive annotations. The code is available at this https URL.", "target": ["1つフレーム(画像)をもとに、行動の始めと終わりの場所を予測するSF-Netを提案。各フレームやビデオ全体で行動のクラスを予測させるだけでなく、”行動”or”何もしていない”の予測もしながら学習している。全部のフレームを見る必要がないので、アノテーション補助ツールとして活用できる。"]} {"source": "In this paper, we focus on the task of extracting visual correspondences across videos. Given a query video clip from an action class, we aim to align it with training videos in space and time. Obtaining training data for such a fine-grained alignment task is challenging and often ambiguous. Hence, we propose a novel alignment procedure that learns such correspondence in space and time via cross video cycle-consistency. During training, given a pair of videos, we compute cycles that connect patches in a given frame in the first video by matching through frames in the second video. Cycles that connect overlapping patches together are encouraged to score higher than cycles that connect non-overlapping patches. Our experiments on the Penn Action and Pouring datasets demonstrate that the proposed method can successfully learn to correspond semantically similar patches across videos, and learns representations that are sensitive to object and action states.", "target": ["ビデオにおける行動の対応部分を教師なしで検出する研究。教師なしで学習したトラッカーからで検出されたパッチ同士の距離を測ることによって、パッチレベルの対応関係を学習する。既存の特徴量抽出機より良い精度で検出できる。"]} {"source": "Generative adversarial networks (GANs) have gained increasing popularity in various computer vision applications, and recently start to be deployed to resource-constrained mobile devices. Similar to other deep models, state-of-the-art GANs suffer from high parameter complexities. That has recently motivated the exploration of compressing GANs (usually generators). Compared to the vast literature and prevailing success in compressing deep classifiers, the study of GAN compression remains in its infancy, so far leveraging individual compression techniques instead of more sophisticated combinations. We observe that due to the notorious instability of training GANs, heuristically stacking different compression techniques will result in unsatisfactory results. To this end, we propose the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming (GS). 
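The SF-Net abstract above expands each single annotated frame into nearby pseudo action frames and mines pseudo background frames from the unannotated remainder. A simple Python sketch of that pseudo-labeling step is below; the fixed expansion radius and the number of mined background frames are illustrative (the paper expands adaptively and mines across videos).

```python
import numpy as np

def make_pseudo_labels(num_frames, annotated, radius=2, num_background=5, seed=0):
    """annotated: dict {frame_index: action_class}; returns frame-level pseudo labels."""
    labels = {}
    for frame, cls in annotated.items():
        lo, hi = max(0, frame - radius), min(num_frames, frame + radius + 1)
        for f in range(lo, hi):                  # nearby frames inherit the action label
            labels[f] = cls
    unlabeled = [f for f in range(num_frames) if f not in labels]
    rng = np.random.default_rng(seed)
    chosen = rng.choice(unlabeled, size=min(num_background, len(unlabeled)), replace=False)
    for f in chosen:
        labels[int(f)] = "background"            # mined pseudo background frames
    return labels

print(make_pseudo_labels(num_frames=30, annotated={10: "pour", 25: "open"}))
```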
GS seamlessly integrates three mainstream compression techniques: model distillation, channel pruning and quantization, together with the GAN minimax objective, into one unified optimization form, that can be efficiently optimized from end to end. Without bells and whistles, GS largely outperforms existing options in compressing image-to-image translation GANs. Specifically, we apply GS to compress CartoonGAN, a state-of-the-art style transfer network, by up to 47 times, with minimal visual quality degradation. Codes and pre-trained models can be found at this https URL.", "target": ["GANのGeneratorを小さくする研究。蒸留・量子化・枝刈を組み合わせる。量子化は通常微分不可だが擬似的な勾配を使うことでE2Eの学習を可能にしている。既存のモデルを1/47にまで圧縮することに成功した。"]} {"source": "Having the right inductive biases can be crucial in many tasks or scenarios where data or computing resources are a limiting factor, or where training data is not perfectly representative of the conditions at test time. However, defining, designing and efficiently adapting inductive biases is not necessarily straightforward. In this paper, we explore the power of knowledge distillation for transferring the effect of inductive biases from one model to another. We consider families of models with different inductive biases, LSTMs vs. Transformers and CNNs vs. MLPs, in the context of tasks and scenarios where having the right inductive biases is critical. We study the effect of inductive biases on the solutions the models converge to and investigate how and to what extent the effect of inductive biases is transferred through knowledge distillation, in terms of not only performance but also different aspects of converged solutions.", "target": ["構造が異なるモデル間で蒸留を行うことで、特定モデルで学習しやすい知識(CNNなら局所特徴、RNNなら系列構造など)を転移できるか検証した研究。CNN=>MLP、LSTM=>Transformerで蒸留を行いそれぞれの学習傾向が蒸留先モデルに反映されることを確認。"]} {"source": "Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks), that do not require labeled data, to learn semantic representations. These pretext tasks are created solely using the input features, such as predicting a missing image patch, recovering the color channels of an image from context, or predicting missing words, yet predicting this known\\ information helps in learning representations effective for downstream prediction tasks. This paper posits a mechanism based on conditional independence to formalize how solving certain pretext tasks can learn representations that provably decreases the sample complexity of downstream supervised tasks. Formally, we quantify how approximate independence between the components of the pretext task (conditional on the label and latent variables) allows us to learn representations that can solve the downstream task with drastically reduced sample complexity by just training a linear layer on top of the learned representation.", "target": ["自己教師学習の効果を検証した研究。最終的なタスクに必要な情報との相関(Conditional Independence)という観点から分析している。色復元の自己教師なら、風景予測への相関は強いが顔認識なら色以外の情報も必要なため相関は弱くなる。相関を推定できれば有効な自己教師を見つけることができる。"]} {"source": "The successes of deep learning, variational inference, and many other fields have been aided by specialized implementations of reverse-mode automatic differentiation (AD) to compute gradients of mega-dimensional objectives. The AD techniques underlying these tools were designed to compute exact gradients to numerical precision, but modern machine learning models are almost always trained with stochastic gradient descent. Why spend computation and memory on exact (minibatch) gradients only to use them for stochastic optimization? 
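The inductive-bias transfer abstract above relies on standard knowledge distillation between a teacher and a student of different architectures (e.g., LSTM to Transformer). The numpy sketch below shows the usual temperature-softened KL distillation objective such a setup would minimize alongside the task loss; the temperature and the random logits are generic illustrations, not values from the paper.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened class distributions."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    return float(np.mean(kl)) * temperature ** 2   # usual T^2 rescaling of the soft targets

rng = np.random.default_rng(0)
teacher = rng.normal(size=(16, 10))               # e.g. logits from an LSTM teacher
student = rng.normal(size=(16, 10))               # e.g. logits from a Transformer student
print(round(distillation_loss(student, teacher), 3))
```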
We develop a general framework and approach for randomized automatic differentiation (RAD), which can allow unbiased gradient estimates to be computed with reduced memory in return for variance. We examine limitations of the general approach, and argue that we must leverage problem specific structure to realize benefits. We develop RAD techniques for a variety of simple neural network architectures, and show that for a fixed memory budget, RAD converges in fewer iterations than using a small batch size for feedforward networks, and in a similar number for recurrent networks. We also show that RAD can be applied to scientific computing, and use it to develop a low-memory stochastic gradient method for optimizing the control parameters of a linear reaction-diffusion PDE representing a fission reactor.", "target": ["近年の深層学習フレームワークは自動微分により厳密な勾配を計算するが、そもそもSGDで最適化するなら(実際ミニバッチで勾配を計算しているように)厳密な計算は必要ないのでは?という着想のもと、演算グラフのパスをサンプリングして近似勾配を計算することで、勾配記録に必要なメモリ量を削減している"]} {"source": "As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about---summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want.", "target": ["強化学習でチューニングすることでより人手評価が高い要約モデルを作成した研究。言語モデル学習=>教師データによるFine Tune=>強化学習(PPO)という流れ。報酬関数は同じ記事に対する2つの要約の評価差分を予測できるよう学習する。自動評価指標(ROUGE)を使うより好まれる結果"]} {"source": "This paper introduces WaveGrad, a conditional model for waveform generation which estimates gradients of the data density. The model is built on prior work on score matching and diffusion probabilistic models. It starts from a Gaussian white noise signal and iteratively refines the signal via a gradient-based sampler conditioned on the mel-spectrogram. WaveGrad offers a natural way to trade inference speed for sample quality by adjusting the number of refinement steps, and bridges the gap between non-autoregressive and autoregressive models in terms of audio quality. We find that it can generate high fidelity audio samples using as few as six iterations. 
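The human-feedback summarization abstract above trains a reward model from pairwise comparisons of summaries and then fine-tunes the policy against it with RL. A minimal numpy sketch of the pairwise reward-model objective, -log σ(r(preferred) - r(rejected)), is below; the linear scalar "reward head" over random features is a stand-in for the fine-tuned language-model head.

```python
import numpy as np

def preference_loss(reward_preferred, reward_rejected):
    """Negative log-likelihood that the human-preferred summary scores higher."""
    margin = reward_preferred - reward_rejected
    return float(np.mean(np.log1p(np.exp(-margin))))   # equals -log(sigmoid(margin))

rng = np.random.default_rng(0)
w = rng.normal(size=128)                              # stand-in reward head
features_preferred = rng.normal(size=(32, 128))       # features of human-preferred summaries
features_rejected = rng.normal(size=(32, 128))        # features of the rejected alternatives
loss = preference_loss(features_preferred @ w, features_rejected @ w)
print(round(loss, 3))
```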
Experiments reveal WaveGrad to generate high fidelity audio, outperforming adversarial non-autoregressive baselines and matching a strong likelihood-based autoregressive baseline using fewer sequential operations. Audio samples are available at this https URL.", "target": ["ノイズを除去するAuto Encoderの枠組みで学習する、段階的生成モデルDenoising Diffusion確率モデルを音声生成に応用した研究(更新回数でなくノイズレベルで直接条件付けしている)。完全なノイズ(white noise)から段階的にノイズを除けていくことで生成を行う。6回程度の更新で高忠実度な音声を生成。"]} {"source": "In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.", "target": ["ネットワークのパラメータ集合を考え、実験によって良い集合を見つけるネットワーク構造最適化手法を提案。NASのような完全自動化ではなく、むしろ熟練者の手動デザインに近いが具体的な評価方法と結果を示している。探索結果のRegNetはEfficientNet以上の精度で5倍高速。"]} {"source": "This paper presents X3D, a family of efficient video networks that progressively expand a tiny 2D image classification architecture along multiple network axes, in space, time, width and depth. Inspired by feature selection methods in machine learning, a simple stepwise network expansion approach is employed that expands a single axis in each step, such that good accuracy to complexity trade-off is achieved. To expand X3D to a specific target complexity, we perform progressive forward expansion followed by backward contraction. X3D achieves state-of-the-art performance while requiring 4.8x and 5.5x fewer multiply-adds and parameters for similar accuracy as previous work. Our most surprising finding is that networks with high spatiotemporal resolution can perform well, while being extremely light in terms of network width and parameters. We report competitive accuracy at unprecedented efficiency on video classification and detection benchmarks. Code will be available at: this https URL", "target": ["行動検知ネットワークの効率的探索の研究。ネットワークの各構成要素を一番効率が良い因子を順次選びながら精度を上げていく。先行研究の1/5のパラメータ数で最高精度を更新。探索されたネットワークはチャネル数が小さくて高解像度画像を使うものになった。"]} {"source": "Attention mechanisms have become a popular component in deep neural networks, yet there has been little examination of how different influencing factors and methods for computing attention from these factors affect performance. Toward a better general understanding of attention mechanisms, we present an empirical study that ablates various spatial attention elements within a generalized attention formulation, encompassing the dominant Transformer attention as well as the prevalent deformable convolution and dynamic convolution modules. 
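The RegNet abstract above observes that the widths of good networks follow a quantized linear function of block index. Below is a numpy sketch of that parametrization as I read it from the paper (a linear width schedule snapped to the nearest power of a multiplier w_m, then rounded to a channel multiple); the specific parameter values are illustrative.

```python
import numpy as np

def regnet_widths(depth, w0=24, wa=36.0, wm=2.5, channel_multiple=8):
    """Quantized linear width schedule: linear growth snapped to powers of wm."""
    u = w0 + wa * np.arange(depth)                    # continuous linear widths
    s = np.round(np.log(u / w0) / np.log(wm))         # nearest exponent of wm
    widths = w0 * np.power(wm, s)                     # quantized widths
    widths = np.round(widths / channel_multiple) * channel_multiple
    return widths.astype(int)

print(regnet_widths(depth=16))                        # per-block widths of one candidate net
```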
Conducted on a variety of applications, the study yields significant findings about spatial attention in deep networks, some of which run counter to conventional understanding. For example, we find that the query and key content comparison in Transformer attention is negligible for self-attention, but vital for encoder-decoder attention. A proper combination of deformable convolution with key content only saliency achieves the best accuracy-efficiency tradeoff in self-attention. Our results suggest that there exists much room for improvement in the design of attention mechanisms.", "target": ["画像認識におけるAttentionの類型と効果をまとめた研究。Attentionをqueryのベクトル/位置、keyのベクトル/位置の組み合わせから計算するものと定義し(演算種類は2x2=4となる)、Transformer・素/Deformable/Dynamic CNNのAttentionがそれぞれどの組み合わせの計算なのか示している。"]} {"source": "Methods for dynamic difficulty adjustment allow games to be tailored to particular players to maximize their engagement. However, current methods often only modify a limited set of game features such as the difficulty of the opponents, or the availability of resources. Other approaches, such as experience-driven Procedural Content Generation (PCG), can generate complete levels with desired properties such as levels that are neither too hard nor too easy, but require many iterations. This paper presents a method that can generate and search for complete levels with a specific target difficulty in only a few trials. This advance is enabled by through an Intelligent Trial-and-Error algorithm, originally developed to allow robots to adapt quickly. Our algorithm first creates a large variety of different levels that vary across predefined dimensions such as leniency or map coverage. The performance of an AI playing agent on these maps gives a proxy for how difficult the level would be for another AI agent (e.g. one that employs Monte Carlo Tree Search instead of Greedy Tree Search); using this information, a Bayesian Optimization procedure is deployed, updating the difficulty of the prior map to reflect the ability of the agent. The approach can reliably find levels with a specific target difficulty for a variety of planning agents in only a few trials, while maintaining an understanding of their skill landscape.", "target": ["プレイヤーにとって適切なゲームレベル(勝率50~70%)を素早く発見する手法の提案。最初に様々なレベルの環境を作成、幾つかの手法でプレイさせて難易度を割り出し、環境決定要素(敵の数やゴールへの経路数等)x難易度の事前分布を作成。実プレイ結果により更新を行っていく。"]} {"source": "Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson's estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN's latent space in an unsupervised fashion. 
Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces.", "target": ["GANの潜在空間においてdisentangleな表現を得る研究。ある方向iに変化させることで他の要素j(≠i)に変化を与えないようにHessianを使った正則化項を提案。学習済みモデルに対してもFine-tuningが可能になっている。"]} {"source": "Causal models can compactly and efficiently encode the data-generating process under all interventions and hence may generalize better under changes in distribution. These models are often represented as Bayesian networks and learning them scales poorly with the number of variables. Moreover, these approaches cannot leverage previously learned knowledge to help with learning new causal models. In order to tackle these challenges, we represent a novel algorithm called \\textit{causal relational networks} (CRN) for learning causal models using neural networks. The CRN represent causal models using continuous representations and hence could scale much better with the number of variables. These models also take in previously learned information to facilitate learning of new causal models. Finally, we propose a decoding-based metric to evaluate causal models with continuous representations. We test our method on synthetic data achieving high accuracy and quick adaptation to previously unseen causal models.", "target": ["ある因果関係に対する介入効果をニューラルネットで予測する研究。メタラーニングの構成をとっており初見の因果関係について素早く介入効果を予測する設定となっている。因果関係を構成する各変数から入力以外の変数状態を予測し、入力+他変数からの予測にAttentionをかけて集計し効果(出力)を予測する"]} {"source": "Super-resolution is an ill-posed problem, since it allows for multiple predictions for a given low-resolution image. This fundamental fact is largely ignored by state-of-the-art deep learning based approaches. These methods instead train a deterministic mapping using combinations of reconstruction and adversarial losses. In this work, we therefore propose SRFlow: a normalizing flow based super-resolution method capable of learning the conditional distribution of the output given the low-resolution input. Our model is trained in a principled manner using a single loss, namely the negative log-likelihood. SRFlow therefore directly accounts for the ill-posed nature of the problem, and learns to predict diverse photo-realistic high-resolution images. Moreover, we utilize the strong image posterior learned by SRFlow to design flexible image manipulation techniques, capable of enhancing super-resolved images by, e.g., transferring content from other images. We perform extensive experiments on faces, as well as on super-resolution in general. SRFlow outperforms state-of-the-art GAN-based approaches in terms of both PSNR and perceptual quality metrics, while allowing for diversity through the exploration of the space of super-resolved solutions.", "target": ["正規化流を使った高解像度画像生成。複数の損失項の調整が必要なGANと異なり対数尤度のみを最適化することで学習する。また、可逆変換を使っているため潜在表現との一対一対応させることが可能なので、高解像度の潜在変数を使ってスタイル変換もできる。"]} {"source": "Recent work has shown that self-attention can serve as a basic building block for image recognition models. We explore variations of self-attention and assess their effectiveness for image recognition. We consider two forms of self-attention. One is pairwise self-attention, which generalizes standard dot-product attention and is fundamentally a set operator. The other is patchwise self-attention, which is strictly more powerful than convolution. Our pairwise self-attention networks match or outperform their convolutional counterparts, and the patchwise models substantially outperform the convolutional baselines. 
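The Hessian Penalty abstract above regularizes a generator so that its Hessian with respect to the latent code is close to diagonal, using a Hutchinson-style stochastic estimator. The numpy sketch below illustrates the underlying idea on a toy scalar function: estimate the second directional derivative v^T H v by central finite differences along random Rademacher directions and penalize its variance, which grows with the off-diagonal Hessian mass. The step size, sample count, and scalar-output simplification are assumptions for illustration.

```python
import numpy as np

def hessian_penalty(f, z, num_dirs=20, eps=0.1, seed=0):
    """Variance of v^T H v over Rademacher directions v, via central finite differences."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(num_dirs):
        v = rng.choice([-1.0, 1.0], size=z.shape)
        # Second-order central difference approximates the second directional derivative.
        estimates.append((f(z + eps * v) - 2.0 * f(z) + f(z - eps * v)) / eps ** 2)
    return float(np.var(estimates))

entangled = lambda z: z[0] * z[1]                  # off-diagonal Hessian -> large penalty
disentangled = lambda z: z[0] ** 2 + z[1] ** 2     # diagonal Hessian -> near-zero penalty
z0 = np.array([0.5, -0.3])
print(round(hessian_penalty(entangled, z0), 3),
      round(hessian_penalty(disentangled, z0), 3))
```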
We also conduct experiments that probe the robustness of learned representations and conclude that self-attention networks may have significant benefits in terms of robustness and generalization.", "target": ["畳み込み処理をSelf-Attentionで置き換えた機構の提案。周辺点に重みをかけ集計するのはCNNと変わらないが、画一的なKernelでなく周辺点個別に(Patchwise)、また周辺点との関係(SumやConcatして重みをかける=Pairwise)からAttentionの重みを計算し集計する。"]} {"source": "We present a new method to learn video representations from large-scale unlabeled video data. Ideally, this representation will be generic and transferable, directly usable for new tasks such as action recognition and zero or few-shot learning. We formulate unsupervised representation learning as a multi-modal, multi-task learning problem, where the representations are shared across different modalities via distillation. Further, we introduce the concept of loss function evolution by using an evolutionary search algorithm to automatically find optimal combination of loss functions capturing many (self-supervised) tasks and modalities. Thirdly, we propose an unsupervised representation evaluation metric using distribution matching to a large unlabeled dataset as a prior constraint, based on Zipf's law. This unsupervised constraint, which is not guided by any labeling, produces similar results to weakly-supervised, task-specific ones. The proposed unsupervised representation learning results in a single RGB network and outperforms previous methods. Notably, it is also more effective than several label-based methods (e.g., ImageNet), with the exception of large, fully labeled video datasets.", "target": ["マルチタスク・マルチモーダルで教師なし学習をさせつつ、進化アルゴによるタスクバランスの調整と蒸留を使ってRGB動画を入力とするネットワークに情報を集約する機構を提案。200万動画を利用することで行動検知タスクにおいて非常に高い精度を達成した。"]} {"source": "Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes.", "target": ["画像でも局所特徴間の相関を得るためにSelf-Attentionが使用されているが、演算効率に問題がある。そこで、列内(=1次元のSelf-Attention)を行った後に行内のSelf-Attentionを行うといういわゆる因子化演算を提案している。加えて相対位置/特徴をより反映させるPosition-Sensitiveな方法も提案している"]} {"source": "Pre-trained language models such as BERT have exhibited remarkable performances in many tasks in natural language understanding (NLU). The tokens in the models are usually fine-grained in the sense that for languages like English they are words or sub-words and for languages like Chinese they are characters.
In English, for example, there are multi-word expressions which form natural lexical units and thus the use of coarse-grained tokenization also appears to be reasonable. In fact, both fine-grained and coarse-grained tokenizations have advantages and disadvantages for learning of pre-trained language models. In this paper, we propose a novel pre-trained language model, referred to as AMBERT (A Multi-grained BERT), on the basis of both fine-grained and coarse-grained tokenizations. For English, AMBERT takes both the sequence of words (fine-grained tokens) and the sequence of phrases (coarse-grained tokens) as input after tokenization, employs one encoder for processing the sequence of words and the other encoder for processing the sequence of the phrases, utilizes shared parameters between the two encoders, and finally creates a sequence of contextualized representations of the words and a sequence of contextualized representations of the phrases. Experiments have been conducted on benchmark datasets for Chinese and English, including CLUE, GLUE, SQuAD and RACE. The results show that AMBERT can outperform BERT in all cases, particularly the improvements are significant for Chinese. We also develop a method to improve the efficiency of AMBERT in inference, which still performs better than BERT with the same computational cost as BERT.", "target": ["細かいTokenizeと荒いTokenizeを別々にEncodeして双方の特徴を使用できるようにした研究。モデルはBERTベースで、それぞれのEncoderは重みを共有する。中国語の場合は文字/単語、英語の場合は単語/フレーズで行っている。GLUE等で既存モデルを上回る精度を確認。"]} {"source": "Natural language interfaces to databases (NLIDB) democratize end user access to relational data. Due to fundamental differences between natural language communication and programming, it is common for end users to issue questions that are ambiguous to the system or fall outside the semantic scope of its underlying query language. We present Photon, a robust, modular, cross-domain NLIDB that can flag natural language input to which a SQL mapping cannot be immediately determined. Photon consists of a strong neural semantic parser (63.2\\% structure accuracy on the Spider dev benchmark), a human-in-the-loop question corrector, a SQL executor and a response generator. The question corrector is a discriminative neural sequence editor which detects confusion span(s) in the input question and suggests rephrasing until a translatable input is given by the user or a maximum number of iterations are conducted. Experiments on simulated data show that the proposed method effectively improves the robustness of text-to-SQL system against untranslatable user input. The live demo of our system is available at this http URL.", "target": ["Text-to-SQLを対話形式で行う研究。自然言語のクエリの中で不明瞭な箇所を、対話で確認し明確化する。質問/スキーマを連結した系列をBERTに投入し、Bi-LSTMにかけた後質問パートはさらにBi-LSTMをかけてDecoderへ、残りのパートはDB特徴として扱う。デモが公開されており実際試すことができる。"]} {"source": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g.Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. 
Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.", "target": ["未見カテゴリを分類するzero-shot学習において、GANを用いた研究。テキストを入力としたGANで生成したそれらしきデータで分類器を訓練することでzero-shot学習問題を簡単な分類問題に落とし込んだ。先行研究と比較して優れた結果。"]} {"source": "Existing semi-supervised learning (SSL) algorithms use a single weight to balance the loss of labeled and unlabeled examples, i.e., all unlabeled examples are equally weighted. But not all unlabeled data are equal. In this paper we study how to use a different weight for every unlabeled example. Manual tuning of all those weights -- as done in prior work -- is no longer possible. Instead, we adjust those weights via an algorithm based on the influence function, a measure of a model's dependency on one training example. To make the approach efficient, we propose a fast and effective approximation of the influence function. We demonstrate that this technique outperforms state-of-the-art methods on semi-supervised image and language classification tasks.", "target": ["半教師あり学習において、通常ラベルなしデータの重みは一様に扱われるが、個々のデータの重みを自動決定する手法を提案。FixMatch等既存のロスに組み込むことが可能で、有意に精度を向上させることができる。"]} {"source": "Data augmentations have been widely studied to improve the accuracy and robustness of classifiers. However, the potential of image augmentation in improving GAN models for image synthesis has not been thoroughly investigated in previous studies. In this work, we systematically study the effectiveness of various existing augmentation techniques for GAN training in a variety of settings. We provide insights and guidelines on how to augment images for both vanilla GANs and GANs with regularizations, improving the fidelity of the generated images substantially. Surprisingly, we find that vanilla GANs attain generation quality on par with recent state-of-the-art results if we use augmentations on both real and generated images. When this GAN training is combined with other augmentation-based regularization techniques, such as contrastive loss and consistency regularization, the augmentations further improve the quality of generated images. We provide new state-of-the-art results for conditional generation on CIFAR-10 with both consistency loss and contrastive loss as additional regularizations.", "target": ["GAN学習時の判別器にFake/Real両方にデータ拡張をかけることでFIDスコアが大幅に向上したという研究。Real画像だけにかけるだけでは効果がない。さらにデータ拡張前後で判別器の値を不変にする制約であるConsistency(BCR)や自己教師あり学習で使われるContrastive loss(Cntr)の導入も効果があった。"]} {"source": "Deep learning's recent history has been one of achievement: from triumphing over humans in the game of Go to world-leading performance in image recognition, voice recognition, translation, and other tasks. But this progress has come with a voracious appetite for computing power. This article reports on the computational demands of Deep Learning applications in five prominent application areas and shows that progress in all five is strongly reliant on increases in computing power. Extrapolating forward this reliance reveals that progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable. 
Thus, continued progress in these applications will require dramatically more computationally-efficient methods, which will either have to come from changes to deep learning or from moving to other machine learning methods.", "target": ["Deep Learningは膨大な計算力を使うことで多くのタスクの性能を向上させたが、必要とされる計算力がどんどん大きくなっているので、ハードウェアの発展次第では失速していくかもしれないことを示唆した論文。金銭的・環境的な負荷も法外なものになっていくので、抜本的な改善が必要ではないかと提言している。"]} {"source": "Prior work in visual dialog has focused on training deep neural models on VisDial in isolation. Instead, we present an approach to leverage pretraining on related vision-language datasets before transferring to visual dialog. We adapt the recently proposed ViLBERT (Lu et al., 2019) model for multi-turn visually-grounded conversations. Our model is pretrained on the Conceptual Captions and Visual Question Answering datasets, and finetuned on VisDial. Our best single model outperforms prior published work (including model ensembles) by more than 1% absolute on NDCG and MRR. Next, we find that additional finetuning using \"dense\" annotations in VisDial leads to even higher NDCG -- more than 10% over our base model -- but hurts MRR -- more than 17% below our base model! This highlights a trade-off between the two primary metrics -- NDCG and MRR -- which we find is due to dense annotations not correlating well with the original ground-truth answers to questions.", "target": ["画像を参照し対話を行うモデルに事前学習を活用した研究。言語/画像双方を入力に取り相互にAttentionを貼るViLBERTをベースにWikipedia=>VQA(一回答)の順で事前学習、Visual Dialog(複数回答)でFineTuneする。これにより精度向上したが、評価指標間のトレードオフも発見"]} {"source": "Query expansion is a technique widely used in image search consisting in combining highly ranked images from an original query into an expanded query that is then reissued, generally leading to increased recall and precision. An important aspect of query expansion is choosing an appropriate way to combine the images into a new query. Interestingly, despite the undeniable empirical success of query expansion, ad-hoc methods with different caveats have dominated the landscape, and not a lot of research has been done on learning how to do query expansion. In this paper we propose a more principled framework to query expansion, where one trains, in a discriminative manner, a model that learns how images should be aggregated to form the expanded query. Within this framework, we propose a model that leverages a self-attention mechanism to effectively learn how to transfer information between the different images before aggregating them. Our approach obtains higher accuracy than existing approaches on standard benchmarks. More importantly, our approach is the only one that consistently shows high accuracy under different regimes, overcoming caveats of existing methods.", "target": ["画像検索におけるクエリ拡張の研究。シンプルかつ強力な手法としてクエリ画像に検索結果TopNの画像を追加する手法があるが、Self-Attentionを用い重みをかけてTopNをマージする手法を提案している。関連画像が少なくても改善効果があることを確認。"]} {"source": "Supervised deep networks are among the best methods for finding correspondences in stereo image pairs. Like all supervised approaches, these networks require ground truth data during training. However, collecting large quantities of accurate dense correspondence data is very challenging. We propose that it is unnecessary to have such a high reliance on ground truth depths or even corresponding stereo pairs. Inspired by recent progress in monocular depth estimation, we generate plausible disparity maps from single images. In turn, we use those flawed disparity maps in a carefully designed pipeline to generate stereo training pairs. 
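To make the mono-to-stereo entry above more concrete, the sketch below shows the core warping step in its simplest possible form: forward-shifting each pixel of a single image by an estimated disparity to synthesize the other stereo view. The function name and toy inputs are illustrative; the paper's pipeline additionally handles occlusions, depth sharpening, and hole filling, all of which are omitted here.

```python
import numpy as np

def synthesize_right_view(left, disparity):
    """Naively forward-warp a left image into a synthetic right view.

    Each pixel is shifted left by its (integer-rounded) disparity; pixels that
    are never written remain holes (zeros). A full training pipeline would add
    occlusion handling, depth sharpening and background filling on top of this.
    """
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disparity[y, x]))
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right

# Illustrative toy example: a 4x8 "image" with a uniform disparity of 2 pixels.
left = np.arange(32, dtype=float).reshape(4, 8)
disp = np.full((4, 8), 2.0)
print(synthesize_right_view(left, disp))
```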
Training in this manner makes it possible to convert any collection of single RGB images into stereo training data. This results in a significant reduction in human effort, with no need to collect real depths or to hand-design synthetic data. We can consequently train a stereo matching network from scratch on datasets like COCO, which were previously hard to exploit for stereo. Through extensive experiments we show that our approach outperforms stereo networks trained with standard synthetic datasets, when evaluated on KITTI, ETH3D, and Middlebury.", "target": ["深度推定において、単眼画像からステレオ画像ペアを生成し、それをステレオ深度推定モデル学習に活用する手法を提案。まず単眼深度推定モデルを使って深度を推定し、それから反対側の画像を推定する。そして左右両方の画像をステレオ深度推定モデルに突っ込む。様々なステレオ深度推定モデルの精度向上を確認。"]} {"source": "We present FastRP, a scalable and performant algorithm for learning distributed node representations in a graph. FastRP is over 4,000 times faster than state-of-the-art methods such as DeepWalk and node2vec, while achieving comparable or even better performance as evaluated on several real-world networks on various downstream tasks. We observe that most network embedding methods consist of two components: construct a node similarity matrix and then apply dimension reduction techniques to this matrix. We show that the success of these methods should be attributed to the proper construction of this similarity matrix, rather than the dimension reduction method employed. FastRP is proposed as a scalable algorithm for network embeddings. Two key features of FastRP are: 1) it explicitly constructs a node similarity matrix that captures transitive relationships in a graph and normalizes matrix entries based on node degrees; 2) it utilizes very sparse random projection, which is a scalable optimization-free method for dimension reduction. An extra benefit from combining these two design choices is that it allows the iterative computation of node embeddings so that the similarity matrix need not be explicitly constructed, which further speeds up FastRP. FastRP is also advantageous for its ease of implementation, parallelization and hyperparameter tuning. The source code is available at this https URL.", "target": ["各ノード間の距離を保ったままグラフデータの次元削減をするタスクにおいて、DeepWalkより4000倍高速なFastRPを提案。Sparse Random Projectionによる射影行列初期化を踏襲しながら、複数ステップの遷移を逐次的に計算させることにより計算量を削減している。"]} {"source": "Machine learning is predicated on the concept of generalization: a model achieving low error on a sufficiently large training set should also perform well on novel samples from the same distribution. We show that both data whitening and second order optimization can harm or entirely prevent generalization. In general, model training harnesses information contained in the sample-sample second moment matrix of a dataset. For a general class of models, namely models with a fully connected first layer, we prove that the information contained in this matrix is the only information which can be used to generalize. Models trained using whitened data, or with certain second order optimization schemes, have less access to this information, resulting in reduced or nonexistent generalization ability. We experimentally verify these predictions for several architectures, and further demonstrate that generalization continues to be harmed even when theoretical requirements are relaxed. 
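A minimal NumPy sketch of the two ingredients highlighted in the FastRP entry above (degree-based normalization of the transition matrix and a very sparse random projection, applied iteratively) is given below. The embedding dimension, sparsity parameter, and per-step weights are illustrative assumptions rather than the tuned values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fastrp_embeddings(adj, dim=4, weights=(0.0, 1.0, 1.0), sparsity=3):
    """FastRP-style node embeddings: a degree-normalized transition matrix,
    a very sparse random projection, and a weighted sum over iteration steps.

    weights[k] scales the embedding after k+1 propagation steps; the values
    here are illustrative, not the tuned ones from the paper.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    trans = adj / np.maximum(deg, 1)                     # D^-1 A
    # Achlioptas-style very sparse random projection matrix.
    vals = rng.choice([np.sqrt(sparsity), 0.0, -np.sqrt(sparsity)],
                      size=(n, dim),
                      p=[0.5 / sparsity, 1 - 1 / sparsity, 0.5 / sparsity])
    emb, out = vals, np.zeros((n, dim))
    for w in weights:
        emb = trans @ emb                                # one more propagation step
        norm = np.linalg.norm(emb, axis=1, keepdims=True)
        out += w * emb / np.maximum(norm, 1e-12)         # L2-normalize, then weight
    return out

# Toy graph: a 4-node path 0-1-2-3.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(fastrp_embeddings(A))
```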
However, we also show experimentally that regularized second order optimization can provide a practical tradeoff, where training is accelerated but less information is lost, and generalization can in some circumstances even improve.", "target": ["白色化の処理を行うと学習速度は向上するが汎化性能は下がるという研究(最初の層が全結合層(CNN含む)の場合)。ニュートン法等二次の情報を使用した最適化は白色化済みデータ上の最適化と見なせ同様に影響あり。予測が学習データ点の補完により成せるとすると、その情報の削りが汎化性能への影響となる"]} {"source": "We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new Hopfield network can store exponentially (with the dimension of the associative space) many patterns, retrieves the pattern with one update, and has exponentially small retrieval errors. It has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. The new update rule is equivalent to the attention mechanism used in transformers. This equivalence enables a characterization of the heads of transformer models. These heads perform in the first layers preferably global averaging and in higher layers partial averaging via metastable states. The new modern Hopfield network can be integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. We demonstrate the broad applicability of the Hopfield layers across various domains. Hopfield layers improved state-of-the-art on three out of four considered multiple instance learning problems as well as on immune repertoire classification with several hundreds of thousands of instances. On the UCI benchmark collections of small classification tasks, where deep learning methods typically struggle, Hopfield layers yielded a new state-of-the-art when compared to different machine learning methods. Finally, Hopfield layers achieved state-of-the-art on two drug design datasets. The implementation is available at: this https URL", "target": ["Self-Attentionの更新ルールが連続的な状態を記憶するHopfield Networkの更新と同等であるとした論文。これに基づく解析ではBERTの第一レイヤは全パターンの平均、上に行くにつれ特定パターンを記憶しているとのこと。またPooling/LSTM(Memory)の処理もHopfieldで代替可能であると主張している。"]} {"source": "Within months of birth, children develop meaningful expectations about the world around them. How much of this early knowledge can be explained through generic learning mechanisms applied to sensory data, and how much of it requires more substantive innate inductive biases? Addressing this fundamental question in its full generality is currently infeasible, but we can hope to make real progress in more narrowly defined domains, such as the development of high-level visual categories, thanks to improvements in data collecting technology and recent progress in deep learning. In this paper, our goal is precisely to achieve such progress by utilizing modern self-supervised deep learning methods and a recent longitudinal, egocentric video dataset recorded from the perspective of three young children (Sullivan et al., 2020).
Our results demonstrate the emergence of powerful, high-level visual representations from developmentally realistic natural videos using generic self-supervised learning objectives.", "target": ["赤ん坊の知能の発達をDLを使って解析する試み。頭につけたカメラの動画から自己教師あり学習(MOCO)で得られた表現を使って分類問題を解いてカテゴリ毎の精度を見たり、Attention Mapによる可視化で、赤ん坊が何を学んでいるかをみている。"]} {"source": "Single image deraining regards an input image as a fusion of a background image, a transmission map, rain streaks, and atmosphere light. While advanced models are proposed for image restoration (i.e., background image generation), they regard rain streaks with the same properties as background rather than transmission medium. As vapors (i.e., rain streaks accumulation or fog-like rain) are conveyed in the transmission map to model the veiling effect, the fusion of rain streaks and vapors do not naturally reflect the rain image formation. In this work, we reformulate rain streaks as transmission medium together with vapors to model rain imaging. We propose an encoder-decoder CNN named as SNet to learn the transmission map of rain streaks. As rain streaks appear with various shapes and directions, we use ShuffleNet units within SNet to capture their anisotropic representations. As vapors are brought by rain streaks, we propose a VNet containing spatial pyramid pooling (SSP) to predict the transmission map of vapors in multi-scales based on that of rain streaks. Meanwhile, we use an encoder CNN named ANet to estimate atmosphere light. The SNet, VNet, and ANet are jointly trained to predict transmission maps and atmosphere light for rain image restoration. Extensive experiments on the benchmark datasets demonstrate the effectiveness of the proposed visual model to predict rain streaks and vapors. The proposed deraining method performs favorably against state-of-the-art deraining approaches.", "target": ["従来の雨粒除去の手法では、雨粒周辺の水蒸気と画像全体のコントラスト変化で上手くいかないことがあった。この研究では、雨粒除去とコントラスト変化を予測するネットワークを事前学習させた上で、水蒸気除去も含めて学習させる機構を提案。先行研究と比較して綺麗に除去できている。"]} {"source": "Transformers are emerging as the new workhorse of NLP, showing great success across tasks. Unlike LSTMs, transformers process input sequences entirely through self-attention. Previous work has suggested that the computational capabilities of self-attention to process hierarchical structures are limited. In this work, we mathematically investigate the computational power of self-attention to model formal languages. Across both soft and hard attention, we show strong theoretical limitations of the computational abilities of self-attention, finding that it cannot model periodic finite-state languages, nor hierarchical structure, unless the number of layers or heads increases with input length. These limitations seem surprising given the practical success of self-attention and the prominent role assigned to hierarchical structure in linguistics, suggesting that natural language can be approximated well with models that are too weak for the formal languages typically assumed in theoretical linguistics.", "target": ["Transformerが解けないタスクとその理論的な根拠を述べた研究。Self-Attention型のネットワークは特定要素への注目を繰り返していくため、要素全体の情報が必要なタスク(パリティビットの付与やカッコの閉じ)は解けないとしている(特にHard Attentionの場合)。"]} {"source": "Transfer learning has emerged as a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
This process consists of taking a neural network pre-trained on a large feature-rich source dataset, freezing the early layers that encode essential generic image properties, and then fine-tuning the last few layers in order to capture specific information related to the target situation. This approach is particularly useful when only limited or weakly labeled data are available for the new task. In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models, especially if only limited data are available for the new domain task. Further, we observe that adversarial training biases the learnt representations to retaining shapes, as opposed to textures, which impacts the transferability of the source models. Finally, through the lens of influence functions, we discover that transferred adversarially-trained models contain more human-identifiable semantic information, which explains -- at least partly -- why adversarially-trained models transfer better.", "target": ["普通に学習したモデルと比較して敵対的ノイズを乗せながら学習したモデルは、転移学習において高い性能を示した。可視化した結果、より人間に近い感覚で分類をしていることがわかり、それが影響しているのでは、とのこと。"]} {"source": "Training with more data has always been the most stable and effective way of improving performance in the deep learning era. As the largest object detection dataset so far, Open Images brings great opportunities and challenges for object detection in general and sophisticated scenarios. However, owing to its semi-automatic collecting and labeling pipeline to deal with the huge data scale, Open Images dataset suffers from label-related problems that objects may explicitly or implicitly have multiple labels and the label distribution is extremely imbalanced. In this work, we quantitatively analyze these label problems and provide a simple but effective solution. We design a concurrent softmax to handle the multi-label problems in object detection and propose a soft-sampling method with hybrid training scheduler to deal with the label imbalance. Overall, our method yields a dramatic improvement of 3.34 points, leading to the best single model with 60.90 mAP on the public object detection test set of Open Images. And our ensembling result achieves 67.17 mAP, which is 4.29 points higher than the best result of Open Images public test 2018.", "target": ["OpenImagesはCOCOよりラベル不均衡があり、また、1つの物体に複数のラベルがついており今までの物体検知タスクとは異なっていた。そこで、ラベルの数によってサンプリング確率を動的に調整できるsoft-balance samplingとラベルの共起確率を加味したConcurrent softmaxを提案。大きくスコアを上げることができた。"]} {"source": "We present a system that converts annotated broadcast video of tennis matches into interactively controllable video sprites that behave and appear like professional tennis players. Our approach is based on controllable video textures, and utilizes domain knowledge of the cyclic structure of tennis rallies to place clip transitions and accept control inputs at key decision-making moments of point play. Most importantly, we use points from the video collection to model a player's court positioning and shot selection decisions during points. We use these behavioral models to select video clips that reflect actions the real-life player is likely to take in a given match play situation, yielding sprites that behave realistically at the macro level of full points, not just individual tennis motions.
Our system can generate novel points between professional tennis players that resemble Wimbledon broadcasts, enabling new experiences such as the creation of matchups between players that have not competed in real life, or interactive control of players in the Wimbledon final. According to expert tennis players, the rallies generated using our approach are significantly more realistic in terms of player behavior than video sprite methods that only consider the quality of motion transitions during video synthesis.", "target": ["テニスの試合をリアルに合成する研究。テニスのドメイン知識をもとに、選手の動きを「ボールに追いついて打つ動作→次のポジション取り」に分割し、それぞれデータから近いものを選んだ後に修正して合成する。影がないので合成物と判別できるが、動き自体はかなり自然なものになっている。"]} {"source": "Deep reinforcement learning has the potential to train robots to perform complex tasks in the real world without requiring accurate models of the robot or its environment. A practical approach is to train agents in simulation, and then transfer them to the real world. One popular method for achieving transferability is to use domain randomisation, which involves randomly perturbing various aspects of a simulated environment in order to make trained agents robust to the reality gap. However, less work has gone into understanding such agents - which are deployed in the real world - beyond task performance. In this work we examine such agents, through qualitative and quantitative comparisons between agents trained with and without visual domain randomisation. We train agents for Fetch and Jaco robots on a visuomotor control task and evaluate how well they generalise using different testing conditions. Finally, we investigate the internals of the trained agents by using a suite of interpretability techniques. Our results show that the primary outcome of domain randomisation is more robust, entangled representations, accompanied with larger weights with greater spatial structure; moreover, the types of changes are heavily influenced by the task setup and presence of additional proprioceptive inputs. Additionally, we demonstrate that our domain randomised agents require higher sample complexity, can overfit and more heavily rely on recurrent processing. Furthermore, even with an improved saliency method introduced in this work, we show that qualitative studies may not always correspond with quantitative measures, necessitating the combination of inspection tools in order to provide sufficient insights into the behaviour of trained agents.", "target": ["強化学習の汎化性能を高めるDomain Randomisation(DR)の効果を調べた研究(背景にノイズを入れたり色/テクスチャを変えたりする)。Sim-to-Real(実環境への転移)で有効かを検証している。実験の結果汎化性能の向上+初期化依存を減らせたがDRへのOverfitが起こり得ることを確認"]} {"source": "Affective tasks such as sentiment analysis, emotion classification, and sarcasm detection have been popular in recent years due to an abundance of user-generated data, accurate computational linguistic models, and a broad range of relevant applications in various domains. At the same time, many studies have highlighted the importance of text preprocessing, as an integral step to any natural language processing prediction model and downstream task. While preprocessing in affective systems is well-studied, preprocessing in word vector-based models applied to affective systems, is not. To address this limitation, we conduct a comprehensive analysis of the role of preprocessing techniques in affective analysis based on word vector models. 
Our analysis is the first of its kind and provides useful insights of the importance of each preprocessing technique when applied at the training phase, commonly ignored in pretrained word vector models, and/or at the downstream task phase.", "target": ["分散表現を学習する際、コーパスに対してかける前処理がどのような影響をもたらすのかを調査した研究。Negation(否定文を通常文に言い換える(not happy => sadなど))は様々なデータセットで効果があり、次いで品詞フィルタが効果有。StopWordとStemmingは逆に精度が落ちた。"]} {"source": "Motivated by human attention, computational attention mechanisms have been designed to help neural networks adjust their focus on specific parts of the input data. While attention mechanisms are claimed to achieve interpretability, little is known about the actual relationships between machine and human attention. In this work, we conduct the first quantitative assessment of human versus computational attention mechanisms for the text classification task. To achieve this, we design and conduct a large-scale crowd-sourcing study to collect human attention maps that encode the parts of a text that humans focus on when conducting text classification. Based on this new resource of human attention dataset for text classification, YELP-HAT, collected on the publicly available YELP dataset, we perform a quantitative comparative analysis of machine attention maps created by deep learning models and human attention maps. Our analysis offers insights into the relationships between human versus machine attention maps along three dimensions: overlap in word selections, distribution over lexical categories, and context-dependency of sentiment polarity. Our findings open promising future research opportunities ranging from supervised attention to the design of human-centric attention-based explanations.", "target": ["感情分類のタスクで、人間のAttention(注目する単語)と機械学習モデル(RNN系)のAttentionがどれくらい一致するかを調べた研究。Bi-directionalの場合複数人アノテーションの合意(AND)と近しい結果。系列が長くなるほど一致度合いは低くなる(ただ人間同士も低くなる)。"]} {"source": "How does a user's prior experience with deep learning impact accuracy? We present an initial study based on 31 participants with different levels of experience. Their task is to perform hyperparameter optimization for a given deep learning architecture. The results show a strong positive correlation between the participant's experience and the final performance. They additionally indicate that an experienced participant finds better solutions using fewer resources on average. The data suggests furthermore that participants with no prior experience follow random strategies in their pursuit of optimal hyperparameters. Our study investigates the subjective human factor in comparisons of state of the art results and scientific reproducibility in deep learning.", "target": ["image classification taskにおいてディープラーニングの事前経験がモデルの精度にどれだけ影響するのか?という問い対する実験を行った。その結果、参加者の経験と最終的なモデルの精度の間に強い正の相関が見られた。さらに経験豊富な参加者ほど、平均的に少ないリソースでより良い解決策を見つけられることがわかった。"]} {"source": "We propose a novel method for generating titles for unstructured text documents. We reframe the problem as a sequential question-answering task. A deep neural network is trained on document-title pairs with decomposable titles, meaning that the vocabulary of the title is a subset of the vocabulary of the document. To train the model we use a corpus of millions of publicly available document-title pairs: news articles and headlines. We present the results of a randomized double-blind trial in which subjects were unaware of which titles were human or machine-generated. 
When trained on approximately 1.5 million news articles, the model generates headlines that humans judge to be as good or better than the original human-written headlines in the majority of cases.", "target": ["ニュースのタイトルを生成する研究。質問回答(というよりスパン抽出)を繰り返すことで生成を行う。最初の質問は空白で、以後質問+回答=次の質問として繰り返す。人150万件の学習の結果、生成したタイトルが人間が作成したものより良い(人手)評価が得られた。"]} {"source": "Real-world data often follow a long-tailed distribution as the frequency of each class is typically different. For example, a dataset can have a large number of under-represented classes and a few classes with more than sufficient data. However, a model to represent the dataset is usually expected to have reasonably homogeneous performances across classes. Introducing class-balanced loss and advanced methods on data re-sampling and augmentation are among the best practices to alleviate the data imbalance problem. However, the other part of the problem about the under-represented classes will have to rely on additional knowledge to recover the missing information. In this work, we present a novel approach to address the long-tailed problem by augmenting the under-represented classes in the feature space with the features learned from the classes with ample samples. In particular, we decompose the features of each class into a class-generic component and a class-specific component using class activation maps. Novel samples of under-represented classes are then generated on the fly during training stages by fusing the class-specific features from the under-represented classes with the class-generic features from confusing classes. Our results on different datasets such as iNaturalist, ImageNet-LT, Places-LT and a long-tailed version of CIFAR have shown the state of the art performances.", "target": ["クラス毎のデータ数が大きく異なるLong-tailデータセットにおいて、稀少カテゴリと混同しやすい頻出カテゴリを選び、判断根拠になる部分以外を稀少カテゴリと特徴量空間で混ぜるオンラインデータ拡張手法を提案。既存手法と比較して大きく精度向上に成功。"]} {"source": "How can we tell whether an image has been mirrored? While we understand the geometry of mirror reflections very well, less has been said about how it affects distributions of imagery at scale, despite widespread use for data augmentation in computer vision. In this paper, we investigate how the statistics of visual data are changed by reflection. We refer to these changes as \"visual chirality\", after the concept of geometric chirality - the notion of objects that are distinct from their mirror image. Our analysis of visual chirality reveals surprising results, including low-level chiral signals pervading imagery stemming from image processing in cameras, to the ability to discover visual chirality in images of people and faces. Our work has implications for data augmentation, self-supervised learning, and image forensics.", "target": ["データ拡張で用いられる左右反転は、反転によってデータ分布が変わらないことを前提している。しかし、実はデータ分布が少し異なっており(時計は左手につける人がおおいが、反転すると右手につけるように見え、左手に時計がついている元データと異なる分布になる。)、DLを使うと反転した画像かどうか判断できた。このように反転させて分布が変わるものを抽出することで、新たなツールになるのではないかと期待がある。"]} {"source": "We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentations for video self-supervised learning and find that both spatial and temporal information are crucial. 
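The contrastive objective described in the CVRL entry above can be sketched with a standard InfoNCE-style loss: embeddings of two augmented clips from the same video form positives, while clips from other videos in the batch act as negatives. The temperature, batch size, and the use of random vectors in place of a real clip encoder are illustrative assumptions.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style loss over a batch of clip embeddings.

    z1[i] and z2[i] are embeddings of two differently augmented clips from the
    same video; every z2[j] with j != i acts as a negative. Returns the mean
    cross-entropy of picking the matching clip.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives sit on the diagonal

rng = np.random.default_rng(0)
batch, dim = 8, 16
base = rng.normal(size=(batch, dim))              # stand-in for per-video clip features
z1 = base + 0.05 * rng.normal(size=base.shape)    # "augmented" view 1
z2 = base + 0.05 * rng.normal(size=base.shape)    # "augmented" view 2
print("contrastive loss:", info_nce(z1, z2))
```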
We carefully design data augmentations involving spatial and temporal cues. Concretely, we propose a temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. We also propose a sampling-based temporal augmentation method to avoid overly enforcing invariance on clips that are distant in time. On Kinetics-600, a linear classifier trained on the representations learned by CVRL achieves 70.4% top-1 accuracy with a 3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated R3D-50. The performance of CVRL can be further improved to 72.9% with a larger R3D-152 (2x filters) backbone, significantly closing the gap between unsupervised and supervised video representation learning. Our code and models will be available at this https URL.", "target": ["教師なしで動画の潜在表現を学習する手法の提案。対照学習を使用しており、同じ動画から抽出されたクリップ(時系列に並んだ画像)同士は近く、異なる動画から抽出されたクリップ同士は遠くなるよう学習する。Augmentationはフレーム単位ではなく、クリップ単位で一貫した適用を行う。"]} {"source": "We present a generic image-to-image translation framework, pixel2style2pixel (pSp). Our pSp framework is based on a novel encoder network that directly generates a series of style vectors which are fed into a pretrained StyleGAN generator, forming the extended W+ latent space. We first show that our encoder can directly embed real images into W+, with no additional optimization. Next, we propose utilizing our encoder to directly solve image-to-image translation tasks, defining them as encoding problems from some input domain into the latent domain. By deviating from the standard invert first, edit later methodology used with previous StyleGAN encoders, our approach can handle a variety of tasks even when the input image is not represented in the StyleGAN domain. We show that solving translation tasks through StyleGAN significantly simplifies the training process, as no adversary is required, has better support for solving tasks without pixel-to-pixel correspondence, and inherently supports multi-modal synthesis via the resampling of styles. Finally, we demonstrate the potential of our framework on a variety of facial image-to-image translation tasks, even when compared to state-of-the-art solutions designed specifically for a single task, and further show that it can be extended beyond the human facial domain.", "target": ["入力画像を一度潜在表現(スタイル表現)に落とし、学習済みStyleGANの生成器にいれることによって、画像変換をする研究。一度潜在表現に落としているので、入力画像の画素情報に束縛されない画像の変換(顔の向きを正面にする等)ができる。また、このフレームワークを用いて、pix2pixのように幅広いタスク(高解像化、顔の向き変換等)をこなすことができる。"]} {"source": "Many NLP tasks such as tagging and machine reading comprehension are faced with the severe data imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms the training. The most commonly used cross entropy (CE) criteria is actually an accuracy-oriented objective, and thus creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, while at test time F1 score concerns more about positive examples. In this paper, we propose to use dice loss in replacement of the standard cross-entropy objective for data-imbalanced NLP tasks. 
Dice loss is based on the Sorensen-Dice coefficient or Tversky index, which attaches similar importance to false positives and false negatives, and is more immune to the data-imbalance issue. To further alleviate the dominating influence from easy-negative examples in training, we propose to associate training examples with dynamically adjusted weights to deemphasize easy-negative examples. Theoretical analysis shows that this strategy narrows down the gap between the F1 score in evaluation and the dice loss in training. With the proposed training objective, we observe significant performance boost on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5, CTB6 and UD1.4 for the part of speech tagging task; SOTA results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity recognition task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification.", "target": ["NLPの分類タスクでは、しばしば評価にF1スコアを使うが、最適化にはクロスエントロピーを使っており、ラベルデータのバランスが悪い場合はそれらで大きな乖離が発生する。そこでF1を滑らかにしたと解釈できるDSCに、簡単に分類出来るサンプルの比重をゼロにするような係数をかけたロスを提案。多くのモデル、データセットに渡って効果があることを確認した"]} {"source": "Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data. This paper is a systematic comparison of models that learn such representations. We find that the optimal approach depends critically on the intended application. Deeper, more complex models are preferable for representations to be used in supervised systems, but shallow log-linear models work best for building representation spaces that can be decoded with simple spatial distance metrics. We also propose two new unsupervised representation-learning objectives designed to optimise the trade-off between training time, domain portability and performance.", "target": ["文のベクトル表現について、モデルと性能を調査した研究。最終的に教師あり(文分類など)で使用するか、教師なしタスク(類似度判定など)で使用するかで適したモデルが異なるという結果。深いモデルは教師ありに強く、浅い対数線形は教師なしに強い傾向がある。"]} {"source": "For sequence models with large vocabularies, a majority of network parameters lie in the input and output layers. In this work, we describe a new method, DeFINE, for learning deep token representations efficiently. Our architecture uses a hierarchical structure with novel skip-connections which allows for the use of low dimensional input and output layers, reducing total parameters and training time while delivering similar or better performance versus existing methods. DeFINE can be incorporated easily in new or existing sequence models. Compared to state-of-the-art methods including adaptive input representations, this technique results in a 6% to 20% drop in perplexity. On WikiText-103, DeFINE reduces the total parameters of Transformer-XL by half with minimal impact on performance. On the Penn Treebank, DeFINE improves AWD-LSTM by 4 points with a 17% reduction in parameters, achieving comparable performance to state-of-the-art methods with fewer parameters.
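Returning to the dice-loss entry above, the sketch below shows one common soft-dice formulation with a self-adjusting (1 - p) factor that down-weights easy examples. The smoothing constant and the exact per-example form are illustrative choices; the paper discusses several closely related variants.

```python
import numpy as np

def self_adjusting_dice_loss(probs, targets, gamma=1.0):
    """Soft dice-style loss for binary labels.

    probs are predicted positive-class probabilities, targets are 0/1 labels.
    The (1 - p) factor shrinks the contribution of confidently classified
    examples (notably easy negatives); gamma is a smoothing constant. This is
    one common formulation, not necessarily the paper's exact variant.
    """
    probs, targets = np.asarray(probs, float), np.asarray(targets, float)
    weighted = (1.0 - probs) * probs
    dsc = (2.0 * weighted * targets + gamma) / (weighted + targets + gamma)
    return float(np.mean(1.0 - dsc))

# Imbalanced toy batch: one positive, many easy negatives.
probs = np.array([0.7, 0.1, 0.05, 0.2, 0.1])
targets = np.array([1, 0, 0, 0, 0])
print("dice loss:", self_adjusting_dice_loss(probs, targets))
```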
For machine translation, DeFINE improves the efficiency of the Transformer model by about 1.4 times while delivering similar performance.", "target": ["言語の埋め込みベクトルを洗練するDeFINEを提案。階層的にtokenのチャネルをグループ化・統合を行う。パラメータ数を削減することができ、シーケンス型モデルにおいてLSTM, Transformer両方で、DeFINEを使用することで精度をあげつつパラメータ数を削減できることを確認した。"]} {"source": "We introduce a deep and light-weight transformer, DeLighT, that delivers similar or better performance than standard transformer-based models with significantly fewer parameters. DeLighT more efficiently allocates parameters both (1) within each Transformer block using the DeLighT transformation, a deep and light-weight transformation, and (2) across blocks using block-wise scaling, which allows for shallower and narrower DeLighT blocks near the input and wider and deeper DeLighT blocks near the output. Overall, DeLighT networks are 2.5 to 4 times deeper than standard transformer models and yet have fewer parameters and operations. Experiments on benchmark machine translation and language modeling tasks show that DeLighT matches or improves the performance of baseline Transformers with 2 to 3 times fewer parameters on average. Our source code is available at: \\url{this https URL}", "target": ["軽量化かつ高効率なTransformerであるDeLighT(Deep and Light-weight Transformer)を提案。チャネルをグループ化した後に一度次元数を増やすDeFINEを改良したDERxTra、深くなる毎にパラメータ数(層数)を増やすBlock-wise Scalingがポイント。計算量が削減されるため層を深くすることができ、より高精度により小さい計算量で到達できる。"]} {"source": "In image-to-image translation, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain. We propose a straightforward method for doing so -- maximizing mutual information between the two, using a framework based on contrastive learning. The method encourages two elements (corresponding patches) to map to a similar point in a learned feature space, relative to other elements (other patches) in the dataset, referred to as negatives. We explore several critical design choices for making contrastive learning effective in the image synthesis setting. Notably, we use a multilayer, patch-based approach, rather than operate on entire images. Furthermore, we draw negatives from within the input image itself, rather than from the rest of the dataset. We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time. In addition, our method can even be extended to the training setting where each \"domain\" is only a single image.", "target": ["教師なし表現学習でよく用いられるcontrastive learningをGANによるドメイン変換に応用した研究。ドメイン変換前後の画像の各対応する場所では、変換前後でも構造をもつはずだという示唆より、同じ画像内で対応する位置を正、それ以外を負のサンプルとしてcontrastive learningを行う。"]} {"source": "In this paper, we are interested in generating fine-grained cartoon faces for various groups. We assume that one of these groups consists of sufficient training data while the others only contain few samples. Although the cartoon faces of these groups share similar style, the appearances in various groups could still have some specific characteristics, which makes them differ from each other. A major challenge of this task is how to transfer knowledge among groups and learn group-specific characteristics with only few samples. In order to solve this problem, we propose a two-stage training process. First, a basic translation model for the basic group (which consists of sufficient data) is trained. Then, given new samples of other groups, we extend the basic model by creating group-specific branches for each new group. 
Group-specific branches are updated directly to capture specific appearances for each group while the remaining group-shared parameters are updated indirectly to maintain the distribution of intermediate feature space. In this manner, our approach is capable to generate high-quality cartoon faces for various groups.", "target": ["顔画像をアニメっぽく変換するGAN。広いドメイン(女性全般など)を使ってBasic modelを学習させたたあとに、細かいドメイン(老人、子供、若者等)それぞれで分類器を併用しながら、各ドメインの違いを反映させるように学習させる戦略をとっている。生成されたものは、割りかしその人独自の特徴も捉えられていそう。"]} {"source": "Organizing companies by industry segment (e.g. artificial intelligence, healthcare or fintech) is useful for analyzing stock market performance and for designing theme base investment funds, among others. Current practice is to manually assign companies to sectors or industries from a small predefined list, which has two key limitations. First, due to the manual effort involved, this strategy is only feasible for relatively mainstream industry segments, and can thus not easily be used for niche or emerging topics. Second, the use of hard label assignments ignores the fact that different companies will be more or less exposed to a particular segment. To address these limitations, we propose to learn vector representations of companies based on their annual reports. The key challenge is to distill the relevant information from these reports for characterizing their industries, since annual reports also contain a lot of information which is not relevant for our purpose. To this end, we introduce a multi-task learning strategy, which is based on fine-tuning the BERT language model on (i) existing sector labels and (ii) stock market performance. Experiments in both English and Japanese demonstrate the usefulness of this strategy", "target": ["年次報告書から会社の特徴ベクトルを学習する研究で、単純にBERT転移ではなくセクターのラベル予測や株式値動きの関係性(同業界だと同値動きする傾向がある)との同期などを加えマルチタスク学習を行っている"]} {"source": "Offline reinforcement learning (RL purely from logged data) is an important avenue for deploying RL techniques in real-world scenarios. However, existing hyperparameter selection methods for offline RL break the offline assumption by evaluating policies corresponding to each hyperparameter setting in the environment. This online execution is often infeasible and hence undermines the main aim of offline RL. Therefore, in this work, we focus on \\textit{offline hyperparameter selection}, i.e. methods for choosing the best policy from a set of many policies trained using different hyperparameters, given only logged data. Through large-scale empirical evaluation we show that: 1) offline RL algorithms are not robust to hyperparameter choices, 2) factors such as the offline RL algorithm and method for estimating Q values can have a big impact on hyperparameter selection, and 3) when we control those factors carefully, we can reliably rank policies across hyperparameter choices, and therefore choose policies which are close to the best policy in the set. 
Overall, our results present an optimistic view that offline hyperparameter selection is within reach, even in challenging tasks with pixel observations, high dimensional action spaces, and long horizon.", "target": ["オフライン強化学習のハイパーパラメーター(hp)に対する頑健性を調査した研究。基本的な模倣学習手法Behavior Cloningと近年の手法であるCRR/D4PGの3つを特定レンジのhpで評価。hpによるばらつきは大きいが(概ねOver Estimateする傾向がある)、戦略固定の価値関数更新を行うことで影響を軽減できる。"]} {"source": "Object frequency in the real world often follows a power law, leading to a mismatch between datasets with long-tailed class distributions seen by a machine learning model and our expectation of the model to perform well on all classes. We analyze this mismatch from a domain adaptation point of view. First of all, we connect existing class-balanced methods for long-tailed classification to target shift, a well-studied scenario in domain adaptation. The connection reveals that these methods implicitly assume that the training data and test data share the same class-conditioned distribution, which does not hold in general and especially for the tail classes. While a head class could contain abundant and diverse training examples that well represent the expected data at inference time, the tail classes are often short of representative training data. To this end, we propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach. We validate our approach with six benchmark datasets and three loss functions.", "target": ["クラス間データ数の乖離が大きなデータセットでは、少数クラスデータにおいてテストと学習データの分布が異なる。このタスクをドメイン適用の問題として捉え、普通に学習したあとにモデルパラメータと同時にデータ毎の係数を学習させるメタ学習てきな機構で対応した。"]} {"source": "Deep-learning based salient object detection methods achieve great progress. However, the variable scale and unknown category of salient objects are great challenges all the time. These are closely related to the utilization of multi-level and multi-scale features. In this paper, we propose the aggregate interaction modules to integrate the features from adjacent levels, in which less noise is introduced because of only using small up-/down-sampling rates. To obtain more efficient multi-scale features from the integrated features, the self-interaction modules are embedded in each decoder unit. Besides, the class imbalance issue caused by the scale variation weakens the effect of the binary cross entropy loss and results in the spatial inconsistency of the predictions. Therefore, we exploit the consistency-enhanced loss to highlight the fore-/back-ground difference and preserve the intra-class consistency. Experimental results on five benchmark datasets demonstrate that the proposed method without any post-processing performs favorably against 23 state-of-the-art approaches. The source code will be publicly available at this https URL.", "target": ["画像の中で注目ポイントを区分けするSODタスクにおいて、複数の解像度を使いながら隣の解像度と情報を統合することで精度を向上させるMINetを提案した。多くのタスクで先行研究のSOTAに匹敵、またはうわまわる結果となっている。"]} {"source": "Geospatial object segmentation, as a particular semantic segmentation task, always faces with larger-scale variation, larger intra-class variance of background, and foreground-background imbalance in the high spatial resolution (HSR) remote sensing imagery. However, general semantic segmentation methods mainly focus on scale variation in the natural scene, with inadequate consideration of the other two problems that usually happen in the large area earth observation scene. 
In this paper, we argue that the problems lie in the lack of foreground modeling and propose a foreground-aware relation network (FarSeg) from the perspectives of relation-based and optimization-based foreground modeling, to alleviate the above two problems. From the perspective of relation, FarSeg enhances the discrimination of foreground features via foreground-correlated contexts associated by learning foreground-scene relation. Meanwhile, from the perspective of optimization, a foreground-aware optimization is proposed to focus on foreground examples and hard examples of background during training for a balanced optimization. The experimental results obtained using a large scale dataset suggest that the proposed method is superior to the state-of-the-art general semantic segmentation methods and achieves a better trade-off between speed and accuracy.", "target": ["高解像度画像の領域分割において、画像の最も抽象化された潜在表現を使って複数解像度の特徴量マップの関係性をヒートマップとして算出させることで、画像の文脈を加味した領域分割をすることを狙う。さらに難しいサンプルを優先的に学習させる機構も提案。"]} {"source": "We present Chirpy Cardinal, an open-domain dialogue agent, as a research platform for the 2019 Alexa Prize competition. Building an open-domain socialbot that talks to real people is challenging – such a system must meet multiple user expectations such as broad world knowledge, conversational style, and emotional connection. Our socialbot engages users on their terms – prioritizing their interests, feelings and autonomy. As a result, our socialbot provides a responsive, personalized user experience, capable of talking knowledgeably about a wide variety of topics, as well as chatting empathetically about ordinary life. Neural generation plays a key role in achieving these goals, providing the backbone for our conversational and emotional tone. At the end of the competition, Chirpy Cardinal progressed to the finals with an average rating of 3.6/5.0, a median conversation duration of 2 minutes 16 seconds, and a 90th percentile duration of over 12 minutes.", "target": ["対話botのコンペティションAlexa Prizeに応募された対話システムの解説。ユーザーの発話解析、意図分類に基づく発話生成、必要なら話題変更等の提案(prompt)の3構成となっている。発話文解析(CoreNLP)、DNN(BERTベースの分類器)のみEC2で他はLambdaで構成、状態はDynamoDBで管理している。"]} {"source": "Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a swapped prediction mechanism where we predict the cluster assignment of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much.
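To illustrate the swapped prediction mechanism in the SwAV entry above, the sketch below computes the loss in which each view's prototype scores must predict the cluster assignment (code) obtained from the other view. The Sinkhorn-based equipartition step that SwAV actually uses to produce the codes is omitted; the stand-in codes, temperature, and sizes here are purely illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def swapped_prediction_loss(scores1, scores2, codes1, codes2, temperature=0.1):
    """Swapped prediction: each view's prototype scores must predict the
    cluster assignment (code) computed from the other view.

    scores* are (batch, prototypes) similarities between embeddings and
    prototype vectors; codes* are the target assignments. In SwAV the codes
    come from a Sinkhorn-based equipartition step, which is omitted here.
    """
    p1, p2 = softmax(scores1 / temperature), softmax(scores2 / temperature)
    loss1 = -(codes2 * np.log(p1)).sum(axis=1).mean()   # view 1 predicts view 2's code
    loss2 = -(codes1 * np.log(p2)).sum(axis=1).mean()   # view 2 predicts view 1's code
    return 0.5 * (loss1 + loss2)

rng = np.random.default_rng(0)
batch, protos = 8, 5
scores1, scores2 = rng.normal(size=(batch, protos)), rng.normal(size=(batch, protos))
# Stand-in codes for the demo only: sharpened soft assignments of each view.
codes1, codes2 = softmax(scores1 / 0.05), softmax(scores2 / 0.05)
print("swapped prediction loss:", swapped_prediction_loss(scores1, scores2, codes1, codes2))
```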
We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.", "target": ["表現学習においてSimCLRなどは大きなバッチサイズを必要とした。SwAVはオンラインで、同じ画像を違うデータ拡張した2つの画像のクラスタリング結果に一貫性をもたせる学習をすることで小さなバッチサイズでも表現学習を可能にした。得られた表現を使って、物体検知、画像分類で成果。"]} {"source": "Solving long-tail large vocabulary object detection with deep learning based models is a challenging and demanding task, which is however under-explored. In this work, we provide the first systematic analysis on the underperformance of state-of-the-art models in front of long-tail distribution. We find existing detection methods are unable to model few-shot classes when the dataset is extremely skewed, which can result in classifier imbalance in terms of parameter magnitude. Directly adapting long-tail classification models to detection frameworks can not solve this problem due to the intrinsic difference between detection and classification. In this work, we propose a novel balanced group softmax (BAGS) module for balancing the classifiers within the detection frameworks through group-wise training. It implicitly modulates the training process for the head and tail classes and ensures they are both sufficiently trained, without requiring any extra sampling for the instances from the tail classes. Extensive experiments on the very recent long-tail large vocabulary object recognition benchmark LVIS show that our proposed BAGS significantly improves the performance of detectors with various backbones and frameworks on both object detection and instance segmentation. It beats all state-of-the-art methods transferred from long-tail image classification and establishes new state-of-the-art. Code is available at this https URL.", "target": ["物体検知において、少数サンプルしかないカテゴリでは性能が悪くなる。そのようなデータセットに対する分類タスク用手法は、物体検知と相容れず精度がそこまで上がらなかった。そこでカテゴリごとの訓練サンプル数に応じてグループを設け、各グループ内に”その他”クラスを作る手法を提案。それによりマイナーカテゴリの分類精度を向上させることができる。"]} {"source": "We introduce Hindsight Off-policy Options (HO2), a data-efficient option learning algorithm. Given any trajectory, HO2 infers likely option choices and backpropagates through the dynamic programming inference procedure to robustly train all policy components off-policy and end-to-end. The approach outperforms existing option learning methods on common benchmarks. To better understand the option framework and disentangle benefits from both temporal and action abstraction, we evaluate ablations with flat policies and mixture policies with comparable optimization. The results highlight the importance of both types of abstraction as well as off-policy training and trust-region constraints, particularly in challenging, simulated 3D robot manipulation tasks from raw pixel inputs. Finally, we intuitively adapt the inference step to investigate the effect of increased temporal abstraction on training with pre-trained options and from scratch.", "target": ["タスクを分割して解く階層型強化学習の手法。行動の背景に状態に応じたスキル(持ち上げる・運ぶetc)を仮定するoption frameworkを利用しつつ、スキルの発動/終了が頻発しないようlossで制御を行っている。Replay Bufferを使用してCriticを更新=>期待値最大になるよう戦略更新、を繰り返す。"]} {"source": "Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR. In this note, we verify the effectiveness of two of SimCLR's design improvements by implementing them in the MoCo framework. With simple modifications to MoCo---namely, using an MLP projection head and more data augmentation---we establish stronger baselines that outperform SimCLR and do not require large training batches.
We hope this will make state-of-the-art unsupervised learning research more accessible. Code will be made public.", "target": ["表現学習で良い結果を出しているSimCLR(https://arxiv.org/abs/2002.05709)で使用されている”MLP projection head”,”strong data augmentation”を使って、Moco(https://arxiv.org/abs/1911.05722)を改善した。"]} {"source": "This paper investigates the principles of embedding learning to tackle the challenging semi-supervised video object segmentation. Different from previous practices that only explore the embedding learning using pixels from foreground object(s), we consider background should be equally treated and thus propose Collaborative video object segmentation by Foreground-Background Integration (CFBI) approach. Our CFBI implicitly imposes the feature embedding from the target foreground object and its corresponding background to be contrastive, promoting the segmentation results accordingly. With the feature embedding from both foreground and background, our CFBI performs the matching process between the reference and the predicted sequence from both pixel and instance levels, making the CFBI be robust to various object scales. We conduct extensive experiments on three popular benchmarks, i.e., DAVIS 2016, DAVIS 2017, and YouTube-VOS. Our CFBI achieves the performance (J&F) of 89.4%, 81.9%, and 81.4%, respectively, outperforming all the other state-of-the-art methods. Code: this https URL.", "target": ["半教師ありのビデオのセグメンテーション(VOS)において、先行研究のように前景だけ使用するのではなく、背景情報も一緒に取り込むことで精度向上を目指した研究。 背景を入れないと、隠れていた物体の検知が上手くいかないが、背景情報を取り込むことによってそれを防げる。DAVIS2016等で定量、定性的に良い結果。"]} {"source": "Image quality assessment (IQA) is the key factor for the fast development of image restoration (IR) algorithms. The most recent IR methods based on Generative Adversarial Networks (GANs) have achieved significant improvement in visual performance, but also presented great challenges for quantitative evaluation. Notably, we observe an increasing inconsistency between perceptual quality and the evaluation results. Then we raise two questions: (1) Can existing IQA methods objectively evaluate recent IR algorithms? (2) When focus on beating current benchmarks, are we getting better IR algorithms? To answer these questions and promote the development of IQA methods, we contribute a large-scale IQA dataset, called Perceptual Image Processing Algorithms (PIPAL) dataset. Especially, this dataset includes the results of GAN-based methods, which are missing in previous datasets. We collect more than 1.13 million human judgments to assign subjective scores for PIPAL images using the more reliable \"Elo system\". Based on PIPAL, we present new benchmarks for both IQA and super-resolution methods. Our results indicate that existing IQA methods cannot fairly evaluate GAN-based IR algorithms. While using appropriate evaluation methods is important, IQA methods should also be updated along with the development of IR algorithms. At last, we improve the performance of IQA networks on GAN-based distortions by introducing anti-aliasing pooling. Experiments show the effectiveness of the proposed method.", "target": ["GAN生成など様々なタイプの画像の歪みを含むデータセットPIPALと、チェスなどの強さを表すのによく使用されるElo ratingを使った評価方法を提案。 それを使ってよく使われるPSNRとSSIMを評価すると、知覚的な結果と相容れず、他の評価手法も改善が必要そうであることがわかった。"]} {"source": "Unsupervised image-to-image translation is an inherently ill-posed problem.
Recent methods based on deep encoder-decoder architectures have shown impressive results, but we show that they only succeed due to a strong locality bias, and they fail to learn very simple nonlocal transformations (e.g. mapping upside down faces to upright faces). When the locality bias is removed, the methods are too powerful and may fail to learn simple local transformations. In this paper we introduce linear encoder-decoder architectures for unsupervised image to image translation. We show that learning is much easier and faster with these architectures and yet the results are surprisingly effective. In particular, we show a number of local problems for which the results of the linear methods are comparable to those of state-of-the-art architectures but with a fraction of the training time, and a number of nonlocal problems for which the state-of-the-art fails while linear methods succeed.", "target": ["GANによるドメイン変換は局在するバイアスに頼っており、画像全体が大きく変化すると簡単なタスク(上下反転など)でも失敗する。線形変換のみでも、局在性によらないように変換をすれば(PCA等)、それらのタスクでも上手くいくことを発見。"]} {"source": "We introduce a simple and versatile framework for image-to-image translation. We unearth the importance of normalization layers, and provide a carefully designed two-stream generative model with newly proposed feature transformations in a coarse-to-fine fashion. This allows multi-scale semantic structure information and style representation to be effectively captured and fused by the network, permitting our method to scale to various tasks in both unsupervised and supervised settings. No additional constraints (e.g., cycle consistency) are needed, contributing to a very clean and simple method. Multi-modal image synthesis with arbitrary style control is made possible. A systematic study compares the proposed method with several state-of-the-art task-specific baselines, verifying its effectiveness in both perceptual quality and quantitative evaluations.", "target": ["style,contentの2 streamネットワークを複数解像度で情報統合をすることでスタイル変換を行う。SPADEとAdaINをそれぞれ改良したFADE,FAdaIN構造でスタイル入れ込みを行う。先行研究と比較して綺麗に変換できている。"]} {"source": "Words can be represented by composing the representations of subword units such as word segments, characters, and/or character n-grams. While such representations are effective and may capture the morphological regularities of words, they have not been systematically compared, and it is not understood how they interact with different morphological typologies. On a language modeling task, we present experiments that systematically vary (1) the basic unit of representation, (2) the composition of these representations, and (3) the morphological typology of the language modeled. Our results extend previous findings that character representations are effective across typologies, and we find that a previously unstudied combination of character trigram representations composed with bi-LSTMs outperforms most others. But we also find room for improvement: none of the character-level models match the predictive accuracy of a model with access to true morphological analyses, even when learned from an order of magnitude more data.", "target": ["分割の手法として文字/文字の組み合わせと単語とで言語モデルのパフォーマンスがどう変わるか調査した研究。単語より文字が様々な言語体系で優秀で、とくにBi-LSTMで3文字(trigram)を使うのが良好な結果だったとのこと。"]} {"source": "The success of pretrained transformer language models (LMs) in natural language processing has led to a wide range of pretraining setups. 
In particular, these models employ a variety of subword tokenization methods, most notably byte-pair encoding (BPE) (Sennrich et al., 2016; Gage, 1994), the WordPiece method (Schuster and Nakajima, 2012), and unigram language modeling (Kudo, 2018), to segment text. However, to the best of our knowledge, the literature does not contain a direct evaluation of the impact of tokenization on language model pretraining. We analyze differences between BPE and unigram LM tokenization, finding that the latter method recovers subword units that align more closely with morphology and avoids problems stemming from BPE's greedy construction procedure. We then compare the fine-tuned task performance of identical transformer masked language models pretrained with these tokenizations. Across downstream tasks and two languages (English and Japanese), we find that the unigram LM tokenization method matches or outperforms BPE. We hope that developers of future pretrained LMs will consider adopting the unigram LM method over the more prevalent BPE.", "target": ["事前学習済み言語モデルのパフォーマンスがトークナイズによってどう変化するかを調査した研究。連続するトークン対の頻度をもとにボトムアップ的にトークンを組み立てるBPEと、単語など意味のあるトークンを切り崩していく(枝刈りしていく)ユニグラム言語モデル方式を比較。ユニグラムの方が良好な結果。"]} {"source": "The scarcity of comprehensive up-to-date studies on evaluation metrics for text summarization and the lack of consensus regarding evaluation protocols continue to inhibit progress. We address the existing shortcomings of summarization evaluation methods along five dimensions: 1) we re-evaluate 14 automatic evaluation metrics in a comprehensive and consistent fashion using neural summarization model outputs along with expert and crowd-sourced human annotations, 2) we consistently benchmark 23 recent summarization models using the aforementioned automatic evaluation metrics, 3) we assemble the largest collection of summaries generated by models trained on the CNN/DailyMail news dataset and share it in a unified format, 4) we implement and share a toolkit that provides an extensible and unified API for evaluating summarization models across a broad range of automatic metrics, 5) we assemble and share the largest and most diverse, in terms of model types, collection of human judgments of model-generated summaries on the CNN/Daily Mail dataset annotated by both expert judges and crowd-source workers. We hope that this work will help promote a more complete evaluation protocol for text summarization as well as advance research in developing evaluation metrics that better correlate with human judgments.", "target": ["14の自動要約評価指標を専門家/クラウドソースのアノテーション(人手評価)と照らして再評価すると共に、これらの指標で最近の代表的な23の要約モデルを評価した研究。評価に使用したツールキット(実装)も公開されている。"]} {"source": "Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware.
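To make the BPE-vs-unigram comparison in the tokenization entry above concrete, here is a toy sketch of BPE's greedy step: count adjacent symbol pairs over a corpus and merge the most frequent one. Pure Python; the corpus and helper names are illustrative, not the paper's implementation.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs over {symbol tuple: frequency} (BPE's greedy criterion)."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(words, pair):
    """Replace every occurrence of the chosen pair with a single merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: word (pre-split into characters) -> frequency.
corpus = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6}
pair = most_frequent_pair(corpus)     # ('e', 'r') in this toy corpus
corpus = merge_pair(corpus, pair)     # 'e' and 'r' become the single symbol 'er'
```

Unigram LM tokenization works in the opposite direction: it starts from a large vocabulary and prunes low-probability tokens, which is the "top-down" behavior contrasted with BPE in the summary above.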
As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.", "target": ["TransformerのSelf-Attentionで演算量が二次になってしまう問題を解決した論文。Attention対象をRandomサンプル/Attention範囲限定(Window)/一部トークンについて全Attention、の3つを組み合わせることで効率化している。一般的な仮定の下ではチューリング完全なことも証明"]} {"source": "Over the last years, visual localization and mapping solutions have been adopted by an increasing number of mixed reality and robotics systems. The recent trend towards cloud-based localization and mapping systems has raised significant privacy concerns. These are mainly grounded by the fact that these services require users to upload visual data to their servers, which can reveal potentially confidential information, even if only derived image features are uploaded. Recent research addresses some of these concerns for the task of image-based localization by concealing the geometry of the query images and database maps. The core idea of the approach is to lift 2D/3D feature points to random lines, while still providing sufficient constraints for camera pose estimation. In this paper, we further build upon this idea and propose solutions to the different core algorithms of an incremental Structure-from-Motion pipeline based on random line features. With this work, we make another fundamental step towards enabling privacy preserving cloud-based mapping solutions. Various experiments on challenging real-world datasets demonstrate the practicality of our approach achieving comparable results to standard Structure-from-Motion systems.", "target": ["プライバシーを保護したSfM(2次元画像からの3次元構造推定)の手法。実用上SfMの処理はサーバーサイドで行われることが多いがこの場合画像そのものか画像特徴を複数枚アップロードする必要がありプライバシー的に問題がある。そこで特徴点一致を取るための線(と重力方向)のみを使用し復元する手法を提案"]} {"source": "Convolution is one of the most essential components of architectures used in computer vision. As machine learning moves towards reducing the expert bias and learning it from data, a natural next step seems to be learning convolution-like structures from scratch. This, however, has proven elusive. For example, current state-of-the-art architecture search algorithms use convolution as one of the existing modules rather than learning it from data. In an attempt to understand the inductive bias that gives rise to convolutions, we investigate minimum description length as a guiding principle and show that in some settings, it can indeed be indicative of the performance of architectures. To find architectures with small description length, we propose \\beta-LASSO, a simple variant of LASSO algorithm that, when applied on fully-connected networks for image classification tasks, learns architectures with local connections and achieves state-of-the-art accuracies for training fully-connected nets on CIFAR-10 (85.19%), CIFAR-100 (59.56%) and SVHN (94.07%) bridging the gap between fully-connected and convolutional nets.", "target": ["全結合とCNNとで、何が画像分類精度の差異をもたらしているのかを検証した研究。CNNの主要な3特徴(局所接続・重みの共有・ネットワークの深さ)のうち局所接続の役割が大きいことを発見。省パラメーターで疎な構造を強制することでこれが実現できるとし、LASSOをより制約を強くかけるβ-LASSOを提案。"]} {"source": "Object detection has been dominated by anchor-based detectors for several years. Recently, anchor-free detectors have become popular due to the proposal of FPN and Focal Loss. In this paper, we first point out that the essential difference between anchor-based and anchor-free detection is actually how to define positive and negative training samples, which leads to the performance gap between them. 
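For the BigBird entry above, the sparse pattern combines three ingredients per query: a sliding window, a few random keys, and a small set of global tokens. A toy boolean mask built with NumPy illustrates the pattern; the sizes are arbitrary and this is only a sketch, not the paper's code.

```python
import numpy as np

def bigbird_mask(seq_len, window=3, n_global=1, n_random=2, seed=0):
    """Boolean attention mask: mask[i, j] is True if query i may attend to key j."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        # Sliding window around position i.
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
        # A few random keys per query.
        mask[i, rng.choice(seq_len, size=n_random, replace=False)] = True
    # Global tokens attend everywhere and are attended to by everyone.
    mask[:n_global, :] = True
    mask[:, :n_global] = True
    return mask

m = bigbird_mask(seq_len=16)
print(m.sum(), "allowed query-key pairs out of", m.size)  # linear rather than quadratic growth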
If they adopt the same definition of positive and negative samples during training, there is no obvious difference in the final performance, no matter regressing from a box or a point. This shows that how to select positive and negative training samples is important for current object detectors. Then, we propose an Adaptive Training Sample Selection (ATSS) to automatically select positive and negative samples according to statistical characteristics of object. It significantly improves the performance of anchor-based and anchor-free detectors and bridges the gap between them. Finally, we discuss the necessity of tiling multiple anchors per location on the image to detect objects. Extensive experiments conducted on MS COCO support our aforementioned analysis and conclusions. With the newly introduced ATSS, we improve state-of-the-art detectors by a large margin to 50.7\\% AP without introducing any overhead. The code is available at this https URL", "target": ["アンカーなし法(FCOS)とアンカーあり法(RetinaNet)の精度の違いは、学習時のポジティブ/ネガティブサンプルの取り方に依存していることを示した。また、物体ごとに適切にポジティブ/ネガティブサンプルを取ることができるATSSを提案し、FCOSとRetinaNetの両方で精度が向上することを確認した。また、ATSSを適用した場合、アンカーの数を増やしても精度は変わらないため、複数のアンカーを使用しても意味がないことを確認した。"]} {"source": "We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogue to semantic segmentation. Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor box free, as well as proposal free. By eliminating the predefined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes such as calculating overlapping during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With the only post-processing non-maximum suppression (NMS), FCOS with ResNeXt-64x4d-101 achieves 44.7% in AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: this https URL", "target": ["アンカーを使わずに、特徴量マップの各位置ijからBounding boxの上下左右までの正しい距離の回帰を行い、center-nessというBBの中央位置のロスを強制的に大きくすることによって、その中心付近を優先させる手法を提案した。同じ機構でBounding boxを使うRetinaNetと比較して大きな改善が見られた。"]} {"source": "Beyond depth estimation from a single image, the monocular cue is useful in a broader range of depth inference applications and settings---such as when one can leverage other available depth cues for improved accuracy. Currently, different applications, with different inference tasks and combinations of depth cues, are solved via different specialized networks---trained separately for each application. Instead, we propose a versatile task-agnostic monocular model that outputs a probability distribution over scene depth given an input color image, as a sample approximation of outputs from a patch-wise conditional VAE. We show that this distributional output can be used to enable a variety of inference tasks in different settings, without needing to retrain for each application.
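The ATSS rule summarized in the preceding entry picks positives with an adaptive IoU threshold (mean + standard deviation of the candidate IoUs). A minimal NumPy sketch of only that step follows; the IoU helper, the toy anchors, and the omission of ATSS's per-level candidate pre-selection by center distance are all simplifications for illustration.

```python
import numpy as np

def iou(boxes, gt):
    """IoU between an (N, 4) array of boxes and one ground-truth box, xyxy format."""
    x1 = np.maximum(boxes[:, 0], gt[0]); y1 = np.maximum(boxes[:, 1], gt[1])
    x2 = np.minimum(boxes[:, 2], gt[2]); y2 = np.minimum(boxes[:, 3], gt[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_b + area_g - inter + 1e-9)

def atss_positives(candidate_boxes, gt_box):
    """Mark candidates whose IoU exceeds mean + std of the candidate IoUs (adaptive threshold)."""
    ious = iou(candidate_boxes, gt_box)
    threshold = ious.mean() + ious.std()
    return ious >= threshold

anchors = np.array([[0, 0, 10, 10], [5, 5, 15, 15], [8, 8, 18, 18], [30, 30, 40, 40]], dtype=float)
print(atss_positives(anchors, np.array([6, 6, 16, 16], dtype=float)))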
Across a diverse set of applications (depth completion, user guided estimation, etc.), our common model yields results with high accuracy---comparable to or surpassing that of state-of-the-art methods dependent on application-specific networks.", "target": ["単眼画像のDepthの推定をパッチ分割して曖昧性を持たせる。それにより再学習することなく様々な付加情報(他のdepth情報)を組み合わせることが可能で、事後分布最大化により改善ができる。"]} {"source": "We present a novel method to solve image analogy problems: it allows to learn the relation between paired images present in training data, and then generalize and generate images that correspond to the relation, but were never seen in the training set. Therefore, we call the method Conditional Analogy Generative Adversarial Network (CAGAN), as it is based on adversarial training and employs deep convolutional neural networks. An especially interesting application of that technique is automatic swapping of clothing on fashion model photos. Our work has the following contributions. First, the definition of the end-to-end trainable CAGAN architecture, which implicitly learns segmentation masks without expensive supervised labeling data. Second, experimental results show plausible segmentation masks and often convincing swapped images, given the target article. Finally, we discuss the next steps for that technique: neural network architecture improvements and more advanced applications.", "target": ["服を入れ替えるGAN。人と服のペア画像を入れて、その服を人が着ているかを判断するDiscriminator、人、その人が着ている服、違う服の3つを入れて服の入れ替え画像を作るGeneratorから構成されている。"]} {"source": "In this paper, we present an integrated system for automatically generating and editing face images through face swapping, attribute-based editing, and random face parts synthesis. The proposed system is based on a deep neural network that variationally learns the face and hair regions with large-scale face image datasets. Different from conventional variational methods, the proposed network represents the latent spaces individually for faces and hairs. We refer to the proposed network as region-separative generative adversarial network (RSGAN). The proposed network independently handles face and hair appearances in the latent spaces, and then, face swapping is achieved by replacing the latent-space representations of the faces, and reconstruct the entire face image with them. This approach in the latent space robustly performs face swapping even for images which the previous methods result in failure due to inappropriate fitting or the 3D morphable models. In addition, the proposed system can further edit face-swapped images with the same network by manipulating visual attributes or by composing them with randomly generated face or hair parts.", "target": ["顔を入れ替えたり、属性情報を編集できるRSGANを提案。顔と髪をVAEを使って潜在表現に落とし、属性ベクトルcと合わせてGで新たな顔を生成する。顔の入れ替えはわりと違和感なく生成できている。"]} {"source": "We present Swapnet, a framework to transfer garments across images of people with arbitrary body pose, shape, and clothing. Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body. We present a neural network architecture that tackles these sub-problems with two task-specific sub-networks. Since acquiring pairs of images showing the same clothing on different bodies is difficult, we propose a novel weakly-supervised approach that generates training pairs from a single image via data augmentation.
We present the first fully automatic method for garment transfer in unconstrained images without solving the difficult 3D reconstruction problem. We demonstrate a variety of transfer results and highlight our advantages over traditional image-to-image and analogy pipelines.", "target": ["衣類の転移を行う研究。衣類SegmentationのPose転移を行うStage1と、腕等のROI poolingされた表現とStage1の出力(衣類Segmentation)からテクスチャを構成するstage2の2つから構成される。"]} {"source": "Membership inference attacks are one of the simplest forms of privacy leakage for machine learning models: given a data point and model, determine whether the point was used to train the model. Existing membership inference attacks exploit models' abnormal confidence when queried on their training data. These attacks do not apply if the adversary only gets access to models' predicted labels, without a confidence measure. In this paper, we introduce label-only membership inference attacks. Instead of relying on confidence scores, our attacks evaluate the robustness of a model's predicted labels under perturbations to obtain a fine-grained membership signal. These perturbations include common data augmentations or adversarial examples. We empirically show that our label-only membership inference attacks perform on par with prior attacks that required access to model confidences. We further demonstrate that label-only attacks break multiple defenses against membership inference attacks that (implicitly or explicitly) rely on a phenomenon we call confidence masking. These defenses modify a model's confidence scores in order to thwart attacks, but leave the model's predicted labels unchanged. Our label-only attacks demonstrate that confidence-masking is not a viable defense strategy against membership inference. Finally, we investigate worst-case label-only attacks, that infer membership for a small number of outlier data points. We show that label-only attacks also match confidence-based attacks in this setting. We find that training models with differential privacy and (strong) L2 regularization are the only known defense strategies that successfully prevents all attacks. This remains true even when the differential privacy budget is too high to offer meaningful provable guarantees.", "target": ["分類器の学習データを特定するメンバーシップ推論攻撃の新手法。従来手法は分類器の信頼スコアを利用しているため、信頼スコアの秘匿で防御できる。一方、新手法は分類器のラベルのみで攻撃できるため、従来の防御手法は通用しない。本手法の防御策としては差分プライバシーが有効であるとしている。"]} {"source": "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective. To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5% compared to current weakly supervised methods. 
It also achieves 47% of the performance gain of verifying all images with only 3.2% images verified on an image classification task. Source code and dataset will be available at this http URL.", "target": ["ラベルノイズを特定する研究。Attentionを使って、そのクラスの代表ベクトルを選び、検査対象のデータとの類似度を測る。ラベルノイズ候補を特定できると同時にクラスを代表する画像も特定できる。"]} {"source": "Deep neural networks (DNNs) have achieved great success in a wide variety of medical image analysis tasks. However, these achievements indispensably rely on the accurately-annotated datasets. If with the noisy-labeled images, the training procedure will immediately encounter difficulties, leading to a suboptimal classifier. This problem is even more crucial in the medical field, given that the annotation quality requires great expertise. In this paper, we propose an effective iterative learning framework for noisy-labeled medical image classification, to combat the lacking of high quality annotated medical data. Specifically, an online uncertainty sample mining method is proposed to eliminate the disturbance from noisy-labeled images. Next, we design a sample re-weighting strategy to preserve the usefulness of correctly-labeled hard samples. Our proposed method is validated on skin lesion classification task, and achieved very promising results.", "target": ["オンラインでラベルノイズを適応的に削減するOUSM損失を提案。ロスが大きいTopKの勾配を使用せずにモデルを更新する。しかし、それだけだと少数クラスを無視する可能性があるので、外れ値データに大きなウェイトをかけるpLOFも同時に提案。 (この2つのロスはトレードオフになる)"]} {"source": "In many real-world prediction tasks, class labels include information about the relative ordering between labels, which is not captured by commonly-used loss functions such as multi-category cross-entropy. Recently, the deep learning community adopted ordinal regression frameworks to take such ordering information into account. Neural networks were equipped with ordinal regression capabilities by transforming ordinal targets into binary classification subtasks. However, this method suffers from inconsistencies among the different binary classifiers. To resolve these inconsistencies, we propose the COnsistent RAnk Logits (CORAL) framework with strong theoretical guarantees for rank-monotonicity and consistent confidence scores. Moreover, the proposed method is architecture-agnostic and can extend arbitrary state-of-the-art deep neural network classifiers for ordinal regression tasks. The empirical evaluation of the proposed rank-consistent method on a range of face-image datasets for age prediction shows a substantial reduction of the prediction error compared to the reference ordinal regression network.", "target": ["年齢予測のように順データの分類をするタスクにおけるロス関数の提案。[10歳以上][20歳以上]等のラベルを用いて2値分類を実施。また、各ラベルでバイアスを用いることで一貫性のある予測を理論的に保証した。"]} {"source": "We present a conceptually simple but effective funnel activation for image recognition tasks, called Funnel activation (FReLU), that extends ReLU and PReLU to a 2D activation by adding a negligible overhead of spatial condition. The forms of ReLU and PReLU are y = max(x, 0) and y = max(x, px), respectively, while FReLU is in the form of y = max(x,T(x)), where T(x) is the 2D spatial condition. Moreover, the spatial condition achieves a pixel-wise modeling capacity in a simple way, capturing complicated visual layouts with regular convolutions. We conduct experiments on ImageNet, COCO detection, and semantic segmentation tasks, showing great improvements and robustness of FReLU in the visual recognition tasks. 
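For the CORAL entry above, the ordinal target is expanded into K-1 cumulative binary labels, and the head shares one weight vector across all thresholds with a separate bias per threshold, which is what yields rank-consistent logits. A minimal sketch, assuming PyTorch; the dimensions and labels are illustrative.

```python
import torch
import torch.nn as nn

def ordinal_to_binary(y, num_classes):
    """Rank label y -> K-1 cumulative binary targets: [y > 0, y > 1, ...]."""
    levels = torch.arange(num_classes - 1)
    return (y.unsqueeze(1) > levels.unsqueeze(0)).float()

class CoralHead(nn.Module):
    """One shared weight vector with K-1 independent biases (CORAL-style head)."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.weight = nn.Linear(in_dim, 1, bias=False)
        self.biases = nn.Parameter(torch.zeros(num_classes - 1))

    def forward(self, x):
        # (batch, 1) shared score broadcast against (K-1,) biases -> (batch, K-1) logits.
        return self.weight(x) + self.biases

head = CoralHead(in_dim=16, num_classes=5)
x = torch.randn(4, 16)
targets = ordinal_to_binary(torch.tensor([0, 2, 3, 4]), num_classes=5)
loss = nn.functional.binary_cross_entropy_with_logits(head(x), targets)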
Code is available at this https URL.", "target": ["新しい活性化関数 Funnel activation (FReLU)の提案。ReLUとPReLUが1次元活性化であるのに対し、FReLUは2次元に拡張している。また主な画像タスクであるClassification、Detection、Segmentationで他の活性化関数と比べ精度が向上している。"]} {"source": "It has been believed that the virtue of using statistical procedures is on uncertainty quantification in statistical decisions, and the bootstrap method has been commonly used for this purpose. However, nowadays as the size of data massively increases and statistical models become more complicated, the implementation of bootstrapping turns out to be practically challenging due to its repetitive nature in computation. To overcome this issue, we propose a novel computational procedure called {\\it Generative Bootstrap Sampler} (GBS), which constructs a generator function of bootstrap evaluations, and this function transforms the weights on the observed data points to the bootstrap distribution. The GBS is implemented by one single optimization, without repeatedly evaluating the optimizer of bootstrapped loss function as in standard bootstrapping procedures. As a result, the GBS is capable of reducing computational time of bootstrapping by hundreds of folds when the data size is massive. We show that the bootstrapped distribution evaluated by the GBS is asymptotically equivalent to the conventional counterpart and empirically they are indistinguishable. We examine the proposed idea to bootstrap various models such as linear regression, logistic regression, Cox proportional hazard model, and Gaussian process regression model, quantile regression, etc. The results show that the GBS procedure is not only accelerating the computational speed, but it also attains a high level of accuracy to the target bootstrap distribution. Additionally, we apply this idea to accelerate the computation of other repetitive procedures such as bootstrapped cross-validation, tuning parameter selection, and permutation test.", "target": ["複雑なデータ分布に対し繰り返しサンプリングを行うことでパラメーターを推定する、ブートストラップ法を効率化する手法の提案。重み付きサンプリングから推定される分布パラメーターを生成する関数をNNで実装、学習させることで推定のスピードを高速化する(サンプリング=>推論(GPU最適))。"]} {"source": "The point estimates of ReLU classification networks---arguably the most widely used neural network architecture---have been shown to yield arbitrarily high confidence far away from the training data. This architecture, in conjunction with a maximum a posteriori estimation scheme, is thus not calibrated nor robust. Approximate Bayesian inference has been empirically demonstrated to improve predictive uncertainty in neural networks, although the theoretical analysis of such Bayesian approximations is limited. We theoretically analyze approximate Gaussian distributions on the weights of ReLU networks and show that they fix the overconfidence problem. Furthermore, we show that even a simplistic, thus cheap, Bayesian approximation, also fixes these issues. This indicates that a sufficient condition for a calibrated uncertainty on a ReLU network is \"to be a bit Bayesian\". These theoretical results validate the usage of last-layer Bayesian approximation and motivate a range of a fidelity-cost trade-off. 
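As a rough illustration of the FReLU entry above, y = max(x, T(x)) where T is a depthwise (per-channel) convolution; a compact PyTorch sketch, with the 3x3 kernel and BatchNorm as commonly used choices rather than the authors' exact code.

```python
import torch
import torch.nn as nn

class FReLU(nn.Module):
    """Funnel activation: y = max(x, T(x)), T being a depthwise 3x3 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                                 groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return torch.max(x, self.bn(self.spatial(x)))

y = FReLU(8)(torch.randn(2, 8, 16, 16))  # output has the same shape as the input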
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.", "target": ["学習済みのモデルの出力にラプラス近似を使用することで、外れ値のOver-Confidenceを防ぐ研究。ラプラス近似は学習済みモデルにも適用することが可能で、外れ値に対するOver-confidenceを緩和できることを確認。"]} {"source": "The point estimates of ReLU classification networks---arguably the most widely used neural network architecture---have been shown to yield arbitrarily high confidence far away from the training data. This architecture, in conjunction with a maximum a posteriori estimation scheme, is thus not calibrated nor robust. Approximate Bayesian inference has been empirically demonstrated to improve predictive uncertainty in neural networks, although the theoretical analysis of such Bayesian approximations is limited. We theoretically analyze approximate Gaussian distributions on the weights of ReLU networks and show that they fix the overconfidence problem. Furthermore, we show that even a simplistic, thus cheap, Bayesian approximation, also fixes these issues. This indicates that a sufficient condition for a calibrated uncertainty on a ReLU network is \"to be a bit Bayesian\". These theoretical results validate the usage of last-layer Bayesian approximation and motivate a range of a fidelity-cost trade-off. We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.", "target": ["分類器の背景への依存度を検証し、そのためのデータセットIN-9を提供。分類器は前景だけでなく背景にも依存する。精度が高いほど背景への依存は低い傾向にあるが、敵対的に選択した背景を選択することで確信度を落とすことに成功。"]} {"source": "Scalable Vector Graphics (SVG) are ubiquitous in modern 2D interfaces due to their ability to scale to different resolutions. However, despite the success of deep learning-based models applied to rasterized images, the problem of vector graphics representation learning and generation remains largely unexplored. In this work, we propose a novel hierarchical generative network, called DeepSVG, for complex SVG icons generation and interpolation. Our architecture effectively disentangles high-level shapes from the low-level commands that encode the shape itself. The network directly predicts a set of shapes in a non-autoregressive fashion. We introduce the task of complex SVG icons generation by releasing a new large-scale dataset along with an open-source library for SVG manipulation. We demonstrate that our network learns to accurately reconstruct diverse vector graphics, and can serve as a powerful animation tool by performing interpolations and other latent space operations. Our code is available at this https URL.", "target": ["コマンドから描画を行うSVG画像をTransformerベースのモデルで生成する研究。単純な自己回帰型でなくVAEのように潜在表現のサンプリングを経由する(この方が滑らかだったとのこと)。各描画パスをEncodeした後Average PoolしてVAEに入れ、Decode時はIndexを引数に各パス個別の特徴をFCNで復元した後行う"]} {"source": "Face recognition (FR) systems have demonstrated outstanding verification performance, suggesting suitability for real-world applications, ranging from photo tagging in social media to automated border control (ABC). In an advanced FR system with deep learning-based architecture, however, promoting the recognition efficiency alone is not sufficient and the system should also withstand potential kinds of attacks designed to target its proficiency. Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images that drive the model to incorrect output predictions. 
In this article, we present a comprehensive survey on adversarial attacks against FR systems and elaborate on the competence of new countermeasures against them. Further, we propose a taxonomy of existing attack and defense strategies according to different criteria. Finally, we compare the presented approaches according to techniques' characteristics.", "target": ["DNNベースの顔認証システムに対する攻撃手法を纏めた論文。攻撃手法をCNN models、Physical Attacks、Facial Attributes、De-identification、Geometryの5種類に大別し、攻撃手法の概要と防御手法を纏めている。"]} {"source": "Systems for Open-Domain Question Answering (OpenQA) generally depend on a retriever for finding candidate passages in a large corpus and a reader for extracting answers from those passages. In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages. We argue that this modeling choice is insufficiently expressive for dealing with the complexity of natural language questions. To address this, we define ColBERT-QA, which adapts the scalable neural retrieval model ColBERT to OpenQA. ColBERT creates fine-grained interactions between questions and passages. We propose an efficient weak supervision strategy that iteratively uses ColBERT to create its own training data. This greatly improves OpenQA retrieval on Natural Questions, SQuAD, and TriviaQA, and the resulting system attains state-of-the-art extractive OpenQA performance on all three datasets.", "target": ["オープンドメインのQAモデルで、学習済みモデルでラベル付けしたデータで別途モデルを学習するサイクルを繰り返し、より精度の高いモデルを得る研究(弱教師)。質問に対し近い/遠い文書を識別できるよう学習する。モデルはBERTベースで質問/文書をそれぞれEncodeし類似度を計測するシンプルなもの。"]} {"source": "Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from several issues, such as instability in Q-learning and balancing exploration and exploitation. To mitigate these issues, we present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy RL algorithms. SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration. By enforcing the diversity between agents using Bootstrap with random initialization, we show that these different ideas are largely orthogonal and can be fruitfully integrated, together further improving the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments. Our training code is available at this https URL.", "target": ["エージェントのアンサンブルを取ることでモデルフリーの学習を安定させる研究。エージェントを個別に初期化、行動は各行動を各Q関数のmean+std(不確実性が高い行動を優先)で評価し決定、得られた軌跡は別個に使用し(マスクをかけ)学習するがstdの高い更新は抑制する。"]} {"source": "This work provides the community with a timely comprehensive review of backdoor attacks and countermeasures on deep learning. According to the attacker's capability and affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide and then formalized into six categorizations: code poisoning, outsourcing, pretrained, data collection, collaborative learning and post-deployment. Accordingly, attacks under each categorization are combed. 
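The SUNRISE entry a few records above selects actions by an upper confidence bound over an ensemble of Q estimates (mean plus a multiple of the standard deviation). A minimal NumPy sketch of that selection rule; the weighting factor and shapes are illustrative.

```python
import numpy as np

def ucb_action(q_values, lam=1.0):
    """q_values: (n_ensemble, n_actions) Q estimates from independently initialized critics.
    Return the action with the highest mean + lam * std (optimism under uncertainty)."""
    mean = q_values.mean(axis=0)
    std = q_values.std(axis=0)
    return int(np.argmax(mean + lam * std))

a = ucb_action(np.random.randn(5, 4))  # 5 critics scoring 4 candidate actions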
The countermeasures are categorized into four general classes: blind backdoor removal, offline backdoor inspection, online backdoor inspection, and post backdoor removal. Accordingly, we review countermeasures, and compare and analyze their advantages and disadvantages. We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor.Overall, the research on defense is far behind the attack, and there is no single defense that can prevent all types of backdoor attacks. In some cases, an attacker can intelligently bypass existing defenses with an adaptive attack. Drawing the insights from the systematic review, we also present key areas for future research on the backdoor, such as empirical security evaluations from physical trigger attacks, and in particular, more efficient and practical countermeasures are solicited.", "target": ["DNNに対するバックドア攻撃を纏めた論文。DNNシステムの開発・運用にて発生し得るバックドア攻撃を、データ汚染・事前学習モデル汚染・コード汚染(主にOSS)・悪意のあるアウトソース、そして、協調学習における汚染に分類している。バックドア攻撃について広く浅く学ぶのに適した論文。"]} {"source": "Here we present a machine learning framework and model implementation that can learn to simulate a wide variety of challenging physical domains, involving fluids, rigid solids, and deformable materials interacting with one another. Our framework---which we term \"Graph Network-based Simulators\" (GNS)---represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing. Our results show that our model can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time. Our model was robust to hyperparameter choices across various evaluation metrics: the main determinants of long-term performance were the number of message-passing steps, and mitigating the accumulation of error by corrupting the training data with noise. Our GNS framework advances the state-of-the-art in learned physical simulation, and holds promise for solving a wide range of complex forward and inverse problems.", "target": ["物理シミュレーションをグラフニューラルネットで行う研究。各粒子を入力にその加速度を予測するというタスクを設定しており、Encoderで粒子状態をグラフ化、Processorでforward(伝搬)処理を行った後Decoderでノード状態から動きに関わる特徴を抽出、加速度を更新する。"]} {"source": "We propose an effective framework for the temporal action segmentation task, namely an Action Segment Refinement Framework (ASRF). Our model architecture consists of a long-term feature extractor and two branches: the Action Segmentation Branch (ASB) and the Boundary Regression Branch (BRB). The long-term feature extractor provides shared features for the two branches with a wide temporal receptive field. The ASB classifies video frames with action classes, while the BRB regresses the action boundary probabilities. The action boundaries predicted by the BRB refine the output from the ASB, which results in a significant performance improvement. Our contributions are three-fold: (i) We propose a framework for temporal action segmentation, the ASRF, which divides temporal action segmentation into frame-wise action classification and action boundary regression. Our framework refines frame-level hypotheses of action classes using predicted action boundaries. 
(ii) We propose a loss function for smoothing the transition of action probabilities, and analyze combinations of various loss functions for temporal action segmentation. (iii) Our framework outperforms state-of-the-art methods on three challenging datasets, offering an improvement of up to 13.7% in terms of segmental edit distance and up to 16.1% in terms of segmental F1 score. Our code will be publicly available soon.", "target": ["ビデオから行動の時間的な区分を推定する問題において、余計な区分が出来やすいという問題があった。それを行動の切り替わりを予測するロスを加えることによって、区分に分かれすぎるのを防ぐ。 先行研究に比べ、精度が大きく改善した。"]} {"source": "We present a model-agnostic post-processing scheme to improve the boundary quality for the segmentation result that is generated by any existing segmentation model. Motivated by the empirical observation that the label predictions of interior pixels are more reliable, we propose to replace the originally unreliable predictions of boundary pixels by the predictions of interior pixels. Our approach processes only the input image through two steps: (i) localize the boundary pixels and (ii) identify the corresponding interior pixel for each boundary pixel. We build the correspondence by learning a direction away from the boundary pixel to an interior pixel. Our method requires no prior information of the segmentation models and achieves nearly real-time speed. We empirically verify that our SegFix consistently reduces the boundary errors for segmentation results generated from various state-of-the-art models on Cityscapes, ADE20K and GTA5. Code is available at: this https URL.", "target": ["物体境界の精緻化手法の提案。境界部分の予測は信頼できないので、各画素の所属する物体の内部を指し示すoffset mapを予測させることで境界部分の精度を向上させる。他のモデルと組み込むことが可能で、精度の増強ができる。"]} {"source": "Unsupervised image-to-image translation intends to learn a mapping of an image in a given domain to an analogous image in a different domain, without explicit supervision of the mapping. Few-shot unsupervised image-to-image translation further attempts to generalize the model to an unseen domain by leveraging example images of the unseen domain provided at inference time. While remarkably successful, existing few-shot image-to-image translation models find it difficult to preserve the structure of the input image while emulating the appearance of the unseen domain, which we refer to as the content loss problem. This is particularly severe when the poses of the objects in the input and example images are very different. To address the issue, we propose a new few-shot image translation model, which computes the style embedding of the example images conditioned on the input image and a new architecture design called the universal style bias. Through extensive experimental validations with comparison to the state-of-the-art, our model shows effectiveness in addressing the content loss problem.", "target": ["新規ドメインかつ少量データでの画像スタイル変換を行った研究。Content/Style双方をEncodeしてDecodeする方式の場合、StyleによりContentが崩れる現象がある(content loss)。これを回避するため、StyleのEncode時にContentも入力する(意識させる)ことでStyle適用による崩壊を防ぐ。"]} {"source": "Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. 
TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider. Implementation of the model will be available at this http URL.", "target": ["構造化データを使った質問回答タスクにBERTを適用する研究。n-gramを使って候補となる行を複数選んだ後に、複数行の情報を統合するVertical Self-attentionで処理する。BERTのようにテーブルにマスクをかけて穴埋めをしたり、値を推測することで教師なし事前学習を行う。"]} {"source": "Iteratively refining and critiquing sketches are crucial steps to developing effective designs. We introduce Scones, a mixed-initiative, machine-learning-driven system that enables users to iteratively author sketches from text instructions. Scones is a novel deep-learning-based system that iteratively generates scenes of sketched objects composed with semantic specifications from natural language. Scones exceeds state-of-the-art performance on a text-based scene modification task, and introduces a mask-conditioned sketching model that can generate sketches with poses specified by high-level scene information. In an exploratory user evaluation of Scones, participants reported enjoying an iterative drawing task with Scones, and suggested additional features for further applications. We believe Scones is an early step towards automated, intelligent systems that support human-in-the-loop applications for communicating ideas through sketching in art and design.", "target": ["自然言語による指示からスケッチを行う研究。オブジェクト配置を決定するNNとオブジェクトを描画するNNの2つから構成される。前者はオブジェクトの種別/大きさ/位置などを表現したベクトル、テキスト特徴(GloVe)をそれぞれ0padしサイズをそろえTransformerに入力する。後者はSketch-RNNで描画する。"]} {"source": "Initialization, normalization, and skip connections are believed to be three indispensable techniques for training very deep convolutional neural networks and obtaining state-of-the-art performance. This paper shows that deep vanilla ConvNets without normalization nor skip connections can also be trained to achieve surprisingly good performance on standard image recognition benchmarks. This is achieved by enforcing the convolution kernels to be near isometric during initialization and training, as well as by using a variant of ReLU that is shifted towards being isometric. Further experiments show that if combined with skip connections, such near isometric networks can achieve performances on par with (for ImageNet) and better than (for COCO) the standard ResNet, even without normalization at all. Our code is available at this https URL.", "target": ["正規化レイヤを用いない深いConvNetでResNetに近い精度を出す研究。SReLU(シフトさせたReLU)の導入、kernelをisometryにする初期化とそれを保つ正則化項を加えることで達成可能。skip connectionが無くてもそこそこ精度が出るが、あるとより精度が伸びる。"]} {"source": "Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs.
We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.", "target": ["データ拡張(とDにおける拡張データ間のcycle consistency活用)は過適合を防ぐが、データ拡張適用確率pが大きすぎると生成データにも拡張の影響が出る。そこでpを適応的に調整すること生成への影響を防ぎつつ過適合を防ぐGANを提案。数千のデータのみで高品質な画像を作ることに成功。"]} {"source": "Meta-learning algorithms aim to learn two components: a model that predicts targets for a task, and a base learner that quickly updates that model when given examples from a new task. This additional level of learning can be powerful, but it also creates another potential source for overfitting, since we can now overfit in either the model or the base learner. We describe both of these forms of metalearning overfitting, and demonstrate that they appear experimentally in common meta-learning benchmarks. We then use an information-theoretic framework to discuss meta-augmentation, a way to add randomness that discourages the base learner and model from learning trivial solutions that do not generalize to new tasks. We demonstrate that meta-augmentation produces large complementary benefits to recently proposed meta-regularization techniques.", "target": ["Meta Learningにおける過学習を抑制する研究。Meta Learningでは数サンプル(support)の学習から新サンプル(query)の推定を行うが(Few-shot)、supportを無視して予測しても当てられるケースの場合過学習状態になる。support(タスク)に応じたノイズをラベルに加えることでタスク識別なしに予測困難にする"]} {"source": "Automatic synthesis of realistic gestures promises to transform the fields of animation, avatars and communicative agents. In off-line applications, novel tools can alter the role of an animator to that of a director, who provides only high-level input for the desired animation; a learned network then translates these instructions into an appropriate sequence of body poses. In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters. In this paper we address some of the core issues towards these ends. By adapting a deep learning-based motion synthesis method called MoGlow, we propose a new generative model for generating state-of-the-art realistic speech-driven gesticulation. Owing to the probabilistic nature of the approach, our model can produce a battery of different, yet plausible, gestures given the same input speech signal. Just like humans, this gives a rich natural variation of motion. We additionally demonstrate the ability to exert directorial control over the output style, such as gesture level, speed, symmetry and spacial extent. Such control can be leveraged to convey a desired character personality or mood. We achieve all this without any manual annotation of the data. User studies evaluating upper-body gesticulation confirm that the generated motions are natural and well match the input speech. Our method scores above all prior systems and baselines on these measures, and comes close to the ratings of the original recorded motions. We furthermore find that we can accurately control gesticulation styles without unnecessarily compromising perceived naturalness. Finally, we also demonstrate an application of the same method to full-body gesticulation, including the synthesis of stepping motion and stance.", "target": ["発話音声から自然なジェスチャーを生成する研究。ポーズ状態を分布ととらえNormalizing Flow(Glow)で推定するが、ウィンドウ内の音声特徴と前回ポーズ状態(LSTMで生成)による条件付けを行うことで時系列(自己回帰モデル)の推定を行っている。"]} {"source": "One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. 
Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to common approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9% ImageNet top-1 accuracy with just 1% of the labels (\\le13 labeled images per class) using ResNet-50, a 10\\times improvement in label efficiency over the previous state-of-the-art. With 10% of labels, ResNet-50 trained with our method achieves 77.5% top-1 accuracy, outperforming standard supervised training with all of the labels.", "target": ["少数のラベルのみを使用して、教師あり学習と同等以上の性能を出すSimCLRv2を提案。教師なし学習→FineTune→ラベルなしデータを使った自己蒸留の3段階で構成される。基本的には大きいモデルが強い"]} {"source": "Graph-based neural network models are producing strong results in a number of domains, in part because graphs provide flexibility to encode domain knowledge in the form of relational structure (edges) between nodes in the graph. In practice, edges are used both to represent intrinsic structure (e.g., abstract syntax trees of programs) and more abstract relations that aid reasoning for a downstream task (e.g., results of relevant program analyses). In this work, we study the problem of learning to derive abstract relations from the intrinsic graph structure. Motivated by their power in program analyses, we consider relations defined by paths on the base graph accepted by a finite-state automaton. We show how to learn these relations end-to-end by relaxing the problem into learning finite-state automata policies on a graph-based POMDP and then training these policies using implicit differentiation. The result is a differentiable Graph Finite-State Automaton (GFSA) layer that adds a new edge type (expressed as a weighted adjacency matrix) to a base graph. We demonstrate that this layer can find shortcuts in grid-world graphs and reproduce simple static analyses on Python programs. Additionally, we combine the GFSA layer with a larger graph-based model trained end-to-end on the variable misuse program understanding task, and find that using the GFSA layer leads to better performance than using hand-engineered semantic edges or other baseline methods for adding learned edge types.", "target": ["GNNを有効に使うために、グラフにエッジをはやすレイヤを構築した研究。エージェントにグラフ内を探索させることで開始~終了までの軌跡を収集し、それらの平均から期待遷移を計算しありうる接続を導く。この演算を陰関数を使用しEnd-to-Endで解いている。Pythonプログラムの静的解析結果の再現に成功"]} {"source": "Data augmentation is a powerful technique to improve performance in applications such as image and text classification tasks. Yet, there is little rigorous understanding of why and how various augmentations work. In this work, we consider a family of linear transformations and study their effects on the ridge estimator in an over-parametrized linear regression setting. 
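The third stage of the SimCLRv2 recipe described above distills the fine-tuned teacher into a smaller student on unlabeled data using the teacher's soft class distribution. A minimal sketch of that soft-label distillation loss, assuming PyTorch; the temperature value is illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Cross-entropy between the teacher's softened distribution and the student's prediction."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

loss = distillation_loss(torch.randn(4, 10), torch.randn(4, 10))  # no ground-truth labels needed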
First, we show that transformations which preserve the labels of the data can improve estimation by enlarging the span of the training data. Second, we show that transformations which mix data can improve estimation by playing a regularization effect. Finally, we validate our theoretical insights on MNIST. Based on the insights, we propose an augmentation scheme that searches over the space of transformations by how uncertain the model is about the transformed data. We validate our proposed scheme on image and text datasets. For example, our method outperforms RandAugment by 1.24% on CIFAR-100 using Wide-ResNet-28-10. Furthermore, we achieve comparable accuracy to the SoTA Adversarial AutoAugment on CIFAR datasets.", "target": ["A study investigating the types and effects of data augmentation (for over-parametrized models such as DNNs). It examines 1. label-preserving transformations such as rotation/flipping, 2. transformations that mix data and labels (Mixup), and 3. compositions of type 1. Type 1 augments the data by enlarging its span, while type 2 provides a regularization effect by shrinking the effective training data space (type 3 behaves like type 1)."]} {"source": "In most real world scenarios, a policy trained by reinforcement learning in one environment needs to be deployed in another, potentially quite different environment. However, generalization across different environments is known to be hard. A natural solution would be to keep training after deployment in the new environment, but this cannot be done if the new environment offers no reward signal. Our work explores the use of self-supervision to allow the policy to continue training after deployment without using any rewards. While previous methods explicitly anticipate changes in the new environment, we assume no prior knowledge of those changes yet still obtain significant improvements. Empirical evaluations are performed on diverse simulation environments from DeepMind Control suite and ViZDoom, as well as real robotic manipulation tasks in continuously changing environments, taking observations from an uncalibrated camera. Our method improves generalization in 31 out of 36 environments across various tasks and outperforms domain randomization on a majority of environments.", "target": ["Proposes reinforcement learning that keeps training after deployment (in the test environment). In addition to an encoder that maps states to latent representations and a policy π that predicts actions from them, the agent performs self-supervised learning that inversely predicts the action taken between consecutive states. In the test environment no reward is available, so the policy itself cannot be trained, but the self-supervised objective and the encoder continue to be trained, adapting the agent to the new environment."]} {"source": "Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.", "target": ["A study on learning open-domain QA end-to-end. Normally a retrieval system narrows down the evidence candidates; here a pretrained model (BERT) encodes the question and each evidence candidate into latent representations, computes their inner products, and keeps the top N. BERT is also used for the subsequent span prediction. The retriever is pretrained by treating a sentence and its surrounding text as a pseudo question/answer pair (Inverse Cloze Task)."]} {"source": "Convolutional neural networks typically encode an input image into a series of intermediate features with decreasing resolutions. 
While this structure is suited to classification tasks, it does not perform well for tasks requiring simultaneous recognition and localization (e.g., object detection). The encoder-decoder architectures are proposed to resolve this by applying a decoder network onto a backbone model designed for classification tasks. In this paper, we argue the encoder-decoder architecture is ineffective in generating strong multi-scale features because of the scale-decreased backbone. We propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by Neural Architecture Search. Using similar building blocks, SpineNet models outperform ResNet-FPN models by ~3% AP at various scales while using 10-20% fewer FLOPs. In particular, SpineNet-190 achieves 52.5% AP with a Mask R-CNN detector and achieves 52.1% AP with a RetinaNet detector on COCO for a single model without test-time augmentation, significantly outperforming prior detectors. SpineNet can transfer to classification tasks, achieving 5% top-1 accuracy improvement on a challenging iNaturalist fine-grained dataset. Code is at: this https URL.", "target": ["In object detection it has been standard to build features with an encoder (CNN) whose resolution decreases progressively and to combine them with a decoder (e.g., U-Net). This work optimizes those connections via architecture search and applies the same cross-scale connections on the encoder side as well (the resolution ordering becomes free), making the two-part encoder/decoder design itself unnecessary."]} {"source": "Automatic metrics are fundamental for the development and evaluation of machine translation systems. Judging whether, and to what extent, automatic metrics concur with the gold standard of human evaluation is not a straightforward problem. We show that current methods for judging metrics are highly sensitive to the translations used for assessment, particularly the presence of outliers, which often leads to falsely confident conclusions about a metric's efficacy. Finally, we turn to pairwise system ranking, developing a method for thresholding performance improvement under an automatic metric against human judgements, which allows quantification of type I versus type II errors incurred, i.e., insignificant human differences in system quality that are accepted, and significant human differences that are rejected. Together, these findings suggest improvements to the protocols for metric evaluation and system performance evaluation in machine translation.", "target": ["A study replicating the finding that when only a few high-accuracy machine translation models are evaluated, the correlation between automatic metrics (BLEU, etc.) and human evaluation becomes small (first observed at WMT2019). It points out that the same problem arises not only for high-accuracy systems but also for systems of similar accuracy, and that a single extreme outlier (a very good or very bad system) can make the correlation useless for evaluation."]} {"source": "Code autocompletion is an integral feature of modern code editors and IDEs. The latest generation of autocompleters uses neural language models, trained on public open-source code repositories, to suggest likely (not just statically feasible) completions given the current context. We demonstrate that neural code autocompleters are vulnerable to poisoning attacks. By adding a few specially-crafted files to the autocompleter's training corpus (data poisoning), or else by directly fine-tuning the autocompleter on these files (model poisoning), the attacker can influence its suggestions for attacker-chosen contexts. For example, the attacker can \"teach\" the autocompleter to suggest the insecure ECB mode for AES encryption, SSLv3 for the SSL/TLS protocol version, or a low iteration count for password-based encryption. Moreover, we show that these attacks can be targeted: an autocompleter poisoned by a targeted attack is much more likely to suggest the insecure completion for files from a specific repo or specific developer. 
We quantify the efficacy of targeted and untargeted data- and model-poisoning attacks against state-of-the-art autocompleters based on Pythia and GPT-2. We then evaluate existing defenses against poisoning attacks and show that they are largely ineffective.", "target": ["A poisoning attack against DNN-based code completion. Autocompleters are often trained on open-source code repositories, and by manipulating this training data the autocompleter can be poisoned. A poisoned autocompleter behaves insecurely, for example suggesting ECB mode when the developer writes AES encryption code."]} {"source": "Natural language interfaces to databases (NLIDB) democratize end user access to relational data. Due to fundamental differences between natural language communication and programming, it is common for end users to issue questions that are ambiguous to the system or fall outside the semantic scope of its underlying query language. We present PHOTON, a robust, modular, cross-domain NLIDB that can flag natural language input to which a SQL mapping cannot be immediately determined. PHOTON consists of a strong neural semantic parser (63.2% structure accuracy on the Spider dev benchmark), a human-in-the-loop question corrector, a SQL executor and a response generator. The question corrector is a discriminative neural sequence editor which detects confusion span(s) in the input question and suggests rephrasing until a translatable input is given by the user or a maximum number of iterations are conducted. Experiments on simulated data show that the proposed method effectively improves the robustness of text-to-SQL systems against untranslatable user input. The live demo of our system is available at http://www.naturalsql.com", "target": ["Builds a dialogue system that converts natural language into SQL. The question is parsed, and when the target table/column is unclear the system asks a clarification question to resolve it. For generation, the input starts with CLS and the question and table schema are separated by SEP (tables prefixed with T, columns with C); this is encoded by a Transformer and the decoder outputs the SQL."]} {"source": "Zero-shot transfer learning for multi-domain dialogue state tracking can allow us to handle new domains without incurring the high cost of data acquisition. This paper proposes a new zero-shot transfer learning technique for dialogue state tracking where the in-domain training data are all synthesized from an abstract dialogue model and the ontology of the domain. We show that data augmentation through synthesized data can improve the accuracy of zero-shot learning for both the TRADE model and the BERT-based SUMBT model on the MultiWOZ 2.1 dataset. We show training with only synthesized in-domain data on the SUMBT model can reach about 2/3 of the accuracy obtained with the full training dataset. We improve the zero-shot learning state of the art on average across domains by 21%.", "target": ["A study that uses synthesized dialogue data for zero-shot domain transfer. Dialogue flows follow a generic abstract dialogue model (which defines states and actions), and utterances are generated from the domain ontology and templates (about 100 templates, reportedly creatable in a few hours). The training data for the target domain is augmented with the generated sentences."]} {"source": "This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 million instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy. 
Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all six style transfer tasks. The data and code are located at this https URL.", "target": ["A study on rewriting sentences into polite language (e.g., changing 'Send the letter' into 'Could you please send the letter?'). The Enron email dataset is used (emails are mostly polite), filtered with a politeness classifier. Generation predicts the positions where prefixes/suffixes or rephrasings are needed and then predicts the words/phrases that should go there."]} {"source": "Many researchers motivate explainable AI with studies showing that human-AI team performance on decision-making tasks improves when the AI explains its recommendations. However, prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team. Can explanations help lead to complementary performance, where team accuracy is higher than either the human or the AI working solo? We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task (explaining itself in some conditions). While we observed complementary improvements from AI augmentation, they were not increased by explanations. Rather, explanations increased the chance that humans will accept the AI's recommendation, regardless of its correctness. Our result poses new challenges for human-centered AI: Can we develop explanatory approaches that encourage appropriate trust in AI, and therefore help generate (or improve) complementary performance?", "target": ["A study examining how effective it is to 'explain' machine-learning decisions. Prior work used models more accurate than humans; here, explanations are produced by a model comparable to or worse than humans, to test whether they can raise decision-making accuracy. Accuracy does improve with AI assistance, but no explanation method yet substantially beats simply showing confidence, and people tend to rely on the AI's recommendation regardless of whether it is correct."]} {"source": "Transformer architectures have proven to learn useful representations for protein classification and generation tasks. However, these representations present challenges in interpretability. In this work, we demonstrate a set of methods for analyzing protein Transformer models through the lens of attention. We show that attention: (1) captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure, (2) targets binding sites, a key functional component of proteins, and (3) focuses on progressively more complex biophysical properties with increasing layer depth. We find this behavior to be consistent across three Transformer architectures (BERT, ALBERT, XLNet) and two distinct protein datasets. We also present a three-dimensional visualization of the interaction between attention and protein structure. Code for visualization and analysis is available at this https URL.", "target": ["A study that treats the chain of amino acids composing a protein as a sequence and trains Transformers on it. The attention weights can be used to infer which amino acids are structurally close (though this does not guarantee a causal relationship)."]} {"source": "Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with regard to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). 
We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.", "target": ["A simple offline reinforcement learning method. Since pre-collected trajectories contain both good and bad actions, the critic's values for the action in the trajectory and the action proposed by the policy are compared, and trajectory data whose actions are worse than the policy's proposal are excluded from training. Achieves better results than a standard off-policy method (D4PG)."]} {"source": "To avoid giving wrong answers, question answering (QA) models need to know when to abstain from answering. Moreover, users often ask questions that diverge from the model's training data, making errors more likely and thus abstention more critical. In this work, we propose the setting of selective question answering under domain shift, in which a QA model is tested on a mixture of in-domain and out-of-domain data, and must answer (i.e., not abstain on) as many questions as possible while maintaining high accuracy. Abstention policies based solely on the model's softmax probabilities fare poorly, since models are overconfident on out-of-domain inputs. Instead, we train a calibrator to identify inputs on which the QA model errs, and abstain when it predicts an error is likely. Crucially, the calibrator benefits from observing the model's behavior on out-of-domain data, even if from a different domain than the test data. We combine this method with a SQuAD-trained QA model and evaluate on mixtures of SQuAD and five other QA datasets. Our method answers 56% of questions while maintaining 80% accuracy; in contrast, directly using the model's probabilities only answers 48% at 80% accuracy.", "target": ["A study on preventing a QA system from answering incorrectly with high confidence when asked questions from a domain different from its training data (it should honestly abstain instead). Using in-domain and (known) out-of-domain data, a calibrator is trained to predict whether the trained QA model will answer correctly, and the QA model and calibrator are combined when answering."]} {"source": "It is commonly believed that networks cannot be both accurate and robust, that gaining robustness means losing accuracy. It is also generally believed that, unless making networks larger, network architectural elements would otherwise matter little in improving adversarial robustness. Here we present evidence to challenge these common beliefs by a careful study about adversarial training. Our key observation is that the widely-used ReLU activation function significantly weakens adversarial training due to its non-smooth nature. Hence we propose smooth adversarial training (SAT), in which we replace ReLU with its smooth approximations to strengthen adversarial training. The purpose of smooth activation functions in SAT is to allow it to find harder adversarial examples and compute better gradient updates during adversarial training. Compared to standard adversarial training, SAT improves adversarial robustness for \"free\", i.e., no drop in accuracy and no increase in computational cost. For example, without introducing additional computations, SAT significantly enhances ResNet-50's robustness from 33.0% to 42.3%, while also improving accuracy by 0.9% on ImageNet. SAT also works well with larger networks: it helps EfficientNet-L1 to achieve 82.2% accuracy and 58.6% robustness on ImageNet, outperforming the previous state-of-the-art defense by 9.5% for accuracy and 11.6% for robustness. Models are available at this https URL.", "target": ["A study arguing that non-smooth activation functions such as ReLU are unsuitable for improving adversarial robustness. Using Parametric Softplus, a smooth approximation of ReLU, succeeds in improving not only adversarial robustness but also accuracy."]} {"source": "Learning object-centric representations of complex scenes is a promising step towards enabling efficient abstract reasoning from low-level perceptual features. 
Yet, most deep learning approaches learn distributed representations that do not capture the compositional properties of natural scenes. In this paper, we present the Slot Attention module, an architectural component that interfaces with perceptual representations such as the output of a convolutional neural network and produces a set of task-dependent abstract representations which we call slots. These slots are exchangeable and can bind to any object in the input by specializing through a competitive procedure over multiple rounds of attention. We empirically demonstrate that Slot Attention can extract object-centric representations that enable generalization to unseen compositions when trained on unsupervised object discovery and supervised property prediction tasks.", "target": ["Proposes a method for obtaining latent representations that carry structural information. Local features obtained from a CNN or similar are aggregated into K features (slots). Features averaged with weights computed by attention serve as the input, and the slots are updated over T steps using a GRU. Achieves higher accuracy than existing methods on scene reconstruction and on predicting the properties of objects in a scene."]} {"source": "End-to-end (E2E) automatic speech recognition (ASR) with sequence-to-sequence models has gained attention because of its simple model training compared with conventional hidden Markov model based ASR. Recently, several studies report the state-of-the-art E2E ASR results obtained by Transformer. Compared to recurrent neural network (RNN) based E2E models, training of Transformer is more efficient and also achieves better performance on various tasks. However, self-attention used in Transformer requires computation quadratic in its input length. In this paper, we propose to apply lightweight and dynamic convolution to E2E ASR as an alternative architecture to the self-attention to make the computational order linear. We also propose joint training with connectionist temporal classification, convolution on the frequency axis, and combination with self-attention. With these techniques, the proposed architectures achieve better performance than the RNN-based E2E model and performance competitive to state-of-the-art Transformer on various ASR benchmarks including noisy/reverberant tasks.", "target": ["A study on end-to-end speech recognition with Transformer-based models. For speech, sequences are long even for short utterances and the self-attention cost grows, so the computation is replaced with convolutions over a limited window. Lightweight and dynamic convolutions, which dynamically compute the weights of depthwise groups, are used."]} {"source": "Standard causal discovery methods must fit a new model whenever they encounter samples from a new underlying causal graph. However, these samples often share relevant information - for instance, the dynamics describing the effects of causal relations - which is lost when following this approach. We propose Amortized Causal Discovery, a novel framework that leverages such shared dynamics to learn to infer causal relations from time-series data. This enables us to train a single, amortized model that infers causal relations across samples with different underlying causal graphs, and thus makes use of the information that is shared. We demonstrate experimentally that this approach, implemented as a variational model, leads to significant improvements in causal discovery performance, and show how it can be extended to perform well under hidden confounding.", "target": ["A study on modeling time-series data in which each sampled sequence can have its own causal relations (usually a single causal structure common to all samples is assumed). Past sequences are encoded with GNN-based Neural Relational Inference, and the decoder predicts the future (t+1) from the past latent representation plus the present (t). Scales better than existing methods."]} {"source": "The primary aim of single-image super-resolution is to construct high-resolution (HR) images from corresponding low-resolution (LR) inputs. 
In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present an algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require supervised training on databases of LR-HR image pairs). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the \"downscaling loss,\" which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee realistic outputs. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show proof of concept of our approach in the domain of face super-resolution (i.e., face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.", "target": ["For super-resolution, rather than upscaling the low-resolution image and computing a loss against the high-resolution target, the strategy is to search the latent space of a pretrained generative model for an image that, when downscaled again, matches the original low-resolution input. The method requires no training of a large network and is not tied to a specific network choice."]} {"source": "This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages.", "target": ["A study on learning multilingual discrete speech representations. A CNN is applied to the audio to produce latent representations, which are then discretized; a Transformer is trained to predict the discrete representations of masked regions. On the CommonVoice benchmark it improves phoneme error rate (PER) by 72% relative over the best previously known results."]} {"source": "When pre-trained on large unsupervised textual corpora, language models are able to store and retrieve factual knowledge to some extent, making it possible to use them directly for zero-shot cloze-style question answering. However, storing factual knowledge in a fixed number of weights of a language model clearly has limitations. 
Previous approaches have successfully provided access to information outside the model weights using supervised architectures that combine an information retrieval system with a machine reading component. In this paper, we go a step further and integrate information from a retrieval system with a pre-trained language model in a purely unsupervised way. We report that augmenting pre-trained language models in this way dramatically improves performance and that the resulting system, despite being unsupervised, is competitive with a supervised machine reading baseline. Furthermore, processing query and context with different segment tokens allows BERT to utilize its Next Sentence Prediction pre-trained classifier to determine whether the context is relevant or not, substantially improving BERT's zero-shot cloze-style question-answering performance and making its predictions robust to noisy contexts.", "target": ["A study on solving QA with only a pretrained model and no fine-tuning. Adding a context c to the question q (separated by SEP/EOS tokens) is shown to reach performance on par with a supervised baseline (DrQA); c can be retrieved with IR (even TF-IDF works), generated from q, and so on. The often-neglected next sentence prediction head of BERT is also confirmed to be useful, for judging whether the context is relevant."]} {"source": "Deep neural networks (DNNs) have shown remarkable performance improvements on vision-related tasks such as object detection or image segmentation. Despite their success, they generally lack the understanding of 3D objects which form the image, as it is not always possible to collect 3D information about the scene or to easily annotate it. Differentiable rendering is a novel field which allows the gradients of 3D objects to be calculated and propagated through images. It also reduces the requirement of 3D data collection and annotation, while enabling higher success rate in various applications. This paper reviews existing literature and discusses the current state of differentiable rendering, its applications and open research problems.", "target": ["A survey of methods for learning the image rendering process with DNNs. It explains the four representative algorithm families (mesh, voxel, point cloud, and implicit representations) and broadly covers libraries usable for implementation, evaluation methods, and application areas."]} {"source": "Deep reinforcement learning (RL) agents often fail to generalize to unseen scenarios, even when they are trained on many instances of semantically similar environments. Data augmentation has recently been shown to improve the sample efficiency and generalization of RL agents. However, different tasks tend to benefit from different kinds of data augmentation. In this paper, we compare three approaches for automatically finding an appropriate augmentation. These are combined with two novel regularization terms for the policy and value function, required to make the use of data augmentation theoretically sound for certain actor-critic algorithms. We evaluate our methods on the Procgen benchmark which consists of 16 procedurally-generated environments and show that it improves test performance by ~40% relative to standard RL algorithms. Our agent outperforms other baselines specifically designed to improve generalization in RL. In addition, we show that our agent learns policies and representations that are more robust to changes in the environment that do not affect the agent, such as the background. Our implementation is available at this https URL.", "target": ["A study on automatically selecting an appropriate data augmentation in deep RL. A multi-armed bandit handles the selection; meanwhile, since it is theoretically unsound for the value function/policy to change drastically under augmentation (the underlying state is the same), regularization terms based on the distributional distance between policies and on the difference of value estimates before and after augmentation are added."]} {"source": "Backdoor attacks embed hidden malicious behaviors into deep learning models, which only activate and cause misclassifications on model inputs containing a specific trigger. 
Existing works on backdoor attacks and defenses, however, mostly focus on digital attacks that use digitally generated patterns as triggers. A critical question remains unanswered: can backdoor attacks succeed using physical objects as triggers, thus making them a credible threat against deep learning systems in the real world? We conduct a detailed empirical study to explore this question for facial recognition, a critical deep learning task. Using seven physical objects as triggers, we collect a custom dataset of 3205 images of ten volunteers and use it to study the feasibility of physical backdoor attacks under a variety of real-world conditions. Our study reveals two key findings. First, physical backdoor attacks can be highly successful if they are carefully configured to overcome the constraints imposed by physical objects. In particular, the placement of successful triggers is largely constrained by the target model's dependence on key facial features. Second, four of today's state-of-the-art defenses against (digital) backdoors are ineffective against physical backdoors, because the use of physical objects breaks core assumptions used to construct these defenses. Our study confirms that (physical) backdoor attacks are not a hypothetical phenomenon but rather pose a serious real-world threat to critical classification tasks. We need new and more robust defenses against backdoors in the physical world.", "target": ["An empirical study of backdoor attacks on face recognition in the physical domain. It finds that backdoor activation is unaffected by background or lighting but is affected by image quality, that unintended backdoor activations occur frequently, and that existing defenses (which assume the digital domain) do not work in the physical domain."]} {"source": "Few-shot classification aims to recognize unseen classes when presented with only a small number of samples. We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources. This problem has seen growing interest and has inspired the development of benchmarks such as Meta-Dataset. A key challenge in this multi-domain setting is to effectively integrate the feature representations from the diverse set of training domains. Here, we propose a Universal Representation Transformer (URT) layer, that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations. In experiments, we show that URT sets a new state-of-the-art result on Meta-Dataset. Specifically, it achieves top-performance on the highest number of data sources compared to competing methods. We analyze variants of URT and present a visualization of the attention score heatmaps that sheds light on how the model performs cross-domain generalization. Our code is available at this https URL.", "target": ["A study on few-shot image classification with meta-learning. Predictions are made by combining the outputs of networks (backbones) trained on individual tasks. The backbone outputs are concatenated and averaged per set, and a Transformer applies self-attention to build the features used for classification."]} {"source": "With the rapid growth of e-commerce and the popularity of online shopping, fashion retrieval has received considerable attention in the computer vision community. Different from the existing works that mainly focus on identical or similar fashion item retrieval, in this paper, we aim to study plagiarized clothes retrieval, which is somewhat ignored in the academic community while itself has great application value. One of the key challenges is that plagiarized clothes are usually modified in a certain region on the original design to escape the supervision by traditional retrieval methods. 
To relieve it, we propose a novel network named Plagiarized-Search-Net (PS-Net) based on regional representation, where we utilize the landmarks to guide the learning of regional representations and compare fashion items region by region. Besides, we propose a new dataset named Plagiarized Fashion for plagiarized clothes retrieval, which provides a meaningful complement to the existing fashion retrieval field. Experiments on the Plagiarized Fashion dataset verify that our approach is superior to other instance-level counterparts for plagiarized clothes retrieval, showing a promising result for original design protection. Moreover, our PS-Net can also be adapted to traditional fashion retrieval and landmark estimation tasks and achieves the state-of-the-art performance on the DeepFashion and DeepFashion2 datasets.", "target": ["A study on detecting plagiarized fashion items. Since plagiarized clothes usually modify only part of the original design, regions are detected in the image and features are computed per region (a separate branch of the network detects landmarks to guide the region detection); authenticity is judged from a weighted sum of the region-wise similarities."]} {"source": "We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs). The technique works as follows: we first encourage sparse latent representations when we train a GNN in a supervised setting, then we apply symbolic regression to components of the learned model to extract explicit physical relations. We find the correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example-a detailed dark matter simulation-and discover a new analytic formula which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn.", "target": ["A study that bridges DNNs and equations expressed with simple symbols. A graph NN is trained by regarding messages between nodes as forces and node updates as applying the laws of motion. L1 regularization is applied to the messages so that the model stays simple. Finally, converting the learned model into equations with the software eureqa recovers expressions close to the known physical laws."]} {"source": "Approaches to Grounded Language Learning typically focus on a single task-based final performance measure that may not depend on desirable properties of the learned hidden representations, such as their ability to predict salient attributes or to generalise to unseen situations. To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with abstract and situated attributes. By using diagnostic classifiers, we show that current models learn representations that are not expressive enough to encode object attributes (average F1 of 44.27). 
In addition, they do not learn strategies or representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50.06%).", "target": ["Proposes GROLLA, a framework for evaluating grounded language learning based on visual information. Instead of a single task it evaluates multiple tasks, adding object attribute prediction and zero-shot evaluation to a goal-oriented task such as VQA. A dataset called CompGuessWhat?! is also released as an instantiation of the framework."]} {"source": "Intuitively, unfamiliarity should lead to lack of confidence. In reality, current algorithms often make highly confident yet wrong predictions when faced with relevant but unfamiliar examples. A classifier we trained to recognize gender is 12 times more likely to be wrong with a 99% confident prediction if presented with a subject from a different age group than those seen during training. In this paper, we compare and evaluate several methods to improve confidence estimates for unfamiliar and familiar samples. We propose a testing methodology of splitting unfamiliar and familiar samples by attribute (age, breed, subcategory) or sampling (similar datasets collected by different people at different times). We evaluate methods including confidence calibration, ensembles, distillation, and a Bayesian model and use several metrics to analyze label, likelihood, and calibration error. While all methods reduce over-confident errors, the ensemble of calibrated models performs best overall, and T-scaling performs best among the approaches with fastest inference. Our code is available at this https URL .", "target": ["A study comparing methods that suppress overly confident predictions on data that is similar to but absent from the training data (a gender classifier becomes 12 times more likely to be wrong at 99% confidence when the age group differs from training, a dire result). Combining confidence calibration (adjusting so that confidence matches accuracy) with ensembling works best."]} {"source": "We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes. Using tools from the neural tangent kernel (NTK) literature, we show that a standard MLP fails to learn high frequencies both in theory and in practice. To overcome this spectral bias, we use a Fourier feature mapping to transform the effective NTK into a stationary kernel with a tunable bandwidth. We suggest an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities.", "target": ["A study that makes it possible to predict color from coordinate points with fully connected layers alone (in both 2D and 3D). Simply converting the x, y, (z) coordinates into Fourier features enables high-resolution predictions (predicting from the raw coordinates yields blurry images). It also confirms that the standard deviation σ used in the mapping is linked to training convergence as analyzed with the NTK (Neural Tangent Kernel, #1108)."]} {"source": "Knowledge Distillation (KD) aims to distill the knowledge of a cumbersome teacher model into a lightweight student model. Its success is generally attributed to the privileged information on similarities among categories provided by the teacher model, and in this sense, only strong teacher models are deployed to teach weaker students in practice. In this work, we challenge this common belief by the following experimental observations: 1) beyond the acknowledgment that the teacher can improve the student, the student can also enhance the teacher significantly by reversing the KD procedure; 2) a poorly-trained teacher with much lower accuracy than the student can still improve the latter significantly. 
To explain these observations, we provide a theoretical analysis of the relationships between KD and label smoothing regularization. We prove that 1) KD is a type of learned label smoothing regularization and 2) label smoothing regularization provides a virtual teacher model for KD. From these results, we argue that the success of KD is not fully due to the similarity information between categories from teachers, but also to the regularization of soft targets, which is equally or even more important. Based on these analyses, we further propose a novel Teacher-free Knowledge Distillation (Tf-KD) framework, where a student model learns from itself or a manually designed regularization distribution. Tf-KD achieves performance comparable to normal KD from a superior teacher, which is especially useful when a stronger teacher model is unavailable. Meanwhile, Tf-KD is generic and can be directly deployed for training deep neural networks. Without any extra computation cost, Tf-KD achieves up to 0.65\% improvement on ImageNet over well-established baseline models, which is superior to label smoothing regularization.", "target": ["Distillation usually goes from a high-accuracy teacher model to a student model, but this study shows it also works in reverse. Learning from the difference to the teacher's output distribution can be regarded as a form of label smoothing regularization (LSR), and since LSR works even with a uniform distribution, a low-accuracy teacher is fine; in fact, no teacher is needed at all."]} {"source": "With the widespread use of deep neural networks (DNNs) in high-stakes applications, the security problem of the DNN models has received extensive attention. In this paper, we investigate a specific security problem called trojan attack, which aims to attack deployed DNN systems relying on the hidden trigger patterns inserted by malicious hackers. We propose a training-free attack approach which is different from previous work, in which trojaned behaviors are injected by retraining the model on a poisoned dataset. Specifically, we do not change parameters in the original model but insert a tiny trojan module (TrojanNet) into the target model. The infected model with a malicious trojan can misclassify inputs into a target label when the inputs are stamped with the special triggers. The proposed TrojanNet has several nice properties including (1) it activates by tiny trigger patterns and keeps silent for other signals, (2) it is model-agnostic and could be injected into most DNNs, dramatically expanding its attack scenarios, and (3) the training-free mechanism saves massive training efforts compared to conventional trojan attack methods. The experimental results show that TrojanNet can inject the trojan into all labels simultaneously (all-label trojan attack) and achieves 100% attack success rate without affecting model accuracy on original tasks. Experimental analysis further demonstrates that state-of-the-art trojan detection algorithms fail to detect the TrojanNet attack. The code is available at this https URL.", "target": ["A trojan attack on DNN models. The attack works by inserting a tiny trojan module into the model (unlike existing methods, no poisoning of the training data is required). The trojaned model classifies only specific trigger-stamped images into the class intended by the attacker (other images are classified normally), so the model's classification accuracy is barely affected."]} {"source": "Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. 
On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pretrained models. We are also competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0% top-1 accuracy on a linear probe of our features.", "target": ["A study that uses GPT-2 to predict pixel sequences. Images are downscaled to low resolution, the pixels are laid out as a 1D sequence, and the sequence is fed into a standard Transformer. Training combines an autoregressive prediction loss with a BERT-style masked prediction loss; fine-tuning uses class prediction from average-pooled features plus prediction from intermediate features."]} {"source": "In recent years, on-policy reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to discrepancy between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress [Engstrom'20]. As a step towards filling that gap, we implement >50 such ``choices'' in a unified on-policy RL framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250'000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for on-policy training of RL agents.", "target": ["A study investigating the impact of implementation details and parameters of on-policy RL that are not described in papers. Since the number of combinations is enormous, the candidates are narrowed down first. It reports very fine-grained recommendations, e.g., use the PPO loss, and for the final layer scale the weights by 1/100 and use a softplus shifted in the negative direction."]} {"source": "Yes, and no. We ask whether recent progress on the ImageNet classification benchmark continues to represent meaningful generalization, or whether the community has started to overfit to the idiosyncrasies of its labeling procedure. We therefore develop a significantly more robust procedure for collecting human annotations of the ImageNet validation set. Using these new labels, we reassess the accuracy of recently proposed ImageNet classifiers, and find their gains to be substantially smaller than those reported on the original labels. Furthermore, we find the original ImageNet labels to no longer be the best predictors of this independently-collected set, indicating that their usefulness in evaluating vision models may be nearing an end. Nevertheless, we find our annotation procedure to have largely remedied the errors in the original labels, reinforcing ImageNet as a powerful benchmark for future research in visual recognition.", "target": ["A study that corrects the ImageNet validation data and investigates whether current models are overfitting to ImageNet. Because there were images with multiple plausible labels and images with inaccurate labels, trained models were used to select high-confidence candidates and human label voting ensured validity; a new validation set was built and the models were re-evaluated on it."]} {"source": "Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet pre-training is commonly used to initialize the backbones of object detection and segmentation models. He et al., however, show a surprising result that ImageNet pre-training has limited impact on COCO object detection. Here we investigate self-training as another method to utilize additional data on the same setup and contrast it against ImageNet pre-training. 
Our study reveals the generality and flexibility of self-training with three additional insights: 1) stronger data augmentation and more labeled data further diminish the value of pre-training, 2) unlike pre-training, self-training is always helpful when using stronger data augmentation, in both low-data and high-data regimes, and 3) in the case that pre-training is helpful, self-training improves upon pre-training. For example, on the COCO object detection dataset, pre-training benefits when we use one fifth of the labeled data, and hurts accuracy when we use all labeled data. Self-training, on the other hand, shows positive improvements from +1.3 to +3.4AP across all dataset sizes. In other words, self-training works well exactly on the same setup in which pre-training does not work (using ImageNet to help COCO). On the PASCAL segmentation dataset, which is a much smaller dataset than COCO, though pre-training does help significantly, self-training improves upon the pre-trained model. On COCO object detection, we achieve 54.3AP, an improvement of +1.5AP over the strongest SpineNet model. On PASCAL segmentation, we achieve 90.5 mIOU, an improvement of +1.5% mIOU over the previous state-of-the-art result by DeepLabv3+.", "target": ["A study on the effectiveness of pre-training versus self-training (training with pseudo labels). It was known that ImageNet (classification) pre-training does not improve accuracy on COCO (object detection) when plenty of labeled data is available; however, self-training with ImageNet (adding pseudo-labeled images to the training data) improves accuracy regardless of how much labeled data there is."]} {"source": "This paper presents a new family of backpropagation-free neural architectures, Gated Linear Networks (GLNs). What distinguishes GLNs from contemporary neural networks is the distributed and local nature of their credit assignment mechanism; each neuron directly predicts the target, forgoing the ability to learn feature representations in favor of rapid online learning. Individual neurons can model nonlinear functions via the use of data-dependent gating in conjunction with online convex optimization. We show that this architecture gives rise to universal learning capabilities in the limit, with effective model capacity increasing as a function of network size in a manner comparable with deep ReLU networks. Furthermore, we demonstrate that the GLN learning mechanism possesses extraordinary resilience to catastrophic forgetting, performing comparably to an MLP with dropout and Elastic Weight Consolidation on standard benchmarks. These desirable theoretical and empirical properties position GLNs as a complementary technique to contemporary offline deep learning methods.", "target": ["A method that trains the nodes within each layer without backpropagating errors. Each node receives not only the previous layer's predictions but also a weighted copy of the input, and each one directly predicts the target (the official output is the first node of the final layer). There is dependence between layers, but because every node makes its own prediction, each node learns directly from the error of its own prediction."]} {"source": "Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design. We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. 
This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm, Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. Notably, this algorithm makes no reference to the training data and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.99 percent). Thus our data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important.", "target": ["A study on pruning at initialization. Existing pruning methods can suffer layer collapse, where the number of remaining parameters in a particular layer drops to zero; the authors find that accounting for magnitude (the element-wise absolute values of the parameters) is key to preventing this. Further taking the magnitudes of the preceding and following layers into account (SynFlow) achieves efficient pruning."]} {"source": "We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object. Using PIFu, we propose an end-to-end deep learning method for digitizing highly detailed clothed humans that can infer both 3D surface and texture from a single image, and optionally, multiple input images. Highly intricate shapes, such as hairstyles, clothing, as well as their variations and deformations can be digitized in a unified way. Compared to existing representations used for 3D deep learning, PIFu can produce high-resolution surfaces including largely unseen regions such as the back of a person. In particular, it is memory efficient unlike the voxel representation, can handle arbitrary topology, and the resulting surface is spatially aligned with the input image. Furthermore, while previous techniques are designed to process either a single image or multiple views, PIFu extends naturally to an arbitrary number of views. We demonstrate high-resolution and robust reconstructions on real world images from the DeepFashion dataset, which contains a variety of challenging clothing types. Our method achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image.", "target": ["A study that defines an implicit function suited to building a 3D model from points on an object's surface (classifying inside vs. outside of the object with 0 as the boundary) from per-pixel image features and estimated depth, enabling high-resolution 3D surface estimation. It neatly combines the strengths of a fully convolutional approach (accurate but memory-inefficient) and an implicit function (memory-efficient but coarse)."]} {"source": "Multi-scale inference is commonly used to improve the results of semantic segmentation. Multiple image scales are passed through a network and then the results are combined with averaging or max pooling. In this work, we present an attention-based approach to combining multi-scale predictions. We show that predictions at certain scales are better at resolving particular failure modes, and that the network learns to favor those scales for such cases in order to generate better predictions. Our attention mechanism is hierarchical, which enables it to be roughly 4x more memory efficient to train than other recent approaches. In addition to enabling faster training, this allows us to train with larger crop sizes which leads to greater model accuracy. We demonstrate the result of our method on two datasets: Cityscapes and Mapillary Vistas. For Cityscapes, which has a large number of weakly labelled images, we also leverage auto-labelling to improve generalization. 
Using our approach we achieve new state-of-the-art results in both Mapillary (61.1 IOU val) and Cityscapes (85.1 IOU test).", "target": ["To perform accurate segmentation, the paper proposes combining segmentation results at different scales hierarchically with attention (motivated by the observation that the suitable resolution depends on the object size), together with using high-confidence segmentations produced by a pretrained model as additional training data."]} {"source": "We present a framework for data-driven robotics that makes use of a large dataset of recorded robot experience and scales to several tasks using learned reward functions. We show how to apply this framework to accomplish three different object manipulation tasks on a real robot platform. Given demonstrations of a task together with task-agnostic recorded experience, we use a special form of human annotation as supervision to learn a reward function, which enables us to deal with real-world tasks where the reward signal cannot be acquired directly. Learned rewards are used in combination with a large dataset of experience from different tasks to learn a robot policy offline using batch RL. We show that using our approach it is possible to train agents to perform a variety of challenging manipulation tasks including stacking rigid objects and handling cloth.", "target": ["Human-in-the-loop reinforcement learning. For trajectories collected from human demonstrations or scripts, humans sketch a reward curve over time (reward sketch); a reward function is learned from the trajectories and reward curves and is then used for (offline) reinforcement learning."]} {"source": "Fine-tuning pre-trained transformer-based language models such as BERT has become a common practice dominating leaderboards across various NLP benchmarks. Despite the strong empirical performance of fine-tuned models, fine-tuning is an unstable process: training the same model with multiple random seeds can result in a large variance of the task performance. Previous literature (Devlin et al., 2019; Lee et al., 2020; Dodge et al., 2020) identified two potential reasons for the observed instability: catastrophic forgetting and small size of the fine-tuning datasets. In this paper, we show that both hypotheses fail to explain the fine-tuning instability. We analyze BERT, RoBERTa, and ALBERT, fine-tuned on commonly used datasets from the GLUE benchmark, and show that the observed instability is caused by optimization difficulties that lead to vanishing gradients. Additionally, we show that the remaining variance of the downstream task performance can be attributed to differences in generalization where fine-tuned models with the same training loss exhibit noticeably different test performance. Based on our analysis, we present a simple but strong baseline that makes fine-tuning BERT-based models significantly more stable than the previously proposed approaches. Code to reproduce our results is available online: this https URL.", "target": ["A study investigating why fine-tuning pretrained models is unstable. Catastrophic forgetting has been cited as the cause, but forgetting cannot occur in a model that has not yet learned the task; the true cause is argued to be vanishing gradients. The results show that even with little data, fine-tuning becomes stable if bias correction (the moment correction performed in Adam) is used and enough iterations are run."]} {"source": "Most reinforcement learning (RL) algorithms assume online access to the environment, in which one may readily interleave updates to the policy with experience collection using that policy. However, in many real-world applications such as health, education, dialogue agents, and robotics, the cost or potential risk of deploying a new data-collection policy is high, to the point that it can become prohibitive to update the data-collection policy more than a few times during learning. With this view, we propose a novel concept of deployment efficiency, measuring the number of distinct data-collection policies that are used during policy learning. 
We observe that naïvely applying existing model-free offline RL algorithms recursively does not lead to a practical deployment-efficient and sample-efficient algorithm. We propose a novel model-based algorithm, Behavior-Regularized Model-ENsemble (BREMEN), which can effectively optimize a policy offline using 10-20 times less data than prior works. Furthermore, the recursive application of BREMEN is able to achieve impressive deployment efficiency while maintaining the same or better sample efficiency, learning successful policies from scratch on simulated robotic environments with only 5-10 deployments, compared to typical values of hundreds to millions in standard RL baselines. Codes and pre-trained models are available at this https URL .", "target": ["RL agents normally repeat trials in the environment while changing their policy, but unexpected behavior during that process is problematic, so this work proposes a method that learns with a minimal number of policy changes (deployments). As in Dyna, a model is built from the collected trajectories and used for training, but an ensemble of models is used and the policy is kept close to the policy that collected the trajectories, preventing biased learning."]} {"source": "A transcompiler, also known as a source-to-source translator, is a system that converts source code from a high-level programming language (such as C++ or Python) to another. Transcompilers are primarily used for interoperability, and to port codebases written in an obsolete or deprecated language (e.g. COBOL, Python 2) to a modern one. They typically rely on handcrafted rewrite rules, applied to the source code abstract syntax tree. Unfortunately, the resulting translations often lack readability, fail to respect the target language conventions, and require manual modifications in order to work properly. The overall translation process is time-consuming and requires expertise in both the source and target languages, making code-translation projects expensive. Although neural models significantly outperform their rule-based counterparts in the context of natural language translation, their applications to transcompilation have been limited due to the scarcity of parallel data in this domain. In this paper, we propose to leverage recent approaches in unsupervised machine translation to train a fully unsupervised neural transcompiler. We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy. Our method relies exclusively on monolingual source code, requires no expertise in the source or target languages, and can easily be generalized to other programming languages. We also build and release a test set composed of 852 parallel functions, along with unit tests to check the correctness of translations. We show that our model outperforms rule-based commercial baselines by a significant margin.", "target": ["A study on translation between programming languages, aiming to update code written in obsolete/deprecated languages to other languages (e.g., Python 2 => 3). It combines unsupervised techniques, representation learning via masked-token recovery, generation learning via denoising, and finally back-translation, so it can be trained without any parallel corpus."]} {"source": "Modern text-to-speech synthesis pipelines typically involve multiple processing stages, each of which is designed or learnt independently from the rest. In this work, we take on the challenging task of learning to synthesise speech from normalised text or phonemes in an end-to-end manner, resulting in models which operate directly on character or phoneme input sequences and produce raw speech audio outputs. Our proposed generator is feed-forward and thus efficient for both training and inference, using a differentiable alignment scheme based on token length prediction. 
It learns to produce high fidelity audio through a combination of adversarial feedback and prediction losses constraining the generated audio to roughly match the ground truth in terms of its total duration and mel-spectrogram. To allow the model to capture temporal variation in the generated audio, we employ soft dynamic time warping in the spectrogram-based prediction loss. The resulting model achieves a mean opinion score exceeding 4 on a 5 point scale, which is comparable to the state-of-the-art models relying on multi-stage training and additional supervision.", "target": ["文字/音素系列から直接音声を生成する研究。一旦内部的に低サンプリングレート(200Hz)で対応を取り(Aligner)、そこからUpsampleして音声を生成、GANベースのモデルで学習する(Decoder)。対応付けが無視されないよう、スペクトログラムレベルでの敵対/予測誤差を使用している。"]} {"source": "In this paper, we introduce a novel form of value function, Q(s,s′), that expresses the utility of transitioning from a state s to a neighboring state s′ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at this http URL.", "target": ["行動評価(Q(s, a))でなく遷移評価(Q(s, s'))を学習し「望ましい遷移」を求めた後、遷移を達成するための行動を逆算する強化学習を提案。行動への依存を外し転移を行いやすくする(行動を1つ増やした~というケースに対応しやすい)。DDPGの戦略π(s)を遷移関数τ(s)に置き換えることでOff-policyで学習する"]} {"source": "While most machine translation systems to date are trained on large parallel corpora, humans learn language in a different way: by being grounded in an environment and interacting with other humans. In this work, we propose a communication game where two agents, native speakers of their own respective languages, jointly learn to solve a visual referential task. We find that the ability to understand and translate a foreign language emerges as a means to achieve shared goals. The emergent translation is interactive and multimodal, and crucially does not require parallel corpora, but only monolingual, independent text and corresponding images. Our proposed translation model achieves this by grounding the source and target languages into a shared visual modality, and outperforms several baselines on both word-level and sentence-level translation tasks. Furthermore, we show that agents in a multilingual community learn to translate better and faster than in a bilingual communication setting.", "target": ["翻訳モデルを対話的に学習する研究。人間がコミュニケーションを通じて言語を学ぶ過程を参考にしている。画像と説明のペアから画像=>自言語、相手言語=>画像の学習をそれぞれ行う。翻訳を行う際は相手言語=>画像=>自言語の順で学習(※画像は潜在表現)。多言語で学習したほうが学習速度が速いとのこと"]} {"source": "Though anomaly detection (AD) can be viewed as a classification problem (nominal vs. anomalous) it is usually treated in an unsupervised manner since one typically does not have access to, or it is infeasible to utilize, a dataset that sufficiently characterizes what it means to be \"anomalous.\" In this paper we present results demonstrating that this intuition surprisingly seems not to extend to deep AD on images. For a recent AD benchmark on ImageNet, classifiers trained to discern between normal samples and just a few (64) random natural images are able to outperform the current state of the art in deep AD. 
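A minimal sketch of the binary-classification view described above (not the authors' code): treat nominal training data as one class and a handful of random natural images, e.g. 64, as the other, then rank test points by the classifier's anomaly probability. Feature extraction is assumed to happen elsewhere, and the helper name is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_few_shot_anomaly_scorer(normal_feats, outlier_feats):
    # normal_feats: (N, d) features of nominal images.
    # outlier_feats: (~64, d) features of random natural images standing in for anomalies.
    X = np.vstack([normal_feats, outlier_feats])
    y = np.concatenate([np.zeros(len(normal_feats)), np.ones(len(outlier_feats))])
    clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
    return lambda feats: clf.predict_proba(feats)[:, 1]  # higher score = more anomalous

# Toy usage with random features standing in for a real feature extractor.
score = fit_few_shot_anomaly_scorer(np.random.randn(1000, 128), np.random.randn(64, 128))
anomaly_scores = score(np.random.randn(10, 128))
```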
Experimentally we discover that the multiscale structure of image data makes example anomalies exceptionally informative.", "target": ["異常検知は「異常」データを十分集めるのが困難なため教師なしで行われることが多いが、画像の異常検知では少数サンプル(64~128)の教師ありで教師なしSOTAモデルを超えられるという研究結果。複数解像度の特徴が識別能力に影響しているようで、画像をぼかすと教師ありの性能はガクッと落ちる。"]} {"source": "The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While this enabled us to train large-scale neural networks in datacenters and deploy them on edge devices, the focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully crafted \\boldsymbol{sponge}~\\boldsymbol{examples}, which are inputs designed to maximise energy consumption and latency. We mount two variants of this attack on established vision and language models, increasing energy consumption by a factor of 10 to 200. Our attacks can also be used to delay decisions where a network has critical real-time performance, such as in perception for autonomous vehicles. We demonstrate the portability of our malicious inputs across CPUs and a variety of hardware accelerator chips including GPUs, and an ASIC simulator. We conclude by proposing a defense strategy which mitigates our attack by shifting the analysis of energy consumption in hardware from an average-case to a worst-case perspective.", "target": ["ニューラルネットワーク(NN)に対するDoS攻撃。NNへの入力データを細工し、推論時間やエネルギー消費の増大を図る。検証では、エネルギー消費が10倍から200倍に増加したケースを確認したとのこと。自動走行車のように推論遅延が許容されないケースでは、深刻な問題になる可能性がある。"]} {"source": "Object detection is a crucial task in computer vision systems with a wide range of applications in autonomous driving, medical imaging, retail, security, face recognition, robotics, and others. Nowadays, neural network-based models are used to localize and classify instances of objects of particular classes. When real-time inference is not required, ensembles of models help to achieve better results. In this work, we present a novel method for combining predictions of object detection models: weighted boxes fusion. Our algorithm utilizes confidence scores of all proposed bounding boxes to construct the averaged boxes. We tested our method on several datasets and evaluated it in the context of the Open Images and COCO Object Detection tracks, achieving top results in these challenges. The 3D version of boxes fusion was successfully applied by the winning teams of Waymo Open Dataset and Lyft 3D Object Detection for Autonomous Vehicles challenges. The source code is publicly available at https://github.com/ZFTurbo/Weighted-Boxes-Fusion.", "target": ["物体検出のタスクにおける、アンサンブル手法の提案。 候補となるBounding BoxをConfidence Scoreとアンサンブルのモデル数を利用した重み付けを行い、Bounding Boxを推定するWeighted Boxes Fusion(WBF)を提案。"]} {"source": "The two dominant approaches to neural text generation are fully autoregressive models, using serial beam search decoding, and non-autoregressive models, using parallel decoding with no output dependencies. This work proposes an autoregressive model with sub-linear parallel time generation. Noting that conditional random fields with bounded context can be decoded in parallel, we propose an efficient cascaded decoding approach for generating high-quality output. To parameterize this cascade, we introduce a Markov transformer, a variant of the popular fully autoregressive model that allows us to simultaneously decode with specific autoregressive context cutoffs.
This approach requires only a small modification from standard autoregressive training, while showing competitive accuracy/speed tradeoff compared to existing methods on five machine translation datasets.", "target": ["高速な文生成を可能にする手法の提案。生成対象系列を区切り階層状(Cascade)に条件付けすることで並列化、かつ連続する語の空間を刈り込む(可能性が低いn-gramを除外する)ことでさらに速度を上げている。区切った範囲内での生成を学習するため境界で状態リセットを行うMarkov transformerを提案。"]} {"source": "Can we automatically group images into semantically meaningful clusters when ground-truth annotations are absent? The task of unsupervised image classification remains an important, and open challenge in computer vision. Several recent approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works, and advocate a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obtain semantically meaningful features. Second, we use the obtained features as a prior in a learnable clustering approach. In doing so, we remove the ability for cluster learning to depend on low-level features, which is present in current end-to-end learning approaches. Experimental evaluation shows that we outperform state-of-the-art methods by large margins, in particular +26.6% on CIFAR10, +25.0% on CIFAR100-20 and +21.3% on STL10 in terms of classification accuracy. Furthermore, our method is the first to perform well on a large-scale dataset for image classification. In particular, we obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime without the use of any ground-truth annotations. The code is made publicly available at this https URL.", "target": ["教師なし学習でどこまでクラス分類ができるか検証した研究。疑似タスクによる表現学習を行った後、サンプルが周辺含め同一クラスになるよう制約する損失関数で学習する。これにより局所特徴に依存しないクラスタリングを可能にする。ImageNetでランダムに選択した2000クラスに対しtop-1 69.3%を達成。"]} {"source": "A multitude of cyber-physical system (CPS) applications, including design, control, diagnosis, prognostics, and a host of other problems, are predicated on the assumption of model availability. There are mainly two approaches to modeling: Physics/Equation based modeling (Model-Based, MB) and Machine Learning (ML). Recently, there is a growing consensus that ML methodologies relying on data need to be coupled with prior scientific knowledge (or physics, MB) for modeling CPS. We refer to the paradigm that combines MB approaches with ML as hybrid learning methods. Hybrid modeling (HB) methods is a growing field within both the ML and scientific communities, and are recognized as an important emerging but nascent area of research. Recently, several works have attempted to merge MB and ML models for the complete exploitation of their combined potential. However, the research literature is scattered and unorganized. So, we make a meticulous and systematic attempt at organizing and standardizing the methods of combining ML and MB models. In addition to that, we outline five metrics for the comprehensive evaluation of hybrid models. Finally, we conclude by shedding some light on the challenges of hybrid models, which we, as a research community, should focus on for harnessing the full potential of hybrid models. 
An additional feature of this survey is that the hybrid modeling work has been discussed with a focus on modeling cyber-physical systems.", "target": ["物理観測から物理反応を返すCyber-Physicalなシステムを構築するために、既存のモデル(理論)ベースと機械学習ベースのミックスを提案しているサーベイ。相互補完的な関係、組み合わせ方の類型、今後取り組むべきポイントがまとめられている。"]} {"source": "Technologies for the longitudinal monitoring of a person’s health are poorly integrated with clinical workflows, and have rarely produced actionable biometric data for healthcare providers. Here, we describe easily deployable hardware and software for the long-term analysis of a user’s excreta through data collection and models of human health. The ‘smart’ toilet, which is self-contained and operates autonomously by leveraging pressure and motion sensors, analyses the user’s urine using a standard-of-care colorimetric assay that traces red–green–blue values from images of urinalysis strips, calculates the flow rate and volume of urine using computer vision as a uroflowmeter, and classifies stool according to the Bristol stool form scale using deep learning, with performance that is comparable to the performance of trained medical personnel. Each user of the toilet is identified through their fingerprint and the distinctive features of their anoderm, and the data are securely stored and analysed in an encrypted cloud server. The toilet may find uses in the screening, diagnosis and longitudinal monitoring of specific patient populations.", "target": ["トイレの神様・・・でなくトイレのAIの研究。大と小いずれも活用し、医療従事者と同等の診断を行う。小の場合比色検査のスペクトルから、大の場合形状を使用して診断を行う。"]} {"source": "Penetration testing is a security exercise aimed at assessing the security of a system by simulating attacks against it. So far, penetration testing has been carried out mainly by trained human attackers and its success critically depended on the available expertise. Automating this practice constitutes a non-trivial problem, as the range of actions that a human expert may attempts against a system and the range of knowledge she relies on to take her decisions are hard to capture. In this paper, we focus our attention on simplified penetration testing problems expressed in the form of capture the flag hacking challenges, and we analyze how model-free reinforcement learning algorithms may help to solve them. In modeling these capture the flag competitions as reinforcement learning problems we highlight that a specific challenge that characterize penetration testing is the problem of discovering the structure of the problem at hand. We then show how this challenge may be eased by relying on different forms of prior knowledge that may be provided to the agent. In this way we demonstrate how the feasibility of tackling penetration testing using reinforcement learning may rest on a careful trade-off between model-free and model-based algorithms. By using techniques to inject a priori knowledge, we show it is possible to better direct the agent and restrict the space of its exploration problem, thus achieving solutions more efficiently.", "target": ["システムに侵入可能か試みるペネトレーションテストに強化学習を適用する研究。ポートスキャンを実行し(脆弱性のある)サービスを検知するという基本の初手に強化学習を適用している。スキャンを走らせるとサーバー側がポートを変えるなど防衛側の処置に対応できるかも見ている。"]} {"source": "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. 
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.", "target": ["1750億のパラメーターを持つ巨大な言語モデルGPT-3で事前学習を行い、Few-shot(0~20,30程度)で様々なタスクを解いた研究。GPT-2は15億なので約100倍。SOTAには及ばないが、データセット全件でBERTをFine-Tuneした結果と同等の精度がFew-shotで出せることを確認。バイアス等の調査結果も記載されている"]} {"source": "Recent work has found evidence that Multilingual BERT (mBERT), a transformer-based multilingual masked language model, is capable of zero-shot cross-lingual transfer, suggesting that some aspects of its representations are shared cross-lingually. To better understand this overlap, we extend recent work on finding syntactic trees in neural networks' internal representations to the multilingual setting. We show that subspaces of mBERT representations recover syntactic tree distances in languages other than English, and that these subspaces are approximately shared across languages. Motivated by these results, we present an unsupervised analysis method that provides evidence mBERT learns representations of syntactic dependency labels, in the form of clusters which largely agree with the Universal Dependencies taxonomy. This evidence suggests that even without explicit supervision, multilingual masked language models learn certain linguistic universals.", "target": ["多言語のBERTで文法知識がどのように蓄積されているかを調べた研究。各言語の文法知識は潜在領域で供用されており、そのためある言語の学習で他言語の文法知識が復元できる現象を観測。また潜在領域の状態がUniversal Dependencyに近いとしている。"]} {"source": "We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. 
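To make the query-based parallel set prediction concrete, here is a heavily simplified PyTorch sketch in the spirit of the architecture described above; positional encodings and the bipartite-matching loss are omitted, and it is illustrative only, not the official implementation.

```python
import torch
from torch import nn
from torchvision.models import resnet50

class MinimalDETR(nn.Module):
    def __init__(self, num_classes, num_queries=100, d=256):
        super().__init__()
        backbone = resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # CNN feature map
        self.proj = nn.Conv2d(2048, d, kernel_size=1)
        self.transformer = nn.Transformer(d_model=d, nhead=8,
                                          num_encoder_layers=6, num_decoder_layers=6)
        self.query_embed = nn.Parameter(torch.rand(num_queries, d))  # learned object queries
        self.class_head = nn.Linear(d, num_classes + 1)              # +1 for "no object"
        self.bbox_head = nn.Linear(d, 4)                             # (cx, cy, w, h) in [0, 1]

    def forward(self, images):                                       # images: (B, 3, H, W)
        feats = self.proj(self.backbone(images))                     # (B, d, h, w)
        B = feats.shape[0]
        src = feats.flatten(2).permute(2, 0, 1)                      # (h*w, B, d) image tokens
        tgt = self.query_embed.unsqueeze(1).repeat(1, B, 1)          # (num_queries, B, d)
        hs = self.transformer(src, tgt)                              # all queries decoded in parallel
        return self.class_head(hs), self.bbox_head(hs).sigmoid()     # per-query class + box
```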
The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available at this https URL.", "target": ["物体検出を予測領域と実領域(ラベル)との組み合わせ問題として解く研究。CNNで得た特徴と空間位置を表すpositional encoding(PE)をTransformer(Encoder)に入力し、出力をTransformer(Decoder)に渡し物体検出用PEでクエリする。Decoderはクラス確率と物体(中央)位置を出力。ラベルとの適合から学習する"]} {"source": "Deep learning (DL) has great influence on large parts of science and increasingly established itself as an adaptive method for new challenges in the field of Earth observation (EO). Nevertheless, the entry barriers for EO researchers are high due to the dense and rapidly developing field mainly driven by advances in computer vision (CV). To lower the barriers for researchers in EO, this review gives an overview of the evolution of DL with a focus on image segmentation and object detection in convolutional neural networks (CNN). The survey starts in 2012, when a CNN set new standards in image recognition, and lasts until late 2019. Thereby, we highlight the connections between the most important CNN architectures and cornerstones coming from CV in order to alleviate the evaluation of modern DL models. Furthermore, we briefly outline the evolution of the most popular DL frameworks and provide a summary of datasets in EO. By discussing well performing DL architectures on these datasets as well as reflecting on advances made in CV and their impact on future research in EO, we narrow the gap between the reviewed, theoretical concepts from CV and practical application in EO.", "target": ["地球観測にCNNを利用する手法のサーベイ(衛星画像への適用など)。サーベイの前半は通常のCNNのサーベイになっているが、非常に丁寧にまとめられておりこれだけでも価値がある。地球観測ではパッチの分類/物体検出の適用例などが紹介されている。"]} {"source": "Recent studies propose membership inference (MI) attacks on deep models, where the goal is to infer if a sample has been used in the training process. Despite their apparent success, these studies only report accuracy, precision, and recall of the positive class (member class). Hence, the performance of these attacks have not been clearly reported on negative class (non-member class). In this paper, we show that the way the MI attack performance has been reported is often misleading because they suffer from high false positive rate or false alarm rate (FAR) that has not been reported. FAR shows how often the attack model mislabel non-training samples (non-member) as training (member) ones. The high FAR makes MI attacks fundamentally impractical, which is particularly more significant for tasks such as membership inference where the majority of samples in reality belong to the negative (non-training) class. Moreover, we show that the current MI attack models can only identify the membership of misclassified samples with mediocre accuracy at best, which only constitute a very small portion of training samples. We analyze several new features that have not been comprehensively explored for membership inference before, including distance to the decision boundary and gradient norms, and conclude that deep models' responses are mostly similar among train and non-train samples. 
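As an illustration of the per-example signals mentioned above, loss and gradient-norm features of a sample could be computed roughly as follows (a PyTorch sketch, not the paper's code; `membership_features` is a name invented here):

```python
import torch
import torch.nn.functional as F

def membership_features(model, x, y):
    # x: a single input tensor; y: its class index as a 0-d long tensor.
    # Returns the loss and the norm of its gradient w.r.t. the model parameters,
    # two features commonly fed to a membership-inference attack model.
    logits = model(x.unsqueeze(0))
    loss = F.cross_entropy(logits, y.unsqueeze(0))
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    return loss.item(), grad_norm.item()
```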
We conduct several experiments on image classification tasks, including MNIST, CIFAR-10, CIFAR-100, and ImageNet, using various model architecture, including LeNet, AlexNet, ResNet, etc. We show that the current state-of-the-art MI attacks cannot achieve high accuracy and low FAR at the same time, even when the attacker is given several advantages. The source code is available at this https URL.", "target": ["DNNモデルの学習データを推測するメンバーシップ推論攻撃の実用性を検証した論文。本攻撃はモデルに入力される学習サンプル/非学習サンプルで応答が異なることが前提だが、様々なデータセット/アーキテクチャで検証した結果、応答には殆ど差がなく、本攻撃は現実的ではないと結論付けている。"]} {"source": "Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.", "target": ["事前学習済みモデルが内包する知識を明確にしてQAを行う手法。End2Endだと根拠がわからないため、Queryをベクトル化し(同様にベクトル化した)Wikipediaの記事と近いものを選択、選択した記事からAnswerを生成する。学習はQAペアのみから行い選択されるべき記事は指定しない。open-domain QAでSOTA"]} {"source": "Building rich machine learning datasets in a scalable manner often necessitates a crowd-sourced data collection pipeline. In this work, we use human studies to investigate the consequences of employing such a pipeline, focusing on the popular ImageNet dataset. We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset---including the introduction of biases that state-of-the-art models exploit. Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for. Finally, our findings emphasize the need to augment our current model training and evaluation toolkit to take such misalignments into account. 
To facilitate further research, we release our refined ImageNet annotations at this https URL.", "target": ["ImageNetのラベル付けに関する研究。画像内に複数画像があるケース(20%存在)、適切なラベルがないケースがありtop-1の精度だけでは評価できないと指摘。データセット/ラベルをスケールさせるほどアノテーターにも見分けがつかなくなる矛盾も指摘している。"]} {"source": "Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.", "target": ["ImageNetのような大規模データセットで学習したモデルを効率的に転移するためのTips。データセットが大きい場合はモデルも大きくして事前学習、学習率は急に落とさずかといって低すぎず(収束を速める必要はない)、バッチは大きく正規化はGN/WSが良好との結果。"]} {"source": "Simulation is a crucial component of any robotic system. In order to simulate correctly, we need to write complex rules of the environment: how dynamic agents behave, and how the actions of each of the agents affect the behavior of others. In this paper, we aim to learn a simulator by simply watching an agent interact with an environment. We focus on graphics games as a proxy of the real environment. We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training. Given a key pressed by the agent, GameGAN \"renders\" the next screen using a carefully designed generative adversarial network. Our approach offers key advantages over existing work: we design a memory module that builds an internal map of the environment, allowing for the agent to return to previously visited locations with high visual consistency. In addition, GameGAN is able to disentangle static and dynamic components within an image making the behavior of the model more interpretable, and relevant for downstream tasks that require explicit reasoning over dynamic elements. This enables many interesting applications such as swapping different components of the game to build new games that do not exist.", "target": ["GANを使用してゲーム環境をエミュレートした研究。ゲームエンジンなしに、画像のみからPAC-MANの挙動を模倣することに成功。ゲーム画面の遷移を再帰的に処理するDynamics Engineで状態を管理し、現在/過去状態の履歴から画面の生成を行う。"]} {"source": "In this work we present a monocular visual odometry (VO) algorithm which leverages geometry-based methods and deep learning. Most existing VO/SLAM systems with superior performance are based on geometry and have to be carefully designed for different application scenarios. Moreover, most monocular systems suffer from scale-drift issue.Some recent deep learning works learn VO in an end-to-end manner but the performance of these deep systems is still not comparable to geometry-based methods. In this work, we revisit the basics of VO and explore the right way for integrating deep learning with epipolar geometry and Perspective-n-Point (PnP) method. 
Specifically, we train two convolutional neural networks (CNNs) for estimating single-view depths and two-view optical flows as intermediate outputs. With the deep predictions, we design a simple but robust frame-to-frame VO algorithm (DF-VO) which outperforms pure deep learning-based and geometry-based methods. More importantly, our system does not suffer from the scale-drift issue being aided by a scale consistent single-view depth CNN. Extensive experiments on KITTI dataset shows the robustness of our system and a detailed ablation study shows the effect of different factors in our system.", "target": ["DNNと既存の幾何的手法を組み合わせて自己位置推定を行う研究。Optical Flow用とDepth(単眼)用2つのCNNを用意し、2D-2Dの点一致を前者、2D-3Dの一致を後者で行いカメラ位置を推定する。2D-3Dの一致を取ることでスケールの誤差蓄積(スケールドリフト)を防止している。"]} {"source": "Machine learning (ML) is increasingly being used in image retrieval systems for medical decision making. One application of ML is to retrieve visually similar medical images from past patients (e.g. tissue from biopsies) to reference when making a medical decision with a new patient. However, no algorithm can perfectly capture an expert's ideal notion of similarity for every case: an image that is algorithmically determined to be similar may not be medically relevant to a doctor's specific diagnostic needs. In this paper, we identified the needs of pathologists when searching for similar images retrieved using a deep learning algorithm, and developed tools that empower users to cope with the search algorithm on-the-fly, communicating what types of similarity are most important at different moments in time. In two evaluations with pathologists, we found that these refinement tools increased the diagnostic utility of images found and increased user trust in the algorithm. The tools were preferred over a traditional interface, without a loss in diagnostic accuracy. We also observed that users adopted new strategies when using refinement tools, re-purposing them to test and understand the underlying algorithm and to disambiguate ML errors from their own errors. Taken together, these findings inform future human-ML collaborative systems for expert decision-making.", "target": ["ガンの病理診断時に過去症例と似た画像を検索するシステムの提案。診断者は複数の仮説を立て過去症例等をもとに絞り込みを行っていくが、各仮説で重視したい類似点が異なるため、画像検索時に強調したいポイントを指定できるようにするなどの工夫をしている。"]} {"source": "Previous researches of sketches often considered sketches in pixel format and leveraged CNN based models in the sketch understanding. Fundamentally, a sketch is stored as a sequence of data points, a vector format representation, rather than the photo-realistic image of pixels. SketchRNN studied a generative neural representation for sketches of vector format by Long Short Term Memory networks (LSTM). Unfortunately, the representation learned by SketchRNN is primarily for the generation tasks, rather than the other tasks of recognition and retrieval of sketches. To this end and inspired by the recent BERT model, we present a model of learning Sketch Bidirectional Encoder Representation from Transformer (Sketch-BERT). We generalize BERT to sketch domain, with the novel proposed components and pre-training algorithms, including the newly designed sketch embedding networks, and the self-supervised learning of sketch gestalt. Particularly, towards the pre-training task, we present a novel Sketch Gestalt Model (SGM) to help train the Sketch-BERT. 
Experimentally, we show that the learned representation of Sketch-BERT can help and improve the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.", "target": ["BERT(#959 )の技法をスケッチの軌跡系列に適用した研究。描画線の生成はLSTMなどが使われるが、スケッチの分類/検索ではリッチな潜在表現が求められるためBERTを使用。穴埋め箇所を埋めるように、消した描画を復元する形で学習し、分類/検索のタスク種別は入力の頭に付与する。"]} {"source": "Modern reinforcement learning algorithms can learn solutions to increasingly difficult control problems while at the same time reduce the amount of prior knowledge needed for their application. One of the remaining challenges is the definition of reward schemes that appropriately facilitate exploration without biasing the solution in undesirable ways, and that can be implemented on real robotic systems without expensive instrumentation. In this paper we focus on a setting in which goal tasks are defined via simple sparse rewards, and exploration is facilitated via agent-internal auxiliary tasks. We introduce the idea of simple sensor intentions (SSIs) as a generic way to define auxiliary tasks. SSIs reduce the amount of prior knowledge that is required to define suitable rewards. They can further be computed directly from raw sensor streams and thus do not require expensive and possibly brittle state estimation on real systems. We demonstrate that a learning system based on these rewards can solve complex robotic tasks in simulation and in real world settings. In particular, we show that a real robotic arm can learn to grasp and lift and solve a Ball-in-a-Cup task from scratch, when only raw sensor streams are used for both controller input and in the auxiliary reward definition.", "target": ["強化学習の探索を促進する補助タスクとして、センサーの反応を使用する研究。状態をスカラー値(センサー値)に変換し、特異な反応(最大/最小値など)が得られた場合に報酬を与える。状態を全て値系列(センサーストリーム)とすることで事前知識なしに効率的な学習が可能。"]} {"source": "Natural language is perhaps the most flexible and intuitive way for humans to communicate tasks to a robot. Prior work in imitation learning typically requires each task be specified with a task id or goal image -- something that is often impractical in open-world environments. On the other hand, previous approaches in instruction following allow agent behavior to be guided by language, but typically assume structure in the observations, actuators, or language that limit their applicability to complex settings like robotics. In this work, we present a method for incorporating free-form natural language conditioning into imitation learning. Our approach learns perception from pixels, natural language understanding, and multitask continuous control end-to-end as a single neural network. Unlike prior work in imitation learning, our method is able to incorporate unlabeled and unstructured demonstration data (i.e. no task or language labels). We show this dramatically improves language conditioned performance, while reducing the cost of language annotation to less than 1% of total data. At test time, a single language conditioned visuomotor policy trained with our method can perform a wide variety of robotic manipulation skills in a 3D environment, specified only with natural language descriptions of each task (e.g. \"open the drawer.now pick up the block.now press the green button\"). To scale up the number of instructions an agent can follow, we propose combining text conditioned policies with large pretrained neural language models. We find this allows a policy to be robust to many out-of-distribution synonym instructions, without requiring new demonstrations. 
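A minimal sketch of conditioning a policy on a pretrained language model, as suggested above: the instruction is embedded by a frozen text encoder and concatenated with the observation. The encoder interface and all names here are assumptions for illustration, not the paper's code.

```python
import torch
from torch import nn

class LanguageConditionedPolicy(nn.Module):
    # `text_encoder` is assumed to be any frozen pretrained sentence encoder that
    # maps an instruction string to a 1-D embedding of size `text_dim`.
    def __init__(self, text_encoder, text_dim, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.text_encoder = text_encoder
        self.net = nn.Sequential(
            nn.Linear(obs_dim + text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, instruction):
        with torch.no_grad():                   # keep the language model frozen
            z = self.text_encoder(instruction)  # e.g. embedding of "open the drawer"
        return self.net(torch.cat([obs, z], dim=-1))
```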
See videos of a human typing live text commands to our agent at this http URL", "target": ["自然言語でロボットに指示を出す研究。指示=>操作の順でデータをそろえるのは大変なため、操作⇒指示の順でデータを作成している(操作のシーケンスを見た後、後付けで指示を作成する)。各タスクの画像/言語/idをEncodeして行動の条件付けに使うが、テスト時は言語のみを使用する。"]} {"source": "Adversarial examples are data points misclassified by neural networks. Originally, adversarial examples were limited to adding small perturbations to a given image. Recent work introduced the generalized concept of unrestricted adversarial examples, without limits on the added perturbations. In this paper, we introduce a new category of attacks that create unrestricted adversarial examples for object detection. Our key idea is to generate adversarial objects that are unrelated to the classes identified by the target object detector. Different from previous attacks, we use off-the-shelf Generative Adversarial Networks (GAN), without requiring any further training or modification. Our method consists of searching over the latent normal space of the GAN for adversarial objects that are wrongly identified by the target object detector. We evaluate this method on the commonly used Faster R-CNN ResNet-101, Inception v2 and SSD Mobilenet v1 object detectors using logo generative iWGAN-LC and SNGAN trained on CIFAR-10. The empirical results show that the generated adversarial objects are indistinguishable from non-adversarial objects generated by the GANs, transferable between the object detectors and robust in the physical world. This is the first work to study unrestricted false positive adversarial examples for object detection.", "target": ["物体検知器の敵対的サンプル(AE)を作成する新手法。従来の手法は、微細な摂動を加えたシールを物体に貼っていたが、それでも人間の目には不自然に映ることがあった。そこで本手法では、GANを使用して「ロゴ風のAE」を作成する。外観は単なるロゴに見えるため、摂動の加え方に制限がない。"]} {"source": "This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that addresses the fundamental limitations of instance-wise contrastive learning. PCL not only learns low-level features for the task of instance discrimination, but more importantly, it implicitly encodes semantic structures of the data into the learned embedding space. Specifically, we introduce prototypes as latent variables to help find the maximum-likelihood estimation of the network parameters in an Expectation-Maximization framework. We iteratively perform E-step as finding the distribution of prototypes via clustering and M-step as optimizing the network via contrastive learning. We propose ProtoNCE loss, a generalized version of the InfoNCE loss for contrastive learning, which encourages representations to be closer to their assigned prototypes. PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks with substantial improvement in low-resource transfer learning. Code and pretrained models are available at this https URL.", "target": ["データ単体同士の近さでなく、構造上の近さを表せる表現を学習する手法(馬に乗っている人、なら「馬」と「人」それぞれに近いなど)。クラスタリング(k-means)を行い中心点(プロトタイプ=「馬」「人」等に相当)を計算、プロトタイプに基づく尤度最大化、を交互に繰り返し(EMアルゴリズム)最適化を行う"]} {"source": "Reinforcement learning allows solving complex tasks, however, the learning tends to be task-specific and the sample efficiency remains a challenge. We present Plan2Explore, a self-supervised reinforcement learning agent that tackles both these challenges through a new approach to self-supervised exploration and fast adaptation to new tasks, which need not be known during exploration. 
During exploration, unlike prior methods which retrospectively compute the novelty of observations after the agent has already reached them, our agent acts efficiently by leveraging planning to seek out expected future novelty. After exploration, the agent quickly adapts to multiple downstream tasks in a zero or a few-shot manner. We evaluate on challenging control tasks from high-dimensional image inputs. Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods, and in fact, almost matches the performances oracle which has access to rewards. Videos and code at this https URL", "target": ["World Modelを使用してタスク独立の探索方法を学習し転移性能を高める研究。探索を促す内発的報酬として、複数モデル間の状態予測がどれだけ不一致かを使用している。World Modelと(タスク独立に)収集された軌跡をもとに(タスク固有の報酬関数へ)転移を行う。DM ControlのZero shotタスクで効果を確認"]} {"source": "Pretrained masked language models (MLMs) require finetuning for most NLP tasks. Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood scores (PLLs), which are computed by masking tokens one by one. We show that PLLs outperform scores from autoregressive language models like GPT-2 in a variety of tasks. By rescoring ASR and NMT hypotheses, RoBERTa reduces an end-to-end LibriSpeech model's WER by 30% relative and adds up to +1.7 BLEU on state-of-the-art baselines for low-resource translation pairs, with further gains from domain adaptation. We attribute this success to PLL's unsupervised expression of linguistic acceptability without a left-to-right bias, greatly improving on scores from GPT-2 (+10 points on island effects, NPI licensing in BLiMP). One can finetune MLMs to give scores without masking, enabling computation in a single inference pass. In all, PLLs and their associated pseudo-perplexities (PPPLs) enable plug-and-play use of the growing number of pretrained MLMs; e.g., we use a single cross-lingual model to rescore translations in multiple languages. We release our library for language model scoring at this https URL.", "target": ["穴埋め形式の言語モデル(Masked Language Model)で各単語の出現確率(疑似尤度)を算出して系列生成のスコアリングに用いることで、精度を向上させる研究。音声認識・翻訳などで効果を確認。"]} {"source": "DeepRobust is a PyTorch adversarial learning library which aims to build a comprehensive and easy-to-use platform to foster this research field. It currently contains more than 10 attack algorithms and 8 defense algorithms in image domain and 9 attack algorithms and 4 defense algorithms in graph domain, under a variety of deep learning architectures. In this manual, we introduce the main contents of DeepRobust with detailed instructions. The library is kept updated and can be found at this https URL.", "target": ["PyTorch用の敵対的学習ライブラリ。DNNベースの画像分類器に対する10以上の攻撃手法と8つの防御手法、および、GNNに対する9つの攻撃手法と4つの防御手法を検証することが可能。オープンソースで公開されている。"]} {"source": "Modern deep learning models for NLP are notoriously opaque. This has motivated the development of methods for interpreting such models, e.g., via gradient-based saliency maps or the visualization of attention weights. Such approaches aim to provide explanations for a particular model prediction by highlighting important words in the corresponding input text. While this might be useful for tasks where decisions are explicitly influenced by individual tokens in the input, we suspect that such highlighting is not suitable for tasks where model decisions should be driven by more complex reasoning. In this work, we investigate the use of influence functions for NLP, providing an alternative approach to interpreting neural text classifiers. 
Influence functions explain the decisions of a model by identifying influential training examples. Despite the promise of this approach, influence functions have not yet been extensively evaluated in the context of NLP, a gap addressed by this work. We conduct a comparison between influence functions and common word-saliency methods on representative tasks. As suspected, we find that influence functions are particularly useful for natural language inference, a task in which 'saliency maps' may not have clear interpretation. Furthermore, we develop a new quantitative measure based on influence functions that can reveal artifacts in training data.", "target": ["自然言語処理モデルの推論根拠を特定するために、各学習データの影響度を調べる手法。ある学習データを抜いた時にモデルのパラメーターが大きく変わるほど影響度が高い=モデルの判断根拠となっているとする。一般的なSaliencyと併用してモデルを解析できるという。"]} {"source": "Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at this https URL.", "target": ["教師なしの表現学習で使われることが多いContrastive lossを教師あり学習に導入し、一般的なCross entropy lossよりも高い精度が記録できたという研究。ラベルがあるためAugmentationよりも多様性に富んだpositive、(サンプリングによらない)明らかなnegativeを選択することができる。"]} {"source": "Humans are highly adept at walking in environments with foot placement constraints, including stepping-stone scenarios where the footstep locations are fully constrained. Finding good solutions to stepping-stone locomotion is a longstanding and fundamental challenge for animation and robotics. We present fully learned solutions to this difficult problem using reinforcement learning. We demonstrate the importance of a curriculum for efficient learning and evaluate four possible curriculum choices compared to a non-curriculum baseline. Results are presented for a simulated human character, a realistic bipedal robot simulation and a monster character, in each case producing robust, plausible motions for challenging stepping stone sequences and terrains.", "target": ["二足歩行を学習するためにカリキュラム学習を使用した研究。動的に生成される石を踏み外さないよう2ステップごとに予測を行うが、石の生成位置をだんだん難しくしていく。一定範囲でuniformにサンプル、範囲境界(=新規の箇所)でサンプル、生成位置に対する戦略の報酬に応じサンプルの3手法を試している"]} {"source": "Despite the remarkable success of deep neural networks, significant concerns have emerged about their robustness to adversarial perturbations to inputs. While most attacks aim to ensure that these are imperceptible, physical perturbation attacks typically aim for being unsuspicious, even if perceptible. 
However, there is no universal notion of what it means for adversarial examples to be unsuspicious. We propose an approach for modeling suspiciousness by leveraging cognitive salience. Specifically, we split an image into foreground (salient region) and background (the rest), and allow significantly larger adversarial perturbations in the background, while ensuring that cognitive salience of background remains low. We describe how to compute the resulting non-salience-preserving dual-perturbation attacks on classifiers. We then experimentally demonstrate that our attacks indeed do not significantly change perceptual salience of the background, but are highly effective against classifiers robust to conventional attacks. Furthermore, we show that adversarial training with dual-perturbation attacks yields classifiers that are more robust to these than state-of-the-art robust learning approaches, and comparable in terms of robustness to conventional attacks.", "target": ["敵対的サンプルの新たな作成手法と対策の提案。認知心理学の特徴統合理論に基づいて敵対的サンプルを作成する。画像の前景と背景では、人間の注意は前景に注がれる傾向がある。よって、前景には小さな摂動、背景には大きな摂動を加えることで、人間の目に不審に思われない敵対的サンプルを作成可能。"]} {"source": "We investigate a new method for injecting backdoors into machine learning models, based on compromising the loss-value computation in the model-training code. We use it to demonstrate new classes of backdoors strictly more powerful than those in the prior literature: single-pixel and physical backdoors in ImageNet models, backdoors that switch the model to a covert, privacy-violating task, and backdoors that do not require inference-time input modifications. Our attack is blind: the attacker cannot modify the training data, nor observe the execution of his code, nor access the resulting model. The attack code creates poisoned training inputs \"on the fly,\" as the model is training, and uses multi-objective optimization to achieve high accuracy on both the main and backdoor tasks. We show how a blind attack can evade any known defense and propose new ones.", "target": ["DNNモデルにバックドアを設置する手法。OSSのコードが機械学習パイプラインに含まれることが多いことに着目し、細工したOSSの損失計算コードを標的の機械学習パイプラインに含ませることでバックドアを設置する。損失計算コードは大規模で複雑なため、コード監査で細工を検知するのは困難。"]} {"source": "Generative adversarial networks (GANs) are a powerful approach to unsupervised learning. They have achieved state-of-the-art performance in the image domain. However, GANs are limited in two ways. They often learn distributions with low support---a phenomenon known as mode collapse---and they do not guarantee the existence of a probability density, which makes evaluating generalization using predictive log-likelihood impossible. In this paper, we develop the prescribed GAN (PresGAN) to address these shortcomings. PresGANs add noise to the output of a density network and optimize an entropy-regularized adversarial loss. The added noise renders tractable approximations of the predictive log-likelihood and stabilizes the training procedure. The entropy regularizer encourages PresGANs to capture all the modes of the data distribution. Fitting PresGANs involves computing the intractable gradients of the entropy regularization term; PresGANs sidestep this intractability using unbiased stochastic estimates. We evaluate PresGANs on several datasets and found they mitigate mode collapse and generate samples with high perceptual quality. 
We further found that PresGANs reduce the gap in performance in terms of predictive log-likelihood between traditional GANs and variational autoencoders (VAEs).", "target": ["GANのmode collapse(似たような画像しか生成されなくなる問題)を防ぐため、lossにエントロピーの項を追加した研究。エントロピーが大きいほど分散が大きい=似た画像のみ乗せ生成を抑止できる。(周辺)エントロピーの計算は困難だが、直接不偏モンテカルロ推定で近似を行っている。"]} {"source": "Recent work has shown that some common machine learning classifiers can be compiled into Boolean circuits that have the same input-output behavior. We present a theory for unveiling the reasons behind the decisions made by Boolean classifiers and study some of its theoretical and practical implications. We define notions such as sufficient, necessary and complete reasons behind decisions, in addition to classifier and decision bias. We show how these notions can be used to evaluate counterfactual statements such as \"a decision will stick even if ... because ...\" We present efficient algorithms for computing these notions, which are based on new advances on tractable Boolean circuits, and illustrate them using a case study.", "target": ["機械学習モデルの「説明」とは何かを分類している研究。出力を決定づける(=他の特徴が揺れても出力が変化しない)特徴をSufficient、その集合をComplete、全出力に現れるSufficientをNecessaryとしている。またSufficientに根拠にしてはいけない特徴(性別など)が含まれる場合をバイアスとしている。"]} {"source": "We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model. In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy. This contrasts with the popular approach of training a non-autoregressive model on a distilled corpus consisting of the beam-searched outputs of such a teacher model. Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art non-autoregressive results on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, approaching the performance of autoregressive models.", "target": ["翻訳文の生成に着目した研究。学習済み翻訳モデルの出力から計算するエネルギー(対数周辺尤度)が最小になるよう系列を生成する推論ネットワークを取り付けている。IWSLT/WMTの一部翻訳タスクでBLEUのSOTAを達成。"]} {"source": "While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood. In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients. We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task RL problems, this approach leads to substantial gains in efficiency and performance. 
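The projection step described above can be written compactly; the sketch below is an illustrative reading of it (the actual method applies the projections in a particular random order per batch, which is glossed over here).

```python
import torch

def gradient_surgery(grads):
    # grads: list of flattened per-task gradient vectors of equal length.
    # For each pair of conflicting gradients (negative dot product), project the
    # first onto the normal plane of the second, then combine the results.
    projected = [g.clone() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:  # conflicting directions
                g_i -= dot / g_j.norm().pow(2) * g_j  # remove the conflicting component
    return torch.stack(projected).sum(dim=0)
```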
Further, it is model-agnostic and can be combined with previously-proposed multi-task architectures for enhanced performance.", "target": ["マルチタスク学習で発生する勾配の対立(Task1とTask2とで進む勾配が逆方向)を解消する手法の提案。逆方向になる場合、勾配を相手勾配の直行平面(Normal plane)に射影することで計算した競合要素を差し引く(競合要素のみ減衰し他は活かす)。シンプルな手法ながらマルチタスク強化学習の性能を大幅に改善"]} {"source": "A recent source of concern for the security of neural networks is the emergence of clean-label dataset poisoning attacks, wherein correctly labeled poison samples are injected into the training dataset. While these poison samples look legitimate to the human observer, they contain malicious characteristics that trigger a targeted misclassification during inference. We propose a scalable and transferable clean-label poisoning attack against transfer learning, which creates poison images with their center close to the target image in the feature space. Our attack, Bullseye Polytope, improves the attack success rate of the current state-of-the-art by 26.75% in end-to-end transfer learning, while increasing attack speed by a factor of 12. We further extend Bullseye Polytope to a more practical attack model by including multiple images of the same object (e.g., from different angles) when crafting the poison samples. We demonstrate that this extension improves attack transferability by over 16% to unseen images (of the same object) without using extra poison samples.", "target": ["細工画像をデータセットに注入することで、画像分類モデルにバックドアを設置する手法。バックドアのトリガーとなる物体を様々な角度から観察し、それらの画像の平均特徴量を基に細工画像を作成することで、トリガーに柔軟性を持たせている。既存手法と比べて攻撃効率が格段に向上している。"]} {"source": "We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions. Our framework introduces a principled structure for the answer choices and ties them to textual span annotations. The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English. We use this dataset to demonstrate that STARC can be leveraged for a key new application for the development of SAT-like reading comprehension materials: automatic annotation quality probing via span ablation experiments. We further show that it enables in-depth analyses and comparisons between machine and human reading comprehension behavior, including error distributions and guessing ability. Our experiments also reveal that the standard multiple choice dataset in NLP, RACE, is limited in its ability to measure reading comprehension. 47% of its questions can be guessed by machines without accessing the passage, and 18% are unanimously judged by humans as not having a unique correct answer. OneStopQA provides an alternative test set for reading comprehension which alleviates these shortcomings and has a substantially higher human ceiling performance.", "target": ["多肢選択QAデータセットの作り方提案とデータの公開。解答、解答根拠の解釈違い、解答根拠とは異なる箇所に基づく解答、参照文書にない知識に基づく解答の計4つから質問を構成する。解答根拠を削除した場合の回答率(=文書外知識による回答可能率)が既存データセットより低いことなどを確認。"]} {"source": "How does language model pretraining help transfer learning? We consider a simple ablation technique for determining the impact of each pretrained layer on transfer task performance. This method, partial reinitialization, involves replacing different layers of a pretrained model with random weights, then finetuning the entire model on the transfer task and observing the change in performance. 
This technique reveals that in BERT, layers with high probing performance on downstream GLUE tasks are neither necessary nor sufficient for high accuracy on those tasks. Furthermore, the benefit of using pretrained parameters for a layer varies dramatically with finetuning dataset size: parameters that provide tremendous performance improvement when data is plentiful may provide negligible benefits in data-scarce settings. These results reveal the complexity of the transfer learning process, highlighting the limitations of methods that operate on frozen models or single data samples.", "target": ["事前学習済み言語モデルがどのように他タスクに転移するのかを検証した研究。データ量が多い場合は下層の2~3レイヤで転移性能を十分説明でき、追加のレイヤはデータ量が少ないほど必要のようだが、タスクによって効果が変わる。文法タスク(probing)の精度はレイヤの転移性能には関係がないとのこと。"]} {"source": "We review motivations, definition, approaches, and methodology for unsupervised cross-lingual learning and call for a more rigorous position in each of them. An existing rationale for such research is based on the lack of parallel data for many of the world's languages. However, we argue that a scenario without any parallel data and abundant monolingual data is unrealistic in practice. We also discuss different training signals that have been used in previous work, which depart from the pure unsupervised setting. We then describe common methodological issues in tuning and evaluation of unsupervised cross-lingual models and present best practices. Finally, we provide a unified outlook for different types of research in this area (i.e., cross-lingual word embeddings, deep multilingual pretraining, and unsupervised machine translation) and argue for comparable evaluation of these models.", "target": ["Cross Lingualの学習に関するサーベイ。現実的に全言語間のパラレルコーパスを用意するのは困難なため、教師なしの学習を中心にまとめられている。明確な評価用データセットがない点など手法だけでなく評価の手法と課題についても触れられている。"]} {"source": "The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer. However, due to limited model capacity, their transfer performance is the weakest exactly on such low-resource languages and languages unseen during pre-training. We propose MAD-X, an adapter-based framework that enables high portability and parameter-efficient transfer to arbitrary tasks and languages by learning modular language and task representations. In addition, we introduce a novel invertible adapter architecture and a strong baseline method for adapting a pre-trained multilingual model to a new language. MAD-X outperforms the state of the art in cross-lingual transfer across a representative set of typologically diverse languages on named entity recognition and causal commonsense reasoning, and achieves competitive results on question answering. Our code and adapters are available at this http URL", "target": ["事前学習済みモデルを多言語・他タスクに効率的に対応させる手法の提案。言語ごと、タスクごとのAdapter(潜在表現の圧縮/展開を行う全結合層をResidualでつないだ構成)を用意するが、パラメーター数が過剰にならないように使いまわし(Invertibleな変換)を行う。XLM-Rに比べ多くの言語でNERのF1を上回る"]} {"source": "Interpretable rationales for model predictions play a critical role in practical applications. In this study, we develop models possessing interpretable inference process for structured prediction. Specifically, we present a method of instance-based learning that learns similarities between spans. At inference time, each span is assigned a class label based on its similar spans in the training set, where it is easy to understand how much each training instance contributes to the predictions. 
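A small sketch of the nearest-neighbour inference described above (illustrative only, with invented names): each test span gets the majority label of its most similar training spans, so every prediction can be traced back to concrete training instances.

```python
import numpy as np

def label_spans_by_neighbours(test_spans, train_spans, train_labels, k=5):
    # test_spans: (n_test, d) span representations; train_spans: (n_train, d);
    # train_labels: (n_train,) integer label ids.
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = normalize(test_spans) @ normalize(train_spans).T   # cosine similarities
    predictions = []
    for row in sims:
        nearest = np.argsort(-row)[:k]                        # most similar training spans
        predictions.append(np.bincount(train_labels[nearest]).argmax())
    return np.array(predictions), sims                        # sims shows which instances mattered
```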
Through empirical analysis on named entity recognition, we demonstrate that our method enables us to build models that have high interpretability without sacrificing performance.", "target": ["K-meansのように潜在空間上の近さから固有表現のラベルを振る手法の提案。対象スパンの潜在表現は単語/文字分散表現を入力としたBi-directionalの左右重複部分を結合して作成する。通常の分類器ベースの手法と同等精度が得られるほか、どの学習データに近いかわかるため解釈性が高まる。"]} {"source": "We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.", "target": ["離散潜在表現を学習するVQ-VAEを利用して音楽生成を行った研究。VQ-VAE-2のように潜在表現を階層状にしているが(音声の圧縮度に応じたEncoderを用意)、各階層の表現が壊れないよう全結合して出力せず個別のDecoderでそれぞれ復元・学習する。歌詞とAttentionで対応付けた学習も行っている。"]} {"source": "Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks. However, their accuracy comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high stakes decision-making domains such as healthcare. We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They perform similarly to existing state-of-the-art generalized additive models in accuracy, but can be more easily applied to real-world problems.", "target": ["複数の入力に対し複数のDNNを用意し、Additiveで統合して予測する手法の提案。これによりどの入力が重要かを解析できるなど解釈性を高めることができる。最後がAdditiveなので入力を掛け合わせた特徴が取れないが、同等精度が出せることを確認。"]} {"source": "State-of-the-art attacks on NLP models lack a shared definition of what constitutes a successful attack. We distill ideas from past work into a unified framework: a successful natural language adversarial example is a perturbation that fools the model and follows some linguistic constraints. We then analyze the outputs of two state-of-the-art synonym substitution attacks. We find that their perturbations often do not preserve semantics, and 38% introduce grammatical errors. Human surveys reveal that to successfully preserve semantics, we need to significantly increase the minimum cosine similarities between the embeddings of swapped words and between the sentence encodings of original and perturbed sentences. With constraints adjusted to better preserve semantics and grammaticality, the attack success rate drops by over 70 percentage points.", "target": ["自然言語処理の敵対的サンプルの定義を標準化した研究。標準化に基づき、最新の単語置換攻撃(ターゲットの単語を同義語に置き換えることで摂動を付与)を評価した結果、多くの文法誤りが生じ、容易に見破られることが判明した。これに対し、既知手法の問題点を改善した新たな攻撃手法を提案している。"]} {"source": "Tackling real-world socio-economic challenges requires designing and testing economic policies.
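A minimal PyTorch sketch of the additive structure described in the Neural Additive Models summary above: one small subnetwork per input feature, whose scalar outputs are summed. Layer sizes are placeholders, and the paper's ExU units and regularizers are left out.

```python
import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); each column is fed to its own subnetwork,
        # so per-feature contributions can be inspected and plotted directly.
        contributions = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(contributions, dim=1).sum(dim=1, keepdim=True) + self.bias
```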
However, this is hard in practice, due to a lack of appropriate (micro-level) economic data and limited opportunity to experiment. In this work, we train social planners that discover tax policies in dynamic economies that can effectively trade-off economic equality and productivity. We propose a two-level deep reinforcement learning approach to learn dynamic tax policies, based on economic simulations in which both agents and a government learn and adapt. Our data-driven approach does not make use of economic modeling assumptions, and learns from observational data alone. We make four main contributions. First, we present an economic simulation environment that features competitive pressures and market dynamics. We validate the simulation by showing that baseline tax systems perform in a way that is consistent with economic theory, including in regard to learned agent behaviors and specializations. Second, we show that AI-driven tax policies improve the trade-off between equality and productivity by 16% over baseline policies, including the prominent Saez tax framework. Third, we showcase several emergent features: AI-driven tax policies are qualitatively different from baselines, setting a higher top tax rate and higher net subsidies for low incomes. Moreover, AI-driven tax policies perform strongly in the face of emergent tax-gaming strategies learned by AI agents. Lastly, AI-driven tax policies are also effective when used in experiments with human participants. In experiments conducted on MTurk, an AI tax policy provides an equality-productivity trade-off that is similar to that provided by the Saez framework along with higher inverse-income weighted social welfare.", "target": ["税の調整を強化学習で行う研究。現実の世界を模したゲーム環境を作成し、そこで働く(強化学習)エージェントの社会福祉(個々の収入と分配収入を掛け合わせた関数)が最大になるよう税制エージェントが学習する、という2重構成になっている。ベースとした複数の税制に比べて優良な結果を確認。"]} {"source": "The COVID-19 pandemic continues to have a devastating effect on the health and well-being of the global population. A critical step in the fight against COVID-19 is effective screening of infected patients, with one of the key screening approaches being radiology examination using chest radiography. Motivated by this and inspired by the open source efforts of the research community, in this study we introduce COVID-Net, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images that is open source and available to the general public. To the best of the authors' knowledge, COVID-Net is one of the first open source network designs for COVID-19 detection from CXR images at the time of initial release. We also introduce COVIDx, an open access benchmark dataset that we generated comprising of 13,975 CXR images across 13,870 patient patient cases, with the largest number of publicly available COVID-19 positive cases to the best of the authors' knowledge. Furthermore, we investigate how COVID-Net makes predictions using an explainability method in an attempt to not only gain deeper insights into critical factors associated with COVID cases, which can aid clinicians in improved screening, but also audit COVID-Net in a responsible and transparent manner to validate that it is making decisions based on relevant information from the CXR images. 
By no means a production-ready solution, the hope is that the open access COVID-Net, along with the description on constructing the open source COVIDx dataset, will be leveraged and build upon by both researchers and citizen data scientists alike to accelerate the development of highly accurate yet practical deep learning solutions for detecting COVID-19 cases and accelerate treatment of those who need it the most.", "target": ["肺の胸部単純X線写真から新型コロナの可能性が高いかスクリーニングするモデルの提案。学習済みのモデルとデータセットがオープンソースとして公開されている。Sensitivity 87.1(罹患⇒予測が真)・Positive Predictive 96.4(予測⇒罹患が真)で解釈性の手法も導入されている"]} {"source": "We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to regularize the value function. Existing model-free approaches, such as Soft Actor-Critic (SAC), are not able to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art performance on the DeepMind control suite, surpassing model-based (Dreamer, PlaNet, and SLAC) methods and recently proposed contrastive learning (CURL). Our approach can be combined with any model-free reinforcement learning algorithm, requiring only minor modifications. An implementation can be found at this https URL.", "target": ["画像で使われるData Augmentationを使うことで既存の強化学習アルゴリズム(Soft Actor-Critic)の性能を向上させた研究。なおRotateやFlipなどは状態の意味を変えてしまうため、randomなshiftしか行っていない。連続値コントロールの環境であるDM Controlで他の表現学習系の手法より高い性能を確認。"]} {"source": "The promise of reinforcement learning is to solve complex sequential decision problems autonomously by specifying a high-level reward function only. However, reinforcement learning algorithms struggle when, as is often the case, simple and intuitive rewards provide sparse and deceptive feedback. Avoiding these pitfalls requires thoroughly exploring the environment, but creating algorithms that can do so remains one of the central challenges of the field. We hypothesise that the main impediment to effective exploration originates from algorithms forgetting how to reach previously visited states (\"detachment\") and from failing to first return to a state before exploring from it (\"derailment\"). We introduce Go-Explore, a family of algorithms that addresses these two challenges directly through the simple principles of explicitly remembering promising states and first returning to such states before intentionally exploring. Go-Explore solves all heretofore unsolved Atari games and surpasses the state of the art on all hard-exploration games, with orders of magnitude improvements on the grand challenges Montezuma's Revenge and Pitfall. We also demonstrate the practical potential of Go-Explore on a sparse-reward pick-and-place robotics task. Additionally, we show that adding a goal-conditioned policy can further improve Go-Explore's exploration efficiency and enable it to handle stochasticity throughout training. 
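A small sketch of the random-shift augmentation mentioned in the reinforcement-learning summary above: pad the image observation, then crop back to the original size at a random offset. The pad width and edge-padding mode are assumptions; rotations and flips are deliberately avoided because they change the meaning of the state.

```python
import numpy as np

def random_shift(obs: np.ndarray, pad: int = 4) -> np.ndarray:
    """obs: (C, H, W) pixel observation; returns a randomly shifted copy."""
    c, h, w = obs.shape
    padded = np.pad(obs, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    top = np.random.randint(0, 2 * pad + 1)
    left = np.random.randint(0, 2 * pad + 1)
    return padded[:, top:top + h, left:left + w]
```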
The substantial performance gains from Go-Explore suggest that the simple principles of remembering states, returning to them, and exploring from them are a powerful and general approach to exploration, an insight that may prove critical to the creation of truly intelligent learning agents.", "target": ["探索の効率を上げるため、報酬が高い状態についてはそこに戻ってから探索をリスタートする手法を提案(探索で得られた軌跡で学習)。以前Go-Exploreとして提案された手法だが、リスタートするとき自力で元の状態に復帰する方が(シミュレーターで戻すより)汎化性能等の面でメリットがあるとのこと"]} {"source": "In many machine learning problems, loss functions are weighted sums of several terms. A typical approach to dealing with these is to train multiple separate models with different selections of weights and then either choose the best one according to some criterion or keep multiple models if it is desirable to maintain a diverse set of solutions. This is inefficient both at training and at inference time. We propose a method that allows replacing multiple models trained on one loss function each by a single model trained on a distribution of losses. At test time a model trained this way can be conditioned to generate outputs corresponding to any loss from the training distribution of losses. We demonstrate this approach on three tasks with parametrized losses: beta-VAE, learned image compression, and fast style transfer.", "target": ["画像圧縮における画質と圧縮率のように、トレードオフにある指標のバランスを取り最適化する場合は指標の重みを決めて学習する必要があった(今回は画質重視etc)。この場合重みの設定ごとに学習を行う必要があるが、学習時に指標の重みも入力として与えることで1モデルで様々な設定下の出力を可能にした"]} {"source": "Backdoor data poisoning attacks have recently been demonstrated in computer vision research as a potential safety risk for machine learning (ML) systems. Traditional data poisoning attacks manipulate training data to induce unreliability of an ML model, whereas backdoor data poisoning attacks maintain system performance unless the ML model is presented with an input containing an embedded \"trigger\" that provides a predetermined response advantageous to the adversary. Our work builds upon prior backdoor data-poisoning research for ML image classifiers and systematically assesses different experimental conditions including types of trigger patterns, persistence of trigger patterns during retraining, poisoning strategies, architectures (ResNet-50, NasNet, NasNet-Mobile), datasets (Flowers, CIFAR-10), and potential defensive regularization techniques (Contrastive Loss, Logit Squeezing, Manifold Mixup, Soft-Nearest-Neighbors Loss). Experiments yield four key findings. First, the success rate of backdoor poisoning attacks varies widely, depending on several factors, including model architecture, trigger pattern and regularization technique. Second, we find that poisoned models are hard to detect through performance inspection alone. Third, regularization typically reduces backdoor success rate, although it can have no effect or even slightly increase it, depending on the form of regularization. Finally, backdoors inserted through data poisoning can be rendered ineffective after just a few epochs of additional training on a small set of clean data without affecting the model's performance.", "target": ["分類器に対するバックドア攻撃手法を体系的に評価した研究。攻撃の成功率は、バックドアが設置されたモデルのアーキテクチャや正則化技術などで大きく変化することを示している。また、細工されていないデータセットで数エポックの再学習を行うことで、バックドアの効果が無くなることも示している。"]} {"source": "Like all software systems, the execution of deep learning models is dictated in part by logic represented as data in memory. For decades, attackers have exploited traditional software programs by manipulating this data. 
We propose a live attack on deep learning systems that patches model parameters in memory to achieve predefined malicious behavior on a certain set of inputs. By minimizing the size and number of these patches, the attacker can reduce the amount of network communication and memory overwrites, with minimal risk of system malfunctions or other detectable side effects. We demonstrate the feasibility of this attack by computing efficient patches on multiple deep learning models. We show that the desired trojan behavior can be induced with a few small patches and with limited access to training data. We describe the details of how this attack is carried out on real systems and provide sample code for patching TensorFlow model parameters in Windows and in Linux. Lastly, we present a technique for effectively manipulating entropy on perturbed inputs to bypass STRIP, a state-of-the-art run-time trojan detection technique.", "target": ["リアルタイムのバックドア攻撃手法。分類器の実行時にメモリ上に展開されたモデルパラメータにパッチを適用し、特定の入力データを攻撃者の意図したクラスに分類させる。システムに侵入し、管理者権限を奪取した攻撃者による攻撃を想定している。バックドア検知技術であるSTRIPをも回避可能。"]} {"source": "Machine learning applications such as finance and medicine demand accurate and justifiable predictions, barring most deep learning methods from use. In response, previous work combines decision trees with deep learning, yielding models that (1) sacrifice interpretability for accuracy or (2) sacrifice accuracy for interpretability. We forgo this dilemma by jointly improving accuracy and interpretability using Neural-Backed Decision Trees (NBDTs). NBDTs replace a neural network's final linear layer with a differentiable sequence of decisions and a surrogate loss. This forces the model to learn high-level concepts and lessens reliance on highly-uncertain decisions, yielding (1) accuracy: NBDTs match or outperform modern neural networks on CIFAR, ImageNet and better generalize to unseen classes by up to 16%. Furthermore, our surrogate loss improves the original model's accuracy by up to 2%. NBDTs also afford (2) interpretability: improving human trust by clearly identifying model mistakes and assisting in dataset debugging. Code and pretrained NBDTs are at this https URL.", "target": ["既存のDNNを(説明力の高い)木構造で予測するよう誘導するLossを加えFine Tuneすることで、高精度かつ説明力の高いモデルを構築する研究。各クラスの特徴のAverageを取っていくことで木構造をくみ上げ、ノード分岐を確定的に行う場合(Hard)/確率的に行う場合(Soft)の2種で学習する。"]} {"source": "Intriguing empirical evidence exists that deep learning can work well with exotic schedules for varying the learning rate. This paper suggests that the phenomenon may be due to Batch Normalization or BN, which is ubiquitous and provides benefits in optimization and generalization across all standard architectures. The following new results are shown about BN with weight decay and momentum (in other words, the typical use case which was not considered in earlier theoretical analyses of stand-alone BN). 1. Training can be done using SGD with momentum and an exponentially increasing learning rate schedule, i.e., learning rate increases by some (1+α) factor in every epoch for some α>0. (Precise statement in the paper.) To the best of our knowledge this is the first time such a rate schedule has been successfully used, let alone for highly successful architectures. As expected, such training rapidly blows up network weights, but the net stays well-behaved due to normalization. 2. Mathematical explanation of the success of the above rate schedule: a rigorous proof that it is equivalent to the standard setting of BN + SGD + Standard Rate Tuning + Weight Decay + Momentum.
This equivalence holds for other normalization layers as well, Group Normalization, LayerNormalization, Instance Norm, etc. 3. A worked-out toy example illustrating the above linkage of hyper-parameters. Using either weight decay or BN alone reaches global minimum, but convergence fails when both are used.", "target": ["学習率は徐々に下げていく、という通説を覆し指数関数的に上げていく手法の提案。Batch Normalizationが導入されている場合、Weight Decay(≒L2正則化)+一定率(c)での学習率の減衰+momentumはWeight Decayなし+指数関数的LR増大+momentumと等価であることを理論的+実験的に証明している。"]} {"source": "Attentional, RNN-based encoder-decoder architectures have achieved impressive performance on abstractive summarization of news articles. However, these methods fail to account for long term dependencies within the sentences of a document. This problem is exacerbated in multi-document summarization tasks such as summarizing the popular opinion in threads present in community question answering (CQA) websites such as Yahoo! Answers and Quora. These threads contain answers which often overlap or contradict each other. In this work, we present a hierarchical encoder based on structural attention to model such inter-sentence and inter-document dependencies. We set the popular pointer-generator architecture and some of the architectures derived from it as our baselines and show that they fail to generate good summaries in a multi-document setting. We further illustrate that our proposed model achieves significant improvement over the baselines in both single and multi-document summarization settings -- in the former setting, it beats the best baseline by 1.31 and 7.8 ROUGE-1 points on CNN and CQA datasets, respectively; in the latter setting, the performance is further improved by 1.6 ROUGE-1 points on the CQA dataset.", "target": ["QAサイトのような数珠繋ぎ(リスト)かつ相互に依存するテキストの要約を行う手法の提案。Bi-directional + Pointer-Generatorが基本だが、潜在表現をトークン表現と構造表現に分割し、構造表現についてはSelf-Attentionを貼って相互依存を加味している。CNN/CQAなどのデータセットでROUGEの改善を確認。"]} {"source": "Many domains of science have developed complex simulations to describe phenomena of interest. While these simulations provide high-fidelity models, they are poorly suited for inference and lead to challenging inverse problems. We review the rapidly developing field of simulation-based inference and identify the forces giving new momentum to the field. Finally, we describe how the frontier is expanding so that a broad audience can appreciate the profound change these developments may have on science.", "target": ["シミュレーション(データの生成)を行う手法と、近年の機械学習手法との統合について調査した研究。機械学習によりサンプル効率、生成精度、再利用性が高められるとし、具体例としてActive Learningによるサンプル効率改善、表現学習による生成精度改善が挙げられている。"]} {"source": "The task of unsupervised image-to-image translation has seen substantial advancements in recent years through the use of deep neural networks. Typically, the proposed solutions learn the characterizing distribution of two large, unpaired collections of images, and are able to alter the appearance of a given image, while keeping its geometry intact. In this paper, we explore the capabilities of neural networks to understand image structure given only a single pair of images, A and B. We seek to generate images that are structurally aligned: that is, to generate an image that keeps the appearance and style of B, but has a structural arrangement that corresponds to A. The key idea is to map between image patches at different scales. This enables controlling the granularity at which analogies are produced, which determines the conceptual distinction between style and content. 
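A hedged sketch of the exponentially increasing learning-rate schedule discussed above, for a network that uses BatchNorm: weight decay is switched off and the learning rate is multiplied by a (1+α) factor every epoch. The concrete values of base_lr, α, and momentum are illustrative, not the paper's settings.

```python
import torch

def make_optimizer_and_schedule(model, base_lr=0.1, alpha=0.05, momentum=0.9):
    # no weight decay: its effect is what the growing learning rate replaces
    opt = torch.optim.SGD(model.parameters(), lr=base_lr,
                          momentum=momentum, weight_decay=0.0)
    # gamma > 1 makes the learning rate grow by (1 + alpha) per scheduler step
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=1.0 + alpha)
    return opt, sched

# Typical usage: run the training loop for one epoch, then call sched.step().
```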
In addition to structural alignment, our method can be used to generate high quality imagery in other conditional generation tasks utilizing images A and B only: guided image synthesis, style and texture transfer, text translation as well as video translation. Our code and additional results are available in this https URL.", "target": ["2つの画像間で見た目をそのままに構造を転移できるGAN。SinGANと同じようにUpSamplingした画像をGeneratorで補完するが、それをスタイル変換にも用いることが肝。CycleGANでは、物体の形状を変えることができないことが問題だったが、逆にそれを利用して画像の構造(物体の配置等)を保ったまま、見た目(スタイル)変換を行ったという印象。"]} {"source": "Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence. Numerous success stories have rapidly spread all over science, industry and society, but its limitations have only recently come into focus. In this perspective we seek to distil how many of deep learning's problems can be seen as different symptoms of the same underlying problem: shortcut learning. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios. Related issues are known in Comparative Psychology, Education and Linguistics, suggesting that shortcut learning may be a common characteristic of learning systems, biological and artificial alike. Based on these observations, we develop a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.", "target": ["意図したものと違う要領で学習してしまう(形で区別してほしいのに位置を見てしまうなど)Shortcut learningに関する論文。画像、NLPなどタスクによらず一般的にそれらは存在する。通常はi.i.d.を前提とした評価をする(ImageNetのtestセットなど)が、Out Of Distribution(ImageNetCなど)のデータでも評価しないと真の汎化性能は測れないとしている。"]} {"source": "Adversarial attacks for discrete data (such as texts) have been proved significantly more challenging than continuous data (such as images) since it is difficult to generate adversarial samples with gradient-based methods. Current successful attack methods for texts usually adopt heuristic replacement strategies on the character or word level, which remains challenging to find the optimal solution in the massive space of possible combinations of replacements while preserving semantic consistency and language fluency. In this paper, we propose \\textbf{BERT-Attack}, a high-quality and effective method to generate adversarial samples using pre-trained masked language models exemplified by BERT. We turn BERT against its fine-tuned models and other deep neural models in downstream tasks so that we can successfully mislead the target models to predict incorrectly. Our method outperforms state-of-the-art attack strategies in both success rate and perturb percentage, while the generated adversarial samples are fluent and semantically preserved. Also, the cost of calculation is low, thus possible for large-scale generations. The code is available at this https URL.", "target": ["テキスト分類器に対する敵対的サンプル(AE)攻撃の手法。事前学習したBERTによる「マスクされた言語モデル」を用いてAEを作成する。既存手法と比較して攻撃成功率と摂動率で優れており、生成されたAEも意味的に整合性が取れている。また、計算コストも低いため、攻撃の効率も良いとのこと。"]} {"source": "Federated machine learning which enables resource constrained node devices (e.g., mobile phones and IoT devices) to learn a shared model while keeping the training data local, can provide privacy, security and economic benefits by designing an effective communication protocol.
However, the communication protocol amongst different nodes could be exploited by attackers to launch data poisoning attacks, which has been demonstrated as a big threat to most machine learning models. In this paper, we attempt to explore the vulnerability of federated machine learning. More specifically, we focus on attacking a federated multi-task learning framework, which is a federated learning framework via adopting a general multi-task learning framework to handle statistical challenges. We formulate the problem of computing optimal poisoning attacks on federated multi-task learning as a bilevel program that is adaptive to arbitrary choice of target nodes and source attacking nodes. Then we propose a novel systems-aware optimization method, ATTack on Federated Learning (AT2FL), which is efficiency to derive the implicit gradients for poisoned data, and further compute optimal attack strategies in the federated machine learning. Our work is an earlier study that considers issues of data poisoning attack for federated learning. To the end, experimental results on real-world datasets show that federated multi-task learning model is very sensitive to poisoning attacks, when the attackers either directly poison the target nodes or indirectly poison the related nodes by exploiting the communication protocol.", "target": ["Federated learningに対する攻撃手法。分散ノード上のデータを細工することで、モデルの推論精度を損なわせる事を目的としている。攻撃者が細工したデータをノードに直接注入する方法や、ノード間の通信プロトコル経由で細工データを注入する方法等、三つの手法を提案している。"]} {"source": "The quality of automatic metrics for machine translation has been increasingly called into question, especially for high-quality systems. This paper demonstrates that, while choice of metric is important, the nature of the references is also critical. We study different methods to collect references and compare their value in automated evaluation by reporting correlation with human evaluation for a variety of systems and metrics. Motivated by the finding that typical references exhibit poor diversity, concentrating around translationese language, we develop a paraphrasing task for linguists to perform on existing reference translations, which counteracts this bias. Our method yields higher correlation with human judgment not only for the submissions of WMT 2019 English to German, but also for Back-translation and APE augmented MT output, which have been shown to have low correlation with automatic metrics using standard references. We demonstrate that our methodology improves correlation with all modern evaluation metrics we look at, including embedding-based methods. To complete this picture, we reveal that multi-reference BLEU does not improve the correlation for high quality output, and present an alternative multi-reference formulation that is more effective.", "target": ["Reference側を工夫して自動評価指標(BLEU)と人手評価の相関を上げられるという研究。ターゲットの翻訳数を増やすより、翻訳+その言い換えをセットにした方が人手評価との相関が挙げられるという。BLEU以外の各種自動評価指標でも効果を確認。"]} {"source": "Overparameterized deep networks have the capacity to memorize training data with zero \\emph{training error}. Even after memorization, the \\emph{training loss} continues to approach zero, making the model overconfident and the test performance degraded. Since existing regularizers do not directly aim to avoid zero training loss, it is hard to tune their hyperparameters in order to maintain a fixed/preset level of training loss. 
We propose a direct solution called \\emph{flooding} that intentionally prevents further reduction of the training loss when it reaches a reasonably small value, which we call the \\emph{flood level}. Our approach makes the loss float around the flood level by doing mini-batched gradient descent as usual but gradient ascent if the training loss is below the flood level. This can be implemented with one line of code and is compatible with any stochastic optimizer and other regularizers. With flooding, the model will continue to \"random walk\" with the same non-zero training loss, and we expect it to drift into an area with a flat loss landscape that leads to better generalization. We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.", "target": ["過学習を防ぐシンプルな手法の提案。正則化項など回りくどいことをせず、純粋にlossがゼロになるのを防ぐ。 J(θ)~ = | J(θ) - b | + b と定義することで、lossが閾値(b)を下回る場合閾値以下には行かないようにする(gradient \"ascent\")。あらゆるタスクで汎用的に使用可能+既存正則化法と併用できる。"]} {"source": "Model-based reinforcement learning (MBRL) has recently gained immense interest due to its potential for sample efficiency and ability to incorporate off-policy data. However, designing stable and efficient MBRL algorithms using rich function approximators have remained challenging. To help expose the practical challenges in MBRL and simplify algorithm design from the lens of abstraction, we develop a new framework that casts MBRL as a game between: (1) a policy player, which attempts to maximize rewards under the learned model; (2) a model player, which attempts to fit the real-world data collected by the policy player. For algorithm development, we construct a Stackelberg game between the two players, and show that it can be solved with approximate bi-level optimization. This gives rise to two natural families of algorithms for MBRL based on which player is chosen as the leader in the Stackelberg game. Together, they encapsulate, unify, and generalize many previous MBRL algorithms. Furthermore, our framework is consistent with and provides a clear basis for heuristics known to be important in practice from prior works. Finally, through experiments we validate that our proposed algorithms are highly sample efficient, match the asymptotic performance of model-free policy gradient, and scale gracefully to high-dimensional tasks like dexterous hand manipulation. Additional details and code can be obtained from the project page at this https URL", "target": ["モデルベースの強化学習をゲーム理論の枠組みでとらえなおした研究。ミクロ経済学で寡占市場の分析に使用されるシュタックベルク均衡では、相手の反応関数を既知として生産量を決定する先駆者と先駆者に合わせる追随者の2者を想定するが、戦略/モデルをそれぞれ先駆者/追随者にあてはめ均衡点を算出。"]} {"source": "Textual information in a captured scene plays an important role in scene interpretation and decision making. Though there exist methods that can successfully detect and interpret complex text regions present in a scene, to the best of our knowledge, there is no significant prior work that aims to modify the textual information in an image. The ability to edit text directly on images has several advantages including error correction, text restoration and image reusability. In this paper, we propose a method to modify text in an image at character-level. We approach the problem in two stages. At first, the unobserved character (target) is generated from an observed character (source) being modified. We propose two different neural network architectures - (a) FANnet to achieve structural consistency with source font and (b) Colornet to preserve source color. 
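The flooding objective described in the summary above amounts to one line wrapped around the ordinary loss: with flood level b, the training loss becomes |J(θ) − b| + b, so the gradient direction flips (gradient ascent) whenever the loss dips below b. A minimal PyTorch version, with b chosen arbitrarily:

```python
import torch

def flooded_loss(loss: torch.Tensor, b: float = 0.02) -> torch.Tensor:
    # same gradient as `loss` while loss > b, reversed sign while loss < b
    return (loss - b).abs() + b

# usage: loss = flooded_loss(criterion(logits, targets)); loss.backward()
```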
Next, we replace the source character with the generated character maintaining both geometric and visual consistency with neighboring characters. Our method works as a unified platform for modifying text in images. We present the effectiveness of our method on COCO-Text and ICDAR datasets both qualitatively and quantitatively.", "target": ["写真中の文字を書体を維持したまま別の文字に変える手法の提案。写真に写ったポスターの文字を(わからないよう)変えたりすることが可能。置き換え対象の文字領域と置き換え後の文字(one-hotで指定)を受け取り白黒の文字画像を生成(FANnet)、その後に元文字から色を付ける(ColorNet)2段構成。"]} {"source": "The offline reinforcement learning (RL) setting (also known as full batch RL), where a policy is learned from a static dataset, is compelling as progress enables RL methods to take advantage of large, previously-collected datasets, much like how the rise of large datasets has fueled results in supervised learning. However, existing online RL benchmarks are not tailored towards the offline setting and existing offline RL benchmarks are restricted to data generated by partially-trained agents, making progress in offline RL difficult to measure. In this work, we introduce benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL. With a focus on dataset collection, examples of such properties include: datasets generated via hand-designed controllers and human demonstrators, multitask datasets where an agent performs different tasks in the same environment, and datasets collected with mixtures of policies. By moving beyond simple benchmark tasks and data collected by partially-trained RL agents, we reveal important and unappreciated deficiencies of existing algorithms. To facilitate research, we have released our benchmark tasks and datasets with a comprehensive evaluation of existing algorithms, an evaluation protocol, and open-source examples. This serves as a common starting point for the community to identify shortcomings in existing offline RL methods and a collaborative route for progress in this emerging area.", "target": ["実用化を意識したオフライン強化学習のデータセット。通常のオフラインでは学習済みエージェントでデータを収集するが、実際に使用できるのは人の行動(=直前状態のみに依存するとは限らない(Markov性の崩れ))、無作為な軌道(ゴールなし)だったりする。そうした現実を想定した評価用データセットを構築"]} {"source": "Machine learning (ML) based approaches have been the mainstream solution for anti-phishing detection. When they are deployed on the client-side, ML-based classifiers are vulnerable to evasion attacks. However, such potential threats have received relatively little attention because existing attacks destruct the functionalities or appearance of webpages and are conducted in the white-box scenario, making it less practical. Consequently, it becomes imperative to understand whether it is possible to launch evasion attacks with limited knowledge of the classifier, while preserving the functionalities and appearance. In this work, we show that even in the grey-, and black-box scenarios, evasion attacks are not only effective on practical ML-based classifiers, but can also be efficiently launched without destructing the functionalities and appearance. For this purpose, we propose three mutation-based attacks, differing in the knowledge of the target classifier, addressing a key technical challenge: automatically crafting an adversarial sample from a known phishing website in a way that can mislead classifiers. To launch attacks in the white- and grey-box scenarios, we also propose a sample-based collision attack to gain the knowledge of the target classifier.
We demonstrate the effectiveness and efficiency of our evasion attacks on the state-of-the-art, Google's phishing page filter, achieved 100% attack success rate in less than one second per website. Moreover, the transferability attack on BitDefender's industrial phishing page classifier, TrafficLight, achieved up to 81.25% attack success rate. We further propose a similarity-based method to mitigate such evasion attacks, Pelican. We demonstrate that Pelican can effectively detect evasion attacks. Our findings contribute to design more robust phishing website classifiers in practice.", "target": ["機械学習ベースのフィッシング検知器に対する回避攻撃。既存手法は、攻撃者が検知器の内部構造を把握しており、かつフィッシングサイトの外観や機能が壊れることを前提としているが、本手法は外観を壊さず、かつブラックボックスで攻撃できる。商用の検知器に対して、攻撃の有効性を確認したとのこと。"]} {"source": "Practical applications of abstractive summarization models are limited by frequent factual inconsistencies with respect to their input. Existing automatic evaluation metrics for summarization are largely insensitive to such errors. We propose an automatic evaluation protocol called QAGS (pronounced \"kags\") that is designed to identify factual inconsistencies in a generated summary. QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source. To evaluate QAGS, we collect human judgments of factual consistency on model-generated summaries for the CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018) summarization datasets. QAGS has substantially higher correlations with these judgments than other automatic evaluation metrics. Also, QAGS offers a natural form of interpretability: The answers and questions generated while computing QAGS indicate which tokens of a summary are inconsistent and why. We believe QAGS is a promising tool in automatically generating usable and factually consistent text.", "target": ["抽象型要約のファクトチェックを行う研究。元文書/要約それぞれを基にした場合の質問回答の差異(=元文書の事実と異なる要約をしていないか)で評価する。質問の生成はSeq2Seqベース(固有表現や名詞を回答とし生成に使用)、回答はExtractiveのモデルで行う。人手チェック結果との相関を確認"]} {"source": "Recently, NLP has seen a surge in the usage of large pre-trained models. Users download weights of models pre-trained on large datasets, then fine-tune the weights on a task of their choice. This raises the question of whether downloading untrusted pre-trained weights can pose a security threat. In this paper, we show that it is possible to construct ``weight poisoning'' attacks where pre-trained weights are injected with vulnerabilities that expose ``backdoors'' after fine-tuning, enabling the attacker to manipulate the model prediction simply by injecting an arbitrary keyword. We show that by applying a regularization method, which we call RIPPLe, and an initialization procedure, which we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure. Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat. Finally, we outline practical defenses against such attacks. Code to reproduce our experiments is available at this https URL.", "target": ["文書分類器にバックドアを設置する手法と対策。自然言語処理では、一般公開されている大規模な事前学習モデルをユーザがファインチューニングして利用するケースが増えている。攻撃者は事前学習モデルの重みを細工することで、これを利用して作成されたモデルにバックドアを仕込むことが可能となる。"]} {"source": "Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real world applications. 
This paper studies offline RL using the DQN replay dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this replay dataset, outperform the fully trained DQN agent. To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. Offline REM trained on the DQN replay dataset surpasses strong RL baselines. The results here present an optimistic view that robust RL algorithms trained on sufficiently large and diverse offline datasets can lead to high quality policies. The DQN replay dataset can serve as an offline RL benchmark and is open-sourced.", "target": ["学習済みエージェントの行動履歴から学習するOffline強化学習の研究。Offline(新しいデータが取れない)状態で汎化させるため、複数エージェントの価値予測をランダムにアンサンブルして予測を行う(Random Ensemble Mixture)。これにより元エージェントを上回る性能を獲得。強化学習版蒸留ともいえる。"]} {"source": "We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into the global model behavior. Finally, for the first time to our knowledge, we demonstrate self-supervised learning for tabular data, significantly improving performance with unsupervised representation learning when unlabeled data is abundant.", "target": ["テーブルデータに対するDNNの適用。決定木をDNNで模倣しEnd2Endで学習する形式で、Encode(FC=>BN=>GLU(ゲート付き線形変換))、分割、Attention行列を作成しMaskという処理を再帰的に繰り返し特徴を作成する(類似特徴をMaskでまとめる)。Decoderをつけ教師なし表現学習も可能"]} {"source": "In user targeting automation systems, concept drift in input data is one of the main challenges. It deteriorates model performance on new data over time. Previous research on concept drift mostly proposed model retraining after observing performance decreases. However, this approach is suboptimal because the system fixes the problem only after suffering from poor performance on new data. Here, we introduce an adversarial validation approach to concept drift problems in user targeting automation systems. With our approach, the system detects concept drift in new data before making inference, trains a model, and produces predictions adapted to the new data. We show that our approach addresses concept drift effectively with the AutoML3 Lifelong Machine Learning challenge data as well as in Uber's internal user targeting automation system, MaLTA.", "target": ["Trainとtestのデータ分布が異なるタスクに対して、train/testを見分ける分類器を訓練しスコアがランダムになるまで分類器の重要な特徴量を削除することで、分布を一致させるAdversarial Feature Selectionで対応した。validationをtestに近いデータを使うvalidation selectionや、trainをtest分布に従って重み付けするInverse Propensity Weightingより良かったとのこと。"]} {"source": "Training pipelines for machine learning (ML) based malware classification often rely on crowdsourced threat feeds, exposing a natural attack injection point. In this paper, we study the susceptibility of feature-based ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging \"clean label\" attacks where attackers do not control the sample labeling process. 
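A sketch of the adversarial feature selection loop described in the concept-drift summary above: train a classifier to tell training rows from test rows and keep dropping its most important feature until the discriminator's AUC is close to chance. The RandomForest discriminator, the AUC threshold, and the pandas interface are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def adversarial_feature_selection(train_df: pd.DataFrame, test_df: pd.DataFrame,
                                  auc_target: float = 0.55) -> list:
    """Return the subset of columns on which train and test look alike."""
    features = list(train_df.columns)
    while features:
        X = pd.concat([train_df[features], test_df[features]], axis=0)
        y = np.r_[np.zeros(len(train_df)), np.ones(len(test_df))]  # 0=train, 1=test
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        auc = cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()
        if auc <= auc_target:  # discriminator near random: distributions match
            break
        clf.fit(X, y)
        worst = features[int(np.argmax(clf.feature_importances_))]
        features.remove(worst)  # drop the most distribution-shifted feature
    return features
```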
We propose the use of techniques from explainable machine learning to guide the selection of relevant features and values to create effective backdoor triggers in a model-agnostic fashion. Using multiple reference datasets for malware classification, including Windows PE files, PDFs, and Android applications, we demonstrate effective attacks against a diverse set of machine learning models and evaluate the effect of various constraints imposed on the attacker. To demonstrate the feasibility of our backdoor attacks in practice, we create a watermarking utility for Windows PE files that preserves the binary's functionality, and we leverage similar behavior-preserving alteration methodologies for Android and PDF files. Finally, we experiment with potential defensive strategies and show the difficulties of completely defending against these attacks, especially when the attacks blend in with the legitimate sample distribution.", "target": ["機械学習ベースのマルウェア分類器に対するバックドア攻撃。80万個のWindowsバイナリデータセットの1%に透かしを入れた細工データを注入し、勾配ブースティング決定木とニューラルネットワークをベースにしたマルウェア分類器を検証。97%以上の攻撃成功率を達成したとのこと。"]} {"source": "Existing algorithms for aligning cross-lingual word vector spaces assume that vector spaces are approximately isomorphic. As a result, they perform poorly or fail completely on non-isomorphic spaces. Such non-isomorphism has been hypothesised to result from typological differences between languages. In this work, we ask whether non-isomorphism is also crucially a sign of degenerate word vector spaces. We present a series of experiments across diverse languages which show that variance in performance across language pairs is not only due to typological differences, but can mostly be attributed to the size of the monolingual resources available, and to the properties and duration of monolingual training (e.g. \"under-training\").", "target": ["異なる言語の単語を潜在空間上で一致(isomorphic)させるのが多言語分散表現の目的だが、一致が取れない原因が単に言語類型学(typological)に由来するのでなく純粋なデータ量にも依存することを示した研究。単語の出現頻度だけでなく、前処理/辞書による学習(self-learning)もポイントとのこと。"]} {"source": "In this paper, we tackle the task of establishing dense visual correspondences between images containing objects of the same category. This is a challenging task due to large intra-class variations and a lack of dense pixel level annotations. We propose a convolutional neural network architecture, called adaptive neighbourhood consensus network (ANC-Net), that can be trained end-to-end with sparse key-point annotations, to handle this challenge. At the core of ANC-Net is our proposed non-isotropic 4D convolution kernel, which forms the building block for the adaptive neighbourhood consensus module for robust matching. We also introduce a simple and efficient multi-scale self-similarity module in ANC-Net to make the learned feature robust to intra-class variations. Furthermore, we propose a novel orthogonal loss that can enforce the one-to-one matching constraint. We thoroughly evaluate the effectiveness of our method on various benchmarks, where it substantially outperforms state-of-the-art methods.", "target": ["同じ種類の物体の対応する点を一対一対応のアノテーションから学習するANC-Netを提案。CNNで抽象化後に各画素の3x3周囲との類似度を特徴量にしたSとスケール違いに対応するために異方性をもつConv4Dのカーネルを使うことが特徴。"]} {"source": "We present the Neural Covidex, a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI. 
This web application exists as part of a suite of tools that we have developed over the past few weeks to help domain experts tackle the ongoing global pandemic. We hope that improved information access capabilities to the scientific literature can inform evidence-based decision making and insight generation. This paper describes our initial efforts and offers a few thoughts about lessons we have learned along the way.", "target": ["新型コロナの文献を検索できるシステムを開発した研究。LuceneベースのAnseriniをエンジンに使用し、BM25(TF-IDF+文書長が短い(=情報密度が高い)もの優先)で抽出した後にT5ベースで学習したモデル(クエリとドキュメントの関連度を直接出力する)でリランクしている。UIはRailsのBlacklightを使用。"]} {"source": "Efficient rendering of photo-realistic virtual worlds is a long standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning have given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. This state-of-the-art report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.", "target": ["DNNを利用した画像描画(ニューラルレンダリング)に関するサーベイ。CGで使われる物理法則、画像生成処理をそれぞれ紹介した後に、実際のレンダリング手法を制御可否や対象とする法則等様々な観点から評価を行っている。Table1は要チェック。"]} {"source": "Named entity recognition systems perform well on standard datasets comprising English news. But given the paucity of data, it is difficult to draw conclusions about the robustness of systems with respect to recognizing a diverse set of entities. We propose a method for auditing the in-domain robustness of systems, focusing specifically on differences in performance due to the national origin of entities. We create entity-switched datasets, in which named entities in the original texts are replaced by plausible named entities of the same type but of different national origin. We find that state-of-the-art systems' performance vary widely even in-domain: In the same context, entities from certain origins are more reliably recognized than entities from elsewhere. Systems perform best on American and Indian entities, and worst on Vietnamese and Indonesian entities. 
This auditing approach can facilitate the development of more robust named entity recognition systems, and will allow research in this area to consider fairness criteria that have received heightened attention in other predictive technology work.", "target": ["固有表現認識で、(同じ英語=in-domainでも)どの国の出来事かで認識率が変わることを指摘した研究。異なる国籍の固有表現を入れ替える(BobをTaroにする、NewYorkをTokyoにするなど)ことで検証。アメリカ/インドの内容では認識率が高いが、ベトナム/インドネシアでは低い結果。"]} {"source": "Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative of convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7\\% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at this https URL", "target": ["CNNで生成される特徴量マップの多くは、重要なそれに簡単な変換をかけることで表現できるのではないかという視点から高速化/省メモリを実現した研究。256,512等多くの特徴量マップをネットワーク中で作成する変わりに、まず少数の特徴量マップを生成し、それに簡易な変換(論文ではdepth-wise Conv)をかけることで特徴量マップを増やす。MobileNetV3より良好な結果。"]} {"source": "By design, discriminatively trained neural network classifiers produce reliable predictions only for in-distribution samples. For their real-world deployments, detecting out-of-distribution (OOD) samples is essential. Assuming OOD to be outside the closed boundary of in-distribution, typical neural classifiers do not contain the knowledge of this boundary for OOD detection during inference. There have been recent approaches to instill this knowledge in classifiers by explicitly training the classifier with OOD samples close to the in-distribution boundary. However, these generated samples fail to cover the entire in-distribution boundary effectively, thereby resulting in a sub-optimal OOD detector. In this paper, we analyze the feasibility of such approaches by investigating the complexity of producing such \"effective\" OOD samples. We also propose a novel algorithm to generate such samples using a manifold learning network (e.g., variational autoencoder) and then train an n+1 classifier for OOD detection, where the n+1^{th} class represents the OOD samples. We compare our approach against several recent classifier-based OOD detectors on MNIST and Fashion-MNIST datasets. Overall the proposed approach consistently performs better than the others.", "target": ["既存の外れ値検出手法は通常のデータだと上手く機能するが、敵対的攻撃に対して非常に脆弱であった。外れ値、正常値ともに敵対的なノイズを加えたものを学習データに加えるALOEを提案。敵対的攻撃に対して頑健性をもたせることに成功。"]} {"source": "Transformer-based NLP models are trained using hundreds of millions or even billions of parameters, limiting their applicability in computationally constrained environments. 
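An illustrative Ghost module in the spirit of the GhostNet summary above: a small ordinary convolution produces a few "intrinsic" feature maps, and cheap depthwise convolutions derive the remaining "ghost" maps from them before concatenation. The ratio and kernel sizes are representative choices, not the exact paper settings.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        init_ch = out_ch // ratio        # intrinsic maps from a normal conv
        cheap_ch = out_ch - init_ch      # ghost maps from cheap operations
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(      # depthwise conv: one cheap filter per map
            nn.Conv2d(init_ch, cheap_ch, kernel_size=3, padding=1,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)
```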
While the number of parameters generally correlates with performance, it is not clear whether the entire network is required for a downstream task. Motivated by the recent work on pruning and distilling pre-trained models, we explore strategies to drop layers in pre-trained models, and observe the effect of pruning on downstream GLUE tasks. We were able to prune BERT, RoBERTa and XLNet models up to 40%, while maintaining up to 98% of their original performance. Additionally we show that our pruned models are on par with those built using knowledge distillation, both in terms of size and performance. Our experiments yield interesting observations such as, (i) the lower layers are most critical to maintain downstream task performance, (ii) some tasks such as paraphrase detection and sentence similarity are more robust to the dropping of layers, and (iii) models trained using a different objective function exhibit different learning patterns w.r.t. the layer dropping.", "target": ["貧者のためのBERTと題し、低リソース環境でも実行できるようレイヤーのDropを行い軽量化している。単純にモデルの上層を落とす手法が最も費用対効果があり、98%の精度を維持しつつ40%サイズを落とせる(DistilBERTと同程度の性能)。"]} {"source": "We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at this https URL.", "target": ["対照学習を強化学習に取り入れた手法。観測した状態を別々にEncodeし(queryとkey)、対照学習するとともにquery側を強化学習に使用する。勾配はquery側に流し、key側はquery側の重みの移動平均を使用する。シンプルな手法ながら連続値のDM ControlまたAtariで既存手法を上回る結果。"]} {"source": "Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.", "target": ["生成テキストの評価にBERTを使用する研究。BERT=>合成データによる事前学習=>本トレーニング(文/評価のペアで学習)という流れ。単語Mask/BackTranslationでデータを合成するが、正則化のためランダムに単語を落とす。テキスト評価のタスクWMT/WebNLGで過去手法より良好な結果。"]} {"source": "Most neural machine translation models only rely on pairs of parallel sentences, assuming syntactic information is automatically learned by an attention mechanism. In this work, we investigate different approaches to incorporate syntactic knowledge in the Transformer model and also propose a novel, parameter-free, dependency-aware self-attention mechanism that improves its translation quality, especially for long sentences and in low-resource scenarios.
We show the efficacy of each approach on WMT English-German and English-Turkish, and WAT English-Japanese translation tasks.", "target": ["Transformerの翻訳モデルに依存構造知識を組み込む研究。tokenの親までの距離を基にスコア行列を作り、それをAttention Weightにかけるというシンプルな手法。重みを一切使わないため、簡単に既存モデルに組み込むことができる。依存構造を導入する他手法より良好な結果を観測。"]} {"source": "With the increased attention and legislation for data-privacy, collaborative machine learning (ML) algorithms are being developed to ensure the protection of private data used for processing. Federated learning (FL) is the most popular of these methods, which provides privacy preservation by facilitating collaborative training of a shared model without the need to exchange any private data with a centralized server. Rather, an abstraction of the data in the form of a machine learning model update is sent. Recent studies showed that such model updates may still very well leak private information and thus more structured risk assessment is needed. In this paper, we analyze existing vulnerabilities of FL and subsequently perform a literature review of the possible attack methods targetingFL privacy protection capabilities. These attack methods are then categorized by a basic taxonomy. Additionally, we provide a literature study of the most recent defensive strategies and algorithms for FL aimed to overcome these attacks. These defensive strategies are categorized by their respective underlying defence principle. The paper concludes that the application of a single defensive strategy is not enough to provide adequate protection to all available attack methods.", "target": ["データプライバシー保護を実現するFederated Learning(FL)に対する攻撃手法と対策に関する調査結果。FLの既知の攻撃手法と対策に関する調査結果から、FLでプライバシー保護を実現するには、単一の対策ではなく、複数の対策を組み合わせることが重要であると主張している。"]} {"source": "Backdoor attack intends to inject hidden backdoor into the deep neural networks (DNNs), such that the prediction of the infected model will be maliciously changed if the hidden backdoor is activated by the attacker-defined trigger, while it performs well on benign samples. Currently, most of existing backdoor attacks adopted the setting of \\emph{static} trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area. In this paper, we revisit this attack paradigm by analyzing the characteristics of the static trigger. We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training. We further explore how to utilize this property for backdoor defense, and discuss how to alleviate such vulnerability of existing attacks.", "target": ["画像分類器に対するバックドア攻撃の考察。学習データに特殊な柄を追加することで、同じ柄を持つテストデータを攻撃者の意図したクラスに分類させるバックドア攻撃に対し、テストデータを空間変換して特殊柄の効果を弱める防御手法を提案。併せて、本防御手法を回避する新たな攻撃手法も提案している。"]} {"source": "We propose a new task towards more practical application for image generation - high-quality image synthesis from salient object layout. This new setting allows users to provide the layout of salient objects only (i.e., foreground bounding boxes and categories), and lets the model complete the drawing with an invented background and a matching foreground. Two main challenges spring from this new task: (i) how to generate fine-grained details and realistic textures without segmentation map input; and (ii) how to create a background and weave it seamlessly into standalone objects. 
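One parameter-free reading of the dependency-aware weighting in the summary above: scale each token's attention distribution by a score that decays with the key position's distance from that token's syntactic parent, then renormalize. The Gaussian form and σ are assumptions made for this sketch.

```python
import numpy as np

def dependency_scaled_attention(attn_probs: np.ndarray, parent_idx: np.ndarray,
                                sigma: float = 1.0) -> np.ndarray:
    """attn_probs: (seq, seq) softmaxed attention; parent_idx: (seq,) parent positions."""
    seq_len = attn_probs.shape[0]
    positions = np.arange(seq_len)
    # score[i, j] is large when key position j is close to token i's parent
    dist = positions[None, :] - parent_idx[:, None]
    scores = np.exp(-dist.astype(float) ** 2 / (2.0 * sigma ** 2))
    scaled = attn_probs * scores
    return scaled / scaled.sum(axis=-1, keepdims=True)
```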
To tackle this, we propose Background Hallucination Generative Adversarial Network (BachGAN), which first selects a set of segmentation maps from a large candidate pool via a background retrieval module, then encodes these candidate layouts via a background fusion module to hallucinate a suitable background for the given objects. By generating the hallucinated background representation dynamically, our model can synthesize high-resolution images with both photo-realistic foreground and integral background. Experiments on Cityscapes and ADE20K datasets demonstrate the advantage of BachGAN over existing methods, measured on both visual fidelity of generated images and visual alignment between output images and input layouts.", "target": ["Bounding Boxの情報から、それに従った尤もらしい画像を生成させるタスクを提案。領域分割から画像を生成するのとは異なり、詳細な背景の推察が必要だが、関連度が高そうな領域マップを複数呼び出し、それとBounding boxの情報を統合して画像を生成させるBachGANを提案。"]} {"source": "Recent work has documented the susceptibility of deep learning systems to adversarial examples, but most such attacks directly manipulate the digital input to a classifier. Although a smaller line of work considers physical adversarial attacks, in all cases these involve manipulating the object of interest, e.g., putting a physical sticker on an object to misclassify it, or manufacturing an object specifically intended to be misclassified. In this work, we consider an alternative question: is it possible to fool deep classifiers, over all perceived objects of a certain type, by physically manipulating the camera itself? We show that by placing a carefully crafted and mainly-translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet misclassify target objects as a different (targeted) class. To accomplish this, we propose an iterative procedure for both updating the attack perturbation (to make it adversarial for a given classifier), and the threat model itself (to ensure it is physically realizable). For example, we show that we can achieve physically-realizable attacks that fool ImageNet classifiers in a targeted fashion 49.6% of the time. This presents a new class of physically-realizable threat models to consider in the context of adversarially robust machine learning. Our demo video can be viewed at: this https URL", "target": ["物理的な敵対的サンプルを作成する手法。カメラのレンズに特殊な柄のステッカーを貼ることで、DNNベースの物体認識モデルの判断を誤らせることが可能。"]} {"source": "Clean-label poisoning attacks inject innocuous looking (and \"correctly\" labeled) poison images into training data, causing a model to misclassify a targeted image after being trained on this data. We consider transferable poisoning attacks that succeed without access to the victim network's outputs, architecture, or (in some cases) training data. To achieve this, we propose a new \"polytope attack\" in which poison images are designed to surround the targeted image in feature space. We also demonstrate that using Dropout during poison creation helps to enhance transferability of this attack. We achieve transferable attack success rates of over 50% while poisoning only 1% of the training set.", "target": ["ブラックボックスで画像分類モデルにバックドアを設置する手法。細工データを注入した学習データを攻撃対象モデルに学習させることで、特定の入力データを攻撃者が意図したクラスに分類させることが可能。細工データは見た目に違和感がないため、ラベリング工程でデータの異常を検知することは困難。"]} {"source": "Machine learning models are vulnerable to adversarial examples. For the black-box setting, current substitute attacks need pre-trained models to generate adversarial examples. However, pre-trained models are hard to obtain in real-world tasks. 
In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data. To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models. In particular, we design a multi-branch architecture and label-control loss for the generative model to deal with the uneven distribution of synthetic samples. The substitute model is then trained by the synthetic samples generated by the generative model, which are labeled by the attacked model subsequently. The experiments demonstrate the substitute models produced by DaST can achieve competitive performance compared with the baseline models which are trained by the same train set with attacked models. Additionally, to evaluate the practicability of the proposed method on the real-world task, we attack an online machine learning model on the Microsoft Azure platform. The remote model misclassifies 98.35% of the adversarial examples crafted by our method. To the best of our knowledge, we are the first to train a substitute model for adversarial attacks without any real data.", "target": ["敵対的サンプルをブラックボックスで作成する手法。既存手法は、敵対的サンプルを生成する代替モデルの学習に、攻撃対象モデルの学習データに近似したデータが必要であったが、本手法ではGANで作成したデータを代替モデルの学習に使用している。敵対的サンプルの精度は既存手法と殆ど同じである。"]} {"source": "Recent studies proved that deep learning approaches achieve remarkable results on face detection task. On the other hand, the advances gave rise to a new problem associated with the security of the deep convolutional neural network models unveiling potential risks of DCNNs based applications. Even minor input changes in the digital domain can result in the network being fooled. It was shown then that some deep learning-based face detectors are prone to adversarial attacks not only in a digital domain but also in the real world. In the paper, we investigate the security of the well-known cascade CNN face detection system - MTCNN and introduce an easily reproducible and a robust way to attack it. We propose different face attributes printed on an ordinary white and black printer and attached either to the medical face mask or to the face directly. Our approach is capable of breaking the MTCNN detector in a real-world scenario.", "target": ["医療用マスクや顔(頬など)に特殊な柄のパッチを貼り付けることで、MTCNNベースの顔検出モデルによる顔検出を回避する手法(Adversarial Patch)。監視カメラや防犯カメラなどから顔を秘匿するなど、犯罪に悪用される可能性もある。"]} {"source": "Subword segmentation is widely used to address the open vocabulary problem in machine translation. The dominant approach to subword segmentation is Byte Pair Encoding (BPE), which keeps the most frequent words intact while splitting the rare ones into multiple tokens. While multiple segmentations are possible even with the same vocabulary, BPE splits words into unique sequences; this may prevent a model from better learning the compositionality of words and being robust to segmentation errors. So far, the only way to overcome this BPE imperfection, its deterministic nature, was to create another subword segmentation algorithm (Kudo, 2018). In contrast, we show that BPE itself incorporates the ability to produce multiple segmentations of the same word. We introduce BPE-dropout - simple and effective subword regularization method based on and compatible with conventional BPE. It stochastically corrupts the segmentation procedure of BPE, which leads to producing multiple segmentations within the same fixed BPE framework. 
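A simplified sketch of this stochastic corruption: a greedy BPE segmenter in which each applicable merge is skipped with probability `dropout`, so repeated calls can yield different segmentations of the same word (the merge table below is a toy example).

```python
import random

def bpe_segment(word, merges, dropout=0.0, rng=random):
    """Greedy BPE segmentation; each candidate merge is dropped with
    probability `dropout`, yielding multiple possible segmentations."""
    rank = {pair: i for i, pair in enumerate(merges)}
    symbols = list(word)
    while True:
        candidates = []
        for i in range(len(symbols) - 1):
            pair = (symbols[i], symbols[i + 1])
            if pair in rank and rng.random() >= dropout:   # BPE-dropout: maybe skip this merge
                candidates.append((rank[pair], i))
        if not candidates:
            return symbols
        _, i = min(candidates)                             # apply the highest-priority surviving merge
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]

merges = [("l", "o"), ("lo", "w"), ("e", "r"), ("low", "er")]  # toy merge table
print(bpe_segment("lower", merges, dropout=0.0))   # deterministic: ['lower']
print(bpe_segment("lower", merges, dropout=0.5))   # stochastic, e.g. ['low', 'e', 'r']
```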
Using BPE-dropout during training and the standard BPE during inference improves translation quality by up to 3 BLEU compared to BPE and up to 0.9 BLEU compared to the previous subword regularization.", "target": ["近年主流になっているサブワード単位での分割を行うためのBPEを改善する手法の提案。サブワードへの分割は複数の候補があり得るが、通常のBPEは頻出するペアのマージを機械的に行っていくため処理が決定的になる。そこでマージ操作をランダムにdropすることで正則化を行う。"]} {"source": "Artificial Intelligence (AI) has achieved great success in many domains, and game AI is widely regarded as its beachhead since the dawn of AI. In recent years, studies on game AI have gradually evolved from relatively simple environments (e.g., perfect-information games such as Go, chess, shogi or two-player imperfect-information games such as heads-up Texas hold'em) to more complex ones (e.g., multi-player imperfect-information games such as multi-player Texas hold'em and StarCraft II). Mahjong is a popular multi-player imperfect-information game worldwide but very challenging for AI research due to its complex playing/scoring rules and rich hidden information. We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques including global reward prediction, oracle guiding, and run-time policy adaptation. Suphx has demonstrated stronger performance than most top human players in terms of stable rank and is rated above 99.99% of all the officially ranked human players in the Tenhou platform. This is the first time that a computer program outperforms most top human players in Mahjong.", "target": ["麻雀を強化学習した研究。麻雀プラットフォームの天鳳で99.99%のプレイヤーより上位になる腕前。前/今の手牌から数手先の報酬を予測する、学習中に相手の手牌の情報を使用する(使用率は徐々に下げていく)、モンテカルロ法をベースにシミュレーション中に方策を更新していく手法の3つが肝となっている"]} {"source": "Machine learning research has advanced in multiple aspects, including model structures and learning methods. The effort to automate such research, known as AutoML, has also made significant progress. However, this progress has largely focused on the architecture of neural networks, where it has relied on sophisticated expert-designed layers as building blocks---or similarly restrictive search spaces. Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space. Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging. Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available. We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction for the field.", "target": ["機械学習アルゴリズムを自動生成する研究。機械学習のプロセスを初期化=>予測=>学習=>(予測に戻る)と定義し、内部のロジックは高校レベルの演算を組み合わせて構築する。探索は進化戦略で行う。これにより、誤差逆伝搬で学習する2層のネットワークを発見できたという。"]} {"source": "In the enterprise email search setting, the same search engine often powers multiple enterprises from various industries: technology, education, manufacturing, etc.
However, using the same global ranking model across different enterprises may result in suboptimal search quality, due to the corpora differences and distinct information needs. On the other hand, training an individual ranking model for each enterprise may be infeasible, especially for smaller institutions with limited data. To address this data challenge, in this paper we propose a domain adaptation approach that fine-tunes the global model to each individual enterprise. In particular, we propose a novel application of the Maximum Mean Discrepancy (MMD) approach to information retrieval, which attempts to bridge the gap between the global data distribution and the data distribution for a given individual enterprise. We conduct a comprehensive set of experiments on a large-scale email search engine, and demonstrate that the MMD approach consistently improves the search quality for multiple individual domains, both in comparison to the global ranking model, as well as several competitive domain adaptation baselines including adversarial learning methods.", "target": ["検索システムのドメイン適応を行った研究。Sourceをベースに、データが少ないTargetに適した検索を行う。Source/Targetのクエリ/文書を同じ潜在空間にマッピングすると当然異なる表現になるが、互いに見分けがつかないよう訂正するモデルを作成することで区別なく検索を行う"]} {"source": "Data augmentation is an effective way to improve the performance of deep networks. Unfortunately, current methods are mostly developed for high-level vision tasks (e.g., classification) and few are studied for low-level vision tasks (e.g., image restoration). In this paper, we provide a comprehensive analysis of the existing augmentation methods applied to the super-resolution task. We find that the methods discarding or manipulating the pixels or features too much hamper the image restoration, where the spatial relationship is very important. Based on our analyses, we propose CutBlur that cuts a low-resolution patch and pastes it to the corresponding high-resolution image region and vice versa. The key intuition of CutBlur is to enable a model to learn not only \"how\" but also \"where\" to super-resolve an image. By doing so, the model can understand \"how much\", instead of blindly learning to apply super-resolution to every given pixel. Our method consistently and significantly improves the performance across various scenarios, especially when the model size is big and the data is collected under real-world environments. We also show that our method improves other low-level vision tasks, such as denoising and compression artifact removal.", "target": ["超解像におけるData Augmentationの研究。局所特徴を崩すようなピクセル操作はパフォーマンスの劣化に繋がる。そこで、低解像のパッチを高解像の画像に張り付ける手法を提案。これによりどこを、どの程度編集すればよいかネットワークが学習するとのこと。"]} {"source": "Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously.
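A minimal numpy sketch of the CutBlur augmentation from the super-resolution entry above: a random patch is swapped between the high-resolution image and its low-resolution counterpart that has already been upsampled to the same size (array shapes, the patch ratio, and the synthetic inputs are illustrative).

```python
import numpy as np

def cutblur(hr, lr_up, patch_ratio=0.3, rng=np.random.default_rng()):
    """CutBlur: paste an LR patch into the HR image, or vice versa.

    hr, lr_up: float arrays of identical shape (H, W, C); lr_up is the
    low-resolution image already upsampled to HR size.
    """
    assert hr.shape == lr_up.shape
    h, w = hr.shape[:2]
    ph, pw = int(h * patch_ratio), int(w * patch_ratio)
    top, left = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)
    if rng.random() < 0.5:
        out = hr.copy()                                   # HR image with an LR patch pasted in
        out[top:top + ph, left:left + pw] = lr_up[top:top + ph, left:left + pw]
    else:
        out = lr_up.copy()                                # LR image with an HR patch pasted in
        out[top:top + ph, left:left + pw] = hr[top:top + ph, left:left + pw]
    return out

hr = np.random.rand(64, 64, 3)
lr_up = np.clip(hr + np.random.normal(0, 0.1, hr.shape), 0, 1)  # stand-in for an upsampled LR image
print(cutblur(hr, lr_up).shape)
```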
We systematically study pre-training on multiple graph classification datasets. We find that naive strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.", "target": ["GNNの事前学習について調べた研究。前提としてGNNの事前学習は場合によってほとんど効果がなかったり悪化することもある。ノード/グラフレベルの事前学習を提案しており、前者はn-hopのサブグラフからの接続ノード予測/Dropした属性予測、後者はマルチタスクの予測で学習する"]} {"source": "Single image view synthesis allows for the generation of new views of a scene given a single input image. This is challenging, as it requires comprehensively understanding the 3D scene from a single image. As a result, current methods typically use multiple images, train on ground-truth depth, or are limited to synthetic data. We propose a novel end-to-end model for this task; it is trained on real images without any ground-truth 3D information. To this end, we introduce a novel differentiable point cloud renderer that is used to transform a latent 3D point cloud of features into the target view. The projected features are decoded by our refinement network to inpaint missing regions and generate a realistic output image. The 3D component inside of our generative model allows for interpretable manipulation of the latent feature space at test time, e.g. we can animate trajectories from a single image. Unlike prior work, we can generate high resolution images and generalise to other input resolutions. We outperform baselines and prior work on the Matterport, Replica, and RealEstate10K datasets.", "target": ["2次元画像のみかつEnd-to-Endで他視点画像を生成する研究。入力画像から特徴マップ/深度マップを推定、2つのマップからポイントクラウドを生成し撮影視点Tに沿った画像を微分可能なレンダラーで出力する。出力画像と実物画像を使いGANの枠組みで学習する。"]} {"source": "To measure how well pretrained representations encode some linguistic property, it is common to use accuracy of a probe, i.e. a classifier trained to predict the property from the representations. Despite widespread adoption of probes, differences in their accuracy fail to adequately reflect differences in representations. For example, they do not substantially favour pretrained representations over randomly initialized ones. Analogously, their accuracy can be similar when probing for genuine linguistic labels and probing for random synthetic tasks. To see reasonable differences in accuracy with respect to these random baselines, previous work had to constrain either the amount of probe training data or its model size. Instead, we propose an alternative to the standard probes, information-theoretic probing with minimum description length (MDL). With MDL probing, training a probe to predict labels is recast as teaching it to effectively transmit the data. Therefore, the measure of interest changes from probe accuracy to the description length of labels given representations. In addition to probe quality, the description length evaluates \"the amount of effort\" needed to achieve the quality. This amount of effort characterizes either (i) size of a probing model, or (ii) the amount of data needed to achieve the high quality. We consider two methods for estimating MDL which can be easily implemented on top of the standard probing pipelines: variational coding and online coding. 
We show that these methods agree in results and are more informative and stable than the standard probes.", "target": ["自然言語処理でモデルの「良さ」を証明する手法の提案。通常は精度で行われるが、精度が高いから良質な表現とは限らない(表現に差異さえあれば識別できるため)。そこでyはX(データ)を圧縮伝送した姿ととらえ、伝送コスト=コードの圧縮性(通常のCE)+コード生成を行うモデルの圧縮性双方で評価を行う。"]} {"source": "Atari games have been a long-standing benchmark in the reinforcement learning (RL) community for the past decade. This benchmark was proposed to test general competency of RL algorithms. Previous work has achieved good average performance by doing outstandingly well on many games of the set, but very poorly in several of the most challenging games. We propose Agent57, the first deep RL agent that outperforms the standard human benchmark on all 57 Atari games. To achieve this result, we train a neural network which parameterizes a family of policies ranging from very exploratory to purely exploitative. We propose an adaptive mechanism to choose which policy to prioritize throughout the training process. Additionally, we utilize a novel parameterization of the architecture that allows for more consistent and stable learning.", "target": ["Atari2600に収録されている57タイトルのゲーム全てで人間のスコアを上回った手法の発表。その名もAgent57。新規の手法というより既存の手法をすべてミックスさせたものになっており、時系列記憶・内発報酬・探索活用調整の主に3つが取り入れられている。"]} {"source": "There are thousands of actively spoken languages on Earth, but a single visual world. Grounding in this visual world has the potential to bridge the gap between all these languages. Our goal is to use visual grounding to improve unsupervised word mapping between languages. The key idea is to establish a common visual representation between two languages by learning embeddings from unpaired instructional videos narrated in the native language. Given this shared embedding we demonstrate that (i) we can map words between the languages, particularly the 'visual' words; (ii) that the shared embedding provides a good initialization for existing unsupervised text-based word translation techniques, forming the basis for our proposed hybrid visual-text mapping algorithm, MUVE; and (iii) our approach achieves superior performance by addressing the shortcomings of text-based methods -- it is more robust, handles datasets with less commonality, and is applicable to low-resource languages. We apply these methods to translate words from English to French, Korean, and Japanese -- all without any parallel corpora and simply by watching many videos of people speaking while doing things.", "target": ["画像に基づいた言語知識を得る研究。Video Captionをデータに使用しており、HowTo100Mデータセット(もとは英語)を翻訳して使用。画像は共通のEncoderで、テキストは分散表現(英語以外は事前に適応レイヤを挟む)を単純にMaxpool+全結合でEncodeし、画像+テキストの結合表現が一致するよう学習する。"]} {"source": "The ability to adapt to unseen, local contexts is an important challenge that successful models of source code must overcome. One of the most popular approaches for the adaptation of such models is dynamic evaluation. With dynamic evaluation, when running a model on an unseen file, the model is updated immediately after having observed each token in that file. In this work, we propose instead to frame the problem of context adaptation as a meta-learning problem. We aim to train a base source code model that is best able to learn from information in a file to deliver improved predictions of missing tokens. Unlike dynamic evaluation, this formulation allows us to select more targeted information (support tokens) for adaptation, that is both before and after a target hole in a file. 
We consider an evaluation setting that we call line-level maintenance, designed to reflect the downstream task of code auto-completion in an IDE. Leveraging recent developments in meta-learning such as first-order MAML and Reptile, we demonstrate improved performance in experiments on a large scale Java GitHub corpus, compared to other adaptation baselines including dynamic evaluation. Moreover, our analysis shows that, compared to a non-adaptive baseline, our approach improves performance on identifiers and literals by 44\\% and 15\\%, respectively. Our implementation can be found at: this https URL", "target": ["メタラーニングの枠組みでソースコード補完を行う研究。補完には極論ファイルごとの予測モデルが必要だが、実際難しいため共通知を得るメタラーニングの手法を採用する。補完に有用な情報(support token)の発見をinner、実際の補完をouterで行う。FOMAML/Reptile(#677 )双方で検証している。"]} {"source": "Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent's observations. This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents. The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior. We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent. Videos are available at https://adversarialpolicies.github.io/.", "target": ["強化学習エージェントに対するAdversarialの研究。状態(画像etc)にAdversarialを入れることで獲得報酬は下げられるが実際相手のセンサーを細工するのは困難。そこで同一環境内のエージェントに特定行動を取らせることで妨害する。OpenAI(Dota2)の手法でSelf-Play学習したエージェントの妨害に成功。"]} {"source": "We hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent's life in order to expose it to experiences that enable it to obtain high rewards over the course of its lifetime. We formulate the problem of generating curious behavior as one of meta-learning: an outer loop will search over a space of curiosity mechanisms that dynamically adapt the agent's reward signal, and an inner loop will perform standard reinforcement learning using the adapted reward signal. However, current meta-RL methods based on transferring neural network weights have only generalized between very similar tasks. To broaden the generalization, we instead propose to meta-learn algorithms: pieces of code similar to those designed by humans in ML papers. Our rich language of programs combines neural networks with other building blocks such as buffers, nearest-neighbor modules and custom loss functions. 
We demonstrate the effectiveness of the approach empirically, finding two novel curiosity algorithms that perform on par or better than human-designed published curiosity algorithms in domains as disparate as grid navigation with image inputs, acrobot, lunar lander, ant and hopper.", "target": ["メタラーニングを使用し環境非依存の好奇心(内発的報酬)を学習する研究。入力(状態遷移(s,a,s_t+1))を2つのNNに通し、差分から内発的報酬(実数)を出力する。この時、片側のNNは重みを固定する=ランダムな重みによる出力を(学習により)予測できるようになる=好奇心は低下する、という構成になっている"]} {"source": "Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders XTREME benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.", "target": ["多言語の読解性能を計測するためのデータセットの提案。とはいえ、0-shotを検証することを想定しており学習データは英語のみで実際多言語なのは自動翻訳したテストセットのみ。日本語含む40言語が提供されている。"]} {"source": "Weather forecasting is a long-standing scientific challenge with direct social and economic impact. The task is suitable for deep neural networks due to vast amounts of continuously collected data and a rich spatial and temporal structure that presents long range dependencies. We introduce MetNet, a neural network that forecasts precipitation up to 8 hours into the future at the high spatial resolution of 1 km² and at the temporal resolution of 2 minutes with a latency in the order of seconds. MetNet takes as input radar and satellite data and forecast lead time and produces a probabilistic precipitation map. The architecture uses axial self-attention to aggregate the global context from a large input patch corresponding to a million square kilometers. We evaluate the performance of MetNet at various precipitation thresholds and find that MetNet outperforms Numerical Weather Prediction at forecasts of up to 7 to 8 hours on the scale of the continental United States.", "target": ["降水量を予測する深層学習モデルの提案。既存の物理エンジンベースのモデル(NOAA)より高精度かつ高精細(予測のメッシュが細かく(1km)時間間隔も細かい(2min))。衛星画像とレーダー画像を入力に取りConvLSTMで特徴量作成、軸間のAttentionをとるAxial self-attentionにかけて予測を行う。"]} {"source": "Deep metric learning papers from the past four years have consistently claimed great advances in accuracy, often more than doubling the performance of decade-old methods. In this paper, we take a closer look at the field to see if this is actually true.
We find flaws in the experimental methodology of numerous metric learning papers, and show that the actual improvements over time have been marginal at best.", "target": ["ここ4年の深層距離学習は基本的なcontrastive/triplet lossからほぼ進化していないとの指摘。validationを使ったハイパーパラメーターの適切な設定がされておらず、比較に使ったデータセットに特化したアーキテクチャも見られるとのこと。"]} {"source": "Automatic Curriculum Learning (ACL) has become a cornerstone of recent successes in Deep Reinforcement Learning (DRL). These methods shape the learning trajectories of agents by challenging them with tasks adapted to their capacities. In recent years, they have been used to improve sample efficiency and asymptotic performance, to organize exploration, to encourage generalization or to solve sparse reward problems, among others. The ambition of this work is dual: 1) to present a compact and accessible introduction to the Automatic Curriculum Learning literature and 2) to draw a bigger picture of the current state of the art in ACL to encourage the cross-breeding of existing concepts and the emergence of new ideas.", "target": ["自動カリキュラムラーニング(Automatic CL=ACL)の簡単なまとめ。学習ステップを横軸、タスクの報酬を縦軸にした時の面積を最大化することがゴールになる(=最初のステップでも高い報酬が得られる必要がある)。タスクの並び順調整、軌跡のサンプリング方法の2つに分け解説されている。"]} {"source": "We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state of the art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 three times faster in wall-time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out.", "target": ["強化学習で効率的に分散学習を行う手法の提案。既存の手法(IMPALA)ではLearnerから送られたモデルを使用しActorが行動を行い得られた軌跡をLearnerに送り学習する。サイズが大きいモデルをActor側で実行するため転送/実行速度で問題がある。そこでLearnerに推論まで行わせActorは状態転送に徹する。"]} {"source": "The learned weights of a neural network are often considered devoid of scrutable internal structure. In order to attempt to discern structure in these weights, we introduce a measurable notion of modularity for multi-layer perceptrons (MLPs), and investigate the modular structure of MLPs trained on datasets of small images. Our notion of modularity comes from the graph clustering literature: a \"module\" is a set of neurons with strong internal connectivity but weak external connectivity. We find that MLPs that undergo training and weight pruning are often significantly more modular than random networks with the same distribution of weights. Interestingly, they are much more modular when trained with dropout. Further analysis shows that this modularity seems to arise mostly for networks trained on learnable datasets. We also present exploratory analyses of the importance of different modules for performance and how modules depend on each other.
Understanding the modular structure of neural networks, when such structure exists, will hopefully render their inner workings more interpretable to engineers.", "target": ["学習中にニューラルネットワーク内で構成されるモジュールについて調べた研究。ノード/ノード間の重みをエッジとしてグラフを構築しクラスタリング(Normalized Spectral Clustering)にかけ構成を調査。事前/事後の枝刈り、またdropoutを併用するとモジュール化が促進されるとの結果。"]} {"source": "We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x,y,z) and viewing direction (\\theta, \\phi)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.", "target": ["視点の異なる画像を生成する研究。CNNは使わずなんと全結合層だけで生成を行う。座標(x, y, z)と視点方向(θ, Φ) (=z軸回転, x軸回転)、計5つのパラメーターを引数に視点独立の色(RGB)と空間密度(σ)を出力し、これを基にレンダリングを行う。低メモリかつ高精細の出力を実現"]} {"source": "Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning. A recent step in this direction is the Paired Open-Ended Trailblazer (POET), an algorithm that generates and solves its own challenges, and allows solutions to goal-switch between challenges to avoid local optima. However, the original POET was unable to demonstrate its full creative potential because of limitations of the algorithm itself and because of external issues including a limited problem space and lack of a universal progress measure. Importantly, both limitations pose impediments not only for POET, but for the pursuit of open-endedness in general. Here we introduce and empirically validate two new innovations to the original algorithm, as well as two external innovations designed to help elucidate its full potential. Together, these four advances enable the most open-ended algorithmic demonstration to date. The algorithmic innovations are (1) a domain-general measure of how meaningfully novel new challenges are, enabling the system to potentially create and solve interesting challenges endlessly, and (2) an efficient heuristic for determining when agents should goal-switch from one problem to another (helping open-ended search better scale). Outside the algorithm itself, to enable a more definitive demonstration of open-endedness, we introduce (3) a novel, more flexible way to encode environmental challenges, and (4) a generic measure of the extent to which a system continues to exhibit open-ended innovation. 
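A minimal PyTorch sketch of the fully-connected scene function from the NeRF entry above: a 5D coordinate (position plus viewing direction) is mapped to a colour and a volume density. Layer sizes are illustrative, and the positional encoding and volume rendering of the full method are omitted.

```python
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Map (x, y, z, theta, phi) -> (r, g, b, sigma)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rgb = nn.Linear(hidden, 3)      # view-dependent colour
        self.sigma = nn.Linear(hidden, 1)    # volume density

    def forward(self, coords):
        h = self.backbone(coords)
        return torch.sigmoid(self.rgb(h)), torch.relu(self.sigma(h))

model = TinyRadianceField()
coords = torch.rand(1024, 5)                 # points sampled along camera rays
rgb, sigma = model(coords)
print(rgb.shape, sigma.shape)                # torch.Size([1024, 3]) torch.Size([1024, 1])
```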
Enhanced POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved through other means.", "target": ["環境側を変化させエージェントを学習するPOET(#1060 )の強化版。主に学習に使う環境がどの程度有用かの測定(=環境間の差異計測方法)、いつ環境での学習をやめるか、の2点を改善している。前者は環境における各エージェントの成績ランキングを正規化した値、後者は生成/解かれた数の累積数を提案している。"]} {"source": "Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight. It is a consequence of the selective attention in perception that lets us remain focused on important parts of our world without distraction from irrelevant details. Motivated by selective attention, we study the properties of artificial agents that perceive the world through the lens of a self-attention bottleneck. By constraining access to only a small fraction of the visual input, we show that their policies are directly interpretable in pixel space. We find neuroevolution ideal for training self-attention architectures for vision-based reinforcement learning (RL) tasks, allowing us to incorporate modules that can include discrete, non-differentiable operations which are useful for our agent. We argue that self-attention has similar properties as indirect encoding, in the sense that large implicit weight matrices are generated from a small number of key-query parameters, thus enabling our agent to solve challenging vision based tasks with at least 1000x fewer parameters than existing methods. Since our agent attends to only task critical visual hints, they are able to generalize to environments where task irrelevant elements are modified while conventional methods fail. Videos of our results and source code available at this https URL", "target": ["Self-Attention + 進化戦略で強化学習を行った研究。画像をパッチに区切りFlattenすることで系列を作成し、パッチ間でSelf-Attentionを実行。Attentionを集計し参照が多いパッチ順に並び変えてControllerに入力する。これらの操作は当然微分不可能なので、進化戦略で最適化する。"]} {"source": "Model efficiency has become increasingly important in computer vision. In this paper, we systematically study neural network architecture design choices for object detection and propose several key optimizations to improve efficiency. First, we propose a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multiscale feature fusion; Second, we propose a compound scaling method that uniformly scales the resolution, depth, and width for all backbone, feature network, and box/class prediction networks at the same time. Based on these optimizations and better backbones, we have developed a new family of object detectors, called EfficientDet, which consistently achieve much better efficiency than prior art across a wide spectrum of resource constraints. In particular, with single model and single-scale, our EfficientDet-D7 achieves state-of-the-art 55.1 AP on COCO test-dev with 77M parameters and 410B FLOPs, being 4x - 9x smaller and using 13x - 42x fewer FLOPs than previous detectors. Code is available at this https URL.", "target": ["モデルサイズ/精度が上手くバランスした物体検出器を構築する研究。マルチスケール双方向の特徴ピラミッドをベースに、スケール複合しない接続は刈り、同スケールの特徴を出力に足す枝を追加。特徴抽出には解像度が異なる画像を入力とする複数のEfficientNetを組み合わせる。"]} {"source": "We introduce AutoGluon-Tabular, an open-source AutoML framework that requires only a single line of Python to train highly accurate machine learning models on an unprocessed tabular dataset such as a CSV file. Unlike existing AutoML frameworks that primarily focus on model/hyperparameter selection, AutoGluon-Tabular succeeds by ensembling multiple models and stacking them in multiple layers. 
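A hedged usage sketch of such an interface for the AutoGluon-Tabular entry above, assuming the autogluon.tabular package and its TabularPredictor API; the CSV paths and the label column name are placeholders.

```python
from autogluon.tabular import TabularPredictor  # assumed package/API

# Fit an ensemble directly on a raw CSV; "class" is a placeholder label column.
predictor = TabularPredictor(label="class").fit("train.csv")

predictions = predictor.predict("test.csv")        # predict on unseen rows
leaderboard = predictor.leaderboard("test.csv")    # compare the fitted/ensembled models
print(leaderboard.head())
```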
Experiments reveal that our multi-layer combination of many models offers better use of allocated training time than seeking out the best. A second contribution is an extensive evaluation of public and commercial AutoML platforms including TPOT, H2O, AutoWEKA, auto-sklearn, AutoGluon, and Google AutoML Tables. Tests on a suite of 50 classification and regression tasks from Kaggle and the OpenML AutoML Benchmark reveal that AutoGluon is faster, more robust, and much more accurate. We find that AutoGluon often even outperforms the best-in-hindsight combination of all of its competitors. In two popular Kaggle competitions, AutoGluon beat 99% of the participating data scientists after merely 4h of training on the raw data.", "target": ["テーブルデータに対し自動で予測モデルを構築するAutoGluon-Tabularの発表。モデル1つ選びパラメーター探索する手法と異なり、k-foldのBaggingで学習を行ったモデルを積んでいく(予測結果平均が次層の入力になる)。既存の商用/OSSのAutoMLを上回る精度という。"]} {"source": "Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complexity focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to O(n^{1.5}d) from O(n^2d) for sequence length n and hidden dimension d. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Additionally, we set a new state-of-the-art on the newly released PG-19 data-set, obtaining a test perplexity of 33.2 with a 22 layer Routing Transformer model trained on sequences of length 8192.", "target": ["Transformerの計算効率を改善する研究。Attention範囲を絞る手法は固定範囲か動的範囲の2種類があったが、前者は範囲が柔軟でなく、後者は範囲を学習するため結局全範囲Attentionが必要になるという問題があった。そこでKey/Queryをk-meansのクラスタにまとめたうえでAttentionを行う手法を提案。"]} {"source": "A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs). Such GCNs utilize isotropic kernels and are therefore insensitive to the relative orientation of vertices and thus to the geometry of the mesh as a whole. We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels. Since the resulting features carry orientation information, we introduce a geometric message passing scheme defined by parallel transporting features over mesh edges. Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.", "target": ["3Dメッシュに対するGraph CNNの適用方法を改善した研究。既存手法では畳み込む隣接点で構成される角度はすべて等しいのが前提だったが(isotropic)、実際は異なるため基準角度に対する差分を加味することでこれに対応(anisotropic)。構造一致のタスク(FAUST)で大幅に精度向上"]} {"source": "We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions.
To achieve this goal, IRM learns a data representation such that the optimal classifier, on top of that data representation, matches for all training distributions. Through theory and experiments, we show how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.", "target": ["バイアスのない機械学習モデルを構築するフレームワークの提案。特定環境依存の特徴を採用するリスクを最小化するために、環境横断で特徴量を採用した場合のlossを計算し合算値を最小化するようにする(この時、分類機の表現力が強いと特徴の質が評価できないため分類機の重み範囲を制約する)。"]} {"source": "Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked. In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.", "target": ["事前学習済みモデルは一般的にサイズが大きいが、これをいきなり転移学習するより蒸留を挟んだ方が良いという研究。蒸留は個別タスクではなく元のモデルと同様言語モデル学習で行う。"]} {"source": "Neural machine translation (NMT) has arguably achieved human level parity when trained and evaluated at the sentence-level. Document-level neural machine translation has received less attention and lags behind its sentence-level counterpart. The majority of the proposed document-level approaches investigate ways of conditioning the model on several source or target sentences to capture document context. These approaches require training a specialized NMT model from scratch on parallel document-level corpora. We propose an approach that doesn't require training a specialized model on parallel document-level corpora and is applied to a trained sentence-level NMT model at decoding time. We process the document from left to right multiple times and self-train the sentence-level model on pairs of source sentences and generated translations. Our approach reinforces the choices made by the model, thus making it more likely that the same choices will be made in other sentences in the document. We evaluate our approach on three document-level datasets: NIST Chinese-English, WMT'19 Chinese-English and OpenSubtitles English-Russian. We demonstrate that our approach has higher BLEU score and higher human preference than the baseline. 
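Returning to the Invariant Risk Minimization entry above: a minimal PyTorch sketch of its commonly used practical form, where a per-environment penalty measures how far that environment's risk is from being optimal with respect to a frozen scalar "dummy" classifier (the model, data, and penalty weight below are placeholders).

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    """Squared gradient of the per-environment risk w.r.t. a dummy scale w=1."""
    w = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * w, labels)
    (grad,) = torch.autograd.grad(loss, [w], create_graph=True)
    return grad.pow(2)

def irm_loss(model, envs, lam=1.0):
    """envs: list of (x, y) batches, one per training environment."""
    total = 0.0
    for x, y in envs:
        logits = model(x).squeeze(-1)
        total = total + F.binary_cross_entropy_with_logits(logits, y) \
                      + lam * irm_penalty(logits, y)
    return total / len(envs)

model = torch.nn.Linear(10, 1)
envs = [(torch.randn(32, 10), torch.randint(0, 2, (32,)).float()) for _ in range(2)]
loss = irm_loss(model, envs, lam=10.0)
loss.backward()
print(float(loss))
```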
Qualitative analysis of our approach shows that choices made by the model are consistent across the document.", "target": ["文単位の翻訳モデルを使って文書レベルの翻訳を行う手法。文書内の各文について、自分で翻訳した文を教師データにしてパラメーターを更新していく(最初のパラメーターを取っておいて、そこから離れすぎないよう正則化をかける)。感覚的には文書全体に軽くOverfitさせることでコンテキストを加味する。"]} {"source": "Generative modeling for protein engineering is key to solving fundamental problems in synthetic biology, medicine, and material science. We pose protein engineering as an unsupervised sequence generation problem in order to leverage the exponentially growing set of proteins that lack costly, structural annotations. We train a 1.2B-parameter language model, ProGen, on ∼280M protein sequences conditioned on taxonomic and keyword tags such as molecular function and cellular component. This provides ProGen with an unprecedented range of evolutionary sequence diversity and allows it to generate with fine-grained control as demonstrated by metrics based on primary sequence similarity, secondary structure accuracy, and conformational energy.", "target": ["Transformerベースのモデルで、タンパク質の構造(アミノ酸配列)を学習/予測した研究。タンパク質の性質を加味するために、タグ付けされた系列を使用して学習している。タグはGene Ontologyからくる構造を示すキーワードと、NCBIで規定される学名の2種が使われている。"]} {"source": "We consider the problem of zero-shot coordination - constructing AI agents that can coordinate with novel partners they have not seen before (e.g. humans). Standard Multi-Agent Reinforcement Learning (MARL) methods typically focus on the self-play (SP) setting where agents construct strategies by playing the game with themselves repeatedly. Unfortunately, applying SP naively to the zero-shot coordination problem can produce agents that establish highly specialized conventions that do not carry over to novel partners they have not been trained with. We introduce a novel learning algorithm called other-play (OP), that enhances self-play by looking for more robust strategies, exploiting the presence of known symmetries in the underlying problem. We characterize OP theoretically as well as experimentally. We study the cooperative card game Hanabi and show that OP agents achieve higher scores when paired with independently trained agents. In preliminary results we also show that our OP agents obtain higher average scores when paired with human players, compared to state-of-the-art SP agents.", "target": ["初見のパートナーと上手く組んで報酬を得る強化学習(zero-shot coordination)。自動運転等では周りの自動車は完全別個に学習されているので、こうした問題設定を考える必要がある。通常の自己学習では自己=既知の相手と組むため汎化性能に問題があるため、戦略自体ではなくその軌跡を対象とし学習する"]} {"source": "We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.", "target": ["自然言語とプログラムコード双方で事前学習したモデルの提案。翻訳と同様、自然言語/プログラムコードをSeparatorで区切って学習させる。BERT(#959 )のMask以外にELECTRA(#1539 )の置換トークン発見を目的関数に使っている。自然言語によるコード検索、欠損語推論(多肢選択)で有効性を確認。"]} {"source": "Fine-tuning pretrained contextual word embedding models to supervised downstream tasks has become commonplace in natural language processing. This process, however, is often brittle: even with the same hyperparameter values, distinct random seeds can lead to substantially different results. To better understand this phenomenon, we experiment with four datasets from the GLUE benchmark, fine-tuning BERT hundreds of times on each while varying only the random seeds. We find substantial performance increases compared to previously reported results, and we quantify how the performance of the best-found model varies as a function of the number of fine-tuning trials. Further, we examine two factors influenced by the choice of random seed: weight initialization and training data order. We find that both contribute comparably to the variance of out-of-sample performance, and that some weight initializations perform well across all tasks explored. On small datasets, we observe that many fine-tuning trials diverge part of the way through training, and we offer best practices for practitioners to stop training less promising runs early. We publicly release all of our experimental data, including training and validation scores for 2,100 trials, to encourage further analysis of training dynamics during fine-tuning.", "target": ["BERT(#959 )の転移学習で、ランダムシードの影響を調べた研究。ハイパーパラメーターは固定しシードのみ変えてGLUE benchmarkの計測を行ったところ、公開モデルより数ポイント高い、文法許容性(CoLA)/文間関係(RTE)に至っては7pt高い精度が得られた。重みの初期化と学習データの順序が効くとのこと。"]} {"source": "Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is to counterintuitively train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.", "target": ["Transformer(#329 )のダウンサイジングに適した戦略を調べた研究。通常は小さいサイズのモデルを学習することが多く実際iterationの回転速度は高いが、大きいサイズのモデルを使ったほうが収束が速いという結果。大型のモデルで素早く学習し、その後ダウンサイズするのが最適としている。"]} {"source": "A wide variety of deep learning techniques from style transfer to multitask learning rely on training affine transformations of features. 
Most prominent among these is the popular feature normalization technique BatchNorm, which normalizes activations and then subsequently applies a learned affine transform. In this paper, we aim to understand the role and expressive power of affine parameters used to transform features in this way. To isolate the contribution of these parameters from that of the learned features they transform, we investigate the performance achieved when training only these parameters in BatchNorm and freezing all weights at their random initializations. Doing so leads to surprisingly high performance considering the significant limitations that this style of training imposes. For example, sufficiently deep ResNets reach 82% (CIFAR-10) and 32% (ImageNet, top-5) accuracy in this configuration, far higher than when training an equivalent number of randomly chosen parameters elsewhere in the network. BatchNorm achieves this performance in part by naturally learning to disable around a third of the random features. Not only do these results highlight the expressive power of affine parameters in deep learning, but - in a broader sense - they characterize the expressive power of neural networks constructed simply by shifting and rescaling random features.", "target": ["Batch Normalization(BN)の効果を検証した研究。正規化した値をスケーリング/シフトするパラメーター(γ/β)のみ学習し、他全てのネットワークのパラメーターを初期化時のまま固定し検証。BNのパラメーター学習のみでCIFAR-10で83%の精度を達成。γは1/4~1/2程度のチャンネルを抑制する働きが見られた。"]} {"source": "The lottery ticket hypothesis (Frankle and Carbin, 2018), states that a randomly-initialized network contains a small subnetwork such that, when trained in isolation, can compete with the performance of the original network. We prove an even stronger hypothesis (as was also conjectured in Ramanujan et al., 2019), showing that for every bounded distribution and every target network with bounded weights, a sufficiently over-parameterized neural network with random weights contains a subnetwork with roughly the same accuracy as the target network, without any further training.", "target": ["ランダムな(ReLUの)ネットワークの中には、学習なしにネットワーク全体と同等性能になるサブネットワークが存在することを理論的に示した研究。深さdのネットワークは深さ2dのサブネットワークとして表現できるとのこと。学習と同等の効果が枝刈りで得られるという主張"]} {"source": "Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed, either in a stochastic or a deterministic fashion. Typical examples of such systems consist of multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights inherently characterizes the class of Randomized Neural Networks with a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is undoubtedly a striking advantage with respect to fully trained architectures. Besides, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and theoretically, allowing to analyze intrinsic properties of neural architectures (e.g. before training of the hidden layers' connections). In recent years, the study of Randomized Neural Networks has been extended towards deep architectures, opening new research directions to the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains. 
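A minimal PyTorch sketch of the training setup from the BatchNorm entry above: every parameter is frozen at its random initialisation except the affine γ/β parameters of the BatchNorm layers (the torchvision backbone and optimiser settings are illustrative).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18  # any BatchNorm-heavy backbone works

model = resnet18(num_classes=10)          # randomly initialised

for p in model.parameters():
    p.requires_grad = False               # freeze everything at its random init

bn_params = []
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.weight.requires_grad = True     # gamma (scale)
        m.bias.requires_grad = True       # beta  (shift)
        bn_params += [m.weight, m.bias]

optimizer = torch.optim.SGD(bn_params, lr=0.1, momentum=0.9)
print(sum(p.numel() for p in bn_params), "trainable BatchNorm parameters")
```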
This chapter surveys all the major aspects regarding the design and analysis of Randomized Neural Networks, and some of the key results with respect to their approximation capabilities. In particular, we first introduce the fundamentals of randomized neural models in the context of feed-forward networks (i.e., Random Vector Functional Link and equivalent models) and convolutional filters, before moving to the case of recurrent systems (i.e., Reservoir Computing networks). For both, we focus specifically on recent results in the domain of deep randomized systems, and (for recurrent models) their application to structured domains.", "target": ["ネットワークの接続/重みを固定/ランダム化するRandomized NNについてのサーベイ。手法としては古典的なもの(RNN勃興期のEcho state/Reservoir networkなど)。特徴抽出は固定で、各特徴に対する重み(β)を学習する。パラメーターの大半が固定なので高速な学習が可能。"]} {"source": "While robot learning has demonstrated promising results for enabling robots to automatically acquire new skills, a critical challenge in deploying learning-based systems is scale: acquiring enough data for the robot to effectively generalize broadly. Imitation learning, in particular, has remained a stable and powerful approach for robot learning, but critically relies on expert operators for data collection. In this work, we target this challenge, aiming to build an imitation learning system that can continuously improve through autonomous data collection, while simultaneously avoiding the explicit use of reinforcement learning, to maintain the stability, simplicity, and scalability of supervised imitation. To accomplish this, we cast the problem of imitation with autonomous improvement into a multi-task setting. We utilize the insight that, in a multi-task setting, a failed attempt at one task might represent a successful attempt at another task. This allows us to leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted. Using an initial dataset of multi-task demonstration data, the robot autonomously collects trials which are only sparsely labeled with a binary indication of whether the trial accomplished any useful task or not. We then embed the trials into a learned latent space of tasks, trained using only the initial demonstration dataset, to draw similarities between various trials, enabling the robot to achieve one-shot generalization to new tasks. In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement, and in contrast to reinforcement learning algorithms, our method can effectively improve from sparse, task-agnostic reward signals.", "target": ["模倣学習とメタラーニングを組み合わせて、自己学習を行う手法の提案。メタラーニングを行ったうえで様々なタスクを行わせ、失敗した動作データをフィルタしたうえで学習データに加え再度メタラーニングを行う。これによりタスク横断(=汎用性の高い)スキルを半自動で学習する。"]} {"source": "Exploration in sparse reward environments remains one of the key challenges of model-free reinforcement learning. Instead of solely relying on extrinsic rewards provided by the environment, many state-of-the-art methods use intrinsic rewards to encourage exploration. However, we show that existing methods fall short in procedurally-generated environments where an agent is unlikely to visit a state more than once. We propose a novel type of intrinsic reward which encourages the agent to take actions that lead to significant changes in its learned state representation. We evaluate our method on multiple challenging procedurally-generated tasks in MiniGrid, as well as on tasks with high-dimensional observations used in prior work. 
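A minimal PyTorch sketch of this kind of "impact" bonus: the intrinsic reward is the distance between the learned embeddings of consecutive observations, so actions that barely change the representation earn almost no bonus. The encoder below is a placeholder, and how it is trained (as well as any additional normalisation the full method applies) is omitted.

```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # learned state encoder (placeholder sizes)

def impact_reward(obs, next_obs):
    """Intrinsic reward = change in the learned state representation."""
    with torch.no_grad():
        return torch.norm(embed(next_obs) - embed(obs), p=2, dim=-1)

obs = torch.rand(8, 16)                 # a batch of flattened observations
next_obs = torch.rand(8, 16)
print(impact_reward(obs, next_obs))     # larger representation change -> larger bonus
```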
Our experiments demonstrate that this approach is more sample efficient than existing exploration methods, particularly for procedurally-generated MiniGrid environments. Furthermore, we analyze the learned behavior as well as the intrinsic reward received by our agent. In contrast to previous approaches, our intrinsic reward does not diminish during the course of training and it rewards the agent substantially more for interacting with objects that it can control.", "target": ["強化学習における内発的報酬を改善した研究。通常の内発的報酬は予測差(想定した環境変化と実際の環境変化の差異)を使用するが、提案手法(RIDE)では実際に環境に起こった変化(単に状態差=impact)を使用する。予測差は内発的報酬でなくloss側に組み込み、行動差(実際/逆算行動の差)と合わせて学習する。"]} {"source": "Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and the network undergoes a critical period (Achille et al., 2019). Here, we examine the changes that deep neural networks undergo during this early phase of training. We perform extensive measurements of the network state during these early iterations of training and leverage the framework of Frankle et al. (2019) to quantitatively probe the weight distribution and its reliance on various aspects of the dataset. We find that, within this framework, deep networks are not robust to reinitializing with random weights while maintaining signs, and that weight distributions are highly non-independent even after only a few hundred iterations. Despite this behavior, pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in supervised networks, suggesting that these changes are not inherently label-dependent, though labels significantly accelerate this process. Together, these results help to elucidate the network changes occurring during this pivotal initial period of learning.", "target": ["DNNの宝くじ仮説について、再抽選(学習初期段階に巻き戻して再度宝くじ(=良いサブネットワーク)を引き当てる手法)が有効かを調べた研究。学習の非常に初期の段階で当たりは決まっているようで、浅いネットワークで有効な符号を維持したままの巻き戻しは深いネットワークでは効果がないとのこと。"]} {"source": "We introduce Language World Models, a class of language-conditional generative model which interpret natural language messages by predicting latent codes of future observations. This provides a visual grounding of the message, similar to an enhanced observation of the world, which may include objects outside of the listening agent's field-of-view. We incorporate this \"observation\" into a persistent memory state, and allow the listening agent's policy to condition on it, akin to the relationship between memory and controller in a World Model. We show this improves effective communication and task success in 2D gridworld speaker-listener navigation tasks. In addition, we develop two losses framed specifically for our model-based formulation to promote positive signalling and positive listening. Finally, because messages are interpreted in a generative model, we can visualize the model beliefs to gain insight into how the communication channel is utilized.", "target": ["将来の環境変化を予測するWorld Modelで、メッセージを条件付けに使用した研究。エージェントの観測には限界があるので、外から教えてやる。時系列の環境観測(画像)はVAEで処理(V)、メッセージ(離散表現)とくみ合わせて信念ベクトルを作成し(M)、制御を行う(C)。ナビゲーションタスクで効果を確認。"]} {"source": "Many reinforcement learning algorithms use value functions to guide the search for better policies. These methods estimate the value of a single policy while generalizing across many states. 
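For the RIDE entry above (impact-driven exploration), a minimal sketch of the intrinsic reward: the change in a learned state embedding, discounted by how often the new state was already visited in the episode. The embedding `phi` and the byte-level state key are assumptions for illustration.

```python
# Minimal sketch: "impact"-style intrinsic reward r = ||phi(s') - phi(s)|| / sqrt(episodic count of s').
import math
from collections import defaultdict
import numpy as np

class ImpactReward:
    def __init__(self, phi):
        self.phi = phi                                  # learned state embedding: obs -> vector
        self.episodic_counts = defaultdict(int)

    def reset_episode(self):
        self.episodic_counts.clear()

    def __call__(self, obs, next_obs):
        change = np.linalg.norm(self.phi(next_obs) - self.phi(obs))   # representation change
        key = next_obs.tobytes()                        # naive hashable key for small grid observations
        self.episodic_counts[key] += 1
        return change / math.sqrt(self.episodic_counts[key])
```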
The core idea of this paper is to flip this convention and estimate the value of many policies, for a single set of states. This approach opens up the possibility of performing direct gradient ascent in policy space without seeing any new data. The main challenge for this approach is finding a way to represent complex policies that facilitates learning and generalization. To address this problem, we introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding. Our empirical results demonstrate that combining these three elements (learned Policy Evaluation Network, policy fingerprints, gradient ascent) can produce policies that outperform those that generated the training data, in a zero-shot manner.", "target": ["強化学習では戦略の行動を価値関数で評価するのが一般的だが、この際価値関数は全戦略に適切な状態価値の出力を目指しており戦略を改善する誘導は(直接的には)してくれない。そこで(状態でなく)戦略の出力(Fingerprint)を入力に評価を出力するネットワークを構築し直接戦略を評価できるようにしている"]} {"source": "Back-translation provides a simple yet effective approach to exploit monolingual corpora in Neural Machine Translation (NMT). Its iterative variant, where two opposite NMT models are jointly trained by alternately using a synthetic parallel corpus generated by the reverse model, plays a central role in unsupervised machine translation. In order to start producing sound translations and provide a meaningful training signal to each other, existing approaches rely on either a separate machine translation system to warm up the iterative procedure, or some form of pre-training to initialize the weights of the model. In this paper, we analyze the role that such initialization plays in iterative back-translation. Is the behavior of the final system heavily dependent on it? Or does iterative back-translation converge to a similar solution given any reasonable initialization? Through a series of empirical experiments over a diverse set of warmup systems, we show that, although the quality of the initial system does affect final performance, its effect is relatively small, as iterative back-translation has a strong tendency to converge to a similar solution. As such, the margin of improvement left for the initialization method is narrow, suggesting that future research should focus more on improving the iterative mechanism itself.", "target": ["Back-translationによる相互学習において、初期プロセスの重要さを調査した研究。相互学習では相手の翻訳システムをBack-translationに使用するが、最初は学習していないため相手翻訳の使用率を低い状態から始めるWarm-upが使われる。しかしWarm-upの差による最終的なパフォーマンス差は小さいとの結果"]} {"source": "Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.", "target": ["現時点でBERT(Transformer)(#959 , #329 )についてわかっている/いないことをまとめたサーベイ。BERTで学習されている内容の解析やレイヤ解析、学習方法やアーキテクチャー修正など様々な研究がまとめられており、それら解析手法の限界について述べている。今後はReasoningを計測するタスクの発明が必要としている"]} {"source": "Multi-task reinforcement learning (RL) aims to simultaneously learn policies for solving many tasks. Several prior works have found that relabeling past experience with different reward functions can improve sample efficiency. 
Relabeling methods typically ask: if, in hindsight, we assume that our experience was optimal for some task, for what task was it optimal? In this paper, we show that hindsight relabeling is inverse RL, an observation that suggests that we can use inverse RL in tandem with RL algorithms to efficiently solve many tasks. We use this idea to generalize goal-relabeling techniques from prior work to arbitrary classes of tasks. Our experiments confirm that relabeling data using inverse RL accelerates learning in general multi-task settings, including goal-reaching, domains with discrete sets of rewards, and those with linear reward functions.", "target": ["マルチタスクの強化学習で、各タスクの経験を局面に応じ使い分ける研究。目的のタスクでどれだけ有用か(=軌跡の近さ)を逆強化学習で推定し、有用な軌跡を高く(逆は低く)評価する(=報酬を与える)ことで組み合わせ方の誘導を行う。逆強化で強化学習をブーストする汎用的なフレームワークを提案している。"]} {"source": "Polygon meshes are an efficient representation of 3D geometry, and are of central importance in computer graphics, robotics and games development. Existing learning-based approaches have avoided the challenges of working with 3D meshes, instead using alternative object representations that are more compatible with neural architectures and training approaches. We present an approach which models the mesh directly, predicting mesh vertices and faces sequentially using a Transformer-based architecture. Our model can condition on a range of inputs, including object classes, voxels, and images, and because the model is probabilistic it can produce samples that capture uncertainty in ambiguous scenarios. We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task. We also evaluate the conditional models on surface reconstruction metrics against alternative methods, and demonstrate competitive performance despite not training directly on this task.", "target": ["Transformerを使用した3Dメッシュ生成。各頂点の位置をz=>y=>x順に並び変えて(=低い点~高い点へ向かう系列にする)、Transformerで系列予測を行う。面も低~高で並び替え、頂点群の潜在表現(Encoder出力)を基に推定していく(面の場合使ってる頂点がかぶってはいけないのでマスクを行う)。"]} {"source": "Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models to catastrophically forget, yet virtually all such work involves manually-designed solutions to the problem. We instead advocate meta-learning a solution to catastrophic forgetting, allowing AI to learn to continually learn. Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML). It differentiates through a sequential learning process to meta-learn an activation-gating function that enables context-dependent selective activation within a deep neural network. Specifically, a neuromodulatory (NM) neural network gates the forward pass of another (otherwise normal) neural network called the prediction learning network (PLN). The NM network also thus indirectly controls selective plasticity (i.e. the backward pass of) the PLN. 
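For the PolyGen entry above, a minimal sketch of the vertex ordering it relies on: coordinates are quantized, sorted lowest-to-highest by z, then y, then x, and flattened into one token sequence for the autoregressive model. The 8-bit quantization is an assumption mirroring the paper's discretized coordinates.

```python
# Minimal sketch (NumPy): order mesh vertices z -> y -> x and flatten them into a token sequence.
import numpy as np

def vertices_to_sequence(vertices, n_bins=256):
    """vertices: (N, 3) float array of (x, y, z) coordinates normalized to [0, 1]."""
    quantized = np.clip((vertices * n_bins).astype(np.int64), 0, n_bins - 1)
    # np.lexsort sorts by the last key first, so passing (x, y, z) yields z-major ordering
    order = np.lexsort((quantized[:, 0], quantized[:, 1], quantized[:, 2]))
    sorted_vertices = quantized[order]
    # emit per-vertex (z, y, x) triples as one flat sequence of discrete tokens
    return sorted_vertices[:, ::-1].reshape(-1)
```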
ANML enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).", "target": ["継続的学習で破壊的忘却を防ぐネットワーク機構の提案。通常のネットワーク出力をelement wiseの演算でフィルタするネットワーク(Neuromodulatory network)を導入。フィルタするネットワークはouter loop(メタラーニング)は行うがinner loop(タスク個別の学習)では更新しない。"]} {"source": "We present a generative model for multitask conditional language generation. Our guiding hypothesis is that a shared set of latent skills underlies many disparate language generation tasks, and that explicitly modelling these skills in a task embedding space can help with both positive transfer across tasks and with efficient adaptation to new tasks. We instantiate this task embedding space as a latent variable in a latent variable sequence-to-sequence model. We evaluate this hypothesis by curating a series of monolingual text-to-text language generation datasets - covering a broad range of tasks and domains - and comparing the performance of models both in the multitask and few-shot regimes. We show that our latent task variable model outperforms other sequence-to-sequence baselines on average across tasks in the multitask setting. In the few-shot learning setting on an unseen test dataset (i.e., a new task), we demonstrate that model adaptation based on inference in the latent task space is more robust than standard fine-tuning based parameter adaptation and performs comparably in terms of overall performance. Finally, we examine the latent task representations learnt by our model and show that they cluster tasks in a natural way.", "target": ["自然言語の生成にメタラーニングの考えを導入した研究。スキル分布からスキルのサンプリングを行い、Encoderの潜在表現/Decoderの直前表現に結合して生成を行う。変分推定を使うことで、スキルサンプリングまで一気通貫で学習する。Few-shotの生成で通常転移よりも良好な結果を確認。"]} {"source": "Transformer-based models have brought a radical change to neural machine translation. A key feature of the Transformer architecture is the so-called multi-head attention mechanism, which allows the model to focus simultaneously on different parts of the input. However, recent works have shown that most attention heads learn simple, and often redundant, positional patterns. In this paper, we propose to replace all but one attention head of each encoder layer with simple fixed -- non-learnable -- attentive patterns that are solely based on position and do not require any external knowledge. Our experiments with different data sizes and multiple language pairs show that fixing the attention heads on the encoder side of the Transformer at training time does not impact the translation quality and even increases BLEU scores by up to 3 points in low-resource scenarios.", "target": ["TransformerのAttention Headはだいたい単純で同じパターンなので(現在のトークン、前/後のトークンetc)、わかりきったパターンを学習せず固定して残ったHead1つのみを学習させる手法を提案。学習データが少なく済み、低リソース言語の翻訳でフル学習よりBLEUが向上、中~大規模でもそれほど低下なし。"]} {"source": "We aim to improve question answering (QA) by decomposing hard questions into simpler sub-questions that existing QA systems are capable of answering. Since labeling questions with decompositions is cumbersome, we take an unsupervised approach to produce sub-questions, also enabling us to leverage millions of questions from the internet. Specifically, we propose an algorithm for One-to-N Unsupervised Sequence transduction (ONUS) that learns to map one hard, multi-hop question to many simpler, single-hop sub-questions. 
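For the fixed-attention entry above, a rough sketch of what a non-learnable positional head can look like; hard one-hot patterns (previous / current / next token) are a simplification of the paper's fixed attentive distributions.

```python
# Minimal sketch (NumPy): fixed, non-learnable encoder attention patterns based purely on position.
import numpy as np

def fixed_head(seq_len, offset):
    """Attention matrix in which position i attends to position i + offset (clipped to the sequence)."""
    attn = np.zeros((seq_len, seq_len))
    for i in range(seq_len):
        attn[i, min(max(i + offset, 0), seq_len - 1)] = 1.0
    return attn

def fixed_pattern_heads(seq_len):
    # previous, current and next token: patterns that learned heads tend to rediscover anyway
    return np.stack([fixed_head(seq_len, -1), fixed_head(seq_len, 0), fixed_head(seq_len, +1)])

# Usage: each fixed head outputs attn @ V; one remaining head per layer is learned as usual.
```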
We answer sub-questions with an off-the-shelf QA model and give the resulting answers to a recomposition model that combines them into a final answer. We show large QA improvements on HotpotQA over a strong baseline on the original, out-of-domain, and multi-hop dev sets. ONUS automatically learns to decompose different kinds of questions, while matching the utility of supervised and heuristic decomposition methods for QA and exceeding those methods in fluency. Qualitatively, we find that using sub-questions is promising for shedding light on why a QA system makes a prediction.", "target": ["複雑な質問をシンプルな質問に分解する研究。分解の教師データを集めるのは大変なので、シンプルな質問集を用意し類似度ベースで抜いてくる(そのままでは元の質問と固有名詞などが異なるので、spaCyでエンティティを検知し置換)。複雑⇒収集した質問のデータで言語モデルベースのSeq2Seqを転移学習する"]} {"source": "Many problems in machine learning can be cast as learning functions from sets to graphs, or more generally to hypergraphs; in short, Set2Graph functions. Examples include clustering, learning vertex and edge features on graphs, and learning features on triplets in a collection. A natural approach for building Set2Graph models is to characterize all linear equivariant set-to-hypergraph layers and stack them with non-linear activations. This poses two challenges: (i) the expressive power of these networks is not well understood; and (ii) these models would suffer from high, often intractable computational and memory complexity, as their dimension grows exponentially. This paper advocates a family of neural network models for learning Set2Graph functions that is both practical and of maximal expressive power (universal), that is, can approximate arbitrary continuous Set2Graph functions over compact sets. Testing these models on different machine learning tasks, mainly an application to particle physics, we find them favorable to existing baselines.", "target": ["データ集合(セット)の結合方法(=グラフ化方法)を学習する研究。2ノードならペア、3ノード以上ならハイパーグラフになり結合するペアが増えるほど行列の次元数が増え計算が現実的でなくなる。そこでSet=>Graphを直接行わずSet=>Set(φ)、Set=>Edge表現(β: ベクトルの結合)、Edge=>グラフ(ψ)の3変換で行う"]} {"source": "Nearly every commodity imaging system we directly interact with, or indirectly rely on, leverages power efficient, application-adjustable black-box hardware image signal processing (ISPs) units, running either in dedicated hardware blocks, or as proprietary software modules on programmable hardware. The configuration parameters of these black-box ISPs often have complex interactions with the output image, and must be adjusted prior to deployment according to application-specific quality and performance metrics. Today, this search is commonly performed manually by \"golden eye\" experts or algorithm developers leveraging domain expertise. We present a fully automatic system to optimize the parameters of black-box hardware and software image processing pipelines according to any arbitrary (i.e., application-specific) metric. We leverage a differentiable mapping between the configuration space and evaluation metrics, parameterized by a convolutional neural network that we train in an end-to-end fashion with imaging hardware in-the-loop. Unlike prior art, our differentiable proxies allow for high-dimension parameter search with stochastic first-order optimizers, without explicitly modeling any lower-level image processing transformations. As such, we can efficiently optimize black-box image processing pipelines for a variety of imaging applications, reducing application-specific configuration times from months to hours. Our optimization method is fully automatic, even with black-box hardware in the loop. 
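For the Set2Graph entry above, a minimal sketch of the φ/β/ψ factorization: a set encoder φ embeds each element, β forms an edge representation by concatenating the two endpoint embeddings, and ψ scores each edge. Layer sizes are placeholders.

```python
# Minimal sketch (PyTorch): set -> per-element embeddings -> concatenated pair features -> edge scores.
import torch
import torch.nn as nn

class Set2GraphSketch(nn.Module):
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))        # set -> set (phi)
        self.psi = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, 1))              # edge -> score (psi)

    def forward(self, x):                                            # x: (batch, n, in_dim)
        h = self.phi(x)
        n = h.shape[1]
        hi = h.unsqueeze(2).expand(-1, -1, n, -1)                    # broadcast element i
        hj = h.unsqueeze(1).expand(-1, n, -1, -1)                    # broadcast element j
        edges = torch.cat([hi, hj], dim=-1)                          # beta: concatenation, (batch, n, n, 2*hid)
        return self.psi(edges).squeeze(-1)                           # (batch, n, n) edge scores
```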
We validate our method on experimental data for real-time display applications, object detection, and extreme low-light imaging. The proposed approach outperforms manual search qualitatively and quantitatively for all domain-specific applications tested. When applied to traditional denoisers, we demonstrate that—just by changing hyperparameters—traditional algorithms can outperform recent deep learning methods by a substantial margin on recent benchmarks.", "target": ["画像信号処理(レンズの光学補正や画像センサのノイズ除去など)を行うISPには様々なパラメーターがあり、今までは職人がチューニングしていた。そこで、ISPをエミュレートする微分可能なネットワークを作成し、パラメーターをEndtoEndで学習する仕組みを提案。"]} {"source": "We address the problem of learning structured policies for continuous control. In traditional reinforcement learning, policies of agents are learned by MLPs which take the concatenation of all observations from the environment as input for predicting actions. In this work, we propose NerveNet to explicitly model the structure of an agent, which naturally takes the form of a graph. Specifically, serving as the agent's policy network, NerveNet first propagates information over the structure of the agent and then predict actions for different parts of the agent. In the experiments, we first show that our NerveNet is comparable to state-of-the-art methods on standard MuJoCo environments. We further propose our customized reinforcement learning environments for benchmarking two types of structure transfer learning tasks, i.e., size and disability transfer. We demonstrate that policies learned by NerveNet are significantly better than policies learned by other models and are able to transfer even in a zero-shot setting.", "target": ["強化学習で、エージェントのパーツをノード、パーツ間の物理連動をエッジとしてグラフ化しGraph Neural Networkを適用するという研究。各パーツの位置/角度/加速度を状態とし、グラフで伝搬を行ったうえで戦略に渡す潜在表現を作成する。MuJoCoで高い性能を記録するとともに、転移性能が高いことも確認"]} {"source": "Acquiring abilities in the absence of a task-oriented reward function is at the frontier of reinforcement learning research. This problem has been studied through the lens of empowerment, which draws a connection between option discovery and information theory. Information-theoretic skill discovery methods have garnered much interest from the community, but little research has been conducted in understanding their limitations. Through theoretical analysis and empirical evidence, we show that existing algorithms suffer from a common limitation -- they discover options that provide a poor coverage of the state space. In light of this, we propose 'Explore, Discover and Learn' (EDL), an alternative approach to information-theoretic skill discovery. Crucially, EDL optimizes the same information-theoretic objective derived from the empowerment literature, but addresses the optimization problem using different machinery. We perform an extensive evaluation of skill discovery methods on controlled environments and show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned. Code is publicly available at this https URL.", "target": ["強化学習において、相互情報量の最大化によりスキル獲得を行う研究。スキルは状態から推定を行う方法(Reverse=(H(Z|S))と、スキルによる状態変化から双方から推定を行う方法(Forward=H(S|Z))の2通りがあるが後者を採用。戦略と独立した状態/スキル分布を使用することで探索効率を向上させた。"]} {"source": "Based on interviews with 28 organizations, we found that industry practitioners are not equipped with tactical and strategic tools to protect, detect and respond to attacks on their Machine Learning (ML) systems. 
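For the hardware-ISP entry above, a minimal sketch of the second stage only: assuming a differentiable CNN proxy of the black-box ISP has already been fitted, the ISP's configuration parameters are optimized by gradient descent on an arbitrary task loss through the frozen proxy. The `proxy(raw, params)` interface and `task_loss` are placeholders.

```python
# Minimal sketch (PyTorch): optimize black-box ISP parameters through a frozen differentiable proxy.
import torch

def optimize_isp_params(proxy, raw_images, task_loss, n_params, steps=200, lr=0.05):
    proxy.eval()
    for p in proxy.parameters():
        p.requires_grad = False                            # the learned proxy stays fixed
    params = torch.full((n_params,), 0.5, requires_grad=True)   # ISP knobs, assumed normalized to [0, 1]
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = proxy(raw_images, params.clamp(0.0, 1.0).expand(raw_images.shape[0], -1))
        loss = task_loss(out)                              # any differentiable application-specific metric
        loss.backward()                                    # gradients flow through the proxy into the knobs
        opt.step()
    return params.detach().clamp(0.0, 1.0)
```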
We leverage the insights from the interviews and we enumerate the gaps in perspective in securing machine learning systems when viewed in the context of traditional software security development. We write this paper from the perspective of two personas: developers/ML engineers and security incident responders who are tasked with securing ML systems as they are designed, developed and deployed. The goal of this paper is to engage researchers to revise and amend the Security Development Lifecycle for industrial-grade software in the adversarial ML era.", "target": ["機械学習モデルへの攻撃に対する対策をインタビューした結果をまとめた資料。セキュリティ/ヘルスケア/政府機関等の28機関を対象にしており、対策を計画している機関は6で、また防衛方法についての知識も不足しているという。どの攻撃が脅威と思うか、など興味深い質問もある"]} {"source": "In this paper, we explore illustrations in children's books as a new domain in unpaired image-to-image translation. We show that although the current state-of-the-art image-to-image translation models successfully transfer either the style or the content, they fail to transfer both at the same time. We propose a new generator network to address this issue and show that the resulting network strikes a better balance between style and content. There are no well-defined or agreed-upon evaluation metrics for unpaired image-to-image translation. So far, the success of image translation models has been based on subjective, qualitative visual comparison on a limited number of images. To address this problem, we propose a new framework for the quantitative evaluation of image-to-illustration models, where both content and style are taken into account using separate classifiers. In this new evaluation framework, our proposed model performs better than the current state-of-the-art models on the illustrations dataset. Our code and pretrained models can be found at this https URL.", "target": ["写真を童話風に変換する研究。イラストへの変換では、コンテンツの形式を変えつつ(写真⇒線画)、スタイル(色など)も変換する必要がある。残差を加える(additive)のでなく結合する(concatenative)なskip connectionを使用。スタイル/コンテンツ双方を分類器で判定する評価指標を導入、既存手法を上回る"]} {"source": "Knowledge distillation introduced in the deep learning context is a method to transfer knowledge from one architecture to another. In particular, when the architectures are identical, this is called self-distillation. The idea is to feed in predictions of the trained model as new target values for retraining (and iterate this loop possibly a few times). It has been empirically observed that the self-distilled model often achieves higher accuracy on held out data. Why this happens, however, has been a mystery: the self-distillation dynamics does not receive any new information about the task and solely evolves by looping over training. To the best of our knowledge, there is no rigorous understanding of this phenomenon. This work provides the first theoretical analysis of self-distillation. We focus on fitting a nonlinear function to training data, where the model space is Hilbert space and fitting is subject to ℓ2 regularization in this function space. We show that self-distillation iterations modify regularization by progressively limiting the number of basis functions that can be used to represent the solution. 
This implies (as we also verify empirically) that while a few rounds of self-distillation may reduce over-fitting, further rounds may lead to under-fitting and thus worse performance.", "target": ["自己蒸留(学習済みモデルの予測値を使用して学習する手法)でなぜ性能が向上するのか理論的に検証した研究。蒸留を繰り返すことでモデル内部で使用できる基底関数が徐々に減っていくため、序盤では正則化効果が得られ性能が向上、しかし繰り返し続けると(表現力がなくなり)Underfit状態になることを指摘"]} {"source": "This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.", "target": ["対照学習の構成要素を分解し最適な選択を探索した研究。対照(ペア)にするData Augmentation、loss前ヘッドの有無/変換方法、バッチサイズ等のパラメーターの3つを調査。Random Crop/Color、非線形変換ヘッド、大きめバッチ/長めの学習stepがキーで、ImageNetにおいて1%のラベルで85.8% top-5を達成。"]} {"source": "Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity.", "target": ["言語モデルとクエリ検索を組み合わせた事前学習の提案。Maskをかけた単語を予測する点はBERT型学習と同じだが、予測時に他コーパスからの検索も行う。最終的にはクエリ(質問)の回答根拠になる文を他コーパスから検索/元コンテキストに加えたうえで回答を行う。オープンドメインのQAで良好な精度。"]} {"source": "This study tackles unsupervised domain adaptation of reading comprehension (UDARC). Reading comprehension (RC) is a task to learn the capability for question answering with textual sources. 
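For the SimCLR entry above, a minimal sketch of its contrastive objective: two augmented views per image are projected, and the normalized temperature-scaled cross-entropy (NT-Xent) loss pulls matching views together against all other examples in the batch.

```python
# Minimal sketch (PyTorch): NT-Xent loss over two views z1, z2 of the same image batch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) outputs of the projection head for the two augmented views."""
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2B, dim), unit norm
    sim = z @ z.t() / temperature                           # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                       # a view never counts as its own positive
    # positives: row i pairs with row i+B, and row i+B pairs back with row i
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(sim.device)
    return F.cross_entropy(sim, targets)
```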
State-of-the-art models on RC still do not have general linguistic intelligence; i.e., their accuracy worsens for out-domain datasets that are not used in the training. We hypothesize that this discrepancy is caused by a lack of the language modeling (LM) capability for the out-domain. The UDARC task allows models to use supervised RC training data in the source domain and only unlabeled passages in the target domain. To solve the UDARC problem, we provide two domain adaptation models. The first one learns the out-domain LM and in-domain RC task sequentially. The second one is the proposed model that uses a multi-task learning approach of LM and RC. The models can retain both the RC capability acquired from the supervised data in the source domain and the LM capability from the unlabeled data in the target domain. We evaluated the models on UDARC with five datasets in different domains. The models outperformed the model without domain adaptation. In particular, the proposed model yielded an improvement of 4.3/4.2 points in EM/F1 in an unseen biomedical domain.", "target": ["質問回答などの読解タスクで言語モデルをうまく転移させる手法の提案。言語モデルを学習させてから転移先で読解タスクを学習する(Sequential)が最もシンプルだが、言語モデルと転移先での読解を同時学習するMultitask型を提案(レイヤを一部共有)。後者が概ね前者より良好。"]} {"source": "In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both a word-based baseline language model and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.", "target": ["知識グラフを利用した言語モデルの提案。誕生日や出身地などは、語彙から選択するより知識グラフから選択する方がはるかに容易。WORDを生成するかRELを使うかスイッチする変数を設けて、RELの場合事前学習した関係/ノードの潜在表現を使い選択を行う。知識構造を持つ文書で大幅なPerplexity改善に成功"]} {"source": "In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images). For this reason, previous works have considered partial models, which model only part of the observation. In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning. To address this, we introduce a general family of partial models that are provably causally correct, yet remain fast because they do not need to fully model future observations.", "target": ["シミュレーションのモデルベース(MuZero等 #1477)では行動にバイアスがかかることを指摘した研究。シミュレーションは性質上ある状態から連続した行動をとった結果を推定するが、これにより報酬が行動系列自体に依存することがある。依存を解消するため因果推論の考えを用い、交絡をカットする仲介変数を導入"]} {"source": "Undirected neural sequence models such as BERT (Devlin et al., 2019) have received renewed interest due to their success on discriminative natural language understanding tasks such as question-answering and natural language inference. 
The problem of generating sequences directly from these models has received relatively little attention, in part because generating from undirected models departs significantly from conventional monotonic generation in directed sequence models. We investigate this problem by proposing a generalized model of sequence generation that unifies decoding in directed and undirected models. The proposed framework models the process of generation rather than the resulting sequence, and under this framework, we derive various neural sequence models as special cases, such as autoregressive, semi-autoregressive, and refinement-based non-autoregressive models. This unification enables us to adapt decoding algorithms originally developed for directed sequence models to undirected sequence models. We demonstrate this by evaluating various handcrafted and learned decoding strategies on a BERT-like machine translation model (Lample & Conneau, 2019). The proposed approach achieves constant-time translation results on par with linear-time translation results from the same undirected sequence model, while both are competitive with the state-of-the-art on WMT'14 English-German translation.", "target": ["系列生成の統合的な仕組みの提案。既存の順次生成型だけでなく、BERTのような穴埋め型(前後のtokenに明示的なつながりがないことから\"Undirected\"と称されている)も考慮している。最初に系列の長さを予測し、全maskされた系列を順次置き換えていく形でモデル化を行っている(左⇒右だと通常生成になる)"]} {"source": "This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of optimizing one-step-ahead prediction in the traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction that predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large-scale dataset (160GB), respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pre-training corpus.", "target": ["言語モデルの学習で、1つ先だけでなく複数先(n-gram)の予測を行わせる研究。Encoderは据え置きで、Decode時にpreviousのトークンを使用していくN-Stream Self-Attentionを提案。要約/質問生成のタスクで既存のROUGE等を上回る結果。また、学習epochも少なく済んだという。"]} {"source": "During the past five years the Bayesian deep learning community has developed increasingly accurate and efficient approximate inference procedures that allow for Bayesian inference in deep neural networks. However, despite this algorithmic progress and the promise of improved uncertainty quantification and sample efficiency there are---as of early 2020---no publicized deployments of Bayesian neural networks in industrial practice. In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD. Furthermore, we demonstrate that predictive performance is improved significantly through the use of a \"cold posterior\" that overcounts evidence. 
Such cold posteriors sharply deviate from the Bayesian paradigm but are commonly used as heuristic in Bayesian deep learning papers. We put forward several hypotheses that could explain cold posteriors and evaluate the hypotheses through experiments. Our work questions the goal of accurate posterior approximations in Bayesian deep learning: If the true Bayes posterior is poor, what is the use of more accurate approximations? Instead, we argue that it is timely to focus on understanding the origin of the improved performance of cold posteriors.", "target": ["Bayesian DNNが発展する一方実務で使われないのはなぜなのかを問うた研究。事後分布による予測が単純なSGDベースの予測より悪い点、しかし理論上相反するcold posterior(損失関数≒エネルギー関数のスケール(温度)を1より下げる手法)を使うと上手くいくことを確認。今後より解析が求められることを示唆"]} {"source": "The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern 2011), a benchmark for commonsense reasoning, is a set of 273 expert-crafted pronoun resolution problems originally designed to be unsolvable for statistical models that rely on selectional preferences or word associations. However, recent advances in neural language models have already reached around 90% accuracy on variants of WSC. This raises an important question whether these models have truly acquired robust commonsense capabilities or whether they rely on spurious biases in the datasets that lead to an overestimation of the true capabilities of machine commonsense. To investigate this question, we introduce WinoGrande, a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. The best state-of-the-art methods on WinoGrande achieve 59.4-79.1%, which are 15-35% below human performance of 94.0%, depending on the amount of the training data allowed. Furthermore, we establish new state-of-the-art results on five related benchmarks - WSC (90.1%), DPR (93.1%), COPA (90.6%), KnowRef (85.6%), and Winogender (97.1%). These results have dual implications: on one hand, they demonstrate the effectiveness of WinoGrande when used as a resource for transfer learning. On the other hand, they raise a concern that we are likely to be overestimating the true capabilities of machine commonsense across all these benchmarks. We emphasize the importance of algorithmic bias reduction in existing and future benchmarks to mitigate such overestimation.", "target": ["常識の獲得を問うWinograd Schema Challengeを拡大した研究。オリジナルのデータセットは質問数が273しかなく、既存のモデルで90%以上解けてしまっていた。そのため、数を増やすとともにRoBERTaを使った分類機で言語モデル的に解けるものをはじくなどの工夫を行っている。"]} {"source": "In this paper, we present a novel algorithm, FastWordBug, to efficiently generate small text perturbations in a black-box setting that forces a sentiment analysis or text classification mode to make an incorrect prediction. By combining the part of speech attributes of words, we propose a scoring method that can quickly identify important words that affect text classification. We evaluate FastWordBug on three real-world text datasets and two state-of-the-art machine learning models under black-box setting. The results show that our method can significantly reduce the accuracy of the model, and at the same time, we can call the model as little as possible, with the highest attack efficiency. 
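For the cold-posterior entry above, the tempered posterior written out explicitly (a standard formulation of the heuristic, not quoted from the paper):

```latex
% Tempered ("cold") posterior: the Bayes posterior raised to the power 1/T, with T < 1.
\[
p_T(\theta \mid \mathcal{D}) \;\propto\; \exp\!\left(-\frac{U(\theta)}{T}\right),
\qquad
U(\theta) \;=\; -\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) \;-\; \log p(\theta),
\]
\[
T = 1 \;\Rightarrow\; \text{exact Bayes posterior}, \qquad
T < 1 \;\Rightarrow\; \text{cold posterior, which over-counts the evidence.}
\]
```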
We also attack two popular real-world cloud services of NLP, and the results show that our method works as well.", "target": ["テキスト分類に対するAdversarialの提案。単語単体のconfidenceでなく、品詞(POS)レベルで集計を行いスコア化している。confidenceで文選択⇒ターゲット文内の重要POS単語を抽出⇒編集という流れ。LSTM/CNNベース双方のモデルで検証し、既存手法より精度が下がることを確認。"]} {"source": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.", "target": ["LSTMに対する正則化と最適化方法を提案した研究。様々な手法を提案しているが、再帰(h_t-1)にかかる重みに対しDropConnectをかける手法は、CuDNNLSTMなど高速だがdropout非対応のセルの外側で使用できるため、速度と正則化を両立できる。PTB/WikiText2双方で顕著な効果を確認"]} {"source": "Security of machine learning models is a concern as they may face adversarial attacks for unwarranted advantageous decisions. While research on the topic has mainly been focusing on the image domain, numerous industrial applications, in particular in finance, rely on standard tabular data. In this paper, we discuss the notion of adversarial examples in the tabular domain. We propose a formalization based on the imperceptibility of attacks in the tabular domain leading to an approach to generate imperceptible adversarial examples. Experiments show that we can generate imperceptible adversarial examples with a high fooling rate.", "target": ["テーブルデータに対するAdversarial Exampleを検証した研究。ターゲットのラベルを変動させつつ、特徴量の重要度が変わらないようにデータを改ざんする(=データ/モデルにアクセス可能なWhiteboxの設定)。既存手法より精度(誤らせられる確率)は劣るが、編集量は少なく済む。"]} {"source": "The goal of our paper is to semantically edit parts of an image matching a given text that describes desired attributes (e.g., texture, colour, and background), while preserving other contents that are irrelevant to the text. To achieve this, we propose a novel generative adversarial network (ManiGAN), which contains two key components: text-image affine combination module (ACM) and detail correction module (DCM). The ACM selects image regions relevant to the given text and then correlates the regions with corresponding semantic words for effective manipulation. Meanwhile, it encodes original image features to help reconstruct text-irrelevant contents. The DCM rectifies mismatched attributes and completes missing contents of the synthetic image. Finally, we suggest a new metric for evaluating image manipulation results, in terms of both the generation of new attributes and the reconstruction of text-irrelevant contents. Extensive experiments on the CUB and COCO datasets demonstrate the superior performance of the proposed method. 
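For the weight-dropped LSTM entry above, a minimal sketch of DropConnect on the hidden-to-hidden weights: one mask is sampled per forward pass and shared across time steps. The explicitly unrolled cell is for clarity; practical implementations patch the weights of a fused LSTM instead.

```python
# Minimal sketch (PyTorch): DropConnect on the recurrent (hidden-to-hidden) weights of an LSTM.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, weight_p=0.5):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.weight_p = weight_p

    def forward(self, x, state):                 # x: (batch, seq, input_size)
        h, c = state
        # one DropConnect mask on the recurrent weights, shared across all time steps
        w_hh = F.dropout(self.cell.weight_hh, p=self.weight_p, training=self.training)
        outputs = []
        for t in range(x.shape[1]):
            gates = (x[:, t] @ self.cell.weight_ih.t() + self.cell.bias_ih
                     + h @ w_hh.t() + self.cell.bias_hh)
            i, f, g, o = gates.chunk(4, dim=1)   # PyTorch gate order: input, forget, cell, output
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            outputs.append(h)
        return torch.stack(outputs, dim=1), (h, c)
```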
Code is available at this https URL.", "target": ["文書により精細に画像を操作できるManiGANを提案。画像情報に文書情報にAffine変換を加え、最後に元画像と参照することで操作したい場所以外が変化しないように修正を加える。指定した部分のみがキチンと変化している結果。"]} {"source": "Keyword extraction has received an increasing attention as an important research topic which can lead to have advancements in diverse applications such as document context categorization, text indexing and document classification. In this paper we propose STF-IDF, a novel semantic method based on TF-IDF, for scoring word importance of informal documents in a corpus. A set of nearly four million documents from health-care social media was collected and was trained in order to draw semantic model and to find the word embeddings. Then, the features of semantic space were utilized to rearrange the original TF-IDF scores through an iterative solution so as to improve the moderate performance of this algorithm on informal texts. After testing the proposed method with 200 randomly chosen documents, our method managed to decrease the TF-IDF mean error rate by a factor of 50% and reaching the mean error of 13.7%, as opposed to 27.2% of the original TF-IDF.", "target": ["TF-IDFは全体として頻出する単語にペナルティをかけるが、この場合一時的に多用される単語は意味があるにも関わらず重要でないとされる(流行時多用される\"インフルエンザ\"など)。そのため全体のコンテキストに沿う単語を重視するよう、Word2Vecの距離をベースにTF-IDFの表現を更新していく手法を提案。"]} {"source": "We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. We also propose a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Our experiments show strong correlation between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evaluation) suggests that a human-level SSA of 86% is potentially within reach if we can better optimize perplexity. Additionally, the full version of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated.", "target": ["人間のように対話できるチャットボットを目指した研究。既存の合理性(Sensibleness)の評価だけでは汎用的な反応(\"ok\"など)の繰り返しで高い評価になることから、特異性(Specificity)=その対話独自の返答ができているかを評価。Transformerを構造探索したモデルを使用している"]} {"source": "In this paper, we introduce a new vision-language pre-trained model -- ImageBERT -- for image-text joint embedding. Our model is a Transformer-based model, which takes different modalities as input and models the relationship between them. The model is pre-trained on four tasks simultaneously: Masked Language Modeling (MLM), Masked Object Classification (MOC), Masked Region Feature Regression (MRFR), and Image Text Matching (ITM). To further enhance the pre-training quality, we have collected a Large-scale weAk-supervised Image-Text (LAIT) dataset from Web. We first pre-train the model on this dataset, then conduct a second stage pre-training on Conceptual Captions and SBU Captions. Our experiments show that multi-stage pre-training strategy outperforms single-stage pre-training. 
We also fine-tune and evaluate our pre-trained ImageBERT model on image retrieval and text retrieval tasks, and achieve new state-of-the-art results on both MSCOCO and Flickr30k datasets.", "target": ["BERTの学習方法をマルチモーダル入力(言語/画像)に適用した研究。Maskした言語を予測する通常の事前学習に加え、マスクしたオブジェクトのラベル・特徴を予測するタスク、そして言語/画像の一致を予測するタスクを行う(オブジェクトはFaster-RCNNで事前抽出)。"]} {"source": "This work targets the automated minimum-energy optimization of Quantized Neural Networks (QNNs) - networks using low precision weights and activations. These networks are trained from scratch at an arbitrary fixed point precision. At iso-accuracy, QNNs using fewer bits require deeper and wider network architectures than networks using higher precision operators, while they require less complex arithmetic and less bits per weights. This fundamental trade-off is analyzed and quantified to find the minimum energy QNN for any benchmark and hence optimize energy-efficiency. To this end, the energy consumption of inference is modeled for a generic hardware platform. This allows drawing several conclusions across different benchmarks. First, energy consumption varies orders of magnitude at iso-accuracy depending on the number of bits used in the QNN. Second, in a typical system, BinaryNets or int4 implementations lead to the minimum energy solution, outperforming int8 networks up to 2-10x at iso-accuracy. All code used for QNN training is available from this https URL.", "target": ["ネットワークの重み/活性演算を量子ビット化(離散ノード化)して行う手法の提案。量子化の度合いは1~4bitの範囲が良く、精度/消費エネルギーのバランスがとれるとのこと。Keras的に使える実装が公開されている。"]} {"source": "We present a new method for efficient high-quality image segmentation of objects and scenes. By analogizing classical computer graphics methods for efficient rendering with over- and undersampling challenges faced in pixel labeling tasks, we develop a unique perspective of image segmentation as a rendering problem. From this vantage, we present the PointRend (Point-based Rendering) neural network module: a module that performs point-based segmentation predictions at adaptively selected locations based on an iterative subdivision algorithm. PointRend can be flexibly applied to both instance and semantic segmentation tasks by building on top of existing state-of-the-art models. While many concrete implementations of the general idea are possible, we show that a simple design already achieves excellent results. Qualitatively, PointRend outputs crisp object boundaries in regions that are over-smoothed by previous methods. Quantitatively, PointRend yields significant gains on COCO and Cityscapes, for both instance and semantic segmentation. PointRend's efficiency enables output resolutions that are otherwise impractical in terms of memory or computation compared to existing approaches. Code has been made available at this https URL.", "target": ["Instance Segmentationにおいて、不確かな画素を個々に予測予測することで高解像度のセグメンテーションをできるPointRendを提案。解像度を徐々に上げていき、マスク境界部分などの画素を個々に予測することで精緻なセグメンテーションを達成。Mask R-CNN等既存のネットワークにPointRendのヘッドをつけることで活用も容易。"]} {"source": "This paper demonstrates that multilingual denoising pre-training produces significant performance gains across a wide variety of machine translation (MT) tasks. We present mBART -- a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. 
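For the PointRend entry above, a minimal sketch of its central step: selecting the most uncertain locations of a coarse mask (probability closest to 0.5) so that only those points get re-predicted by the point head.

```python
# Minimal sketch (PyTorch): pick the k most uncertain pixels from a coarse mask prediction.
import torch

def most_uncertain_points(coarse_probs, k):
    """coarse_probs: (batch, H, W) foreground probabilities from the coarse mask head."""
    b, h, w = coarse_probs.shape
    uncertainty = -(coarse_probs - 0.5).abs()                  # highest where the mask is least sure
    _, idx = uncertainty.view(b, -1).topk(k, dim=1)            # flat indices of the top-k points
    ys = torch.div(idx, w, rounding_mode='floor')
    xs = idx % w
    return torch.stack([ys, xs], dim=-1)                       # (batch, k, 2) pixel coordinates to refine
```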
mBART is one of the first methods for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text. Pre-training a complete model allows it to be directly fine tuned for supervised (both sentence-level and document-level) and unsupervised machine translation, with no task-specific modifications. We demonstrate that adding mBART initialization produces performance gains in all but the highest-resource settings, including up to 12 BLEU points for low resource MT and over 5 BLEU points for many document-level and unsupervised models. We also show it also enables new types of transfer to language pairs with no bi-text or that were not in the pre-training corpus, and present extensive analysis of which factors contribute the most to effective pre-training.", "target": ["事前学習を行うことで翻訳の精度を向上させた研究。はじめにシャッフルした文の順番を直しつつ穴あけ単語を補完するタスクを様々な言語で学習。次に翻訳の学習(転移学習)を行う。Back Translation(BT)と併用可能だが単体ではBTと同等/若干悪い結果。ただ低リソース言語の翻訳でBLEUの大幅な向上を確認"]} {"source": "Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. We make our code available at this https URL.", "target": ["半教師あり手法で、弱Augmentationのデータに対し予測の確信度が高いものを学習データとして採用し、今度は強Augmentationをかけた上で予測結果が変わらないよう学習する。シンプルな手法ながら、CIFAR-10において250ラベルのみで94.93%を達成。"]} {"source": "Applying Q-learning to high-dimensional or continuous action spaces can be difficult due to the required maximization over the set of possible actions. Motivated by techniques from amortized inference, we replace the expensive maximization over all actions with a maximization over a small subset of possible actions sampled from a learned proposal distribution. The resulting approach, which we dub Amortized Q-learning (AQL), is able to handle discrete, continuous, or hybrid action spaces while maintaining the benefits of Q-learning. Our experiments on continuous control tasks with up to 21 dimensional actions show that AQL outperforms D3PG (Barth-Maron et al, 2018) and QT-Opt (Kalashnikov et al, 2018). Experiments on structured discrete action spaces demonstrate that AQL can efficiently learn good policies in spaces with thousands of discrete actions.", "target": ["通常のQ-learningは候補となる行動すべてに対し価値を計算する必要があるため行動数が多い/連続値のケースでは適用しにくかった。そこで状態に合わせてありえる行動をサンプリングする分布を挟み、候補を絞り込む手法を提案。苦手とする連続値コントロールで他手法より良好な成績を獲得。"]} {"source": "The paper analyzes the accuracy of publicly available object-recognition systems on a geographically diverse dataset. 
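For the FixMatch entry above, a minimal sketch of the unlabeled-data loss: pseudo-labels come from weakly augmented images, are kept only above a confidence threshold, and then supervise the model's predictions on strongly augmented versions of the same images. `weak_aug` and `strong_aug` are assumed augmentation callables.

```python
# Minimal sketch (PyTorch): confidence-thresholded pseudo-labeling with weak/strong augmentation.
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, unlabeled, weak_aug, strong_aug, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(unlabeled)), dim=1)
        max_probs, pseudo_labels = probs.max(dim=1)
        mask = (max_probs >= threshold).float()        # keep only confident pseudo-labels
    logits_strong = model(strong_aug(unlabeled))
    per_example = F.cross_entropy(logits_strong, pseudo_labels, reduction='none')
    return (per_example * mask).mean()                 # consistency between weak and strong views
```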
This dataset contains household items and was designed to have a more representative geographical coverage than commonly used image datasets in object recognition. We find that the systems perform relatively poorly on household items that commonly occur in countries with a low household income. Qualitative analyses suggest the drop in performance is primarily due to appearance differences within an object class (e.g., dish soap) and due to items appearing in a different context (e.g., toothbrushes appearing outside of bathrooms). The results of our study suggest that further work is needed to make object-recognition systems work equally well for people across different countries and income levels.", "target": ["物体検出の地理的汎用性について調査した研究(世帯収入で区分けしているが、実質的には地理的に分けられている)。所得が低い世帯が多い非西洋の地域では、日用品の物体検出精度が10~20%低下するという結果(識別モデルは各種クラウドベンダの物体検出サービスが使用されている)。"]} {"source": "Learning to predict scene depth from RGB inputs is a challenging task both for indoor and outdoor robot navigation. In this work we address unsupervised learning of scene depth and robot ego-motion where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive and most ubiquitous sensor for robotics. Previous work in unsupervised image-to-depth learning has established strong baselines in the domain. We propose a novel approach which produces higher quality results, is able to model moving objects and is shown to transfer across data domains, e.g. from outdoors to indoor scenes. The main idea is to introduce geometric structure in the learning process, by modeling the scene and the individual objects; camera ego-motion and object motions are learned from monocular videos as input. Furthermore an online refinement method is introduced to adapt learning on the fly to unknown domains. The proposed approach outperforms all state-of-the-art approaches, including those that handle motion e.g. through learned flow. Our results are comparable in quality to the ones which used stereo as supervision and significantly improve depth prediction on scenes and datasets which contain a lot of object motion. The approach is of practical relevance, as it allows transfer across environments, by transferring models trained on data collected for robot navigation in urban scenes to indoor navigation settings. The code associated with this paper can be found at this https URL.", "target": ["単眼の動画情報のみから深度推定を行う手法の提案。カメラの情報予測を介して次フレームの画像を予測して誤差をとることで教師信号を得る。また、Segmentationにより個々の物体の動作を予測することで精度向上が可能。KITTI datasetでSOTA"]} {"source": "Point cloud is point sets defined in 3D metric space. Point cloud has become one of the most significant data format for 3D representation. Its gaining increased popularity as a result of increased availability of acquisition devices, such as LiDAR, as well as increased application in areas such as robotics, autonomous driving, augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision, becoming the most preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, point cloud, on the other hand, is unstructured. The unstructuredness of point clouds makes use of deep learning for its processing directly very challenging. Earlier approaches overcome this challenge by preprocessing the point cloud into a structured grid format at the cost of increased computational cost or lost of depth information. 
Recently, however, many state-of-the-arts deep learning techniques that directly operate on point cloud are being developed. This paper contains a survey of the recent state-of-the-art deep learning techniques that mainly focused on point cloud data. We first briefly discussed the major challenges faced when using deep learning directly on point cloud, we also briefly discussed earlier approaches which overcome the challenges by preprocessing the point cloud into a structured grid. We then give the review of the various state-of-the-art deep learning approaches that directly process point cloud in its unstructured form. We introduced the popular 3D point cloud benchmark datasets. And we also further discussed the application of deep learning in popular 3D vision tasks including classification, segmentation and detection.", "target": ["点群データを扱うDNNのサーベイ。センサーの開発が進んだことで点群データ自体は手に入りやすくなっているが、密度不均一・非構造・順列不変(並びに意味がない)といった特性で扱いにくかった。アプローチは前処理による構造化(Voxel化)、多視点の2次元画像化があり双方について仕組み/手法を解説。"]} {"source": "We propose Sideways, an approximate backpropagation scheme for training video models. In standard backpropagation, the gradients and activations at every computation step through the model are temporally synchronized. The forward activations need to be stored until the backward pass is executed, preventing inter-layer (depth) parallelization. However, can we leverage smooth, redundant input streams such as videos to develop a more efficient training scheme? Here, we explore an alternative to backpropagation; we overwrite network activations whenever new ones, i.e., from new frames, become available. Such a more gradual accumulation of information from both passes breaks the precise correspondence between gradients and activations, leading to theoretically more noisy weight updates. Counter-intuitively, we show that Sideways training of deep convolutional video networks not only still converges, but can also potentially exhibit better generalization compared to standard synchronized backpropagation.", "target": ["動画のような時系列かつ各データ(フレーム)間にそんなに差がないデータを効率的に学習する手法の提案。通常はフレームごとにforward/backwardを行うが、複数フレームのforward結果を混ぜてbackwardを行う。これにより学習速度を向上させる一方汎化性能の向上も確認できた。"]} {"source": "Many advances in Natural Language Processing have been based upon more expressive models for how inputs interact with the context in which they occur. Recurrent networks, which have enjoyed a modicum of success, still lack the generalization and systematicity ultimately required for modelling language. In this work, we propose an extension to the venerable Long Short-Term Memory in the form of mutual gating of the current input and the previous output. This mechanism affords the modelling of a richer space of interactions between inputs and their context. Equivalently, our model can be viewed as making the transition function given by the LSTM context-dependent. Experiments demonstrate markedly improved generalization on language modelling in the range of 3–4 perplexity points on Penn Treebank and Wikitext-2, and 0.01–0.05 bpc on four character-based datasets. We establish a new state of the art on all datasets with the exception of Enwik8, where we close a large gap between the LSTM and Transformer models.", "target": ["LSTMを強化する前処理の提案。入力/隠れ層を双方向にgateしていくことでcontextualizeし、表現力の高まった入力/隠れ層をLSTMに入力する。単純な処理ながら言語モデルで良好な精度(perplexity)を確認。"]} {"source": "Most generative models of audio directly generate samples in one of two domains: time or frequency. 
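For the Mogrifier entry above, a minimal sketch of the mutual gating applied before the usual LSTM update: the input and the previous hidden state repeatedly gate each other through learned projections for a small number of rounds.

```python
# Minimal sketch (PyTorch): mutual gating of x and h prior to a standard LSTM cell.
import torch
import torch.nn as nn

class Mogrify(nn.Module):
    def __init__(self, input_size, hidden_size, rounds=5):
        super().__init__()
        self.rounds = rounds
        self.q = nn.ModuleList(nn.Linear(hidden_size, input_size, bias=False)
                               for _ in range((rounds + 1) // 2))
        self.r = nn.ModuleList(nn.Linear(input_size, hidden_size, bias=False)
                               for _ in range(rounds // 2))

    def forward(self, x, h):
        for i in range(1, self.rounds + 1):
            if i % 2 == 1:                         # odd round: hidden state gates the input
                x = 2 * torch.sigmoid(self.q[i // 2](h)) * x
            else:                                  # even round: updated input gates the hidden state
                h = 2 * torch.sigmoid(self.r[i // 2 - 1](x)) * h
        return x, h                                # feed the gated pair into an ordinary LSTM cell
```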
While sufficient to express any signal, these representations are inefficient, as they do not utilize existing knowledge of how sound is generated and perceived. A third approach (vocoders/synthesizers) successfully incorporates strong domain knowledge of signal processing and perception, but has been less actively researched due to limited expressivity and difficulty integrating with modern auto-differentiation-based machine learning methods. In this paper, we introduce the Differentiable Digital Signal Processing (DDSP) library, which enables direct integration of classic signal processing elements with deep learning methods. Focusing on audio synthesis, we achieve high-fidelity generation without the need for large autoregressive models or adversarial losses, demonstrating that DDSP enables utilizing strong inductive biases without losing the expressive power of neural networks. Further, we show that combining interpretable modules permits manipulation of each separate model component, with applications such as independent control of pitch and loudness, realistic extrapolation to pitches not seen during training, blind dereverberation of room acoustics, transfer of extracted room acoustics to new environments, and transformation of timbre between disparate sources. In short, DDSP enables an interpretable and modular approach to generative modeling, without sacrificing the benefits of deep learning. The library is available at https://github.com/magenta/ddsp and we encourage further contributions from the community and domain experts.", "target": ["直接音声信号やスペクトログラムを生成するのではなく、従来の音声合成処理に倣い正弦波の合成で音声を生成する研究。複雑な音の生成には面倒なパラメーター調整が欠かせなかったが、この「パラメーター調整」の部分をニューラルネットが担当し音声の出力は通常の音声合成処理になっている。"]} {"source": "In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. Information concerning absolute position is inherently useful, and it is reasonable to assume that deep CNNs may implicitly learn to encode this information if there is a means to do so. In this paper, we test this hypothesis revealing the surprising degree of absolute position information that is encoded in commonly used neural networks. A comprehensive set of experiments show the validity of this hypothesis and shed light on how and where this information is represented while offering clues to where positional information is derived from in deep CNNs.", "target": ["CNNは位置情報を学習しているか、学習しているとしたらどこに情報があるか検証した研究。VGG/ResNetの中間特徴から、X/Y方向等のピクセル強度勾配を予測できるか検証している(入力画像(犬or猫等)とは独立の出力を設定)。位置情報は深い層にあり、paddingが学習の肝との結果。"]} {"source": "Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with a sufficient number of heads is at least as expressive as any convolutional layer. 
Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis. Our code is publicly available.", "target": ["Self-Attention(Transformer)とCNNは同等の表現力をもつと主張した論文。position encodingにはrelativeを使用し、自己位置の情報は0に落としている。理論・実験(ただCIFAR10のみ)双方から等質性を検証している。"]} {"source": "Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2\\pi, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2\\pi generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2\\pi produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.", "target": ["言語と画像双方を使ってエージェントを学習させる試み。Goalを自然言語で指示する研究はあったが、環境説明も文書で提示するのは珍しい。言語/画像入力を相互作用させるモジュール(FiLM2)を提案、これを複数重ねたモデル(txt2π)で学習を行っている。どちらか片方より良い結果が得られることを確認。"]} {"source": "The ability to synthesize style and content of different images to form a visually coherent image holds great promise in various applications such as stylistic painting, design prototyping, image editing, and augmented reality. However, the majority of works in image style transfer have focused on transferring the style of an image to the entirety of another image, and only a very small number of works have experimented on methods to transfer style to an instance of another image. Researchers have proposed methods to circumvent the difficulty of transferring style to an instance in an arbitrary shape. In this paper, we propose a topologically inspired algorithm called Forward Stretching to tackle this problem by transforming an instance into a tensor representation, which allows us to transfer style to this instance itself directly. Forward Stretching maps pixels to specific positions and interpolate values between pixels to transform an instance to a tensor. This algorithm allows us to introduce a method to transfer arbitrary style to an instance in an arbitrary shape. We showcase the results of our method in this paper.", "target": ["物体を抽出してスタイル変換を行う研究。画像全体と異なり、抽出された物体は元画像(四角)と形状が異なるためスタイル変換が難しかったが、一度形状を変換してスタイル変換してもとに戻すことにより、スタイル変換を可能にした。"]} {"source": "In this work we ask whether it is possible to create a \"universal\" detector for telling apart real images from these generated by a CNN, regardless of architecture or dataset used. To test this, we collect a dataset consisting of fake images generated by 11 different CNN-based image generator models, chosen to span the space of commonly used architectures today (ProGAN, StyleGAN, BigGAN, CycleGAN, StarGAN, GauGAN, DeepFakes, cascaded refinement networks, implicit maximum likelihood estimation, second-order attention super-resolution, seeing-in-the-dark). 
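The FiLM-style conditioning that the txt2π entry above builds on amounts to a per-channel scale and shift of a feature map, with the scale and shift predicted from a conditioning vector (here, an encoded instruction). A minimal numpy sketch; the single linear layer producing (gamma, beta) and the tensor sizes are illustrative assumptions.

import numpy as np

def film(features, cond, W, b):
    # features: (C, H, W) visual feature map; cond: (D,) conditioning vector.
    # A linear layer maps cond to 2*C numbers: per-channel gamma and beta.
    C = features.shape[0]
    gamma_beta = W @ cond + b
    gamma, beta = gamma_beta[:C], gamma_beta[C:]
    return gamma[:, None, None] * features + beta[:, None, None]

rng = np.random.default_rng(0)
C, H, Wd, D = 4, 5, 5, 16
feats = rng.standard_normal((C, H, Wd))
cond = rng.standard_normal(D)
W = 0.1 * rng.standard_normal((2 * C, D))
b = np.zeros(2 * C)
modulated = film(feats, cond, W, b)  # language-modulated feature map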
We demonstrate that, with careful pre- and post-processing and data augmentation, a standard image classifier trained on only one specific CNN generator (ProGAN) is able to generalize surprisingly well to unseen architectures, datasets, and training methods (including the just released StyleGAN2). Our findings suggest the intriguing possibility that today's CNN-generated images share some common systematic flaws, preventing them from achieving realistic image synthesis. Code and pre-trained networks are available at this https URL.", "target": ["GANで生成した画像は簡単に本物と見分けられるという研究。生成画像のPG-GANの偽画像で学習したモデルも他のGANの生成画像を見分けられているので、CNNベースの生成モデルは何か共通点があるのかもしれない、とのこと。人が見たときの生成の質と確信度はあまり相関がないようだ。"]} {"source": "This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula in order to help AI agents rapidly learn. We show that such algorithms are possible via Generative Teaching Networks (GTNs), a general approach that is, in theory, applicable to supervised, unsupervised, and reinforcement learning, although our experiments only focus on the supervised case. GTNs are deep neural networks that generate data and/or training environments that a learner (e.g. a freshly initialized neural network) trains on for a few SGD steps before being tested on a target task. We then differentiate through the entire learning process via meta-gradients to update the GTN parameters to improve performance on the target task. GTNs have the beneficial property that they can theoretically generate any type of data or training environment, making their potential impact large. This paper introduces GTNs, discusses their potential, and showcases that they can substantially accelerate learning. We also demonstrate a practical and exciting application of GTNs: accelerating the evaluation of candidate architectures for neural architecture search (NAS), which is rate-limited by such evaluations, enabling massive speed-ups in NAS. GTN-NAS improves the NAS state of the art, finding higher performing architectures when controlling for the search proposal mechanism. GTN-NAS also is competitive with the overall state of the art approaches, which achieve top performance while using orders of magnitude less computation than typical NAS methods. Speculating forward, GTNs may represent a first step toward the ambitious goal of algorithms that generate their own training data and, in doing so, open a variety of interesting new research questions and directions.", "target": ["メタ学習において、外ループでデータを最適化して良いモデルを作成する研究。高速に学習を進められる。それによってランダム生成されたネットワークを高速評価できるので、それでNeural Architecture Searchができる。データ拡張手法と合わせることで、ENASより精度UP"]} {"source": "Previous monocular depth estimation methods take a single view and directly regress the expected results. Though recent advances are made by applying geometrically inspired loss functions during training, the inference procedure does not explicitly impose any geometrical constraint. Therefore these models purely rely on the quality of data and the effectiveness of learning to generalize. This either leads to suboptimal results or the demand of huge amount of expensive ground truth labelled data to generate reasonable results. 
In this paper, we show for the first time that the monocular depth estimation problem can be reformulated as two sub-problems, a view synthesis procedure followed by stereo matching, with two intriguing properties, namely i) geometrical constraints can be explicitly imposed during inference; ii) demand on labelled depth data can be greatly alleviated. We show that the whole pipeline can still be trained in an end-to-end fashion and this new formulation plays a critical role in advancing the performance. The resulting model outperforms all the previous monocular depth estimation methods as well as the stereo block matching method in the challenging KITTI dataset by only using a small number of real training data. The model also generalizes well to other monocular depth estimation benchmarks. We also discuss the implications and the advantages of solving monocular depth estimation using stereo methods.", "target": ["深度推定を単眼カメラで予測する際、学習時は両眼カメラの情報を使うが推論時は単眼の情報のみしか扱えない。左カメラの画像から右カメラの画像を合成するというステップを入れることにより、推論時でも仮想的に両眼カメラの情報を扱えるようにした。"]} {"source": "In this work, we propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping. Unlike many existing face swapping works that leverage only limited information from the target image when synthesizing the swapped face, our framework, in its first stage, generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively. We propose a novel attributes encoder for extracting multi-level target face attributes, and a new generator with carefully designed Adaptive Attentional Denormalization (AAD) layers to adaptively integrate the identity and the attributes for face synthesis. To address the challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net). It is trained to recover anomaly regions in a self-supervised way without any manual annotations. Extensive experiments on wild faces demonstrate that our face swapping results are not only considerably more perceptually appealing, but also better identity preserving in comparison to other state-of-the-art methods.", "target": ["顔を入れ替えるGAN。先行研究と異なり転移先の影や遮蔽物、角度をよく再現できている。新しいAdaptive NormであるAADを使ったAEI-Netで生成したものを、targetのAEI-Net再構成誤差と共にHEAR-Netに入れて微調整を行う2stageの機構。"]} {"source": "Overparameterized neural networks can be highly accurate on average on an i.i.d. test set, yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization---stronger-than-typical L2 regularization or early stopping---we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. 
Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm for the group DRO setting and provide convergence guarantees for the new algorithm.", "target": ["学習データとテストデータの間の分布差異に対応できる学習方法の提案。学習データを構成する複数の分布のうち、最もロスが高くなる分布(worst-case)にも対応できるようにするという考え方。グループサイズを使用した正則化により、汎化性能を上げつつも精度が落ちないようにできることを確認。"]} {"source": "Deep reinforcement learning approaches have shown impressive results in a variety of different domains, however, more complex heterogeneous architectures such as world models require the different neural components to be trained separately instead of end-to-end. While a simple genetic algorithm recently showed end-to-end training is possible, it failed to solve a more complex 3D task. This paper presents a method called Deep Innovation Protection (DIP) that addresses the credit assignment problem in training complex heterogenous neural network models end-to-end for such environments. The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in multi-component network, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn to predict properties important for the survival of the agent, without the need for a specific forward-prediction loss.", "target": ["進化戦略で3Dの複雑な環境に挑戦した研究。World Modelsをベースにし、状態/状態遷移を学習するVAE/MDN-RNNが更新された際はその上で動作するcontrollerが慣れるまで猶予を設ける(進化戦略選択時にrewardだけでなくageを加味する)ようにしている。性能よりは適用可能性を調査した研究になっている。"]} {"source": "Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. 
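The group DRO training discussed in the entry above replaces the average loss with the worst-group loss; in the stochastic setting this is commonly done by keeping one weight per group and up-weighting groups whose current loss is high. A rough numpy sketch in that spirit; the step size, the exponentiated-gradient update and the toy grouping are illustrative assumptions rather than the paper's exact algorithm.

import numpy as np

def group_dro_step(per_example_loss, group_ids, q, eta=0.01):
    # One stochastic step: raise the weight of high-loss groups, then
    # return the weighted (robust) loss and the updated group weights.
    n_groups = len(q)
    group_loss = np.zeros(n_groups)
    for g in range(n_groups):
        mask = group_ids == g
        if mask.any():
            group_loss[g] = per_example_loss[mask].mean()
    q = q * np.exp(eta * group_loss)   # exponentiated-gradient ascent on q
    q = q / q.sum()
    robust_loss = float((q * group_loss).sum())
    return robust_loss, q

q = np.ones(3) / 3                        # one weight per pre-defined group
losses = np.array([0.2, 1.5, 0.3, 2.0])   # per-example losses in a minibatch
groups = np.array([0, 1, 0, 2])
robust_loss, q = group_dro_step(losses, groups, q)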
Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.", "target": ["言語モデルの学習で、MASKで隠した単語を当てるのでなく紛らわしい単語に置換した単語を検出する手法の提案。前段に通常のMLM=Masked Language Model(小サイズ)を置き、穴埋めされた=紛らわしい単語に置き換えられた文字列を本体の言語モデルに送って学習する。BERT/RoBERTaを性能/効率で上回る。"]} {"source": "We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this transformation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state-of-the-art perplexity of 15.79 -- a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail.", "target": ["事前学習済みモデルで得られた表現でkNNを行うことで、言語モデルを実装した研究。事前学習済みモデルの強みは良質な潜在表現にあるので、予測するのでなくcontextが近いものをkNNで選択しその次単語を取っている。普通に予測する場合に比べPerplexityが1pt前後低下。"]} {"source": "The free access to large-scale public databases, together with the fast progress of deep learning techniques, in particular Generative Adversarial Networks, have led to the generation of very realistic fake content with its corresponding implications towards society in this era of fake news. This survey provides a thorough review of techniques for manipulating face images including DeepFake methods, and methods to detect such manipulations. In particular, four types of facial manipulation are reviewed: i) entire face synthesis, ii) identity swap (DeepFakes), iii) attribute manipulation, and iv) expression swap. For each manipulation group, we provide details regarding manipulation techniques, existing public databases, and key benchmarks for technology evaluation of fake detection methods, including a summary of results from those evaluations. Among all the aspects discussed in the survey, we pay special attention to the latest generation of DeepFakes, highlighting its improvements and challenges for fake detection. In addition to the survey information, we also discuss open issues and future trends that should be considered to advance in the field.", "target": ["DNNを用いたフェイク画像、特に顔画像の生成/編集についてまとめられたサーベイ。生成/編集されたものを検出する手法・データセットについてもまとめられている。"]} {"source": "Graph Neural Networks (graph NNs) are a promising deep learning approach for analyzing graph-structured data. However, it is known that they do not improve (or sometimes worsen) their predictive performance as we pile up many layers and add non-linearity. To tackle this problem, we investigate the expressive power of graph NNs via their asymptotic behaviors as the layer size tends to infinity. Our strategy is to generalize the forward propagation of a Graph Convolutional Network (GCN), which is a popular graph NN variant, as a specific dynamical system. 
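The kNN-LM entry above combines two next-token distributions: the parametric LM's softmax and a nonparametric distribution built from stored (context embedding, next token) pairs. A minimal numpy sketch of the interpolation; the distance-based softmax over the retrieved neighbors, the temperature and the mixing weight lambda are illustrative assumptions.

import numpy as np

def knn_lm_probs(query, keys, next_tokens, p_lm, k=4, lam=0.25, temp=1.0):
    # Build a kNN next-token distribution from a datastore of context
    # embeddings (keys) and their observed next tokens, then mix it with
    # the parametric LM distribution p_lm.
    vocab = p_lm.shape[0]
    d2 = ((keys - query) ** 2).sum(axis=1)   # squared distances to the query
    idx = np.argsort(d2)[:k]                 # k nearest stored contexts
    w = np.exp(-d2[idx] / temp)
    w = w / w.sum()
    p_knn = np.zeros(vocab)
    for weight, tok in zip(w, next_tokens[idx]):
        p_knn[tok] += weight                 # aggregate neighbors by next token
    return lam * p_knn + (1.0 - lam) * p_lm

rng = np.random.default_rng(0)
vocab, dim, n_store = 10, 16, 100
keys = rng.standard_normal((n_store, dim))
next_tokens = rng.integers(0, vocab, size=n_store)
query = rng.standard_normal(dim)
p_lm = np.full(vocab, 1.0 / vocab)
p_mix = knn_lm_probs(query, keys, next_tokens, p_lm)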
In the case of a GCN, we show that when its weights satisfy the conditions determined by the spectra of the (augmented) normalized Laplacian, its output exponentially approaches the set of signals that carry information of the connected components and node degrees only for distinguishing nodes. Our theory enables us to relate the expressive power of GCNs with the topological information of the underlying graphs inherent in the graph spectra. To demonstrate this, we characterize the asymptotic behavior of GCNs on the Erdős–Rényi graph. We show that when the Erdős–Rényi graph is sufficiently dense and large, a broad range of GCNs on it suffers from the \"information loss\" in the limit of infinite layers with high probability. Based on the theory, we provide a principled guideline for weight normalization of graph NNs. We experimentally confirm that the proposed weight scaling enhances the predictive performance of GCNs in real data. Code is available at https://github.com/delta2323/gnn-asymptotics.", "target": ["既存のGCNが、層を重ねても精度が向上しない理由を解析した研究。グラフの形とGCNの重みスケールが特定条件を満たす場合(グラフがある程度密/大の場合大抵該当)、層を重ねていくと同連結成分/同次数(接続数)のノードは見分けがつかなくなる(Over-smoothing)。そこで次数を用いた正則化でこれを軽減。"]} {"source": "Emotion recognition in conversation (ERC) has received much attention, lately, from researchers due to its potential widespread applications in diverse areas, such as health-care, education, and human resources. In this paper, we present Dialogue Graph Convolutional Network (DialogueGCN), a graph neural network based approach to ERC. We leverage self and inter-speaker dependency of the interlocutors to model conversational context for emotion recognition. Through the graph network, DialogueGCN addresses context propagation issues present in the current RNN-based methods. We empirically show that this method alleviates such issues, while outperforming the current state of the art on a number of benchmark emotion classification datasets.", "target": ["対話中の感情を検出する研究。発話をノード(CNNで特徴抽出=>GRUで時系列の潜在表現を作成)、発話間の依存(誰から誰に)をEdgeとし、有向グラフを作成しGraph Convolutionにかけ判定を行っている。既存の手法をF1で2pt~上回る精度を達成。"]} {"source": "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. 
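The GCN analysis in the entry above concerns repeated application of the augmented normalized adjacency; stacking many such layers with small weight scales makes node features within a connected component nearly indistinguishable, which is the over-smoothing being characterised. A toy numpy sketch; the random graph, weight scale and depth are illustrative assumptions.

import numpy as np

def gcn_layer(A_hat, H, W):
    # One GCN layer: H' = ReLU(A_hat @ H @ W).
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
n, d = 6, 4
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # undirected graph, no self-loops
A_tilde = A + np.eye(n)                      # add self-loops
deg_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]

H = rng.standard_normal((n, d))
for _ in range(50):                          # a very deep stack of GCN layers
    W = 0.5 * rng.standard_normal((d, d))
    H = gcn_layer(A_hat, H, W)
# With small weight spectra, the spread of node features collapses, so nodes
# in the same connected component become hard to tell apart.
spread = np.linalg.norm(H - H.mean(axis=0), axis=1)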
We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.", "target": ["クラス分類をCNN各層の特徴マップ x 重みの掛け合わせから行うことで、特徴マップのどこを重視すればよいか(=Attention)を学習させる試み。分類/セグメンテーションで精度向上が見られたほか、Adversarialへの耐性も確認できたとのこと。"]} {"source": "We present a generalization of the Cauchy/Lorentzian, Geman-McClure, Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2 loss functions. By introducing robustness as a continuous parameter, our loss function allows algorithms built around robust loss minimization to be generalized, which improves performance on basic vision tasks such as registration and clustering. Interpreting our loss as the negative log of a univariate density yields a general probability distribution that includes normal and Cauchy distributions as special cases. This probabilistic interpretation enables the training of neural networks in which the robustness of the loss automatically adapts itself during training, which improves performance on learning-based tasks such as generative image synthesis and unsupervised monocular depth estimation, without requiring any manual parameter tuning.", "target": ["一つのパラメーターを調整するだけでL2を始めとした様々なコスト関数を表現できる汎用関数の提案。これによりコスト関数をまたいだ\"汎化性能\"を検証/獲得することに道が開ける。"]} {"source": "Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.", "target": ["Transformerの演算を効率化する手法。Attentionの計算におけるQKではQに近いKだけ考慮される(=遠い所の計算は無駄)なので、ハッシュ関数を使ってソートしたうえでchunkに区切り、chunk内(似た者同士)/前chunk(接続)のみAttention計算を行う。またReversibleのResNetによる効率化も行っている。"]} {"source": "We propose a novel learning method to rectify document images with various distortion types from a single input image. As opposed to previous learning-based methods, our approach seeks to first learn the distortion flow on input image patches rather than the entire image. We then present a robust technique to stitch the patch results into the rectified document by processing in the gradient domain. Furthermore, we propose a second network to correct the uneven illumination, further improving the readability and OCR accuracy. Due to the less complex distortion present on the smaller image patches, our patch-based approach followed by stitching and illumination correction can significantly improve the overall accuracy in both the synthetic and real datasets.", "target": ["画像のゆがみを補正する研究。画像全体ではなくパッチ+パッチ周辺の領域に切り出し、それらの勾配をつなぎ合わせることで本来の画像全体で得られる勾配フローを構築し、そこからサンプリングして画像を生成している。"]} {"source": "Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet, other quantities such as the variance of the mini-batch gradients or many approximations to the Hessian can, in theory, be computed efficiently, and at the same time as the gradient. 
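The single-parameter family in the robust-loss entry above has a compact closed form: rho(x, alpha, c) = (|alpha - 2| / alpha) * (((x/c)^2 / |alpha - 2| + 1)^(alpha/2) - 1), with L2-, Charbonnier-, Cauchy- and Welsch-like behaviour recovered at particular values of alpha (some only as limits). The numpy sketch below covers alpha not equal to 0 or 2; the parameter values tried are illustrative.

import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    # General robust loss with shape parameter alpha and scale c.
    # alpha = 2 and alpha = 0 are defined as limits in the paper and are
    # not handled by this simple sketch.
    z = (x / c) ** 2
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

residuals = np.linspace(-5.0, 5.0, 11)
for a in (-2.0, 1.0, 1.5):   # smaller alpha -> flatter tails -> more robust
    curve = general_robust_loss(residuals, a)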
While these quantities are of great interest to researchers and practitioners, current deep-learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naively, and the resulting code is rarely shared. This hampers progress in deep learning, and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. To address this problem, we introduce BackPACK, an efficient framework built on top of PyTorch, that extends the backpropagation algorithm to extract additional information from first- and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and an example application by testing several recent curvature approximations for optimization.", "target": ["通常のDNNフレームワークはミニバッチの勾配平均しか計算してくれないが、分散やL2-norm、また2次のモーメントなどは新しい最適化手法(やその検討)にとって欠かせない情報になっている。そこで、これらを効率的に計算する仕組みをPyTorchベースで実装したという研究。"]} {"source": "Exploration of new superconductors still relies on the experience and intuition of experts and is largely a process of experimental trial and error. In one study, only 3% of the candidate materials showed superconductivity. Here, we report the first deep learning model for finding new superconductors. We introduced the method named \"reading periodic table\" which represented the periodic table in a way that allows deep learning to learn to read the periodic table and to learn the law of elements for the purpose of discovering novel superconductors that are outside the training data. It is recognized that it is difficult for deep learning to predict something outside the training data. Although we used only the chemical composition of materials as information, we obtained an R2 value of 0.92 for predicting Tc for materials in a database of superconductors. We also introduced the method named \"garbage-in\" to create synthetic data of non-superconductors that do not exist. Non-superconductors are not reported, but the data must be required for deep learning to distinguish between superconductors and non-superconductors. We obtained three remarkable results. The deep learning can predict superconductivity for a material with a precision of 62%, which shows the usefulness of the model; it found the recently discovered superconductor CaBi2 and another one Hf0.5Nb0.2V2Zr0.3, neither of which is in the superconductor database; and it found Fe-based high-temperature superconductors (discovered in 2008) from the training data before 2008. These results open the way for the discovery of new high-temperature superconductor families. The candidate materials list, data, and method are openly available from the link this https URL.", "target": ["超伝導物質をDLで見つけるという研究。入力は化合物の組成と周期表のみで、超伝導電位温度Tcを予測する。情報が少ないながら、そこそこの精度。"]} {"source": "Despite an impressive performance from the latest GAN for generating hyper-realistic images, GAN discriminators have difficulty evaluating the quality of an individual generated sample. This is because the task of evaluating the quality of a generated image differs from deciding if an image is real or fake. A generated image could be perfect except in a single area but still be detected as fake. Instead, we propose a novel approach for detecting where errors occur within a generated image. 
By collaging real images with generated images, we compute for each pixel, whether it belongs to the real distribution or generated distribution. Furthermore, we leverage attention to model long-range dependency; this allows detection of errors which are reasonable locally but not holistically. For evaluation, we show that our error detection can act as a quality metric for an individual image, unlike FID and IS. We leverage Improved Wasserstein, BigGAN, and StyleGAN to show a ranking based on our metric correlates impressively with FID scores. Our work opens the door for better understanding of GAN and the ability to select the best samples from a GAN model.", "target": ["GANの新しい評価指標Pixel Distance(PD)を提案。生成画像と実画像をコラージュし、パッチ毎に真偽どちら所属するかのスコアをつける。画像毎にスコアが算出でき、可視化も容易。"]} {"source": "Recent research has made the surprising finding that state-of-the-art deep learning models sometimes fail to generalize to small variations of the input. Adversarial training has been shown to be an effective approach to overcome this problem. However, its application has been limited to enforcing invariance to analytically defined transformations like ℓp-norm bounded perturbations. Such perturbations do not necessarily cover plausible real-world variations that preserve the semantics of the input (such as a change in lighting conditions). In this paper, we propose a novel approach to express and formalize robustness to these kinds of real-world transformations of the input. The two key ideas underlying our formulation are (1) leveraging disentangled representations of the input to define different factors of variations, and (2) generating new input images by adversarially composing the representations of different images. We use a StyleGAN model to demonstrate the efficacy of this framework. Specifically, we leverage the disentangled latent representations computed by a StyleGAN model to generate perturbations of an image that are similar to real-world variations (like adding make-up, or changing the skin-tone of a person) and train models to be invariant to these perturbations. Extensive experiments show that our method improves generalization and reduces the effect of spurious correlations (reducing the error rate of a \"smile\" detector by 21% for example).", "target": ["Adversarial exampleを使ってモデルの頑健性を向上させる方法AdvMixを提案。画像xを生成する潜在変数を、ラベルに関わる部分とそうでない部分disentangleし、ラベルに関わらない潜在変数を変えて一番誤分類しやすいサンプルを作成する。"]} {"source": "We propose to reinterpret a standard discriminative classifier of p(y|x) as an energy based model for the joint distribution p(x, y). In this setting, the standard class probabilities can be easily computed as well as unnormalized values of p(x) and p(x|y). Within this framework, standard discriminative architectures may be used and the model can also be trained on unlabeled data. We demonstrate that energy based training of the joint distribution improves calibration, robustness, and out-of-distribution detection while also enabling our models to generate samples rivaling the quality of recent GAN approaches. We improve upon recently proposed techniques for scaling up the training of energy based models and present an approach which adds little overhead compared to standard classification training. 
Our approach is the first to achieve performance rivaling the state-of-the-art in both generative and discriminative learning within one hybrid model.", "target": ["クラス分類ではクラスごとのスコア(logit)をsoftmaxにかけて確率を出力するが、これはp(x, y)(=同時分布)を表現するエネルギーモデルにも使用できる(同時分布を周辺化することでp(x)も得られる)。分類と生成を表裏一体で表現できるということで、双方のタスクでSOTAを達成"]} {"source": "Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks. To overcome this problem, we present a novel approach based on task-conditioned hypernetworks, i.e., networks that generate the weights of a target model based on task identity. Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks only require rehearsing task-specific weight realizations, which can be maintained in memory using a simple regularizer. Besides achieving state-of-the-art performance on standard CL benchmarks, additional experiments on long task sequences reveal that task-conditioned hypernetworks display a very large capacity to retain previous memories. Notably, such long memory lifetimes are achieved in a compressive regime, when the number of trainable hypernetwork weights is comparable or smaller than target network size. We provide insight into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and show that task-conditioned hypernetworks demonstrate transfer learning. Finally, forward information transfer is further supported by empirical results on a challenging CL benchmark based on the CIFAR-10/100 image datasets.", "target": ["破壊的忘却を防ぎつつ連続的にタスクを学習する手法の提案。タスクに応じた重みを生成するネットワーク(ハイパーネットワーク)を構築し、学習済みタスクの(パラメーター生成を行う)重みが変化しないよう正則化をかけて学習する。重みを連続的でなくパーツ(chunk)に分け一括生成する手法も提案している"]} {"source": "One of the most ubiquitous analysis tools employed in single-cell transcriptomics and cytometry is t-distributed stochastic neighbor embedding (t-SNE) [1], used to visualize individual cells as points on a 2D scatter plot such that similar cells are positioned close together. Recently, a related algorithm, called uniform manifold approximation and projection (UMAP) [2] has attracted substantial attention in the single-cell community. In Nature Biotechnology, Becht et al. [3] argued that UMAP is preferable to t-SNE because it better preserves the global structure of the data and is more consistent across runs. Here we show that this alleged superiority of UMAP can be entirely attributed to different choices of initialization in the implementations used by Becht et al.: t-SNE implementations by default used random initialization, while the UMAP implementation used a technique called Laplacian eigenmaps [4] to initialize the embedding. We show that UMAP with random initialization preserves global structure as poorly as t-SNE with random initialization, while t-SNE with informative initialization performs as well as UMAP with informative initialization. Hence, contrary to the claims of Becht et al., their experiments do not demonstrate any advantage of the UMAP algorithm per se, but rather warn against using random initialization.", "target": ["UMAP(#852 )はt-SNEよりもデータの構造をよく捉えられるとされているが、その良さはアルゴリズムではなく初期値に依存するという主張。t-SNEの初期化はランダムだがUMAPは次元削減にも使われるLaplacian eigenmap(ラプラス固有写像)を使っており、この差が出力の差につながっているという。"]} {"source": "Understanding causes and effects in mechanical systems is an essential component of reasoning in the physical world. 
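The reinterpretation in the energy-based classifier entry above needs nothing beyond the logits the classifier already computes: softmax over them gives p(y|x) as usual, while their logsumexp serves as an unnormalized log p(x), i.e. a negative energy. A minimal numpy sketch with made-up logits.

import numpy as np

def classifier_as_ebm(logits):
    # Read a K-way classifier as a model of the joint p(x, y):
    # p(y|x) is the usual softmax, and logsumexp(logits) is log p(x)
    # up to an unknown normalizing constant, so E(x) = -logsumexp(logits).
    m = logits.max()
    log_p_x_unnorm = m + np.log(np.exp(logits - m).sum())
    p_y_given_x = np.exp(logits - log_p_x_unnorm)
    energy_x = -log_p_x_unnorm
    return p_y_given_x, energy_x

logits = np.array([2.0, -1.0, 0.5])
p_y, energy = classifier_as_ebm(logits)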
This work poses a new problem of counterfactual learning of object mechanics from visual input. We develop the CoPhy benchmark to assess the capacity of the state-of-the-art models for causal physical reasoning in a synthetic 3D environment and propose a model for learning the physical dynamics in a counterfactual setting. Having observed a mechanical experiment that involves, for example, a falling tower of blocks, a set of bouncing balls or colliding objects, we learn to predict how its outcome is affected by an arbitrary intervention on its initial conditions, such as displacing one of the objects in the scene. The alternative future is predicted given the altered past and a latent representation of the confounders learned by the model in an end-to-end fashion with no supervision. We compare against feedforward video prediction baselines and show how observing alternative experiences allows the network to capture latent physical properties of the environment, which results in significantly more accurate predictions at the level of super human performance.", "target": ["物理法則の因果を学習する研究。A=>Bという状態に至るまでの過程に加え、Aに介入(ブロックを動かすなど)した場合(C)の到達先(D)が収録されている。このデータを使用し、Graph Convolution(+RNN)でオブジェクトの位置遷移を学習している。"]} {"source": "Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks. However, the meta-learning literature thus far has focused on the task segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task. In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task. We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme. The framework allows both training and testing directly on time series data without segmenting it into discrete tasks. We demonstrate the utility of this approach on a nonlinear meta-regression benchmark as well as two meta-image-classification benchmarks.", "target": ["タスクが連続的に変化する場合のメタラーニングについて定式化した研究。画像識別でラベルの意味が途中から変わるタスクを実験タスクとして挙げている。観測した状態に基づいて現在どのタスクかを推定し(ベイズ推定)、タスク推定確率/タスクに応じた重み(観測履歴から算出)から予測を行い学習する。"]} {"source": "This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. 
Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.", "target": ["100言語、総計2.5Tのデータで事前学習することで多言語のパフォーマンスを向上させた研究。言語単体のスコアを崩すことなく多言語に対応できたとしている。どの言語のサンプリングレートを上げるかは、特定言語のパフォーマンスを取るか汎用性を取るかのトレードオフになるという。"]} {"source": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.", "target": ["Graph ConvolutionをRelation Extractionに応用した研究。係り受け関係をもとにグラフを構築するが、普通に使うとノイズになるエッジやパスに含まれない否定形の認識ができないため、主語/目的語間の最短距離(SDP)をベースに接続しているノードを+1まで含む隣接行列を作成している"]} {"source": "Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations. Neural module networks (NMNs) learn to parse such questions as executable programs composed of learnable modules, performing well on synthetic visual QA domains. However, we find that it is challenging to learn these models for non-synthetic questions on open-domain text, where a model needs to deal with the diversity of natural language and perform a broader range of reasoning. We extend NMNs by: (a) introducing modules that reason over a paragraph of text, performing symbolic reasoning (such as arithmetic, sorting, counting) over numbers and dates in a probabilistic and differentiable manner; and (b) proposing an unsupervised auxiliary loss to help extract arguments associated with the events in text. Additionally, we show that a limited amount of heuristically-obtained question program and intermediate module output supervision provides sufficient inductive bias for accurate learning. Our proposed model significantly outperforms state-of-the-art models on a subset of the DROP dataset that poses a variety of reasoning challenges that are covered by our modules.", "target": ["質問文をクエリに変換し回答する研究。日本で一番人口が多い都市は?という質問ならrelocate(find-max-num(filter(find(都市), 日本), 人口))という形にパース、実行する(findなどのモジュールは決め打ち)。パース/実行が正確でなくても正解しうるため意図的な学習データ(誘導バイアス)を使用している"]} {"source": "Training a neural network is synonymous with learning the values of the weights. By contrast, we demonstrate that randomly weighted neural networks contain subnetworks which achieve impressive performance without ever training the weight values. 
Hidden in a randomly weighted Wide ResNet-50 we show that there is a subnetwork (with random weights) that is smaller than, but matches the performance of a ResNet-34 trained on ImageNet. Not only do these \"untrained subnetworks\" exist, but we provide an algorithm to effectively find them. We empirically show that as randomly weighted neural networks with fixed weights grow wider and deeper, an \"untrained subnetwork\" approaches a network with learned weights in accuracy. Our code and pretrained models are available at this https URL.", "target": ["重みではなく、結合方法を学習することで通常の学習済みモデルに匹敵する精度を出す研究。重みは初期値のまま、結合可否の評価を誤差逆伝播で学習し、その上位数%で選ばれた結合のみを使用して推論を行う"]} {"source": "Multi-task learning is an open and challenging problem in computer vision. The typical way of conducting multi-task learning with deep neural networks is either through handcrafted schemes that share all initial layers and branch out at an adhoc point, or through separate task-specific networks with an additional feature sharing/fusion mechanism. Unlike existing methods, we propose an adaptive sharing approach, called AdaShare, that decides what to share across which tasks to achieve the best recognition accuracy, while taking resource efficiency into account. Specifically, our main idea is to learn the sharing pattern through a task-specific policy that selectively chooses which layers to execute for a given task in the multi-task network. We efficiently optimize the task-specific policy jointly with the network weights, using standard back-propagation. Experiments on several challenging and diverse benchmark datasets with a variable number of tasks well demonstrate the efficacy of our approach over state-of-the-art methods. Project page: this https URL.", "target": ["マルチタスクを行う際、どこまで共有にしてどこから固有にするのかは悩ましい問題となる。そこで、使うレイヤ/飛ばすレイヤをタスクによって選択する手法を提案(一つのタスクしか選択しないレイヤは固有レイヤになる)。セグメンテーションや表面の法線方向推定で効果を確認。"]} {"source": "The generalization and learning speed of a multi-class neural network can often be significantly improved by using soft targets that are a weighted average of the hard targets and the uniform distribution over labels. Smoothing the labels in this way prevents the network from becoming over-confident and label smoothing has been used in many state-of-the-art models, including image classification, language translation and speech recognition. Despite its widespread use, label smoothing is still poorly understood. Here we show empirically that in addition to improving generalization, label smoothing improves model calibration which can significantly improve beam-search. However, we also observe that if a teacher network is trained with label smoothing, knowledge distillation into a student network is much less effective. To explain these observations, we visualize how label smoothing changes the representations learned by the penultimate layer of the network. We show that label smoothing encourages the representations of training examples from the same class to group in tight clusters. 
This results in loss of information in the logits about resemblances between instances of different classes, which is necessary for distillation, but does not hurt generalization or calibration of the model's predictions.", "target": ["マルチクラスの分類などで用いられるlabel smoothingの効果と影響を調査した研究。label smoothingにより同クラスの潜在表現(最終層手前)は固まる傾向がみられ、これにより精度への貢献が生まれるが、蒸留を行う場合はクラス間の類似度が必要なため逆効果になるという結果。"]} {"source": "Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators. The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, using likelihood as a decoding objective leads to text that is bland and strangely repetitive. In this paper, we reveal surprising distributional differences between human text and machine text. In addition, we find that decoding strategies alone can dramatically effect the quality of machine text, even when generated from exactly the same neural language model. Our findings motivate Nucleus Sampling, a simple but effective method to draw the best out of neural generation. By sampling text from the dynamic nucleus of the probability distribution, which allows for diversity while effectively truncating the less reliable tail of the distribution, the resulting text better demonstrates the quality of human text, yielding enhanced diversity without sacrificing fluency and coherence.", "target": ["NMTや言語モデルでテキスト生成を行う場合にtop-kではなく累積確率に基づいて採用する単語を選ぶNucleus Sampling(top-p)を提案。同じフレーズの繰り返しを抑制し、より人間らしい出力を得られる"]} {"source": "We present a simple, fully-convolutional model for real-time (>30 fps) instance segmentation that achieves competitive results on MS COCO evaluated on a single Titan Xp, which is significantly faster than any previous state-of-the-art approach. Moreover, we obtain this result after training on only one GPU. We accomplish this by breaking instance segmentation into two parallel subtasks: (1) generating a set of prototype masks and (2) predicting per-instance mask coefficients. Then we produce instance masks by linearly combining the prototypes with the mask coefficients. We find that because this process doesn't depend on repooling, this approach produces very high-quality masks and exhibits temporal stability for free. Furthermore, we analyze the emergent behavior of our prototypes and show they learn to localize instances on their own in a translation variant manner, despite being fully-convolutional. We also propose Fast NMS, a drop-in 12 ms faster replacement for standard NMS that only has a marginal performance penalty. Finally, by incorporating deformable convolutions into the backbone network, optimizing the prediction head with better anchor scales and aspect ratios, and adding a novel fast mask re-scoring branch, our YOLACT++ model can achieve 34.1 mAP on MS COCO at 33.5 fps, which is fairly close to the state-of-the-art approaches while still running at real-time.", "target": ["リアルタイムのセグメンテーションを行う手法の提案。複数のマスク生成と、インスタンスごとのマスク配合率の出力を組み合わせてセグメンテーションを行う。SOTAに近いmAPをリアルタイムで実現。"]} {"source": "On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. 
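Nucleus (top-p) sampling from the entry above keeps the smallest set of tokens whose cumulative probability exceeds p, renormalizes within that set, and samples from it, cutting off the unreliable tail dynamically. A minimal numpy sketch; p = 0.9 and the toy vocabulary are illustrative choices.

import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    # Sample a token id from the smallest "nucleus" of tokens whose
    # cumulative probability mass exceeds p.
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(probs)[::-1]           # most probable tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=renorm))

vocab_probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
token_id = nucleus_sample(vocab_probs, p=0.9, rng=np.random.default_rng(0))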
OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.", "target": ["OpenAIから、Dota2での挑戦を総括する論文が公開(全66pの大作)。モデルやゲームのバージョンが更新される中、スクラッチから学習していては費用的にも時間的にも間に合わないためパラメーターの持越し(\"外科手術\"と言われている)をどう行ったのか、また学習の工夫など細かいところまで書いてある"]} {"source": "We seek to align agent behavior with a user's objectives in a reinforcement learning setting with unknown dynamics, an unknown reward function, and unknown unsafe states. The user knows the rewards and unsafe states, but querying the user is expensive. To address this challenge, we propose an algorithm that safely and interactively learns a model of the user's reward function. We start with a generative model of initial states and a forward dynamics model trained on off-policy data. Our method uses these models to synthesize hypothetical behaviors, asks the user to label the behaviors with rewards, and trains a neural network to predict the rewards. The key idea is to actively synthesize the hypothetical behaviors from scratch by maximizing tractable proxies for the value of information, without interacting with the environment. We call this method reward query synthesis via trajectory optimization (ReQueST). We evaluate ReQueST with simulated users on a state-based 2D navigation task and the image-based Car Racing video game. The results show that ReQueST significantly outperforms prior methods in learning reward models that transfer to new environments with different initial state distributions. Moreover, ReQueST safely trains the reward model to detect unsafe states, and corrects reward hacking before deploying the agent.", "target": ["人がお手本を示すことが難しい特異なケースやレアケースを模倣学習する研究。(様々な状態から)軌跡を生成し、人が良い/悪いのフィードバックを与えることで報酬関数を学習、それでもってモデルベースの学習を行う。"]} {"source": "Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to \\textit{explain} black box models, rather than creating models that are \\textit{interpretable} in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward -- it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.", "target": ["ブラックボックスの機械学習モデルを解釈する方法を編み出すのをやめなさい、という批判論文。解釈手法は元のモデルを完全に説明できない(もしできたら解釈可能な手法と互換になる)だけでなく、「説明」は多くの場合説明になってないなど様々な点が指摘されている。"]} {"source": "As machine learning black boxes are increasingly being deployed in domains such as healthcare and criminal justice, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. 
Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this paper, we demonstrate that post hoc explanations techniques that rely on input perturbations, such as LIME and SHAP, are not reliable. Specifically, we propose a novel scaffolding technique that effectively hides the biases of any given classifier by allowing an adversarial entity to craft an arbitrary desired explanation. Our approach can be used to scaffold any biased classifier in such a way that its predictions on the input data distribution still remain biased, but the post hoc explanations of the scaffolded classifier look innocuous. Using extensive evaluation with multiple real-world datasets (including COMPAS), we demonstrate how extremely biased (racist) classifiers crafted by our framework can easily fool popular explanation techniques such as LIME and SHAP into generating innocuous explanations which do not reflect the underlying biases.", "target": ["機械学習モデルの挙動を解釈するための代表的手法LIME/SHAPをだます手法。手法はシンプルでLIME/SHAPが生成するデータかどうかを分類し、判定結果によりunbiased/biasedなモデルを使い分ける。これによりバイアスがないと見せかけつつバイアスのある予測を行うことが可能。"]} {"source": "The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images. Many pretext tasks lead to representations that are covariant with image transformations. We argue that, instead, semantic representations ought to be invariant under such transformations. Specifically, we develop Pretext-Invariant Representation Learning (PIRL, pronounced as \"pearl\") that learns invariant representations based on pretext tasks. We use PIRL with a commonly used pretext task that involves solving jigsaw puzzles. We find that PIRL substantially improves the semantic quality of the learned image representations. Our approach sets a new state-of-the-art in self-supervised learning from images on several popular benchmarks for self-supervised learning. Despite being unsupervised, PIRL outperforms supervised pre-training in learning image representations for object detection. Altogether, our results demonstrate the potential of self-supervised learning of image representations with good invariance properties.", "target": ["アノテーションを必要としない疑似タスク(pretext task)を用いて良質な表現を得る研究。通常は疑似タスクを直接解いて学習するが(パーツをばらした画像を復元するなど)、本研究では変換前後の表現が近しくなるよう学習する方法を取っている(元/パーツばらしで表現が同じ等)。"]} {"source": "Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing. Current pre-training procedures usually focus on training the model with several simple tasks to grasp the co-occurrence of words or sentences. However, besides co-occurring, there exists other valuable lexical, syntactic and semantic information in training corpora, such as named entity, semantic closeness and discourse relations. In order to extract to the fullest extent, the lexical, syntactic and semantic information from training corpora, we propose a continual pre-training framework named ERNIE 2.0 which builds and learns incrementally pre-training tasks through constant multi-task learning. Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks including English tasks on GLUE benchmarks and several common tasks in Chinese. 
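The scaffolding in the entry above is essentially a router: a detector trained to recognise the synthetic perturbation samples that LIME/SHAP generate sends those queries to an innocuous model, while genuine inputs still reach the biased model. The toy numpy sketch below uses a crude out-of-distribution check as the detector; the two inner models, the z-score threshold and the feature convention are all illustrative assumptions, not the paper's trained classifier.

import numpy as np

def biased_model(x):
    return int(x[0] > 0)            # decision driven by a sensitive feature

def innocuous_model(x):
    return int(x[1] > 0)            # decision driven by a harmless feature

def looks_synthetic(x, mu, sd, z_thresh=3.0):
    # Crude stand-in for a perturbation detector: flag points that drift
    # far from the training distribution.
    z = np.abs((x - mu) / (sd + 1e-8))
    return bool(z.max() > z_thresh)

def scaffolded_classifier(x, mu, sd):
    if looks_synthetic(x, mu, sd):
        return innocuous_model(x)   # what the explainer's queries see
    return biased_model(x)          # what real inputs see

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 2))
mu, sd = train.mean(axis=0), train.std(axis=0)
real_x = np.array([0.3, -0.2])
perturbed_x = np.array([6.0, -5.5])  # LIME/SHAP-style off-manifold sample
y_for_user = scaffolded_classifier(real_x, mu, sd)
y_for_explainer = scaffolded_classifier(perturbed_x, mu, sd)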
The source codes and pre-trained models have been released at this https URL.", "target": ["自然言語処理における事前学習を改善した研究。単語/文/意味それぞれの表現を獲得させるためにマルチタスクを行うが、最初からすべてのタスクでなく徐々にタスクを増やしていく方法を取っている(これにより学習時間を短縮しつつ破壊的忘却を防止する)。BERTに数ポイント差をつける精度を達成。"]} {"source": "The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. Our approach substantially outperforms standard meta-learning algorithms in these settings.", "target": ["メタラーニングの学習は複数タスクが用意されていることが前提になっているが、個々のタスクをデザインすることは難しく、データ数が少ないとタスクを丸ごと覚えてしまうMemorizationが発生する。そこで、データ分布を経由させるパスを設けることで正則化を行う手法を提案している(x=>yに加えx=>D=>y)"]} {"source": "Light-weight convolutional neural networks (CNNs) suffer performance degradation as their low computational budgets constrain both the depth (number of convolution layers) and the width (number of channels) of CNNs, resulting in limited representation capability. To address this issue, we present Dynamic Convolution, a new design that increases model complexity without increasing the network depth or width. Instead of using a single convolution kernel per layer, dynamic convolution aggregates multiple parallel convolution kernels dynamically based upon their attentions, which are input dependent. Assembling multiple kernels is not only computationally efficient due to the small kernel size, but also has more representation power since these kernels are aggregated in a non-linear way via attention. By simply using dynamic convolution for the state-of-the-art architecture MobileNetV3-Small, the top-1 accuracy of ImageNet classification is boosted by 2.9% with only 4% additional FLOPs and 2.9 AP gain is achieved on COCO keypoint detection.", "target": ["CNNの計算コストを削減するため、kernelをレイヤ間で共有する手法。入力に対しAttentionを計算し、その重み(配合率)でkernelを組み合わせて畳み込みを計算する。既存のアーキテクチャ(Mobilenet)に組み入れることで、速度/精度双方の改善に成功。"]} {"source": "We develop Upside-Down Reinforcement Learning (UDRL), a method for learning to act using only supervised learning techniques. 
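For the dynamic convolution record above, the following PyTorch module is a rough sketch of the idea under simplifying assumptions (a fixed softmax temperature and a grouped-convolution trick for per-sample kernels); it is not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Sketch of dynamic convolution: K parallel kernels are aggregated per
    sample with input-dependent softmax attention, then applied via a grouped
    convolution so each sample uses its own aggregated kernel."""

    def __init__(self, in_ch, out_ch, kernel_size, K=4, temperature=4.0):
        super().__init__()
        self.temperature = temperature  # softens the attention (hyperparameter here)
        self.kernels = nn.Parameter(
            torch.randn(K, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(K, out_ch))
        hidden = max(in_ch // 4, 4)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, hidden), nn.ReLU(), nn.Linear(hidden, K))

    def forward(self, x):
        B, C, H, W = x.shape
        pi = F.softmax(self.attn(x) / self.temperature, dim=1)      # (B, K)
        w = torch.einsum('bk,koihw->boihw', pi, self.kernels)       # per-sample kernels
        b = torch.einsum('bk,ko->bo', pi, self.bias)
        out = F.conv2d(x.reshape(1, B * C, H, W),
                       w.reshape(-1, C, *w.shape[-2:]),
                       bias=b.reshape(-1),
                       padding=w.shape[-1] // 2, groups=B)
        return out.reshape(B, -1, out.shape[-2], out.shape[-1])
```

The attention is computed from globally pooled features, so the extra cost is small relative to the convolution itself while the kernels are combined non-linearly per input.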
Unlike traditional algorithms, UDRL does not use reward prediction or search for an optimal policy. Instead, it trains agents to follow commands such as \"obtain so much total reward in so much time.\" Many of its general principles are outlined in a companion report; the goal of this paper is to develop a practical learning algorithm and show that this conceptually simple perspective on agent training can produce a range of rewarding behaviors for multiple episodic environments. Experiments show that on some tasks UDRL's performance can be surprisingly competitive with, and even exceed that of some traditional baseline algorithms developed over decades of research. Based on these results, we suggest that alternative approaches to expected reward maximization have an important role to play in training useful autonomous agents.", "target": ["強化学習を教師ありで行うことを目指した変わり種の研究。価値関数(状態sにおける行動aの良さ)を推定するのではなく、状態sから目標c(tステップ以内に報酬rを達成)を達成するための行動を推定する。目標は適当な戦略によるサンプリングから更新、目標と状態/行動のセットから戦略を更新するを繰り返す"]} {"source": "Market makers play an important role in providing liquidity to markets by continuously quoting prices at which they are willing to buy and sell, and managing inventory risk. In this paper, we build a multi-agent simulation of a dealer market and demonstrate that it can be used to understand the behavior of a reinforcement learning (RL) based market maker agent. We use the simulator to train an RL-based market maker agent with different competitive scenarios, reward formulations and market price trends (drifts). We show that the reinforcement learning agent is able to learn about its competitor's pricing policy; it also learns to manage inventory by smartly selecting asymmetric prices on the buy and sell sides (skewing), and maintaining a positive (or negative) inventory depending on whether the market price drift is positive (or negative). Finally, we propose and test reward formulations for creating risk averse RL-based market maker agents.", "target": ["金融商品取引の価格設定に強化学習を応用した研究。直接取引のオークションでなく、仲介者を経由するマーケットメイクで適用している。仲介者は実際の販売/買い価格が相場よりどれだけよかったか+販売用に保有する在庫が報酬になる。平均分散最適化のポートフォリオ理論を組み込んだほうが良好な結果"]} {"source": "We propose semantic region-adaptive normalization (SEAN), a simple but effective building block for Generative Adversarial Networks conditioned on segmentation masks that describe the semantic regions in the desired output image. Using SEAN normalization, we can build a network architecture that can control the style of each semantic region individually, e.g., we can specify one style reference image per region. SEAN is better suited to encode, transfer, and synthesize style than the best previous method in terms of reconstruction quality, variability, and visual quality. We evaluate SEAN on multiple datasets and report better quantitative metrics (e.g. FID, PSNR) than the current state of the art. SEAN also pushes the frontier of interactive image editing. We can interactively edit images by changing segmentation masks or the style for any given region. We can also interpolate styles from two reference images per region.", "target": ["画像とその領域分割マスクをもとに、そのスタイルを転移させる新しい正規化層SEANを提案。 (Encoderで)固定次元にしたStyle Matrixと各領域分割マスク毎に独立でStyleの入れ込みをするため、精密、かつ、任意の意味数を持つ領域マスクを使ってスタイル変換が可能。"]} {"source": "Many deep learning models are vulnerable to the adversarial attack, i.e., imperceptible but intentionally-designed perturbations to the input can cause incorrect output of the networks. 
In this paper, using information geometry, we provide a reasonable explanation for the vulnerability of deep learning models. By considering the data space as a non-linear space with the Fisher information metric induced from a neural network, we first propose an adversarial attack algorithm termed one-step spectral attack (OSSA). The method is described by a constrained quadratic form of the Fisher information matrix, where the optimal adversarial perturbation is given by the first eigenvector, and the model vulnerability is reflected by the eigenvalues. The larger an eigenvalue is, the more vulnerable the model is to be attacked by the corresponding eigenvector. Taking advantage of the property, we also propose an adversarial detection method with the eigenvalues serving as characteristics. Both our attack and detection algorithms are numerically optimized to work efficiently on large datasets. Our evaluations show superior performance compared with other methods, implying that the Fisher information is a promising approach to investigate the adversarial attacks and defenses.", "target": ["画像の分類問題におけるKL(p(y|x)|| p(y|x+η))を最大にするようなAdversarial example ηを生成する攻撃手法の提案とその検知方法の提案。通常のadversarial example(FGSM)より小さいノイズでfooling rateの向上を確認"]} {"source": "Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on Imagenet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static. Code used in our work can be found in this http URL.", "target": ["\"宝くじ仮説\"のように良い初期値を選ぶのではなく、どのような初期値でも最初から疎で高精度なネットワークを学習させるThe Rigged Lottery (RigL)を提案。『疎なNN学習→パラメータ小な部分を削除→勾配大同士を結合』を繰り返して学習。学習時間も大きく伸びず、精度そのままで推論速度向上が確認。"]} {"source": "Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence. Whereas numerous studies have developed these principles independently, here, we combine them, building on the assumption of a class-conditional noise process to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This results in a generalized CL which is provably consistent and experimentally performant. 
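The drop-and-grow topology update behind the sparse-training record above (RigL) can be sketched in a few lines; this is a simplified single-layer version assuming a modest drop fraction, so every update keeps the number of active weights constant.

```python
import torch

def rigl_update_mask(weight, grad, mask, drop_fraction=0.3):
    """One drop-and-grow topology update (simplified): remove the
    smallest-magnitude active weights, then regrow the same number of inactive
    connections with the largest gradient magnitude."""
    n_drop = int(drop_fraction * mask.sum().item())
    if n_drop == 0:
        return mask
    active = mask.bool()
    # drop: smallest |w| among currently active connections
    w_mag = torch.where(active, weight.abs(), torch.full_like(weight, float('inf')))
    drop_idx = torch.topk(w_mag.flatten(), n_drop, largest=False).indices
    # grow: largest |grad| among currently inactive connections
    g_mag = torch.where(active, torch.full_like(grad, -1.0), grad.abs())
    grow_idx = torch.topk(g_mag.flatten(), n_drop, largest=True).indices
    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    return new_mask.view_as(mask)
```

In the paper this update runs only at infrequent intervals, the drop fraction is annealed over training, and regrown weights start at zero; those details are omitted here.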
We present sufficient conditions where CL exactly finds label errors, and show CL performance exceeding seven recent competitive approaches for learning with noisy labels on the CIFAR dataset. Uniquely, the CL framework is not coupled to a specific data modality or model (e.g., we use CL to find several label errors in the presumed error-free MNIST dataset and improve sentiment classification on text data in Amazon Reviews). We also employ CL on ImageNet to quantify ontological class overlap (e.g., estimating 645 \"missile\" images are mislabeled as their parent class \"projectile\"), and moderately increase model accuracy (e.g., for ResNet) by cleaning data prior to training. These results are replicable using the open-source cleanlab release.", "target": ["ノイズが含まれたラベルを枝刈りして学習する手法の提案。モデルの出力確率(Confidence)からラベル間の混合行列を推定し、ノイズが多いクラスをConfidenceに従い間引くことでcleanなデータセットを作成する。ImageNetには10万のラベルのノイズが含まれるという。"]} {"source": "Recent superhuman results in games have largely been achieved in a variety of zero-sum settings, such as Go and Poker, in which agents need to compete against others. However, just like humans, real-world AI systems have to coordinate and communicate with other agents in cooperative partially observable environments as well. These settings commonly require participants to both interpret the actions of others and to act in a way that is informative when being interpreted. Those abilities are typically summarized as theory of mind and are seen as crucial for social interactions. In this paper we propose two different search techniques that can be applied to improve an arbitrary agreed-upon policy in a cooperative partially observable game. The first one, single-agent search, effectively converts the problem into a single agent setting by making all but one of the agents play according to the agreed-upon policy. In contrast, in multi-agent search all agents carry out the same common-knowledge search procedure whenever doing so is computationally feasible, and fall back to playing according to the agreed-upon policy otherwise. We prove that these search procedures are theoretically guaranteed to at least maintain the original performance of the agreed-upon policy (up to a bounded approximation error). In the benchmark challenge problem of Hanabi, our search technique greatly improves the performance of every agent we tested and when applied to a policy trained using RL achieves a new state-of-the-art score of 24.61 / 25 in the game, compared to a previous-best of 24.08 / 25.", "target": ["Facebookが多人数協力かつ不完全情報ゲームのHanabiで人間同等と評価される成果を達成(誰かを負かすゲームではないので、他メンバーからの評価になる)。他プレイヤーの戦略を固定した探索と、相手プレイヤーの行動から自分の手札確率を推定する探索2種を提案(ただ後者は演算が現実的な場合行う)。"]} {"source": "Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations.
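A toy version of the counting rule behind the confident-learning record above: per-class confidence thresholds are the average self-confidence of examples given that label, and an example is flagged when it confidently belongs to a different class. This is a simplified sketch (the open-source cleanlab release implements the full joint estimation), and it assumes every class appears at least once among the noisy labels.

```python
import numpy as np

def find_label_issues(pred_probs, noisy_labels):
    """Toy confident-learning rule: flag an example when its predicted
    probability for another class exceeds that class's confident threshold
    and that class differs from the given label."""
    noisy_labels = np.asarray(noisy_labels)
    n, k = pred_probs.shape
    # per-class threshold: average self-confidence of examples labelled with that class
    thresholds = np.array([pred_probs[noisy_labels == j, j].mean() for j in range(k)])
    flagged = np.zeros(n, dtype=bool)
    for i in range(n):
        above = np.where(pred_probs[i] >= thresholds)[0]
        if above.size:
            confident = above[np.argmax(pred_probs[i, above])]
            flagged[i] = confident != noisy_labels[i]
    return flagged
```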
Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.", "target": ["Normalizing Flowの理論、また応用がまとめられたサーベイ。1, 2でNormalizing Flowの仕組み、3, 4で実際どうFlowを組み立てるのかを解説しており、以後の節で応用例についても紹介している(研究の羅列でなく、応用原理が書いてあってとても良い)。"]} {"source": "Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many existing state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to limited data resources from downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the data of downstream tasks and forget the knowledge of the pre-trained model. To address the above issue in a more principled manner, we propose a new computational framework for robust and efficient fine-tuning for pre-trained language models. Specifically, our proposed framework contains two important ingredients: 1. Smoothness-inducing regularization, which effectively manages the capacity of the model; 2. Bregman proximal point optimization, which is a class of trust-region methods and can prevent knowledge forgetting. Our experiments demonstrate that our proposed method achieves the state-of-the-art performance on multiple NLP benchmarks.", "target": ["破壊的忘却を防ぎつつ、事前学習済み言語モデルを転移する手法の提案。既存は職人芸的に学習率をチューニングする手法が多かったが、本手法はシンプルなハイパラ設定のみで使える。似た入力は似た出力になるよう、またベースの出力と大きく変わらないよう正則化をかける。"]} {"source": "Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.", "target": ["回転やCutなどのAugmentationを、重みをつけてミックスする手法の提案。Augmentation前後のサンプルの分布距離(Jensen-Shannon)をlossに加えることで、ミックスの最適化/元のデータ分布と離れたサンプルが生成されないよう制約をかけている。"]} {"source": "We show that a variety of modern deep learning tasks exhibit a \"double-descent\" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.", "target": ["過学習の壁を越えてtest lossが下がる現象(double-descent #1353)を画像/言語双方の代表的なモデル(ResNet/Transformer)で検証した研究。あるモデル/学習方法で覚えきれるデータ数をeffective model complexity (EMC)とし壁の位置はこれに相関があるのではとしている(データ数を増やすと位置が右にずれる)。"]} {"source": "Training generative adversarial networks requires balancing of delicate adversarial dynamics. 
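The AugMix record above combines two ingredients that are easy to sketch: Dirichlet/Beta mixing of augmentation chains and a Jensen-Shannon consistency term between predictions on the clean and augmented views. The operation list below is a placeholder; the paper uses a specific set of operations disjoint from the test-time corruptions.

```python
import torch
import torch.nn.functional as F

def augmix(x, operations, width=3, depth=2, alpha=1.0):
    """Mix `width` randomly composed augmentation chains with Dirichlet weights,
    then blend the mixture with the original image using a Beta-sampled weight.
    `operations` is a list of callables image -> image (placeholders here)."""
    ws = torch.distributions.Dirichlet(torch.full((width,), alpha)).sample()
    m = torch.distributions.Beta(alpha, alpha).sample()
    mix = torch.zeros_like(x)
    for i in range(width):
        x_aug = x.clone()
        for _ in range(depth):
            op = operations[torch.randint(len(operations), (1,)).item()]
            x_aug = op(x_aug)
        mix = mix + ws[i] * x_aug
    return m * x + (1 - m) * mix

def jensen_shannon_consistency(logits_clean, logits_aug1, logits_aug2):
    """Jensen-Shannon divergence between predictions on the clean image and two
    AugMix views, used as an extra consistency loss on top of cross-entropy."""
    p = torch.stack([F.softmax(l, dim=1) for l in (logits_clean, logits_aug1, logits_aug2)])
    log_m = p.mean(dim=0).clamp(1e-7, 1.0).log()
    return sum(F.kl_div(log_m, pi, reduction='batchmean') for pi in p) / 3.0
```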
Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we improve CS-GAN with natural gradient-based latent optimisation and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet (128×128) dataset. Our model achieves an Inception Score (IS) of 148 and a Fréchet Inception Distance (FID) of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.", "target": ["潜在変数の最適化によりGAN性能を向上させ、ImageNetのSOTAを大きく更新。サンプリングした潜在変数zを使って出力したDの勾配を使って一度zを最適化し(z')、それを使ってDとGの学習を行う。自然勾配法で潜在変数を最適化すると効果が大きい。"]} {"source": "We develop BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, which we use as an acquisition function to select multiple informative points jointly for the task of deep Bayesian active learning. BatchBALD is a greedy linear-time (1 − 1/e)-approximate algorithm amenable to dynamic programming and efficient caching. We compare BatchBALD to the commonly used approach for batch data acquisition and find that the current approach acquires similar and redundant points, sometimes performing worse than randomly acquiring data. We finish by showing that, using BatchBALD to consider dependencies within an acquisition batch, we achieve new state-of-the-art performance on standard benchmarks, providing substantial data efficiency improvements in batch acquisition.", "target": ["パラメーターとデータとの相互情報量に基づきバッチを選択する手法の提案。既存の手法はデータ間の相互情報量を過剰に見積もっていたが、本手法ではそれを解消することでより理想的なサンプリングを実現している。"]} {"source": "The problem of verifying whether a textual hypothesis holds based on the given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation. However, existing studies are mainly restricted to dealing with unstructured evidence (e.g., natural language sentences and documents, news, etc.), while verification under structured evidence, such as tables, graphs, and databases, remains under-explored. This paper specifically aims to study fact verification given semi-structured data as evidence. To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED. TabFact is challenging since it involves both soft linguistic reasoning and hard symbolic reasoning. To address these reasoning challenges, we design two different models: Table-BERT and Latent Program Algorithm (LPA). Table-BERT leverages the state-of-the-art pre-trained language model to encode the linearized tables and statements into continuous vectors for verification. LPA parses statements into programs and executes them against the tables to obtain the returned binary value for verification. Both methods achieve similar accuracy but still lag far behind human performance. We also perform a comprehensive analysis to demonstrate great future opportunities.
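The latent-optimisation step from the GAN record above reduces, in its simplest form, to one gradient ascent step on z against the discriminator score before the usual updates; the sketch below is that plain-gradient variant (the strongest results described above use a natural-gradient step instead).

```python
import torch

def optimise_latent(G, D, z, step_size=0.9):
    """One latent-optimisation step: nudge z in the direction that raises the
    discriminator's score of G(z) before the usual generator/discriminator
    updates. Plain-gradient sketch; the natural-gradient variant rescales the
    step by (an approximation of) the local curvature."""
    z = z.clone().detach().requires_grad_(True)
    score = D(G(z)).sum()
    (grad,) = torch.autograd.grad(score, z)
    return (z + step_size * grad).detach()
```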
The data and code of the dataset are provided in \\url{this https URL}.", "target": ["表データに関するQAタスクの提案。単一行のみに関する質問と、複数行にまたがる質問の2種類が収録されている。回答に至るための演算を段階的に選択していく手法(LPA)と、テーブルの行要素をSEPで区切って一行ごとBERTの潜在表現にしてマッチをかける手法(Table-BERT)の2種類をベースラインとしている。"]} {"source": "The success of lottery ticket initializations (Frankle and Carbin, 2019) suggests that small, sparsified networks can be trained so long as the network is initialized appropriately. Unfortunately, finding these \"winning ticket\" initializations is computationally expensive. One potential solution is to reuse the same winning tickets across a variety of datasets and optimizers. However, the generality of winning ticket initializations remains unclear. Here, we attempt to answer this question by generating winning tickets for one training configuration (optimizer and dataset) and evaluating their performance on another configuration. Perhaps surprisingly, we found that, within the natural images domain, winning ticket initializations generalized across a variety of datasets, including Fashion MNIST, SVHN, CIFAR-10/100, ImageNet, and Places365, often achieving performance close to that of winning tickets generated on the same dataset. Moreover, winning tickets generated using larger datasets consistently transferred better than those generated using smaller datasets. We also found that winning ticket initializations generalize across optimizers with high performance. These results suggest that winning ticket initializations generated by sufficiently large datasets contain inductive biases generic to neural networks more broadly which improve training across many settings and provide hope for the development of better initialization methods.", "target": ["良い初期値のみがネットワークの性能を左右するという\"宝くじ理論\"において、良い初期値は、データセットを変えても良い初期値であるという結果。モデル、データセット、Optimizerを変えて実験しているが、けっこう転移できる。 CIFAR10の半分で選んだ初期値をImageNetやSVHN等でも活用できる。大きなデータセット、多様なクラスをもつデータセットの初期値の方がより性能が良い模様"]} {"source": "We present UDify, a multilingual multi-task model capable of accurately predicting universal part-of-speech, morphological features, lemmas, and dependency trees simultaneously for all 124 Universal Dependencies treebanks across 75 languages. By leveraging a multilingual BERT self-attention model pretrained on 104 languages, we found that fine-tuning it on all datasets concatenated together with simple softmax classifiers for each UD task can meet or exceed state-of-the-art UPOS, UFeats, Lemmas, (and especially) UAS, and LAS scores, without requiring any recurrent or language-specific components. We evaluate UDify for multilingual learning, showing that low-resource languages benefit the most from cross-linguistic annotations. We also evaluate for zero-shot learning, with results suggesting that multilingual training provides strong UD predictions even for languages that neither UDify nor BERT have ever been trained on.", "target": ["BERT(#959 )をベースにしたネットワークで75言語124の木構造(Universal Dependency)を学習させた研究。Word Pieceベースの多言語BERTの上にタスク個別のレイヤ(形態素推定、係り受け推定など)を乗せマルチタスクの転移学習を行う(この時ULMFiTを使用)。予測時は、複数のAttentionレイヤに重みを掛けて使用する"]} {"source": "A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition. Learning such a structured world model from raw sensory data remains a challenge. As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs). C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure. 
We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network. This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process. We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation. Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.", "target": ["知識グラフで使用されるTransEを、画像(強化学習の状態認識)+複数要素認識に対応させた研究。TransEではトリプルの構成要素をベクトルにして演算で関係を表現するが(吾輩+職業=猫等)、それを状態+行動=次状態という強化学習の文脈に読み替え画像=>ベクトル時に特徴マップを対応要素に分け処理している"]} {"source": "Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question. This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions. Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents. Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path. Experimental results show state-of-the-art results in three open-domain QA datasets, showcasing the effectiveness and robustness of our method. Notably, our method achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.", "target": ["複数ステップの推論が必要なQAを解く研究。記事間のリンクをもとにグラフを構築し、妥当な推論パスを選択した後回答を行う。具体的には、質問に該当しそうな記事をまず絞り込み(TF-IDF)、BERTを使い記事選択、選択した記事をRNNに入力し次の記事予測、と続ける。Beam Searchで有力なパスを絞り回答する"]} {"source": "Adversarial examples are commonly viewed as a threat to ConvNets. Here we present an opposite perspective: adversarial examples can be used to improve image recognition models if harnessed in the right manner. We propose AdvProp, an enhanced adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to our method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples. We show that AdvProp improves a wide range of models on various image recognition tasks and performs better when the models are bigger. For instance, by applying AdvProp to the latest EfficientNet-B7 [28] on ImageNet, we achieve significant improvements on ImageNet (+0.7%), ImageNet-C (+6.5%), ImageNet-A (+7.0%), Stylized-ImageNet (+4.8%). With an enhanced EfficientNet-B8, our method achieves the state-of-the-art 85.5% ImageNet top-1 accuracy without extra data. This result even surpasses the best model in [20] which is trained with 3.5B Instagram images (~3000X more than ImageNet) and ~9.4X more parameters. 
Models are available at this https URL.", "target": ["敵対的サンプルを使ってImageNetとノイズ付きのImageNetの精度を大幅に改善する研究。Adversarial Trainingにおいて、通常データと敵対的サンプルが通るBatch Normalizationを分けることを提案。ノイズ等がない通常のデータとノイズが乗っているデータはドメインが違うため、2つのデータを混合した分布で学習するのは不適ではないか、という考えに基づく。シンプルで応用範囲も広く、かなり強力。"]} {"source": "We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations. We employ two methods---taking the maximum attention weight and computing the maximum spanning tree---to extract implicit dependency relations from the attention weights of each layer/head, and compare them to the ground-truth Universal Dependency (UD) trees. We show that, for some UD relation types, there exist heads that can recover the dependency type significantly better than baselines on parsed English text, suggesting that some self-attention heads act as a proxy for syntactic structure. We also analyze BERT fine-tuned on two datasets---the syntax-oriented CoLA and the semantics-oriented MNLI---to investigate whether fine-tuning affects the patterns of their self-attention, but we do not observe substantial differences in the overall dependency relations extracted using our methods. Our results suggest that these models have some specialist attention heads that track individual dependency types, but no generalist head that performs holistic parsing significantly better than a trivial baseline, and that analyzing attention weights directly may not reveal much of the syntactic knowledge that BERT-style models are known to learn.", "target": ["BERTに係り受け関係がどれだけ学習されているかを、Attentionの重みから検証した研究。関係ごと(nsubjやobjなど)と、ROOTからのパース双方を検証している。BERTは学習をしているものの、シンプルなベースラインより若干いい程度。転移学習(MNLI/CoLAを使用)してもあまり精度の向上は見られず。"]} {"source": "In sequence to sequence learning, the self-attention mechanism proves to be highly effective, and achieves significant improvements in many tasks. However, the self-attention mechanism is not without its own flaws. Although self-attention can model extremely long dependencies, the attention in deep layers tends to overconcentrate on a single token, leading to insufficient use of local information and difficulty in representing long sequences. In this work, we explore parallel multi-scale representation learning on sequence data, striving to capture both long-range and short-range language structures. To this end, we propose the Parallel MUlti-Scale attEntion (MUSE) and MUSE-simple. MUSE-simple contains the basic idea of parallel multi-scale sequence representation learning, and it encodes the sequence in parallel, in terms of different scales with the help of self-attention, and pointwise transformation. MUSE builds on MUSE-simple and explores combining convolution and self-attention for learning sequence representations from a wider range of scales. We focus on machine translation and the proposed approach achieves substantial performance improvements over Transformer, especially on long sequences. More importantly, we find that although conceptually simple, its success in practice requires intricate considerations, and the multi-scale attention must build on a unified semantic space. Under a common setting, the proposed model achieves substantial performance and outperforms all previous models on three main machine translation tasks. In addition, MUSE has potential for accelerating inference due to its parallelism.
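The core of the AdvProp record above is a pair of BatchNorms sharing the same convolution: clean and adversarial mini-batches keep separate normalisation statistics. A minimal PyTorch block, assuming the caller knows which branch each batch belongs to (not the paper's code):

```python
import torch
import torch.nn as nn

class DualBNConv(nn.Module):
    """Conv block with separate BatchNorms for clean and adversarial batches:
    the two input distributions keep their own normalisation statistics while
    sharing the convolution weights."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn_clean = nn.BatchNorm2d(out_ch)
        self.bn_adv = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, adversarial=False):
        bn = self.bn_adv if adversarial else self.bn_clean
        return self.act(bn(self.conv(x)))
```

At inference time only the clean-branch statistics are used, so the adversarial BN acts purely as a training-time auxiliary.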
Code will be available at this https URL", "target": ["Self-Attentionはトークン単体を重視する傾向があり、このため長い系列の学習に弱い。そこで、単語周辺のコンテキストを考慮できるようPoint / Depth wiseのConvolutionを併用するという提案。翻訳におけるBLEUスコアの向上を確認、Point wiseの組み合わせだけでも効果あり。"]} {"source": "Modern object detectors rely heavily on rectangular bounding boxes, such as anchors, proposals and the final predictions, to represent objects at various recognition stages. The bounding box is convenient to use but provides only a coarse localization of objects and leads to a correspondingly coarse extraction of object features. In this paper, we present \\textbf{RepPoints} (representative points), a new finer representation of objects as a set of sample points useful for both localization and recognition. Given ground truth localization and recognition targets for training, RepPoints learn to automatically arrange themselves in a manner that bounds the spatial extent of an object and indicates semantically significant local areas. They furthermore do not require the use of anchors to sample a space of bounding boxes. We show that an anchor-free object detector based on RepPoints can be as effective as the state-of-the-art anchor-based detection methods, with 46.5 AP and 67.4 AP_{50} on the COCO test-dev detection benchmark, using ResNet-101 model. Code is available at this https URL.", "target": ["Bounding Box(BB)の代わりに複数の点を使って物体検知を行うRPDetを提案。Bounding Boxでは姿勢や物体の形を捉えられないことから、それらを表現できる点の組(Red Points)をBounding Boxの代替として使う意図。Red PointsからBounding Boxを算出し、正解のBounding Boxとの差分をもとに学習を行う。既存のアンカーフリーの手法を超えるだけでなく、CocoでSOTA。"]} {"source": "Texts like news, encyclopedias, and some social media strive for objectivity. Yet bias in the form of inappropriate subjectivity - introducing attitudes via framing, presupposing truth, and casting doubt - remains ubiquitous. This kind of bias erodes our collective trust and fuels social conflict. To address this issue, we introduce a novel testbed for natural language generation: automatically bringing inappropriately subjective text into a neutral point of view (\"neutralizing\" biased text). We also offer the first parallel corpus of biased language. The corpus contains 180,000 sentence pairs and originates from Wikipedia edits that removed various framings, presuppositions, and attitudes from biased sentences. Last, we propose two strong encoder-decoder baselines for the task. A straightforward yet opaque CONCURRENT system uses a BERT encoder to identify subjective words as part of the generation process. An interpretable and controllable MODULAR algorithm separates these steps, using (1) a BERT-based classifier to identify problematic words and (2) a novel join embedding through which the classifier can edit the hidden states of the encoder. Large-scale human evaluation across four domains (encyclopedias, news headlines, books, and political speeches) suggests that these algorithms are a first step towards the automatic identification and reduction of bias.", "target": ["主観的な記述を客観的な記述に編集する研究。Wikipediaの編集履歴から修正前(=主観(バイアスあり))と修正後を抽出し、パラレルコーパスとして学習する。BERTベースのEncoderで主観的な単語を予測し、確率分布と潜在表現を合算しDecoderに渡す。Decoderはそれに基づき編集するか素通しするか判断する。"]} {"source": "Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially. Prior art in the field has largely considered supervised or reinforcement learning tasks, and often assumes full knowledge of task labels and boundaries. 
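For the RepPoints record above, the piece that fits in a few lines is the conversion from a learned point set to a pseudo box for loss computation; the min-max conversion below is one of the conversion functions the paper considers.

```python
import torch

def points_to_pseudo_box(points):
    """Convert representative points (B, N, 2) in (x, y) order into axis-aligned
    pseudo boxes (B, 4) as (x1, y1, x2, y2) via the min-max conversion, so the
    point set can be supervised against ground-truth boxes."""
    xy_min = points.min(dim=1).values
    xy_max = points.max(dim=1).values
    return torch.cat([xy_min, xy_max], dim=1)
```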
In this work, we propose an approach (CURL) to tackle a more general problem that we will refer to as unsupervised continual learning. The focus is on learning representations without any knowledge about task identity, and we explore scenarios when there are abrupt changes between tasks, smooth transitions from one task to another, or even when the data is shuffled. The proposed approach performs task inference directly within the model, is able to dynamically expand to capture new concepts over its lifetime, and incorporates additional rehearsal-based techniques to deal with catastrophic forgetting. We demonstrate the efficacy of CURL in an unsupervised learning setting with MNIST and Omniglot, where the lack of labels ensures no information is leaked about the task. Further, we demonstrate strong performance compared to prior art in an i.i.d setting, or when adapting the technique to supervised tasks such as incremental class learning.", "target": ["タスクをまたぐ潜在表現を獲得させる試み。通常のVAEに、クラス識別を行う分布を併せて学習する。上層は新タスク(既存のモデルでは精度が低いサンプル)に適応したコンポーネントを動的に追加していき、(タスク共通と学習する)下層は破壊的忘却を防ぐため生成モデルを併用した学習を行う。"]} {"source": "Planning methods can solve temporally extended sequential decision making problems by composing simple behaviors. However, planning requires suitable abstractions for the states and transitions, which typically need to be designed by hand. In contrast, model-free reinforcement learning (RL) can acquire behaviors from low-level inputs directly, but often struggles with temporally extended tasks. Can we utilize reinforcement learning to automatically form the abstractions needed for planning, thus obtaining the best of both approaches? We show that goal-conditioned policies learned with RL can be incorporated into planning, so that a planner can focus on which states to reach, rather than how those states are reached. However, with complex state observations such as images, not all inputs represent valid states. We therefore also propose using a latent variable model to compactly represent the set of valid states for the planner, so that the policies provide an abstraction of actions, and the latent variable model provides an abstraction of states. We compare our method with planning-based and model-free methods and find that our method significantly outperforms prior work when evaluated on image-based robot navigation and manipulation tasks that require non-greedy, multi-staged behavior.", "target": ["タスクを解くときに、サブゴールを設定して学習する手法の提案。サブゴールは実画像ではなく潜在表現で表現される。画像表現はVAEで、フレーム間の到達困難性はTDM(ゴールを条件づけた価値関数)で計測し、ゴールに向けて容易に到達可能なパスをつなげる形でタスクを解く。"]} {"source": "Visual intelligence at the edge is becoming a growing necessity for low latency applications and situations where real-time decision is vital. Object detection, the first step in visual data analytics, has enjoyed significant improvements in terms of state-of-the-art accuracy due to the emergence of Convolutional Neural Networks (CNNs) and Deep Learning. However, such complex paradigms intrude increasing computational demands and hence prevent their deployment on resource-constrained devices. In this work, we propose a hierarchical framework that enables to detect objects in high-resolution video frames, and maintain the accuracy of state-of-the-art CNN-based object detectors while outperforming existing works in terms of processing speed when targeting a low-power embedded processor using an intelligent data reduction mechanism. 
Moreover, a use-case for pedestrian detection from an Unmanned Aerial Vehicle (UAV) is presented, showing the impact that the proposed approach has on sensitivity, average processing time and power consumption when implemented on different platforms. Using the proposed selection process, our framework manages to reduce the processed data by 100x, leading to under 4W power consumption on different edge devices.", "target": ["高解像度画像を処理するEdgeカメラ用に高速で省電力なEdgeNetを提案した。位置候補算出部、検出、Optical Flowによるトラッキングに分かれる。高解像度なのでRPN的なものを使った方が早いのかもしれない。Tiny-YoloV3より高速で正確。"]} {"source": "Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.", "target": ["環境の動作と戦略を同時かつEnd-to-Endに学習する手法の提案。モンテカルロ木探索がベースだが、シミュレーションは実環境でなくモデルベースで行う。実際の行動軌跡はReplay Bufferに格納し、そこからサンプルした軌跡(実行動)から学習を行う。囲碁・チェス・将棋でAlphaZero、AtariでR2D2を上回る。"]} {"source": "We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code has been made available at: this https URL", "target": ["異なるフレームレート(低速/高速)で処理した結果をマージして、動画認識を行う手法。低速側で空間情報、高速側で動作情報を把握することを目的としている(脳内のP細胞/M細胞をモデルにしている(Pは低速で詳細、Mは高速で単純))。Kineticsなどの動作認識のタスクでSOTA。"]} {"source": "Recent work has shown that data augmentation has the potential to significantly improve the generalization of deep learning models. Recently, automated augmentation strategies have led to state-of-the-art results in image classification and object detection. While these strategies were optimized for improving validation accuracy, they also led to state-of-the-art results in semi-supervised learning and improved robustness to common corruptions of images. An obstacle to a large-scale adoption of these methods is a separate search phase which increases the training complexity and may substantially increase the computational cost.
Additionally, due to the separate search phase, these approaches are unable to adjust the regularization strength based on model or dataset size. Automated augmentation policies are often found by training small models on small datasets and subsequently applied to train larger models. In this work, we remove both of these obstacles. RandAugment has a significantly reduced search space which allows it to be trained on the target task with no need for a separate proxy task. Furthermore, due to the parameterization, the regularization strength may be tailored to different model and dataset sizes. RandAugment can be used uniformly across different tasks and datasets and works out of the box, matching or surpassing all previous automated augmentation approaches on CIFAR-10/100, SVHN, and ImageNet. On the ImageNet dataset we achieve 85.0% accuracy, a 0.6% increase over the previous state-of-the-art and 1.0% increase over baseline augmentation. On object detection, RandAugment leads to 1.0-1.3% improvement over baseline augmentation, and is within 0.3% mAP of AutoAugment on COCO. Finally, due to its interpretable hyperparameter, RandAugment may be used to investigate the role of data augmentation with varying model and dataset size. Code is available online.", "target": ["Data Augmentationを自動で行うAuto Augmentを、学習と同時に行う手法。通常のAuto Augmentは探索コストが非常に高いため同時実行は困難だった。そこで探索空間を絞り込むことで(変換の数を絞る(N)、変換のパラメーターは1つに集約(最終的に画像に対して与えるゆがみ=Mのみ))、学習での同時探索を実現"]} {"source": "Recently, large-scale few-shot learning (FSL) becomes topical. It is discovered that, for a large-scale FSL problem with 1,000 classes in the source domain, a strong baseline emerges, that is, simply training a deep feature embedding model using the aggregated source classes and performing nearest neighbor (NN) search using the learned features on the target classes. The state-of-the-art largescale FSL methods struggle to beat this baseline, indicating intrinsic limitations on scalability. To overcome the challenge, we propose a novel large-scale FSL model by learning transferable visual features with the class hierarchy which encodes the semantic relations between source and target classes. Extensive experiments show that the proposed model significantly outperforms not only the NN baseline but also the state-of-the-art alternatives. Furthermore, we show that the proposed model can be easily extended to the large-scale zero-shot learning (ZSL) problem and also achieves the state-of-the-art results.", "target": ["Few-shotの画像認識で、クラス間の階層構造を利用した研究。クラス間に共通する上位クラスを認識できるようにすることで、分類クラス数が多くなってもメタ認識を活用できるようにする。上位クラスは、クラス名のWord2Vec上の類似度を使用する方法をとっている。"]} {"source": "We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. 
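The whole point of the RandAugment record above is that the augmentation policy collapses to two integers, N and M. A toy sketch with placeholder operations (the real operation set has roughly 14 image transforms, each parameterised by the shared magnitude):

```python
import numpy as np

# toy operation set; the real one includes rotate, shear, colour, posterize, ...
def _identity(img, m):  return img
def _flip_lr(img, m):   return img[:, ::-1]
def _brighten(img, m):  return np.clip(img * (1.0 + 0.05 * m), 0, 255)
def _rotate90(img, m):  return np.rot90(img)

OPS = [_identity, _flip_lr, _brighten, _rotate90]

def rand_augment(img, n=2, m=9, rng=np.random):
    """Apply n operations drawn uniformly at random, all sharing one magnitude m;
    the entire policy search space is just the pair (n, m)."""
    for op in rng.choice(OPS, size=n):
        img = op(img, m)
    return img
```

Because the search space is so small, (n, m) can simply be grid-searched on the target task, which is what removes the separate proxy-task search phase.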
This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.", "target": ["教師なしの表現学習で教師ありの表現(ImageNetの潜在表現等)を上回った研究。queryとマッチするデータを辞書から抜くというタスクで学習する。辞書は固定ではなく、queue処理でどんどん置き換わっていく。辞書のキーを作るencoderはquery側からコピーするが、momentumを使い徐々に変更していく。"]} {"source": "Variational autoencoders optimize an objective that combines a reconstruction loss (the distortion) and a KL term (the rate). The rate is an upper bound on the mutual information, which is often interpreted as a regularizer that controls the degree of compression. We here examine whether inclusion of the rate also acts as an inductive bias that improves generalization. We perform rate-distortion analyses that control the strength of the rate term, the network capacity, and the difficulty of the generalization problem. Decreasing the strength of the rate paradoxically improves generalization in most settings, and reducing the mutual information typically leads to underfitting. Moreover, we show that generalization continues to improve even after the mutual information saturates, indicating that the gap on the bound (i.e. the KL divergence relative to the inference marginal) affects generalization. This suggests that the standard Gaussian prior is not an inductive bias that typically aids generalization, prompting work to understand what choices of priors improve generalization in VAEs.", "target": ["VAEについて、モデルの複雑さと汎化性能の関係などを調べた研究。学習データとテストデータの距離が近い場合は複雑(層が深い)モデルのほうが性能が良いが(というかOverfitしても問題ない)、離れた場合は単純なモデルが上回る。ただKL距離の正則化をゆるめると、複雑なモデルのほうが性能が良くなる。"]} {"source": "Training complex machine learning models for prediction often requires a large amount of data that is not always readily available. Leveraging these external datasets from related but different sources is therefore an important task if good predictive models are to be built for deployment in settings where data can be rare. In this paper we propose a novel approach to the problem in which we use multiple GAN architectures to learn to translate from one dataset to another, thereby allowing us to effectively enlarge the target dataset, and therefore learn better predictive models than if we simply used the target dataset. We show the utility of such an approach, demonstrating that our method improves the prediction performance on the target domain over using just the target dataset and also show that our framework outperforms several other benchmarks on a collection of real-world medical datasets.", "target": ["複数データセットを統合して、データサイズを増やそうという試み。全体としては、GANを利用しデータセット間の「翻訳」を学習させるイメージ。ソースiから潜在表現Zを通し見分けがつかないターゲットjのデータを生成すると同時に、Zから復元したi、生成データからZに戻した場合の一致を取るように学習"]} {"source": "We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. 
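Two pieces of the MoCo record above fit in a short sketch: the momentum (moving-average) update of the key encoder and the InfoNCE loss against a queue of negative keys. Feature extraction, queue management, and the two augmented views are assumed to happen elsewhere; this is an illustrative reduction, not the released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """Key-encoder parameters are a slow exponential moving average of the
    query encoder's parameters."""
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1.0 - m)

def moco_loss(q, k, queue, temperature=0.07):
    """InfoNCE with one positive key (same image through the key encoder) and a
    queue of negatives from earlier batches. q, k: L2-normalised (B, D);
    queue: (K, D)."""
    l_pos = (q * k.detach()).sum(dim=1, keepdim=True)   # (B, 1)
    l_neg = q @ queue.t()                                # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```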
We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher. Models are available at this https URL. Code is available at this https URL.", "target": ["学習したモデルで疑似ラベルを付与する手法で、ImageNetのSOTAを更新した研究。EfficientNetをフルラベルのデータで学習した後、JFTのデータに疑似ラベルを付与する。フルラベルと疑似ラベルを混ぜて生徒モデルを学習する(生徒学習時はノイズを加える)。さらに教師と生徒を交代し学習していく。"]} {"source": "We present Emu, a system that semantically enhances multilingual sentence embeddings. Our framework fine-tunes pre-trained multilingual sentence embeddings using two main components: a semantic classifier and a language discriminator. The semantic classifier improves the semantic similarity of related sentences, whereas the language discriminator enhances the multilinguality of the embeddings via multilingual adversarial training. Our experimental results based on several language pairs show that our specialized embeddings outperform the state-of-the-art multilingual sentence embedding model on the task of cross-lingual intent classification using only monolingual labeled data.", "target": ["シンプルな手法で、多言語の単語分散表現をチューニングする手法(Emu)。Encoderを、同じ意味の文がまとまるように(類似度ベースのクラスタリング)、言語の識別ができないように(他言語のEncoder結果と見分けられないように)学習を行う。ラベル/パラレルコーパスが不要で、多くのケースで精度向上を記録"]} {"source": "Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings. In contrast to previous approaches, which rely on nearest neighbor retrieval with a hard threshold over cosine similarity, our proposed method accounts for the scale inconsistencies of this measure, considering the margin between a given sentence pair and its closest candidates instead. Our experiments show large improvements over existing methods. We outperform the best published results on the BUCC mining task and the UN reconstruction task by more than 10 F1 and 30 precision points, respectively. Filtering the English-German ParaCrawl corpus with our approach, we obtain 31.2 BLEU points on newstest2014, an improvement of more than one point over the best official filtered version.", "target": ["多言語の分散表現を得る手法(LASER)。Bi-directional LSTMのEncoder/Decoderが基本で、Encoderで処理した文はMax-poolをとり、Decode時に常に言語IDとともにconcatする。Encoderが言語独立の表現獲得を担当し、Decoderが言語固有の復元を担当する形で学習を行う。"]} {"source": "We present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors, and ensembles of detectors. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches under both white-box and black-box settings, and quantify transferability of attacks between datasets, object classes, and detector models. 
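The margin criterion from the bitext-mining record above (LASER) divides a candidate pair's cosine similarity by the average similarity of each sentence to its nearest neighbours; below is a brute-force NumPy version of the ratio-margin variant, written for clarity rather than the approximate-nearest-neighbour setup used at scale.

```python
import numpy as np

def margin_scores(x_emb, y_emb, k=4):
    """Ratio-margin scoring for bitext mining: cosine similarity of a candidate
    pair divided by the average similarity of each sentence to its k nearest
    neighbours, which down-weights 'hub' sentences with inflated raw cosines."""
    x = x_emb / np.linalg.norm(x_emb, axis=1, keepdims=True)
    y = y_emb / np.linalg.norm(y_emb, axis=1, keepdims=True)
    sim = x @ y.T                                        # (n_x, n_y) cosine matrix
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1)    # x_i's avg sim to its k-NN in y
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0)    # y_j's avg sim to its k-NN in x
    return sim / (0.5 * (knn_x[:, None] + knn_y[None, :]))
```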
Finally, we present a detailed study of physical world attacks using printed posters and wearable clothes, and rigorously quantify the performance of such attacks with different metrics.", "target": ["物体検知に対するAdversarial Attackの研究。妨害するパッチを作って、それが別のモデルに有効かどうかを検証した。割りと攻撃自体は転移可能で図のように服に印刷しても有効。"]} {"source": "Convolutional neural network (CNN), in particular the Unet, is a powerful method for medical image segmentation. To date Unet has demonstrated state-of-the-art performance in many complex medical image segmentation tasks, especially when the training and testing data share the same distribution (i.e. come from the same source domain). However, in clinical practice, medical images are acquired from different vendors and centers. The performance of a U-Net trained from a particular source domain, when transferred to a different target domain (e.g. different vendor, acquisition parameter), can drop unexpectedly. Collecting a large amount of annotation from each new domain to retrain the U-Net is expensive, tedious, and practically impossible. In this work, we proposed a generic framework to address this problem, consisting of (1) an unpaired generative adversarial network (GAN) for vendor-adaptation, and (2) a Unet for object segmentation. In the proposed Unet-GAN architecture, GAN learns from Unet at the feature level that is segmentation-specific. We used cardiac cine MRI as the example, with three major vendors (Philips, Siemens, and GE) as three domains, while the methodology can be extended to medical image segmentation in general. The proposed method showed significant improvement of the segmentation results across vendors. The proposed Unet-GAN provides an annotation-free solution to the cross-vendor medical image segmentation problem, potentially extending a trained deep learning model to multi-center and multi-vendor use in real clinical scenarios.", "target": ["MRIのベンダーが違うと左心室の領域分割が失敗するので、CycleGANを使ってドメイン(ベンダー)変換を行なった上で、領域分割をすると上手くいった。このようなDomain shiftはML界隈では一般的な問題なので、ドメインの数が少ない場合は汎用的な戦略かもしれない。"]} {"source": "Multi-head attention layers, as used in the Transformer neural sequence model, are a powerful alternative to RNNs for moving information across and between sequences. While training these layers is generally fast and simple, due to parallelizability across the length of the sequence, incremental inference (where such parallelization is impossible) is often slow, due to the memory-bandwidth cost of repeatedly loading the large \"keys\" and \"values\" tensors. We propose a variant called multi-query attention, where the keys and values are shared across all of the different attention \"heads\", greatly reducing the size of these tensors and hence the memory bandwidth requirements of incremental decoding. We verify experimentally that the resulting models can indeed be much faster to decode, and incur only minor quality degradation from the baseline.", "target": ["Transformer (#329 )のSelf-Attentionにおいて、各ヘッドでkeyとvalueを共有するMulti-query Attentionを提案。key valueのロード時間が大きいため、そこを共有することで高速化が可能になる。精度を落とさずに10倍以上の高速化を実現。"]} {"source": "Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network.
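Multi-query attention, as described above, keeps per-head queries but a single shared key/value head. A self-contained PyTorch sketch (no masking or incremental caching, which is where the real inference savings show up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiQueryAttention(nn.Module):
    """Multi-head queries with one shared key/value head: the K and V tensors
    shrink by a factor of n_heads, which is what reduces the memory bandwidth
    of incremental decoding."""

    def __init__(self, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.dk = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, self.dk)   # one head's worth, shared by all heads
        self.v_proj = nn.Linear(d_model, self.dk)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.h, self.dk).transpose(1, 2)    # (B, h, T, dk)
        k = self.k_proj(x).unsqueeze(1)                                    # (B, 1, T, dk)
        v = self.v_proj(x).unsqueeze(1)
        att = F.softmax(q @ k.transpose(-2, -1) / self.dk ** 0.5, dim=-1)  # (B, h, T, T)
        out = (att @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out(out)
```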
We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation. Code: this http URL.", "target": ["ネットワークの蒸留を行う際に、出力層一歩手前の層の表現を近づける手法。通常は教師側と生徒側の出力(labelに対する確率分布)を近づけるようにするが、これだと出力の元である表現構造をうまく学習できないのでは?という着想に基づく。12の蒸留ベンチマーク全てで既存手法を大きく上回る。"]} {"source": "State-of-the-art models in NLP are now predominantly based on deep neural networks that are opaque in terms of how they come to make predictions. This limitation has increased interest in designing more interpretable deep models for NLP that reveal the `reasoning' behind model outputs. But work in this direction has been conducted on different datasets and tasks with correspondingly unique aims and metrics; this makes it difficult to track progress. We propose the Evaluating Rationales And Simple English Reasoning (ERASER) benchmark to advance research on interpretable models in NLP. This benchmark comprises multiple datasets and tasks for which human annotations of \"rationales\" (supporting evidence) have been collected. We propose several metrics that aim to capture how well the rationales provided by models align with human rationales, and also how faithful these rationales are (i.e., the degree to which provided rationales influenced the corresponding predictions). Our hope is that releasing this benchmark facilitates progress on designing more interpretable NLP systems. The benchmark, code, and documentation are available at this https URL", "target": ["説明可能な自然言語処理モデルの構築にむけて、GLUEのように複数タスクで根拠の抽出能力を検証するベンチマーク(ERASER)が公開。MultiRCなどの既存タスクに根拠文アノテーションを行い、名前の通り根拠を削ることによる分類確率変動+全文/根拠のみで推定した場合の差分で根拠をとらえているか計測する"]} {"source": "Neural abstractive summarization models are able to generate summaries which have high overlap with human references. However, existing models are not optimized for factual correctness, a critical metric in real-world applications. In this work, we develop a general framework where we evaluate the factual correctness of a generated summary by fact-checking it automatically against its reference using an information extraction module. We further propose a training strategy which optimizes a neural summarization model with a factual correctness reward via reinforcement learning. We apply the proposed method to the summarization of radiology reports, where factual correctness is a key requirement. 
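The contrastive-transfer idea in the distillation record above can be illustrated with an in-batch InfoNCE objective between student and teacher penultimate features; note this is a simplified stand-in, since the paper's formulation draws a large number of negatives from a memory buffer rather than from the current batch.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_feat, teacher_feat, temperature=0.1):
    """Simplified contrastive transfer: the student's embedding of an image
    should be closer to the teacher's embedding of the same image than to the
    teacher's embeddings of the other images in the batch."""
    s = F.normalize(student_feat, dim=1)
    t = F.normalize(teacher_feat, dim=1).detach()
    logits = s @ t.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)
```

Unlike plain KL-based distillation, this objective depends on the relations between samples in the teacher's representation space, which is the structural knowledge the record above argues is otherwise ignored.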
On two separate datasets collected from hospitals, we show via both automatic and human evaluation that the proposed approach substantially improves the factual correctness and overall quality of outputs over a competitive neural summarization system, producing radiology summaries that approach the quality of human-authored ones.", "target": ["放射線診断書の「正確な」要約を行う研究。既存の要約は主に単語の一致度(ROUGE)で評価されるため、診断結果が異なっていても一致度が高ければ「良い要約」になってしまう。そこで、ROUGEに加え事実一致度(CheXpertで使われた放射線診断書専用パーサーで抽出)を報酬にした強化学習で学習を行っている。"]} {"source": "Development sets are impractical to obtain for real low-resource languages, since using all available data for training is often more effective. However, development sets are widely used in research papers that purport to deal with low-resource natural language processing (NLP). Here, we aim to answer the following questions: Does using a development set for early stopping in the low-resource setting influence results as compared to a more realistic alternative, where the number of training epochs is tuned on development languages? And does it lead to overestimation or underestimation of performance? We repeat multiple experiments from recent work on neural models for low-resource NLP and compare results for models obtained by training with and without development sets. On average over languages, absolute accuracy differs by up to 1.4%. However, for some languages and tasks, differences are as big as 18.0% accuracy. Our results highlight the importance of realistic experimental setups in the publication of low-resource NLP research results.", "target": ["低リソース言語はデータが少ないため、チューニング用のdevセットを大きくとると学習データが少なくなる=精度への影響が大きいのでは?という点を調べた研究(手法でなくdevの過多で性能に差が出てしまう)。他言語をdevとみなしearly-stopのepochを決める方法と比較すると大きな差が出る言語/タスクあり"]} {"source": "We present CoSQL, a corpus for building cross-domain, general-purpose database (DB) querying dialogue systems. It consists of 30k+ turns plus 10k+ annotated SQL queries, obtained from a Wizard-of-Oz (WOZ) collection of 3k dialogues querying 200 complex DBs spanning 138 domains. Each dialogue simulates a real-world DB query scenario with a crowd worker as a user exploring the DB and a SQL expert retrieving answers with SQL, clarifying ambiguous questions, or otherwise informing of unanswerable questions. When user questions are answerable by SQL, the expert describes the SQL and execution results to the user, hence maintaining a natural interaction flow. CoSQL introduces new challenges compared to existing task-oriented dialogue datasets: (1) the dialogue states are grounded in SQL, a domain-independent executable representation, instead of domain-specific slot-value pairs, and (2) because testing is done on unseen databases, success requires generalizing to new domains. CoSQL includes three tasks: SQL-grounded dialogue state tracking, response generation from query results, and user dialogue act prediction. We evaluate a set of strong baselines for each task and show that CoSQL presents significant challenges for future research. The dataset, baselines, and leaderboard will be released at this https URL.", "target": ["対話形式での情報抽出を目指したデータセットCoSQLが公開。対話を通じて相手が目的とするデータを抽出するためのSQLを構築していく。テストセットのDBはトレーニングデータに存在しない(テストは未知のDBで行われる)、あいまいな箇所は聞き返す必要ありなど、かなりチャレンジングなデータセット"]} {"source": "Most existing event extraction (EE) methods merely extract event arguments within the sentence scope.
However, such sentence-level EE methods struggle to handle soaring amounts of documents from emerging applications, such as finance, legislation, health, etc., where event arguments always scatter across different sentences, and even multiple such event mentions frequently co-exist in the same document. To address these challenges, we propose a novel end-to-end model, Doc2EDAG, which can generate an entity-based directed acyclic graph to fulfill the document-level EE (DEE) effectively. Moreover, we reformalize a DEE task with the no-trigger-words design to ease the document-level event labeling. To demonstrate the effectiveness of Doc2EDAG, we build a large-scale real-world dataset consisting of Chinese financial announcements with the challenges mentioned above. Extensive experiments with comprehensive analyses illustrate the superiority of Doc2EDAG over state-of-the-art methods. Data and codes can be found at this https URL.", "target": ["ドキュメントからのイベント抽出タスクを提案した研究。イベントの記載は複数文にまたがるため、文単体でなく文書全体からの抽出を対象にしている。具体的には文書から誰と・誰が・いつ・何した・結果は、といった情報をテーブル形式にまとめる。中国の企業文書データセット(ChFinAnn)を対象に実験"]} {"source": "This paper makes two contributions towards understanding how the hyperparameters of stochastic gradient descent affect the final training loss and test accuracy of neural networks. First, we argue that stochastic gradient descent exhibits two regimes with different behaviours; a noise dominated regime which typically arises for small or moderate batch sizes, and a curvature dominated regime which typically arises when the batch size is large. In the noise dominated regime, the optimal learning rate increases as the batch size rises, and the training loss and test accuracy are independent of batch size under a constant epoch budget. In the curvature dominated regime, the optimal learning rate is independent of batch size, and the training loss and test accuracy degrade as the batch size rises. We support these claims with experiments on a range of architectures including ResNets, LSTMs and autoencoders. We always perform a grid search over learning rates at all batch sizes. Second, we demonstrate that small or moderately large batch sizes continue to outperform very large batches on the test set, even when both models are trained for the same number of steps and reach similar training losses. Furthermore, when training Wide-ResNets on CIFAR-10 with a constant batch size of 64, the optimal learning rate to maximize the test accuracy only decays by a factor of 2 when the epoch budget is increased by a factor of 128, while the optimal learning rate to minimize the training loss decays by a factor of 16. These results confirm that the noise in stochastic gradients can introduce beneficial implicit regularization.", "target": ["SGDにおける、学習率とバッチサイズ、精度の関係を調べた研究。SGDはバッチサイズが小~中の時noise dominantであり、大の時curvature dominant(曲率支配的)としている。前者の時テスト精度はバッチサイズに依存しないが、後者は大きくなると下がる傾向があるという。"]} {"source": "A key component of successfully reading a passage of text is the ability to apply knowledge gained from the passage to a new situation. In order to facilitate progress on this kind of reading, we present ROPES, a challenging benchmark for reading comprehension targeting Reasoning Over Paragraph Effects in Situations. We target expository language describing causes and effects (e.g., “animal pollinators increase efficiency of fertilization in flowers”), as they have clear implications for new situations. 
A system is presented a background passage containing at least one of these relations, a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation. We collect background passages from science textbooks and Wikipedia that contain such phenomena, and ask crowd workers to author situations, questions, and answers, resulting in a 14,322 question dataset. We analyze the challenges of this task and evaluate the performance of state-of-the-art reading comprehension models. The best model performs only slightly better than randomly guessing an answer of the correct type, at 61.6% F1, well below the human performance of 89.0%.", "target": ["テキストに含まれる関係性の理解を問うデータセットROPESの公開。科学関係の教科書やWikipediaから抽出したテキストをベースにしており、形式は国語の問題だが内容は理科といった印象の問題が収録されている。RoBERTaでF1が61.6%だが、人間では89%と大きな開きあり。"]} {"source": "In this work, we present an empirical study of generation order for machine translation. Building on recent advances in insertion-based modeling, we first introduce a soft order-reward framework that enables us to train models to follow arbitrary oracle generation policies. We then make use of this framework to explore a large variety of generation orders, including uninformed orders, location-based orders, frequency-based orders, content-based orders, and model-based orders. Curiously, we find that for the WMT'14 English \\to German translation task, order does not have a substantial impact on output quality, with unintuitive orderings such as alphabetical and shortest-first matching the performance of a standard Transformer. This demonstrates that traditional left-to-right generation is not strictly necessary to achieve high performance. On the other hand, results on the WMT'18 English \\to Chinese task tend to vary more widely, suggesting that translation for less well-aligned language pairs may be more sensitive to generation order.", "target": ["翻訳における生成順序の影響を調べた研究(単にleft to rightでなく、前置詞などの機能語を後に回して生成など様々なバリエーションがある)。Transformerをベースに挿入箇所(slot)の表現を作成し、tokenと挿入位置分布双方を予測するモデルを作成し検証したが、順序変更で大きな差はなかった。"]} {"source": "Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale. We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work. Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator. We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Fréchet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600.", "target": ["条件付き動画生成タスクのためのGANを提案.高解像度で長い動画(48x256x256)の生成に成功し,UCF-101データセットでSoTAを達成.提案手法ではBigGANベースのアーキテクチャを採用し,計算量の削減を目的に2つのdiscriminatorを提案している.実験ではUCF101よりもさらに大きいKinetics-600データセットでの実験も行い,様々な動画長・画像サイズでベースラインを提示している."]} {"source": "Generative Adversarial Networks (GANs) are known to be difficult to train, despite considerable research effort. 
Several regularization techniques for stabilizing training have been proposed, but they introduce non-trivial computational overheads and interact poorly with existing techniques like spectral normalization. In this work, we propose a simple, effective training stabilizer based on the notion of consistency regularization---a popular technique in the semi-supervised learning literature. In particular, we augment data passing into the GAN discriminator and penalize the sensitivity of the discriminator to these augmentations. We conduct a series of experiments to demonstrate that consistency regularization works effectively with spectral normalization and various GAN architectures, loss functions and optimizer settings. Our method achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA. Moreover, our consistency regularized GAN (CR-GAN) improves state-of-the-art FID scores for conditional generation from 14.73 to 11.48 on CIFAR-10 and from 8.73 to 6.66 on ImageNet-2012.", "target": ["画像に変換を加える前後でDiscriminatorの出力値が変化しないような制約をかけるConsistency Regularizationにより、GANの学習を安定させる。実装もシンプルながら効果も強力。"]} {"source": "Generative adversarial networks (GANs) are a powerful approach to unsupervised learning. They have achieved state-of-the-art performance in the image domain. However, GANs are limited in two ways. They often learn distributions with low support---a phenomenon known as mode collapse---and they do not guarantee the existence of a probability density, which makes evaluating generalization using predictive log-likelihood impossible. In this paper, we develop the prescribed GAN (PresGAN) to address these shortcomings. PresGANs add noise to the output of a density network and optimize an entropy-regularized adversarial loss. The added noise renders tractable approximations of the predictive log-likelihood and stabilizes the training procedure. The entropy regularizer encourages PresGANs to capture all the modes of the data distribution. Fitting PresGANs involves computing the intractable gradients of the entropy regularization term; PresGANs sidestep this intractability using unbiased stochastic estimates. We evaluate PresGANs on several datasets and found they mitigate mode collapse and generate samples with high perceptual quality. We further found that PresGANs reduce the gap in performance in terms of predictive log-likelihood between traditional GANs and variational autoencoders (VAEs).", "target": ["Mode崩壊を防ぐPresGANsを提案。入力/生成画像にノイズを加えることで密度分布を保証し、エントロピー正則化をロス関数に導入。モード崩壊を起こしやすいクラス間データインバランス環境下でも比較的安定している。また、密度分布により生成画像の対数尤度による評価も可能にした"]} {"source": "This paper considers the problem of efficient exploration of unseen environments, a key challenge in AI. We propose a `learning to explore' framework where we learn a policy from a distribution of environments. At test time, presented with an unseen environment from the same distribution, the policy aims to generalize the exploration strategy to visit the maximum number of unique states in a limited number of steps. We particularly focus on environments with graph-structured state-spaces that are encountered in many important real-world applications like software testing and map building. We formulate this task as a reinforcement learning problem where the `exploration' agent is rewarded for transitioning to previously unseen environment states and employ a graph-structured memory to encode the agent's past trajectory.
Experimental results demonstrate that our approach is extremely effective for exploration of spatial maps; and when applied on the challenging problems of coverage-guided software-testing of domain-specific programs and real-world mobile applications, it outperforms methods that have been hand-engineered by human experts.", "target": ["未知の環境を探索するための手法を学習させる試み。環境=グラフと見なし、今まで訪れたことがないノードを発見することが報酬に繋がるという強化学習の枠組みで実装を行っている(報酬はグラフ全体に対するカバレッジとなる)。ノード特徴の取得にはGated GCNを使用している。"]} {"source": "Generative adversarial networks (GANs) are a powerful approach to unsupervised learning. They have achieved state-of-the-art performance in the image domain. However, GANs are limited in two ways. They often learn distributions with low support---a phenomenon known as mode collapse---and they do not guarantee the existence of a probability density, which makes evaluating generalization using predictive log-likelihood impossible. In this paper, we develop the prescribed GAN (PresGAN) to address these shortcomings. PresGANs add noise to the output of a density network and optimize an entropy-regularized adversarial loss. The added noise renders tractable approximations of the predictive log-likelihood and stabilizes the training procedure. The entropy regularizer encourages PresGANs to capture all the modes of the data distribution. Fitting PresGANs involves computing the intractable gradients of the entropy regularization term; PresGANs sidestep this intractability using unbiased stochastic estimates. We evaluate PresGANs on several datasets and found they mitigate mode collapse and generate samples with high perceptual quality. We further found that PresGANs reduce the gap in performance in terms of predictive log-likelihood between traditional GANs and variational autoencoders (VAEs).", "target": ["模倣学習で適切な「読み飛ばし」を行う研究。各フレームの行動を完全にまねするのではなく、Meta policyにより適切なフレーム(SubGoal)を選択し、SubGoal間の行動系列をLow-level policyで生成することで学習を行う。単純にランダムに読み飛ばすより良好な結果。"]} {"source": "We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-the-art models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.", "target": ["文関係推定(NLI)のタスクで、人を使ってAdversarial Trainingを行った研究。人がデータを作成し、それに対するモデルの推定結果が間違っていた(かつ人が作成したデータが適切)だった場合、学習データに加えられ再学習を行う。BERT/XLNet/RoBERTaの3つを使って実験しており、特徴の差異が見られる結果。"]} {"source": "Deep generative models for graphs have shown great promise in the area of drug design, but have so far found little application beyond generating graph-structured molecules. In this work, we demonstrate a proof of concept for the challenging task of road network extraction from image data. This task can be framed as image-conditioned graph generation, for which we develop the Generative Graph Transformer (GGT), a deep autoregressive model that makes use of attention mechanisms for image conditioning and the recurrent generation of graphs. We benchmark GGT on the application of road network extraction from semantic segmentation data. 
For this, we introduce the Toulouse Road Network dataset, based on real-world publicly-available data. We further propose the StreetMover distance: a metric based on the Sinkhorn distance for effectively evaluating the quality of road network generation. The code and dataset are publicly available.", "target": ["衛星画像上の道路をグラフで認識する研究。描画ソフトで点を打ちながら線を引くように、ノードを生成/接続しながら道路をなぞっていく。CNNによる画像特徴と生成済みのノード特徴(座標)、接続情報を結合し、TransformerでSelf-Attentionをかけながら生成を行う(ただ精度貢献は僅かな印象)。"]} {"source": "Much of model-based reinforcement learning involves learning a model of an agent's world, and training an agent to leverage this model to perform a task more efficiently. While these models are demonstrably useful for agents, every naturally occurring model of the world of which we are aware---e.g., a brain---arose as the byproduct of competing evolutionary pressures for survival, not minimization of a supervised forward-predictive loss via gradient descent. That useful models can arise out of the messy and slow optimization process of evolution suggests that forward-predictive modeling can arise as a side-effect of optimization under the right circumstances. Crucially, this optimization process need not explicitly be a forward-predictive loss. In this work, we introduce a modification to traditional reinforcement learning which we call observational dropout, whereby we limit the agents ability to observe the real environment at each timestep. In doing so, we can coerce an agent into learning a world model to fill in the observation gaps during reinforcement learning. We show that the emerged world model, while not explicitly trained to predict the future, can help the agent learn key skills required to perform well in its environment. Videos of our results available at this https URL", "target": ["モデルベースの強化学習では実環境と同等のモデルを構築し学習することが一つのゴールになるが、そもそも学習に完全なモデルは必要ないのではという示唆。エージェントの観測を部分的に落とし(observational dropout)構築したモデルは部分的な予測しかできなかったが、学習に十分有用なことを確認。"]} {"source": "Model-agnostic meta-learners aim to acquire meta-learned parameters from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. With the flexibility in the choice of models, those frameworks demonstrate appealing performance on a variety of domains such as few-shot image classification and reinforcement learning. However, one important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify the mode of tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML (MMAML) framework, which is able to modulate its meta-learned prior parameters according to the identified mode, allowing more efficient fast adaptation. We evaluate the proposed model on a diverse set of few-shot learning tasks, including regression, image classification, and reinforcement learning. 
The results not only demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks but also show that training on a multimodal distribution can produce an improvement over unimodal training.", "target": ["メタラーニングで、たった一つの初期値をすべてのタスクに使うのでなく、タスクに応じた初期値を生成しようという研究。具体的には、タスクの特徴を表すベクトルを何件かのサンプルから作成し、そのベクトルを基にパラメーターを生成、学習するという形態。"]} {"source": "Deep neural networks (DNN) are quickly becoming the de facto standard modeling method for many natural language generation (NLG) tasks. In order for such models to truly be useful, they must be capable of correctly generating utterances for novel meaning representations (MRs) at test time. In practice, even sophisticated DNNs with various forms of semantic control frequently fail to generate utterances faithful to the input MR. In this paper, we propose an architecture agnostic self-training method to sample novel MR/text utterance pairs to augment the original training data. Remarkably, after training on the augmented data, even simple encoder-decoder models with greedy decoding are capable of generating semantically correct utterances that are as good as state-of-the-art outputs in both automatic and human evaluations of quality.", "target": ["フレームベースの対話で、意図/属性に即した発話文を生成させる研究(意図/属性=>[推薦する]/[映画=君の名は]など)。データセットから学習した発話生成モデルの文をParseして意図/属性を取り出し、それをデータに加えるというBack Parseとも呼べる手法を取っている。"]} {"source": "Sliding-window object detectors that generate bounding-box object predictions over a dense, regular grid have advanced rapidly and proven popular. In contrast, modern instance segmentation approaches are dominated by methods that first detect object bounding boxes, and then crop and segment these regions, as popularized by Mask R-CNN. In this work, we investigate the paradigm of dense sliding-window instance segmentation, which is surprisingly under-explored. Our core observation is that this task is fundamentally different than other dense prediction tasks such as semantic segmentation or bounding-box object detection, as the output at every spatial location is itself a geometric structure with its own spatial dimensions. To formalize this, we treat dense instance segmentation as a prediction task over 4D tensors and present a general framework called TensorMask that explicitly captures this geometry and enables novel operators on 4D tensors. We demonstrate that the tensor view leads to large gains over baselines that ignore this structure, and leads to results comparable to Mask R-CNN. These promising results suggest that TensorMask can serve as a foundation for novel advances in dense mask prediction and a more complete understanding of the task. Code will be made available.", "target": ["物体検出で、Sliding WindowのWindow表現を3次元(C, H, W)から4次元(V, U, H, W)に拡張したという研究。3次元の場合C=V x U (対象ピクセルの周辺ピクセル(V x U個)を縦に重ねる)となるが、4次元の場合は対象ピクセルを含むV x Uの「領域」を重ねる(各領域が2次元のサイズを持つため総計4次元となる)。"]} {"source": "Currently used metrics for assessing summarization algorithms do not account for whether summaries are factually consistent with source documents. We propose a weakly-supervised, model-based approach for verifying factual consistency and identifying conflicts between source documents and a generated summary. Training data is generated by applying a series of rule-based transformations to the sentences of source documents.
The factual consistency model is then trained jointly for three tasks: 1) identify whether sentences remain factually consistent after transformation, 2) extract a span in the source documents to support the consistency prediction, 3) extract a span in the summary sentence that is inconsistent if one exists. Transferring this model to summaries generated by several state-of-the art models reveals that this highly scalable approach substantially outperforms previous models, including those trained with strong supervision using standard datasets for natural language inference and fact checking. Additionally, human evaluation shows that the auxiliary span extraction tasks provide useful assistance in the process of verifying factual consistency.", "target": ["抽象型要約の事実チェックを行う研究。現在モデルから生成される抽象要約の30%は元文と相反するケースがあるという。これを検出するためBERTをベースにルールで生成した事実一致/不一致のデータでFine Tuning(弱教師学習)を行う。ルールは否定形挿入/代名詞置換以外にBack Translationも行っている"]} {"source": "Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.", "target": ["機械学習モデルのSpecを記述するためのフォーマットを検討した研究。モデルの概要/用途/既知の注意点(バイアスなど)/評価指標とその値/学習・評価データといった項目が列挙されている。モデル管理を行う際の参考になるかも。"]} {"source": "To achieve a successful grasp, gripper attributes such as its geometry and kinematics play a role as important as the object geometry. The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a certain robot hand. We propose UniGrasp, an efficient data-driven grasp synthesis method that considers both the object geometry and gripper attributes as inputs. UniGrasp is based on a novel deep neural network architecture that selects sets of contact points from the input point cloud of the object. 
The proposed model is trained on a large dataset to produce contact points that are in force closure and reachable by the robot hand. By using contact points as output, we can transfer between a diverse set of multifingered robotic hands. Our model produces over 90% valid contact points in Top10 predictions in simulation and more than 90% successful grasps in real world experiments for various known two-fingered and three-fingered grippers. Our model also achieves 93%, 83% and 90% successful grasps in real world experiments for an unseen two-fingered gripper and two unseen multi-fingered anthropomorphic robotic hands.", "target": ["多様なロボットハンドに対応できるモデルを構築する試み。様々なオブジェクトを/様々なロボットハンドで/つかむことを目指している。ロボットハンドの特徴をポイントクラウドから学習するが(Auto Encoder)、ジョイント、ジョイントの最大/最小角を得るためmean/max/min 3種類のPoolingを使用している。"]} {"source": "Video-to-video synthesis (vid2vid) aims at converting an input semantic video, such as videos of human poses or segmentation masks, to an output photorealistic video. While the state-of-the-art of vid2vid has advanced significantly, existing approaches share two major limitations. First, they are data-hungry. Numerous images of a target human subject or a scene are required for training. Second, a learned model has limited generalization capability. A pose-to-human vid2vid model can only synthesize poses of the single person in the training set. It does not generalize to other humans that are not in the training set. To address the limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging few example images of the target at test time. Our model achieves this few-shot generalization capability via a novel network weight generation module utilizing an attention mechanism. We conduct extensive experimental validations with comparisons to strong baselines using several large-scale video datasets including human-dancing videos, talking-head videos, and street-scene videos. The experimental results verify the effectiveness of the proposed framework in addressing the two limitations of existing vid2vid approaches.", "target": ["Few-shotでVideo2Videoを行う研究。変換を行うモデルは、1.入力(フレーム+条件(ポーズなど))からの特徴抽出(H)、2.前フレームとの差分からの特徴抽出(W)、3.入力+差分の合成(M)の3要素からなるが、1で使用する重みを動的に生成することで様々な入力(=学習データにない入力)に対応できるようにしている"]} {"source": "Robot learning has emerged as a promising tool for taming the complexity and diversity of the real world. Methods based on high-capacity models, such as deep networks, hold the promise of providing effective generalization to a wide range of open-world environments. However, these same methods typically require large amounts of diverse training data to generalize effectively. In contrast, most robotic learning experiments are small-scale, single-domain, and single-robot. This leads to a frequent tension in robotic learning: how can we learn generalizable robotic controllers without having to collect impractically large amounts of data for each separate experiment? In this paper, we propose RoboNet, an open database for sharing robotic experience, which provides an initial pool of 15 million video frames, from 7 different robot platforms, and study how it can be used to learn generalizable models for vision-based robotic manipulation. We combine the dataset with two different learning algorithms: visual foresight, which uses forward video prediction models, and supervised inverse models. 
Our experiments test the learned algorithms' ability to work across new objects, new tasks, new scenes, new camera viewpoints, new grippers, or even entirely new robots. In our final experiment, we find that by pre-training on RoboNet and fine-tuning on data from a held-out Franka or Kuka robot, we can exceed the performance of a robot-specific training approach that uses 4x-20x more data. For videos and data, see the project webpage: https://www.robonet.wiki/", "target": ["ロボット操作に強化学習を使う際、学習時間が大きな課題となる。そこで、7種類のロボットで記録した1500万のビデオフレームのデータセットを公開。様々なロボットの軌跡データから表現学習(次フレーム予測/過去フレーム予測)を行うことで、Zero/Few shotが可能な転移性能の高いモデルを構築できる。"]} {"source": "The utility of linguistic annotation in neural machine translation seemed to have been established in past papers. The experiments were however limited to recurrent sequence-to-sequence architectures and relatively small data settings. We focus on the state-of-the-art Transformer model and use comparably larger corpora. Specifically, we try to promote the knowledge of source-side syntax using multi-task learning either through simple data manipulation techniques or through a dedicated model component. In particular, we train one of Transformer attention heads to produce source-side dependency tree. Overall, our results cast some doubt on the utility of multi-task setups with linguistic information. The data manipulation techniques, recommended in previous works, prove ineffective in large data settings. The treatment of self-attention as dependencies seems much more promising: it helps in translation and reveals that Transformer model can very easily grasp the syntactic structure. An important but curious result is, however, that identical gains are obtained by using trivial \"linear trees\" instead of true dependencies. The reason for the gain thus may not be coming from the added linguistic knowledge but from some simpler regularizing effect we induced on self-attention matrices.", "target": ["翻訳で依存構造を学習させることの効果を検証した研究。少数データの場合(マルチタスクで)依存構造を学習させることは有効だが、学習データのサイズが大きい場合顕著な効果は見られなかったという結果。Transformerモデルでは、実際の依存構造とダミーの構造を入力した場合とで精度の変化があまりない"]} {"source": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.", "target": ["分類や要約、関係推定といった自然言語処理のタスクを全て言語モデル形式で解こうという研究。Transformerをベースに様々なモデル形式(Attentionのかけ方/構成)、目的関数などの組み合わせを試し、結果としてGLUE/SuperGLUEなどでSOTAを達成。"]} {"source": "Condescending language use is caustic; it can bring dialogues to an end and bifurcate communities. Thus, systems for condescension detection could have a large positive impact.
A challenge here is that condescension is often impossible to detect from isolated utterances, as it depends on the discourse and social context. To address this, we present TalkDown, a new labeled dataset of condescending linguistic acts in context. We show that extending a language-only model with representations of the discourse improves performance, and we motivate techniques for dealing with the low rates of condescension overall. We also use our model to estimate condescension rates in various online communities and relate these differences to differing community norms.", "target": ["会話中の見下した発言を検出する研究。Redditから抽出したコメントを使用しており、明確に前の発言を引用して指摘しているものを厳選している。検出は、単に発話だけよりもコンテキストを使用したほうが正確だったという結果(ベースラインはBERTをデータセットでFine Tuneして使用している)。"]} {"source": "Parsers are available for only a handful of the world's languages, since they require lots of training data. How far can we get with just a small amount of training data? We systematically compare a set of simple strategies for improving low-resource parsers: data augmentation, which has not been tested before; cross-lingual training; and transliteration. Experimenting on three typologically diverse low-resource languages---North Sámi, Galician, and Kazakh---we find that (1) when only the low-resource treebank is available, data augmentation is very helpful; (2) when a related high-resource treebank is available, cross-lingual training is helpful and complements data augmentation; and (3) when the high-resource treebank uses a different writing system, transliteration into a shared orthographic space is also very helpful.", "target": ["低リソースの言語で係り受け解析を行うのに有効な手法を調査した研究。ある単語(Dependency)を消すcropや係り受け構造を維持したまま語順を変えるrotate、同品詞の単語で置き換えるnonceといったData Augmentationの手法が有効であることを確認。また使用文字が異なる場合その置き換えも有効。"]} {"source": "Pretrained language models are promising particularly for low-resource languages as they only require unlabelled data. However, training existing models requires huge amounts of compute, while pretrained cross-lingual models often underperform on low-resource languages. We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language. In addition, we propose a zero-shot method using an existing pretrained cross-lingual model. We evaluate our methods on two widely used cross-lingual classification datasets where they outperform models pretrained on orders of magnitude more data and compute. We release all models and code.", "target": ["言語モデルを使用して、クラス分類の他言語適用を行った研究。言語モデルの学習はTargetの言語で行い(Wikipedia/分類対象文書の2段階)、(リソースが豊富な)他言語でFine-Tuningされた分類器で生成した疑似ラベルで分類を学習、Zero-shotでTargetの分類ができるようにするという構成。"]} {"source": "While we would like agents that can coordinate with humans, current algorithms such as self-play and population-based training create agents that can coordinate with themselves. Agents that assume their partner to be optimal or similar to them can converge to coordination protocols that fail to understand and be understood by humans. To demonstrate this, we introduce a simple environment that requires challenging coordination, based on the popular game Overcooked, and learn a simple model that mimics human play. We evaluate the performance of agents trained via self-play and population-based training. These agents perform very well when paired with themselves, but when paired with our human model, they are significantly worse than agents designed to play with the human model.
An experiment with a planning algorithm yields the same conclusion, though only when the human-aware planner is given the exact human model that it is playing with. A user study with real humans shows this pattern as well, though less strongly. Qualitatively, we find that the gains come from having the agent adapt to the human's gameplay. Given this result, we suggest several approaches for designing agents that learn about humans in order to better coordinate with them. Code is available at this https URL.", "target": ["Self-Playが協調学習に有効なのか調査した研究。高度な協調は戦略の相互理解(役割分担)が必要であり、相手が理解しない場合局所最適な戦略に負ける可能性があるという指摘。Self Playで学習したエージェント(仮説としては局所最適)を人と組ませるとパフォーマンスが悪化することを確認(=役割分担がNG)"]} {"source": "This paper presents a study of semi-supervised learning with large convolutional networks. We propose a pipeline, based on a teacher/student paradigm, that leverages a large collection of unlabelled images (up to 1 billion). Our main goal is to improve the performance for a given target architecture, like ResNet-50 or ResNext. We provide an extensive analysis of the success factors of our approach, which leads us to formulate some recommendations to produce high-accuracy models for image classification with semi-supervised learning. As a result, our approach brings important gains to standard architectures for image, video and fine-grained classification. For instance, by leveraging one billion unlabelled images, our learned vanilla ResNet-50 achieves 81.2% top-1 accuracy on the ImageNet benchmark.", "target": ["粗いラベル(ルールでつけたラベルや明示的でないラベル(ハッシュタグetc))を使用するWeakly supervisedと、学習済みモデルで疑似ラベルをつけて学習するSemi-supervisedを融合させた研究。Weakで学習+少数のlabelでFine tuneしたTeacherで疑似ラベルを付けて、StudentのSemi-superviseを行うという構成"]} {"source": "This paper presents the results of the WMT19 Metrics Shared Task. Participants were asked to score the outputs of the translations systems competing in the WMT19 News Translation Task with automatic metrics. 13 research groups submitted 24 metrics, 10 of which are reference-less “metrics” and constitute submissions to the joint task with WMT19 Quality Estimation Task, “QE as a Metric”. In addition, we computed 11 baseline metrics, with 8 commonly applied baselines (BLEU, SentBLEU, NIST, WER, PER, TER, CDER, and chrF) and 3 reimplementations (chrF+, sacreBLEU-BLEU, and sacreBLEU-chrF). Metrics were evaluated on the system level, how well a given metric correlates with the WMT19 official manual ranking, and segment level, how well the metric correlates with human judgements of segment quality. This year, we use direct assessment (DA) as our only form of manual evaluation.", "target": ["WMT19における機械翻訳システムの評価に関するレポート。人手評価を行う際にトップクラスの機械翻訳システムばかりを使うと、どの出力がいいか判別できず人手評価と自動評価の相関が低くなるという結果。バラエティを持たせておくときちんと相関する。"]} {"source": "It is challenging for current one-step retrieve-and-read question answering (QA) systems to answer questions like \"Which novel by the author of 'Armada' will be adapted as a feature film by Steven Spielberg?\" because the question seldom contains retrievable clues about the missing entity (here, the author). Answering such a question requires multi-hop reasoning where one must gather information about the missing entity (or facts) to proceed with further reasoning. We present GoldEn (Gold Entity) Retriever, which iterates between reading context and retrieving more supporting documents to answer open-domain multi-hop questions. 
Instead of using opaque and computationally expensive neural retrieval models, GoldEn Retriever generates natural language search queries given the question and available context, and leverages off-the-shelf information retrieval systems to query for missing entities. This allows GoldEn Retriever to scale up efficiently for open-domain multi-hop reasoning while maintaining interpretability. We evaluate GoldEn Retriever on the recently proposed open-domain multi-hop QA dataset, HotpotQA, and demonstrate that it outperforms the best previously published model despite not using pretrained language models such as BERT.", "target": ["「アーティストAがXの次に出したアルバムは?」など多段階の情報収集が必要な質問に回答する研究。段階的にクエリを発行しQAとIRを繰り返す手法を採用している(最終的に、IRで収集した情報を総合して回答する)。既存の情報とのオーバーラップが多い=関連する(知識グラフ的に接続有)とし検索を行っていく"]} {"source": "A popular heuristic for improved performance in Generative adversarial networks (GANs) is to use some form of gradient penalty on the discriminator. This gradient penalty was originally motivated by a Wasserstein distance formulation. However, the use of gradient penalty in other GAN formulations is not well motivated. We present a unifying framework of expected margin maximization and show that a wide range of gradient-penalized GANs (e.g., Wasserstein, Standard, Least-Squares, and Hinge GANs) can be derived from this framework. Our results imply that employing gradient penalties induces a large-margin classifier (thus, a large-margin discriminator in GANs). We describe how expected margin maximization helps reduce vanishing gradients at fake (generated) samples, a known problem in GANs. From this framework, we derive a new L^\\infty gradient norm penalty with Hinge loss which generally produces equally good (or better) generated output in GANs than L^2-norm penalties (based on the Fréchet Inception Distance).", "target": ["Maxmarginの手法を抽象化して、SVM/GANを同じ枠組みでとらえようという研究。確率分布の距離を使用するGANが 学習効果にすぐれることを示し、発展させた手法としてL∞ gradient normを使用する方法を提案している。"]} {"source": "We demonstrate that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot. This is made possible by two key components: a novel algorithm, which we call automatic domain randomization (ADR) and a robot platform built for machine learning. ADR automatically generates a distribution over randomized environments of ever-increasing difficulty. Control policies and vision state estimators trained with ADR exhibit vastly improved sim2real transfer. For control policies, memory-augmented models trained on an ADR-generated distribution of environments show clear signs of emergent meta-learning at test time. The combination of ADR with our custom robot platform allows us to solve a Rubik's cube with a humanoid robot hand, which involves both control and state estimation problems. Videos summarizing our results are available: this https URL", "target": ["ロボットアームでルービックキューブを解けるようにした研究。キューブはソルバーで解いてしまい、動きを実践するマニピュレーションを学習する。アルゴリズムはOpenAI FiveのPPOベースで、学習が進むと自動的に環境をより難しいものにするAutomatic Domain Randomization (ADR)を採用している。"]} {"source": "The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains. To address this gap, we propose datasheets for datasets. In the electronics industry, every component, no matter how simple or complex, is accompanied with a datasheet that describes its operating characteristics, test results, recommended uses, and other information. 
By analogy, we propose that every dataset be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on. Datasheets for datasets will facilitate better communication between dataset creators and dataset consumers, and encourage the machine learning community to prioritize transparency and accountability.", "target": ["データセット作成のためのチェックリストを提示した研究。データセットの中身、作成方法について、実際よくユーザーから寄せられる質問をベースにチェックすべきポイントがまとめられている。"]} {"source": "Neural sequence generation is typically performed token-by-token and left-to-right. Whenever a token is generated only previously produced tokens are taken into consideration. In contrast, for problems such as sequence classification, bidirectional attention, which takes both past and future tokens into consideration, has been shown to perform much better. We propose to make the sequence generation process bidirectional by employing special placeholder tokens. Treated as a node in a fully connected graph, a placeholder token can take past and future tokens into consideration when generating the actual output token. We verify the effectiveness of our approach experimentally on two conversational tasks where the proposed bidirectional model outperforms competitive baselines by a large margin.", "target": ["Bi-directionalの効果をDecodeでも得ようという研究。生成する文字列をプレースホルダの連続とし、ランダムに選択したプレースホルダを実単語に置き換えていくことで様々な位置の単語情報を生成時に考慮できるようにしている。対話のデータセット(DailyDialog等)で高い効果を確認。"]} {"source": "We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest-neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods), implemented in Weka, R (with and without the caret package), C and Matlab, including all the relevant classifiers available today. We use 121 data sets, which represent the whole UCI data base (excluding the large-scale problems) and other own real problems, in order to achieve significant conclusions about the classifier behavior, not dependent on the data set collection. The classifiers most likely to be the bests are the random forest (RF) versions, the best of which (implemented in R and accessed via caret) achieves 94.1% of the maximum accuracy overcoming 90% in the 84.3% of the data sets. However, the difference is not statistically significant with the second best, the SVM with Gaussian kernel implemented in C using LibSVM, which achieves 92.3% of the maximum accuracy. A few models are clearly better than the remaining ones: random forest, SVM with Gaussian and polynomial kernels, extreme learning machine with Gaussian kernel, C5.0 and avNNet (a committee of multi-layer perceptrons implemented in R with the caret package). The random forest is clearly the best family of classifiers (3 out of 5 bests classifiers are RF), followed by SVM (4 classifiers in the top-10), neural networks and boosting ensembles (5 and 3 members in the top-20, respectively).", "target": ["UCI databaseに登録されているすべてのデータセットに対し、17種類の分類器のパフォーマンスを計測した研究。結果はランダムフォレストが1位で、SVM/Neural Net/アンサンブル(boosting)と続く。"]} {"source": "We investigate how annotators’ insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations.
We first uncover unexpected correlations between surface markers of African American English (AAE) and ratings of toxicity in several widely-used hate speech datasets. Then, we show that models trained on these corpora acquire and propagate these biases, such that AAE tweets and tweets by self-identified African Americans are up to two times more likely to be labelled as offensive compared to others. Finally, we propose *dialect* and *race priming* as ways to reduce the racial bias in annotation, showing that when annotators are made explicitly aware of an AAE tweet’s dialect they are significantly less likely to label the tweet as offensive.", "target": ["ヘイトスピーチの検出を学習するためのデータセット自体にバイアスが含まれていると指摘した研究。具体的には、黒人の人が話す方言(African American English)が含まれる場合内容と無関係にヘイトと判断されることが多いという(誤検知の確率が通常9%に対し含まれると46%)"]} {"source": "In this paper, we propose a novel technique for improving the stochastic gradient descent (SGD) method to train deep networks, which we term \emph{PowerSGD}. The proposed PowerSGD method simply raises the stochastic gradient to a certain power during iterations and introduces only one additional parameter, namely, the power exponent (when γ = 1, PowerSGD reduces to SGD). We further propose PowerSGD with momentum, which we term \emph{PowerSGDM}, and provide convergence rate analysis on both PowerSGD and PowerSGDM methods. Experiments are conducted on popular deep learning models and benchmark datasets. Empirical results show that the proposed PowerSGD and PowerSGDM obtain faster initial training speed than adaptive gradient methods, comparable generalization ability with SGD, and improved robustness to hyper-parameter selection and vanishing gradients. PowerSGD is essentially a gradient modifier via a nonlinear transformation. As such, it is orthogonal and complementary to other techniques for accelerating gradient-based optimization.", "target": ["SGDを改善するシンプルな手法の提案。学習率を調整するのでなく勾配自体に非線形関数を適用しており、具体的には学習率 * (sign(g) * |g|γ)を行っている(勾配のノルムのγ乗にsignを掛けたものを学習に使用)。収束速度が速くなるが最終的な汎化性能はSGDに劣る。勾配消失に強い特性有"]} {"source": "Many (but not all) approaches self-qualifying as \"meta-learning\" in deep learning and reinforcement learning fit a common pattern of approximating the solution to a nested optimization problem. In this paper, we give a formalization of this shared pattern, which we call GIMLI, prove its general requirements, and derive a general-purpose algorithm for implementing similar approaches. Based on this analysis and algorithm, we describe a library of our design, higher, which we share with the community to assist and enable future research into these kinds of meta-learning approaches. We end the paper by showcasing the practical applications of this framework and library through illustrative experiments and ablation studies which they facilitate.", "target": ["メタラーニング(学習率調整なども含む)を行うための汎用的な演算フレームワークの提案。内部で行われるT回のパラメーター更新(opt)により外側(メタ側)のパラメーターを調整する(metaopt)という入れ子ループの構造をとっている。これを実装したPyTorchライブラリhigherを公開。"]} {"source": "This paper presents a method to solve the city metro network expansion problem using reinforcement learning (RL). In this method, we formulate the metro expansion as a process of sequential station selection, and design feasibility rules based on the selected station sequence to ensure the reasonable connection patterns of metro line. Following this formulation, we train an actor critic model to design the next metro line. The actor is a seq2seq network with attention mechanism to generate the parameterized policy which is the probability distribution over feasible stations.
The critic is used to estimate the expected reward, which is determined by the output station sequences generated by the actor during training, in order to reduce the training variance. The learning procedure only requires the reward calculation, thus our general method can be extended to multi-factor cases easily. Considering origin-destination (OD) trips and social equity, we expand the current metro network in Xi'an, China, based on the real mobility information of 24,770,715 mobile phone users in the whole city. The results demonstrate the effectiveness of our method.", "target": ["専門家の知見なしに強化学習を用いて、西安の地下鉄を最適に拡張させるという研究。アクセスの良さと公平性をRewardとし、RNNで駅を逐次生成、actor-criticで最適化する。公平性を最大にすると発展しているエリアを通るようになる等の変化があり、うまく2つの項目が機能していることがわかる。 ICLR2020"]} {"source": "The \"double descent\" risk curve was proposed to qualitatively describe the out-of-sample prediction accuracy of variably-parameterized machine learning models. This article provides a precise mathematical analysis for the shape of this curve in two simple data models with the least squares/least norm predictor. Specifically, it is shown that the risk peaks when the number of features p is close to the sample size n, but also that the risk decreases towards its minimum as p increases beyond n. This behavior is contrasted with that of \"prescient\" models that select features in an a priori optimal order.", "target": ["モデルの特徴量数と汎化性能の関係について調査した研究。過学習のリスクは特徴量数=データ数の場合に最大となるが、その境界を超えると逆に低下することを示唆(もちろん、事前知識で必要最低限の特徴を選択することは意味がある)。ガウス/フーリエの2モデルで検証している。"]} {"source": "In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters.", "target": ["Replay Buffer内の軌跡を活用して学習性能を向上させる手法の提案。価値関数の値とReplay Bufferから計算できる割引現在価値の期待値の差が最小になるよう、またこの差( アドバンテージ)が最大になるよう戦略を更新する。シンプルながらSOTA手法に匹敵する性能"]} {"source": "Although neural machine translation models reached high translation quality, the autoregressive nature makes inference difficult to parallelize and leads to high translation latency. Inspired by recent refinement-based approaches, we propose LaNMT, a latent-variable non-autoregressive model with continuous latent variables and deterministic inference procedure. 
In contrast to existing approaches, we use a deterministic inference algorithm to find the target sequence that maximizes the lowerbound to the log-probability. During inference, the length of translation automatically adapts itself. Our experiments show that the lowerbound can be greatly increased by running the inference algorithm, resulting in significantly improved translation quality. Our proposed model closes the performance gap between non-autoregressive and autoregressive approaches on ASPEC Ja-En dataset with 8.6x faster decoding. On WMT'14 En-De dataset, our model narrows the gap with autoregressive baseline to 2.0 BLEU points with 12.5x speedup. By decoding multiple initial latent variables in parallel and rescore using a teacher model, the proposed model further brings the gap down to 1.0 BLEU point on WMT'14 En-De task with 6.8x speedup.", "target": ["Tokenを順次生成していく自己回帰型の翻訳ではなく、潜在表現から同時にサンプリングして文生成を行う研究。精度は犠牲になるが(BLEU -2.0)、圧倒的に高速(7.8~12.5倍)というメリットがある。Transformerベースのモデルで潜在表現を作成するが、生成文長も予測し生成を行っている。"]} {"source": "Contextualized word representations, such as ELMo and BERT, were shown to perform well on a various of semantic and structural (syntactic) task. In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors, that discards the lexical semantics, but keeps the structural information. To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors. We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics. Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in few-shot parsing setting.", "target": ["自然言語処理でも潜在表現の分解に挑戦した研究。具体的には文構造と文意を別々に識別できるモデルを、教師なしで得ることを目指している。学習にTriplet lossを使っており、構造が同じで語彙が異なるものをPositive、その逆をNegativeとし学習。都合よいデータを作るため言語モデルなどを駆使している"]} {"source": "We introduce a dialogue policy based on a transformer architecture, where the self-attention mechanism operates over the sequence of dialogue turns. Recent work has used hierarchical recurrent neural networks to encode multiple utterances in a dialogue context, but we argue that a pure self-attention mechanism is more suitable. By default, an RNN assumes that every item in a sequence is relevant for producing an encoding of the full sequence, but a single conversation can consist of multiple overlapping discourse segments as speakers interleave multiple topics. A transformer picks which turns to include in its encoding of the current dialogue state, and is naturally suited to selectively ignoring or attending to dialogue history. We compare the performance of the Transformer Embedding Dialogue (TED) policy to an LSTM and to the REDP, which was specifically designed to overcome this limitation of RNNs.", "target": ["Transformerをシンプルに対話へ適用した研究。既存の手法は対話履歴をRNN+Attentionで活用していたが、そこをTransformer(Self-Attention)に置き換えている。階層型のRNN手法と同程度の精度だが、こちらの方が高速かつシンプル。"]} {"source": "Generative Adversarial Networks (GANs) are a type of generative model which have received much attention due to their ability to model complex real-world data. 
Despite their recent successes, the process of training GANs remains challenging, suffering from instability problems such as non-convergence, vanishing or exploding gradients, and mode collapse. In recent years, a diverse set of approaches have been proposed which focus on stabilizing the GAN training procedure. The purpose of this survey is to provide a comprehensive overview of the GAN training stabilization methods which can be found in the literature. We discuss the advantages and disadvantages of each approach, offer a comparative summary, and conclude with a discussion of open problems.", "target": ["GANを安定させるためのモデル構造、目的関数、学習方法などなどがまとめられた資料。結論としてどれもハイパーパラメーター探索以上の効果を上げていないのでは?としている。ゲーム理論の導入やマルチエージェント設定など最近の取り組みも取り上げられている。"]} {"source": "Reinforcement learning (RL) has seen great advancements in the past few years. Nevertheless, the consensus among the RL community is that currently used model-free methods, despite all their benefits, suffer from extreme data inefficiency. To circumvent this problem, novel model-based approaches were introduced that often claim to be much more efficient than their model-free counterparts. In this paper, however, we demonstrate that the state-of-the-art model-free Rainbow DQN algorithm can be trained using a much smaller number of samples than it is commonly reported. By simply allowing the algorithm to execute network updates more frequently we manage to reach similar or better results than existing model-based techniques, at a fraction of complexity and computational costs. Furthermore, based on the outcomes of the study, we argue that the agent similar to the modified Rainbow DQN that is presented in this paper should be used as a baseline for any future work aimed at improving sample efficiency of deep reinforcement learning.", "target": ["モデルフリーの手法はモデルベースに比べて効率が悪いと言われているが、モデルベースはモデルを使った更新を何度も行う。モデルフリー側も同じくらい更新すれば同等の性能が得られるのではないか?とし、素のRainbowを1 iterationで複数回(8回がベスト)更新することで良好な精度を達成。"]} {"source": "As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.", "target": ["BERT(#959 )を蒸留したモデルの提案。目的関数として通常の言語モデル以外に親の出力分布との近さ、また潜在表現との近さを導入している(3つ合わせて\"triple loss\"と呼んでいる)。オリジナルより60%高速/軽量で、ELMoより概ね高性能+オリジナルの95%程度の性能を達成。"]} {"source": "Neural text generation is a key tool in natural language applications, but it is well known there are major problems at its core. In particular, standard likelihood training and decoding leads to dull and repetitive outputs. 
While some post-hoc fixes have been proposed, in particular top-k and nucleus sampling, they do not address the fact that the token-level probabilities predicted by the model are poor. In this paper we show that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution. We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model. We show that both token and sequence level unlikelihood training give less repetitive, less dull text while maintaining perplexity, giving superior generations using standard greedy or beam search. According to human evaluations, our approach with standard beam search also outperforms the currently popular decoding methods of nucleus sampling or beam blocking, thus providing a strong alternative to existing techniques.", "target": ["深層学習を使ったテキスト生成でよくみられる「繰り返し」を抑制する手法の提案。(通常の)次に来る単語の予測確率を上げる以外に、「次来てはいけない単語(Unlikelihood)」の予測確率を下げるよう学習させる(来てはいけない=既出/生成済みの単語)。シンプルな手法ながら、人手評価のスコアが有意に上昇"]} {"source": "We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.", "target": ["長い系列を扱うには当然過去の系列を取っておく必要がありメモリを圧迫する。そのため過去の系列は圧縮するという手法の提案。潜在表現に1次元の畳み込みをかけ、かつ圧縮前のAttentionが再現されるように(Attentionされるような表現を含むように)学習するのが良い"]} {"source": "In this paper, we investigate the problem of training neural machine translation (NMT) systems with a dataset of more than 40 billion bilingual sentence pairs, which is larger than the largest dataset to date by orders of magnitude. Unprecedented challenges emerge in this situation compared to previous NMT work, including severe noise in the data and prohibitively long training time. We propose practical solutions to handle these issues and demonstrate that large-scale pretraining significantly improves NMT performance. We are able to push the BLEU score of WMT17 Chinese-English dataset to 32.3, with a significant performance boost of +3.2 over existing state-of-the-art results.", "target": ["400億のパラレルコーパスで翻訳モデルを学習させる試み。データが多すぎてNvidia V100 512個で3ヵ月学習させても2epochしか進んでいないという。そのためデータを分けてモデルを学習し、アンサンブルを取る手法も提案。最終的に評価対象のデータのみでFine Tuningすることで大幅なBLEUの向上を確認。"]} {"source": "Recent learning-to-plan methods have shown promising results on planning directly from observation space. Yet, their ability to plan for long-horizon tasks is limited by the accuracy of the prediction model. On the other hand, classical symbolic planners show remarkable capabilities in solving long-horizon tasks, but they require predefined symbolic rules and symbolic states, restricting their real-world applicability. In this work, we combine the benefits of these two paradigms and propose a learning-to-plan method that can directly generate a long-term symbolic plan conditioned on high-dimensional observations. 
We borrow the idea of regression (backward) planning from classical planning literature and introduce Regression Planning Networks (RPN), a neural network architecture that plans backward starting at a task goal and generates a sequence of intermediate goals that reaches the current observation. We show that our model not only inherits many favorable traits from symbolic planning, e.g., the ability to solve previously unseen tasks but also can learn from visual inputs in an end-to-end manner. We evaluate the capabilities of RPN in a grid world environment and a simulated 3D kitchen environment featuring complex visual scenes and long task horizons, and show that it achieves near-optimal performance in completely new task instances.", "target": ["シンボルによる指示(「キャベツ」を「煮る」など)を実行するモデルの提案。次の状態を予測していく形では正確/長期で行うのは難しいため、最終状態から逆側に予測していく手法を採用。タスクについてもサブゴールに分割することで学習しやすく+長いタスクに対応できるようにしている。"]} {"source": "Latent tree learning(LTL) methods learn to parse sentences using only indirect supervision from a downstream task. Recent advances in latent tree learning have made it possible to recover moderately high quality tree structures by training with language modeling or auto-encoding objectives. In this work, we explore the hypothesis that decoding in machine translation, as a conditional language modeling task, will produce better tree structures since it offers a similar training signal as language modeling, but with more semantic signal. We adapt two existing latent-tree language models--PRPN andON-LSTM--for use in translation. We find that they indeed recover trees that are better in F1 score than those seen in language modeling on WSJ test set, while maintaining strong translation quality. We observe that translation is a better objective than language modeling for inducing trees, marking the first success at latent tree learning using a machine translation objective. Additionally, our findings suggest that, although translation provides better signal for inducing trees than language modeling, translation models can perform well without exploiting the latent tree structure.", "target": ["文のツリー構造を学習するのに適切なタスクを調べた研究。ツリー構造を学ぶとされるモデル(Ordered Neurons等 #1210 )を言語モデル/翻訳双方で学習し、翻訳の方がツリーを学ぶのに有効であることを確認。ツリー情報がある方がBLEUのスコアが高くなるが、分散が大きくなることも確認"]} {"source": "Data augmentation has been widely applied as an effective methodology to improve generalization in particular when training deep neural networks. Recently, researchers proposed a few intensive data augmentation techniques, which indeed improved accuracy, yet we notice that these methods augment data have also caused a considerable gap between clean and augmented data. In this paper, we revisit this problem from an analytical perspective, for which we estimate the upper-bound of expected risk using two terms, namely, empirical risk and generalization error, respectively. We develop an understanding of data augmentation as regularization, which highlights the major features. As a result, data augmentation significantly reduces the generalization error, but meanwhile leads to a slightly higher empirical risk. On the assumption that data augmentation helps models converge to a better region, the model can benefit from a lower empirical risk achieved by a simple method, i.e., using less-augmented data to refine the model trained on fully-augmented data. 
Our approach achieves consistent accuracy gain on a few standard image classification benchmarks, and the gain transfers to object detection.", "target": ["拡張データで学習したあと、最後に拡張なしデータで調整するというステップを加えるという研究。データ拡張は良いが、不適なデータが出てくる可能性があるので最後の微調整では綺麗なものを使った方が良いのでは、という提案"]} {"source": "Spatial and time-dependent data is of interest in many applications. This task is difficult due to its complex spatial dependency, long-range temporal dependency, data non-stationarity, and data heterogeneity. To address these challenges, we propose Forecaster, a graph Transformer architecture. Specifically, we start by learning the structure of the graph that parsimoniously represents the spatial dependency between the data at different locations. Based on the topology of the graph, we sparsify the Transformer to account for the strength of spatial dependency, long-range temporal dependency, data non-stationarity, and data heterogeneity. We evaluate Forecaster in the problem of forecasting taxi ride-hailing demand and show that our proposed architecture significantly outperforms the state-of-the-art baselines.", "target": ["GraphデータにTransformerを適用する研究。Gaussian Markov random fieldでグラフのエッジを作成し、それをGraph拡張した(エッジがあるところのみに結合がある)Transformerに入れこむ。タクシー需要予測で成果."]} {"source": "Automatic news comment generation is a new testbed for techniques of natural language generation. In this paper, we propose a \"read-attend-comment\" procedure for news comment generation and formalize the procedure with a reading network and a generation network. The reading network comprehends a news article and distills some important points from it, then the generation network creates a comment by attending to the extracted discrete points and the news title. We optimize the model in an end-to-end manner by maximizing a variational lower bound of the true objective using the back-propagation algorithm. Experimental results on two datasets indicate that our model can significantly outperform existing methods in terms of both automatic evaluation and human judgment.", "target": ["ニュースに対するコメントを自動生成する研究。ニュース記事は長いため、単語表現以外に文内位置と記事全体位置2つのposition embeddingを使用し全結合でマージしている。その後タイトルにAttentionを貼りつつspanを予測したうえでコメント生成に使用(spanのラベルはないため中間表現のような形になる)"]} {"source": "In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning at search time. With a single PruningNet trained for the target network, we can search for various Pruned Networks under different constraints with little human participation. Compared to the state-of-the-art pruning methods, we have demonstrated superior performances on MobileNet V1/V2 and ResNet. Codes are available on this https URL.", "target": ["チャンネルの枝刈りをする手法の提案。学習済みのモデルを枝刈りしていくスタイルではなく、任意の構造に対し適切な重みを生成できるネットワーク(PruningNet)を学習し、生成した重みをセットした様々な構造から適切なものを選ぶ方式を取っている。これにより作成後のFine Tuningを不要にしている。"]} {"source": "We present a framework for data-driven robotics that makes use of a large dataset of recorded robot experience and scales to several tasks using learned reward functions. 
We show how to apply this framework to accomplish three different object manipulation tasks on a real robot platform. Given demonstrations of a task together with task-agnostic recorded experience, we use a special form of human annotation as supervision to learn a reward function, which enables us to deal with real-world tasks where the reward signal cannot be acquired directly. Learned rewards are used in combination with a large dataset of experience from different tasks to learn a robot policy offline using batch RL. We show that using our approach it is possible to train agents to perform a variety of challenging manipulation tasks including stacking rigid objects and handling cloth.", "target": ["強化学習を使用したロボット操作を行うための、実践的なシステムフレームワークの提案。人間のデモ/ロボットの行動はすべからくストレージに蓄積し、そのデータを使って報酬曲線(動画に対する報酬遷移のアノテーション)を作成、そこから学習するという形態。"]} {"source": "Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images. However, their application in the audio domain has received limited attention, and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech. To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech. Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes. The discriminators analyse the audio both in terms of general realism, as well as how well the audio corresponds to the utterance that should be pronounced. To measure the performance of GAN-TTS, we employ both subjective human evaluation (MOS - Mean Opinion Score), as well as novel quantitative metrics (Fréchet DeepSpeech Distance and Kernel DeepSpeech Distance), which we find to be well correlated with MOS. We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to the state-of-the-art models, and unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator. Listen to GAN-TTS reading this abstract at this https URL.", "target": ["音声合成にGANを適用した研究。音声全体ではなくランダムな幅で切り取ったものに対し識別を行う他、言語/ピッチで条件付けた識別も行う。双方について複数識別機をアンサンブルして判定を行う。評価については純粋な入力音声との近さと、言語の発音としてのらしさ双方を測る指標を提案。"]} {"source": "In natural language processing, it has been observed recently that generalization could be greatly improved by finetuning a large-scale language model pretrained on a large unlabeled corpus. Despite its recent success and wide adoption, finetuning a large pretrained language model on a downstream task is prone to degenerate performance when there are only a small number of training instances available. In this paper, we introduce a new regularization technique, to which we refer as \"mixout\", motivated by dropout. Mixout stochastically mixes the parameters of two models. We show that our mixout technique regularizes learning to minimize the deviation from one of the two models and that the strength of regularization adapts along the optimization trajectory. We empirically evaluate the proposed mixout and its variants on finetuning a pretrained language model on downstream tasks. 
More specifically, we demonstrate that the stability of finetuning and the average accuracy greatly increase when we use the proposed approach to regularize finetuning of BERT on downstream tasks in GLUE.", "target": ["Dropoutにヒントを得た、事前学習済みモデルをFine Tuningする手法の提案。Dropoutが確率的にConnectionを落とすように、2つのモデル(VanillaとPretrained)間でパラメーターを確率的にSwapする。これにより、破壊的忘却を防ぎつつ転移学習後の性能安定性を上げることができたという。"]} {"source": "Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT~\\citep{devlin2018bert}. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and \\squad benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at https://github.com/google-research/ALBERT.", "target": ["BERT(#959 )を軽量化しつつ精度も上げたという研究。Embeddingを行列分解することでパラメーター数を抑えつつ隠れ層のサイズを増やせるようにする、レイヤ間でパラメーターを共有する、連続文判定の代わりに文順序Swapを導入、の3点を実施している。"]} {"source": "An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning. An especially successful algorithm has been Model Agnostic Meta-Learning (MAML), a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks. Despite MAML's popularity, a fundamental open question remains -- is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta initialization already containing high quality features? We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor. This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML where we remove the inner loop for all but the (task-specific) head of a MAML-trained network. ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML. We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network (the NIL algorithm). We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly.", "target": ["メタラーニングについて、タスクに共通する初期値が効いているのか特徴抽出が効いているのかを検証した研究。各タスクを学習するinner loopの初期値を最適化するようouter loopを学習するのが基本だが、inner loopではほぼタスク特化の内容を学習していたためinner loopそのものを省略する構成を提案"]} {"source": "A series of recent papers has used a parsing algorithm due to Shen et al. 
(2018) to recover phrase-structure trees based on proxies for \"syntactic depth.\" These proxy depths are obtained from the representations learned by recurrent language models augmented with mechanisms that encourage the (unsupervised) discovery of hierarchical structure latent in natural language sentences. Using the same parser, we show that proxies derived from a conventional LSTM language model produce trees comparably well to the specialized architectures used in previous work. However, we also provide a detailed analysis of the parsing algorithm, showing (1) that it is incomplete---that is, it can recover only a fraction of possible trees---and (2) that it has a marked bias for right-branching structures which results in inflated performance in right-branching languages like English. Our analysis shows that evaluating with biased parsing algorithms can inflate the apparent structural competence of language models.", "target": ["LSTMに階層情報(構文解析など)を学習させる機構を取り入れた手法は、(近年発表されたOrdered Neurons #1210 を含めて)LSTMと同等ではないかという提議。潜在表現から構造を推定するパーサーが英語などの右分岐構造に寄っており、モデルというよりパーサーの問題になっていると指摘している。"]} {"source": "Understanding people's actions and interactions typically depends on seeing them. Automating the process of action recognition from visual data has been the topic of much research in the computer vision community. But what if it is too dark, or if the person is occluded or behind a wall? In this paper, we introduce a neural network model that can detect human actions through walls and occlusions, and in poor lighting conditions. Our model takes radio frequency (RF) signals as input, generates 3D human skeletons as an intermediate representation, and recognizes actions and interactions of multiple people over time. By translating the input to an intermediate skeleton-based representation, our model can learn from both vision-based and RF-based datasets, and allow the two tasks to help each other. We show that our model achieves comparable accuracy to vision-based action recognition systems in visible scenarios, yet continues to work accurately when people are not visible, hence addressing scenarios that are beyond the limit of today's vision-based action recognition.", "target": ["Wi-Fiで使われているような高周波(RF)を利用して暗い/壁に隠れていても行動検出を行えるようにした研究。RF/画像双方を入力としてとれるネットワークを構築し、交互学習することで(特徴として強い)画像から学んだ情報をRFでも使えるようにしている。結果として、RFのみで画像と同等の精度を達成。"]} {"source": "Given an outfit, what small changes would most improve its fashionability? This question presents an intriguing new vision challenge. We introduce Fashion++, an approach that proposes minimal adjustments to a full-body clothing outfit that will have maximal impact on its fashionability. Our model consists of a deep image generation neural network that learns to synthesize clothing conditioned on learned per-garment encodings. The latent encodings are explicitly factorized according to shape and texture, thereby allowing direct edits for both fit/presentation and color/patterns/material, respectively. We show how to bootstrap Web photos to automatically train a fashionability model, and develop an activation maximization-style approach to transform the input image into its more fashionable self. The edits suggested range from swapping in a new garment to tweaking its color, how it is worn (e.g., rolling up sleeves), or its fit (e.g., making pants baggier). 
Experiments demonstrate that Fashion++ provides successful edits, both according to automated metrics and human opinion.", "target": ["Facebookがファッション提案のためのネットワークFashion++を公開。服を丸ごと交換、ではなくインナーやアクセサリーといった微細な変更提案を行うことができる。形状から生成してテクスチャーで埋めるという2ネットワークを使った生成を行い、Discriminatorのスコアが上がるよう学習する。"]} {"source": "Policy gradient methods are among the most effective methods in challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including: if and how fast they converge to a globally optimal solution or how they cope with approximation error due to using a restricted class of parametric policies. This work provides provable characterizations of the computational, approximation, and sample size properties of policy gradient methods in the context of discounted Markov Decision Processes (MDPs). We focus on both: \"tabular\" policy parameterizations, where the optimal policy is contained in the class and where we show global convergence to the optimal policy; and parametric policy classes (considering both log-linear and neural policy classes), which may not contain the optimal policy and where we provide agnostic learning results. One central contribution of this work is in providing approximation guarantees that are average case -- which avoid explicit worst-case dependencies on the size of state space -- by making a formal connection to supervised learning under distribution shift. This characterization shows an important interplay between estimation error, approximation error, and exploration (as characterized through a precisely defined condition number).", "target": ["Policy Gradientについて、最適解への収束過程やその速度について調査した研究。各状態/行動にパラメーターがふられるTable Caseと、パラメーター数が集約されたFunction Approximationに分けて検証を行っている。探索が不十分になると局所最適に陥りやすくなり、探索は初期値の影響が大きいとのこと"]} {"source": "In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks since overall accuracy metric is associated with notable difficulties in the context of imbalanced data. 
Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when overall number of properly classified cases is of interest.", "target": ["ラベルの不均衡がDNN(CNN)に及ぼす影響と、効果的な対処法について調査した研究。不均衡の影響はCNNでも例外なく発生し、対処法としてはOversamplingが良いという結論(DNNの場合Overfitは起きにくいとのこと)。Accuracy重視の場合不均衡補正(Thresholding)もかけた方が良い"]} {"source": "Reward learning enables the application of reinforcement learning (RL) to tasks where reward is defined by human judgment, building a model of reward by asking humans questions. Most work on reward learning has used simulated environments, but complex information about values is often expressed in natural language, and we believe reward learning for language is a key to making RL practical and safe for real-world tasks. In this paper, we build on advances in generative pretraining of language models to apply reward learning to four natural language tasks: continuing text with positive sentiment or physically descriptive language, and summarization tasks on the TL;DR and CNN/Daily Mail datasets. For stylistic continuation we achieve good results with only 5,000 comparisons evaluated by humans. For summarization, models trained with 60,000 comparisons copy whole sentences from the input but skip irrelevant preamble; this leads to reasonable ROUGE scores and very good performance according to our human labelers, but may be exploiting the fact that labelers rely on simple heuristics.", "target": ["事前学習済み言語モデルを、人の嗜好に合わせてFine Tuningする研究。GPT-2をベースに使い、Contextとお題(Positiveに、要約を生成、など)を与えて生成を行わせる。生成文に対する人の嗜好で強化学習を行う。人の嗜好はScale AIというサービスを使用しオンラインで取得している。"]} {"source": "A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner-loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. 
Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.", "target": ["メタラーニングの手法MAMLを改良したiMAMLの提案。MAMLは(各タスクに共通する)ベストな初期位置を探す手法だが、1.初期パラメーター 2.各タスクのパラメーター 3.出力という流れのため誤差伝搬のフローが長く勾配消失が起る。そこで1=2という正則化制約を入れ2から1を学習できるよう改善。"]} {"source": "Through multi-agent competition, the simple objective of hide-and-seek, and standard reinforcement learning algorithms at scale, we find that agents create a self-supervised autocurriculum inducing multiple distinct rounds of emergent strategy, many of which require sophisticated tool use and coordination. We find clear evidence of six emergent phases in agent strategy in our environment, each of which creates a new pressure for the opposing team to adapt; for instance, agents learn to build multi-object shelters using moveable boxes which in turn leads to agents discovering that they can overcome obstacles using ramps. We further provide evidence that multi-agent competition may scale better with increasing environment complexity and leads to behavior that centers around far more human-relevant skills than other self-supervised reinforcement learning methods such as intrinsic motivation. Finally, we propose transfer and fine-tuning as a way to quantitatively evaluate targeted capabilities, and we compare hide-and-seek agents to both intrinsic motivation and random initialization baselines in a suite of domain-specific intelligence tests.", "target": ["隠れる側/見つける側に分かれて戦うゲームを通じ、相互の戦略を洗練できたとする研究。戦略のパラメーターはエージェント間共通だが行動時は個別に行動する。視認可能な範囲に応じてマスクを掛けた上でAttention+LSTMで行動推定を行うネットワークをPPOで学習する(価値関数はマスク無し)。"]} {"source": "We seek to understand how the representations of individual tokens and the structure of the learned feature space evolve between layers in deep neural networks under different learning objectives. We focus on the Transformers for our analysis as they have been shown effective on various tasks, including machine translation (MT), standard left-to-right language models (LM) and masked language modeling (MLM). Previous work used black-box probing tasks to show that the representations learned by the Transformer differ significantly depending on the objective. In this work, we use canonical correlation analysis and mutual information estimators to study how information flows across Transformer layers and how this process depends on the choice of learning objective. For example, as you go from bottom to top layers, information about the past in left-to-right language models gets vanished and predictions about the future get formed. In contrast, for MLM, representations initially acquire information about the context around the token, partially forgetting the token identity and producing a more generalized token representation. The token identity then gets recreated at the top MLM layers.", "target": ["Transformer(#329 )において、入力から出力の過程で単語表現がどう変化していくかを調べた研究。言語モデル/翻訳/BERT(#959 )的学習(Masked)の3種で調査。入力と各レイヤ内表現の相互情報量を調べたところ、言語モデル/翻訳は徐々に乖離していく一方(翻訳の方が緩やか)BERTは後半逆に入力へ近づくフェーズが存在した"]} {"source": "Deep learning models achieve impressive performance for skeleton-based human action recognition. However, the robustness of these models to adversarial attacks remains largely unexplored due to their complex spatio-temporal nature that must represent sparse and discrete skeleton joints. This work presents the first adversarial attack on skeleton-based action recognition with graph convolutional networks. 
The proposed targeted attack, termed Constrained Iterative Attack for Skeleton Actions (CIASA), perturbs joint locations in an action sequence such that the resulting adversarial sequence preserves the temporal coherence, spatial integrity, and the anthropomorphic plausibility of the skeletons. CIASA achieves this feat by satisfying multiple physical constraints, and employing spatial skeleton realignments for the perturbed skeletons along with regularization of the adversarial skeletons with Generative networks. We also explore the possibility of semantically imperceptible localized attacks with CIASA, and succeed in fooling the state-of-the-art skeleton action recognition models with high confidence. CIASA perturbations show high transferability for black-box attacks. We also show that the perturbed skeleton sequences are able to induce adversarial behavior in the RGB videos created with computer graphics. A comprehensive evaluation with NTU and Kinetics datasets ascertains the effectiveness of CIASA for graph-based skeleton action recognition and reveals the imminent threat to the spatio-temporal deep learning tasks in general.", "target": ["Pose(骨格)検出モデルに対するAdversarial Attack。内容的にはグラフ構造+時系列のデータに対するAttackとなっており、微細なノード特徴/接続変更(Edgeを落とす/つなぐなど)を行うことで認識クラスを誤らせることを目的としている。8割以上精度を落とせるという結果。"]} {"source": "We propose a system that finds the strongest supporting evidence for a given answer to a question, using passage-based question-answering (QA) as a testbed. We train evidence agents to select the passage sentences that most convince a pretrained QA model of a given answer, if the QA model received those sentences instead of the full passage. Rather than finding evidence that convinces one model alone, we find that agents select evidence that generalizes; agent-chosen evidence increases the plausibility of the supported answer, as judged by other QA models and humans. Given its general nature, this approach improves QA in a robust manner: using agent-selected evidence (i) humans can correctly answer questions with only ~20% of the full passage and (ii) QA models can generalize to longer passages and harder questions.", "target": ["QAモデルの回答根拠を探る研究。多肢選択問題を対象としており、Judgeモデルを回答できるよう学習させる一方、その回答を(他の回答に)誘導する根拠を抽出できるようAgentを学習させる。抽出した根拠は、人間が見ても回答を支持すると思えるものであることを確認。"]} {"source": "Deep learning techniques have become the method of choice for researchers working on algorithmic aspects of recommender systems. With the strongly increased interest in machine learning in general, it has, as a result, become difficult to keep track of what represents the state-of-the-art at the moment, e.g., for top-n recommendation tasks. At the same time, several recent publications point out problems in today's research practice in applied machine learning, e.g., in terms of the reproducibility of the results or the choice of the baselines when proposing new models. In this work, we report the results of a systematic analysis of algorithmic proposals for top-n recommendation tasks. Specifically, we considered 18 algorithms that were presented at top-level research conferences in the last years. Only 7 of them could be reproduced with reasonable effort. For these methods, it however turned out that 6 of them can often be outperformed with comparably simple heuristic methods, e.g., based on nearest-neighbor or graph-based techniques. The remaining one clearly outperformed the baselines but did not consistently outperform a well-tuned non-neural linear ranking method. 
Overall, our work sheds light on a number of potential problems in today's machine learning scholarship and calls for improved scientific practices in this area. Source code of our experiments and full results are available at: this https URL.", "target": ["Top-Nの推薦を行う深層学習系の手法について、シンプル/古典的な手法(素直にレーティング順に推薦するTopPopular、Item/Userを対象にKNNを行う手法など)と比較を行った研究。実装が再現できるかで足切りした7つの手法をベースと比較しているが、結果はほぼ負けている状態。"]} {"source": "Recent research towards understanding neural networks probes models in a top-down manner, but is only able to identify model tendencies that are known a priori. We propose Susceptibility Identification through Fine-Tuning (SIFT), a novel abstractive method that uncovers a model's preferences without imposing any prior. By fine-tuning an autoencoder with the gradients from a fixed classifier, we are able to extract propensities that characterize different kinds of classifiers in a bottom-up manner. We further leverage the SIFT architecture to rephrase sentences in order to predict the opposing class of the ground truth label, uncovering potential artifacts encoded in the fixed classification model. We evaluate our method on three diverse tasks with four different models. We contrast the propensities of the models as well as reproduce artifacts reported in the literature.", "target": ["テキスト分類モデルの挙動を診断するために、転移学習を使う手法の提案。Encoder/Decoder形式で学習したAuto Encoder(学習するテキストは分類対象とは別)を、事前学習済み分類機にくっつけてDecoderのみ学習させる(Encoder/分類機の重みは固定)。Decoderがどんな単語を選択するようになるかで診断する"]} {"source": "Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions.", "target": ["時系列予測の問題において、機械学習のモデルより既存の統計モデル(ARMAモデルなど)の方が予測精度において優良な結果が出るという研究。データへの適合=予測精度の向上ではないことも実験で示している。機械学習の研究では統計モデルとの比較も入れるべきという提言をしている。"]} {"source": "Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as \"fill-in-the-blank\" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. 
We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at this https URL.", "target": ["BERT(#959)の中に関係知識が蓄積されていることを実験で確認した研究。Fact関係(a is b)は高い精度で推論できた一方、1:多・多:多の推論精度は低かったという結果。転移学習なしBERT vs 教師あり+関係推定特化モデルでprecision@10が57.1% vs 63.5%となかなかの精度。"]} {"source": "Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution. We have released multiple full-sized, pretrained versions of CTRL at this https URL.", "target": ["言語モデルによる事前学習を行う際に、挙動をコントロールするコードを付けた研究。これによりWikipediaスタイル、Redditスタイルといった条件を与えた上で生成を行うことができる(Questionに対してAnswerを生成するといった制御も行っている)。"]} {"source": "Neural networks are part of many contemporary NLP systems, yet their empirical successes come at the price of vulnerability to adversarial attacks. Previous work has used adversarial training and data augmentation to partially mitigate such brittleness, but these are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations. In this work, we approach the problem from the opposite direction: to formally verify a system's robustness against a predefined class of adversarial attacks. We study text classification under synonym replacements or character flip perturbations. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation -- a formal model verification method. We modify the conventional log-likelihood training objective to train models that can be efficiently verified, which would otherwise come with exponential search complexity. The resulting models show only little difference in terms of nominal accuracy, but have much improved verified accuracy under perturbations and come with an efficiently computable formal guarantee on worst case adversaries.", "target": ["テキスト分類でAdversarialの手法によらない耐性を獲得させる研究。同義語への言い換えや文字置換をノイズとし、ノイズによる変動上界を計算するInterval Bound Propagation (#1084) で学習を行う。結果としてAdversarialは常に上界を狙うわけではない一方上界の計算コストは高いし層が深いと使えないしと・・・"]} {"source": "Almost all existing machine translation models are built on top of character-based vocabularies: characters, subwords or words.
Rare characters from noisy text or character-rich languages such as Japanese and Chinese however can unnecessarily take up vocabulary slots and limit its compactness. Representing text at the level of bytes and using the 256 byte set as vocabulary is a potential solution to this issue. High computational cost has however prevented it from being widely deployed or used in practice. In this paper, we investigate byte-level subwords, specifically byte-level BPE (BBPE), which is compacter than character vocabulary and has no out-of-vocabulary tokens, but is more efficient than using pure bytes only is. We claim that contextualizing BBPE embeddings is necessary, which can be implemented by a convolutional or recurrent layer. Our experiments show that BBPE has comparable performance to BPE while its size is only 1/8 of that for BPE. In the multilingual setting, BBPE maximizes vocabulary sharing across many languages and achieves better translation quality. Moreover, we show that BBPE enables transferring models between languages with non-overlapping character sets.", "target": ["翻訳において、バイト列に対してsub wordを行おうという研究。UTF-8のバイト列を対象としており、実質未知語0にすることが可能(Decoderでバイト列を文字に復号する)。DepthWiseConvかBi-directional GRUでコンテキストを含んだ表現を得てからTransformerに入れることで、純Byte-Pairと同等の性能を確認"]} {"source": "We consider the problem of discovering novel object categories in an image collection. While these images are unlabelled, we also assume prior knowledge of related but different image classes. We use such prior knowledge to reduce the ambiguity of clustering, and improve the quality of the newly discovered classes. Our contributions are twofold. The first contribution is to extend Deep Embedded Clustering to a transfer learning setting; we also improve the algorithm by introducing a representation bottleneck, temporal ensembling, and consistency. The second contribution is a method to estimate the number of classes in the unlabelled data. This also transfers knowledge from the known classes, using them as probes to diagnose different choices for the number of classes in the unlabelled subset. We thoroughly evaluate our method, substantially outperforming state-of-the-art techniques in a large number of benchmarks, including ImageNet, OmniGlot, CIFAR-100, CIFAR-10, and SVHN.", "target": ["ラベルありデータの転移学習で教師なしデータのクラス数を推定する研究。Deep Embedded Clusteringを拡張した。複数のクラスタ数で①②を実験し、それらの平均をラベルなしデータのクラス数とする。 ①ラベルデータを2つ(train/valid)に分割し、train+unlabelにK-meansを適用してvalidをうまく分類できるクラスタ数 ②クラスタ評価指標(CVI)が最良"]} {"source": "The recent success of natural language understanding (NLU) systems has been troubled by results highlighting the failure of these models to generalize in a systematic and robust way. In this work, we introduce a diagnostic benchmark suite, named CLUTRR, to clarify some key issues related to the robustness and systematicity of NLU systems. Motivated by classic work on inductive logic programming, CLUTRR requires that an NLU system infer kinship relations between characters in short stories. Successful performance on this task requires both extracting relationships between entities, as well as inferring the logical rules governing these relationships. CLUTRR allows us to precisely measure a model's ability for systematic generalization by evaluating on held-out combinations of logical rules, and it allows us to evaluate a model's robustness by adding curated noise facts.
Our empirical results highlight a substantial performance gap between state-of-the-art NLU models (e.g., BERT and MAC) and a graph neural network model that works directly with symbolic inputs---with the graph-based model exhibiting both stronger generalization and greater robustness.", "target": ["モデルの論理推論能力を検証するためのデータセットの公開。ボブの母親がアリス/アリスの父親がジョン=>ジョンはボブの祖父、というような血縁関係の推論を対象としており、論理構造を生成した後にテキストをつけるという方式で作成している。CLEVR攻略モデルでも精度が出ていないタスクとなっている"]} {"source": "We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We show that this extractive step significantly improves summarization results. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher rouge scores. Note: The abstract above was not written by the authors, it was generated by one of the models presented in this paper.", "target": ["論文のような長い文書を要約する手法の提案。この論文の要約自体が提案手法で書かれているという小洒落た構成。抽出型要約と事前学習済み言語モデルを組み合わせており、言語モデル学習時に本文=>要約(Abstract)の順に並べることで生成かつ要約を学習させたとしている"]} {"source": "In this paper we present a method to learn word embeddings that are resilient to misspellings. Existing word embeddings have limited applicability to malformed texts, which contain a non-negligible amount of out-of-vocabulary words. We propose a method combining FastText with subwords and a supervised task of learning misspelling patterns. In our method, misspellings of each word are embedded close to their correct variants. We train these embeddings on a new dataset we are releasing publicly. Finally, we experimentally show the advantages of this approach on both intrinsic and extrinsic NLP tasks using public test sets.", "target": ["誤字脱字に強い分散表現の提案。fastTextをベースにして、ミススペルの単語と本物の単語の分散表現を近づけるLossの項を導入している。ミススペルについては、別途辞書を用意する(このため教師ありとの併用の側面がある)。"]} {"source": "In an effort to better understand the different ways in which the discount factor affects the optimization process in reinforcement learning, we designed a set of experiments to study each effect in isolation. Our analysis reveals that the common perception that poor performance of low discount factors is caused by (too) small action-gaps requires revision. We propose an alternative hypothesis that identifies the size-difference of the action-gap across the state-space as the primary cause. We then introduce a new method that enables more homogeneous action-gaps by mapping value estimates to a logarithmic space. We prove convergence for this method under standard assumptions and demonstrate empirically that it indeed enables lower discount factors for approximate reinforcement-learning methods. This in turn allows tackling a class of reinforcement-learning problems that are challenging to solve with traditional methods.", "target": ["強化学習において、なぜFunction Approximationで低い割引率を使うとパフォーマンスが出ないのかを分析した研究。低割引率が最適/二番目の行動確率の差を小さくするからと考えられていたが、状態ごとにスケールが大きく異なるようになるからではないかと示唆している。"]} {"source": "Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. 
However, directly modeling the joint distribution of all tokens simultaneously is challenging, and even with increasingly complex model structures accuracy lags significantly behind autoregressive models. In this paper, we propose a simple, efficient, and effective model for non-autoregressive sequence generation using latent variable models. Specifically, we turn to generative flow, an elegant technique to model complex distributions using neural networks, and design several layers of flow tailored for modeling the conditional density of sequential latent variables. We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art non-autoregressive NMT models and almost constant decoding time w.r.t the sequence length.", "target": ["Non-autoregressiveな系列生成モデルの提案。seq2seqなどの系列モデルは自己回帰型であり(前回の生成が今回の生成に影響する)、精度は高いが順番にしか生成できなかった。そこで系列要素を直接推定する手法を使用し、生成のための潜在表現をFlowベースのモデルで得ることで実行速度/精度面双方を改善。"]} {"source": "In this paper, we address a novel task, namely weakly-supervised spatio-temporally grounding natural sentence in video. Specifically, given a natural sentence and a video, we localize a spatio-temporal tube in the video that semantically corresponds to the given sentence, with no reliance on any spatio-temporal annotations during training. First, a set of spatio-temporal tubes, referred to as instances, are extracted from the video. We then encode these instances and the sentence using our proposed attentive interactor which can exploit their fine-grained relationships to characterize their matching behaviors. Besides a ranking loss, a novel diversity loss is introduced to train the proposed attentive interactor to strengthen the matching behaviors of reliable instance-sentence pairs and penalize the unreliable ones. Moreover, we also contribute a dataset, called VID-sentence, based on the ImageNet video object detection dataset, to serve as a benchmark for our task. Extensive experimental results demonstrate the superiority of our model over the baseline approaches.", "target": ["自然言語の記述に一致する画像フレームシーケンス(動画中の物体)を検出するタスクを提案した研究。インスタンスごとのフレームアノテーションがある既存データセット(ImageNetベース)にテキストの記述を付与。インスタンス検出後に自然言語でAttentionを貼るネットワークをベースラインとしている。"]} {"source": "Advances in learning and representations have reinvigorated work that connects language to other modalities. A particularly exciting direction is Vision-and-Language Navigation(VLN), in which agents interpret natural language instructions and visual scenes to move through environments and reach goals. Despite recent progress, current research leaves unclear how much of a role language understanding plays in this task, especially because dominant evaluation metrics have focused on goal completion rather than the sequence of actions corresponding to the instructions. Here, we highlight shortcomings of current metrics for the Room-to-Room dataset (Anderson et al.,2018b) and propose a new metric, Coverage weighted by Length Score (CLS). We also show that the existing paths in the dataset are not ideal for evaluating instruction following because they are direct-to-goal shortest paths. We join existing short paths to form more challenging extended paths to create a new data set, Room-for-Room (R4R). 
Using R4R and CLS, we show that agents that receive rewards for instruction fidelity outperform agents that focus on goal completion.", "target": ["自然言語によるナビゲーションタスクの評価を見直す提案。既存のデータセットでは最短経路を指示するものが多く、言語指示に従っているかあいまいだった。そこで既存データセット内の経路をつないで長い系列を作成し、系列のカバレッジで評価することを提案。"]} {"source": "We present the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for learning syntactic representations and structures without any explicit supervision. The model learns by looking at natural images and reading paired captions. VG-NSL generates constituency parse trees of texts, recursively composes representations for constituents, and matches them with images. We define concreteness of constituents by their matching scores with images, and use it to guide the parsing of text. Experiments on the MSCOCO data set show that VG-NSL outperforms various unsupervised parsing approaches that do not use visual grounding, in terms of F1 scores against gold parse trees. We find that VGNSL is much more stable with respect to the choice of random initialization and the amount of training data. We also find that the concreteness acquired by VG-NSL correlates well with a similar measure defined by linguists. Finally, we also apply VG-NSL to multiple languages in the Multi30K data set, showing that our model consistently outperforms prior unsupervised approaches.", "target": ["文のまとまった構成要素(句)を検出するために、画像情報を用いた研究。Image Captionのデータを使用して、抽出した句と画像が近くなるよう学習する。句と画像の対応を学習することで、精度の向上だけでなく学習データや初期化への依存を抑えることができた。"]} {"source": "We present a simple new method where an emergent NMT system is used for simultaneously selecting training data and learning internal NMT representations. This is done in a self-supervised way without parallel data, in such a way that both tasks enhance each other during training. The method is language independent, introduces no additional hyper-parameters, and achieves BLEU scores of 29.21 (en2fr) and 27.36 (fr2en) on newstest2014 using English and French Wikipedia data for training.", "target": ["翻訳を学習しつつ、学習するデータを選択させる手法の提案。翻訳ができるということは良い文表現も得られるはずであり、これを利用して表現の近いソース/ターゲットの文ペアを発見し学習データに追加する(ソース/ターゲット双方の表現が得られるようモデルを双方向にしている)"]} {"source": "Previous work on end-to-end translation from speech has primarily used frame-level features as speech representations, which creates longer, sparser sequences than text. We show that a naive method to create compressed phoneme-like speech representations is far more effective and efficient for translation than traditional frame-level speech features. Specifically, we generate phoneme labels for speech frames and average consecutive frames with the same label to create shorter, higher-level source sequences for translation. We see improvements of up to 5 BLEU on both our high and low resource language pairs, with a reduction in training time of 60%. Our improvements hold across multiple data sizes and two language pairs.", "target": ["音声翻訳において、フレームレベルの特徴を音素レベルに集約して用いる手法の提案。事前に音素ラベルの判定を行う必要があるが、精度を向上させ学習時間も削減することができる。音素ラベル判定は、言語独立の手法(CTC)を用いることで低リソース言語でも可能なことを確認。"]} {"source": "Attention mechanisms have become ubiquitous in NLP. Recent architectures, notably the Transformer, learn powerful context-aware word representations through layered, multi-headed attention. The multiple heads learn diverse types of word relationships. However, with standard softmax attention, all attention heads are dense, assigning a non-zero weight to all context words. 
In this work, we introduce the adaptively sparse Transformer, wherein attention heads have flexible, context-dependent sparsity patterns. This sparsity is accomplished by replacing softmax with \\alpha-entmax: a differentiable generalization of softmax that allows low-scoring words to receive precisely zero weight. Moreover, we derive a method to automatically learn the \\alpha parameter -- which controls the shape and sparsity of \\alpha-entmax -- allowing attention heads to choose between focused or spread-out behavior. Our adaptively sparse Transformer improves interpretability and head diversity when compared to softmax Transformers on machine translation datasets. Findings of the quantitative and qualitative analysis of our approach include that heads in different layers learn different sparsity preferences and tend to be more diverse in their attention distributions than softmax Transformers. Furthermore, at no cost in accuracy, sparsity in attention heads helps to uncover different head specializations.", "target": ["TransformerのAttentionについて、SpanをAdaptiveにしつつSparseにもする手法の提案。通常のsoftmaxでは割当確率がゼロにはならないので、これがノイズになる場合がある。そこで割当確率ゼロを許容するα-entmaxを用いている(=Sparseを可能にする)。ただ、softmaxとの性能差は微妙。"]} {"source": "Despite impressive empirical successes of neural machine translation (NMT) on standard benchmarks, limited parallel data impedes the application of NMT models to many language pairs. Data augmentation methods such as back-translation make it possible to use monolingual data to help alleviate these issues, but back-translation itself fails in extreme low-resource scenarios, especially for syntactically divergent languages. In this paper, we propose a simple yet effective solution, whereby target-language sentences are re-ordered to match the order of the source and used as an additional source of training-time supervision. Experiments with simulated low-resource Japanese-to-English, and real low-resource Uyghur-to-English scenarios find significant improvements over other semi-supervised alternatives.", "target": ["Back Translationを行うための翻訳モデルがそもそも作成できない低リソース言語のためのAugmentation方法。Targetの文をSourceの語順に並び替えて、各単語を辞書で変換するシンプルな方法を提案。文長が長いほど効果が高く、また語順が異なる言語間の翻訳に適した評価指標RIBESで改善を確認。"]} {"source": "Advances in multicore processors and accelerators have opened the flood gates to greater exploration and application of machine learning techniques to a variety of applications. These advances, along with breakdowns of several trends including Moore's Law, have prompted an explosion of processors and accelerators that promise even greater computational and machine learning capabilities. These processors and accelerators are coming in many forms, from CPUs and GPUs to ASICs, FPGAs, and dataflow accelerators. This paper surveys the current state of these processors and accelerators that have been publicly announced with performance and power consumption numbers. The performance and power values are plotted on a scatter graph and a number of dimensions and observations from the trends on this plot are discussed and analyzed. For instance, there are interesting trends in the plot regarding power consumption, numerical precision, and inference versus training. We then select and benchmark two commercially-available low size, weight, and power (SWaP) accelerators as these processors are the most interesting for embedded and mobile machine learning inference applications that are most applicable to the DoD and other SWaP constrained users. 
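The adaptively sparse Transformer entry above replaces softmax with α-entmax so that low-scoring context words can receive exactly zero attention weight. The sketch below implements sparsemax, the closed-form α = 2 special case of entmax, to show where the exact zeros come from; the learnable-α version used in the paper needs an additional bisection-style solver that is omitted here.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of the score vector z onto the probability
    simplex.  Equivalent to entmax with alpha = 2, so low-scoring entries receive
    exactly zero probability, unlike softmax."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum          # which sorted entries stay non-zero
    k_z = k[support][-1]                          # size of the support
    tau = (cumsum[support][-1] - 1) / k_z         # threshold subtracted from the scores
    return np.maximum(z - tau, 0.0)

scores = np.array([1.0, 0.8, 0.1, -1.0])
print(sparsemax(scores))    # [0.6 0.4 0.  0. ] -- exact zeros for the weak candidates
```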
We determine how they actually perform with real-world images and neural network models, compare those results to the reported performance and power consumption values and evaluate them against an Intel CPU that is used in some embedded applications.", "target": ["機械学習の演算速度を向上させるためのプロセッサについて(CPU/GPU/ASIC/FPGAなど)、公開されているもののスペックがまとめられたサーベイ。デバイス種別・パフォーマンス・消費電力・演算精度・用途(学習or推論)がまとめられたFigure 2は必見。"]} {"source": "Describing images with text is a fundamental problem in vision-language research. Current studies in this domain mostly focus on single image captioning. However, in various real applications (e.g., image editing, difference interpretation, and retrieval), generating relational captions for two images, can also be very useful. This important problem has not been explored mostly due to lack of datasets and effective models. To push forward the research in this direction, we first introduce a new language-guided image editing dataset that contains a large number of real image pairs with corresponding editing instructions. We then propose a new relational speaker model based on an encoder-decoder architecture with static relational attention and sequential multi-head attention. We also extend the model with dynamic relational attention, which calculates visual alignment while decoding. Our models are evaluated on our newly collected and two public datasets consisting of image pairs annotated with relationship sentences. Experimental results, based on both automatic and human evaluation, demonstrate that our model outperforms all baselines and existing methods on all the datasets.", "target": ["2画像間の関係記述を生成する研究。自然言語による画像編集(背景を塗りつぶして欲しいetc)を目指している。Redditの画像編集掲示板などからデータを収集し、変換前後の画像特徴を結合したベクトルにAttentionをはるモデルをベースに、計4つの応用モデルを提案している。"]} {"source": "Users of machine translation systems may desire to obtain multiple candidates translated in different ways. In this work, we attempt to obtain diverse translations by using sentence codes to condition the sentence generation. We describe two methods to extract the codes, either with or without the help of syntax information. For diverse generation, we sample multiple candidates, each of which conditioned on a unique code. Experiments show that the sampled translations have much higher diversity scores when using reasonable sentence codes, where the translation quality is still on par with the baselines even under strong constraint imposed by the codes. In qualitative analysis, we show that our method is able to generate paraphrase translations with drastically different structures. The proposed approach can be easily adopted to existing translation systems as no modification to the model is required.", "target": ["機械翻訳において単一ではなく複数の、かつ「使える」翻訳を提示することを目指した研究。単純にDecoderのサンプリングで対処するのでなく(こうすると翻訳品質が落ちがち)、潜在表現で条件付けして生成を行う。この表現としてBERTベース+TreeLSTMで学習した構文情報を利用。"]} {"source": "Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. 
We find that: (i) On average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.", "target": ["AutoMLアルゴリズムとrandom searchを比較した研究。学習したPolicyとrandom samplingとで条件を揃えて比較(randomは複数シードを取り、最終的なモデルは同epoch数学習)。結果randomを大きく超えるものはなかった。また、Weight Shareをすると探索結果が悪くなるという重要な示唆。"]} {"source": "We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based methods which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. Our code and datasets are available at this https URL or this https URL.", "target": ["CAMのようにchannel-wiseの重要度を示すAttentionモジュールと、AdaINとLayerNormを組み合わせたAdaLINを用いたスタイル変換U-GAT-ITを提案。各チャネルのAttention weightはAuxiliary Classifierで計算させる。"]} {"source": "Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pre-trained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pre-trained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR2, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pre-training strategies significantly contribute to our strong results; and also present several attention visualizations for the different encoders.
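The U-GAT-IT entry above blends instance and layer normalization with a learned ratio (AdaLIN). A minimal PyTorch sketch is below, assuming the ratio rho is a per-channel parameter clipped to [0, 1] and that gamma and beta are supplied externally, e.g. from the style/attention branch.

```python
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    """Adaptive Layer-Instance Normalization: a learned per-channel blend of
    instance-norm and layer-norm statistics, scaled/shifted by external gamma, beta."""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        # rho close to 1 -> mostly instance norm; close to 0 -> mostly layer norm.
        self.rho = nn.Parameter(torch.full((1, num_features, 1, 1), 0.9))

    def forward(self, x, gamma, beta):
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        rho = self.rho.clamp(0.0, 1.0)
        out = rho * x_in + (1 - rho) * x_ln
        return out * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1)

# Toy usage: 2 images, 8 channels, 16x16 feature map.
x = torch.randn(2, 8, 16, 16)
gamma, beta = torch.ones(8), torch.zeros(8)
print(AdaLIN(8)(x, gamma, beta).shape)   # torch.Size([2, 8, 16, 16])
```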
Code and pre-trained models publicly available at: this https URL", "target": ["VQAのような、画像+言語のタスクでTransformerを適用した研究。画像は物体領域の位置ベクトルを使ってSelf-Attention(物体間の関係を学習)、言語は通常通りSelf-Attention、最後にCross(言語to画像、画像to言語のAttention計算)をした後にSelf-Attentionをとって出力を行う。事前学習を通じSOTAを達成"]} {"source": "We introduce a method to provide vectorial representations of visual classification tasks which can be used to reason about the nature of those tasks and their relations. Given a dataset with ground-truth labels and a loss function defined over those labels, we process images through a \"probe network\" and compute an embedding based on estimates of the Fisher information matrix associated with the probe network parameters. This provides a fixed-dimensional embedding of the task that is independent of details such as the number of classes and does not require any understanding of the class label semantics. We demonstrate that this embedding is capable of predicting task similarities that match our intuition about semantic and taxonomic relations between different visual tasks (e.g., tasks based on classifying different types of plants are similar) We also demonstrate the practical value of this framework for the meta-task of selecting a pre-trained feature extractor for a new task. We present a simple meta-learning framework for learning a metric on embeddings that is capable of predicting which feature extractors will perform well. Selecting a feature extractor with task embedding obtains a performance close to the best available feature extractor, while costing substantially less than exhaustively training and evaluating on all available feature extractors.", "target": ["事前学習済みモデルを使用してタスクの埋め込み表現を作る研究。これによりタスク間の類似度の比較や、適切な事前学習済みモデルの選択ができるようになる。事前学習済みモデルの(固定された)特徴が、タスクでどの程度重視されるかをノイズ挿入前後の分布差異から計測する"]} {"source": "Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate. While a number of works attempt to mitigate this effect through gated recurrent units, well-chosen parametric constraints, and skip-connections, we develop a novel perspective that seeks to evolve the hidden state on the equilibrium manifold of an ordinary differential equation (ODE). We propose a family of novel RNNs, namely {\\em Equilibriated Recurrent Neural Networks} (ERNNs) that overcome the gradient decay or explosion effect and lead to recurrent models that evolve on the equilibrium manifold. We show that equilibrium points are stable, leading to fast convergence of the discretized ODE to fixed points. Furthermore, ERNNs account for long-term dependencies, and can efficiently recall informative aspects of data from the distant past. We show that ERNNs achieve state-of-the-art accuracy on many challenging data sets with 3-10x speedups, 1.5-3x model size reduction, and with similar prediction cost relative to vanilla RNNs.", "target": ["RNNの隠れ層を常微分方程式で推定する場合、常微分方程式の解は均衡点に収束する。均衡点では勾配の計算が可能であり、何度計算しても変わらない=勾配の爆発や消失が発生しない、とした研究。隠れ層の推定方式を変えることで、RNNの根本的な問題を解決している。"]} {"source": "Current state-of-the-art methods for image segmentation form a dense image representation where the color, shape and texture information are all processed together inside a deep CNN. This however may not be ideal as they contain very different type of information relevant for recognition. 
Here, we propose a new two-stream CNN architecture for semantic segmentation that explicitly wires shape information as a separate processing branch, i.e. shape stream, that processes information in parallel to the classical stream. Key to this architecture is a new type of gates that connect the intermediate layers of the two streams. Specifically, we use the higher-level activations in the classical stream to gate the lower-level activations in the shape stream, effectively removing noise and helping the shape stream to only focus on processing the relevant boundary-related information. This enables us to use a very shallow architecture for the shape stream that operates on the image-level resolution. Our experiments show that this leads to a highly effective architecture that produces sharper predictions around object boundaries and significantly boosts performance on thinner and smaller objects. Our method achieves state-of-the-art performance on the Cityscapes benchmark, in terms of both mask (mIoU) and boundary (F-score) quality, improving by 2% and 4% over strong baselines.", "target": ["セグメンテーションのタスクで、色やテクスチャーではなく形状に注目する機構を導入した研究。具体的には、通常の画像を処理するネットワーク(Regular Stream)と並行してAttentionからShape検知を行うShape Streamを導入する。Shapeの学習を行うため、Boundaryに関するlossを組み込んでいる。"]} {"source": "Metric learning aims to measure the similarity among samples while using an optimal distance metric for learning tasks. Metric learning methods, which generally use a linear projection, are limited in solving real-world problems demonstrating non-linear characteristics. Kernel approaches are utilized in metric learning to address this problem. In recent years, deep metric learning, which provides a better solution for nonlinear data through activation functions, has attracted researchers' attention in many different areas. This article aims to reveal the importance of deep metric learning and the problems dealt with in this field in the light of recent studies. As far as the research conducted in this field is concerned, most existing studies that are inspired by Siamese and Triplet networks are commonly used to correlate among samples while using shared weights in deep metric learning. The success of these networks is based on their capacity to understand the similarity relationship among samples. Moreover, sampling strategy, appropriate distance metric, and the structure of the network are the challenging factors for researchers to improve the performance of the network model. This article is considered to be important, as it is the first comprehensive study in which these factors are systematically analyzed and evaluated as a whole and supported by comparing the quantitative results of the methods.", "target": ["深層学習によるMetric Learningのサーベイ。Metric Learningの重要な要素として学習データのサンプリング、ネットワーク構成、lossの3点を挙げており、特にサンプリングは(注目されがちな)他2つと同等に重要であるとしている。"]} {"source": "Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs).
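The two-stream segmentation entry above gates the low-level shape stream with higher-level activations from the classical stream. The sketch below illustrates one plausible form of such a gate; the 1x1 convolutions, bilinear upsampling, and sigmoid gating are common choices and not necessarily the paper's exact wiring.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeGate(nn.Module):
    """Use higher-level classical-stream features to suppress irrelevant activations
    in a low-level, boundary-oriented shape stream."""

    def __init__(self, shape_channels, classical_channels):
        super().__init__()
        self.project = nn.Conv2d(classical_channels, shape_channels, kernel_size=1)
        self.gate = nn.Conv2d(2 * shape_channels, 1, kernel_size=1)

    def forward(self, shape_feat, classical_feat):
        # Bring the coarse classical-stream features to the shape stream's resolution.
        ctx = F.interpolate(self.project(classical_feat), size=shape_feat.shape[2:],
                            mode="bilinear", align_corners=False)
        alpha = torch.sigmoid(self.gate(torch.cat([shape_feat, ctx], dim=1)))
        return shape_feat * alpha       # noisy, non-boundary responses are damped

shape_feat = torch.randn(1, 16, 128, 128)      # high-resolution shape stream
classical_feat = torch.randn(1, 64, 32, 32)    # coarse classical stream
print(ShapeGate(16, 64)(shape_feat, classical_feat).shape)  # torch.Size([1, 16, 128, 128])
```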
The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naive strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.", "target": ["GNNの事前学習に関する研究。NodeレベルとGraphレベル双方の事前学習を行なっており、前者はSkipgram的に周辺ノードから周辺構造を予測/Maskingしたノードの予測、後者は事前にグラフ構造から予測を行うタスクを解かせている。併用で精度が上がるが、Graph単独だと下がる。"]} {"source": "Causal inference requires theory and prior knowledge to structure analyses, and is not usually thought of as an arena for the application of prediction modelling. However, contemporary causal inference methods, premised on counterfactual or potential outcomes approaches, often include processing steps before the final estimation step. The purposes of this paper are: (i) to overview the recent emergence of prediction underpinning steps in contemporary causal inference methods as a useful perspective on contemporary causal inference methods, and (ii) explore the role of machine learning (as one approach to ‘best prediction’) in causal inference. Causal inference methods covered include propensity scores, inverse probability of treatment weights (IPTWs), G computation and targeted maximum likelihood estimation (TMLE). Machine learning has been used more for propensity scores and TMLE, and there is potential for increased use in G computation and estimation of IPTWs.", "target": ["予測に基づく因果推論の手法と、「予測」を行うための機械学習手法についてまとめられた記事。治療への割り当て確率を計算する傾向スコア(割り当て確率が等しい人をペアにする)、傾向スコアの逆数を使って比較対象を揃えるIPTWs、治療の平均効果を計算するG-computationなどが紹介されている。"]} {"source": "We introduce a temperature into the exponential function and replace the softmax output layer of neural nets by a high temperature generalization. Similarly, the logarithm in the log loss we use for training is replaced by a low temperature logarithm. By tuning the two temperatures we create loss functions that are non-convex already in the single layer case. When replacing the last layer of the neural nets by our bi-temperature generalization of logistic loss, the training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large data sets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method using the Tsallis divergence.", "target": ["二値分類でよく使われるLogistic Lossを改善した研究。lossはexponentialな形であるため、外れ値に極端なペナルティがかかり決定境界も付近のノイズに引きずられてしまう。この影響を緩和するため、lossの急激な上昇を抑え(t1)確率の割り当てを緩やかにしている(t2)。"]} {"source": "Transformer networks have lead to important progress in language modeling and machine translation. These models include two consecutive modules, a feed-forward layer and a self-attention layer. The latter allows the network to capture long term dependencies and are often regarded as the key ingredient in the success of Transformers. Building upon this intuition, we propose a new model that solely consists of attention layers. 
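The tempered-loss entry above swaps the log and exp of the usual logistic loss for temperature-generalized versions. The two primitives are sketched below with their standard definitions; the full bi-tempered loss additionally requires an iterative normalization of the tempered softmax, which is omitted here.

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm; recovers log(x) as t -> 1.
    With t < 1 it is bounded below by -1/(1-t), which caps the loss on outliers."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential (inverse of log_t); recovers exp(x) as t -> 1.
    With t > 1 its tail is heavier than exp, softening the probability assignment."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

# Round trip and boundedness check.
x = np.array([0.1, 0.5, 2.0])
assert np.allclose(exp_t(log_t(x, 0.8), 0.8), x)
print(log_t(np.array([1e-6]), 0.8))   # about -4.7, bounded, unlike log(1e-6) ~ -13.8
```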
More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role as the feed-forward layer. Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer. Our evaluation shows the benefits brought by our model on standard character and word level language modeling benchmarks.", "target": ["Transformerの中にある全結合層をなくして、Attentionレイヤのみにする構成の提案。全結合層のバイアスを落として活性化関数をReLUからSoftmaxにすると通常のAttentionを掛けた伝搬と同様と見なせるので、Attentionレイヤ一本に統合しその代わり潜在表現の幅を増やしている。"]} {"source": "We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to extend significantly the maximum context size used in Transformer, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character level language modeling, where we achieve state-of-the-art performances on text8 and enwiki8 by using a maximum context of 8k characters.", "target": ["Transformerが見ているAttentionの範囲は実際は狭いことが多いため、範囲を限定することで計算を簡略化する研究。どれくらいの範囲を取るべきかは事前に決めることが困難なため、参照範囲を学習できるようにする。具体的には、Attentionにマスクを掛ける形で実装する。"]} {"source": "Breakthroughs in machine learning are rapidly changing science and society, yet our fundamental understanding of this technology has lagged far behind. Indeed, one of the central tenets of the field, the bias-variance trade-off, appears to be at odds with the observed behavior of methods used in the modern machine learning practice. The bias-variance trade-off implies that a model should balance under-fitting and over-fitting: rich enough to express underlying structure in data, simple enough to avoid fitting spurious patterns. However, in the modern practice, very rich models such as neural networks are trained to exactly fit (i.e., interpolate) the data. Classically, such models would be considered over-fit, and yet they often obtain high accuracy on test data. This apparent contradiction has raised questions about the mathematical foundations of machine learning and their relevance to practitioners. In this paper, we reconcile the classical understanding and the modern practice within a unified performance curve. This \"double descent\" curve subsumes the textbook U-shaped bias-variance trade-off curve by showing how increasing model capacity beyond the point of interpolation results in improved performance. We provide evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets, and we posit a mechanism for its emergence. This connection between the performance and the structure of machine learning models delineates the limits of classical analyses, and has implications for both the theory and practice of machine learning.", "target": ["過学習を超えた先に、汎化性能が逆に上がっていくエリアが存在するのではという研究。学習データへ完全にFitしていても汎化性能を持つという経験則を基に提唱されている。モデルが十分な複雑性(パラメーター数など)を持つ場合、学習データ間の補完(データ構造の理解?)を学習する領域に進むという。"]} {"source": "Building an open-domain conversational agent is a challenging problem. Current evaluation methods, mostly post-hoc judgments of static conversation, do not capture conversation quality in a realistic interactive context. In this paper, we investigate interactive human evaluation and provide evidence for its necessity; we then introduce a novel, model-agnostic, and dataset-agnostic method to approximate it. In particular, we propose a self-play scenario where the dialog system talks to itself and we calculate a combination of proxies such as sentiment and semantic coherence on the conversation trajectory. 
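The adaptive-span entry above limits how far back each attention head can look by multiplying attention weights with a soft, piecewise-linear window whose length is learned. A small NumPy sketch of that masking is below; the ramp width and the renormalization follow the description in the paper, while the single-head, single-query setting is a simplification.

```python
import numpy as np

def span_mask(distances, z, ramp=32):
    """Soft attention-span mask: fully on for distances <= z, linearly decaying to
    zero over the next `ramp` positions.  Because it is piecewise linear in z,
    the span z can be learned by gradient descent."""
    return np.clip((ramp + z - distances) / ramp, 0.0, 1.0)

def masked_attention(scores, z, ramp=32):
    """Apply the span mask to softmax attention weights and renormalize."""
    distances = np.arange(scores.shape[-1])[::-1]   # how far back each key position is
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights * span_mask(distances, z, ramp)
    return weights / weights.sum(axis=-1, keepdims=True)

scores = np.random.randn(512)        # attention logits over the last 512 positions
attn = masked_attention(scores, z=100.0)
print((attn > 0).sum())              # only roughly the last z + ramp positions stay active
```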
We show that this metric is capable of capturing the human-rated quality of a dialog model better than any automated metric known to-date, achieving a significant Pearson correlation (r>.7, p<.05). To investigate the strengths of this novel metric and interactive evaluation in comparison to state-of-the-art metrics and human evaluation of static conversations, we perform extended experiments with a set of models, including several that make novel improvements to recent hierarchical dialog generation architectures through sentiment and semantic knowledge distillation on the utterance level. Finally, we open-source the interactive evaluation platform we built and the dataset we collected to allow researchers to efficiently deploy and evaluate dialog models.", "target": ["対話システムで、発話に対する人間の評価を予測する関数を作り、それに基づき自己対話(Self-Play)で学習したという研究。評価関数は感情、文関係、質問らしさ(?を含むか)を組み合わせた線形関数。再現可能なよう学習済みモデル+ルールで構成され実装も公開されている。"]} {"source": "Most existing adversarial defenses only measure robustness to L_p adversarial attacks. Not only are adversaries unlikely to exclusively create small L_p perturbations, adversaries are unlikely to remain fixed. Adversaries adapt and evolve their attacks; hence adversarial defenses must be robust to a broad range of unforeseen attacks. We address this discrepancy between research and reality by proposing a new evaluation framework called ImageNet-UA. Our framework enables the research community to test ImageNet model robustness against attacks not encountered during training. To create ImageNet-UA's diverse attack suite, we introduce a total of four novel adversarial attacks. We also demonstrate that, in comparison to ImageNet-UA, prevailing L_inf robustness assessments give a narrow account of model robustness. By evaluating current defenses with ImageNet-UA, we find they provide little robustness to unforeseen attacks. We hope the greater variety and realism of ImageNet-UA enables development of more robust defenses which can generalize beyond attacks seen during training.", "target": ["機械学習モデルのAdversarial耐性を測るスコア(UAR)の提案。現在の防衛方法は未知のAdversarialに対して脆弱なことを実験から指摘。そのため、複数のAdversarialに対してどの程度防衛可能かを、個別に学習した(各Adversarialの防衛に特化した)モデルと比較して性能を検証する。"]} {"source": "This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. This system improves upon our WMT'18 submission by 4.5 BLEU points.", "target": ["Facebook(FAIR)が機械翻訳のタスクWMT19で使用した手法。FAIRSEQで実装したTransformerがベースで、データのクリーニング(長すぎる文、文字列長の比率が大きいペアを除外など)やアンサンブルによるBackTranslation、全結合層のサイズ拡大などの工夫で人の翻訳を超える評価"]} {"source": "There has been considerable growth and interest in industrial applications of machine learning (ML) in recent years. ML engineers, as a consequence, are in high demand across the industry, yet improving the efficiency of ML engineers remains a fundamental challenge. 
Automated machine learning (AutoML) has emerged as a way to save time and effort on repetitive tasks in ML pipelines, such as data pre-processing, feature engineering, model selection, hyperparameter optimization, and prediction result analysis. In this paper, we investigate the current state of AutoML tools aiming to automate these tasks. We conduct various evaluations of the tools on many datasets, in different data segments, to examine their performance, and compare their advantages and disadvantages on different test cases.", "target": ["Auto-MLを実現する各種ツールの機能と、様々なデータセットに対するパフォーマンスを調査したサーベイ。ツールによって得意不得意があり(二値分類に強いけどマルチラベルに弱いなど)、また実行時間にも差がある。商用のツールは前処理やデータ構造の分析もサポートしていることが多いよう。"]} {"source": "The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Here, we study its mechanism in details. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate (i.e., it has problematically large variance in the early stage), suggest warmup works as a variance reduction technique, and provide both empirical and theoretical evidence to verify our hypothesis. We further propose RAdam, a new variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Extensive experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the effectiveness and robustness of our proposed method. All implementations are available at: this https URL.", "target": ["経験的に良いと知られていたWarm-up(学習初期で低い学習率を使用する)を自動的に行う手法。Adamのような学習率を自動調整する手法は、学習初期(=まだサンプルがない状態)の調整で誤った方向に誘導されがちなことを指摘。そこで方向(勾配)の分散を抑える調整を導入している"]} {"source": "This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models---a common type of machine-learning model. Because such models are sometimes trained on sensitive data (e.g., the text of users' private messages), this methodology can benefit privacy by allowing deep-learning practitioners to select means of training that minimize such memorization. In experiments, we show that unintended memorization is a persistent, hard-to-avoid issue that can have serious consequences. Specifically, for models trained without consideration of memorization, we describe new, efficient procedures that can extract unique, secret sequences, such as credit card numbers. We show that our testing strategy is a practical and easy-to-use first line of defense, e.g., by describing its application to quantitatively limit data exposure in Google's Smart Compose, a commercial text-completion neural network trained on millions of users' email messages.", "target": ["学習済みモデルから学習データの情報を抜く研究。学習データにあったものは通常よりperplexityが低くなることから、低下の度合いをMemorizationとしている(なおOverfitとMemorizationは異なるとのこと)。これを利用し、学習データ中にあった系列をBeam Searchで推定している"]} {"source": "Consistent and reproducible evaluation of Deep Reinforcement Learning (DRL) is not straightforward. In the Arcade Learning Environment (ALE), small changes in environment parameters such as stochasticity or the maximum allowed play time can lead to very different performance. In this work, we discuss the difficulties of comparing different agents trained on ALE. 
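The RAdam entry above stabilizes early training by rectifying Adam's adaptive learning rate while its variance estimate is still unreliable. A condensed single-parameter sketch of the update rule is below; treat it as an illustration of the published formula rather than a drop-in optimizer.

```python
import numpy as np

def radam_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam update for a single parameter array.
    `state` holds the step count and the exponential moving averages m and v."""
    state["t"] += 1
    t = state["t"]
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** t)

    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance of the adaptive learning rate is tractable: use a rectified Adam step.
        v_hat = state["v"] / (1 - beta2 ** t)
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                      / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        return theta - lr * r_t * m_hat / (np.sqrt(v_hat) + eps)
    # Early steps: fall back to plain momentum SGD instead of trusting a noisy v.
    return theta - lr * m_hat

theta = np.zeros(3)
state = {"t": 0, "m": np.zeros(3), "v": np.zeros(3)}
for _ in range(5):
    theta = radam_step(theta, grad=np.array([0.1, -0.2, 0.3]), state=state)
print(theta)
```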
In order to take a step further towards reproducible and comparable DRL, we introduce SABER, a Standardized Atari BEnchmark for general Reinforcement learning algorithms. Our methodology extends previous recommendations and contains a complete set of environment parameters as well as train and test procedures. We then use SABER to evaluate the current state of the art, Rainbow. Furthermore, we introduce a human world records baseline, and argue that previous claims of expert or superhuman performance of DRL might not be accurate. Finally, we propose Rainbow-IQN by extending Rainbow with Implicit Quantile Networks (IQN) leading to new state-of-the-art performance. Source code is available for reproducibility.", "target": ["強化学習で使われるAtariのゲームを条件を揃えて使おうという提言。条件を揃えた環境をSABERと名付け、ベースラインとなる人間のスコアを素人からワールドスコアに更新。この結果、人間のパフォーマンスにはまだ遠いことを確認。またAtariのゲームに評価を難しくするバグが存在することも報告している"]} {"source": "This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open source this http URL, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is Python, and easy to use within existing projects. We include examples with OpenAI Baselines, Dopamine as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers.", "target": ["強化学習アルゴリズムをテストするためのベンチマークを作成する試み(画像におけるMNIST的な)。Agentが情報を記憶できる長さを問うmemory length、負の報酬を超えて探索できるかを問うdeep seaといった環境が提供されている。今後も実験環境を拡充して行く予定とのこと。"]} {"source": "While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.", "target": ["自然言語処理の論文から、タスク・データセット・評価指標(メトリクス)の情報(TDM)、そしてスコア(S)を抽出する研究。論文のTDMをまず予測し、該当するスコアを抜いてくる形で抽出を行なっている。データセットの作成、またPDFからの情報抽出など地道なところから実装されている。"]} {"source": "Materials discovery is decisive for tackling urgent challenges related to energy, the environment, health care and many others. In chemistry, conventional methodologies for innovation usually rely on expensive and incremental strategies to optimize properties from molecular structures. On the other hand, inverse approaches map properties to structures, thus expediting the design of novel useful compounds. 
In this chapter, we examine the way in which current deep generative models are addressing the inverse chemical discovery paradigm. We begin by revisiting early inverse design algorithms. Then, we introduce generative models for molecular systems and categorize them according to their architecture and molecular representation. Using this classification, we review the evolution and performance of important molecular generation schemes reported in the literature. Finally, we conclude highlighting the prospects and challenges of generative models as cutting edge tools in materials discovery.", "target": ["目的とする物質の特性から分子構造を生成する研究のサーベイ。古典的なモンテカルロ/遺伝的アルゴリズムから、GANや強化学習を使用した手法までを解説してくれている。Figure5はとても分かりやすくて必見。"]} {"source": "Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced `el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ~2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. LVIS is available at this http URL.", "target": ["16万4千点の画像に対して1200カテゴリ・200万以上のセグメント情報を付与したデータセット。読み方は「エルビス」。まれにしか出現しないオブジェクトもセグメント情報を持ち、ロングテールなデータセットとなっている。このロングテールという特徴が、教師データの少ないクラスで認識精度に欠ける深層学習手法に、新たな挑戦的課題を与えることを期待している。"]} {"source": "Enabling machines to respond appropriately to natural language commands could greatly expand the number of people to whom they could be of service. Recently, advances in neural network-trained word embeddings have empowered non-embodied text-processing algorithms, and suggest they could be of similar utility for embodied machines. Here we introduce a method that does so by training robots to act similarly to semantically-similar word2vec encoded commands. We show that this enables them to act appropriately, after training, to previously-unheard commands. Finally, we show that inducing such an alignment between motoric and linguistic similarities can be facilitated or hindered by the mechanical structure of the robot. This points to future, large scale methods that find and exploit relationships between action, language, and robot structure.", "target": ["ロボットを自然言語で動かす研究。似た単語には似た動作を行わせることを目的としている。モデルはRNNを使用しており、指示(単語ベクトルの系列)を入力した後ノードの接続を切って動作環境に配置し、今度はセンサーデータを入力とし行動を出力する(指示とセンサーの入力ノードは別れている)。"]} {"source": "Manually labeling objects by tracing their boundaries is a laborious process. In Polygon-RNN++ the authors proposed Polygon-RNN that produces polygonal annotations in a recurrent manner using a CNN-RNN architecture, allowing interactive correction via humans-in-the-loop. We propose a new framework that alleviates the sequential nature of Polygon-RNN, by predicting all vertices simultaneously using a Graph Convolutional Network (GCN). Our model is trained end-to-end. It supports object annotation by either polygons or splines, facilitating labeling efficiency for both line-based and curved objects. 
We show that Curve-GCN outperforms all existing approaches in automatic mode, including the powerful PSP-DeepLab and is significantly more efficient in interactive mode than Polygon-RNN++. Our model runs at 29.3ms in automatic, and 2.6ms in interactive mode, making it 10x and 100x faster than Polygon-RNN++.", "target": ["画像のセグメンテーションを効率化する研究。既存の研究は物体を囲む頂点を連続的に予測するが(CNNで特徴=>RNNで予測)、本研究では最初に物体を円形に囲む頂点群を作成しその位置をGraph Convolutionで一気に最適化する。頂点一致とマスク一致のlossで学習を行う。"]} {"source": "There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and comparison more difficult. This work provides simple and effective baseline methods. They are helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be available at this https URL.", "target": ["近年公開されてきたHuman Pose Estimationのアルゴリズムよりシンプルなアルゴリズムの提案。具体的にはResNetをbackboneとし、出力層に数層のdeconvolutional layersを加えただけのものになっている。またCOCO(mAP)などの評価指標でSOTAを達成。"]} {"source": "Presentation bias is one of the key challenges when learning from implicit feedback in search engines, as it confounds the relevance signal. While it was recently shown how counterfactual learning-to-rank (LTR) approaches \\cite{Joachims/etal/17a} can provably overcome presentation bias when observation propensities are known, it remains to show how to effectively estimate these propensities. In this paper, we propose the first method for producing consistent propensity estimates without manual relevance judgments, disruptive interventions, or restrictive relevance modeling assumptions. First, we show how to harvest a specific type of intervention data from historic feedback logs of multiple different ranking functions, and show that this data is sufficient for consistent propensity estimation in the position-based model. Second, we propose a new extremum estimator that makes effective use of this data. In an empirical evaluation, we find that the new estimator provides superior propensity estimates in two real-world systems -- Arxiv Full-text Search and Google Drive Search. Beyond these two points, we find that the method is robust to a wide range of settings in simulation studies.", "target": ["ランキングの学習では入手しやすいクリックのデータを使用したい。ただクリックの発生は表示順の影響(Examine)を考慮する必要がある。Examineは位置を固定的に入れ替えることで推定可能だが(何度もやれば純粋に表示順の差異になる)、コストが高いためA/Bテストで使う複数Rankerから推定する手法を提案"]} {"source": "Agricultural applications such as yield prediction, precision agriculture and automated harvesting need systems able to infer the crop state from low-cost sensing devices. Proximal sensing using affordable cameras combined with computer vision has seen a promising alternative, strengthened after the advent of convolutional neural networks (CNNs) as an alternative for challenging pattern recognition problems in natural images. Considering fruit growing monitoring and automation, a fundamental problem is the detection, segmentation and counting of individual fruits in orchards. Here we show that for wine grapes, a crop presenting large variability in shape, color, size and compactness, grape clusters can be successfully detected, segmented and tracked using state-of-the-art CNNs. 
In a test set containing 408 grape clusters from images taken on a trellis-system based vineyard, we have reached an F1-score up to 0.91 for instance segmentation, a fine separation of each cluster from other structures in the image that allows a more accurate assessment of fruit size and shape. We have also shown how clusters can be identified and tracked along video sequences recording orchard rows. We also present a public dataset containing grape clusters properly annotated in 300 images and a novel annotation methodology for segmentation of complex objects in natural images. The presented pipeline for annotation, training, evaluation and tracking of agricultural patterns in images can be replicated for different crops and production systems. It can be employed in the development of sensing components for several agricultural and environmental applications.", "target": ["物体検出の技術を利用して、ワイン畑のぶどうをカウントする研究。単純な物体検出の適用に留まらず、アノテーション技法の開発(ぶどう/背景を線で示すと領域提案を行う)、3D情報を利用したダブルカウントの防止(別角度から見た同じぶどうを識別)など、実用が意識された内容。"]} {"source": "As data becomes the fuel driving technological and economic growth, a fundamental challenge is how to quantify the value of data in algorithmic predictions and decisions. For example, in healthcare and consumer markets, it has been suggested that individuals should be compensated for the data that they generate, but it is not clear what is an equitable valuation for individual data. In this work, we develop a principled framework to address data valuation in the context of supervised machine learning. Given a learning algorithm trained on n data points to produce a predictor, we propose data Shapley as a metric to quantify the value of each training datum to the predictor performance. Data Shapley value uniquely satisfies several natural properties of equitable data valuation. We develop Monte Carlo and gradient-based methods to efficiently estimate data Shapley values in practical settings where complex learning algorithms, including neural networks, are trained on large datasets. In addition to being equitable, extensive experiments across biomedical, image and synthetic data demonstrate that data Shapley has several other benefits: 1) it is more powerful than the popular leave-one-out or leverage score in providing insight on what data is more valuable for a given learning task; 2) low Shapley value data effectively capture outliers and corruptions; 3) high Shapley value data inform what type of new data to acquire to improve the predictor.", "target": ["予測精度に対する個々のデータ点の価値を表す評価指標(Data Shapley)を提案した研究.この評価指標は,厳密に求めることが計算量的に困難であるため,モンテカルロ法や勾配に基づいた方法で近似値を求めている."]} {"source": "We propose an intriguingly simple method for the construction of adversarial images in the black-box setting. In contrast to the white-box scenario, constructing black-box adversarial images has the additional constraint on query budget, and efficient attacks remain an open problem to date. With only the mild assumption of continuous-valued confidence scores, our highly query-efficient algorithm utilizes the following simple iterative principle: we randomly sample a vector from a predefined orthonormal basis and either add or subtract it to the target image. Despite its simplicity, the proposed method can be used for both untargeted and targeted attacks -- resulting in previously unprecedented query efficiency in both settings. We demonstrate the efficacy and efficiency of our algorithm on several real world settings including the Google Cloud Vision API.
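The Data Shapley entry above values each training point by its average marginal contribution to predictor performance over random orderings. The sketch below estimates these values by plain Monte Carlo sampling of permutations; the 1-NN scorer, the number of permutations, and the zero score for the empty set are illustrative choices, and the paper's truncation trick is left out.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def performance(idx, X_train, y_train, X_val, y_val):
    """Validation accuracy of a 1-NN model trained on the subset `idx` (illustrative scorer)."""
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train[idx], y_train[idx])
    return clf.score(X_val, y_val)

def monte_carlo_shapley(X_train, y_train, X_val, y_val, n_perms=50, rng=None):
    """Estimate each point's Data Shapley value as its average marginal gain
    when it is added to a random prefix of the training set."""
    rng = rng or np.random.default_rng(0)
    n = len(X_train)
    values = np.zeros(n)
    for _ in range(n_perms):
        perm = rng.permutation(n)
        prev = 0.0                      # score of the empty set, simplified to 0 here
        for i, idx in enumerate(perm):
            curr = performance(perm[: i + 1], X_train, y_train, X_val, y_val)
            values[idx] += curr - prev
            prev = curr
    return values / n_perms

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2)); y = (X[:, 0] > 0).astype(int)
y[:3] = 1 - y[:3]                       # corrupt a few training labels
vals = monte_carlo_shapley(X[:20], y[:20], X[20:], y[20:])
print(np.argsort(vals)[:3])             # corrupted points tend to receive the lowest values
```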
We argue that our proposed algorithm should serve as a strong baseline for future black-box attacks, in particular because it is extremely fast and its implementation requires less than 20 lines of PyTorch code.", "target": ["訓練データやモデルパラメータを知らない状況(black-box setting)で,必要なクエリ数を抑えつつ敵対的画像を生成する手法(SimBA)を提案した研究."]} {"source": "Sorting an array is a fundamental routine in machine learning, one that is used to compute rank-based statistics, cumulative distribution functions (CDFs), quantiles, or to select closest neighbors and labels. The sorting function is however piece-wise constant (the sorting permutation of a vector does not change if the entries of that vector are infinitesimally perturbed) and therefore has no gradient information to back-propagate. We propose a framework to sort elements that is algorithmically differentiable. We leverage the fact that sorting can be seen as a particular instance of the optimal transport (OT) problem on \\mathbb{R}, from input values to a predefined array of sorted values (e.g. 1,2,\\dots,n if the input array has n elements). Building upon this link , we propose generalized CDFs and quantile operators by varying the size and weights of the target presorted array. Because this amounts to using the so-called Kantorovich formulation of OT, we call these quantities K-sorts, K-CDFs and K-quantiles. We recover differentiable algorithms by adding to the OT problem an entropic regularization, and approximate it using a few Sinkhorn iterations. We call these operators S-sorts, S-CDFs and S-quantiles, and use them in various learning settings: we benchmark them against the recently proposed neuralsort [Grover et al. 2019], propose applications to quantile regression and introduce differentiable formulations of the top-k accuracy that deliver state-of-the art performance.", "target": ["ランキングや最近傍の選択といった、値のオーダー(ソート)に基づく処理を微分可能にしてEnd-to-Endで学習できるようにする研究。ソートを最適輸送問題(OT)と見なし(並んだ2値の差を最小化する)、これをOTへの収束が証明されているSinkhornアルゴリズムで解く。"]} {"source": "Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.", "target": ["構文解析結果を利用して関係認識のタスクを解く研究。係り受け関係を隣接行列にしてGCNを行うのが基本だが、係り受けの隣接行列をSelf-Attentionのレイヤに通して全結合のグラフを出力している。これにより接続がないところも含め、どの関係を重視するか学習させる。"]} {"source": "Over-dependence on domain ontology and lack of knowledge sharing across domains are two practical and yet less studied problems of dialogue state tracking. Existing approaches generally fall short in tracking unknown slot values during inference and often have difficulties in adapting to new domains. 
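The black-box attack entry above (SimBA) repeatedly picks a random orthonormal direction and adds or subtracts it whenever that lowers the model's confidence in the true class. Below is a small NumPy sketch in that spirit, using the pixel basis and a generic predict_proba callable as stand-ins for the paper's setup.

```python
import numpy as np

def simba_attack(x, true_label, predict_proba, eps=0.2, max_dirs=2000, rng=None):
    """Simple black-box attack: try +eps / -eps along random standard-basis directions,
    keeping any step that lowers the probability assigned to the true class."""
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    best_p = predict_proba(x_adv)[true_label]
    dims = rng.permutation(x.size)              # visit each coordinate at most once
    queries = 0
    for d in dims[:max_dirs]:
        for sign in (+1.0, -1.0):
            candidate = x_adv.copy()
            candidate.flat[d] = np.clip(candidate.flat[d] + sign * eps, 0.0, 1.0)
            p = predict_proba(candidate)[true_label]
            queries += 1
            if p < best_p:                      # keep the step, move to the next direction
                x_adv, best_p = candidate, p
                break
    return x_adv, best_p, queries

# Toy "model": a softmax linear classifier over a 4-pixel image, just to exercise the loop.
w = np.array([[1.0, -1.0, 0.5, -0.5], [-1.0, 1.0, -0.5, 0.5]])
predict_proba = lambda img: np.exp(w @ img.ravel()) / np.exp(w @ img.ravel()).sum()
x = np.array([0.9, 0.1, 0.8, 0.2])
x_adv, p, q = simba_attack(x, true_label=0, predict_proba=predict_proba)
print(p, q)   # probability of the true class after the attack, and queries spent
```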
In this paper, we propose a Transferable Dialogue State Generator (TRADE) that generates dialogue states from utterances using a copy mechanism, facilitating knowledge transfer when predicting (domain, slot, value) triplets not encountered during training. Our model is composed of an utterance encoder, a slot gate, and a state generator, which are shared across domains. Empirical results demonstrate that TRADE achieves state-of-the-art joint goal accuracy of 48.62% for the five domains of MultiWOZ, a human-human dialogue dataset. In addition, we show its transferring ability by simulating zero-shot and few-shot dialogue state tracking for unseen domains. TRADE achieves 60.58% joint goal accuracy in one of the zero-shot domains, and is able to adapt to few-shot cases without forgetting already trained domains.", "target": ["マルチドメインに対応するための対話システムの研究。発話のEncoder、Domain/Slotに該当するValueをコピーメカニズムで抽出するState Generatorが基本だが、複数あるDomain/Slotに対しそもそも予測するかフィルタするSlot Gateを組み込んでいる。この仕組みがFew-shotの学習でも有効なことを確認。"]} {"source": "Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.", "target": ["BERT(#959)の学習方法を細かく調査した研究。基本的に長い時間、大きなコーパス、大きなバッチで訓練するほどよくなる。segmentが次の文のものか当てさせるタスク(Next sentence prediction)をやめる、動的なマスクを行うのも有効。チューニングで、様々な亜種の結果を上回る"]} {"source": "We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models. We further demonstrate that this finding allows us to make meaningful estimates of the model uncertainty using conventional architectures, without modifications to the network or the training procedure. Our approach is thoroughly validated by measuring the quality of uncertainty in a series of empirical experiments on different tasks. It outperforms baselines with strong statistical significance, and displays competitive performance with recent Bayesian approaches.", "target": ["予測の不確実性を推定する手法の提案。既存研究ではDropoutを使った不確実推定があったが、Dropoutよりもよく使われているBatch Normalizationを使い推定する。正規化に用いる平均/分散を学習データからサンプリングすることでモデル出力の分布を得る。"]} {"source": "The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018 [2]. These computations have a surprisingly large carbon footprint [38]. Ironically, deep learning was inspired by the human brain, which is remarkably energy efficient. Moreover, the financial cost of the computations can make it difficult for academics, students, and researchers, in particular those from emerging economies, to engage in deep learning research. This position paper advocates a practical solution by making efficiency an evaluation criterion for research alongside accuracy and related measures. 
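Among the training changes examined in the RoBERTa entry above is dynamic masking: a fresh set of masked positions is sampled every time a sequence is fed to the model, instead of being fixed once at preprocessing time. A tiny sketch of that sampling step is below; the token IDs, the absence of special-token handling, and the 80/10/10 replacement split are simplifying assumptions.

```python
import numpy as np

def dynamic_mask(token_ids, mask_id, vocab_size, mask_prob=0.15, rng=None):
    """Draw a fresh MLM mask for one sequence.  Selected positions become the
    prediction targets; most are replaced by [MASK], some by a random token,
    and some are left unchanged (the usual 80/10/10 split)."""
    rng = rng or np.random.default_rng()
    token_ids = np.array(token_ids)
    targets = np.full_like(token_ids, -100)             # -100 = ignored by the loss
    selected = rng.random(token_ids.shape) < mask_prob
    targets[selected] = token_ids[selected]

    roll = rng.random(token_ids.shape)
    corrupted = token_ids.copy()
    corrupted[selected & (roll < 0.8)] = mask_id                      # 80%: [MASK]
    random_pos = selected & (roll >= 0.8) & (roll < 0.9)              # 10%: random token
    corrupted[random_pos] = rng.integers(0, vocab_size, random_pos.sum())
    return corrupted, targets                            # remaining 10%: kept unchanged

ids = [101, 2009, 2001, 1037, 2204, 2154, 102]
for epoch in range(2):                                   # a new mask every pass over the data
    print(dynamic_mask(ids, mask_id=103, vocab_size=30522, rng=np.random.default_rng(epoch)))
```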
In addition, we propose reporting the financial cost or \"price tag\" of developing, training, and running models to provide baselines for the investigation of increasingly efficient methods. Our goal is to make AI both greener and more inclusive---enabling any inspired undergraduate with a laptop to write high-quality research papers. Green AI is an emerging focus at the Allen Institute for AI.", "target": ["機械学習の研究で、精度だけでなく効率性もきちんと重視しようという意見論文。単純に演算コストの高いモデルの研究(RedAI)を非難しているわけではなく、効率性を考えた研究(GreenAI)より重視されすぎていることを指摘している。Greenの評価指標として、浮動小数点の演算回数(FPO)の使用を提案している"]} {"source": "Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world. In spite of such advances, a higher level understanding of vision and imagery does not arise from exhaustively modeling an object, but instead identifying higher-level attributes that best summarize the aspects of an object. In this work we attempt to model the drawing process of fonts by building sequential generative models of vector graphics. This model has the benefit of providing a scale-invariant representation for imagery whose latent representation may be systematically manipulated and exploited to perform style propagation. We demonstrate these results on a large dataset of fonts and highlight how such a model captures the statistical dependencies and richness of this dataset. We envision that our model can find use as a tool for graphic designers to facilitate font design.", "target": ["軌跡で描画するベクター画像の生成を行った研究。画像のスタイルをVAEで抽出し、スタイルに基づきMDN(Mixture Density Network)で描画を行っている。潜在表現の分析も行っており、似たフォントスタイルが潜在表現空間上でも近いことを確認している。"]} {"source": "We are surprised to find that BERT's peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them. This analysis informs the construction of an adversarial dataset on which all models achieve random accuracy. Our adversarial dataset provides a more robust assessment of argument comprehension and should be adopted as the standard in future work.", "target": ["BERTが論理展開の根拠を選択するタスクで人間を超える精度を出せたのは、データに問題があったからとする研究。「AはBをおこし(Reason)、BはCにつながるから(Warrant)、Aは良い/悪いetc(Claim)」という論理展開で正しいWarrantを選択するが、Warrantにnotがあるかどうかで半分以上正答できたという。"]} {"source": "This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives. Existing methods of neural word embeddings, including SGNS, are multi-pass algorithms and thus cannot perform incremental model update. To address this problem, we present a simple incremental extension of SGNS and provide a thorough theoretical analysis to demonstrate its validity. Empirical experiments demonstrated the correctness of the theoretical analysis as well as the practical usefulness of the incremental algorithm.", "target": ["学習済みの単語分散表現を、SkipGram+Negative Samplingで追加学習する手法の解説記事。Negative Samplingに使用するノイズ分布を勾配法で更新していくことで、真のノイズ分布の計算(全単語のデータが必要)を回避しつつ単語数が多い場合真の分布に近づくことを担保している。"]} {"source": "This paper addresses the generation of referring expressions that not only refer to objects correctly but also let humans find them quickly. As a target becomes relatively less salient, identifying referred objects itself becomes more difficult. 
However, the existing studies regarded all sentences that refer to objects correctly as equally good, ignoring whether they are easily understood by humans. If the target is not salient, humans utilize relationships with the salient contexts around it to help listeners to comprehend it better. To derive this information from human annotations, our model is designed to extract information from the target and from the environment. Moreover, we regard that sentences that are easily understood are those that are comprehended correctly and quickly by humans. We optimized this by using the time required to locate the referred objects by humans and their accuracies. To evaluate our system, we created a new referring expression dataset whose images were acquired from Grand Theft Auto V (GTA V), limiting targets to persons. Experimental results show the effectiveness of our approach. Our code and dataset are available at this https URL.", "target": ["見分けのつきにくい物体に対し、人間が識別しやすい説明つける研究。テキストを生成するSpeakerは評価が高いテキストを生成するよう学習される一方、事前学習済みのReinforcerの評価を最大化するようにも学習される。評価は、テキストの正確性だけでなく識別の速さ(アノテーターが判断)も加味される。"]} {"source": "The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space. Existing graph representation learning methods can be classified into two categories: generative models that learn the underlying connectivity distribution in the graph, and discriminative models that predict the probability of edge existence between a pair of vertices. In this paper, we propose GraphGAN, an innovative graph representation learning framework unifying above two classes of methods, in which the generative model and discriminative model play a game-theoretical minimax game. Specifically, for a given vertex, the generative model tries to fit its underlying true connectivity distribution over all other vertices and produces \"fake\" samples to fool the discriminative model, while the discriminative model tries to detect whether the sampled vertex is from ground truth or generated by the generative model. With the competition between these two models, both of them can alternately and iteratively boost their performance. Moreover, when considering the implementation of generative model, we propose a novel graph softmax to overcome the limitations of traditional softmax function, which can be proven satisfying desirable properties of normalization, graph structure awareness, and computational efficiency. Through extensive experiments on real-world datasets, we demonstrate that GraphGAN achieves substantial gains in a variety of applications, including link prediction, node classification, and recommendation, over state-of-the-art baselines.", "target": ["Graphの接続生成をGANで行う手法の研究。あるノードに接続しうるノードの生成をGeneratorで行い、Discriminatorは実際の接続分布からサンプリングされたものか判定を行う。Gを単なるsoftmaxにすると全ノードの確率計算が発生してしまうため、BFSで部分木を生成し絞り込む"]} {"source": "An important goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. 
In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations. Our code and models for reproducing these results are available at this https URL.", "target": ["DNNで学習される特徴表現をロバストにしようという研究。既存のモデルでは人間には異なる画像に見えても特徴表現的には同じということがあるが、Adversarialの学習と似た枠組みで、ノイズが乗っても予測が可能なよう学習することでこの現象を緩和することができる。"]} {"source": "We propose GraphNVP, the first invertible, normalizing flow-based molecular graph generation model. We decompose the generation of a graph into two steps: generation of (i) an adjacency tensor and (ii) node attributes. This decomposition yields the exact likelihood maximization on graph-structured data, combined with two novel reversible flows. We empirically demonstrate that our model efficiently generates valid molecular graphs with almost no duplicated molecules. In addition, we observe that the learned latent space can be used to generate molecules with desired chemical properties.", "target": ["可逆変換可能なFlowモデルで、分子生成を行う手法。VAEは分布を経由するため厳密な生成ができない、GANは生成対象のコントロールが難しいという点をFlowモデルの厳密な潜在表現が得られる+そこからの生成も可能という特性で解消している。ノード/エッジの潜在表現を別々に計算し生成を行なっている。"]} {"source": "We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object. Using PIFu, we propose an end-to-end deep learning method for digitizing highly detailed clothed humans that can infer both 3D surface and texture from a single image, and optionally, multiple input images. Highly intricate shapes, such as hairstyles, clothing, as well as their variations and deformations can be digitized in a unified way. Compared to existing representations used for 3D deep learning, PIFu can produce high-resolution surfaces including largely unseen regions such as the back of a person. In particular, it is memory efficient unlike the voxel representation, can handle arbitrary topology, and the resulting surface is spatially aligned with the input image. Furthermore, while previous techniques are designed to process either a single image or multiple views, PIFu extends naturally to an arbitrary number of views. We demonstrate high-resolution and robust reconstructions on real world images from the DeepFashion dataset, which contains a variety of challenging clothing types. Our method achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image.", "target": ["1枚または複数の画像から3D像を生成する研究。全ての3Dドットを同時に生成するのではなく、射影の関数を学習する戦略をとっているため、必要なメモリが少ないことがポイント。人の存在確率を予測するPIFuと、それと画像から3Dの各点を予測するTe-PIFuに分かれる。"]} {"source": "Recurrent Neural Networks have long been the dominating choice for sequence modeling. However, they severely suffer from two issues: they are poor at capturing very long-term dependencies and unable to parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. 
Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design effort. In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks. We have made the code publicly available at \\url{this https URL}.", "target": ["Globalな情報はTransformerのSelf-Attentionで、Localな情報をRNNで取得するという手法の提案。Transformerは大域的な情報に強いものの局所情報はposition embeddingという限られた情報に依存しているため、これをRNNで代替/補強するというアイデア。言語モデルで優秀な精度を記録。"]} {"source": "Different layouts can characterize different aspects of the same graph. Finding a \"good\" layout of a graph is thus an important task for graph visualization. In practice, users often visualize a graph in multiple layouts by using different methods and varying parameter settings until they find a layout that best suits the purpose of the visualization. However, this trial-and-error process is often haphazard and time-consuming. To provide users with an intuitive way to navigate the layout design space, we present a technique to systematically visualize a graph in diverse layouts using deep generative models. We design an encoder-decoder architecture to learn a model from a collection of example layouts, where the encoder represents training examples in a latent space and the decoder produces layouts from the latent space. In particular, we train the model to construct a two-dimensional latent space for users to easily explore and generate various layouts. We demonstrate our approach through quantitative and qualitative evaluations of the generated layouts. The results of our evaluations show that our model is capable of learning and generalizing abstract concepts of graph layouts, not just memorizing the training examples. In summary, this paper presents a fundamentally new approach to graph visualization where a machine learning model learns to visualize a graph from examples without manually-defined heuristics.", "target": ["グラフ構造の可視化では、可視化のレイアウトによって見え方が異なりその調整が難しいという問題がある。そこで、選択したサンプルを元に様々なレイアウトを生成するEncoder/Decoderモデルを作成したという研究。生成レイアウトを2Dで表示するため、2次元の一様分布を経由してVAEのように生成を行う。"]} {"source": "This paper introduces a structured memory which can be easily integrated into a neural network. The memory is very large by design and significantly increases the capacity of the architecture, by up to a billion parameters with a negligible computational overhead. Its design and access pattern are based on product keys, which enable fast and exact nearest neighbor search. The ability to increase the number of parameters while keeping the same computational budget lets the overall system strike a better trade-off between prediction accuracy and computation efficiency both at training and test time. This memory layer allows us to tackle very large scale language modeling tasks. In our experiments we consider a dataset with up to 30 billion words, and we plug our memory layer in a state-of-the-art transformer-based architecture. 
In particular, we found that a memory augmented model with only 12 layers outperforms a baseline transformer model with 24 layers, while being twice faster at inference time. We release our code for reproducibility purposes.", "target": ["DNNに組み込める巨大なメモリ機構の提案。Key/Value型だが普通に実装すると計算コストがかかるため、Keyの作り方を工夫している。Key空間を2つのsub-key空間に分割し、クエリも2つに分割、各分割クエリに対応するsub-key空間で最も近いsub-keyを取得し、その外積でキーを作成する。"]} {"source": "We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters suggesting that sparse momentum is robust and easy to use.", "target": ["DNNの学習を効率化する手法の提案。有望なWeightを残し、役立たないWeightは回収し(Prune)他に割り振る(Regrow)というのが基本的な考え。割り振りはMomentum(勾配の履歴を指数平滑で重みづけし合計したもの)のMagnitudeを基準に行い、Pruneされた重みはレイヤー単位/レイヤー内の順に割り振られる。"]} {"source": "We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters suggesting that sparse momentum is robust and easy to use.", "target": ["6プレイヤーのポーカーでプロに勝ったという研究。既存の強化学習は1:1・完全情報のゲームが多いが、複数プレイヤーかつ不完全情報という難しい状態を扱っている。self-play(6人なので、更新を1体ずつ行う)で学習を行うが、ナッシュ均衡の達成=ゲームの勝利ではないことから均衡達成を目的としていない"]} {"source": "It can be challenging to train multi-task neural networks that outperform or even match their single-task counterparts. To help address this, we propose using knowledge distillation where single-task models teach a multi-task model. 
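As a rough illustration of the product-key memory entry above (the structured memory integrated into a neural network): the query is split in half, each half is scored against a small codebook of sub-keys, and the Cartesian product of the per-half top-k candidates addresses n*n memory slots. All sizes and the random sub-keys below are made up; this sketches the lookup only, not the released implementation.

```python
import torch

d, n, topk = 16, 100, 4                      # half query dim, sub-keys per codebook, neighbors kept
sub_keys1 = torch.randn(n, d)
sub_keys2 = torch.randn(n, d)
values = torch.nn.Embedding(n * n, 32)       # one value vector per (i, j) slot

query = torch.randn(2 * d)
q1, q2 = query[:d], query[d:]
s1, i1 = (sub_keys1 @ q1).topk(topk)         # best sub-keys for the first half of the query
s2, i2 = (sub_keys2 @ q2).topk(topk)         # best sub-keys for the second half

scores = (s1[:, None] + s2[None, :]).reshape(-1)         # candidate scores on the topk x topk grid
slot_ids = (i1[:, None] * n + i2[None, :]).reshape(-1)   # flatten (i, j) into a memory-slot index
best_scores, best = scores.topk(topk)                    # exact top-k over all n*n slots
weights = torch.softmax(best_scores, dim=0)
output = (weights[:, None] * values(slot_ids[best])).sum(dim=0)
print(output.shape)                                      # torch.Size([32])
```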
We enhance this training with teacher annealing, a novel method that gradually transitions the model from distillation to supervised learning, helping the multi-task model surpass its single-task teachers. We evaluate our approach by multi-task fine-tuning BERT on the GLUE benchmark. Our method consistently improves over standard single-task and multi-task training.", "target": ["マルチタスク学習における蒸留の提案。各教科専任の家庭教師をつけるイメージで、シングルタスクで学習したモデル(教師)の予測結果と近しくなるよう学習を行う(教師の模倣だけに終わらないように、通常のlossと教師との差異でバランスを取る)。BERT(#959)ベースのモデルで検証し、マルチで各シングルを上回る"]} {"source": "As shown in recent research, deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how generalization gap should be predicted from the training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of margin distribution, which are the distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of margin distribution rather than just the margin (closest distance to decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum). Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization.", "target": ["DNNの汎化誤差を推定する手法の研究。SVMにおけるマージンをヒントに、決定境界(正しいクラスと次点のクラスの予測確率が五分五分になる点)から、予測結果が変わらないギリギリの点までの距離を、テイラー展開で近似する(一次近似)。各レイヤでこの計算を行い、マージンとテスト精度の相関を示した。"]} {"source": "Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. But despite early successes in using GANs for unsupervised representation learning, they have since been superseded by approaches based on self-supervision. In this work we show that progress in image generation quality translates to substantially improved representation learning performance. Our approach, BigBiGAN, builds upon the state-of-the-art BigGAN model, extending it to representation learning by adding an encoder and modifying the discriminator. We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation. Pretrained BigBiGAN models -- including image generators and encoders -- are available on TensorFlow Hub (https://tfhub.dev/s?publisher=deepmind&q=bigbigan).", "target": ["BigGANを利用して表現学習を行う研究。実サンプルx/Encoderで生成した潜在表現zと、Generatorから生成したサンプルG(z)/潜在表現分布からサンプルしたzの距離を近づける形で学習を行う。この形式の学習が、潜在表現的にも画像生成的にも有効であることを確認。"]} {"source": "Balancing an ever growing strategic game of high complexity, such as Hearthstone is a complex task. The target of making strategies diverse and customizable results in a delicate intricate system. 
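A minimal sketch of the teacher-annealing distillation loss described in the multi-task distillation entry above, assuming a linear schedule and standard soft-target distillation; this is an assumed form for illustration, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def annealed_distillation_loss(student_logits, teacher_logits, gold_labels, step, total_steps):
    """The target interpolates from the single-task teacher's soft prediction (early in training)
    to the gold label (late), so the multi-task student can eventually surpass its teachers."""
    lam = step / total_steps                                       # linear schedule, 0 -> 1
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    gold = F.one_hot(gold_labels, teacher_probs.size(-1)).float()
    targets = (1 - lam) * teacher_probs + lam * gold
    return -(targets * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()

# toy usage with random logits for a 3-class task
s, t = torch.randn(4, 3), torch.randn(4, 3)
y = torch.tensor([0, 2, 1, 0])
print(annealed_distillation_loss(s, t, y, step=100, total_steps=1000))
```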
Tuning over 2000 cards to generate the desired outcome without disrupting the existing environment becomes a laborious challenge. In this paper, we discuss the impacts that changes to existing cards can have on strategy in Hearthstone. By analyzing the win rate on match-ups across different decks, being played by different strategies, we propose to compare their performance before and after changes are made to improve or worsen different cards. Then, using an evolutionary algorithm, we search for a combination of changes to the card attributes that cause the decks to approach equal, 50% win rates. We then expand our evolutionary algorithm to a multi-objective solution to search for this result, while making the minimum amount of changes, and as a consequence disruption, to the existing cards. Lastly, we propose and evaluate metrics to serve as heuristics with which to decide which cards to target with balance changes.", "target": ["進化戦略を用いて、新カードの登場/既存カードの変更が行われた場合のゲームバランスを調べるという研究。様々なデッキでも対等に戦える(勝率が五分五分になる)というポリシーを基本に、変更により勝率の変動が発生するかを調べる。また特定カードをドローした場合に勝率が大きく変わるかを調べている"]} {"source": "Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. Source code and pretrained models are available at this https URL .", "target": ["画像の一部をDropするCutoutと、画像を合成するMixupを複合させて、Dropさせた箇所に別サンプルの画像をはめ込むCutMixという手法を提案。Cutoutは重要な特徴を落としてしまう可能性がある、Mixupは合成画像が不自然になるという双方の課題を克服し、双方を上回る精度を達成"]} {"source": "Given a set of empirical observations, conditional density estimation aims to capture the statistical relationship between a conditional variable \\mathbf{x} and a dependent variable \\mathbf{y} by modeling their conditional probability p(\\mathbf{y}|\\mathbf{x}). The paper develops best practices for conditional density estimation for finance applications with neural networks, grounded on mathematical insights and empirical evaluations. In particular, we introduce a noise regularization and data normalization scheme, alleviating problems with over-fitting, initialization and hyper-parameter sensitivity of such estimators. 
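The CutMix entry above is simple enough to sketch directly: a box sampled via the Beta(alpha, alpha) rule is cut from one image, pasted into another, and the labels are mixed by the pasted area. The array shapes and the one-hot label mixing below are assumptions of this sketch rather than the official implementation.

```python
import numpy as np

def cutmix(img_a, label_a, img_b, label_b, alpha=1.0, rng=np.random.default_rng()):
    """Paste a random box from img_b into img_a and mix the one-hot labels by the pasted area."""
    lam = rng.beta(alpha, alpha)
    h, w = img_a.shape[:2]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    y0, y1 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x0, x1 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = img_a.copy()
    mixed[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]
    lam = 1 - (y1 - y0) * (x1 - x0) / (h * w)        # recompute the ratio after clipping at borders
    return mixed, lam * label_a + (1 - lam) * label_b

a, b = np.zeros((32, 32, 3)), np.ones((32, 32, 3))
mixed_img, mixed_label = cutmix(a, np.array([1.0, 0.0]), b, np.array([0.0, 1.0]))
print(mixed_label)
```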
We compare our proposed methodology with popular semi- and non-parametric density estimators, underpin its effectiveness in various benchmarks on simulated and Euro Stoxx 50 data and show its superior performance. Our methodology allows to obtain high-quality estimators for statistical expectations of higher moments, quantiles and non-linear return transformations, with very little assumptions about the return dynamic.", "target": ["ファイナンスの時系列データをニューラルネットワークで予測する際のベストプラクティスについて調査した研究(データの前処理がメイン)。ユーロ・ストックス50指数を題材に十分な予測が行えたとしている。"]} {"source": "AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD). Indeed, improving the lives of PWD is a motivator for many state-of-the-art AI systems, such as automated speech recognition tools that can caption videos for people who are deaf and hard of hearing, or language prediction algorithms that can augment communication for people with speech or cognitive disabilities. However, widely deployed AI systems may not work properly for PWD, or worse, may actively discriminate against them. These considerations regarding fairness in AI for PWD have thus far received little attention. In this position paper, we identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development, and testing. We intend for this risk assessment of how various classes of AI might interact with various classes of disability to provide a roadmap for future research that is needed to gather data, test these hypotheses, and build more inclusive algorithms.", "target": ["障害を持つ人でもフェアに恩恵を受けられるAIの開発にむけて、必要な検討事項がまとめられた研究。例として、顔認識では特徴的な顔の変形を伴う疾患を持つ人だと認識率が下がってしまう、音声認識では認知・知的障害がある人の場合話す速度が遅く認識できない、などといったことが挙げられている。"]} {"source": "Synthetic visual data can provide practically infinite diversity and rich labels, while avoiding ethical issues with privacy and bias. However, for many tasks, current models trained on synthetic data generalize poorly to real data. The task of 3D human pose estimation is a particularly interesting example of this sim2real problem, because learning-based approaches perform reasonably well given real training data, yet labeled 3D poses are extremely difficult to obtain in the wild, limiting scalability. In this paper, we show that standard neural-network approaches, which perform poorly when trained on synthetic RGB images, can perform well when the data is pre-processed to extract cues about the person's motion, notably as optical flow and the motion of 2D keypoints. Therefore, our results suggest that motion can be a simple way to bridge a sim2real gap when video is available. We evaluate on the 3D Poses in the Wild dataset, the most challenging modern benchmark for 3D pose estimation, where we show full 3D mesh recovery that is on par with state-of-the-art methods trained on real 3D sequences, despite training only on synthetic humans from the SURREAL dataset.", "target": ["3Dモーションの認識で、シミュレーターデータの拡張を行うことで精度を上げたという研究。人間はキーポイントの情報のみで3次元の構造を高度に認識できるという心理学研究の結果から、キーポイント/Optical Flowの情報を加える+実画像を背景に利用することで、合成画像のみでSOTAを達成。"]} {"source": "Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. 
However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically, allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse-reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery.", "target": ["完全なモデルを作成することは困難、というモデルベースの手法が抱える問題に取り組んだ研究。「スキル」という条件付けを使い、特定スキルを発動しているという条件下でモデル/戦略の学習を行う(戦略の学習には内発的報酬を使用)。テスト時はスキルのシーケンスを計画しスキルに応じた戦略を実行する"]} {"source": "As 3D point clouds become the representation of choice for multiple vision and graphics applications, the ability to synthesize or reconstruct high-resolution, high-fidelity point clouds becomes crucial. Despite the recent success of deep learning models in discriminative tasks of point clouds, generating point clouds remains challenging. This paper proposes a principled probabilistic framework to generate 3D point clouds by modeling them as a distribution of distributions. Specifically, we learn a two-level hierarchy of distributions where the first level is the distribution of shapes and the second level is the distribution of points given a shape. This formulation allows us to both sample shapes and sample an arbitrary number of points from a shape. Our generative model, named PointFlow, learns each level of the distribution with a continuous normalizing flow. The invertibility of normalizing flows enables the computation of the likelihood during training and allows us to train our model in the variational inference framework. Empirically, we demonstrate that PointFlow achieves state-of-the-art performance in point cloud generation. We additionally show that our model can faithfully reconstruct point clouds and learn useful representations in an unsupervised manner. The code will be available at this https URL.", "target": ["点群を生成する手法についての研究。形状の推定=>形状から点群の推定という2段階の推定を行なっており、全体をContinuous Normalizing Flow(CNF)でモデル化している。点群推定にとっての事前分布となる形状分布の表現力を上げるために、Encoder(Q)を使い形状分布のCNFを学習している。"]} {"source": "Model-based reinforcement learning (MBRL) is widely seen as having the potential to be significantly more sample efficient than model-free RL. However, research in model-based RL has not been very standardized. It is fairly common for authors to experiment with self-designed environments, and there are several separate lines of research, which are sometimes closed-sourced or not reproducible. Accordingly, it is an open question how these various existing MBRL algorithms perform relative to each other. To facilitate research in MBRL, in this paper we gather a wide collection of MBRL algorithms and propose over 18 benchmarking environments specially designed for MBRL. We benchmark these algorithms with unified problem settings, including noisy environments. 
Beyond cataloguing performance, we explore and unify the underlying algorithmic differences across MBRL algorithms. We characterize three key research challenges for future MBRL research: the dynamics bottleneck, the planning horizon dilemma, and the early-termination dilemma. Finally, to maximally facilitate future research on MBRL, we open-source our benchmark in this http URL.", "target": ["近年研究が多い、モデルベースの強化学習を比較検証した研究。総計10のアルゴリズムを、18の環境で実験しそのパフォーマンスを比較している。モデルベースの課題として、サンプル数に対するパフォーマンスの伸びが悪くなる点、モデルによる計画期間の調整が難しい点などをあげている。"]} {"source": "The study of object representations in computer vision has primarily focused on developing representations that are useful for image classification, object detection, or semantic segmentation as downstream tasks. In this work we aim to learn object representations that are useful for control and reinforcement learning (RL). To this end, we introduce Transporter, a neural network architecture for discovering concise geometric object representations in terms of keypoints or image-space coordinates. Our method learns from raw video frames in a fully unsupervised manner, by transporting learnt image features between video frames using a keypoint bottleneck. The discovered keypoints track objects and object parts across long time-horizons more accurately than recent similar methods. Furthermore, consistent long-term tracking enables two notable results in control domains -- (1) using the keypoint co-ordinates and corresponding image features as inputs enables highly sample-efficient reinforcement learning; (2) learning to explore by controlling keypoint locations drastically reduces the search space, enabling deep exploration (leading to states unreachable through random action exploration) without any extrinsic rewards.", "target": ["強化学習にとって有効な表現学習を行う試み。オブジェクトのキーポイントに注目し、キーポイント上の特徴変化のみを手掛かりに時刻tから時刻t'の画像を生成するよう学習する(時刻tのキーポイント上の特徴を時刻t'のキーポイント上の特徴で置き換えて生成を行う)。これによりサンプル/探索効率を改善。"]} {"source": "Estimating how uncertain an AI system is in its predictions is important to improve the safety of such systems. Uncertainty in predictions can result from uncertainty in model parameters, irreducible data uncertainty and uncertainty due to distributional mismatch between the test and training data distributions. Different actions might be taken depending on the source of the uncertainty, so it is important to be able to distinguish between them. Recently, baseline tasks and metrics have been defined and several practical methods to estimate uncertainty have been developed. These methods, however, attempt to model uncertainty due to distributional mismatch either implicitly through model uncertainty or as data uncertainty. This work proposes a new framework for modeling predictive uncertainty called Prior Networks (PNs) which explicitly models distributional uncertainty. PNs do this by parameterizing a prior distribution over predictive distributions. This work focuses on uncertainty for classification and evaluates PNs on the tasks of identifying out-of-distribution (OOD) samples and detecting misclassification on the MNIST dataset, where they are found to outperform previous methods. 
Experiments on synthetic and MNIST and CIFAR-10 data show that unlike previous non-Bayesian methods PNs are able to distinguish between data and distributional uncertainty.", "target": ["予測モデルにおける不確かさを、1.パラメーターの不確実性、2.データの不確実性(そもそも予測困難)、3.分布差異による不確実性(学習/テストの分布差異)の3つに分類し3に焦点を当てて解決を試みている研究。既存の不確実性推定では2, 3を混合していたため、これを分布として切り離して推定している。"]} {"source": "In this paper we propose a novel neural approach for automatic decipherment of lost languages. To compensate for the lack of strong supervision signal, our model design is informed by patterns in language change documented in historical linguistics. The model utilizes an expressive sequence-to-sequence model to capture character-level correspondences between cognates. To effectively train the model in an unsupervised manner, we innovate the training procedure by formalizing it as a minimum-cost flow problem. When applied to the decipherment of Ugaritic, we achieve a 5.5% absolute improvement over state-of-the-art results. We also report the first automatic results in deciphering Linear B, a syllabic language related to ancient Greek, where our model correctly translates 67.3% of cognates.", "target": ["失われた言語の解読(翻訳)を行う研究。失われている故に大規模なコーパスは使えないので、同族言語とのアライメントを手がかりに翻訳を行なっている。具体的には、既知の同族言語における文字の並び/単語の出現確率で制約をかけて生成を行なっている。"]} {"source": "Large pre-trained neural networks such as BERT have had great recent success in NLP, motivating a growing body of research investigating what aspects of language they are able to learn from unlabeled data. Most recent analysis has focused on model outputs (e.g., language model surprisal) or internal vector representations (e.g., probing classifiers). Complementary to these works, we propose methods for analyzing the attention mechanisms of pre-trained models and apply them to BERT. BERT's attention heads exhibit patterns such as attending to delimiter tokens, specific positional offsets, or broadly attending over the whole sentence, with heads in the same layer often exhibiting similar behaviors. We further show that certain attention heads correspond well to linguistic notions of syntax and coreference. For example, we find heads that attend to the direct objects of verbs, determiners of nouns, objects of prepositions, and coreferent mentions with remarkably high accuracy. Lastly, we propose an attention-based probing classifier and use it to further demonstrate that substantial syntactic information is captured in BERT's attention.", "target": ["BERT(#959)内のAttentionのかかり方を調べた論文。全体的にはCLS/SEP、またピリオドといった区切り文字を見ていて、また全体のTokenに散っているよう。ただ特定のAttentionヘッドは構文的な情報を見ているようで、Dependency Parseのタスクでbaselineを上回る結果を出している。"]} {"source": "Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategy, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach learning to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which indicates a kind of neural architecture. To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, named Gradient-based search using Differentiable Architecture Sampler (GDAS). 
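For the Prior Networks entry above, the separation of data and distributional uncertainty can be illustrated with the standard entropy decomposition of a Dirichlet over categorical predictions; the concentration values below are invented, and this is only one common way to read off the quantities the paper distinguishes.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_uncertainties(alpha):
    """Split predictive uncertainty for a Dirichlet output into data vs. distributional parts.
    alpha: concentration parameters (K,), e.g. the exponential of a prior network's logits."""
    a0 = alpha.sum()
    p = alpha / a0                                        # expected categorical prediction
    total = -(p * np.log(p)).sum()                        # entropy of the expected prediction
    data = -(p * (digamma(alpha + 1) - digamma(a0 + 1))).sum()   # expected entropy (data uncertainty)
    return total, data, total - data                      # last term: mutual information ~ distributional

print(dirichlet_uncertainties(np.array([50.0, 50.0, 1.0])))   # confident but ambiguous: mostly data uncertainty
print(dirichlet_uncertainties(np.array([1.0, 1.0, 1.0])))     # flat, low concentration: distributional part grows
```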
In experiments, we can finish one searching procedure in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82\\% with only 2.5M parameters, which is on par with the state-of-the-art. Code is publicly available on GitHub: this https URL.", "target": ["ネットワークの構造探索を効率的に行う手法の提案。範囲としてはセル構造の探索で、stride=1の標準セルとstride=2のダウンサイズするセルの2つを探索し、所与のネットワークに当てはめる。構造をDAGで定義し、パスのサンプリング自体を微分可能にすることでE2Eで最適化を行う"]} {"source": "The goal of early action prediction is to recognize actions from partially observed videos with incomplete action executions, which is quite different from action recognition. Predicting early actions is very challenging since the partially observed videos do not contain enough action information for recognition. In this paper, we aim at improving early action prediction by proposing a novel teacher-student learning framework. Our framework involves a teacher model for recognizing actions from full videos, a student model for predicting early actions from partial videos, and a teacher-student learning block for distilling progressive knowledge from teacher to student, crossing different tasks. Extensive experiments on three public action datasets show that the proposed progressive teacher-student learning framework can consistently improve the performance of early action prediction models. We have also reported the state-of-the-art performances for early action prediction on all of these sets.", "target": ["部分的な動画から、何をしようとしているか検出する研究。フルの動画を入力としBi-LSTMを利用する(=行動終了の状態も把握している)Teacherに対し、部分動画を入力としLSTMを利用するStudentの潜在表現/予測が近くなるよう学習を行う。"]} {"source": "Recently, deep learning based facial landmark detection has achieved great success. Despite this, we notice that the semantic ambiguity greatly degrades the detection performance. Specifically, the semantic ambiguity means that some landmarks (e.g. those evenly distributed along the face contour) do not have a clear and accurate definition, causing inconsistent annotations by annotators. Accordingly, these inconsistent annotations, which are usually provided by public databases, commonly work as the ground-truth to supervise network training, leading to degraded accuracy. To our knowledge, little research has investigated this problem. In this paper, we propose a novel probabilistic model which introduces a latent variable, i.e. the 'real' ground-truth which is semantically consistent, to optimize. This framework couples two parts: (1) training the landmark detection CNN and (2) searching for the 'real' ground-truth. These two parts are alternatively optimized: the searched 'real' ground-truth supervises the CNN training; and the trained CNN assists the searching of the 'real' ground-truth. In addition, to recover the unconfidently predicted landmarks due to occlusion and low quality, we propose a global heatmap correction unit (GHCU) to correct outliers by considering the global face shape as a constraint. Extensive experiments on both image-based (300W and AFLW) and video-based (300-VW) databases demonstrate that our method effectively improves the landmark detection accuracy and achieves state-of-the-art performance.", "target": ["顔のランドマーク検知における、ノイズの影響を調査した研究。ランドマークのうち輪郭などは定義が不明確なためノイズが多く、それによりモデルの精度が毀損される。そこで「真のランドマーク」の分布を推定=>学習=>学習器をアノテータとし分布学習、を繰り返す学習法を提案"]} {"source": "Detectors based on deep learning tend to detect multi-scale faces on a single input image for efficiency. 
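A small sketch of the differentiable architecture sampling idea in the GDAS entry above, using the straight-through Gumbel-softmax trick over the candidate operations of one edge; the number of ops and their stand-in outputs are invented, so this shows only the sampling mechanism, not the full search.

```python
import torch
import torch.nn.functional as F

alpha = torch.zeros(5, requires_grad=True)        # learnable logits over 5 candidate operations
sample = F.gumbel_softmax(alpha, tau=10.0, hard=True)         # one-hot in the forward pass
op_outputs = torch.randn(5, 8)                    # what each candidate op would produce on this edge
edge_output = (sample.unsqueeze(1) * op_outputs).sum(dim=0)   # only the sampled op contributes
edge_output.sum().backward()
print(sample, alpha.grad)                         # hard sample, yet gradients reach every logit
```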
Recent works, such as FPN and SSD, generally use feature maps from multiple layers with different spatial resolutions to detect objects at different scales, e.g., high-resolution feature maps for small objects. However, we find that such multi-layer prediction is not necessary. Faces at all scales can be well detected with features from a single layer of the network. In this paper, we carefully examine the factors affecting face detection across a large range of scales, and conclude that the balance of training samples, including both positive and negative ones, at different scales is the key. We propose a group sampling method which divides the anchors into several groups according to the scale, and ensures that the number of samples for each group is the same during training. Our approach using only the last layer of FPN as features is able to advance the state of the art. Comprehensive analysis and extensive experiments have been conducted to show the effectiveness of the proposed method. Our approach, evaluated on face detection benchmarks including the FDDB and WIDER FACE datasets, achieves state-of-the-art results without bells and whistles.", "target": ["顔画像の検知において、複数層のマルチスケール特徴が必ずしも必要ないということを示した研究。顔のような小さいオブジェクトは検出領域/ストライドの幅に合うより外れる確率が高くなる=相対的に少ないPositive Sampleで学習している。よって学習データのバランスさえ調整すれば単一層で十分な特徴を得られるという。"]} {"source": "A surprising property of word vectors is that word analogies can often be solved with vector arithmetic. However, it is unclear why arithmetic operators correspond to non-linear embedding models such as skip-gram with negative sampling (SGNS). We provide a formal explanation of this phenomenon without making the strong assumptions that past theories have made about the vector space and word distribution. Our theory has several implications. Past work has conjectured that linear substructures exist in vector spaces because relations can be represented as ratios; we prove that this holds for SGNS. We provide novel justification for the addition of SGNS word vectors by showing that it automatically down-weights the more frequent word, as weighting schemes do ad hoc. Lastly, we offer an information theoretic interpretation of Euclidean distance in vector spaces, justifying its use in capturing word dissimilarity.", "target": ["単語分散表現において、Euclid距離で意味の近さ、加減算で意味の差し引きができる理由について調べた研究。共起シフトPMI(csPMI=PMI(x,y) + log p(x,y))という値が単語ペア間(王様/男性, 女王/女性)でそれぞれ等しければ、それらはベクトル空間上で同一平面に存在することを証明している。"]} {"source": "Visual explanation enables humans to understand the decision making of Deep Convolutional Neural Network (CNN), but it is insufficient to contribute to performance improvement. In this paper, we focus on the attention map for visual explanation, which represents a high response value as the important region in image recognition. This region significantly improves the performance of CNN by introducing an attention mechanism that focuses on a specific region in an image. In this work, we propose Attention Branch Network (ABN), which extends the top-down visual explanation model by introducing a branch structure with an attention mechanism. ABN is applicable to several image recognition tasks by introducing a branch for the attention mechanism and is trainable for the visual explanation and image recognition in an end-to-end manner. We evaluate ABN on several image recognition tasks such as image classification, fine-grained recognition, and multiple facial attributes recognition. 
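The csPMI quantity in the word-analogy entry above can be made concrete with toy counts; all numbers below are invented. The definition used is csPMI(x, y) = PMI(x, y) + log p(x, y), and the paper's claim, informally, is that roughly equal csPMI across the two word pairs is what licenses the vector arithmetic.

```python
import numpy as np

pair_counts = {("king", "man"): 40, ("queen", "woman"): 38}
word_counts = {"king": 200, "man": 1000, "queen": 150, "woman": 900}
total_pairs, total_words = 10_000, 50_000

def cspmi(x, y):
    p_xy = pair_counts[(x, y)] / total_pairs
    p_x, p_y = word_counts[x] / total_words, word_counts[y] / total_words
    return np.log(p_xy / (p_x * p_y)) + np.log(p_xy)     # PMI plus log joint probability

# When these two values are (roughly) equal, the four SGNS vectors are approximately coplanar,
# which is what makes king - man + woman ≈ queen work as vector arithmetic.
print(cspmi("king", "man"), cspmi("queen", "woman"))
```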
Experimental results show that ABN can outperform the accuracy of baseline models on these image recognition tasks while generating an attention map for visual explanation. Our code is available at this https URL.", "target": ["通常学習後に観測するActivation Mapを、Attentionとしてネットワーク内に組み込んだ研究。Activation Mapの計算には特徴マップ以外にクラス分類への貢献を測る重みが必要だが(通常は全結合層の重みを使う)、これを取得するためAttention側からもクラス分類確率を出力し、マルチタスクで学習している。"]} {"source": "With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.", "target": ["BERT (#959) の弱点を修正したXLNetが公開。BERTではMask箇所を予測するが、\"Mask\"は通常発生しないためノイズになる。そこで単語の予測時に使用するContextの順序を変える手法を提案。Selfを含まないContextから予測する一方、Context自体は通常のSelfを含むAttentionで作成する。20タスクでBERTを上回る成果"]} {"source": "Designing effective model-based reinforcement learning algorithms is difficult because the ease of data generation must be weighed against the bias of model-generated data. In this paper, we study the role of model usage in policy optimization both theoretically and empirically. We first formulate and analyze a model-based reinforcement learning algorithm with a guarantee of monotonic improvement at each step. In practice, this analysis is overly pessimistic and suggests that real off-policy data is always preferable to model-generated on-policy data, but we show that an empirical estimate of model generalization can be incorporated into such analysis to justify model usage. Motivated by this analysis, we then demonstrate that a simple procedure of using short model-generated rollouts branched from real data has the benefits of more complicated model-based algorithms without the usual pitfalls. In particular, this approach surpasses the sample efficiency of prior model-based methods, matches the asymptotic performance of the best model-free algorithms, and scales to horizons that cause other model-based methods to fail entirely.", "target": ["強化学習における、適切なモデルベースの使用方法の研究。フルにモデルベースで学習すると、モデルの再現性が低い箇所がハックされる現象が起こる(実環境では得られない報酬を取るようになるなど)。モデルを短いスパン(=予測性能が高い範囲)で使用することでこの現象を抑えつつ汎化性能をあげられる"]} {"source": "We propose a novel reinforcement learning algorithm, AlphaNPI, that incorporates the strengths of Neural Programmer-Interpreters (NPI) and AlphaZero. NPI contributes structural biases in the form of modularity, hierarchy and recursion, which are helpful to reduce sample complexity, improve generalization and increase interpretability. AlphaZero contributes powerful neural network guided search algorithms, which we augment with recursion. AlphaNPI only assumes a hierarchical program specification with sparse rewards: 1 when the program execution satisfies the specification, and 0 otherwise. 
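The short branched-rollout procedure in the model-based policy optimization entry above can be sketched as follows; the dynamics model, policy, and state dimension are random stand-ins, so only the data-generation pattern matters: start from real (off-policy) states and unroll the learned model for just k steps so that model error stays bounded.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_step(state, action):                     # hypothetical learned dynamics model
    next_state = state + 0.1 * action + 0.01 * rng.standard_normal(state.shape)
    return next_state, float(-np.sum(next_state ** 2))

def policy(state):                                 # placeholder stochastic policy
    return rng.standard_normal(state.shape)

def branched_rollouts(real_states, k):
    synthetic = []
    for s in real_states:                          # branch one short rollout from each real state
        for _ in range(k):
            a = policy(s)
            s_next, r = model_step(s, a)
            synthetic.append((s, a, r, s_next))
            s = s_next
    return synthetic

replay_states = [rng.standard_normal(3) for _ in range(4)]   # states drawn from a real replay buffer
print(len(branched_rollouts(replay_states, k=5)))            # 20 synthetic transitions, horizon kept short
```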
Using this specification, AlphaNPI is able to train NPI models effectively with RL for the first time, completely eliminating the need for strong supervision in the form of execution traces. The experiments show that AlphaNPI can sort as well as previous strongly supervised NPI variants. The AlphaNPI agent is also trained on a Tower of Hanoi puzzle with two disks and is shown to generalize to puzzles with an arbitrary number of disk", "target": ["DNNの実行で、再利用可能なモジュールを組み合わせる試み。全体としては強化学習(AlphaZero)の枠組みで、環境以外にプログラムのembeddingを受け取りLSTMで処理、Action/Valueを出力するという流れになっている。LSTMはプログラムの実行環境として動作し、STOPというactionが出るまで実行を続ける。"]} {"source": "How much does having visual priors about the world (e.g. the fact that the world is 3D) assist in learning to perform downstream motor tasks (e.g. delivering a package)? We study this question by integrating a generic perceptual skill set (e.g. a distance estimator, an edge detector, etc.) within a reinforcement learning framework--see Figure 1. This skill set (hereafter mid-level perception) provides the policy with a more processed state of the world compared to raw images. We find that using a mid-level perception confers significant advantages over training end-to-end from scratch (i.e. not leveraging priors) in navigation-oriented tasks. Agents are able to generalize to situations where the from-scratch approach fails and training becomes significantly more sample efficient. However, we show that realizing these gains requires careful selection of the mid-level perceptual skills. Therefore, we refine our findings into an efficient max-coverage feature set that can be adopted in lieu of raw images. We perform our study in completely separate buildings for training and testing and compare against visually blind baseline policies and state-of-the-art feature learning methods.", "target": ["強化学習で、画像直接ではなく中間/加工表現を状態に使う手法の提案。全20の様々な表現(Edgeや認識クラスなど)を使用するが、そのままでは処理コストが高いため最も有効な特徴からの距離が最小になるようサブセットを選択する。これにより学習/テスト、タスク間の転移を行いやすくできるという"]} {"source": "Objects are composed of a set of geometrically organized parts. We introduce an unsupervised capsule autoencoder (SCAE), which explicitly uses geometric relationships between parts to reason about objects. Since these relationships do not depend on the viewpoint, our model is robust to viewpoint changes. SCAE consists of two stages. In the first stage, the model predicts presences and poses of part templates directly from the image and tries to reconstruct the image by appropriately arranging the templates. In the second stage, SCAE predicts parameters of a few object capsules, which are then used to reconstruct part poses. Inference in this model is amortized and performed by off-the-shelf neural encoders, unlike in previous capsule networks. We find that object capsule presences are highly informative of the object class, which leads to state-of-the-art results for unsupervised classification on SVHN (55%) and MNIST (98.7%). The code is available at this https URL", "target": ["Capsule Netを使用したAuto Encoderの提案。Part/Poseを推定したのち、それらを特定のパターン(Template)に統合して元画像を復元するAutoencoderと、(推定された)Partの表現を学習するAutoencoderの2つから構成される。"]} {"source": "Lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks, for example to compactly capture multiple speech recognition hypotheses, or to represent multiple linguistic analyses. 
Previous work has extended recurrent neural networks to model lattice inputs and achieved improvements in various tasks, but these models suffer from very slow computation speeds. This paper extends the recently proposed paradigm of self-attention to handle lattice inputs. Self-attention is a sequence modeling technique that relates inputs to one another by computing pairwise similarities and has gained popularity for both its strong results and its computational efficiency. To extend such models to handle lattices, we introduce probabilistic reachability masks that incorporate lattice structure into the model and support lattice scores if available. We also propose a method for adapting positional embeddings to lattice structures. We apply the proposed model to a speech translation task and find that it outperforms all examined baselines while being much faster to compute than previous neural lattice models during both training and inference.", "target": ["Lattice(DAG)を、Self-Attentionのモデルでうまく処理する手法の提案。Graph入力といえばGCNだが、長期コンテキストを読むのが苦手なため結局LSTMと複合させることが多い。そこでTransformerベースのEncoderを使用し、Latticeの接続/接続確率をベースにAttentionに対しMaskをかけて処理している。"]} {"source": "With the ever increasing size of the web, relevant information extraction on the Internet with a query formed by a few keywords has become a big challenge. Query Expansion (QE) plays a crucial role in improving searches on the Internet. Here, the user's initial query is reformulated by adding additional meaningful terms with similar significance. QE -- as part of information retrieval (IR) -- has long attracted researchers' attention. It has become very influential in the field of personalized social document, question answering, cross-language IR, information filtering and multimedia IR. Research in QE has gained further prominence because of IR dedicated conferences such as TREC (Text Information Retrieval Conference) and CLEF (Conference and Labs of the Evaluation Forum). This paper surveys QE techniques in IR from 1960 to 2017 with respect to core techniques, data sources used, weighting and ranking methodologies, user participation and applications -- bringing out similarities and differences.", "target": ["検索で、クエリ側を拡張するQuery Expansionのサーベイ。入力されるキーワードは1~2語が大半で、また入力者は対象に関する知識がないこともある。そこでクエリ側を拡張しようという分野。変換の類型(Methodology)とそれを行うための手法(Approach)、データソース、また適用例についてまとめられている"]} {"source": "People enjoy food photography because they appreciate food. Behind each meal there is a story described in a complex recipe and, unfortunately, by simply looking at a food image we do not have access to its preparation process. Therefore, in this paper we introduce an inverse cooking system that recreates cooking recipes given food images. Our system predicts ingredients as sets by means of a novel architecture, modeling their dependencies without imposing any order, and then generates cooking instructions by attending to both image and its inferred ingredients simultaneously. We extensively evaluate the whole system on the large-scale Recipe1M dataset and show that (1) we improve performance w.r.t. previous baselines for ingredient prediction; (2) we are able to obtain high quality recipes by leveraging both image and ingredients; (3) our system is able to produce more compelling recipes than retrieval-based approaches according to human judgment. 
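As one concrete example of the kind of query expansion technique covered by the survey entry above, here is a Rocchio-style pseudo-relevance-feedback sketch; the tf-idf vectors and weights are placeholders chosen for illustration, not something drawn from the survey itself.

```python
import numpy as np

def rocchio_expand(query_vec, top_ranked_docs, alpha=1.0, beta=0.75):
    """Move the query toward the centroid of the documents returned by an initial search."""
    return alpha * query_vec + beta * np.mean(top_ranked_docs, axis=0)

q = np.array([1.0, 0.0, 0.0])                                 # toy tf-idf query vector
docs = np.array([[0.8, 0.3, 0.0], [0.9, 0.0, 0.2]])           # vectors of the initially retrieved docs
print(rocchio_expand(q, docs))                                # expanded query now weights related terms
```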
We make code and models publicly available.", "target": ["Image Captionのように、料理の画像からレシピを生成するという研究。直接の生成は難しいので、画像をEncodeしたのち一旦材料の予測(Decode)を行い、予測した材料のEncode+画像のEncodeで手順の生成を行なっている。"]} {"source": "We present a deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). Our network achieves an AUC of 0.895 in predicting whether there is a cancer in the breast, when tested on the screening population. We attribute the high accuracy of our model to a two-stage training procedure, which allows us to use a very high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and find our model to be as accurate as experienced radiologists when presented with the same data. Finally, we show that a hybrid model, averaging probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately. To better understand our results, we conduct a thorough analysis of our network's performance on different subpopulations of the screening population, model design, training procedure, errors, and properties of its internal representations.", "target": ["CNNを用いて乳がんの診断を行うモデルの提案。マンモグラフィで撮影される2方向(脇から内側にかけての斜め方向=MLO、上下方向=CC)の画像を入力として良性腫瘍の有無・悪性腫瘍の有無計4つを予測。パッチレベルのネットワークを別途作成し、良性/悪性の予測で作成したヒートマップを追加入力にしている"]} {"source": "In the space of only a few years, deep generative modeling has revolutionized how we think of artificial creativity, yielding autonomous systems which produce original images, music, and text. Inspired by these successes, researchers are now applying deep generative modeling techniques to the generation and optimization of molecules - in our review we found 45 papers on the subject published in the past two years. These works point to a future where such systems will be used to generate lead molecules, greatly reducing resources spent downstream synthesizing and characterizing bad leads in the lab. In this review we survey the increasingly complex landscape of models and representation schemes that have been proposed. The four classes of techniques we describe are recursive neural networks, autoencoders, generative adversarial networks, and reinforcement learning. After first discussing some of the mathematical fundamentals of each technique, we draw high level connections and comparisons with other techniques and expose the pros and cons of each. Several important high level themes emerge as a result of this work, including the shift away from the SMILES string representation of molecules towards more sophisticated representations such as graph grammars and 3D representations, the importance of reward function design, the need for better standards for benchmarking and testing, and the benefits of adversarial training and reinforcement learning over maximum likelihood based training.", "target": ["分子設計に対しDNNを適用する手法のサーベイ。まず分子の表現方法から始まり、それを利用した生成手法、生成結果の評価/報酬、という3部構成でまとめられている。既存のアーキテクチャーがまとめられたTable2は要チェック。"]} {"source": "In medicine, both ethical and monetary costs of incorrect predictions can be significant, and the complexity of the problems often necessitates increasingly complex models. Recent work has shown that changing just the random seed is enough for otherwise well-tuned deep neural networks to vary in their individual predicted probabilities. 
In light of this, we investigate the role of model uncertainty methods in the medical domain. Using RNN ensembles and various Bayesian RNNs, we show that population-level metrics, such as AUC-PR, AUC-ROC, log-likelihood, and calibration error, do not capture model uncertainty. Meanwhile, the presence of significant variability in patient-specific predictions and optimal decisions motivates the need for capturing model uncertainty. Understanding the uncertainty for individual patients is an area with clear clinical impact, such as determining when a model decision is likely to be brittle. We further show that RNNs with only Bayesian embeddings can be a more efficient way to capture model uncertainty compared to ensembles, and we analyze how model uncertainty is impacted across individual input features and patient subgroups.", "target": ["医療において予測の不確実性はとても大きな問題なので、既存の不確実性をとらえる手法が実際の臨床データでどれぐらい有効かを調べた研究。実験から、AUCや尤度といった既存のメトリクスでは固有の不確実性(特定の患者集団に対してだけ不確実性が高いなど)を評価できないことを指摘している。"]} {"source": "Recent research in cross-lingual word embeddings has almost exclusively focused on offline methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations. While several authors have questioned the underlying isomorphism assumption, which states that word embeddings in different languages have approximately the same structure, it is not clear whether this is an inherent limitation of mapping approaches or a more general issue when learning cross-lingual embeddings. So as to answer this question, we experiment with parallel corpora, which allows us to compare offline mapping to an extension of skip-gram that jointly learns both embedding spaces. We observe that, under these ideal conditions, joint learning yields more isomorphic embeddings, is less sensitive to hubness, and obtains stronger results in bilingual lexicon induction. We thus conclude that current mapping methods do have strong limitations, calling for further research to jointly learn cross-lingual embeddings with a weaker cross-lingual signal.", "target": ["多言語の分散表現について、一緒に学習するのが良いか(joint)、別々に学習して対応を取るのがいいか(mapping)調べた研究。結果として、一緒に学習する方が同型の(isomorphic)、かつノイズ(Hubness)が低い構造を学習できるという結果。"]} {"source": "Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help. Here we describe how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by machine learning, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the machine learning community to join the global effort against climate change.", "target": ["気候変動に関する活動と、機械学習の親和性について調査された研究。画像や自然言語処理など、機械学習の各技術が気候変動のどの活動にどれぐらい役立ちそうかまとめられている。役立つ、だけでなく適用にリスク/副作用が考えられるという点についても述べられている。"]} {"source": "Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads in the encoder to the overall performance of the model and analyze the roles played by them. We find that the most important and confident heads play consistent and often linguistically-interpretable roles. 
When pruning heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, we observe that specialized heads are last to be pruned. Our novel pruning method removes the vast majority of heads without seriously affecting performance. For example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.", "target": ["TransformerのAttention Headについて、役に立っているHeadとそうでないHead、役に立っているものはどんな役割を果たしているのかについて調べた研究。結果として多くのHeadは枝刈り可能で、残ったHeadは位置関係/係り受け関係などの情報を持っているという。"]} {"source": "Scene graph prediction --- classifying the set of objects and predicates in a visual scene --- requires substantial training data. However, most predicates only occur a handful of times making them difficult to learn. We introduce the first scene graph prediction model that supports few-shot learning of predicates. Existing scene graph generation models represent objects using pretrained object detectors or word embeddings that capture semantic object information at the cost of encoding information about which relationships they afford. So, these object representations are unable to generalize to new few-shot relationships. We introduce a framework that induces object representations that are structured according to their visual relationships. Unlike past methods, our framework embeds objects that afford similar relationships closer together. This property allows our model to perform well in the few-shot setting. For example, applying the 'riding' predicate transformation to 'person' modifies the representation towards objects like 'skateboard' and 'horse' that enable riding. We generate object representations by learning predicates trained as message passing functions within a new graph convolution framework. The object representations are used to build few-shot predicate classifiers for rare predicates with as few as 1 labeled example. We achieve a 5-shot performance of 22.70 recall@50, a 3.7 increase when compared to strong transfer learning baselines.", "target": ["グラフを用いてシーンを理解させる手法の研究。ノードを認識オブジェクト、エッジを述語でつなぐ方式をとっている(英文法のSVOが基本で、S/Oがノード、VがS=>O/O=>Sへ変換するエッジとなっている)。文法構造を基本とすることで、Few-shotでの関係予測が行えるよう試みている"]} {"source": "Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk's adaptive schedule allows it to arrive at latency-quality trade-offs that are favorable to those of a recently proposed wait-k strategy for many latency values.", "target": ["リアルタイムの翻訳を行うためにAttentionを工夫した研究。基本は直近のStateを重視するMonotonicなAttentionだが、直近を重視するほど計算の待ち時間が長くなるため、直近ほどコストが高くなるようペナルティ項をもうけている。一方過去は計算済みのため、範囲を限定せずAttentionを許可している"]} {"source": "Deep generative models (DGMs) of images are now sufficiently mature that they produce nearly photorealistic samples and obtain scores similar to the data distribution on heuristics such as Frechet Inception Distance (FID). 
These results, especially on large-scale datasets such as ImageNet, suggest that DGMs are learning the data distribution in a perceptually meaningful space and can be used in downstream tasks. To test this latter hypothesis, we use class-conditional generative models from a number of model classes---variational autoencoders, autoregressive models, and generative adversarial networks (GANs)---to infer the class labels of real data. We perform this inference by training an image classifier using only synthetic data and using the classifier to predict labels on real data. The performance on this task, which we call Classification Accuracy Score (CAS), reveals some surprising results not identified by traditional metrics and constitute our contributions. First, when using a state-of-the-art GAN (BigGAN-deep), Top-1 and Top-5 accuracy decrease by 27.9\\% and 41.6\\%, respectively, compared to the original data; and conditional generative models from other model classes, such as Vector-Quantized Variational Autoencoder-2 (VQ-VAE-2) and Hierarchical Autoregressive Models (HAMs), substantially outperform GANs on this benchmark. Second, CAS automatically surfaces particular classes for which generative models failed to capture the data distribution, and were previously unknown in the literature. Third, we find traditional GAN metrics such as Inception Score (IS) and FID neither predictive of CAS nor useful when evaluating non-GAN models. Furthermore, in order to facilitate better diagnoses of generative models, we open-source the proposed metric.", "target": ["GANは本物っぽい画像を作れるようになったけど、その生成画像だけで分類器作っても上手くいかなかった、という研究。分類器の精度とGAN評価で頻繁に使われるFIDスコアとは相関がなく、FIDスコアが低い自己回帰やVQ-VAEで生成したデータの方がよい精度の分類器を作れたとのこと。"]} {"source": "Recent work has explored sequence-to-sequence latent variable models for expressive speech synthesis (supporting control and transfer of prosody and style), but has not presented a coherent framework for understanding the trade-offs between the competing methods. In this paper, we propose embedding capacity (the amount of information the embedding contains about the data) as a unified method of analyzing the behavior of latent variable models of speech, comparing existing heuristic (non-variational) methods to variational methods that are able to explicitly constrain capacity using an upper bound on representational mutual information. In our proposed model (Capacitron), we show that by adding conditional dependencies to the variational posterior such that it matches the form of the true posterior, the same model can be used for high-precision prosody transfer, text-agnostic style transfer, and generation of natural-sounding prior samples. For multi-speaker models, Capacitron is able to preserve target speaker identity during inter-speaker prosody transfer and when drawing samples from the latent prior. Lastly, we introduce a method for decomposing embedding capacity hierarchically across two sets of latents, allowing a portion of the latent variability to be specified and the remaining variability sampled from a learned prior. Audio examples are available on the web.", "target": ["音声合成における、潜在表現の役割を分析するフレームワークの提案。テキスト/話者以外に与える潜在表現は、テキストと音声のマッチ(合成精度)を優先するか、テキスト独立の音声表現(スタイル)を優先するかのトレードオフがある。教師とするEncoderでこれがどう異なるかをペナルティ項から分析している"]} {"source": "Not all neural network architectures are created equal, some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? 
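Returning to the Classification Accuracy Score (CAS) defined earlier in this record: a classifier is trained only on class-conditional samples from the generator and then evaluated on real held-out data. A minimal sketch with scikit-learn; the logistic-regression classifier and the Gaussian "generated" data are stand-ins for the real image pipeline, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classification_accuracy_score(x_synth, y_synth, x_real_test, y_real_test):
    """Train only on (class-conditionally) generated samples, evaluate on real
    test data. Higher CAS = the generator's class-conditional distributions
    are more useful downstream."""
    clf = LogisticRegression(max_iter=1000).fit(x_synth, y_synth)
    return clf.score(x_real_test, y_real_test)

# Toy stand-in: two "real" classes; one generator matches them, one collapses.
rng = np.random.default_rng(0)
def real_data(n):
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=y[:, None] * 3.0, scale=1.0, size=(n, 5))
    return x, y

x_test, y_test = real_data(2000)
x_good, y_good = real_data(2000)                        # "generator" that matches the data
y_bad = rng.integers(0, 2, 2000)
x_bad = rng.normal(loc=1.5, scale=0.2, size=(2000, 5))  # mode-collapsed "generator"

print(classification_accuracy_score(x_good, y_good, x_test, y_test))  # high
print(classification_accuracy_score(x_bad, y_bad, x_test, y_test))    # near chance (~0.5)
```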
In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. We propose a search method for neural network architectures that can already perform a task without any explicit weight training. To evaluate these networks, we populate the connections with a single shared weight parameter sampled from a uniform random distribution, and measure the expected performance. We demonstrate that our method can find minimal neural network architectures that can perform several reinforcement learning tasks without weight training. On a supervised learning domain, we find network architectures that achieve much higher than chance accuracy on MNIST using random weights. Interactive version of this paper at this https URL", "target": ["ニューラルネットにおける構造の重要性を調べた研究。学習を一切せず、構造探索のみでタスクが解けるか検証している。進化戦略で優秀な構造を残していく手法を取っており、評価時の重みは一様分布から取得した共通のものを使う。これにより、学習なしでいくつかの強化学習タスクを解くことに成功。"]} {"source": "We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis. In contrast to other state-of-the-art approaches, the toolkit we develop is rather minimal: it uses a single, off-the-shelf classifier for all these tasks. The crux of our approach is that we train this classifier to be adversarially robust. It turns out that adversarial robustness is precisely what we need to directly manipulate salient features of the input. Overall, our findings demonstrate the utility of robustness in the broader machine learning context. Code and models for our experiments can be found at this https URL.", "target": ["Adversarial耐性のある分類機が、画像の生成(変換)やInpaint、超解像に有効であると示した研究。Adversarial耐性がある=画像(クラス)の本質的な特徴を捉えているので、画像のクラス変換を行う際は顕著な特徴を入れることが、Inpaint/超解像ではより「らしい」補完を行うことができるという。"]} {"source": "We introduce a pretraining technique called Selfie, which stands for SELFie supervised Image Embedding. Selfie generalizes the concept of masked language modeling of BERT (Devlin et al., 2019) to continuous data, such as images, by making use of the Contrastive Predictive Coding loss (Oord et al., 2018). Given masked-out patches in an input image, our method learns to select the correct patch, among other \"distractor\" patches sampled from the same image, to fill in the masked location. This classification objective sidesteps the need for predicting exact pixel values of the target patches. The pretraining architecture of Selfie includes a network of convolutional blocks to process patches followed by an attention pooling network to summarize the content of unmasked patches before predicting masked ones. During finetuning, we reuse the convolutional weights found by pretraining. We evaluate Selfie on three benchmarks (CIFAR-10, ImageNet 32 x 32, and ImageNet 224 x 224) with varying amounts of labeled data, from 5% to 100% of the training sets. Our pretraining method provides consistent improvements to ResNet-50 across all settings compared to the standard supervised training of the same network. Notably, on ImageNet 224 x 224 with 60 examples per class (5%), our method improves the mean accuracy of ResNet-50 from 35.6% to 46.7%, an improvement of 11.1 points in absolute accuracy. 
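The weight-agnostic evaluation described above scores a topology by running it with a single shared weight drawn from a range of values and averaging the outcome. A minimal sketch with a small feed-forward DAG and a stand-in fitness function; the topology, weight range, activation choice, and task are illustrative assumptions, not the paper's search setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(adj, n_in, x, weight):
    """Propagate through a feed-forward DAG (nodes in topological order).
    Every existing connection shares the same single `weight` value."""
    act = np.zeros(adj.shape[0])
    act[:n_in] = x
    for j in range(n_in, adj.shape[0]):
        act[j] = np.tanh(weight * adj[:j, j] @ act[:j])
    return act[-1]                          # last node is the output

def weight_agnostic_score(adj, n_in, fitness, weights=(-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)):
    """Score an architecture by its average fitness over several shared-weight
    values, so topologies that need one finely tuned weight are not rewarded."""
    return np.mean([fitness(lambda x, w=w: forward(adj, n_in, x, w)) for w in weights])

def fitness(policy):
    """Stand-in evaluation: negative MSE against a toy target function."""
    xs = rng.normal(size=(50, 2))
    preds = np.array([policy(x) for x in xs])
    return -np.mean((preds - np.tanh(xs.sum(axis=1))) ** 2)

# A tiny candidate topology: 2 inputs -> 1 hidden node -> 1 output.
adj = np.zeros((4, 4))
adj[0, 2] = adj[1, 2] = 1.0
adj[2, 3] = 1.0
print(weight_agnostic_score(adj, n_in=2, fitness=fitness))
```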
Our pretraining method also improves ResNet-50 training stability, especially on low data regime, by significantly lowering the standard deviation of test accuracies across different runs.", "target": ["BERT(#959)やGPT(#790)で使われている言語モデルの学習手法を、画像表現の学習に応用した研究。テキスト内の単語を部分的に落とし(Maskし)そこを予測させるという手法を、画像をパッチに分け部分的に落とし、残った部分の情報から抜けたパッチの位置を予測させるという形に置き換え学習を行なっている。"]} {"source": "Deep neural networks and decision trees operate on largely separate paradigms; typically, the former performs representation learning with pre-specified architectures, while the latter is characterised by learning hierarchies over pre-specified features with data-driven architectures. We unite the two via adaptive neural trees (ANTs), a model that incorporates representation learning into edges, routing functions and leaf nodes of a decision tree, along with a backpropagation-based training algorithm that adaptively grows the architecture from primitive modules (e.g., convolutional layers). We demonstrate that, whilst achieving competitive performance on classification and regression datasets, ANTs benefit from (i) lightweight inference via conditional computation, (ii) hierarchical separation of features useful to the predictive task e.g. learning meaningful class associations, such as separating natural vs. man-made objects, and (iii) a mechanism to adapt the architecture to the size and complexity of the training dataset.", "target": ["DNNとDecision Treeを複合させる研究。木の経路・分岐それぞれで表現学習を行う+木構造が固定的でないものは初とのこと。growthではleafにデータ分割/データ変換/既存維持のいずれかを追加・追加箇所以外の重みを固定し学習、を行なっていきlossが小さくなる操作を採用。refinementで全体の学習を行う"]} {"source": "Multi-hop Reading Comprehension (RC) requires reasoning and aggregation across several paragraphs. We propose a system for multi-hop RC that decomposes a compositional question into simpler sub-questions that can be answered by off-the-shelf single-hop RC models. Since annotations for such decomposition are expensive, we recast sub-question generation as a span prediction problem and show that our method, trained using only 400 labeled examples, generates sub-questions that are as effective as human-authored sub-questions. We also introduce a new global rescoring approach that considers each decomposition (i.e. the sub-questions and their answers) to select the best final answer, greatly improving overall performance. Our experiments on HotpotQA show that this approach achieves the state-of-the-art results, while providing explainable evidence for its decision making in the form of sub-questions.", "target": ["複数根拠文にまたがる推論が必要な質問(日光は日本にある/日光には猿がいる=日本に猿がいる、等)について、一度に回答せず一つ一つの質問に分解して答えるという手法。個別質問への分解は少量のアノテーションで学習し(400件)、入力文を元にどの個別質問の回答が正答かを予測する。"]} {"source": "To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand. Recent advances in representation learning for language make it possible to build models that acquire world knowledge from text corpora and integrate this knowledge into downstream decision making problems. We thus argue that the time is right to investigate a tight integration of natural language understanding into RL in particular. We survey the state of the field, including work on instruction following, text games, and learning from textual domain knowledge. 
Finally, we call for the development of new environments as well as further investigation into the potential uses of recent Natural Language Processing (NLP) techniques for such tasks.", "target": ["BERTやELMoが登場した今、自然言語と強化学習の組み合わせを考える時期に来ているのでは?という研究。言語で指示するか補助するか、という2つの観点で既存研究を整理している(言語理解が必須か否かで指示/補助が別れる)。自然言語知識を経由したタスク間の転移を最終的に目指しているよう。"]} {"source": "We develop a generalisation of disentanglement in variational autoencoders (VAEs)—decomposition of the latent representation—characterising it as the fulfilment of two factors: a) the latent encodings of the data having an appropriate level of overlap, and b) the aggregate encoding of the data conforming to a desired structure, represented through the prior. Decomposition permits disentanglement, i.e. explicit independence between latents, as a special case, but also allows for a much richer class of properties to be imposed on the learnt representation, such as sparsity, clustering, independent subspaces, or even intricate hierarchical dependency relationships. We show that the β-VAE varies from the standard VAE predominantly in its control of latent overlap and that for the standard choice of an isotropic Gaussian prior, its objective is invariant to rotations of the latent representation. Viewed from the decomposition perspective, breaking this invariance with simple manipulations of the prior can yield better disentanglement with little or no detriment to reconstructions. We further demonstrate how other choices of prior can assist in producing different decompositions and introduce an alternative training objective that allows the control of both decomposition factors in a principled manner.", "target": ["VAEのような入力を構造分解(Disentangle)する手法について、評価手法+改善方法を提案した研究。分解した分布間の適切なoverlapと、全体を合わせたときのデータ分布との適合性の2点を評価点として提案。β-VAEは前者はコントロールできるが後者はできないことから、事前/事後の距離を正則化項として追加"]} {"source": "A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. We show that PBA can match the performance of AutoAugment on CIFAR-10, CIFAR-100, and SVHN, with three orders of magnitude less overall compute. On CIFAR-10 we achieve a mean test error of 1.46%, which is a slight improvement upon the current state-of-the-art. The code for PBA is open source and is available at this https URL.", "target": ["自動で最適なData Augmentationを探索するAutoAugment(#764 )の計算時間を1000倍高速にしたという研究。Data Augmentationの適用もパラメーターの一種と考え、進化戦略(PBA)を用いて良好な結果を出したモデル/Augmentationを残していく形を取っている。モデルのパラメーターが持ち越されるため再計算の必要がない"]} {"source": "Bridging the 'reality gap' that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. 
We find that it is possible to train a real-world object detector that is accurate to 1.5cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.", "target": ["シミュレーション環境をランダム化する事で実環境への転移が可能である事を示した論文。 この論文ではObject localization(単純な物体の重心位置の推定)をしているが、最終的にはロボット制御への応用のための研究。後続の論文では、ロボットコントロールに応用している。 ロボット制御の目的のために、実画像の事前トレーニング無しのシミュレーション画像のみで学習されたモデルを現実に転移させる事に初めて成功した研究。"]} {"source": "Transformer architectures show significant promise for natural language processing. Given that a single pretrained model can be fine-tuned to perform well on many different tasks, these networks appear to extract generally useful linguistic features. A natural question is how such networks represent this information internally. This paper describes qualitative and quantitative investigations of one particularly effective model, BERT. At a high level, linguistic features seem to be represented in separate semantic and syntactic subspaces. We find evidence of a fine-grained geometric representation of word senses. We also present empirical descriptions of syntactic representations in both attention matrices and individual word embeddings, as well as a mathematical argument to explain the geometry of these representations.", "target": ["BERT(#959)内で単語/単語間の関係がどう表現されているかを調べた研究。単語間の関係はAttention、単語の意味については複数文を入れた場合の位置変動を可視化し確認+語義曖昧性タスクの精度で検証している。BERTは係り受けや単語意味を認識しており、この「認識」は各線型空間に射影できるのではとしている"]} {"source": "This paper explores the use of knowledge distillation to improve a Multi-Task Deep Neural Network (MT-DNN) (Liu et al., 2019) for learning text representations across multiple natural language understanding tasks. Although ensemble learning can improve model performance, serving an ensemble of large DNNs such as MT-DNN can be prohibitively expensive. Here we apply the knowledge distillation method (Hinton et al., 2015) in the multi-task learning setting. For each task, we train an ensemble of different MT-DNNs (teacher) that outperforms any single model, and then train a single MT-DNN (student) via multi-task learning to \\emph{distill} knowledge from these ensemble teachers. We show that the distilled MT-DNN significantly outperforms the original MT-DNN on 7 out of 9 GLUE tasks, pushing the GLUE benchmark (single model) to 83.7\\% (1.5\\% absolute improvement\\footnote{ Based on the GLUE leaderboard at this https URL as of April 1, 2019.}). The code and pre-trained models will be made publicly available at this https URL.", "target": ["モデルの蒸留を行い精度を上げる手法を、マルチタスクにも適用した研究。蒸留では教師となるネットワークを先に学習し、教師の予測で生徒を学習させる。本研究では各タスクで教師(アンサンブルのモデル)を作成し、学習データに加え教師モデルの予測結果(分類なら分類確率=soft target)も使い学習を行う"]} {"source": "When function approximation is deployed in reinforcement learning (RL), the same problem may be formulated in different ways, often by treating a pre-processing step as a part of the environment or as part of the agent. As a consequence, fundamental concepts in RL, such as (optimal) value functions, are not uniquely defined as they depend on where we draw this agent-environment boundary, causing problems in theoretical analyses that provide optimality guarantees. 
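Domain randomization, as summarized above, resamples the simulator's nuisance parameters for every rendered scene so that the real world looks like just another variation. A minimal sketch of such a randomized rendering configuration; the parameter names, ranges, and the `simulator_render` hook are illustrative assumptions, not the paper's exact settings.

```python
import random

def sample_render_config(n_distractors_max=10):
    """Draw one randomized simulator configuration: non-realistic textures,
    camera pose, lighting and distractor objects all vary per scene."""
    return {
        "object_texture_rgb": [random.random() for _ in range(3)],
        "table_texture_rgb": [random.random() for _ in range(3)],
        "camera_position_xyz": [random.uniform(-0.1, 0.1),
                                random.uniform(-0.1, 0.1),
                                random.uniform(0.7, 1.1)],
        "light_intensity": random.uniform(0.2, 2.0),
        "n_distractors": random.randint(0, n_distractors_max),
        "pixel_noise_std": random.uniform(0.0, 0.05),
    }

def make_training_scene(simulator_render, object_pose):
    """One training example: render the same object pose under a fresh random
    configuration; the label is the object position for the localization task.
    `simulator_render` is a user-supplied rendering hook."""
    cfg = sample_render_config()
    image = simulator_render(object_pose, cfg)
    return image, object_pose, cfg

# Usage sketch: images = [make_training_scene(my_renderer, pose) for pose in poses]
print(sample_render_config())
```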
We address this issue via a simple and novel boundary-invariant analysis of Fitted Q-Iteration, a representative RL algorithm, where the assumptions and the guarantees are invariant to the choice of boundary. We also discuss closely related issues on state resetting and Monte-Carlo Tree Search, deterministic vs stochastic systems, imitation learning, and the verifiability of theoretical assumptions from data.", "target": ["強化学習において環境とエージェント(価値関数etc)の境界をどこに置くのかという研究。画面を入力とする場合、一般的には画面に対し前処理を行う。この場合「状態」とは元の画面なのか、前処理済みの画面なのか?という問い。状態に対する前提がアルゴリズムの理論的な解析に影響を与えているという。"]} {"source": "Standard sequential generation methods assume a pre-specified generation order, such as text generation methods which generate words from left to right. In this work, we propose a framework for training models of text generation that operate in non-monotonic orders; the model directly learns good orders, without any additional annotation. Our framework operates by generating a word at an arbitrary position, and then recursively generating words to its left and then words to its right, yielding a binary tree. Learning is framed as imitation learning, including a coaching method which moves from imitating an oracle to reinforcing the policy's own preferences. Experimental results demonstrate that using the proposed method, it is possible to learn policies which generate text without pre-specifying a generation order, while achieving competitive performance with conventional left-to-right generation.", "target": ["順不同で文生成を行う研究。Quick Sortを模した生成を手本とし、模倣学習の枠組みで学習を行う。ある単語を選択=>その単語を境界に前半/後半を分ける、という操作を繰り返し、最終的にEndが予測されるまで行うという形。通常の生成(単方向)に若干劣るものの、そこそこの精度での生成ができている。"]} {"source": "Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales that time-domain models have yet to achieve. We apply our model to a variety of audio generation tasks, including unconditional speech generation, music generation, and text-to-speech synthesis---showing improvements over previous approaches in both density estimates and human judgments.", "target": ["音声の生成を、一般的な振幅X時系列ではなく周波数X時系列(スペクトログラム)で行った研究。スペクトログラムは縦横軸の意味がそれぞれ異なるため、雑にCNNで畳み込むのは適さない。そこで、時間方向/周波数方向で別個のEncode(RNN)を行い生成を行っている。"]} {"source": "Object segmentation is a crucial problem that is usually solved by using supervised learning approaches over very large datasets composed of both images and corresponding object masks. Since the masks have to be provided at pixel level, building such a dataset for any new domain can be very time-consuming. We present ReDO, a new model able to extract objects from images without any annotation in an unsupervised way. It relies on the idea that it should be possible to change the textures or colors of the objects without changing the overall distribution of the dataset. Following this assumption, our approach is based on an adversarial architecture where the generator is guided by an input sample: given an image, it extracts the object mask, then redraws a new object at the same location. 
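The ReDO step just described (extract an object mask, then redraw a new object in the same location) reduces, at composition time, to blending generated content into the masked region. A minimal sketch of that compositing step; in the actual model both the soft mask and the redrawn content come from the generator, whereas here they are stand-in arrays.

```python
import numpy as np

def redraw(image, mask, new_object):
    """Composite a redrawn object into the masked region:
    out = mask * new_object + (1 - mask) * image.
    image, new_object: (H, W, 3) in [0, 1]; mask: (H, W, 1) soft mask in [0, 1]."""
    return mask * new_object + (1.0 - mask) * image

rng = np.random.default_rng(0)
h, w = 32, 32
image = rng.uniform(size=(h, w, 3))

# Stand-in soft mask covering a square region (the model would predict this).
mask = np.zeros((h, w, 1))
mask[8:24, 8:24, :] = 1.0

# Stand-in "redrawn object": a flat green texture (the model would generate this).
new_object = np.zeros((h, w, 3))
new_object[..., 1] = 1.0

composite = redraw(image, mask, new_object)
assert composite.shape == image.shape
# Outside the mask the pixels are untouched; inside, they come from the redrawn object.
print(np.abs(composite[0, 0] - image[0, 0]).max(), composite[16, 16])
```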
The generator is controlled by a discriminator that ensures that the distribution of generated images is aligned to the original one. We experiment with this method on different datasets and demonstrate the good quality of extracted masks.", "target": ["セグメンテーションを、画像生成の文脈で解くという研究。セグメンテーションの学習はピクセルレベルのマスクアノテーションが必要で、データを作るコストが高い。しかし、画像生成なら教師なしで行える。具体的にはGANの枠組みで一旦マスクを生成し、各マスク内の画像を生成+合わせる形で生成を行う"]} {"source": "We introduce a lifelong language learning setup where a model needs to learn from a stream of text examples without any dataset identifier. We propose an episodic memory model that performs sparse experience replay and local adaptation to mitigate catastrophic forgetting in this setup. Experiments on text classification and question answering demonstrate the complementary benefits of sparse experience replay and local adaptation to allow the model to continuously learn from new datasets. We also show that the space complexity of the episodic memory module can be reduced significantly (~50-90%) by randomly choosing which examples to store in memory with a minimal decrease in performance. We consider an episodic memory component as a crucial building block of general linguistic intelligence and see our model as a first step in that direction.", "target": ["継続的な学習を行う自然言語処理のモデルの提案。強化学習で使用されるExperience Replayに似た機構を持ち、ここに学習サンプルをランダムに格納する。学習中定期的にReplayから学習するほか、推論時もテストサンプルに似たデータをサンプリングし追加学習する(local adaptation)。"]} {"source": "Neural networks are easier to optimise when they have many more weights than are required for modelling the mapping from inputs to outputs. This suggests a two-stage learning procedure that first learns a large net and then prunes away connections or hidden units. But standard training does not necessarily encourage nets to be amenable to pruning. We introduce targeted dropout, a method for training a neural network so that it is robust to subsequent pruning. Before computing the gradients for each weight update, targeted dropout stochastically selects a set of units or weights to be dropped using a simple self-reinforcing sparsity criterion and then computes the gradients for the remaining weights. The resulting network is robust to post hoc pruning of weights or units that frequently occur in the dropped sets. The method improves upon more complicated sparsifying regularisers while being simple to implement and easy to tune.", "target": ["Dropoutを、あまり役立っていない接続に狙いをつけて行う手法の提案。イメージ的には枝刈りを学習中に行なっている形に近い。DNNでは役立っている接続が全体として少ないため、役立っているものに絞り込んで学習していこうという発想。"]} {"source": "Multilingual neural machine translation (NMT) enables training a single model that supports translation from multiple source languages into multiple target languages. In this paper, we push the limits of multilingual NMT in terms of number of languages being used. We perform extensive experiments in training massively multilingual NMT models, translating up to 102 languages to and from English within a single model. We explore different setups for training such models and analyze the trade-offs between translation quality and various modeling decisions. We report results on the publicly available TED talks multilingual corpus where we show that massively multilingual many-to-many models are effective in low resource settings, outperforming the previous state-of-the-art while supporting up to 59 languages. 
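Targeted dropout, described above, applies dropout only within a candidate set of low-magnitude weights, so that later magnitude pruning removes weights the network has already learned to do without. A minimal NumPy sketch of one forward-pass mask; the targeting fraction `gamma` and drop rate `alpha` are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def targeted_dropout_mask(weights, gamma=0.5, alpha=0.5):
    """Build a keep-mask for one weight matrix:
    1) candidate set = the `gamma` fraction of weights with smallest magnitude,
    2) each candidate is dropped independently with probability `alpha`,
    3) non-candidate (large) weights are always kept."""
    flat = np.abs(weights).ravel()
    k = int(gamma * flat.size)                       # number of candidate weights
    threshold = np.sort(flat)[k - 1] if k > 0 else -np.inf
    candidate = np.abs(weights) <= threshold
    dropped = candidate & (rng.random(weights.shape) < alpha)
    return ~dropped                                   # True = keep this weight

W = rng.normal(size=(64, 64))
mask = targeted_dropout_mask(W, gamma=0.5, alpha=0.5)
W_train_step = W * mask            # use this masked matrix in the forward pass

# After training, magnitude pruning removes the same low-magnitude region the
# network has already learned to live without.
print("kept:", mask.mean(),
      "mean |w| kept:", np.abs(W[mask]).mean(),
      "mean |w| dropped:", np.abs(W[~mask]).mean())
```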
Our experiments on a large-scale dataset with 102 languages to and from English and up to one million examples per direction also show promising results, surpassing strong bilingual baselines and encouraging future work on massively multilingual NMT.", "target": ["複数言語の翻訳を並列で学習させることが、どれだけスケールするかを調べた研究。タスクはto英語/from英語が基本で、英語=>複数言語、複数言語=>英語、そしてfrom/to両方学習の計3つ。to英語/from英語単体より、両方学習の方が良いとの結果。この傾向は学習データの大きさによらず観測されるという。"]} {"source": "We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before. We use simple feed-forward encoder and decoder networks, making our model an attractive candidate for applications where the encoding and/or decoding speed is critical. Additionally, VQ-VAE requires sampling an autoregressive model only in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.", "target": ["画像は圧縮しても劣化がほぼないことをヒントに、離散表現化(≒圧縮)を経由するVQ-VAE(#490)を用い画像生成を行なった研究。VQ-VAEはTop=>Bottomの2段階にしており、生成に使用する分布(PixelCNNベース)もTopのみ/Topで条件付けたBottomを使用。生成時は2つの分布から得たベクトルを単純な全結合に入力し行う"]} {"source": "To understand diverse natural language commands, virtual assistants today are trained with numerous labor-intensive, manually annotated sentences. This paper presents a methodology and the Genie toolkit that can handle new compound commands with significantly less manual effort. We advocate formalizing the capability of virtual assistants with a Virtual Assistant Programming Language (VAPL) and using a neural semantic parser to translate natural language into VAPL code. Genie needs only a small realistic set of input sentences for validating the neural model. Developers write templates to synthesize data; Genie uses crowdsourced paraphrases and data augmentation, along with the synthesized data, to train a semantic parser. We also propose design principles that make VAPL languages amenable to natural language translation. We apply these principles to revise ThingTalk, the language used by the Almond virtual assistant. We use Genie to build the first semantic parser that can support compound virtual assistants commands with unquoted free-form parameters. Genie achieves a 62% accuracy on realistic user inputs. We demonstrate Genie's generality by showing a 19% and 31% improvement over the previous state of the art on a music skill, aggregate functions, and access control.", "target": ["ユーザーの発話/入力テキストを、コマンド実行可能なフォーマットに変換するフレームワークの開発。変換先のコードはThingTalkをベースにしている。変換はNNで行うが当然学習データの問題があるため、テンプレートをベースにしたデータの自動生成やクラウドソーシングとの連携機能を搭載している。"]} {"source": "Does adding a theorem to a paper affect its chance of acceptance? Does labeling a post with the author's gender affect the post popularity? This paper develops a method to estimate such causal effects from observational text data, adjusting for confounding features of the text such as the subject or writing quality. We assume that the text suffices for causal adjustment but that, in practice, it is prohibitively high-dimensional. 
To address this challenge, we develop causally sufficient embeddings, low-dimensional document representations that preserve sufficient information for causal identification and allow for efficient estimation of causal effects. Causally sufficient embeddings combine two ideas. The first is supervised dimensionality reduction: causal adjustment requires only the aspects of text that are predictive of both the treatment and outcome. The second is efficient language modeling: representations of text are designed to dispose of linguistically irrelevant information, and this information is also causally irrelevant. Our method adapts language models (specifically, word embeddings and topic models) to learn document embeddings that are able to predict both treatment and outcome. We study causally sufficient embeddings with semi-synthetic datasets and find that they improve causal estimation over related embedding methods. We illustrate the methods by answering the two motivating questions---the effect of a theorem on paper acceptance and the effect of a gender label on post popularity. Code and data available at this https URL}{this http URL", "target": ["自然言語処理で因果推論を行う試み。論文のAccept、掲示板における投稿評価を題材に、BERTを用いたembeddingから論文の特定単語やTheoremが採択に影響を与える度合い、投稿者の性別が評価に与える影響を分析している。"]} {"source": "Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology also might enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news. Modern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary's point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present a model for controllable text generation called Grover. Given a headline like `Link Found Between Vaccines and Autism,' Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation. Developing robust verification techniques against generators like Grover is critical. We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators. We investigate these results further, showing that exposure bias -- and sampling strategies that alleviate its effects -- both leave artifacts that similar discriminators can pick up on. We conclude by discussing ethical issues regarding the technology, and plan to release Grover publicly, helping pave the way for better detection of neural fake news.", "target": ["言語モデルなどを利用したフェイクニュース生成に対応していくために、基礎的な脅威モデルを作成したという研究。具体的にはニュース生成器/識別器を作成し現時点での生成/識別精度を検証している。生成器は発行元/日付/タイトル/本文等について、指定されたもの以外を順次穴埋めしていく形で生成する"]} {"source": "A disentangled representation encodes information about the salient factors of variation in the data independently. Although it is often argued that this representational format is useful in learning to solve many real-world down-stream tasks, there is little empirical evidence that supports this claim. 
In this paper, we conduct a large-scale study that investigates whether disentangled representations are more suitable for abstract reasoning tasks. Using two new tasks similar to Raven's Progressive Matrices, we evaluate the usefulness of the representations learned by 360 state-of-the-art unsupervised disentanglement models. Based on these representations, we train 3600 abstract reasoning models and observe that disentangled representations do in fact lead to better down-stream performance. In particular, they enable quicker learning using fewer samples.", "target": ["既存の表現学習手法について、画像内オブジェクトを識別する性能を検証した研究。レーヴン漸進的マトリックスという画像内からパターンを認識して答えるテストを参考にデータセットを作成し、β-VAEなど代表的な手法についてパラメーターを変えて幅広く実験を行なっている。"]} {"source": "Leveraging new data sources is a key step in accelerating the pace of materials design and discovery. To complement the strides in synthesis planning driven by historical, experimental, and computed data, we present an automated method for connecting scientific literature to synthesis insights. Starting from natural language text, we apply word embeddings from language models, which are fed into a named entity recognition model, upon which a conditional variational autoencoder is trained to generate syntheses for arbitrary materials. We show the potential of this technique by predicting precursors for two perovskite materials, using only training data published over a decade prior to their first reported syntheses. We demonstrate that the model learns representations of materials corresponding to synthesis-related properties, and that the model's behavior complements existing thermodynamic knowledge. Finally, we apply the model to perform synthesizability screening for proposed novel perovskite compounds.", "target": ["無機合成の情報はデータベース化が進んでいない。論文テキストをCVAEで学習させることで合成手順の自動生成を実現した。外部知識なしに材料の特性に応じた合成手法を生成できた。"]} {"source": "Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL.", "target": ["精度/実行速度のトレードオフを上手くチューニングする手法と、チューニング元のネットワーク(EfficientNet)の提案(これはAutoMLで構築)。ネットワークの幅・深さ・入力画像の解像度(H x M)が調整対象で、これらの相関関係から係数を算出することで単一パラメーターによる同時調整を可能にしている"]} {"source": "Behavioral cloning reduces policy learning to supervised learning by training a discriminative model to predict expert actions given observations. 
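The compound scaling rule summarized above ties network depth, width, and input resolution to a single coefficient phi. A minimal sketch using the commonly cited form d = alpha^phi, w = beta^phi, r = gamma^phi with alpha * beta^2 * gamma^2 roughly equal to 2; the baseline depth/width/resolution below are illustrative, not the exact EfficientNet-B0 configuration.

```python
import math

def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15,
                   base_depth=18, base_width=64, base_resolution=224):
    """Scale depth/width/resolution together from one coefficient `phi`.
    Roughly doubling total FLOPs per +1 of phi requires alpha * beta^2 * gamma^2 ~= 2."""
    assert abs(alpha * beta**2 * gamma**2 - 2.0) < 0.1, "constants should roughly satisfy the FLOPs constraint"
    depth_mult, width_mult, res_mult = alpha ** phi, beta ** phi, gamma ** phi
    return {
        "depth": math.ceil(base_depth * depth_mult),
        "width": math.ceil(base_width * width_mult),
        "resolution": math.ceil(base_resolution * res_mult),
        "approx_flops_multiplier": depth_mult * width_mult**2 * res_mult**2,
    }

for phi in range(0, 4):
    print(phi, compound_scale(phi))
```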
Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment. We point out that ignoring causality is particularly damaging because of the distributional shift in imitation learning. In particular, it leads to a counter-intuitive \"causal misidentification\" phenomenon: access to more information can yield worse performance. We investigate how this problem arises, and propose a solution to combat it through targeted interventions---either environment interaction or expert queries---to determine the correct causal model. We show that causal misidentification occurs in several benchmark control domains as well as realistic driving settings, and validate our solution against DAgger and other baselines and ablations.", "target": ["強化学習で、見せかけの因果関係で行動をするケースを防止する手法の提案(「ブレーキを踏む」行動が「信号」ではなく「速度メーター」に依存するなど)。環境の情報が多いほど惑わされやすい。関係を示す0/1ベクトル(関係定義グラフから取得)で余分な情報をマスクし、模倣学習を用い学習させている。"]} {"source": "With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data. In order to evaluate and analyse the architecture, we introduce a family of simple visual relational reasoning tasks of varying complexity. We show that the proposed architecture, when pre-trained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks when compared to a number of baseline architectures. The workings of a successfully trained model are visualised to shed some light on how the architecture functions.", "target": ["DNNに、認識対象間の関係を論理法則(and/or/notなど)で表現することを学習させる手法の提案。対象オブジェクトの選択(attention)、関係評価の特徴選択(binding)、関係の推定(relation)の3段階からなる。画像内のオブジェクトパターンを認識するタスクで効果を確認。"]} {"source": "Although the popular MNIST dataset [LeCun et al., 1994] is derived from the NIST database [Grother and Hanaoka, 1995], the precise processing steps for this derivation have been lost to time. We propose a reconstruction that is accurate enough to serve as a replacement for the MNIST dataset, with insignificant changes in accuracy. We trace each MNIST digit to its NIST source and its rich metadata such as writer identifier, partition identifier, etc. We also reconstruct the complete MNIST test set with 60,000 samples instead of the usual 10,000. Since the balance 50,000 were never distributed, they enable us to investigate the impact of twenty-five years of MNIST experiments on the reported testing performances. Our results unambiguously confirm the trends observed by Recht et al. [2018, 2019]: although the misclassification rates are slightly off, classifier ordering and model selection remain broadly reliable. We attribute this phenomenon to the pairing benefits of comparing classifiers on the same digits.", "target": ["MNISTの原型を新しく作り直したという研究。具体的には、詳細が不明だった前処理のプロセスを構築し直し、当時計算リソースの問題から使われなかった50000のテストセットを発掘して加えている。"]} {"source": "While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. 
We consider two flow architectures: discrete autoregressive flows that enable bidirectionality, allowing, for example, tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows that enable efficient non-autoregressive generation as in RealNVP. Empirically, we find that discrete autoregressive flows outperform autoregressive baselines on synthetic discrete distributions, an addition task, and Potts models; and bipartite flows can obtain competitive performance with autoregressive baselines on character-level language modeling for Penn Tree Bank and text8.", "target": ["可逆変換可能な関数で、確率分布を連続的に変換するNormalizing Flowを離散データに対して適用したという研究。Normalizing Flowの構成要素である変換関数、変換による密度変化を調整する項の2つについて、離散の場合どうするかをそれぞれ定義して導出している。文字レベルの言語モデルで有効性を確認"]} {"source": "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.", "target": ["ICRL Best Paperの1つ。ネットワークの画像汚染と摂動に対する。AlexNetをベースラインとした評価指標を提案。汚染の方は、汚染された画像に対してどれだけ強いか、綺麗なデータと比較した場合の精度の落ち込み度、をAlexネットと比較する。 著者らがいうには、ヒストグラム平坦化、multi scaleで画像を取り込む手法(MSDNetsw等)や複数の特徴を取り込む手法(DenseNet, ResNext等)が頑健性に対して良いらしい。"]} {"source": "Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success.", "target": ["半教師あり学習で使われる3つの戦略(consistensy regularization: 本論ではデータ拡張された予測値の平均を用いる, generic regularization : 本論ではMIXUPを使う, entropy minimization:本論では温度項を用いてデータを決定境界から遠ざける)を合わせてsota達成。"]} {"source": "Modern neural sequence generation models are built to either generate tokens step-by-step from scratch or (iteratively) modify a sequence of tokens bounded by a fixed length. In this work, we develop Levenshtein Transformer, a new partially autoregressive model devised for more flexible and amenable sequence generation. Unlike previous approaches, the atomic operations of our model are insertion and deletion. 
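MixMatch, summarized above, guesses labels for unlabeled data by averaging predictions over several augmentations, sharpens the guess with a temperature, and then applies MixUp across labeled and unlabeled examples. A minimal sketch of the label-guessing, sharpening, and MixUp steps; the classifier and augmentation functions are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def sharpen(p, T=0.5):
    """Lower the entropy of a guessed label distribution (entropy minimization)."""
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """MixUp with lambda' = max(lambda, 1 - lambda), so the mixed example stays
    closer to its first argument, as in MixMatch."""
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def guess_labels(model, augment, x_unlabeled, K=2, T=0.5):
    """Average the model's predictions over K augmentations, then sharpen."""
    preds = np.mean([model(augment(x_unlabeled)) for _ in range(K)], axis=0)
    return sharpen(preds, T)

# Toy stand-ins: 3-class problem, "model" outputs softmax scores, "augment" adds noise.
model = lambda x: np.exp(x[:, :3]) / np.exp(x[:, :3]).sum(1, keepdims=True)
augment = lambda x: x + rng.normal(scale=0.1, size=x.shape)

x_u = rng.normal(size=(4, 8))
q = guess_labels(model, augment, x_u)          # guessed (sharpened) labels for unlabeled data
x_l, y_l = rng.normal(size=(4, 8)), np.eye(3)[rng.integers(0, 3, 4)]
x_mix, y_mix = mixup(x_l, y_l, x_u, q)         # mix the labeled batch with the unlabeled batch
print(q.sum(axis=1), y_mix.sum(axis=1))        # label rows still sum to 1
```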
The combination of them facilitates not only generation but also sequence refinement allowing dynamic length changes. We also propose a set of new training techniques dedicated at them, effectively exploiting one as the other's learning signal thanks to their complementary nature. Experiments applying the proposed model achieve comparable performance but much-improved efficiency on both generation (e.g. machine translation, text summarization) and refinement tasks (e.g. automatic post-editing). We further confirm the flexibility of our model by showing a Levenshtein Transformer trained by machine translation can straightforwardly be used for automatic post-editing.", "target": ["言語モデルでは通常一方通行に次の単語を予測していくが、編集を可能にすることでより柔軟な生成を行うという研究。具体的には、各単語に対し「削除」「挿入(単語の前に何個挿入を行うか)」「挿入単語」の3つを予測し、予測結果をもとに編集を行うことで系列を生成していく。生成速度面で大きな寄与。"]} {"source": "Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo.", "target": ["模倣学習で、シンプルな手法で既存の優秀な手法(GANをベースにしたGAIL)を上回ったという研究。エキスパートの動作を模倣するよう学習するが、エキスパートの行動から外れた場合にペナルティを与える(報酬を0にする)ことで正則化を行うという手法。"]} {"source": "In this work, we develop a technique to produce counterfactual visual explanations. Given a 'query' image $I$ for which a vision system predicts class $c$, a counterfactual visual explanation identifies how $I$ could change such that the system would output a different specified class $c'$. To do this, we select a 'distractor' image $I'$ that the system predicts as class $c'$ and identify spatial regions in $I$ and $I'$ such that replacing the identified region in $I$ with the identified region in $I'$ would push the system towards classifying $I$ as $c'$. 
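SQIL, described above, removes the learned reward entirely: demonstration transitions are stored with reward +1 and everything the agent collects itself with reward 0, after which any standard off-policy update applies. A minimal sketch of how the buffers are seeded and sampled; the balanced 50/50 sampling and the data structures below are a simplified reading of the method, not its full implementation.

```python
import random

def build_sqil_buffer(demo_transitions):
    """Seed the replay buffer: every demonstrated (s, a, s') gets reward +1."""
    return [(s, a, 1.0, s_next) for (s, a, s_next) in demo_transitions]

def add_agent_transition(buffer, s, a, s_next):
    """Everything the agent does itself is stored with constant reward 0."""
    buffer.append((s, a, 0.0, s_next))

def sample_balanced_batch(demo_buffer, agent_buffer, batch_size=32):
    """Sample demonstration and agent experience in equal proportion, then feed
    the batch to an ordinary off-policy update (e.g., soft Q-learning)."""
    half = batch_size // 2
    return (random.sample(demo_buffer, min(half, len(demo_buffer))) +
            random.sample(agent_buffer, min(half, len(agent_buffer))))

# Toy usage with integer states/actions standing in for real observations.
demos = [(0, 1, 1), (1, 1, 2), (2, 0, 3)]
demo_buffer = build_sqil_buffer(demos)
agent_buffer = []
add_agent_transition(agent_buffer, 0, 0, 5)
add_agent_transition(agent_buffer, 5, 1, 6)
batch = sample_balanced_batch(demo_buffer, agent_buffer, batch_size=4)
print(batch)    # +1-reward demo transitions mixed with 0-reward agent transitions
```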
We apply our approach to multiple image classification datasets generating qualitative results showcasing the interpretability and discriminativeness of our counterfactual explanations. To explore the effectiveness of our explanations in teaching humans, we present machine teaching experiments for the task of fine-grained bird classification. We find that users trained to distinguish bird species fare better when given access to counterfactual explanations in addition to training examples.", "target": ["「サンプルの画像のある部分を変化させると別のクラスに分類される」という可視化を行うことによって、画像の分類根拠を説明するという試み。野鳥を分類する教育資料への利用例が書いてあり、この技術で2つの鳥の違いを明示することで教育の効率化を図っている。"]} {"source": "Generative Adversarial Networks (GANs) enjoy great success at image generation, but have proven difficult to train in the domain of natural language. Challenges with gradient estimation, optimization instability, and mode collapse have lead practitioners to resort to maximum likelihood pre-training, followed by small amounts of adversarial fine-tuning. The benefits of GAN fine-tuning for language generation are unclear, as the resulting models produce comparable or worse samples than traditional language models. We show it is in fact possible to train a language GAN from scratch -- without maximum likelihood pre-training. We combine existing techniques such as large batch sizes, dense rewards and discriminator regularization to stabilize and improve language GANs. The resulting model, ScratchGAN, performs comparably to maximum likelihood training on EMNLP2017 News and WikiText-103 corpora according to quality and diversity metrics.", "target": ["GANで言語モデルを作成しようという試み。G側は(単語の)サンプリングが必要なため、強化学習の枠組みで学習を行っている。D側は報酬を与えることになるが、生成系列全体ではなく生成単語単位に報酬を与える形をとっている。単語単位の密な報酬と大きいバッチサイズ(512)が学習に寄与するとの結果。"]} {"source": "Graph neural networks have become one of the most important techniques to solve machine learning problems on graph-structured data. Recent work on vertex classification proposed deep and distributed learning models to achieve high performance and scalability. However, we find that the feature vectors of benchmark datasets are already quite informative for the classification task, and the graph structure only provides a means to denoise the data. In this paper, we develop a theoretical framework based on graph signal processing for analyzing graph neural networks. Our results indicate that graph neural networks only perform low-pass filtering on feature vectors and do not have the non-linear manifold learning property. We further investigate their resilience to feature noise and propose some insights on GCN-based graph neural network design.", "target": ["Graph Convolutionは、グラフ信号に対するローパスフィルタの役割しか果たしていないのではという研究。代表的なデータセット(Cora等)で、高周波成分が有効な特徴を含まない場合(=単なるノイズの場合)にNNとGCNで精度の違いが大きくなることを確認。隣接行列の適用がフィルタに相当するという。"]} {"source": "We address the problem of automatic decompilation, converting a program in low-level representation back to a higher-level human-readable programming language. The problem of decompilation is extremely important for security researchers. Finding vulnerabilities and understanding how malware operates is much easier when done over source code. The importance of decompilation has motivated the construction of hand-crafted rule-based decompilers. Such decompilers have been designed by experts to detect specific control-flow structures and idioms in low-level code and lift them to source level. The cost of supporting additional languages or new language features in these models is very high. 
We present a novel approach to decompilation based on neural machine translation. The main idea is to automatically learn a decompiler from a given compiler. Given a compiler from a source language S to a target language T , our approach automatically trains a decompiler that can translate (decompile) T back to S . We used our framework to decompile both LLVM IR and x86 assembly to C code with high success rates. Using our LLVM and x86 instantiations, we were able to successfully decompile over 97% and 88% of our benchmarks respectively.", "target": ["機械翻訳の仕組みを用いて、プログラムのデコンパイルを行ったという研究。具体的には、LLVMの中間コードやx86のアセンブリからCのコードを生成する。入力から直にコードを生成するのでなく、一旦canonical形式を経由し、そこからテンプレートを生成して中の値を埋めるという形式をとっている。"]} {"source": "Language model (LM) pre-training has resulted in impressive performance and sample efficiency on a variety of language understanding tasks. However, it remains unclear how to best use pre-trained LMs for generation tasks such as abstractive summarization, particularly to enhance sample efficiency. In these sequence-to-sequence settings, prior work has experimented with loading pre-trained weights into the encoder and/or decoder networks, but used non-pre-trained encoder-decoder attention weights. We instead use a pre-trained decoder-only network, where the same Transformer LM both encodes the source and generates the summary. This ensures that all parameters in the network, including those governing attention over source states, have been pre-trained before the fine-tuning step. Experiments on the CNN/Daily Mail dataset show that our pre-trained Transformer LM substantially improves over pre-trained Transformer encoder-decoder networks in limited-data settings. For instance, it achieves 13.1 ROUGE-2 using only 1% of the training data (~3000 examples), while pre-trained encoder-decoder models score 2.3 ROUGE-2.", "target": ["事前学習済み言語モデルを用い、少量で学習可能な要約モデルを作成した研究。既存のEncoder/Decoderモデルを踏襲するタイプと(重みは共通で事前学習済みの言語モデルからとる)、文章/要約を結合し言語モデルタスクにして要約を生成する2パターンを検証。後者にて3000程度で十分なROUGEが出ることを確認"]} {"source": "Convolutional neural networks (CNNs) have obtained astounding successes for important pattern recognition tasks, but they suffer from high computational complexity and the lack of interpretability. The recent Tsetlin Machine (TM) attempts to address this lack by using easy-to-interpret conjunctive clauses in propositional logic to solve complex pattern recognition problems. The TM provides competitive accuracy in several benchmarks, while keeping the important property of interpretability. It further facilitates hardware-near implementation since inputs, patterns, and outputs are expressed as bits, while recognition and learning rely on straightforward bit manipulation. In this paper, we exploit the TM paradigm by introducing the Convolutional Tsetlin Machine (CTM), as an interpretable alternative to CNNs. Whereas the TM categorizes an image by employing each clause once to the whole image, the CTM uses each clause as a convolution filter. That is, a clause is evaluated multiple times, once per image patch taking part in the convolution. To make the clauses location-aware, each patch is further augmented with its coordinates within the image. The output of a convolution clause is obtained simply by ORing the outcome of evaluating the clause on each patch. In the learning phase of the TM, clauses that evaluate to 1 are contrasted against the input. For the CTM, we instead contrast against one of the patches, randomly selected among the patches that made the clause evaluate to 1. 
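The Convolutional Tsetlin Machine just introduced evaluates each conjunctive clause as a Boolean filter over every image patch and ORs the per-patch results. A minimal sketch of that evaluation step; the clause is represented simply as index sets of included positive and negated literals, and the patch-coordinate augmentation described in the abstract is omitted, so this is a simplification of the full TM machinery.

```python
import numpy as np

def extract_patches(image_bits, k=3):
    """All k x k binary patches of a binary image, flattened to bit vectors."""
    h, w = image_bits.shape
    return np.array([image_bits[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1) for j in range(w - k + 1)])

def clause_on_patch(patch, include_pos, include_neg):
    """Conjunctive clause: AND over included positive literals (bit must be 1)
    and included negated literals (bit must be 0)."""
    return np.all(patch[include_pos] == 1) and np.all(patch[include_neg] == 0)

def conv_clause_output(image_bits, include_pos, include_neg, k=3):
    """Convolutional clause = OR of the clause over every patch position."""
    patches = extract_patches(image_bits, k)
    return any(clause_on_patch(p, include_pos, include_neg) for p in patches)

rng = np.random.default_rng(0)
img = (rng.random((8, 8)) > 0.5).astype(int)
img[2:5, 2:5] = np.array([[1, 1, 1], [0, 0, 0], [1, 1, 1]])   # plant a pattern somewhere

include_pos = np.array([0, 1, 2, 6, 7, 8])   # top and bottom rows of the 3x3 patch must be 1
include_neg = np.array([3, 4, 5])            # middle row must be 0
print(conv_clause_output(img, include_pos, include_neg))       # True: the pattern is found at some position
```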
Accordingly, the standard Type I and Type II feedback of the classic TM can be employed directly, without further modification. The CTM obtains a peak test accuracy of 99.4% on MNIST, 96.31% on Kuzushiji-MNIST, 91.5% on Fashion-MNIST, and 100.0% on the 2D Noisy XOR Problem, which is competitive with results reported for simple 4-layer CNNs, BinaryConnect, Logistic Circuits and an FPGA-accelerated Binary CNN.", "target": ["解釈が容易かつ簡単に演算ができるCNNの提案。Tsetlin Machineというアルゴリズムをベースとしている。入力に対して0/1の値で構成されたフィルタを論理演算で適用し、各フィルタの演算結果を集計し出力を行う。各フィルタ(演算結果)の重みはバンディットアルゴリズムで調整する(ラベルとの適合が報酬)"]} {"source": "Humans are able to perform a myriad of sophisticated tasks by drawing upon skills acquired through prior experience. For autonomous agents to have this capability, they must be able to extract reusable skills from past experience that can be recombined in new ways for subsequent tasks. Furthermore, when controlling complex high-dimensional morphologies, such as humanoid bodies, tasks often require coordination of multiple skills simultaneously. Learning discrete primitives for every combination of skills quickly becomes prohibitive. Composable primitives that can be recombined to create a large variety of behaviors can be more suitable for modeling this combinatorial explosion. In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors. Our method factorizes an agent's skills into a collection of primitives, where multiple primitives can be activated simultaneously via multiplicative composition. This flexibility allows the primitives to be transferred and recombined to elicit new behaviors as necessary for novel tasks. We demonstrate that MCP is able to extract composable skills for highly complex simulated characters from pre-training tasks, such as motion imitation, and then reuse these skills to solve challenging continuous control tasks, such as dribbling a soccer ball to a goal, and picking up an object and transporting it to a target location.", "target": ["複数のベース戦略を組み合わせ、様々な戦略を実現する手法の提案。通常、戦略の組み合わせは合算(Additive)で行われることが多いが、掛け合わせ(Multiplicative)を使うことで複数戦略を統合して行動分布を作るような形をとっている。これにより、複雑な連続コントロールタスクができることを確認"]} {"source": "This work addresses a new problem that learns generative adversarial networks (GANs) from multiple data collections that are each i) owned separately by different clients and ii) drawn from a non-identical distribution that comprises different classes. Given such non-iid data as input, we aim to learn a distribution involving all the classes input data can belong to, while keeping the data decentralized in each client storage. Our key contribution to this end is a new decentralized approach for learning GANs from non-iid data called Forgiver-First Update (F2U), which a) asks clients to train an individual discriminator with their own data and b) updates a generator to fool the most `forgiving' discriminators who deem generated samples as the most real. Our theoretical analysis proves that this updating strategy allows the decentralized GAN to achieve a generator's distribution with all the input classes as its global optimum based on f-divergence minimization. Moreover, we propose a relaxed version of F2U called Forgiver-First Aggregation (F2A) that performs well in practice, which adaptively aggregates the discriminators while emphasizing forgiving ones. 
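Multiplicative composition, as used by MCP above, combines primitive policies through a weighted product rather than a weighted sum; for diagonal Gaussian primitives the product is again Gaussian, with precisions adding. A minimal sketch of that composition step; the gating weights would normally come from a learned gating network and are fixed numbers here.

```python
import numpy as np

def multiplicative_compose(mus, sigmas, weights):
    """Weighted product of diagonal Gaussian primitives:
       pi(a|s) ~ prod_i N(mu_i, sigma_i^2)^{w_i}
    which is again Gaussian: the precision is the weighted sum of primitive
    precisions and the mean is precision-weighted."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    w = np.asarray(weights, float)[:, None]          # (n_primitives, 1)
    precisions = w / sigmas ** 2                     # weighted precision per action dim
    total_precision = precisions.sum(axis=0)
    mean = (precisions * mus).sum(axis=0) / total_precision
    std = np.sqrt(1.0 / total_precision)
    return mean, std

# Three 2-D action primitives; the gate currently trusts primitive 0 the most.
mus = [[1.0, 0.0], [-1.0, 0.0], [0.0, 2.0]]
sigmas = [[0.2, 0.5], [0.5, 0.5], [1.0, 0.3]]
weights = [0.8, 0.1, 0.1]
mean, std = multiplicative_compose(mus, sigmas, weights)
print(mean, std)   # the composed action distribution leans toward primitive 0
```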
Our empirical evaluations with image generation tasks demonstrated the effectiveness of our approach over state-of-the-art decentralized learning methods.", "target": ["データが分散した環境でGANの学習を行う研究。ユーザーのモバイル端末内にデータがあり、それで学習したいが個々の秘匿性は担保したいというケースが想定されている。個々の端末でDを、中央側で各Dを用いGを学習するというのが基本構成で、各Dを合わせる際は最も判断がゆるいもの(Forgiver)に合わせる"]} {"source": "Unsupervised image-to-image translation methods learn to map images in a given class to an analogous image in a different class, drawing on unstructured (non-registered) datasets of images. While remarkably successful, current methods require access to many images in both source and destination classes at training time. We argue this greatly limits their use. Drawing inspiration from the human capability of picking up the essence of a novel object from a small number of examples and generalizing from there, we seek a few-shot, unsupervised image-to-image translation algorithm that works on previously unseen target classes that are specified, at test time, only by a few example images. Our model achieves this few-shot generation capability by coupling an adversarial training scheme with a novel network design. Through extensive experimental validation and comparisons to several baseline methods on benchmark datasets, we verify the effectiveness of the proposed framework. Our implementation and datasets are available at this https URL .", "target": ["Few shortで画像のドメイン変換を行う研究。ソース画像1枚とターゲット画像N枚を入力とし、ターゲット画像についてはそれぞれEncodeして平均を取る。この平均ベクトルを全結合のネットワークに入力し、各層の平均/分散で(正規化済みの)Decode側の層をシフトさせる形でターゲット情報を与えている。"]} {"source": "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at this https URL", "target": ["強化学習で、戦略の学習と共にシミュレーターの挙動を徐々に現実の挙動に近づけていくという研究。シミュレーターで一旦戦略を学習したのち、同じ戦略に基づき現実/シミュレーター双方でプレイ(Rollout)、得られた双方の軌跡の距離を近づける形でシミュレーターを更新する。"]} {"source": "Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as \"A woman sits at a piano,\" a machine must select the most likely followup: \"She sets her fingers on the keys.\" With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models. 
Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.", "target": ["BERTで攻略困難な多肢選択問題を作成したという研究(同様のデータセットとしてSWAGがあるが、SWAGはBERTに攻略されてしまったためその後継となる)。選択肢の文品質が重要そうとの検証結果から、より高精度な言語モデル(GPTなど)を使い回答と識別が困難な文生成を行なっていくことで作成している"]} {"source": "We present a PaperRobot who performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, by combining graph attention and contextual text attention; (3) incrementally writing some key elements of a new paper based on memory-attention networks: from the input title along with predicted related entities to generate a paper abstract, from the abstract to generate conclusion and future work, and finally from future work to generate a title for a follow-on paper. Turing Tests, where a biomedical domain expert is asked to compare a system output and a human-authored string, show PaperRobot generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24% and 12% of the time, respectively.", "target": ["論文の概要/結論/課題を自動生成するという夢の研究。既存の論文から知識グラフを作成し、Link PredictionによりEntity間の接続を増やす。生成を行う際は既存研究のタイトルを与え、そこから知識グラフを用い関連Entityを抽出。既存研究とEntityとの関連度に応じAttentionをかけながら生成を行う。"]} {"source": "Penetration testing (pentesting) involves performing a controlled attack on a computer system in order to assess its security. Although an effective method for testing security, pentesting requires highly skilled practitioners and currently there is a growing shortage of skilled cyber security professionals. One avenue for alleviating this problem is to automate the pentesting process using artificial intelligence techniques. Current approaches to automated pentesting have relied on model-based planning; however, the cyber security landscape is rapidly changing, making it challenging to maintain up-to-date models of exploits. This project investigated the application of model-free Reinforcement Learning (RL) to automated pentesting. Model-free RL has the key advantage over model-based planning of not requiring a model of the environment, instead learning the best policy through interaction with the environment. We first designed and built a fast, low compute simulator for training and testing autonomous pentesting agents. We did this by framing pentesting as a Markov Decision Process with the known configuration of the network as states, the available scans and exploits as actions, and the reward determined by the value of machines on the network. We then used this simulator to investigate the application of model-free RL to pentesting. We tested the standard Q-learning algorithm using both tabular and neural network based implementations. We found that within the simulated environment both tabular and neural network implementations were able to find optimal attack paths for a range of different network topologies and sizes without having a model of action behaviour. However, the implemented algorithms were only practical for smaller networks and numbers of actions.
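As an aside on the pentesting entry above, which frames the problem as an MDP solved with standard Q-learning, a minimal tabular sketch follows; the environment interface (`env.reset()`, `env.step(action)`), the action list, and all hyperparameters are illustrative assumptions rather than the project's actual simulator.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning loop for a pentesting-style MDP.
# Assumptions: `env` exposes reset() -> state and step(action) -> (state, reward, done),
# states are hashable network configurations, and `actions` is a small discrete
# set of scans/exploits. None of this mirrors the project's actual simulator.
def q_learning(env, actions, episodes=1000, alpha=0.1, gamma=0.9, eps=0.1):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:                       # epsilon-greedy exploration
                action = random.choice(actions)
            else:                                            # greedy w.r.t. current estimates
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in actions)
            # one-step Q-learning update
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```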
Further work is needed in developing scalable RL algorithms and testing these algorithms in larger and higher fidelity environments.", "target": ["ペネトレーションテストは職人の経験と勘に頼ることが多いため、それを強化学習で自動化しようという試み。既存の研究はモデルベースが主流だが、診断環境の変遷は早いためモデルフリーの手法を使用している。環境の作成からQ学習の適用まで丁寧に書かれている(81pある)"]} {"source": "Penetration testing (pentesting) involves performing a controlled attack on a computer system in order to assess its security. Although an effective method for testing security, pentesting requires highly skilled practitioners and currently there is a growing shortage of skilled cyber security professionals. One avenue for alleviating this problem is to automate the pentesting process using artificial intelligence techniques. Current approaches to automated pentesting have relied on model-based planning; however, the cyber security landscape is rapidly changing, making it challenging to maintain up-to-date models of exploits. This project investigated the application of model-free Reinforcement Learning (RL) to automated pentesting. Model-free RL has the key advantage over model-based planning of not requiring a model of the environment, instead learning the best policy through interaction with the environment. We first designed and built a fast, low compute simulator for training and testing autonomous pentesting agents. We did this by framing pentesting as a Markov Decision Process with the known configuration of the network as states, the available scans and exploits as actions, and the reward determined by the value of machines on the network. We then used this simulator to investigate the application of model-free RL to pentesting. We tested the standard Q-learning algorithm using both tabular and neural network based implementations. We found that within the simulated environment both tabular and neural network implementations were able to find optimal attack paths for a range of different network topologies and sizes without having a model of action behaviour. However, the implemented algorithms were only practical for smaller networks and numbers of actions. Further work is needed in developing scalable RL algorithms and testing these algorithms in larger and higher fidelity environments.", "target": ["モデルを診断するために開発されたデータセットが、何を診断しているのかを調べた研究。診断データの少量を学習させ、診断データに対する精度が出る=学習データの弱点、出ない=モデルの弱点をついていることになる。診断を学習させることで精度が下がる場合、そもそもラベルの分散が違うケースと分類。"]} {"source": "Accurately predicting conversions in advertisements is generally a challenging task, because such conversions do not occur frequently. In this paper, we propose a new framework to support creating high-performing ad creatives, including the accurate prediction of ad creative text conversions before delivering to the consumer. The proposed framework includes three key ideas: multi-task learning, conditional attention, and attention highlighting. Multi-task learning is an idea for improving the prediction accuracy of conversion, which predicts clicks and conversions simultaneously, to solve the difficulty of data imbalance. Furthermore, conditional attention focuses attention of each ad creative with the consideration of its genre and target gender, thus improving conversion prediction accuracy. Attention highlighting visualizes important words and/or phrases based on conditional attention.
We evaluated the proposed framework with actual delivery history data (14,000 creatives displayed more than a certain number of times from Gunosy Inc.), and confirmed that these ideas improve the prediction performance of conversions, and visualize noteworthy words according to the creatives' attributes.", "target": ["広告のコンバージョン性能(商品購入やダウンロードに繋がるか)を評価する手法の提案。広告のタイトル/テキストを個別にRNNでEncodeし、目的のCVRだけでなくCTRも予測させることで汎化性能を高めている。また広告の属性(ジャンル等)による重み付け(Attention)により予測精度を高めている。"]} {"source": "Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point --- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1% AP at 142 FPS, 37.4% AP at 52 FPS, and 45.1% AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding box in the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.", "target": ["物体検出で領域(矩形)を予測するのは無駄が多いため、中心点のみ予測を行い領域の大きさや角度などはその属性として推定しようという研究。入力画像をストライド幅でダウンサンプルし、ガウシアンカーネルを用いてヒートマップを作製。それを推定するという形をとっている。"]} {"source": "Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We focus on one such model, BERT, and aim to quantify where linguistic information is captured within the network. We find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference. Qualitative analysis reveals that the model can and often does adjust this pipeline dynamically, revising lower-level decisions on the basis of disambiguating information from higher-level representations.", "target": ["BERT(#959)のどのレイヤがどのタスクで効いているのかを調査した研究。品詞推定(POS)や係り受け(Deps)などは低層で、役割推定(SPR)や関係推定(RE)は上層という割合直感と符合する結果となっている。ただNER/SPR/REは特定層が顕著に反応しているという感じではないよう。"]} {"source": "Exploration has been one of the greatest challenges in reinforcement learning (RL), which is a large obstacle in the application of RL to robotics. Even with state-of-the-art RL algorithms, building a well-learned agent often requires too many trials, mainly due to the difficulty of matching its actions with rewards in the distant future. A remedy for this is to train an agent with real-time feedback from a human observer who immediately gives rewards for some actions. This study tackles a series of challenges for introducing such a human-in-the-loop RL scheme. The first contribution of this work is our experiments with a precisely modeled human observer: binary, delay, stochasticity, unsustainability, and natural reaction. We also propose an RL method called DQN-TAMER, which efficiently uses both human feedback and distant rewards. We find that DQN-TAMER agents outperform their baselines in Maze and Taxi simulated environments. 
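For the center-point detector entry above (objects modeled as the centers of their boxes, rendered as Gaussian peaks on a stride-downsampled heatmap), a rough rendering sketch is given below; the fixed stride and sigma and the per-class layout are simplifications assumed for illustration.

```python
import numpy as np

# Render ground-truth center-point heatmaps: each object center is splatted as a
# Gaussian onto a downsampled map, keeping the per-pixel maximum across objects.
# Fixed sigma and stride are assumptions; real implementations typically size the
# Gaussian radius from the box dimensions.
def render_heatmap(centers, img_h, img_w, num_classes, stride=4, sigma=2.0):
    h, w = img_h // stride, img_w // stride
    heatmap = np.zeros((num_classes, h, w), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for cx, cy, cls in centers:                    # centers given in input-image pixels
        cx, cy = cx / stride, cy / stride          # map to heatmap resolution
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        heatmap[cls] = np.maximum(heatmap[cls], g)
    return heatmap
```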
Furthermore, we demonstrate a real-world human-in-the-loop RL application where a camera automatically recognizes a user's facial expressions as feedback to the agent while the agent explores a maze.", "target": ["代表的なモデルフリーの強化学習手法であるDQNと、人のフィードバックを学習に用いるTAMERを組み合わせた手法(DQN-TAMER)の提案。DQNは環境から推定した価値関数(Q)、TAMERは人のフィードバックを推定する関数(H)を学習に用いるが、双方を学習に用い学習が進むにつれHへの依存量をへらしていく。"]} {"source": "Standard neural networks are often overconfident when presented with data outside the training distribution. We introduce HyperGAN, a new generative model for learning a distribution of neural network parameters. HyperGAN does not require restrictive assumptions on priors, and networks sampled from it can be used to quickly create very large and diverse ensembles. HyperGAN employs a novel mixer to project prior samples to a latent space with correlated dimensions, and samples from the latent space are then used to generate weights for each layer of a deep neural network. We show that HyperGAN can learn to generate parameters which label the MNIST and CIFAR-10 datasets with competitive performance to fully supervised learning, while learning a rich distribution of effective parameters. We also show that HyperGAN can also provide better uncertainty estimates than standard ensembles by evaluating on out of distribution data as well as adversarial examples.", "target": ["ネットワークの重みを、学習ではなく生成で作成する研究。ノイズを全結合層(Mixer)に通して各層のノイズを作り、層ごとのGで生成を行う。生成した重みでlossを計測、最適化するようGを学習する。ただ生成する重みが固定的にならないようDで生成元ノイズがGaussianに近いかチェックする。"]} {"source": "We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to \"focus\" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprised of thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, only requiring the user to specify the face of the person in the video whose speech they want to isolate. Our method shows clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (require training a separate model for each speaker of interest).", "target": ["音声分離タスクで、画像情報も用いる研究。画像は顔画像(顔検出は別途)をDilated Convolutionでdown/up sampling、音声はSTFTの画像を使用し、2つの特徴を結合してSpectrogramを出力する形を取っている。話者分離のために、実数/虚数双方を使用するマスクを使用している。"]} {"source": "Deep neural networks (DNNs) generalize remarkably well without explicit regularization even in the strongly over-parametrized regime where classical learning theory would instead predict that they would severely overfit. While many proposals for some kind of implicit regularization have been made to rationalise this success, there is no consensus for the fundamental reason why DNNs do not strongly overfit. In this paper, we provide a new explanation. 
By applying a very general probability-complexity bound recently derived from algorithmic information theory (AIT), we argue that the parameter-function map of many DNNs should be exponentially biased towards simple functions. We then provide clear evidence for this strong simplicity bias in a model DNN for Boolean functions, as well as in much larger fully connected and convolutional networks applied to CIFAR10 and MNIST. As the target functions in many real problems are expected to be highly structured, this intrinsic simplicity bias helps explain why deep networks generalize well on real world problems. This picture also facilitates a novel PAC-Bayes approach where the prior is taken over the DNN input-output function space, rather than the more conventional prior over parameter space. If we assume that the training algorithm samples parameters close to uniformly within the zero-error region then the PAC-Bayes theorem can be used to guarantee good expected generalization for target functions producing high-likelihood training sets. By exploiting recently discovered connections between DNNs and Gaussian processes to estimate the marginal likelihood, we produce relatively tight generalization PAC-Bayes error bounds which correlate well with the true error on realistic datasets such as MNIST and CIFAR10 and for architectures including convolutional and fully connected networks.", "target": ["DNNがなぜ汎化するのかについて、パラメーター空間の割にシンプルな関数になるようバイアスがかかっているから、とした研究。パラメーター空間の広がりに比べ関数のバリエーション変化が少ないことを、様々な初期化方法で確認。また、出現率の高い関数の複雑性(Lempel-Ziv)が低いことを確認。"]} {"source": "The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks human to annotate data that it perceived as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks but most of them are either designed specific for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with the deep networks. We attach a small parametric module, named \"loss prediction module,\" to a target network, and learn it to predict target losses of unlabeled inputs. Then, this module can suggest data that the target model is likely to produce a wrong prediction. This method is task-agnostic as networks are learned from a single loss regardless of target tasks. We rigorously validate our method through image classification, object detection, and human pose estimation, with the recent network architectures. The results demonstrate that our method consistently outperforms the previous methods over the tasks.", "target": ["Lossを予測するサブタスクを追加することで、Unlabeledなデータに対しLossを予測し高い(=学習効果の高い)ものを優先してラベルづけする手法。画像分類/物体検知/姿勢推定といった様々なタスクで検証し効果を確認したが、タスクが複雑になる程Lossの予測が難しくなるという結果。"]} {"source": "Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. 
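A rough sketch of the loss prediction module from the active learning entry above: a small head maps pooled intermediate features of the target network to a scalar predicted loss, and unlabeled samples with the highest predicted loss are sent to annotators first. The feature dimensions and the two-layer design below are assumptions for illustration, not the paper's exact module.

```python
import torch
import torch.nn as nn

# Small parametric head that predicts the target network's loss from its
# pooled intermediate features. Dimensions are illustrative assumptions.
class LossPredictionModule(nn.Module):
    def __init__(self, feature_dims=(64, 128, 256), hidden=128):
        super().__init__()
        self.fcs = nn.ModuleList([nn.Linear(d, hidden) for d in feature_dims])
        self.out = nn.Linear(hidden * len(feature_dims), 1)

    def forward(self, features):  # features: list of (B, d_i) pooled activations
        h = [torch.relu(fc(f)) for fc, f in zip(self.fcs, features)]
        return self.out(torch.cat(h, dim=1)).squeeze(1)  # (B,) predicted losses
```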
Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.", "target": ["Adversarial Exampleへの耐性を上げるために、モデル側ではなくデータセット側を工夫するという論文。そもそもAdversarial Exampleは「汎用的ではない特徴」でモデルがこれに依存しているだけではとし、入力から「汎用的ではない特徴」を抜いたデータセットを作成しそれで学習という方法をとっている"]} {"source": "In this report we review memory-based meta-learning as a tool for building sample-efficient strategies that learn from past experience to adapt to any task within a target class. Our goal is to equip the reader with the conceptual foundations of this tool for building new, scalable agents that operate on broad domains. To do so, we present basic algorithmic templates for building near-optimal predictors and reinforcement learners which behave as if they had a probabilistic model that allowed them to efficiently exploit task structure. Furthermore, we recast memory-based meta-learning within a Bayesian framework, showing that the meta-learned strategies are near-optimal because they amortize Bayes-filtered data, where the adaptation is implemented in the memory dynamics as a state-machine of sufficient statistics. Essentially, memory-based meta-learning translates the hard problem of probabilistic sequential inference into a regression problem.", "target": ["強化学習のサンプル効率を向上させるため、環境の挙動に関する知識を戦略の推定に活かす手法の提案(論文では計算フレームワークの紹介のみ)。通常は戦略単体を学習、表現学習は戦略と別に環境表現を学習するが(並列)、本研究では環境推定から行動分布(戦略)推定と連続的な学習を行なっている(直列)"]} {"source": "Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses). When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents. This paper proposes to add such an inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated. Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.", "target": ["文における単語間の関係(ツリー構造)を、LSTMの隠れ層で表現できるようにした研究。上位ノードが入れ替わる場合は上位配下のノード情報が全て消去される必要がある。これを表現するため、配下ノードが増える(=隠れ層の占有領域が増える)につれforgetを多くしinputを減らすmasterノードを導入している。"]} {"source": "The ability to predict depth from a single image - using recent advances in CNNs - is of increasing interest to the vision community. Unsupervised strategies to learning are particularly appealing as they can utilize much larger and varied monocular video datasets during learning without the need for ground truth depth or stereo. In previous works, separate pose and depth CNN predictors had to be determined such that their joint outputs minimized the photometric error. Inspired by recent advances in direct visual odometry (DVO), we argue that the depth CNN predictor can be learned without a pose CNN predictor. 
Further, we demonstrate empirically that incorporation of a differentiable implementation of DVO, along with a novel depth normalization strategy - substantially improves performance over state of the art that use monocular videos for training.", "target": ["SfM Learner の PoseNet を DVO というアルゴリズムに置き換えたもの"]} {"source": "Learning to reconstruct depths in a single image by watching unlabeled videos via deep convolutional network (DCN) is attracting significant attention in recent years. In this paper, we introduce a surface normal representation for unsupervised depth estimation framework. Our estimated depths are constrained to be compatible with predicted normals, yielding more robust geometry results. Specifically, we formulate an edge-aware depth-normal consistency term, and solve it by constructing a depth-to-normal layer and a normal-to-depth layer inside of the DCN. The depth-to-normal layer takes estimated depths as input, and computes normal directions using cross production based on neighboring pixels. Then given the estimated normals, the normal-to-depth layer outputs a regularized depth map through local planar smoothness. Both layers are computed with awareness of edges inside the image to help address the issue of depth/normal discontinuity and preserve sharp edges. Finally, to train the network, we apply the photometric error and gradient smoothness for both depth and normal predictions. We conducted experiments on both outdoor (KITTI) and indoor (NYUv2) datasets, and show that our algorithm vastly outperforms state of the art, which demonstrates the benefits from our approach.", "target": ["Depth と同時に物体表面の法線を推定する研究"]} {"source": "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.", "target": ["Structure from Motion の深層学習化。いわゆる SfM Learner 系のベースとなったモデル。動画から単眼深度推定を学習する。"]} {"source": "We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). 
We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\\% more accurate on ImageNet classification while reducing latency by 15\\% compared to MobileNetV2. MobileNetV3-Small is 4.6\\% more accurate while reducing latency by 5\\% compared to MobileNetV2. MobileNetV3-Large detection is 25\\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.", "target": ["モバイルデバイスに最適化したモデル(=精度と実行速度のバランスが取れたモデル)を、自動探索で構築する研究。ネットワークのブロックはMobileNetV2+SENet(channelのAttention的な手法"]} {"source": "Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in high-data regime, such as ImageNet, whether when there is only 10% labeled data or when a full labeled set with 1.3M extra unlabeled examples is used. Code is available at this https URL.", "target": ["Data Augmentationはラベル付きデータをかさ増しするために使われるのが通例だが、ラベルなしのデータに対し適用する手法の提案。重みを固定したモデルを用い、元データに対する予測分布と、Augmentされたデータに対する予測分布とで差異が出ないように学習を行う。画像/テキスト共に効果を確認。"]} {"source": "We consider causal inference in the presence of unobserved confounding. We study the case where a proxy is available for the unobserved confounding in the form of a network connecting the units. For example, the link structure of a social network carries information about its members. We show how to effectively use the proxy to do causal inference. The main idea is to reduce the causal estimation problem to a semi-supervised prediction of both the treatments and outcomes. Networks admit high-quality embedding models that can be used for this semi-supervised prediction. We show that the method yields valid inferences under suitable (weak) conditions on the quality of the predictive model. We validate the method with experiments on a semi-synthetic social network dataset. Code is available at this http URL.", "target": ["DNNの表現学習能力を、因果推論に活用する仕組みの提案。因果推論では交絡因子(証明したい関係xとy双方に影響のある因子z)の影響をどう除外するかが大きな課題だが、因果推論の問題を予測問題に変換することで交絡因子の影響推定(予測)に表現学習モデル(論文中ではGCN/BERT)を使えるようにしている。"]} {"source": "Given the recent surge in developments of deep learning, this article provides a review of the state-of-the-art deep learning techniques for audio signal processing. 
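For the consistency-training entry above (constraining predictions on unlabeled data to be invariant under strong augmentation), a minimal sketch of the unsupervised term follows; `model`, `x_unlabeled`, and `augment` (e.g., RandAugment or back-translation) are assumptions, and the supervised cross-entropy term is omitted.

```python
import torch
import torch.nn.functional as F

# Unsupervised consistency term: the (gradient-stopped) prediction on the original
# unlabeled example supervises the prediction on its augmented version.
def consistency_loss(model, x_unlabeled, augment):
    with torch.no_grad():
        p_orig = F.softmax(model(x_unlabeled), dim=1)         # fixed "teacher" distribution
    logp_aug = F.log_softmax(model(augment(x_unlabeled)), dim=1)
    return F.kl_div(logp_aug, p_orig, reduction="batchmean")  # KL(p_orig || p_aug)
```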
Speech, music, and environmental sound processing are considered side-by-side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross-fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, as well as more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e. audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified.", "target": ["音声に対するDNNの適用についてまとめられた資料。音声と画像の性質的な違い(時系列/周波数という相関のない2軸で表現される点、時系列のため順次処理が必要など)を示しその違いを各手法がどう扱っているのかという観点からまとめられている。概要的な資料だが問題設定と手法が上手くまとめられている"]} {"source": "Memory is increasingly often the bottleneck when training neural network models. Despite this, techniques to lower the overall memory requirements of training have been less widely studied compared to the extensive literature on reducing the memory requirements of inference. In this paper we study a fundamental question: How much memory is actually needed to train a neural network? To answer this question, we profile the overall memory usage of training on two representative deep learning benchmarks -- the WideResNet model for image classification and the DynamicConv Transformer model for machine translation -- and comprehensively evaluate four standard techniques for reducing the training memory requirements: (1) imposing sparsity on the model, (2) using low precision, (3) microbatching, and (4) gradient checkpointing. We explore how each of these techniques in isolation affects both the peak memory usage of training and the quality of the end model, and explore the memory, accuracy, and computation tradeoffs incurred when combining these techniques. Using appropriate combinations of these techniques, we show that it is possible to reduce the memory required to train a WideResNet-28-2 on CIFAR-10 by up to 60.7x with a 0.4% loss in accuracy, and reduce the memory required to train a DynamicConv model on IWSLT'14 German to English translation by up to 8.7x with a BLEU score drop of 0.15.", "target": ["DNNを省メモリで学習する手法と、その効果をまとめた研究。Sparsity(DNNは実際貢献する重みが少ないため、重みをもつものをまとめ効率化)、Low precision(低精度演算)、Microbatching(ミニバッチを分割し勾配計算)、Gradient checkpointing(forward時特定箇所の勾配だけキープ)の4つを検証している"]} {"source": "Learning multi-hop reasoning has been a key challenge for reading comprehension models, leading to the design of datasets that explicitly focus on it. Ideally, a model should not be able to perform well on a multi-hop question answering task without doing multi-hop reasoning. In this paper, we investigate two recently proposed datasets, WikiHop and HotpotQA. First, we explore sentence-factored models for these tasks; by design, these models cannot do multi-hop reasoning, but they are still able to solve a large number of examples in both datasets. Furthermore, we find spurious correlations in the unmasked version of WikiHop, which make it easy to achieve high performance considering only the questions and answers.
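One of the techniques in the memory-reduction entry above, microbatching, amounts to gradient accumulation: a large logical batch is split into chunks so that only one chunk's activations are alive at a time. A minimal sketch, assuming a standard PyTorch `model`, `loss_fn`, and `optimizer`:

```python
import torch

# Microbatching / gradient accumulation: gradients from each small chunk are
# accumulated in .grad before a single optimizer step, trading scheduling
# overhead for a lower peak activation memory.
def train_step(model, loss_fn, optimizer, x, y, micro_batch=8):
    optimizer.zero_grad()
    x_chunks, y_chunks = x.split(micro_batch), y.split(micro_batch)
    for xb, yb in zip(x_chunks, y_chunks):
        loss = loss_fn(model(xb), yb) / len(x_chunks)   # average over micro-batches
        loss.backward()                                  # accumulates into parameter .grad
    optimizer.step()
```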
Finally, we investigate one key difference between these datasets, namely span-based vs. multiple-choice formulations of the QA task. Multiple-choice versions of both datasets can be easily gamed, and two models we examine only marginally exceed a baseline in this setting. Overall, while these datasets are useful testbeds, high-performing models may not be learning as much multi-hop reasoning as previously thought.", "target": ["回答を行うために複数文にまたがる推論が必要なタスク(トムは東京にいる、東京は関東=>トムは関東にいるか?等)について、単純に単文だけ、選択式の場合単文すらなくても回答できるケースがあるという調査結果。WikiHop/HotpotQAで検証しており、既存モデルが複数文推論を学習していない可能性を示唆。"]} {"source": "Deep generative models are becoming a cornerstone of modern machine learning. Recent work on conditional generative adversarial networks has shown that learning complex, high-dimensional distributions over natural images is within reach. While the latest models are able to generate high-fidelity, diverse natural images at high resolution, they rely on a vast quantity of labeled data. In this work we demonstrate how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art on both unsupervised ImageNet synthesis, as well as in the conditional setting. In particular, the proposed approach is able to match the sample quality (as measured by FID) of the current state-of-the-art conditional model BigGAN on ImageNet using only 10% of the labels and outperform it using 20% of the labels.", "target": ["条件(クラスラベル)を指定した画像生成では、もちろんクラスラベルのついたデータセットが必要だった。これを教師なし/半教師ありの手法で削減したという研究。画像の回転予測と少量ラベルからの予測2つのタスクについて、事前学習する場合と補助タスクとして導入する場合を検証している。"]} {"source": "Training models to high-end performance requires availability of large labeled datasets, which are expensive to get. The goal of our work is to automatically synthesize labeled datasets that are relevant for a downstream task. We propose Meta-Sim, which learns a generative model of synthetic scenes, and obtain images as well as its corresponding ground-truth via a graphics engine. We parametrize our dataset generator with a neural network, which learns to modify attributes of scene graphs obtained from probabilistic scene grammars, so as to minimize the distribution gap between its rendered outputs and target data. If the real dataset comes with a small labeled validation set, we additionally aim to optimize a meta-objective, i.e. downstream task performance. Experiments show that the proposed method can greatly improve content generation quality over a human-engineered probabilistic scene grammar, both qualitatively and quantitatively as measured by performance on a downstream task.", "target": ["シーン認識などのタスクで、学習データを水増しする試み。画像の合成ではGANなどが使われることが多いが、本研究では普通にグラフィックエンジンを使用している。シーンをグラフで定義、グラフの属性を変更、グラフからレンダリング、というプロセスを取る。"]} {"source": "The convolution layer has been the dominant feature extractor in computer vision for years. However, the spatial aggregation in convolution is basically a pattern matching process that applies fixed filters which are inefficient at modeling visual elements with varying spatial distributions. This paper presents a new image feature extractor, called the local relation layer, that adaptively determines aggregation weights based on the compositional relationship of local pixel pairs. With this relational approach, it can composite visual elements into higher-level entities in a more efficient manner that benefits semantic inference. 
A network built with local relation layers, called the Local Relation Network (LR-Net), is found to provide greater modeling capacity than its counterpart built with regular convolution on large-scale recognition tasks such as ImageNet classification.", "target": ["CapsuleNetのようなボトムアップ型の特徴集約を、CNNのように局所領域単位で行うことで、双方の良いところどり(特徴認識/計算効率)を目指した研究。SOTAの精度には及ばないが、フィルタによる固定的な集約でなく、入力画素間の相関に応じた可変的な集約演算をImageNetのサイズで行うことに成功。"]} {"source": "We show how to teach machines to paint like human painters, who can use a small number of strokes to create fantastic paintings. By employing a neural renderer in model-based Deep Reinforcement Learning (DRL), our agents learn to determine the position and color of each stroke and make long-term plans to decompose texture-rich images into strokes. Experiments demonstrate that excellent visual effects can be achieved using hundreds of strokes. The training process does not require the experience of human painters or stroke tracking data. The code is available at this https URL.", "target": ["モデルベースでブラシストロークを学習する研究。通常は行動をペインティングソフト(MyPaintなど)に入れることが多いが、描画環境(=モデル)をNeural Netで構築している。遷移は描画プログラムを教師として学習、報酬は描画前後の画像に対するWGANのスコア(Discrimator判定)差を使用している。"]} {"source": "Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the pix2pix framework, to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or pix2pix to train a U-net on a segmentation task, then testing on real microscopy data. Our method quantitatively performs as well as pix2pix, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable at synthesising images of different appearance, but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise.", "target": ["Capsule機構を使用した画像生成についての研究。pix2pixをベースにしており、U-Netライクなアーキテクチャーの畳み込み部分にCapsuleを使って生成を行っている。生成画像を使うことでセグメンテーションタスクの精度が向上+Capsuleが意味あるグループの特徴をとらえていることを確認。"]} {"source": "Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \\sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. 
We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.", "target": ["TransformerのAttention計算を効率化し、画像や音声といった入力数が多いデータにも適応可能にした研究。Self-Attentionは入力数Nに対しNxN行列の計算が必要で、Nが多いデータでは計算困難だった。そこで、実際Attentionがはられる箇所を元に行方向/列方向の計算箇所を限定し、計算を効率化している。"]} {"source": "Driving requires reacting to a wide variety of complex environment conditions and agent behaviors. Explicitly modeling each possible scenario is unrealistic. In contrast, imitation learning can, in theory, leverage data from large fleets of human-driven cars. Behavior cloning in particular has been successfully used to learn simple visuomotor policies end-to-end, but scaling to the full spectrum of driving behaviors remains an unsolved problem. In this paper, we propose a new benchmark to experimentally investigate the scalability and limitations of behavior cloning. We show that behavior cloning leads to state-of-the-art results, including in unseen environments, executing complex lateral and longitudinal maneuvers without these reactions being explicitly programmed. However, we confirm well-known limitations (due to dataset bias and overfitting), new generalization issues (due to dynamic objects and the lack of a causal model), and training instability requiring further research before behavior cloning can graduate to real-world driving. The code of the studied behavior cloning approaches can be found at this https URL .", "target": ["模倣学習で自動運転を行う際の、性能と限界についての研究。ベンチーマークとして行動意図(直進/右に行くなど)で条件付けし学習する CILを使用し十分な精度を確認。一方、学習データを集めるほどレアイベント(事故など)の対応が難しくなる、明確な因果モデルの欠如、学習分散の大きさなどをあげている"]} {"source": "In this paper, we propose a novel pretraining-based encoder-decoder framework, which can generate the output sequence based on the input sequence in a two-stage manner. For the encoder of our model, we encode the input sequence into context representations using BERT. For the decoder, there are two stages in our model, in the first stage, we use a Transformer-based decoder to generate a draft output sequence. In the second stage, we mask each word of the draft sequence and feed it to BERT, then by combining the input sequence and the draft representation generated by BERT, we use a Transformer-based decoder to predict the refined word for each masked position. To the best of our knowledge, our approach is the first method which applies the BERT into text generation tasks. As the first step in this direction, we evaluate our proposed method on the text summarization task. Experimental results show that our model achieves new state-of-the-art on both CNN/Daily Mail and New York Times datasets.", "target": ["BERT(#959)を使った抽象型(Abstractive)要約の研究。BERTはマスクされた箇所を予測する形で学習するため、そのままでは文生成に使えない。このため、一旦Transformerベースのモデルで文(ドラフト)を作成=>MaskをかけてBERTで補完=>補完文と入力を元に再作成、と2段階で生成している"]} {"source": "BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performed system by 1.65 on ROUGE-L. 
The codes to reproduce our results are available at this https URL", "target": ["BERT(#959)を使った抽出型(Extractive)要約の研究。BERTはトークン単位で特徴を出力するが、抽出型では文単位の予測となる。このため文境界を識別するためのSegment Embeddingを入力し、境界開始のトークン表現を文表現として使用+文同士の関係を認識する層を追加し予測している"]} {"source": "The Transformer architecture is superior to RNN-based models in computational efficiency. Recently, GPT and BERT demonstrate the efficacy of Transformer models on various NLP tasks using pre-trained language models on large-scale corpora. Surprisingly, these Transformer architectures are suboptimal for language model itself. Neither self-attention nor the positional encoding in the Transformer is able to efficiently incorporate the word-level sequential context crucial to language modeling. In this paper, we explore effective Transformer architectures for language model, including adding additional LSTM layers to better capture the sequential context while still keeping the computation efficient. We propose Coordinate Architecture Search (CAS) to find an effective architecture through iterative refinement of the model. Experimental results on the PTB, WikiText-2, and WikiText-103 show that CAS achieves perplexities between 20.42 and 34.11 on all problems, i.e. on average an improvement of 12.0 perplexity units compared to state-of-the-art LSTMs. The source code is publicly available.", "target": ["BERTやGPTといったTransformer (#329) ベースのモデルは各NLPのタスクで好成績だが、ベースの言語モデルのタスクはそれほど精度が出ていなかった。そこで、言語モデルに強いLSTMを組み合わせる+組み合わせ方を自動探索で最適化するという研究。PTBやWikiTextといった言語モデルタスクで、SOTAを12pt近く更新。"]} {"source": "Addressing catastrophic forgetting is one of the key challenges in continual learning where machine learning systems are trained with sequential or streaming tasks. Despite recent remarkable progress in state-of-the-art deep learning, deep neural networks (DNNs) are still plagued with the catastrophic forgetting problem. This paper presents a conceptually simple yet general and effective framework for handling catastrophic forgetting in continual learning with DNNs. The proposed method consists of two components: a neural structure optimization component and a parameter learning and/or fine-tuning component. By separating the explicit neural structure learning and the parameter estimation, not only is the proposed method capable of evolving neural structures in an intuitively meaningful way, but also shows strong capabilities of alleviating catastrophic forgetting in experiments. Furthermore, the proposed method outperforms all other baselines on the permuted MNIST dataset, the split CIFAR100 dataset and the Visual Domain Decathlon dataset in continual learning setting.", "target": ["転移学習を行う際に起こる破壊的忘却(学習済みタスクのことを忘れてしまう現象)を防ぐためには、タスク固有のレイヤをどこにどれだけ設けるかなどを設計する必要がある。この設計をネットワークの自動探索を使って行う試み。再利用・転移用重みの追加・固有レイヤの追加の3行動を基本とし最適化を行う"]} {"source": "Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn \"patches\" that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real-world, i.e. by modifying an object and filming it with a video camera. 
However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it. In this paper, we present an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able to successfully hide a person from a person detector. Such an attack could, for instance, be used maliciously to circumvent surveillance systems: intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera. From our results we can see that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.", "target": ["物体検出に対するAdversarial Exampleで、人検出の回避に特化した研究。イメージ的には、パッチの印刷されたボードを持つだけで人検出システムをかいくぐって侵入、が可能になる。YOLOv2をベースに、物体/クラスの判定スコアを下げると共に印刷可能/自然(滑らか)なパッチになるようlossを設計している"]} {"source": "We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients). The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end-to-end speech recognition tasks. We achieve state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state-of-the-art hybrid system of 7.5% WER. For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER.", "target": ["音声認識におけるData Augmentationの研究。spectrogramを入力とし、時間軸方向へのwarping、周波数軸/時間軸のrandom dropという3つの手法を検証している。いずれも効果があるが、warpingは大変な割に効果が微小なため、計算量の制約があるなら最初に落とす手法としている。"]} {"source": "In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks. The GLUE benchmark, introduced a little over one year ago, offers a single-number metric that summarizes progress on a diverse set of such tasks, but performance on the benchmark has recently surpassed the level of non-expert humans, suggesting limited headroom for further research. In this paper we present SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard. SuperGLUE is available at this http URL.", "target": ["言語理解タスクのベンチマークとして使用されているGLUEを、新しいタスクで作り直したSuperGLUEが公開。分類/文関係(NLI)のタスクが減り、文書理解系のタスク(共参照/意味識別)が増えている。"]} {"source": "We explore neural painters, a generative model for brushstrokes learned from a real non-differentiable and non-deterministic painting program. We show that when training an agent to \"paint\" images using brushstrokes, using a differentiable neural painter leads to much faster convergence.
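For the SpecAugment entry above, the two masking operations (time warping omitted) are easy to sketch on a log-mel spectrogram; the mask counts and maximum widths below are illustrative assumptions, not the paper's tuned policies.

```python
import numpy as np

# Frequency and time masking on a spectrogram of shape (num_mel_bins, num_frames).
def spec_augment(spec, num_freq_masks=2, max_f=10, num_time_masks=2, max_t=20):
    spec = spec.copy()
    n_mels, n_frames = spec.shape
    for _ in range(num_freq_masks):
        f = np.random.randint(0, max_f + 1)               # mask width in mel channels
        f0 = np.random.randint(0, max(1, n_mels - f))
        spec[f0:f0 + f, :] = 0.0                           # zero out a block of channels
    for _ in range(num_time_masks):
        t = np.random.randint(0, max_t + 1)                # mask width in frames
        t0 = np.random.randint(0, max(1, n_frames - t))
        spec[:, t0:t0 + t] = 0.0                           # zero out a block of time steps
    return spec
```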
We propose a method for encouraging this agent to follow human-like strokes when reconstructing digits. We also explore the use of a neural painter as a differentiable image parameterization. By directly optimizing brushstrokes to activate neurons in a pre-trained convolutional network, we can directly visualize ImageNet categories and generate \"ideal\" paintings of each class. Finally, we present a new concept called intrinsic style transfer. By minimizing only the content loss from neural style transfer, we allow the artistic medium, in this case, brushstrokes, to naturally dictate the resulting style.", "target": ["ブラシストロークを学習させるネットワークの提案。VAEで画像の表現を学習、全結合でブラシアクションをVAE表現との対応を学習、GANでブラシアクションによる画像生成を学習、と3段階で学習を行い学習効率をあげている。生成画像のクラス分類タスクを入れることで、任意クラスの画像生成も行なっている"]} {"source": "We introduce a self-supervised representation learning method based on the task of temporal alignment between videos. The method trains a network using temporal cycle consistency (TCC), a differentiable cycle-consistency loss that can be used to find correspondences across time in multiple videos. The resulting per-frame embeddings can be used to align videos by simply matching frames using the nearest-neighbors in the learned embedding space. To evaluate the power of the embeddings, we densely label the Pouring and Penn Action video datasets for action phases. We show that (i) the learned embeddings enable few-shot classification of these action phases, significantly reducing the supervised training requirements; and (ii) TCC is complementary to other methods of self-supervised learning in videos, such as Shuffle and Learn and Time-Contrastive Networks. The embeddings are also used for a number of applications based on alignment (dense temporal correspondence) between video pairs, including transfer of metadata of synchronized modalities between videos (sounds, temporal semantic labels), synchronized playback of multiple videos, and anomaly detection. Project webpage: this https URL .", "target": ["時間変化の表現を、教師なしで学習する試み。同じ動作を行っている2つの動画を用意し、画像フレームをEncodeして表現を作成=>相手先の画像フレームから表現が最も近いものを選択=>相手先からも選んだ場合一致するか、で学習する(Cycle Consistency)。"]} {"source": "One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content.From the perspective of a question answering system, this might comprise questions the document can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions with a vanilla sequence-to-sequence model, trained using datasets consisting of pairs of query and relevant documents. By combining our method with a highly-effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without re-ranking) approach the effectiveness of more computationally expensive neural re-rankers but are much faster.", "target": ["文書から文書を検索する際に使われそうなクエリを予測して、文書+予測クエリでインデックスを作成しておくことで検索性能を上げるという研究。非常にシンプルな手法ながら、MS MARCOの関連箇所検索(Passage Retrieval)でSOTAの性能。"]} {"source": "Current state-of-the-art convolutional architectures for object detection are manually designed. Here we aim to learn a better architecture of feature pyramid network for object detection. We adopt Neural Architecture Search and discover a new feature pyramid architecture in a novel scalable search space covering all cross-scale connections. 
The discovered architecture, named NAS-FPN, consists of a combination of top-down and bottom-up connections to fuse features across scales. NAS-FPN, combined with various backbone models in the RetinaNet framework, achieves better accuracy and latency tradeoff compared to state-of-the-art object detection models. NAS-FPN improves mobile detection accuracy by 2 AP compared to state-of-the-art SSDLite with MobileNetV2 model in [32] and achieves 48.3 AP which surpasses Mask R-CNN [10] detection accuracy with less computation time.", "target": ["物体検出を行うネットワークの特徴ピラミッドを、自動探索で作成したという研究。異なるスケールの特徴マップを混合させるのが有効という知見から、候補から2つ選択=>出力サイズ決定=>混合(sum/global poolingでchannelの重みを計算し足し合わせる)=>候補に加える、を再帰的に繰り返し構造を作る。"]} {"source": "The combination of deep neural network models and reinforcement learning algorithms can make it possible to learn policies for robotic behaviors that directly read in raw sensory inputs, such as camera images, effectively subsuming both estimation and control into one model. However, real-world applications of reinforcement learning must specify the goal of the task by means of a manually programmed reward function, which in practice requires either designing the very same perception pipeline that end-to-end reinforcement learning promises to avoid, or else instrumenting the environment with additional sensors to determine if the task has been performed successfully. In this paper, we propose an approach for removing the need for manual engineering of reward specifications by enabling a robot to learn from a modest number of examples of successful outcomes, followed by actively solicited queries, where the robot shows the user a state and asks for a label to determine whether that state represents successful completion of the task. While requesting labels for every single state would amount to asking the user to manually provide the reward signal, our method requires labels for only a tiny fraction of the states seen during training, making it an efficient and practical approach for learning skills without manually engineered rewards. We evaluate our method on real-world robotic manipulation tasks where the observations consist of images viewed by the robot's camera. In our experiments, our method effectively learns to arrange objects, place books, and drape cloth, directly from images and without any manually specified reward functions, and with only 1-4 hours of interaction with the real world.", "target": ["一般的な模倣学習ではエキスパートのデモが必要になるが、デモでなく成功時の画像だけで行動を学習する手法の提案。成功画像から成否判定を行う分類機をCNNで作成し、それを報酬関数として学習させる。"]} {"source": "This paper addresses the problem of unsupervised domain adaption from theoretical and algorithmic perspectives. Existing domain adaptation theories naturally imply minimax optimization algorithms, which connect well with the domain adaptation methods based on adversarial learning. However, several disconnections still exist and form the gap between theory and algorithm. We extend previous theories (Mansour et al., 2009c; Ben-David et al., 2010) to multiclass classification in domain adaptation, where classifiers based on the scoring functions and margin loss are standard choices in algorithm design. We introduce Margin Disparity Discrepancy, a novel measurement with rigorous generalization bounds, tailored to the distribution comparison with the asymmetric margin loss, and to the minimax optimization for easier training. 
Our theory can be seamlessly transformed into an adversarial learning algorithm for domain adaptation, successfully bridging the gap between theory and algorithm. A series of empirical studies show that our algorithm achieves the state of the art accuracies on challenging domain adaptation tasks.", "target": ["教師なしのドメイン適用ではソース側で学習した後、ラベルなしでターゲット側への転移を行う。このときソース/ターゲット側で共通する判断空間が転移可能な上限となるが、空間が広すぎるため上界設定が難しかった。そこで、制約を加えることで学習/上界到達を行いやすくした"]} {"source": "In natural images, information is conveyed at different frequencies where higher frequencies are usually encoded with fine details and lower frequencies are usually encoded with global structures. Similarly, the output feature maps of a convolution layer can also be seen as a mixture of information at different frequencies. In this work, we propose to factorize the mixed feature maps by their frequencies, and design a novel Octave Convolution (OctConv) operation to store and process feature maps that vary spatially \"slower\" at a lower spatial resolution reducing both memory and computation cost. Unlike existing multi-scale methods, OctConv is formulated as a single, generic, plug-and-play convolutional unit that can be used as a direct replacement of (vanilla) convolutions without any adjustments in the network architecture. It is also orthogonal and complementary to methods that suggest better topologies or reduce channel-wise redundancy like group or depth-wise convolutions. We experimentally show that by simply replacing convolutions with OctConv, we can consistently boost accuracy for both image and video recognition tasks, while reducing memory and computational cost. An OctConv-equipped ResNet-152 can achieve 82.9% top-1 classification accuracy on ImageNet with merely 22.2 GFLOPs.", "target": ["CNNで出力される特徴マップを、低周波(解像度が荒い)特徴と高周波(解像度が高い)特徴にわける手法の提案。これにより特徴マップの冗長性を無くし、精度面+演算速度面でメリットが出るとしている。CNNのフィルタを同周波(高=>高・低=>低)・異周波交換(高=>低・低=>高)の4つに分割し実装している"]} {"source": "We propose a multi-task learning framework to learn a joint Machine Reading Comprehension (MRC) model that can be applied to a wide range of MRC tasks in different domains. Inspired by recent ideas of data selection in machine translation, we develop a novel sample re-weighting scheme to assign sample-specific weights to the loss. Empirical study shows that our approach can be applied to many existing MRC models. Combined with contextual representations from pre-trained language models (such as ELMo), we achieve new state-of-the-art results on a set of MRC benchmark datasets. We release our code at this https URL.", "target": ["QAなどの読解タスクで、複数タスクを並列で学習させる手法の提案。どのタスクをどれだけ学習するかの配分率を決めるのに、言語モデルを使用している。質問の言語モデル、回答は短すぎるので長さの分布を作り、ターゲットタスクとは傾向が異なるものに重みを振るようにしている"]} {"source": "A machine learning approach to the creation of visual summaries for narrative text is presented. Standard natural language processing tools for named entities recognition are used together with a clustering algorithm to detect the characters of the novel and their aliases. The most relevant ones and their relations are evaluated on the basis of a simple statistical analysis. These characters are visually depicted as nodes of an undirected graph whose edges describe relations with other characters. Specialized sentiment analysis techniques based on sentence embedding decide the colours of characters/nodes and their relations/edges. Additional information about the characters (e.g., gender) and their relations (e.g., siblings or partnerships) are returned by binary classifiers and visually depicted in the graph. 
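The octave convolution entry above factorizes feature maps into high- and low-frequency groups, with the low-frequency group stored at half resolution. A rough four-path sketch (high-to-high, high-to-low, low-to-high, low-to-low) is given below; the channel split via `alpha` and the pooling/upsampling choices are assumptions for illustration rather than the paper's exact operator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Octave convolution sketch: low-frequency features live at half the spatial
# resolution; cross-frequency paths use average pooling (down) and nearest
# upsampling (up) around the convolutions.
class OctConv(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5, kernel=3, pad=1):
        super().__init__()
        in_lo, out_lo = int(alpha * in_ch), int(alpha * out_ch)
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        self.hh = nn.Conv2d(in_hi, out_hi, kernel, padding=pad)
        self.hl = nn.Conv2d(in_hi, out_lo, kernel, padding=pad)
        self.lh = nn.Conv2d(in_lo, out_hi, kernel, padding=pad)
        self.ll = nn.Conv2d(in_lo, out_lo, kernel, padding=pad)

    def forward(self, x_hi, x_lo):  # x_lo has half the spatial size of x_hi
        y_hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), scale_factor=2, mode="nearest")
        y_lo = self.ll(x_lo) + self.hl(F.avg_pool2d(x_hi, 2))
        return y_hi, y_lo
```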
For those specialized tasks, small amounts of manually annotated data are sufficient to achieve good accuracy. Compared to analogous tools, the machine learning approach we present allows for a richer representation of texts of this kind. A case study to demonstrate this approach for a series of books is also reported.", "target": ["自然言語処理を用いてハリーポッターの人物相関図を作成したという研究。人名/別名(あだ名など)を固有表現認識で抽出し、関係に関するフレーズを共起から取得。そのフレーズから関係分類を行うことで人物間のエッジを作成している。"]} {"source": "We present a novel method for simultaneous learning of depth, egomotion, object motion, and camera intrinsics from monocular videos, using only consistency across neighboring video frames as supervision signal. Similarly to prior work, our method learns by applying differentiable warping to frames and comparing the result to adjacent ones, but it provides several improvements: We address occlusions geometrically and differentiably, directly using the depth maps as predicted during training. We introduce randomized layer normalization, a novel powerful regularizer, and we account for object motion relative to the scene. To the best of our knowledge, our work is the first to learn the camera intrinsic parameters, including lens distortion, from video in an unsupervised manner, thereby allowing us to extract accurate depth and motion from arbitrary videos of unknown origin at scale. We evaluate our results on the Cityscapes, KITTI and EuRoC datasets, establishing new state of the art on depth prediction and odometry, and demonstrate qualitatively that depth prediction can be learned from a collection of YouTube videos.", "target": ["単眼の画像フレームから深度/自己・物体動作だけでなく、カメラの内部パラメーターまで推定する手法。平行移動の場合カメラ外部で調整できてしまうため学習ができないが、回転についてはそうはいかないため(外部/内部双方が正しくないと復元できない)、回転を用い学習している"]} {"source": "When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation [Ni+ 2017] and definition generation [Noraset+ 2017; Gadetsky+ 2018], our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.", "target": ["知らない単語について、定義を書き出せるようにする研究。人間が知らない単語について、周辺の文脈から意味を察するようにLocal+Global+Characterの情報を使い定義を生成する。「知らない単語の定義」を学習するために、(辞書ではなく)Wikipedia/Wikidataから学習データを作成している。"]} {"source": "Neural networks for image recognition have evolved through extensive manual design from simple chain-like models to structures with multiple wiring paths. The success of ResNets and DenseNets is due in large part to their innovative wiring plans. 
Now, neural architecture search (NAS) studies are exploring the joint optimization of wiring and operation types, however, the space of possible wirings is constrained and still driven by manual design despite being searched. In this paper, we explore a more diverse set of connectivity patterns through the lens of randomly wired neural networks. To do this, we first define the concept of a stochastic network generator that encapsulates the entire network generation process. Encapsulation provides a unified view of NAS and randomly wired networks. Then, we use three classical random graph models to generate randomly wired graphs for networks. The results are surprising: several variants of these random generators yield network instances that have competitive accuracy on the ImageNet benchmark. These results suggest that new efforts focusing on designing better network generators may lead to new breakthroughs by exploring less constrained search spaces with more room for novel design.", "target": ["ネットワーク構造の自動構築において、より自由な構造を探索するためランダムにグラフを作成してそれを実行可能なNNに変換する方式を提案(通常は探索空間を人が決めて絞る)。各アルゴリズムについて5回実行の平均/分散をとっており、ImageNetベンチマークと同等の結果。"]} {"source": "Text-based adventure games provide a platform on which to explore reinforcement learning in the context of a combinatorial action space, such as natural language. We present a deep reinforcement learning architecture that represents the game state as a knowledge graph which is learned during exploration. This graph is used to prune the action space, enabling more efficient exploration. The question of which action to take can be reduced to a question-answering task, a form of transfer learning that pre-trains certain parts of our architecture. In experiments using the TextWorld framework, we show that our proposed technique can learn a control policy faster than baseline alternatives. We have also open-sourced our code at this https URL.", "target": ["テキストアドベンチャーゲームを、強化学習+知識グラフで攻略したという研究。ゲームは選択肢で分岐して進むため、得られた状態で内部のグラフを更新していく。グラフ表現(Graph Convolution + Attention)+テキスト表現(一定範囲のBi-LSTM)で行動価値を出力する(行動数はグラフで絞り込む)。"]} {"source": "How can we measure whether a natural language generation system produces both high quality and diverse outputs? Human evaluation captures quality but not diversity, as it does not catch models that simply plagiarize from the training set. On the other hand, statistical evaluation (i.e., perplexity) captures diversity but not quality, as models that occasionally emit low quality samples would be insufficiently penalized. In this paper, we propose a unified framework which evaluates both diversity and quality, based on the optimal error rate of predicting whether a sentence is human- or machine-generated. We demonstrate that this error rate can be efficiently estimated by combining human and statistical evaluation, using an evaluation metric which we call HUSE. On summarization and chit-chat dialogue, we show that (i) HUSE detects diversity defects which fool pure human evaluation and that (ii) techniques such as annealing for improving quality actually decrease HUSE due to decreased diversity.", "target": ["自然言語における文生成で、人の評価では品質が測れるが多様性が、機械的な指標では多様性は測れるが品質を測ることが難しい問題を解決する試み。人の評価(生成されたものか人間が作ったものかをクラウドソーシングで判定してもらい作成)と機械的な識別結果を結合した特徴でスコアを算出している。"]} {"source": "Recently, convolutional neural networks with 3D kernels (3D CNNs) have been very popular in computer vision community as a result of their superior ability of extracting spatio-temporal features within video frames compared to 2D CNNs. 
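The stochastic network generator idea above can be illustrated with one of the classical random graph models the paper mentions: sample a Watts-Strogatz graph, orient its edges from lower to higher node index to obtain a DAG, and execute nodes in topological order. The sketch below uses networkx; the op assignment and execution loop are only schematic placeholders, not the paper's implementation.

```python
import networkx as nx

def random_wiring(n_nodes=16, k=4, p=0.25, seed=0):
    """Sample a Watts-Strogatz small-world graph and orient edges from lower to
    higher node index, which yields a DAG that can be executed as a network."""
    g = nx.watts_strogatz_graph(n_nodes, k, p, seed=seed)
    dag = nx.DiGraph()
    dag.add_nodes_from(g.nodes)
    dag.add_edges_from((min(u, v), max(u, v)) for u, v in g.edges)
    return dag

if __name__ == "__main__":
    dag = random_wiring()
    order = list(nx.topological_sort(dag))
    inputs = [n for n in dag.nodes if dag.in_degree(n) == 0]    # entry nodes
    outputs = [n for n in dag.nodes if dag.out_degree(n) == 0]  # exit nodes
    print("execution order:", order)
    print("input nodes:", inputs, "output nodes:", outputs)
    for n in order:
        preds = list(dag.predecessors(n))
        # In a real network, node n would aggregate its predecessors' feature
        # maps (e.g. a weighted sum), apply a conv op, and pass the result on.
        print(f"node {n}: aggregate {preds}")
```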
Although there have been great advances recently in building resource efficient 2D CNN architectures considering memory and power budget, there are hardly any similar resource efficient architectures for 3D CNNs. In this paper, we have converted various well-known resource efficient 2D CNNs to 3D CNNs and evaluated their performance on three major benchmarks in terms of classification accuracy for different complexity levels. We have experimented on (1) Kinetics-600 dataset to inspect their capacity to learn, (2) Jester dataset to inspect their ability to capture motion patterns, and (3) UCF-101 to inspect the applicability of transfer learning. We have evaluated the run-time performance of each model on a single Titan XP GPU and a Jetson TX2 embedded system. The results of this study show that these models can be utilized for different types of real-world applications since they provide real-time performance with considerable accuracies and memory usage. Our analysis on different complexity levels shows that the resource efficient 3D CNNs should not be designed too shallow or narrow in order to save complexity. The codes and pretrained models used in this work are publicly available.", "target": ["2Dにおける省サイズモデル(SqueezeNetやMobileNetV1/V2)を、3Dのタスクに適用してみたという研究。具体的には、モーション認識に適用している。基本的に性能は引き継がれるという結果で、depthwiseが有効で3Dの場合深いほうが良いとのこと。"]} {"source": "The overreliance on large parallel corpora significantly limits the applicability of machine translation systems to the majority of language pairs. Back-translation has been dominantly used in previous approaches for unsupervised neural machine translation, where pseudo sentence pairs are generated to train the models with a reconstruction loss. However, the pseudo sentences are usually of low quality as translation errors accumulate during training. To avoid this fundamental issue, we propose an alternative but more effective approach, extract-edit, to extract and then edit real sentences from the target monolingual corpora. Furthermore, we introduce a comparative translation loss to evaluate the translated target sentences and thus train the unsupervised translation systems. Experiments show that the proposed approach consistently outperforms the previous state-of-the-art unsupervised machine translation systems across two benchmarks (English-French and English-German) and two low-resource language pairs (English-Romanian and English-Russian) by more than 2 (up to 3.63) BLEU points.", "target": ["教師なし翻訳における逆翻訳の手法を改善した研究。逆翻訳(Back-translation)は逆文の品質があまり良くないため、候補をいくつかとる(Extract)、かつ元文の意味に近づける(Edit)操作を行ない選択を行う。前者は意味(潜在空間上)の距離比較で、後者は元/候補の間でMaxpool(=意味強調)をとることで行う"]} {"source": "Generative models often use human evaluations to measure the perceived quality of their outputs. Automated metrics are noisy indirect proxies, because they rely on heuristics or pretrained embeddings. However, up until now, direct human evaluation strategies have been ad-hoc, neither standardized nor validated. Our work establishes a gold standard human benchmark for generative realism. We construct Human eYe Perceptual Evaluation (HYPE), a human benchmark that is (1) grounded in psychophysics research in perception, (2) reliable across different sets of randomly sampled outputs from a model, (3) able to produce separable model performances, and (4) efficient in cost and time. We introduce two variants: one that measures visual perception under adaptive time constraints to determine the threshold at which a model's outputs appear real (e.g.
250ms), and the other a less expensive variant that measures human error rate on fake and real images sans time constraints. We test HYPE across six state-of-the-art generative adversarial networks and two sampling techniques on conditional and unconditional image generation using four datasets: CelebA, FFHQ, CIFAR-10, and ImageNet. We find that HYPE can track model improvements across training epochs, and we confirm via bootstrap sampling that HYPE rankings are consistent and replicable.", "target": ["GANなどの生成モデルに対する人手評価について、評価方法を統一しようという研究。画像を見る時間に制限を設けたHYPE-Timeと設けないHYPE-Infinityを提案している。Timeは心理学における適応的階段法をベースとした厳密な評価方法であり、Infinityはその簡易版としている。"]} {"source": "Recent work has improved our ability to detect linguistic knowledge in word representations. However, current methods for detecting syntactic knowledge do not test whether syntax trees are represented in their entirety. In this work, we propose a structural probe, which evaluates whether syntax trees are embedded in a linear transformation of a neural network’s word representation space. The probe identifies a linear transformation under which squared L2 distance encodes the distance between words in the parse tree, and one in which squared L2 norm encodes depth in the parse tree. Using our probe, we show that such transformations exist for both ELMo and BERT but not in baselines, providing evidence that entire syntax trees are embedded implicitly in deep models’ vector geometry.", "target": ["BERTやELMoといった文脈埋め込みベクトルについて、潜在表現を線形変換することで係り受け解析の距離が得られる空間に転写できるとした研究。端的には、BERTやELMoの表現内にパースツリーの情報が含まれていることを示した。距離については、ツリー内の接続を辿った距離を使用している。"]} {"source": "Many anatomical factors, such as bone geometry and muscle condition, interact to affect human movements. This work aims to build a comprehensive musculoskeletal model and its control system that reproduces realistic human movements driven by muscle contraction dynamics. The variations in the anatomic model generate a spectrum of human movements ranging from typical to highly stylistic movements. To do so, we discuss scalable and reliable simulation of anatomical features, robust control of under-actuated dynamical systems based on deep reinforcement learning, and modeling of pose-dependent joint limits. The key technical contribution is a scalable, two-level imitation learning algorithm that can deal with a comprehensive full-body musculoskeletal model with 346 muscles. We demonstrate the predictive simulation of dynamic motor skills under anatomical conditions including bone deformity, muscle weakness, contracture, and the use of a prosthesis. We also simulate various pathological gaits and predictively visualize how orthopedic surgeries improve post-operative gaits.", "target": ["筋骨格モデルに基づく運動を、強化学習で学ぶという研究。骨格を動かすモデルと筋肉を活性させるモデルの2つに分かれており、骨格=>筋肉と階層型の構成にしてコントロールを行う。学習には模倣学習を使用しており、姿勢(間接角度)と末端位置の2つを報酬としている。"]} {"source": "We propose a novel generative adversarial network (GAN) for the task of unsupervised learning of 3D representations from natural images. Most generative models rely on 2D kernels to generate images and make few assumptions about the 3D world. These models therefore tend to create blurry images or artefacts in tasks that require a strong 3D understanding, such as novel-view synthesis. HoloGAN instead learns a 3D representation of the world, and to render this representation in a realistic manner. Unlike other GANs, HoloGAN provides explicit control over the pose of generated objects through rigid-body transformations of the learnt 3D features. 
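The structural probe described above fits a linear map B so that squared L2 distances between transformed word vectors approximate parse-tree distances. Below is a minimal sketch of the probe's distance computation and training loss, assuming precomputed contextual embeddings and gold tree distances; all names and sizes are illustrative.

```python
import torch

def probe_distances(h, B):
    """Squared L2 distance between all pairs of words after a linear map B.

    h: (seq_len, dim) contextual embeddings, B: (rank, dim) probe matrix.
    Returns d[i, j] = ||B h_i - B h_j||^2 as a (seq_len, seq_len) matrix.
    """
    t = h @ B.T                             # (seq_len, rank)
    diff = t.unsqueeze(1) - t.unsqueeze(0)  # (seq_len, seq_len, rank)
    return (diff ** 2).sum(-1)

def probe_loss(h, tree_dist, B):
    # L1 gap between predicted squared distances and gold parse-tree distances,
    # averaged over word pairs.
    return (probe_distances(h, B) - tree_dist).abs().mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    h = torch.randn(5, 768)                      # e.g. BERT vectors for 5 tokens
    tree_dist = torch.tensor([[0., 1, 2, 2, 3],
                              [1, 0, 1, 1, 2],
                              [2, 1, 0, 2, 3],
                              [2, 1, 2, 0, 1],
                              [3, 2, 3, 1, 0]])  # toy gold tree distances
    B = torch.randn(64, 768, requires_grad=True)
    loss = probe_loss(h, tree_dist, B)
    loss.backward()                              # B can then be fit by gradient descent
    print(float(loss))
```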
Our experiments show that using explicit 3D features enables HoloGAN to disentangle 3D pose and identity, which is further decomposed into shape and appearance, while still being able to generate images with similar or higher visual quality than other generative models. HoloGAN can be trained end-to-end from unlabelled 2D images only. Particularly, we do not require pose labels, 3D shapes, or multiple views of the same objects. This shows that HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.", "target": ["画像生成を行うGANで、3D構造を意識した生成を行わせる研究。これにより学習データにない角度での生成なども可能になる。元の画像から3D Convolutionにより立体表現を得て、それに回転などの変更を加えた上で2Dへのレンダリングを行う。どの角度から見ても同じ物体とされるよう、lossを設定している。"]} {"source": "Reinforcement learning (RL) algorithms have demonstrated promising results on complex tasks, yet often require impractical numbers of samples since they learn from scratch. Meta-RL aims to address this challenge by leveraging experience from previous tasks so as to more quickly solve new tasks. However, in practice, these algorithms generally also require large amounts of on-policy experience during the meta-training process, making them impractical for use in many problems. To this end, we propose to learn a reinforcement learning procedure in a federated way, where individual off-policy learners can solve the individual meta-training tasks, and then consolidate these solutions into a single meta-learner. Since the central meta-learner learns by imitating the solutions to the individual tasks, it can accommodate either the standard meta-RL problem setting or a hybrid setting where some or all tasks are provided with example demonstrations. The former results in an approach that can leverage policies learned for previous tasks without significant amounts of on-policy data during meta-training, whereas the latter is particularly useful in cases where demonstrations are easy for a person to provide. Across a number of continuous control meta-RL problems, we demonstrate significant improvements in meta-RL sample efficiency in comparison to prior work as well as the ability to scale to domains with visual observations.", "target": ["強化学習におけるメタラーニングについて、「メタ」側の学習時間を模倣学習で短縮する研究。様々なタスクに適合可能な初期値を得るには、複数タスクによる十分なメタ学習が必要であり、これはOn-policyの場合特に時間がかかる。そこで模倣学習+Off-policyにより高速な学習を行う手法を提案"]} {"source": "Learning useful representations with little or no supervision is a key challenge in artificial intelligence. We provide an in-depth review of recent advances in representation learning with a focus on autoencoder-based models. To organize these results we make use of meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features. In particular, we uncover three main mechanisms to enforce such properties, namely (i) regularizing the (approximate or aggregate) posterior distribution, (ii) factorizing the encoding and decoding distribution, or (iii) introducing a structured prior distribution. While there are some promising results, implicit or explicit supervision remains a key enabler and all current methods use strong inductive biases and modeling assumptions. 
Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream tasks, and how useful the representation is for this task.", "target": ["VAEを中心とした、表現学習の手法のまとめ。表現学習の目的はデータ間に存在するメタ的な構造(meta-prior)を抽出することがあるとし、これをDisentanglement(独立した因子に分解)、階層(葉=>枝=木のような属性構造)、Semi-Supervised、Clusteringの4つに分けている。"]} {"source": "Interest in larger-context neural machine translation, including document-level and multi-modal translation, has been growing. Multiple works have proposed new network architectures or evaluation schemes, but potentially helpful context is still sometimes ignored by larger-context translation models. In this paper, we propose a novel learning algorithm that explicitly encourages a neural translation model to take into account additional context using a multilevel pair-wise ranking loss. We evaluate the proposed learning algorithm with a transformer-based larger-context translation system on document-level translation. By comparing performance using actual and random contexts, we show that a model trained with the proposed algorithm is more sensitive to the additional context.", "target": ["機械翻訳において、コンテキストはあくまで単語というトークンの予測に使用されるが、これを文/データの予測にも有用なら採用するように矯正する手法の提案。単語の予測だと短期のコンテキストが採用されがちだが、文・データ全体に対する再現性を加味することで長期を意識させるようにする"]} {"source": "Entity Linking (EL) systems aim to automatically map mentions of an entity in text to the corresponding entity in a Knowledge Graph (KG). Degree of connectivity of an entity in the KG directly affects an EL system’s ability to correctly link mentions in text to the entity in KG. This causes many EL systems to perform well for entities well connected to other entities in KG, bringing into focus the role of KG density in EL. In this paper, we propose Entity Linking using Densified Knowledge Graphs (ELDEN). ELDEN is an EL system which first densifies the KG with co-occurrence statistics from a large text corpus, and then uses the densified KG to train entity embeddings. Entity similarity measured using these trained entity embeddings result in improved EL. ELDEN outperforms state-of-the-art EL system on benchmark datasets. Due to such densification, ELDEN performs well for sparsely connected entities in the KG too. ELDEN’s approach is simple, yet effective. We have made ELDEN’s code and data publicly available.", "target": ["Entity Linkingのタスクでは、当然Entity間のRelation(接続)が多いEntityほど高精度な推定が可能になる。逆に言えば接続が少ないEntityについては推定精度が下がることになるが、その問題を擬似的なEntityを経由した接続を行うことで解消したという研究。擬似的なEntityは、unigram/bigramの頻度を元にテキスト中から選出される。"]} {"source": "We propose a new objective, the counterfactual objective, unifying existing objectives for off-policy policy gradient algorithms in the continuing reinforcement learning (RL) setting. Compared to the commonly used excursion objective, which can be misleading about the performance of the target policy when deployed, our new objective better predicts such performance. We prove the Generalized Off-Policy Policy Gradient Theorem to compute the policy gradient of the counterfactual objective and use an emphatic approach to get an unbiased sample from this policy gradient, yielding the Generalized Off-Policy Actor-Critic (Geoff-PAC) algorithm. 
We demonstrate the merits of Geoff-PAC over existing algorithms in Mujoco robot simulation tasks, the first empirical success of emphatic algorithms in prevailing deep RL benchmarks.", "target": ["Off-Policy Policy Gradientでは実際の戦略(target policy)とは別に探索用のbehavior policyを用いて更新を行うが、実際の戦略が目指したいところ(期待値の最大化)と探索が目指したいところ(価値の最大化)はバッティングする可能性がある。そこで、両者のバランスを調整するパラメーターの導入を提案"]} {"source": "The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017). Meanwhile, research on learning reusable text representations has begun to explore sentence-level texts, with some sentence encoders seeing enthusiastic adoption. Accordingly, we extend the Word Embedding Association Test to measure bias in sentence encoders. We then test several sentence encoders, including state-of-the-art methods such as ELMo and BERT, for the social biases studied in prior work and two important biases that are difficult or impossible to test at the word level. We observe mixed results including suspicious patterns of sensitivity that suggest the test's assumptions may not hold in general. We conclude by proposing directions for future work on measuring bias in sentence encoders.", "target": ["文表現における偏見を検証する手法の提案(例えば「xxは賢い」という文について、xxに人種を入れた場合差が出るかなど)。既存の手法は単語ベースであったが(Word Embedding Association Test=WEAT)、これを文(Sentence=SEAT)に拡張している。文は、xx is yyのような単純なテンプレートを用いている。"]} {"source": "We've seen tremendous success of image generating models these years. Generating images through a neural network is usually pixel-based, which is fundamentally different from how humans create artwork using brushes. To imitate human drawing, interactions between the environment and the agent is required to allow trials. However, the environment is usually non-differentiable, leading to slow convergence and massive computation. In this paper we try to address the discrete nature of software environment with an intermediate, differentiable simulation. We present StrokeNet, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment. With this approach, our agent was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches in an unsupervised manner. Our primary contribution is the neural simulation of a real-world environment. Furthermore, the agent trained with the emulated environment is able to directly transfer its skills to real-world software.", "target": ["ブラシストロークによる文字の生成を学習する手法の研究。入力画像を一旦ブラシの種別/ストローク(位置の連続)の表現にし生成、という処理を再帰的に繰り返すことで元の画像をストロークにより再現することを学習する。"]} {"source": "Batch Normalization (BN) has become an out-of-box technique to improve deep network training. However, its effectiveness is limited for micro-batch training, i.e., each GPU typically has only 1-2 images for training, which is inevitable for many computer vision tasks, e.g., object detection and semantic segmentation, constrained by memory consumption. To address this issue, we propose Weight Standardization (WS) and Batch-Channel Normalization (BCN) to bring two success factors of BN into micro-batch training: 1) the smoothing effects on the loss landscape and 2) the ability to avoid harmful elimination singularities along the training trajectory. 
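The SEAT test described above plugs target and attribute terms into simple sentence templates, embeds the sentences, and reuses the WEAT effect-size statistic over those sentence embeddings. A small NumPy sketch of that statistic follows; the random vectors stand in for encoder outputs and are not real data.

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean cosine similarity to attribute set A minus to attribute set B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def effect_size(X, Y, A, B):
    # WEAT/SEAT effect size: difference of mean associations of the two target
    # sets, normalised by the pooled standard deviation of associations.
    s_x = [association(x, A, B) for x in X]
    s_y = [association(y, A, B) for y in Y]
    pooled = np.std(s_x + s_y, ddof=1)
    return (np.mean(s_x) - np.mean(s_y)) / pooled

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for sentence embeddings of templated sentences such as "X is smart."
    X, Y = rng.normal(size=(8, 512)), rng.normal(size=(8, 512))   # two target groups
    A, B = rng.normal(size=(8, 512)), rng.normal(size=(8, 512))   # two attribute groups
    print(effect_size(X, Y, A, B))
```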
WS standardizes the weights in convolutional layers to smooth the loss landscape by reducing the Lipschitz constants of the loss and the gradients; BCN combines batch and channel normalizations and leverages estimated statistics of the activations in convolutional layers to keep networks away from elimination singularities. We validate WS and BCN on comprehensive computer vision tasks, including image classification, object detection, instance segmentation, video recognition and semantic segmentation. All experimental results consistently show that WS and BCN improve micro-batch training significantly. Moreover, using WS and BCN with micro-batch training is even able to match or outperform the performances of BN with large-batch training.", "target": ["出力チャンネルごとに重みを正規化するWeight Standardizationの提案。Batch Normalizationの効果は当初提唱されていた共変量シフトの解消ではなく、勾配をなだらかにする方にあることが最近示されているが、そうならば後者をより強める手法を、ということで発明された。小バッチサイズでも有効な手法"]} {"source": "Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.", "target": ["Transformer (#329) で、Relation Extractionを行った研究。入力としては、対象A/対象B/本文/AとBの関係種別、をSeparatorを間に挟み結合して入れている。モデルはほぼ素のTransformerだが、転移学習時も言語モデルタスクを行うなど近年の工夫が取られている。"]} {"source": "Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. 
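Weight Standardization as described above normalizes each convolutional filter, over its input-channel and spatial dimensions, to zero mean and unit variance before it is applied. A minimal PyTorch sketch of such a layer (the class name and epsilon are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d whose weights are standardized per output channel at every forward pass."""
    def forward(self, x):
        w = self.weight                              # (out_ch, in_ch, kH, kW)
        flat = w.view(w.size(0), -1)
        mean = flat.mean(dim=1).view(-1, 1, 1, 1)    # per-filter mean
        std = flat.std(dim=1).view(-1, 1, 1, 1) + 1e-5  # per-filter std (eps for stability)
        return F.conv2d(x, (w - mean) / std, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

if __name__ == "__main__":
    conv = WSConv2d(3, 16, kernel_size=3, padding=1)
    y = conv(torch.randn(2, 3, 32, 32))
    print(y.shape)  # torch.Size([2, 16, 32, 32])
```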
Next, we make progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions.", "target": ["音声におけるAdversarial Exampleの研究。音声の場合画像に比べてノイズが顕著になるため、聞いてもわからない+空間に伝播しても誤認識させられることを目指している。このために、空間反響のシミュレーターを経由した(Adversarial入り)音声が、実音声の周波数マスク帯以内に入るよう学習している。"]} {"source": "We propose a novel transition-based algorithm that straightforwardly parses sentences from left to right by building n attachments, with n being the length of the input sentence. Similarly to the recent stack-pointer parser by Ma et al. (2018), we use the pointer network framework that, given a word, can directly point to a position from the sentence. However, our left-to-right approach is simpler than the original top-down stack-pointer parser (not requiring a stack) and reduces transition sequence length in half, from 2n-1 actions to n. This results in a quadratic non-projective parser that runs twice as fast as the original while achieving the best accuracy to date on the English PTB dataset (96.04% UAS, 94.43% LAS) among fully-supervised single-model dependency parsers, and improves over the former top-down transition system in the majority of languages tested.", "target": ["係り受け解析で、左から右に一発読むだけでツリーを作成するという手法の提案。Pointer Networkを利用し、位置iの単語の親はどれか?を直接推定していく(0の場合ROOTとする)。English PTB datasetで既存手法より精度が高いばかりでなく、2倍ほど速度が向上。"]} {"source": "In computer vision, virtually every state-of-the-art deep learning system is trained with data augmentation. In text classification, however, data augmentation is less widely practiced because it must be performed before training and risks introducing label noise. We augment the IMDB movie reviews dataset with examples generated by two families of techniques: random token perturbations introduced by Wei and Zou [2019] and backtranslation -- translating to a second language then back to English. In low resource environments, backtranslation generates significant improvement on top of the state of-the-art ULMFit model. A ULMFit model pretrained on wikitext103 and then fine-tuned on only 50 IMDB examples and 500 synthetic examples generated by backtranslation achieves 80.6% accuracy, an 8.1% improvement over the augmentation-free baseline with only 9 minutes of additional training time. Random token perturbations do not yield any improvements but incur equivalent computational cost. The benefits of training with backtranslated examples decreases with the size of the available training data. On the full dataset, neither augmentation technique improves upon ULMFit's state of the art performance. We address this by using backtranslations as a form of test time augmentation as well as ensembling ULMFit with other models, and achieve small improvements.", "target": ["Sentimentの分類(IMDB)で、事前学習による転移(ULMFit)とData Augmentationの効果を検証した研究。転移が有効なのはもちろんとして、AugmentationではBacktranslationがかなり有効との結果(ランダムなノイズ挿入は逆効果のよう)。転移+実データ数50(+Augmentation 500)で80%超の精度を達成。"]} {"source": "Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods. 
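Back-translation augmentation, as used in the study above, round-trips each labelled example through a pivot language and adds the paraphrase with the original label. The sketch below only shows the data-handling logic; the two translate functions are placeholders for whatever MT system is available and are not from the paper's code.

```python
def translate_en_to_pivot(text: str) -> str:
    raise NotImplementedError("plug in an English->pivot MT model here")

def translate_pivot_to_en(text: str) -> str:
    raise NotImplementedError("plug in a pivot->English MT model here")

def backtranslate_dataset(examples):
    """examples: list of (text, label). Returns originals plus paraphrased copies."""
    augmented = list(examples)
    for text, label in examples:
        paraphrase = translate_pivot_to_en(translate_en_to_pivot(text))
        if paraphrase.strip() and paraphrase != text:  # keep only non-trivial paraphrases
            augmented.append((paraphrase, label))
    return augmented
```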
In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD and give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD and maintain higher learning speed early in training at the same time. Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks. The implementation of the algorithm can be found at this https URL .", "target": ["Adamから徐々にSGDへと移行していく最適化手法の提案。既存の研究では大きすぎる学習率の抑制に焦点を置いたものが多かったが、小さすぎる学習率も影響がある点を指摘し、上限/下限を設定することでこれをクリップし所定範囲内で徐々にSGDへ移行するようにしている。"]} {"source": "In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations in order to adapt to new tasks and domains. MT-DNN extends the model proposed in Liu et al. (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (Devlin et al., 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7% (2.2% absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. The code and pre-trained models are publicly available at this https URL.", "target": ["BERTをマルチタスクでチューニングした形の研究。タスクは、文分類・文間類似度・文関係分類・QAの4種類。SNLI/SciTailといった、文関係の推論タスクの精度が素のBERTに比べて大幅に更新。"]} {"source": "We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text. Experiments demonstrate large performance gains on GLUE and new state of the art results on NER as well as constituency parsing benchmarks, consistent with the concurrently introduced BERT model. We also present a detailed analysis of a number of factors that contribute to effective pretraining, including data domain and size, model capacity, and variations on the cloze objective.", "target": ["Bi-directionalとTransformerを組み合わせた、事前学習モデルの提案。順方向と逆方向、2方向のTransformerを持ち、最終的には順/逆の情報をSelf-Attention+全結合で処理して予測を行う。固有表現認識/構文解析にて、BERTより精度が微増。"]} {"source": "Human perception is structured around objects which form the basis for our higher-level cognition and impressive systematic generalization abilities. Yet most work on representation learning focuses on feature learning without even considering multiple objects, or treats segmentation as an (often supervised) preprocessing step. Instead, we argue for the importance of learning to segment and represent objects jointly. 
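AdaBound, summarized above, clips Adam's per-parameter step size between a lower and an upper bound that both converge toward a single final learning rate, so the optimizer behaves like Adam early in training and like SGD late in training. A minimal NumPy sketch of one update step; the bound schedule and constants are illustrative and bias correction is omitted for brevity.

```python
import numpy as np

def adabound_step(theta, grad, m, v, t, lr=1e-3, final_lr=0.1,
                  beta1=0.9, beta2=0.999, gamma=1e-3, eps=1e-8):
    """One AdaBound-style update: Adam step sizes clipped into bounds that
    shrink toward final_lr, so updates gradually become SGD-like."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment
    # Dynamic bounds: wide early in training, converging to final_lr as t grows.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step_size = np.clip(lr / (np.sqrt(v) + eps), lower, upper)
    theta = theta - step_size * m
    return theta, m, v

if __name__ == "__main__":
    theta = np.array([1.0, -2.0])
    m, v = np.zeros_like(theta), np.zeros_like(theta)
    for t in range(1, 101):          # minimise f(x) = ||x||^2 as a toy example
        grad = 2 * theta
        theta, m, v = adabound_step(theta, grad, m, v, t)
    print(theta)                     # moves toward the optimum at the origin
```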
We demonstrate that, starting from the simple assumption that a scene is composed of multiple entities, it is possible to learn to segment images into interpretable objects with disentangled representations. Our method learns -- without supervision -- to inpaint occluded parts, and extrapolates to scenes with more objects and to unseen objects with novel feature combinations. We also show that, due to the use of iterative variational inference, our system is able to learn multi-modal posteriors for ambiguous inputs and extends naturally to sequences.", "target": ["複数物体が含まれる画像(入力)について、物体ごと段階的に生成(改善)を行うVAEの提案。物体ごとに潜在表現を推定し一気に生成、ではなく物体ごとパラメーターとマスクを生成=>未生成の部分を次の周回に渡す、という形で段階的に生成する(物体数(slot数)は事前に決める)。"]} {"source": "The ability to decompose scenes in terms of abstract building blocks is crucial for general intelligence. Where those basic building blocks share meaningful properties, interactions and other regularities across scenes, such decompositions can simplify reasoning and facilitate imagination of novel scenarios. In particular, representing perceptual observations in terms of entities should improve data efficiency and transfer performance on a wide range of tasks. Thus we need models capable of discovering useful decompositions of scenes by identifying units with such regularities and representing them in a common format. To address this problem, we have developed the Multi-Object Network (MONet). In this model, a VAE is trained end-to-end together with a recurrent attention network -- in a purely unsupervised manner -- to provide attention masks around, and reconstructions of, regions of images. We show that this model is capable of learning to decompose and represent challenging 3D scenes into semantically meaningful components, such as objects and background elements.", "target": ["複数の物体が混在する画像(入力)について、個別に認識して表現を獲得する手法の提案(教師なしでのセグメンテーション/セグメント内表現学習といった印象)。対象領域を選択(マスクを生成)し、対象箇所についてVAEで表現学習、という処理を再帰的に繰り返す構成となっている。"]} {"source": "Energy based models (EBMs) are appealing due to their generality and simplicity in likelihood modeling, but have been traditionally difficult to train. We present techniques to scale MCMC based EBM training on continuous neural networks, and we show its success on the high-dimensional data domains of ImageNet32x32, ImageNet128x128, CIFAR-10, and robotic hand trajectories, achieving better samples than other likelihood models and nearing the performance of contemporary GAN approaches, while covering all modes of the data. We highlight some unique capabilities of implicit generation such as compositionality and corrupt image reconstruction and inpainting. Finally, we show that EBMs are useful models across a wide variety of tasks, achieving state-of-the-art out-of-distribution classification, adversarially robust classification, state-of-the-art continual online class learning, and coherent long term predicted trajectory rollouts.", "target": ["生成モデルの学習は潜在表現を得るのに有効だが、学習時文字通り生成モデル(画像の生成など)が必要で計算コストが高い。そこで潜在表現に勾配を適用していくことでサンプリングを行うLangevin dynamicsを用いこれを解消している(勾配適用により最終的には実データ分布からのサンプリングと等価になる)"]} {"source": "Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. 
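Sampling from an energy-based model as described above uses Langevin dynamics: repeatedly step against the gradient of the energy and inject Gaussian noise, so samples drift toward low-energy (high-probability) regions. A minimal PyTorch sketch with a toy quadratic energy; in the paper the energy is a neural network, and the step sizes here are illustrative.

```python
import torch

def energy(x):
    # Toy energy whose minimum is at the origin; stands in for a learned E_theta(x).
    return (x ** 2).sum(dim=1)

def langevin_sample(x, n_steps=60, step_size=0.1, noise_scale=0.01):
    """Run Langevin dynamics: x <- x - (step/2) * dE/dx + noise."""
    x = x.clone().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - 0.5 * step_size * grad + noise_scale * torch.randn_like(x)
        x = x.detach().requires_grad_(True)   # cut the graph between steps
    return x.detach()

if __name__ == "__main__":
    init = torch.randn(16, 2) * 3.0           # start from a broad proposal
    samples = langevin_sample(init)
    print(samples.norm(dim=1).mean())         # samples concentrate near the low-energy region
```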
They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.", "target": ["強化学習におけるメタラーニングにおいて、戦略ではなく環境にフォーカスを当てた研究。既存の研究ではメタな戦略(新しい環境にすぐ適合する戦略)の作成を試みることが多いが、本研究ではメタな環境認識(具体的には潜在表現)を学習させ、それを基に行動をとるというOff-Policyなアプローチをとっている"]} {"source": "Gaussian processes (GPs) are flexible non-parametric models, with a capacity that grows with the available data. However, computational constraints with standard inference procedures have limited exact GPs to problems with fewer than about ten thousand training points, necessitating approximations for larger datasets. In this paper, we develop a scalable approach for exact GPs that leverages multi-GPU parallelization and methods like linear conjugate gradients, accessing the kernel matrix only through matrix multiplication. By partitioning and distributing kernel matrix multiplies, we demonstrate that an exact GP can be trained on over a million points, a task previously thought to be impossible with current computing hardware, in less than 2 hours. Moreover, our approach is generally applicable, without constraints to grid data or specific kernel classes. Enabled by this scalability, we perform the first-ever comparison of exact GPs against scalable GP approximations on datasets with 10^4−10^6 data points, showing dramatic performance improvements.", "target": ["Gaussian Processesについて、演算処理をマルチGPUに最適化(=並列+行列演算)した研究。これにより、通常数千点へのフィッティングが限界のところ百万まで拡大できたという。GPyTorchという形で実装も公開されている。"]} {"source": "We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers. We show that this is suboptimal as the normalization layers tend to ``wash away'' semantic information. To address the issue, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned transformation. Experiments on several challenging datasets demonstrate the advantage of the proposed method over existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows user control over both semantic and style. Code is available at this https URL .", "target": ["セグメンテーションによる条件付けをした上での画像生成を行う研究。既存のネットワークでは正規化のレイヤを通すと条件付けの情報を(正規化により)消してしまう傾向があった。このため、事前にマスク単位で係数/バイアス(=特徴)を計算しておき、正規化後に乗せるという形をとっている。"]} {"source": "Deep neural networks are widely used for nonlinear function approximation with applications ranging from computer vision to control.
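Spatially-adaptive normalization as described above keeps the semantic layout from being washed out: activations are normalized as usual and then modulated with per-pixel scale and bias maps predicted from the segmentation mask. A minimal PyTorch sketch follows; layer sizes and names are illustrative, and parameter-free instance normalization is used here only to keep the sketch independent of batch statistics (the paper uses batch normalization).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Normalize activations, then apply spatially-varying gamma/beta predicted from a segmentation map."""
    def __init__(self, channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)   # parameter-free normalization
        self.shared = nn.Sequential(nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)   # per-pixel scale map
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)    # per-pixel bias map

    def forward(self, x, segmap):
        # Resize the layout to the feature resolution, predict modulation maps,
        # and apply them after normalization.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

if __name__ == "__main__":
    layer = SPADE(channels=64, label_channels=10)
    x = torch.randn(1, 64, 32, 32)
    segmap = torch.randn(1, 10, 256, 256).softmax(dim=1)   # stand-in for a one-hot layout
    print(layer(x, segmap).shape)  # torch.Size([1, 64, 32, 32])
```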
Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This article surveys methods that have emerged recently for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. We discuss fundamental differences and connections between existing algorithms. In addition, we provide pedagogical implementations of existing methods and compare them on a set of benchmark problems.", "target": ["DNNが意図した振る舞いをするかどうか検証するための手法のまとめ。入力=>出力への到達過程の検証(Reachability)、最適化において(線形の)制約をかける(Optimization)、意図しない出力を生む入力の探索(Search)、という3つの種別に分けて解説されている。具体的な疑似コードもあり実装も公開されている"]} {"source": "In this paper we develop a new perspective on generalization of neural networks by proposing and investigating the concept of a neural network stiffness. We measure how stiff a network is by looking at how a small gradient step in the network's parameters on one example affects the loss on another example. Higher stiffness suggests that a network is learning features that generalize. In particular, we study how stiffness depends on 1) class membership, 2) distance between data points in the input space, 3) training iteration, and 4) learning rate. We present experiments on MNIST, FASHION MNIST, and CIFAR-10/100 using fully-connected and convolutional neural networks, as well as on a transformer-based NLP model. We demonstrate the connection between stiffness and generalization, and observe its dependence on learning rate. When training on CIFAR-100, the stiffness matrix exhibits a coarse-grained behavior indicative of the model's awareness of super-class membership. In addition, we measure how stiffness between two data points depends on their mutual input-space distance, and establish the concept of a dynamical critical length -- a distance below which a parameter update based on a data point influences its neighbors.", "target": ["あるデータのlossによる更新が、他のデータのlossにどのような影響を与えるのかを調べることで汎化性能が調査できるとした研究。同クラス内のデータであればクラス内の、異なるクラス間のデータであればクラス間の相関を知ることができる。学習率が高い方が離れたデータ間の相関を誘導できるとのこと"]} {"source": "While most previous work has focused on different pretraining objectives and architectures for transfer learning, we ask how to best adapt the pretrained model to a given target task. We focus on the two most common forms of adaptation, feature extraction (where the pretrained weights are frozen), and directly fine-tuning the pretrained model. Our empirical results across diverse NLP tasks with two state-of-the-art models show that the relative performance of fine-tuning vs. feature extraction depends on the similarity of the pretraining and target tasks. We explore possible explanations for this finding and provide a set of adaptation guidelines for the NLP practitioner.", "target": ["ELMo/BERTといった事前学習済みモデルを有効に使うための方法を、様々な実験から検証した資料。重みを固定し特徴抽出機として使う/Fine Tuningする、いずれもほぼ同様の結果だが文関係の推論においてはELMoは固定、BERTはFine Tuneが良いとの結果。これはLSTMの特性(関係認識に弱い)に由来する可能性有"]} {"source": "Autonomous agents that must exhibit flexible and broad capabilities will need to be equipped with large repertoires of skills. Defining each skill with a manually-designed reward function limits this repertoire and imposes a manual engineering burden. Self-supervised agents that set their own goals can automate this process, but designing appropriate goal setting objectives can be difficult, and often involves heuristic design decisions. 
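Stiffness as defined above asks whether a gradient step on one example also reduces the loss on another; a common concrete form is the sign or cosine similarity of the two per-example gradients. A minimal PyTorch sketch on a toy classifier; the model, data, and loss are stand-ins, not the paper's setup.

```python
import torch
import torch.nn as nn

def flat_grad(loss, params):
    # Gradient of a single-example loss, flattened into one vector.
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def stiffness(model, loss_fn, x1, y1, x2, y2):
    """Cosine similarity between the loss gradients of two examples; positive
    values mean a step that helps one example should also help the other."""
    params = [p for p in model.parameters() if p.requires_grad]
    g1 = flat_grad(loss_fn(model(x1), y1), params)
    g2 = flat_grad(loss_fn(model(x2), y2), params)
    return torch.nn.functional.cosine_similarity(g1, g2, dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
    loss_fn = nn.CrossEntropyLoss()
    x1, x2 = torch.randn(1, 20), torch.randn(1, 20)
    y1, y2 = torch.tensor([1]), torch.tensor([3])
    print(float(stiffness(model, loss_fn, x1, y1, x2, y2)))
```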
In this paper, we propose a formal exploration objective for goal-reaching policies that maximizes state coverage. We show that this objective is equivalent to maximizing goal reaching performance together with the entropy of the goal distribution, where goals correspond to full state observations. To instantiate this principle, we present an algorithm called Skew-Fit for learning a maximum-entropy goal distributions. We prove that, under regularity conditions, Skew-Fit converges to a uniform distribution over the set of valid states, even when we do not know this set beforehand. Our experiments show that combining Skew-Fit for learning goal distributions with existing goal-reaching methods outperforms a variety of prior methods on open-sourced visual goal-reaching tasks. Moreover, we demonstrate that Skew-Fit enables a real-world robot to learn to open a door, entirely from scratch, from pixels, and without any manually-designed reward function.", "target": ["報酬関数を自ら推定できるようにする研究。端的には目的達成に重要な状態に報酬を振りたいが、そのためには全状態を検証する必要がありこれは困難。そこで、Replay Bufferのうち到達確率が低いものを重点的に見るようにすることで効率を上げている。"]} {"source": "Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work, we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful `explanations' for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do. Code for all experiments is available at this https URL.", "target": ["自然言語処理においてAttentionはよくモデルの判断根拠として用いられるが、本当に説明になっているのかを検証した研究。結果として、GradientベースのスコアとAttentionは乖離があり、またAttentionの分布が異なるよう変更しても予測結果を維持することができることを確認"]} {"source": "Touch sensing is widely acknowledged to be important for dexterous robotic manipulation, but exploiting tactile sensing for continuous, non-prehensile manipulation is challenging. General purpose control techniques that are able to effectively leverage tactile sensing as well as accurate physics models of contacts and forces remain largely elusive, and it is unclear how to even specify a desired behavior in terms of tactile percepts. In this paper, we take a step towards addressing these issues by combining high-resolution tactile sensing with data-driven modeling using deep neural network dynamics models. We propose deep tactile MPC, a framework for learning to perform tactile servoing from raw tactile sensor inputs, without manual supervision. We show that this method enables a robot equipped with a GelSight-style tactile sensor to manipulate a ball, analog stick, and 20-sided die, learning from unsupervised autonomous interaction and then using the learned tactile predictive model to reposition each object to user-specified configurations, indicated by a goal tactile reading. 
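Skew-Fit, as summarized above, reweights goals sampled from the replay buffer by their estimated density raised to a negative power, so rarely visited states are proposed as goals more often and the goal distribution spreads toward uniform state coverage. A minimal NumPy sketch of the skewed sampling weights; the density values here are stand-ins for the learned generative model's estimates.

```python
import numpy as np

def skewed_weights(densities, alpha=-1.0):
    """Skew-Fit-style importance weights: w_i proportional to p(s_i)^alpha with
    alpha < 0, so low-density (rarely seen) states are sampled as goals more often."""
    w = np.asarray(densities, dtype=float) ** alpha
    return w / w.sum()

if __name__ == "__main__":
    # Stand-in densities of replay-buffer states under the current goal model:
    # common states have high density, rare states have low density.
    densities = np.array([0.50, 0.30, 0.15, 0.04, 0.01])
    weights = skewed_weights(densities, alpha=-1.0)
    print(weights.round(3))          # rare states get most of the sampling mass
    rng = np.random.default_rng(0)
    goal_idx = rng.choice(len(densities), size=10, p=weights)
    print(goal_idx)                  # indices of states proposed as goals
```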
Videos, visualizations and the code are available here: this https URL", "target": ["DNNを用いて、適切な触覚に基づく行動を行えるようにする研究(卵をつかむのに握りつぶしちゃった!ということにならないようにするなど)。モデルベースの手法を使用しており、現在の観測/モデルの予測をベースに、候補となる行動系列から最も目的の触覚に近づくと見込まれる系列の最初の行動を取る。"]} {"source": "High-quality computer vision models typically address the problem of understanding the general distribution of real-world images. However, most cameras observe only a very small fraction of this distribution. This offers the possibility of achieving more efficient inference by specializing compact, low-cost models to the specific distribution of frames observed by a single camera. In this paper, we employ the technique of model distillation (supervising a low-cost student model using the output of a high-cost teacher) to specialize accurate, low-cost semantic segmentation models to a target video stream. Rather than learn a specialized student model on offline data from the video stream, we train the student in an online fashion on the live video, intermittently running the teacher to provide a target for learning. Online model distillation yields semantic segmentation models that closely approximate their Mask R-CNN teacher with 7 to 17\\times lower inference runtime cost (11 to 26\\times in FLOPs), even when the target video's distribution is non-stationary. Our method requires no offline pretraining on the target video stream, achieves higher accuracy and lower cost than solutions based on flow or video object segmentation, and can exhibit better temporal stability than the original teacher. We also provide a new video dataset for evaluating the efficiency of inference over long running video streams.", "target": ["セグメンテーションの演算効率向上を狙った研究。 非常に小さなネットワーク(Just-in-Time Network)を常に蒸留でオンライン学習し、スポーツや監視カメラなど特定シーンに対し特化させる。精度は教師(Mask-RCNN)とほぼ同等で、学習を含めた演算効率は5倍向上"]} {"source": "Multimodal language models attempt to incorporate non-linguistic features for the language modeling task. In this work, we extend a standard recurrent neural network (RNN) language model with features derived from videos. We train our models on data that is two orders-of-magnitude bigger than datasets used in prior work. We perform a thorough exploration of model architectures for combining visual and text features. Our experiments on two corpora (YouCookII and 20bn-something-something-v2) show that the best performing architecture consists of middle fusion of visual and text features, yielding over 25% relative improvement in perplexity. We report analysis that provides insights into why our multimodal language model improves upon a standard RNN language model.", "target": ["画像の情報もミックスして言語モデルを構築する研究。モデル自体はシンプルなRNNで、画像情報を入れる位置(複数層のLSTMの前/中/後)、また自然言語/画像ベクトルのマージ方法(線形結合/言語との相関に応じ重みをつけ画像ベクトルを結合)の組み合わせを検証している。Middle(中)が一番良好な精度。"]} {"source": "Modeling generative process of growing graphs has wide applications in social networks and recommendation systems, where cold start problem leads to new nodes isolated from existing graph. Despite the emerging literature in learning graph representation and graph generation, most of them can not handle isolated new nodes without nontrivial modifications. The challenge arises due to the fact that learning to generate representations for nodes in observed graph relies heavily on topological features, whereas for new nodes only node attributes are available. 
Here we propose a unified generative graph convolutional network that learns node representations for all nodes adaptively in a generative model framework, by sampling graph generation sequences constructed from observed graph data. We optimize over a variational lower bound that consists of a graph reconstruction term and an adaptive Kullback-Leibler divergence regularization term. We demonstrate the superior performance of our approach on several benchmark citation network datasets.", "target": ["グラフにおいて「新しいノードをどう扱うか」という問題にチャレンジした研究。SNSやアイテム推薦における新規アイテム問題ではこの点が顕著だが、既存の研究では固定的なグラフを扱うのが大半だった。VAEをベースとして、順次ノードが追加されていく時系列の生成モデルという形でアプローチをしている"]} {"source": "We develop a mean field theory for batch normalization in fully-connected feedforward neural networks. In so doing, we provide a precise characterization of signal propagation and gradient backpropagation in wide batch-normalized networks at initialization. Our theory shows that gradient signals grow exponentially in depth and that these exploding gradients cannot be eliminated by tuning the initial weight variances or by adjusting the nonlinear activation function. Indeed, batch normalization itself is the cause of gradient explosion. As a result, vanilla batch-normalized networks without skip connections are not trainable at large depths for common initialization schemes, a prediction that we verify with a variety of empirical simulations. While gradient explosion cannot be eliminated, it can be reduced by tuning the network close to the linear regime, which improves the trainability of deep batch-normalized networks without residual connections. Finally, we investigate the learning dynamics of batch-normalized networks and observe that after a single step of optimization the networks achieve a relatively stable equilibrium in which gradients have dramatically smaller dynamic range. Our theory leverages Laplace, Fourier, and Gegenbauer transforms and we derive new identities that may be of independent interest.", "target": ["バッチ正規化(BN)の効果について理論と実験双方から検証した研究。層を深くすることにより発生する勾配爆発は、BN単体では防げないことを示唆(Skip- connectionの併用が必要)。ただ、ネットワークの挙動を線形に近づけることで学習しやすくするなどのメリットがあるとのこと。"]} {"source": "Domain adaptation addresses the common problem when the target distribution generating our test data drifts from the source (training) distribution. While absent assumptions, domain adaptation is impossible, strict conditions, e.g. covariate or label shift, enable principled algorithms. Recently-proposed domain-adversarial approaches consist of aligning source and target encodings, often motivating this approach as minimizing two (of three) terms in a theoretical bound on target error. Unfortunately, this minimization can cause arbitrary increases in the third term, e.g. they can break down under shifting label distributions. We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms. Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets.", "target": ["ドメイン適応では、潜在空間を通じた転移がよく用いられる。しかし潜在表現を完全に転写が可能な空間としてしまうと転移できる範囲に限界がある(ソースにはあるけどターゲットにはない、という状況が許容されずソースにあるものはターゲットに必ずなくてはならない)。そのため、この条件を緩和した研究"]} {"source": "Hyperparameter optimization can be formulated as a bilevel optimization problem, where the optimal parameters on the training set depend on the hyperparameters. 
We aim to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases. We show how to construct scalable best-response approximations for neural networks by modeling the best-response as a single network whose hidden units are gated conditionally on the regularizer. We justify this approximation by showing the exact best-response for a shallow linear network with L2-regularized Jacobian can be represented by a similar gating mechanism. We fit this model using a gradient-based hyperparameter optimization algorithm which alternates between approximating the best-response around the current hyperparameters and optimizing the hyperparameters using the approximate best-response function. Unlike other gradient-based approaches, we do not require differentiating the training loss with respect to the hyperparameters, allowing us to tune discrete hyperparameters, data augmentation hyperparameters, and dropout probabilities. Because the hyperparameters are adapted online, our approach discovers hyperparameter schedules that can outperform fixed hyperparameter values. Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems. We call our networks, which update their own hyperparameters online during training, Self-Tuning Networks (STNs).", "target": ["ハイパーパラメーターを自動チューニングするという研究。実現にはもちろん学習セットに適合済みのネットワークが必要だが(その上でvalidationを使いチューニングするため)、その出力を近似する。手法としては、通常の重み/biasに加えハイパーパラメーターでスケールされる重み/biasを加え学習を行う。"]} {"source": "Acquiring a diverse repertoire of general-purpose skills remains an open challenge for robotics. In this work, we propose self-supervising control on top of human teleoperated play data as a way to scale up skill learning. Play has two properties that make it attractive compared to conventional task demonstrations. Play is cheap, as it can be collected in large quantities quickly without task segmenting, labeling, or resetting to an initial state. Play is naturally rich, covering ~4x more interaction space than task demonstrations for the same amount of collection time. To learn control from play, we introduce Play-LMP, a self-supervised method that learns to organize play behaviors in a latent space, then reuse them at test time to achieve specific goals. Combining self-supervised control with a diverse play dataset shifts the focus of skill learning from a narrow and discrete set of tasks to the full continuum of behaviors available in an environment. We find that this combination generalizes well empirically---after self-supervising on unlabeled play, our method substantially outperforms individual expert-trained policies on 18 difficult user-specified visual manipulation tasks in a simulated robotic tabletop environment. We additionally find that play-supervised models, unlike their expert-trained counterparts, are more robust to perturbations and exhibit retrying-till-success behaviors. Finally, we find that our agent organizes its latent plan space around functional tasks, despite never being trained with task labels. Videos, code and data are available at this http URL", "target": ["強化学習においてデモの中から汎用的な動作を認識し、使いまわせるようにする研究。デモの中から任意の長さのパートを切り出し、それがどんな(汎用)動作に該当するかを予測する。パートのどの位置でも動作の一部のため、位置により予測が変化しないよう潜在空間上の距離を小さくする形で学習する。"]} {"source": "Generative models that can model and predict sequences of future events can, in principle, learn to capture complex real-world phenomena, such as physical interactions. 
However, a central challenge in video prediction is that the future is highly uncertain: a sequence of past observations of events can imply many possible futures. Although a number of recent works have studied probabilistic models that can represent uncertain futures, such models are either extremely expensive computationally as in the case of pixel-level autoregressive models, or do not directly optimize the likelihood of the data. To our knowledge, our work is the first to propose multi-frame video prediction with normalizing flows, which allows for direct optimization of the data likelihood, and produces high-quality stochastic predictions. We describe an approach for modeling the latent space dynamics, and demonstrate that flow-based generative models offer a viable and competitive approach to generative modelling of video.", "target": ["厳密な潜在変数の推定が可能なGlow (#828) を、動画に適用した研究。タイムステップごと、またスケールごとに潜在表現を持たせるという工夫を行なっている。VAEをベースとしたSOTAの手法とほぼ同精度で、かつ高速な推定が行えたとのこと。"]} {"source": "We introduce \\alpha-Rank, a principled evolutionary dynamics methodology for the evaluation and ranking of agents in large-scale multi-agent interactions, grounded in a novel dynamical game-theoretic solution concept called Markov-Conley chains (MCCs). The approach leverages continuous- and discrete-time evolutionary dynamical systems applied to empirical games, and scales tractably in the number of agents, the type of interactions, and the type of empirical games (symmetric and asymmetric). Current models are fundamentally limited in one or more of these dimensions and are not guaranteed to converge to the desired game-theoretic solution concept (typically the Nash equilibrium). \\alpha-Rank provides a ranking over the set of agents under evaluation and provides insights into their strengths, weaknesses, and long-term dynamics. This is a consequence of the links we establish to the MCC solution concept when the underlying evolutionary model's ranking-intensity parameter, \\alpha, is chosen to be large, which exactly forms the basis of \\alpha-Rank. In contrast to the Nash equilibrium, which is a static concept based on fixed points, MCCs are a dynamical solution concept based on the Markov chain formalism, Conley's Fundamental Theorem of Dynamical Systems, and the core ingredients of dynamical systems: fixed points, recurrent sets, periodic orbits, and limit cycles. \\alpha-Rank runs in polynomial time with respect to the total number of pure strategy profiles, whereas computing a Nash equilibrium for a general-sum game is known to be intractable. We introduce proofs that not only provide a unifying perspective of existing continuous- and discrete-time evolutionary evaluation models, but also reveal the formal underpinnings of the \\alpha-Rank methodology. We empirically validate the method in several domains including AlphaGo, AlphaZero, MuJoCo Soccer, and Poker.", "target": ["マルチエージェントの戦略評価についての研究。評価はナッシュ均衡がセオリーだが、計算が難しく動的な環境になじまない。そこで、まず連続した環境の動きをConley’s Theorem(連続と接続点)で定義、連続箇所をマルコフ連鎖で近似(MCC)、均衡を複数エージェントの行動から近似(Micro-model)という流れ"]} {"source": "Model-free reinforcement learning (RL) can be used to learn effective policies for complex tasks, such as Atari games, even from image observations. However, this typically requires very large amounts of interaction -- substantially more, in fact, than a human would need to learn the same games. How can people learn so quickly? Part of the answer may be that people can learn how the game works and predict which actions will lead to desirable outcomes. 
In this paper, we explore how video prediction models can similarly enable agents to solve Atari games with fewer interactions than model-free methods. We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting. Our experiments evaluate SimPLe on a range of Atari games in low data regime of 100k interactions between the agent and the environment, which corresponds to two hours of real-time play. In most games SimPLe outperforms state-of-the-art model-free algorithms, in some games by over an order of magnitude.", "target": ["モデルベースの手法で、Atariのゲームを攻略してみたという研究。ベースとしているのはモデルベースとモデルフリーを併用するDyna。環境のモデル化にはCNNのEncoder/Decoderが使用されており、Decode時にActionによる画面遷移表現(離散)で条件付けて生成を行う。多くのゲームでPPO/Rainbowを上回る効率"]} {"source": "Gaining a better understanding of how and what machine learning systems learn is important to increase confidence in their decisions and catalyze further research. In this paper, we analyze the predictions made by a specific type of recurrent neural network, mixture density RNNs (MD-RNNs). These networks learn to model predictions as a combination of multiple Gaussian distributions, making them particularly interesting for problems where a sequence of inputs may lead to several distinct future possibilities. An example is learning internal models of an environment, where different events may or may not occur, but where the average over different events is not meaningful. By analyzing the predictions made by trained MD-RNNs, we find that their different Gaussian components have two complementary roles: 1) Separately modeling different stochastic events and 2) Separately modeling scenarios governed by different rules. These findings increase our understanding of what is learned by predictive MD-RNNs, and open up new research directions for further understanding how we can benefit from their self-organizing model decomposition.", "target": ["強化学習で環境の学習に使用されたりするMixture Density RNN(MD-RNNs)についての研究。MD-RNNsではRNNの潜在表現から混合分布のパラメーター(=複数の平均μ/分散σ、その他のパラメータπ)を推定する形式をとっているが、これにより異なるイベント/ルールに基づく法則を個別に学習できているとしている。"]} {"source": "We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet. Across thousands of experiments, we demonstrate that complex techniques (Molchanov et al., 2017; Louizos et al., 2017b) shown to yield high compression rates on smaller datasets perform inconsistently, and that simple magnitude pruning approaches achieve comparable or better results. Additionally, we replicate the experiments performed by (Frankle & Carbin, 2018) and (Liu et al., 2018) at scale and show that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization. Together, these results highlight the need for large-scale benchmarks in the field of model compression. 
We open-source our code, top performing model checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work on compression and sparsification.", "target": ["DNNの重みを疎にする手法について、どの手法が有効かを大規模なモデル(Transformer/ResNet)で検証した研究(疎=重みが0に近いパラメーターが多いと計算を簡略化できる)。結果としては、単純に重みのMagnitudeで枝刈りする手法が良好であり、枝刈りで得られる構造は素の状態からは得難いとのこと。"]} {"source": "The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present code2seq: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding. We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as general state-of-the-art NMT models. An interactive online demo of our model is available at http://code2seq.org. Our code, data and trained models are available at http://github.com/tech-srl/code2seq.", "target": ["コードからメソッド名を推測、また要約を行う研究。ソースコードは文字列として処理するのでなく、ASTという構造木にしてEncodeを行なっている(サブツリーのEncode結果を平均して潜在表現を作成する)。なおASTの作成はパーサーを使っているよう(アノテーションした節がない)"]} {"source": "Dense video captioning aims to generate text descriptions for all events in an untrimmed video. This involves both detecting and describing events. Therefore, all previous methods on dense video captioning tackle this problem by building two models, i.e. an event proposal and a captioning model, for these two sub-problems. The models are either trained separately or in alternation. This prevents direct influence of the language description to the event proposal, which is important for generating accurate descriptions. To address this problem, we propose an end-to-end transformer model for dense video captioning. The encoder encodes the video into appropriate representations. The proposal decoder decodes from the encoding with different anchors to form video event proposals. The captioning decoder employs a masking network to restrict its attention to the proposal event over the encoding feature. This masking network converts the event proposal to a differentiable mask, which ensures the consistency between the proposal and captioning during training. In addition, our model employs a self-attention mechanism, which enables the use of efficient non-recurrent structure during encoding and leads to performance improvements. We demonstrate the effectiveness of this end-to-end model on ActivityNet Captions and YouCookII datasets, where we achieved 10.12 and 6.58 METEOR score, respectively.", "target": ["Transformerベースのモデルで、End2Endのビデオキャプションを実現したという研究。Encoder側は動画中からキャプション対象のイベント(時間範囲)を抽出し、Decoder側はイベントにマスクをかけた上で文の生成を行なっていく。"]} {"source": "We present a novel image editing system that generates images as the user provides free-form mask, sketch and color as an input. Our system consists of an end-to-end trainable convolutional network. Contrary to the existing methods, our system wholly utilizes free-form user input with color and shape. 
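Note on the sparsity comparison record above: the simple baseline it favors is magnitude pruning. A minimal NumPy sketch of that recipe follows; the one-shot global threshold shown here is a generic formulation for illustration, not the gradual pruning schedule evaluated in the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes.

    weights: dict of name -> ndarray; returns pruned copies.
    """
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    threshold = np.quantile(all_mags, sparsity)  # global magnitude cut-off
    return {name: np.where(np.abs(w) >= threshold, w, 0.0)
            for name, w in weights.items()}

# Toy usage: prune 80% of the weights of two random layers.
rng = np.random.default_rng(0)
params = {"layer1": rng.normal(size=(64, 64)), "layer2": rng.normal(size=(64, 10))}
pruned = magnitude_prune(params, sparsity=0.8)
kept = sum(int(np.count_nonzero(w)) for w in pruned.values())
total = sum(w.size for w in pruned.values())
print(f"kept {kept}/{total} weights")
```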
This allows the system to respond to the user's sketch and color input, using it as a guideline to generate an image. In our particular work, we trained the network with an additional style loss, which made it possible to generate realistic results, despite large portions of the image being removed. Our proposed network architecture SC-FEGAN is well suited to generating high-quality synthetic images using intuitive user inputs.", "target": ["写真に絵を描いたら、その通りに写真を修正してくれるSC-FEGANを提案。データの入れ方に工夫がある。ランダムにマスクをかけてその部分にスケッチ画(Edge Detectで生成)と粗い色付きに変換したもの、+ノイズを乗せた画像、それから生成した画像と、真の画像の3つを使ってGANを構成する。"]} {"source": "A growing number of state-of-the-art transfer learning methods employ language models pretrained on large generic corpora. In this paper we present a conceptually simple and effective transfer learning approach that addresses the problem of catastrophic forgetting. Specifically, we combine the task-specific optimization function with an auxiliary language model objective, which is adjusted during the training process. This preserves language regularities captured by language models, while enabling sufficient adaptation for solving the target task. Our method does not require pretraining or finetuning separate components of the network and we train our models end-to-end in a single step. We present results on a variety of challenging affective and text classification tasks, surpassing well established transfer learning methods with greater level of complexity.", "target": ["言語モデルを転移させる、シンプルな手法(SiATL)の提案。最初に言語モデルを学習。その後目的のタスクを学習させる際は、上に分類器などを積むとともに言語モデルの学習内容を忘れないよう言語モデルの目的関数も組み合わせる。細かい学習率の調整などが不要で手軽に使える"]} {"source": "Fine-Grained Named Entity Recognition (FG-NER) is critical for many NLP applications. While classical named entity recognition (NER) has attracted a substantial amount of research, FG-NER is still an open research domain. The current state-of-the-art (SOTA) model for FG-NER relies heavily on manual efforts for building a dictionary and designing hand-crafted features. The end-to-end framework which achieved the SOTA result for NER did not get the competitive result compared to SOTA model for FG-NER. In this paper, we investigate how effective multi-task learning approaches are in an end-to-end framework for FG-NER in different aspects. Our experiments show that using multi-task learning approaches with contextualized word representation can help an end-to-end neural network model achieve SOTA results without using any additional manual effort for creating data and designing features.", "target": ["認識対象のカテゴリが数百などに及ぶFine-Grained NERにおいて、ELMoなどの文脈潜在表現やマルチタスクを導入し、どんな表現・タスク・ネットワーク構成が有効かを検証した研究。ELMoを利用し、タスクを階層的に積んでタスクごとの言語モデルタスクを同時に解かせるモデルが有効だったという結果。"]} {"source": "Reinforcement learning is a promising framework for solving control problems, but its use in practical situations is hampered by the fact that reward functions are often difficult to engineer. Specifying goals and tasks for autonomous machines, such as robots, is a significant challenge: conventionally, reward functions and goal states have been used to communicate objectives. But people can communicate objectives to each other simply by describing or demonstrating them. How can we build learning algorithms that will allow us to tell machines what we want them to do? In this work, we investigate the problem of grounding language commands as reward functions using inverse reinforcement learning, and argue that language-conditioned rewards are more transferable than language-conditioned policies to new environments. 
We propose language-conditioned reward learning (LC-RL), which grounds language commands as a reward function represented by a deep neural network. We demonstrate that our model learns rewards that transfer to novel tasks and environments on realistic, high-dimensional visual environments with natural language commands, whereas directly learning a language-conditioned policy leads to poor performance.", "target": ["逆強化学習を行う際に、推定対象である報酬関数に言語による条件付けを導入した研究(端的には、指示通り動けたら報酬が与えられる)。言語による指示は様々なタスクで使えるため、報酬の転移性が高くなるとのこと。手法はMax Entropyベースで、言語指示はLSTMで処理し画像側ベクトルと内積を取っている"]} {"source": "A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks. Two distinct research paradigms have studied this question. Meta-learning views this problem as learning a prior over model parameters that is amenable for fast adaptation on a new task, but typically assumes the set of tasks are available together as a batch. In contrast, online (regret based) learning considers a sequential setting in which problems are revealed one after the other, but conventionally trains only a single model without any task-specific adaptation. This work introduces an online meta-learning setting, which merges ideas from both the aforementioned paradigms to better capture the spirit and practice of continual lifelong learning. We propose the follow-the-meta-leader algorithm, which extends the MAML algorithm to this setting. Theoretically, this work provides an O(log T) regret guarantee with only one additional higher order smoothness assumption in comparison to the standard online setting. Our experimental evaluation on three different large-scale tasks suggests that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.", "target": ["連続的なタスクに対し後悔を最小化するOnline Learning(具体的には各時刻のlossを合計したものを最小化する。各時刻の重み調整が適切なら総和が小さくなる)と、メタラーニングを組み合わせた手法の提案。タスク開始時に重みを適したものに変換し(f_t)、タスクに応じた学習をする(U_t)ようにしている。"]} {"source": "End-to-end optimization has achieved state-of-the-art performance on many specific problems, but there is no straight-forward way to combine pretrained models for new problems. Here, we explore improving modularity by learning a post-hoc interface between two existing models to solve a new task. Specifically, we take inspiration from neural machine translation, and cast the challenging problem of cross-modal domain transfer as unsupervised translation between the latent spaces of pretrained deep generative models. By abstracting away the data representation, we demonstrate that it is possible to transfer across different modalities (e.g., image-to-audio) and even different types of generative models (e.g., VAE-to-GAN). We compare to state-of-the-art techniques and find that a straight-forward variational autoencoder is able to best bridge the two generative models through learning a shared latent space. We can further impose supervised alignment of attributes in both domains with a classifier in the shared latent space. Through qualitative and quantitative evaluations, we demonstrate that locality and semantic alignment are preserved through the transfer process, as indicated by high transfer accuracies and smooth interpolations within a class. 
Finally, we show this modular structure speeds up training of new interface models by several orders of magnitude by decoupling it from expensive retraining of base generative models.", "target": ["ドメインの異なる潜在表現(音声と画像など)を転移させる研究。双方の潜在表現を共通の潜在表現から生成するためのブリッジ用VAEを学習するという形。このVAEは、各ドメインの潜在表現の復元に加え、それらの分類、また意味が同一の場合(「4」という画像と音声など)の距離最小化から学習する。"]} {"source": "We consider the problem of learning from sparse and underspecified rewards, where an agent receives a complex input, such as a natural language instruction, and needs to generate a complex response, such as an action sequence, while only receiving binary success-failure feedback. Such success-failure rewards are often underspecified: they do not distinguish between purposeful and accidental success. Generalization from underspecified rewards hinges on discounting spurious trajectories that attain accidental success, while learning from sparse feedback requires effective exploration. We address exploration by using a mode covering direction of KL divergence to collect a diverse set of successful trajectories, followed by a mode seeking KL divergence to train a robust policy. We propose Meta Reward Learning (MeRL) to construct an auxiliary reward function that provides more refined feedback for learning. The parameters of the auxiliary reward function are optimized with respect to the validation performance of a trained policy. The MeRL approach outperforms our alternative reward learning technique based on Bayesian Optimization, and achieves the state-of-the-art on weakly-supervised semantic parsing. It improves previous work by 1.2% and 2.4% on WikiTableQuestions and WikiSQL datasets respectively.", "target": ["強化学習において、行動の正当性を高めるための手法の提案。「偶発的な成功」は予想しない行動の獲得につながるとして、それを抑制するために学習/検証のデータセットを分け、学習側で獲得した戦略が検証側でも通用する場合追加報酬を与えるような手法を提案している。"]} {"source": "As humans we are driven by a strong desire for seeking novelty in our world. Also upon observing a novel pattern we are capable of refining our understanding of the world based on the new information---humans can discover their world. The outstanding ability of the human mind for discovery has led to many breakthroughs in science, art and technology. Here we investigate the possibility of building an agent capable of discovering its world using the modern AI technology. In particular we introduce NDIGO, Neural Differential Information Gain Optimisation, a self-supervised discovery model that aims at seeking new information to construct a global view of its world from partial and noisy observations. Our experiments on some controlled 2-D navigation tasks show that NDIGO outperforms state-of-the-art information-seeking methods in terms of the quality of the learned representation. The improvement in performance is particularly significant in the presence of white or structured noise where other information-seeking methods follow the noise instead of discovering their world.", "target": ["部分観測な環境(POMDP)にて、内発的報酬を導入した研究。内発的報酬については、行動に対する環境変化の予測分布と、実際の環境変化の分布を比較することで差が大きい(=予測と大きく異なる)なら探索を行うようにしている。変化予測は単体の行動からだけでなく、過去の行動系列に対しても行なっている"]} {"source": "Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. 
In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods including the baseline, 2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.", "target": ["Few-shot learningの評価についての研究。特徴抽出を深いネットワークで、分類を線形分離でなく距離(コサイン類似度)ベースにするだけでSOTAと同等の結果が得られることを確認。またドメインを変えてのテストをすべきとし、mini-ImageNetからCUBで評価を行なっている。"]} {"source": "Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. In this work, in order to help ground the empirical results in this field, we propose new NAS baselines that build off the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate both random search with early-stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks---PTB and CIFAR-10. Our results show that random search with early-stopping is a competitive NAS baseline, e.g., it performs at least as well as ENAS, a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early-stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10. Finally, we explore the existing reproducibility issues of published NAS results. We note the lack of source material needed to exactly reproduce these results, and further discuss the robustness of published results given the various sources of variability in NAS experimental setups. Relatedly, we provide all information (code, random seeds, documentation) needed to exactly reproduce our results, and report our random search with weight-sharing results for each benchmark on multiple runs.", "target": ["ネットワーク構造の自動探索で、再現可能なベースラインを作るという研究。構造探索はハイパーパラメーター探索の一種なので、random search/early stoppingベースのシンプルかつ強力なASHAでベースラインを構築。weight-sharingでの探索でSOTAに近い結果が得られたとのこと。そのため、評価値として「(同等の結果を得るのに)何本のrandom searchが必要か」という指標が適切ではとしている。"]} {"source": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good attempt for knowledge distillation. 
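A side note on the few-shot classification record above, whose modified baseline replaces a linear classifier with a distance (cosine similarity) based one: below is a minimal NumPy sketch of classifying query embeddings by cosine similarity to per-class mean support embeddings. This is a generic distance-based head in that spirit, not the authors' exact implementation.

```python
import numpy as np

def cosine_prototype_classify(support, support_labels, query, n_classes):
    """Assign each query embedding to the class whose mean support embedding
    is closest in cosine similarity (a generic distance-based few-shot head)."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_classes)])
    sims = normalize(query) @ normalize(prototypes).T  # (n_query, n_classes)
    return sims.argmax(axis=1)

# Toy 5-way, 5-shot usage with random "embeddings".
rng = np.random.default_rng(0)
support = rng.normal(size=(25, 64))
labels = np.repeat(np.arange(5), 5)
query = support[::5] + 0.01 * rng.normal(size=(5, 64))  # near one shot per class
print(cosine_prototype_classify(support, labels, query, n_classes=5))
```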
To realize this goal, we utilize an adversarial attack to discover samples supporting a decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves state-of-the-art performance.", "target": ["敵対的サンプルを利用して、モデルの蒸留を行うという研究。敵対的サンプルは決定境界に近い場所にあるだろうという仮説から、それらを用いてより精度よくTeacherの決定境界をStudentに学習させる。あるデータに敵対的摂動を加えて様々なクラスに誤分類させるサンプルを作り、その予測ラベルを真似させることによって、より精度良く決定境界を近似できる。"]} {"source": "Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes. Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. We then demonstrate that the same standard architecture (ResNet-50) that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on 'Stylized-ImageNet', a stylized version of ImageNet. This provides a much better fit for human behavioural performance in our well-controlled psychophysical lab setting (nine experiments totalling 48,560 psychophysical trials across 97 observers) and comes with a number of unexpected emergent benefits such as improved object detection performance and previously unseen robustness towards a wide range of image distortions, highlighting advantages of a shape-based representation.", "target": ["従来の定説と異なり、CNNは形状ではなく画像表面の表面質感(Texture)をもとに判断していると提唱。様々な表面質感変換を加えたStylized-ImageNetデータセットを用いて訓練することで、表面質感依存を軽減した頑健性が高いモデルができるようになるとのこと。"]} {"source": "An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multi-task architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-of-the-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. 
We also present several quantitative and qualitative analysis studies of our model's learned saliency and entailment skills.", "target": ["要約の生成において、QA/Entailmentのタスクを複合させた研究。要約は文書において顕著な情報を含む必要があるため、QAで「質問されるようなこと」、Entailmentで「他の文で言及されること」を学習することを意図している。各タスクでコンテキストとAttentionをシェアしている"]} {"source": "Transfer learning from natural image datasets, particularly ImageNet, using standard large models and corresponding pretrained weights has become a de-facto method for deep learning applications to medical imaging. However, there are fundamental differences in data sizes, features and task specifications between natural image classification and the target medical tasks, and there is little understanding of the effects of transfer. In this paper, we explore properties of transfer learning for medical imaging. A performance evaluation on two large scale medical imaging tasks shows that surprisingly, transfer offers little benefit to performance, and simple, lightweight models can perform comparably to ImageNet architectures. Investigating the learned representations and features, we find that some of the differences from transfer learning are due to the over-parametrization of standard models rather than sophisticated feature reuse. We isolate where useful feature reuse occurs, and outline the implications for more efficient model exploration. We also explore feature independent benefits of transfer arising from weight scalings.", "target": ["画像の転移学習について、何が転移されているのかを調べた研究(画像は医療画像が対象)。転移の効果として、収束速度の向上、ゼロからの学習では得られない特徴検出(本研究ではGabor filter=斜めの検知)の転移があるという。ただ収束速度は単に重みの平均/分散で初期化するだけでも得られるという"]} {"source": "A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.", "target": ["幅が無限とした場合のDNNを最急降下法で学習させる処理は、線形変換と同等であるとした論文(=学習結果は一次のテイラー展開で置き換えられる)。損失関数が2次の場合は出力がGaussianで維持されるため学習の過程はGaussian Processと見做すことができ、この点がBayesianNetと異なるとしている。"]} {"source": "Feature pyramids are widely exploited by both the state-of-the-art one-stage object detectors (e.g., DSSD, RetinaNet, RefineDet) and the two-stage object detectors (e.g., Mask R-CNN, DetNet) to alleviate the problem arising from scale variation across object instances. Although these object detectors with feature pyramids achieve encouraging results, they have some limitations due to that they only simply construct the feature pyramid according to the inherent multi-scale, pyramidal architecture of the backbones which are actually designed for object classification task. 
In this work, we present a method called Multi-Level Feature Pyramid Network (MLFPN) to construct more effective feature pyramids for detecting objects of different scales. First, we fuse multi-level features (i.e. multiple layers) extracted by backbone as the base feature. Second, we feed the base feature into a block of alternating joint Thinned U-shape Modules and Feature Fusion Modules and exploit the decoder layers of each u-shape module as the features for detecting objects. Finally, we gather up the decoder layers with equivalent scales (sizes) to develop a feature pyramid for object detection, in which every feature map consists of the layers (features) from multiple levels. To evaluate the effectiveness of the proposed MLFPN, we design and train a powerful end-to-end one-stage object detector we call M2Det by integrating it into the architecture of SSD, which gets better detection performance than state-of-the-art one-stage detectors. Specifically, on MS-COCO benchmark, M2Det achieves AP of 41.0 at speed of 11.8 FPS with single-scale inference strategy and AP of 44.2 with multi-scale inference strategy, which are the new state-of-the-art results among one-stage detectors. The code will be made available on \\url{this https URL}.", "target": ["物体検知の手法で使用される、Feature Pyramidを改善した研究。既存のピラミッドはクラス分類特化のモデルの、単一レイヤからしか作られていない点を指摘。複数レイヤの特徴を混合し、そこからU型のConv/Deconvを積み上げ物体検出に特化したピラミッドを構築している。"]} {"source": "We present a method for storing multiple models within a single set of parameters. Models can coexist in superposition and still be retrieved individually. In experiments with neural networks, we show that a surprisingly large number of models can be effectively stored within a single parameter instance. Furthermore, each of these models can undergo thousands of training steps without significantly interfering with other models within the superposition. This approach may be viewed as the online complement of compression: rather than reducing the size of a network after training, we make use of the unrealized capacity of a network during training.", "target": ["ニューラルネットの重みを、複数の重みの重ね合わせと考える手法の提案。背景として枝刈りの手法から大半の重みが暇をしていることがわかってきたことがある。重みの空間を効率的に使用するために、入力を分解するコンテキストをかけた上で重みを適用する手法を提案している(フーリエ変換に近い手法)。"]} {"source": "In this paper we introduce the concept of network semantic segmentation for social network analysis. We consider the GitHub social coding network which has been a center of attention for both researchers and software developers. Network semantic segmentation describes the process of associating each user with a class label such as a topic of interest. We augment node attributes with network significant connections and then employ machine learning approaches to cluster the users. We compare the results with a network segmentation performed using community detection algorithms and one executed by clustering with node attributes. Results are compared in terms of community diversity within the semantic segments along with topic", "target": ["既存の2部グラフのネットワークを、意味単位に分類するタスクの提案。実例としてGitHubのデータを使用している。ユーザー=>リポジトリの関連(2部グラフ)から、ユーザー=>ユーザーの関連を推定する。手法としては代表的なクラスタリングの手法をいくつか使って試している。"]} {"source": "Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. 
We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms.", "target": ["表現学習とモデルベースを組み合わせた手法の提案。RNNベースの表現学習が基本だが、画面の再構成だけでなく系列からの予測が直前からの予測からあまり離れないよう正則化を入れている。似た研究としてWorld Modelsがあるが、こちらは戦略のネットワークがなく完全にモデルベースの計画で解いている。"]} {"source": "Anomaly detection is an important problem that has been well-studied within diverse research areas and application domains. The aim of this survey is two-fold: firstly, we present a structured and comprehensive overview of research methods in deep learning-based anomaly detection. Furthermore, we review the adoption of these methods for anomaly detection across various application domains and assess their effectiveness. We have grouped state-of-the-art research techniques into different categories based on the underlying assumptions and approach adopted. Within each category we outline the basic anomaly detection technique, along with its variants and present key assumptions, to differentiate between normal and anomalous behavior. For each category, we also present the advantages and limitations and discuss the computational complexity of the techniques in real application domains. Finally, we outline open issues in research and challenges faced while adopting these techniques.", "target": ["異常検知に深層学習を使用した研究のサーベイ。既存のサーベイは特定領域にフォーカスしたものが多かったが(動画や医療画像など)、本サーベイでは包括的なまとめを行い、また研究だけでなく産業などでの適用事例についてもまとめている。"]} {"source": "Hyperbolic space is a geometry that is known to be well-suited for representation learning of data with an underlying hierarchical structure. In this paper, we present a novel hyperbolic distribution called \\textit{pseudo-hyperbolic Gaussian}, a Gaussian-like distribution on hyperbolic space whose density can be evaluated analytically and differentiated with respect to the parameters. Our distribution enables the gradient-based learning of the probabilistic models on hyperbolic space that could never have been considered before. Also, we can sample from this hyperbolic probability distribution without resorting to auxiliary means like rejection sampling. As applications of our distribution, we develop a hyperbolic-analog of variational autoencoder and a method of probabilistic word embedding on hyperbolic space. We demonstrate the efficacy of our distribution on various datasets including MNIST, Atari 2600 Breakout, and WordNet.", "target": ["双曲空間上で、ガウスライク+微分可能な分布を定義したという研究。双曲空間としてはPoincaréが代表的だが、こちらではLorentzを使用している。平均0/分散Σでサンプリングした点を、Lorentzモデルに沿うよう移動(Parallel transport)/転写(Exponential map)することで空間上の点を得ている。"]} {"source": "The transfer of knowledge from one policy to another is an important tool in Deep Reinforcement Learning. 
This process, referred to as distillation, has been used to great success, for example, by enhancing the optimisation of agents, leading to stronger performance faster, on harder domains [26, 32, 5, 8]. Despite the widespread use and conceptual simplicity of distillation, many different formulations are used in practice, and the subtle variations between them can often drastically change the performance and the resulting objective that is being optimised. In this work, we rigorously explore the entire landscape of policy distillation, comparing the motivations and strengths of each variant through theoretical and empirical analysis. Our results point to three distillation techniques, that are preferred depending on specifics of the task. Specifically a newly proposed expected entropy regularised distillation allows for quicker learning in a wide range of situations, while still guaranteeing convergence.", "target": ["強化学習における、戦略の蒸留についてまとめた研究。既存の蒸留手法について、どの時にどういうパターンが良いのかがまとめられている。蒸留を行うに際して、教師側と生徒側の確率分布の差を正則化として使うと良いとしている。"]} {"source": "Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.", "target": ["Inputではなく隠れ層で混ぜ合わせるMIXUPを提案。隠れ層における表現で混ぜ合わせると、非線型変換で特徴量がより分離しやすい状態でMIXUPにするのでよい。2つのデータの中間点データが良い決定境界を作りやすくなるため、敵対的サンプルにも強いようだ。"]} {"source": "We explore and expand the \\textit{Soft Nearest Neighbor Loss} to measure the \\textit{entanglement} of class manifolds in representation space: i.e., how close pairs of points from the same class are relative to pairs of points from different classes. We demonstrate several use cases of the loss. As an analytical tool, it provides insights into the evolution of class similarity structures during learning. Surprisingly, we find that \\textit{maximizing} the entanglement of representations of different classes in the hidden layers is beneficial for discrimination in the final layer, possibly because it encourages representations to identify class-independent similarity structures. Maximizing the soft nearest neighbor loss in the hidden layers leads not only to improved generalization but also to better-calibrated estimates of uncertainty on outlier data. 
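Returning to the Manifold Mixup record above: a minimal PyTorch sketch of mixing hidden representations (and labels) of a shuffled batch. The two-layer network, the Beta(2, 2) sampling and the soft-label cross-entropy are illustrative assumptions rather than the authors' training setup.

```python
import torch
import torch.nn.functional as F

def manifold_mixup_loss(x, y, layer1, layer2, alpha=2.0, n_classes=10):
    """Interpolate hidden representations and labels of a shuffled batch,
    then score the mixed logits against the mixed soft labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))

    h = F.relu(layer1(x))                  # hidden representation
    h_mix = lam * h + (1 - lam) * h[perm]  # interpolate in hidden space
    logits = layer2(h_mix)

    y_onehot = F.one_hot(y, n_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Toy usage.
torch.manual_seed(0)
layer1, layer2 = torch.nn.Linear(32, 64), torch.nn.Linear(64, 10)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(manifold_mixup_loss(x, y, layer1, layer2).item())
```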
Data that is not from the training distribution can be recognized by observing that in the hidden layers, it has fewer than the normal number of neighbors from the predicted class.", "target": ["クラス間のもつれ具合を計測するSoft Nearest Neighbor Lossを検証した研究(発表自体は2007年。ただ本研究では温度Tを加えている)。もつれは分類性能が高い場合当然低くなるが、GANなど本物/偽物が区別つかなくなる方が良い場合高い方がよくなる。こうした理論的な推論を、実際に実験して確認している"]} {"source": "It is shown that many published models for the Stanford Question Answering Dataset (Rajpurkar et al., 2016) lack robustness, suffering an over 50% decrease in F1 score during adversarial evaluation based on the AddSent (Jia and Liang, 2017) algorithm. It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. We propose a novel alternative adversary-generation algorithm, AddSentDiverse, that significantly increases the variance within the adversarial training data by providing effective examples that punish the model for making certain superficial assumptions. Further, in order to improve robustness to AddSent's semantic perturbations (e.g., antonyms), we jointly improve the model's semantic-relationship learning capabilities in addition to our AddSentDiverse-based adversarial training data augmentation. With these additions, we show that we can make a state-of-the-art model significantly more robust, achieving a 36.5% increase in F1 score under many different types of adversarial evaluation while maintaining performance on the regular SQuAD task.", "target": ["既存のQAモデルは、文書内の単語を置き換えたり不正解の文を挿入するといったAdversarialな操作に弱かった。そこで、既存のAdversarial手法に対応するためのData Augmentationを行った研究。不正解文をランダムに挿入/WordNetを使い対義語に置き換えといった操作を行っている"]} {"source": "Conventional neural autoregressive decoding commonly assumes a fixed left-to-right generation order, which may be sub-optimal. In this work, we propose a novel decoding algorithm -- InDIGO -- which supports flexible sequence generation in arbitrary orders through insertion operations. We extend Transformer, a state-of-the-art sequence generation model, to efficiently implement the proposed approach, enabling it to be trained with either a pre-defined generation order or adaptive orders obtained from beam-search. Experiments on four real-world tasks, including word order recovery, machine translation, image caption and code generation, demonstrate that our algorithm can generate sequences following arbitrary orders, while achieving competitive or even better performance compared to the conventional left-to-right generation. The generated sequences show that InDIGO adopts adaptive generation orders based on input information.", "target": ["自然言語の生成を行う際、別に左から右に順番で生成しなくてもよいのでは?という着想に基づいた研究。ベースはTransformerだが、予測においては単語と挿入位置(左or右)を予測させ、組み立てる形で文を生成する。"]} {"source": "While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. 
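For the soft nearest neighbor loss record above, a small NumPy sketch of the usual temperature-scaled, squared-Euclidean formulation with self-pairs excluded; batch construction and the choice of features are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def soft_nearest_neighbor_loss(x, y, temperature=1.0):
    """Entanglement measure: low when each point's soft nearest neighbors
    share its label. x: (n, d) features, y: (n,) integer labels."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    sims = np.exp(-sq_dists / temperature)
    np.fill_diagonal(sims, 0.0)                      # exclude self-pairs

    same_class = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(same_class, 0.0)

    numer = (sims * same_class).sum(axis=1)
    denom = sims.sum(axis=1)
    return -np.mean(np.log(numer / denom + 1e-12))

# Toy usage: two well-separated classes give a low (near-zero) loss.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.1, size=(10, 2)), rng.normal(3, 0.1, size=(10, 2))])
y = np.array([0] * 10 + [1] * 10)
print(soft_nearest_neighbor_loss(x, y))
```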
For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.", "target": ["教師なしの翻訳で、フレーズベースの翻訳モデルとSeq-to-Seqベースの翻訳モデルを組み合わせた研究。フレーズベースについてはcross-lingualな分散表現を使うとともに、固有表現などにうまく対応するためsubwordの情報も使用している。構築したフレーズベースをSeq-to-Seqの学習に使いブーストしている"]} {"source": "Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models, and propose a strategy to overcome those. In particular, the range of \"neighboring\" nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance.", "target": ["Graph Convolutionではレイヤ数を増やすことで考慮できる周辺ノードの半径を広げられるが、グラフ構造は均一でないためある段階から爆発的に増減する場合もあり、ノードの特徴に大きな影響を与える。そのためレイヤーごとの出力をマージするレイヤを設け調整する手法を提案"]} {"source": "Cross-lingual word embeddings (CLEs) enable multilingual modeling of meaning and facilitate cross-lingual transfer of NLP models. Despite their ubiquitous usage in downstream tasks, recent increasingly popular projection-based CLE models are almost exclusively evaluated on a single task only: bilingual lexicon induction (BLI). Even BLI evaluations vary greatly, hindering our ability to correctly interpret performance and properties of different CLE models. In this work, we make the first step towards a comprehensive evaluation of cross-lingual word embeddings. We thoroughly evaluate both supervised and unsupervised CLE models on a large number of language pairs in the BLI task and three downstream tasks, providing new insights concerning the ability of cutting-edge CLE models to support cross-lingual NLP. We empirically demonstrate that the performance of CLE models largely depends on the task at hand and that optimizing CLE models for BLI can result in deteriorated downstream performance. We indicate the most robust supervised and unsupervised CLE models and emphasize the need to reassess existing baselines, which still display competitive performance across the board. We hope that our work will catalyze further work on CLE evaluation and model analysis.", "target": ["多言語の分散表現について、よく利用されるBLI評価(A言語の単語から同じ意味のB言語の単語を予測させるタスクの精度)だけでなく、翻訳や文書分類といった後続タスクでの性能についても検証した研究。直交性を持たない変換を使う場合はBLIと後続タスクの精度が比例しないケースがあるため要注意とのこと"]} {"source": "Normalization layers are a staple in state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rate, accelerate convergence and improve generalization, though the reason for their effectiveness is still an active research topic. In this work, we challenge the commonly-held beliefs by showing that none of the perceived benefits is unique to normalization. 
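For the jumping knowledge record above, a minimal sketch of the layer-aggregation step: each node's final representation combines its representations from every GNN layer, here by elementwise max or concatenation (two of the aggregators the paper discusses); everything else, including the random per-layer features, is illustrative.

```python
import numpy as np

def jumping_knowledge(layer_reps, mode="max"):
    """Combine per-layer node representations into a final one.

    layer_reps: list of (n_nodes, dim) arrays, one per GNN layer.
    mode: "max" takes an elementwise maximum, "concat" concatenates features.
    """
    if mode == "max":
        return np.maximum.reduce(layer_reps)
    if mode == "concat":
        return np.concatenate(layer_reps, axis=1)
    raise ValueError(f"unknown mode: {mode}")

# Toy usage: three layers of representations for 4 nodes.
rng = np.random.default_rng(0)
reps = [rng.normal(size=(4, 8)) for _ in range(3)]
print(jumping_knowledge(reps, "max").shape)     # (4, 8)
print(jumping_knowledge(reps, "concat").shape)  # (4, 24)
```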
Specifically, we propose fixed-update initialization (Fixup), an initialization motivated by solving the exploding and vanishing gradient problem at the beginning of training via properly rescaling a standard initialization. We find training residual networks with Fixup to be as stable as training with normalization -- even for networks with 10,000 layers. Furthermore, with proper regularization, Fixup enables residual networks without normalization to achieve state-of-the-art performance in image classification and machine translation.", "target": ["バッチ正規化(BN)を使わずとも重みの初期化だけで同程度の性能が出せるとした研究(画像分類/翻訳の双方で検証)。ResNet Blockは2ルートが合流する形をとるため基本分散が2倍となり勾配爆発が起こる。BNはブロック単位でこれを阻止するが、重みのスケーリングでこれを阻止する"]} {"source": "Multi-agent cooperation is an important feature of the natural world. Many tasks involve individual incentives that are misaligned with the common good, yet a wide range of organisms from bacteria to insects and humans are able to overcome their differences and collaborate. Therefore, the emergence of cooperative behavior amongst self-interested individuals is an important question for the fields of multi-agent reinforcement learning (MARL) and evolutionary theory. Here, we study a particular class of multi-agent problems called intertemporal social dilemmas (ISDs), where the conflict between the individual and the group is particularly sharp. By combining MARL with appropriately structured natural selection, we demonstrate that individual inductive biases for cooperation can be learned in a model-free way. To achieve this, we introduce an innovative modular architecture for deep reinforcement learning agents which supports multi-level selection. We present results in two challenging environments, and interpret these in the context of cultural and ecological evolution.", "target": ["マルチエージェントの強化学習で、協調するための内発的報酬を与えるという研究。エージェントと内発的報酬のネットワークは分かれており、エージェントのチーム(5 policy)と1報酬ネットワークを組み合わせてプレイを行い、獲得できた報酬により重みをつけて学習を行う(進化戦略的な更新 #531 )。"]} {"source": "Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions. We here introduce a high-performance DNN architecture on ImageNet whose decisions are considerably easier to explain. Our model, a simple variant of the ResNet-50 architecture called BagNet, classifies an image based on the occurrences of small local image features without taking into account their spatial ordering. This strategy is closely related to the bag-of-feature (BoF) models popular before the onset of deep learning and reaches a surprisingly high accuracy on ImageNet (87.6% top-5 for 32 x 32 px features and Alexnet performance for 16 x16 px features). The constraint on local features makes it straight-forward to analyse how exactly each part of the image influences the classification. Furthermore, the BagNets behave similar to state-of-the art deep neural networks such as VGG-16, ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. 
This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years is mostly achieved by better fine-tuning rather than by qualitatively different decision strategies.", "target": ["DNNを使ってBag of Featureを作成するという研究。局所特徴それぞれから各クラスの分類確率を出力し、その合算で最終分類を決定する。局所特徴とクラス分類確率の対応が明確なので、説明力が高くなる。得られたSensitivity Mapは、VGGやResNetなどと近かったとのこと。"]} {"source": "Recent works have highlighted the strength of the Transformer architecture on sequence tasks while, at the same time, neural architecture search (NAS) has begun to outperform human-designed models. Our goal is to apply NAS to search for a better alternative to the Transformer. We first construct a large search space inspired by the recent advances in feed-forward sequence models and then run evolutionary architecture search with warm starting by seeding our initial population with the Transformer. To directly search on the computationally expensive WMT 2014 English-German translation task, we develop the Progressive Dynamic Hurdles method, which allows us to dynamically allocate more resources to more promising candidate models. The architecture found in our experiments -- the Evolved Transformer -- demonstrates consistent improvement over the Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French, WMT 2014 English-Czech and LM1B. At a big model size, the Evolved Transformer establishes a new state-of-the-art BLEU score of 29.8 on WMT'14 English-German; at smaller sizes, it achieves the same quality as the original \"big\" Transformer with 37.6% less parameters and outperforms the Transformer by 0.7 BLEU at a mobile-friendly model size of 7M parameters.", "target": ["Transformerの構造を、構造探索を用いて最適化したという研究。構造探索には進化戦略を使用しており、学習結果が良いものを親としてさらに子を生成する。子についてはより良いものに学習リソースを割り当てるために、一定ステップごとにハードルを設けて厳選している。大幅な小サイズ化ができている"]} {"source": "We define general linguistic intelligence as the ability to reuse previously acquired knowledge about a language's lexicon, syntax, semantics, and pragmatic conventions to adapt to new tasks quickly. Using this definition, we analyze state-of-the-art natural language understanding models and conduct an extensive empirical investigation to evaluate them against these criteria through a series of experiments that assess the task-independence of the knowledge being acquired by the learning process. In addition to task performance, we propose a new evaluation metric based on an online encoding of the test data that quantifies how quickly an existing agent (model) learns a new task. Our results show that while the field has made impressive progress in terms of model architectures that generalize to many tasks, these models still require a lot of in-domain training examples (e.g., for fine tuning, training task-specific modules), and are prone to catastrophic forgetting. Moreover, we find that far from solving general tasks (e.g., document question answering), our models are overfitting to the quirks of particular datasets (e.g., SQuAD). We discuss missing components and conjecture on how to make progress toward general linguistic intelligence.", "target": ["BERT/ELMoを題材に、自然言語処理モデルの転移能力を検証した論文。モデルがどれだけ早く学習結果を応用できるかを計測するスコアを提案(Learning Curveの収束速度を数値化したようなもの)。教師ありタスクを事前学習に含むと、F1が上がる一方このスコアが下がることを確認"]} {"source": "A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. 
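A minimal sketch of the bag-of-local-features decision rule summarized in the BagNet record above: class evidence is computed per image patch and spatially averaged, which is what makes each patch's contribution to the decision directly readable. The non-overlapping patches and the linear patch scorer are illustrative stand-ins for the actual CNN.

```python
import numpy as np

def bag_of_patches_logits(image, patch_size, W, b):
    """Average class logits over non-overlapping patches of a grayscale image.

    W: (patch_size * patch_size, n_classes) patch scorer, b: (n_classes,).
    Returns image-level logits and the per-patch logits used for explanations.
    """
    h, w = image.shape
    patch_logits = []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patch = image[i:i + patch_size, j:j + patch_size].ravel()
            patch_logits.append(patch @ W + b)   # local class evidence
    patch_logits = np.stack(patch_logits)
    return patch_logits.mean(axis=0), patch_logits

# Toy usage.
rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
W, b = rng.normal(size=(64, 10)) * 0.1, np.zeros(10)
logits, per_patch = bag_of_patches_logits(img, patch_size=8, W=W, b=b)
print(logits.shape, per_patch.shape)  # (10,) (16, 10)
```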
On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of \"superhuman\" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).", "target": ["内発的報酬の手法を改善し、Montezumaで圧倒的なスコアを記録した手法。内発的報酬は新規の状態に対し与えられるため、探索ルートが深い場合入り口付近の探索が済んでしまうと奥まで行かなくなる問題があった。そこで探索した所まで戻る手法を提案(状態到達の難易度(確率)=奥ほど低いをベースに選択)"]} {"source": "It is intuitive that NLP tasks for logographic languages like Chinese should benefit from the use of the glyph information in those languages. However, due to the lack of rich pictographic evidence in glyphs and the weak generalization ability of standard computer vision models on character data, an effective way to utilize the glyph information remains to be found. In this paper, we address this gap by presenting Glyce, the glyph-vectors for Chinese character representations. We make three major innovations: (1) We use historical Chinese scripts (e.g., bronzeware script, seal script, traditional Chinese, etc) to enrich the pictographic evidence in characters; (2) We design CNN structures (called tianzege-CNN) tailored to Chinese character image processing; and (3) We use image-classification as an auxiliary task in a multi-task learning setup to increase the model's ability to generalize. We show that glyph-based models are able to consistently outperform word/char ID-based models in a wide range of Chinese NLP tasks. We are able to set new state-of-the-art results for a variety of Chinese NLP tasks, including tagging (NER, CWS, POS), sentence pair classification, single sentence classification tasks, dependency parsing, and semantic role labeling. For example, the proposed model achieves an F1 score of 80.6 on the OntoNotes dataset of NER, +1.5 over BERT; it achieves an almost perfect accuracy of 99.8\\% on the Fudan corpus for text classification. 
Code found at this https URL.", "target": ["A study that obtains character embeddings for logographic scripts such as Chinese characters by also using image information. Because character shapes have been simplified over time, several historical scripts are used as well, and features are extracted from characters divided into four quadrants (like the four-cell grid used in character practice books), among other devices. Improvements of 1-2 points are obtained on a variety of NLP tasks."]} {"source": "Analog IC design relies on human experts to search for parameters that satisfy circuit specifications with their experience and intuitions, which is highly labor intensive, time consuming and suboptimal. Machine learning is a promising tool to automate this process. However, supervised learning is difficult for this task due to the low availability of training data: 1) Circuit simulation is slow, thus generating a large-scale dataset is time-consuming; 2) Most circuit designs are proprietary IPs within individual IC companies, making it expensive to collect large-scale datasets. We propose Learning to Design Circuits (L2DC) to leverage reinforcement learning that learns to efficiently generate new circuit data and to optimize circuits. We fix the schematic, and optimize the parameters of the transistors automatically by training an RL agent with no prior knowledge about optimizing circuits. After iteratively getting observations, generating a new set of transistor parameters, getting a reward, and adjusting the model, L2DC is able to optimize circuits. We evaluate L2DC on two transimpedance amplifiers. Trained for a day, our RL agent can achieve comparable or better performance than human experts trained for a quarter. It first learns to meet hard constraints (e.g. gain, bandwidth), and then learns to optimize good-to-have targets (e.g. area, power). Compared with grid search-aided human design, L2DC can achieve 250× higher sample efficiency with comparable performance. Under the same runtime constraint, the performance of L2DC is also better than Bayesian Optimization.", "target": ["A study that carries out analog IC circuit design with reinforcement learning. The overall circuit and each component (transistor) constitute the state, and the actions are decisions on parameters such as transistor size and capacitance. A simulator checks whether the required specifications are met, and rewards are assigned accordingly. DDPG is used as the model."]} {"source": "Recent work has shown that it is possible to train deep neural networks that are provably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they often result in difficult optimization procedures that remain hard to scale to larger networks. Through a comprehensive analysis, we show how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state-of-the-art in verified accuracy. While the upper bound computed by IBP can be quite weak for general networks, we demonstrate that an appropriate loss and clever hyper-parameter schedule allow the network to adapt such that the IBP bound is tight. This results in a fast and stable learning algorithm that outperforms more sophisticated methods and achieves state-of-the-art results on MNIST, CIFAR-10 and SVHN. It also allows us to train the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet.", "target": ["A method for improving robustness to adversarial examples. Training adds perturbations to samples, but the model is trained against the upper-bound perturbation when its prediction is wrong and against the lower-bound perturbation when its prediction is correct (the perturbation budget is increased gradually)."]} {"source": "Optimal transport (OT) theory can be informally described using the words of the French mathematician Gaspard Monge (1746-1818): A worker with a shovel in hand has to move a large pile of sand lying on a construction site.
The goal of the worker is to erect with all that sand a target pile with a prescribed shape (for example, that of a giant sand castle). Naturally, the worker wishes to minimize her total effort, quantified for instance as the total distance or time spent carrying shovelfuls of sand. Mathematicians interested in OT cast that problem as that of comparing two probability distributions, two different piles of sand of the same volume. They consider all of the many possible ways to morph, transport or reshape the first pile into the second, and associate a \"global\" cost to every such transport, using the \"local\" consideration of how much it costs to move a grain of sand from one place to another. Recent years have witnessed the spread of OT in several fields, thanks to the emergence of approximate solvers that can scale to sizes and dimensions that are relevant to data sciences. Thanks to this newfound scalability, OT is being increasingly used to unlock various problems in imaging sciences (such as color or texture processing), computer vision and graphics (for shape manipulation) or machine learning (for regression, classification and density fitting). This short book reviews OT with a bias toward numerical methods and their applications in data sciences, and sheds light on the theoretical properties of OT that make it particularly useful for some of these applications.", "target": ["An expository text on optimal transport (roughly 200 pages). Carrying out optimal transport requires measuring distances and costs, and the Wasserstein distance used in GANs originates from this field. Applications to imaging and machine learning are also kept in view."]} {"source": "In this paper, we provide a theoretical understanding of word embedding and its dimensionality. Motivated by the unitary-invariance of word embedding, we propose the Pairwise Inner Product (PIP) loss, a novel metric on the dissimilarity between word embeddings. Using techniques from matrix perturbation theory, we reveal a fundamental bias-variance trade-off in dimensionality selection for word embeddings. This bias-variance trade-off sheds light on many empirical observations which were previously unexplained, for example the existence of an optimal dimensionality. Moreover, new insights and discoveries, like when and how word embeddings are robust to over-fitting, are revealed. By optimizing over the bias-variance trade-off of the PIP loss, we can explicitly answer the open question of dimensionality selection for word embedding.", "target": ["A study on finding the appropriate dimensionality of word embeddings. As the notion of 'appropriate', it relies on the fact that applying a unitary operator such as a rotation to the whole embedding leaves inner products unchanged (the relative positions of the word vectors are preserved), and proposes the PIP loss, which measures the discrepancy between embeddings before and after such a transformation. The dimensionality that minimizes this loss is taken to be the best."]} {"source": "Zero-sum games such as chess and poker are, abstractly, functions that evaluate pairs of agents, for example labeling them `winner' and `loser'. If the game is approximately transitive, then self-play generates sequences of agents of increasing strength. However, nontransitive games, such as rock-paper-scissors, can exhibit strategic cycles, and there is no longer a clear objective -- we want agents to increase in strength, but against whom is unclear. In this paper, we introduce a geometric framework for formulating agent objectives in zero-sum games, in order to construct adaptive sequences of objectives that yield open-ended learning. The framework allows us to reason about population performance in nontransitive games, and enables the development of a new algorithm (rectified Nash response, PSRO_rN) that uses game-theoretic niching to construct diverse populations of effective agents, producing a stronger set of agents than existing algorithms.
We apply PSRO_rN to two highly nontransitive resource allocation games and find that PSRO_rN consistently outperforms the existing alternatives.", "target": ["A study that, in reinforcement learning, seeks to map out the strategy space rather than to obtain a single strategy. When no always-winning strategy exists (i.e., there are trade-offs such as beating A but losing to B), continued updates do not yield an optimal strategy. The paper therefore proposes a method that uncovers the relationships among strategies, i.e., the strategy space, and derives strategies from its equilibrium points."]} {"source": "Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether causal reasoning can emerge via meta-reinforcement learning. We train a recurrent network with model-free reinforcement learning to solve a range of problems that each contain causal structure. We find that the trained agent can perform causal reasoning in novel situations in order to obtain rewards. The agent can select informative interventions, draw causal inferences from observational data, and make counterfactual predictions. Although established formal causal reasoning algorithms also exist, in this paper we show that such reasoning can arise from model-free reinforcement learning, and suggest that causal reasoning in complex settings may benefit from the more end-to-end learning-based approaches presented here. This work also offers new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform -- and interpret -- experiments.", "target": ["An attempt to solve causal inference with reinforcement learning. For causal relations that include a confounder (a factor influencing both A and B, which are expected to be related), the agent learns through a task of first gathering information and then answering questions about the relation (receiving a reward for a correct answer). It adopts a meta-learning setup in which a meta-model (an RNN) is learned from the experience of a model-free agent."]} {"source": "Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.", "target": ["A study that examines which training tasks work best for obtaining multilingual sentence representations. The base is a language model, and three objectives are proposed: predicting the next word as usual (Causal LM), predicting dropped words (Masked LM), and, when parallel data is available, performing Masked LM over concatenated sentence pairs (Translation LM). CLM