diff --git "a/data/it/dev.jsonl" "b/data/it/dev.jsonl" new file mode 100644--- /dev/null +++ "b/data/it/dev.jsonl" @@ -0,0 +1,619 @@ +{"source": "Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs.MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training.Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages.We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter.We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods.We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point.", "target": ["Abbiamo ideato uno scaling adattativo delle loss per migliorare il training a precisione mista che supera i risultati dello stato dell'arte.", "Proposta di un metodo adattivo di scaling delle loss durante la backpropagation per il training a precisione mista dove il fattore di scala viene deciso automaticamente per ridurre l'underflow.", "Gli autori propongono un metodo per addestrare modelli in precisione FP16 che adotta un modo più elaborato per minimizzare l'underflow in ogni layer simultaneamente e automaticamente."]} +{"source": "Many real-world problems, e.g. 
object detection, have outputs that are naturally expressed as sets of entities.This creates a challenge for traditional deep neural networks which naturally deal with structured outputs such as vectors, matrices or tensors.We present a novel approach for learning to predict sets with unknown permutation and cardinality using deep neural networks.Specifically, in our formulation we incorporate the permutation as unobservable variable and estimate its distribution during the learning process using alternating optimization.We demonstrate the validity of this new formulation on two relevant vision problems: object detection, for which our formulation outperforms state-of-the-art detectors such as Faster R-CNN and YOLO, and a complex CAPTCHA test, where we observe that, surprisingly, our set based network acquired the ability of mimicking arithmetics without any rules being coded.", "target": ["Presentiamo un nuovo approccio per imparare a prevedere insiemi con permutazione e cardinalità sconosciute usando reti neurali deep feed-forward.", "Una formulazione per imparare la distribuzione su variabili di permutazione non osservabili basata su reti deep per il problema della predizione degli insiemi."]} +{"source": "Foveation is an important part of human vision, and a number of deep networks have also used foveation.However, there have been few systematic comparisons between foveating and non-foveating deep networks, and between different variable-resolution downsampling methods.Here we define several such methods, and compare their performance on ImageNet recognition with a Densenet-121 network.The best variable-resolution method slightly outperforms uniform downsampling.Thus in our experiments, foveation does not substantially help or hinder object recognition in deep networks.", "target": ["Confrontiamo le prestazioni di object recognition su immagini sottocampionate in modo uniforme e con tre diversi schemi di foveazione."]} +{"source": "We explore the concept of co-design in the context of neural network verification.Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily.To this end, we identify two properties of network models - weight sparsity and so-called ReLU stability - that turn out to significantly impact the complexity of the corresponding verification task.We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones.Then, improving ReLU stability leads to an additional 4-13x speedup in verification times.An important feature of our methodology is its \"universality,\" in the sense that it can be used with a broad range of training procedures and verification approaches.", "target": ["Sviluppiamo metodi per addestrare modelli neurali deep che sono robusti alle perturbazioni avversarie e la cui robustezza è significativamente più facile da verificare.", "L'articolo presenta diversi modi per regolarizzare le reti ReLU semplici per ottimizzare la robustezza avversaria, la robustezza avversaria dimostrabile e la velocità di verifica.", "Questo articolo propone metodi per addestrare reti neurali robuste che possono essere verificate più velocemente, utilizzando metodi di pruning per incoraggiare la sparsità dei pesi e la regolarizzazione per incoraggiare la stabilità ReLU."]} +{"source": "Batch Normalization (BatchNorm) has shown to be effective for improving and accelerating the training of deep 
neural networks.However, recently it has been shown that it is also vulnerable to adversarial perturbations.In this work, we aim to investigate the cause of adversarial vulnerability of the BatchNorm.We hypothesize that the use of different normalization statistics during training and inference (mini-batch statistics for training and moving average of these values at inference) is the main cause of this adversarial vulnerability in the BatchNorm layer.We empirically proved this by experiments on various neural network architectures and datasets.Furthermore, we introduce Robust Normalization (RobustNorm) and experimentally show that it is not only resilient to adversarial perturbation but also inherit the benefits of BatchNorm.", "target": ["Indagine su come BatchNorm provoca la vulnerabilità avversaria e come evitarla.", "Questo articolo affronta la vulnerabilità alle perturbazioni avversarie in BatchNorm, e propone un'alternativa chiamata RobustNorm, utilizzando il rescaling min-max invece della normalizzazione.", "Questo articolo indaga la ragione dietro la vulnerabilità di BatchNorm e propone Robust Normalization, un metodo di normalizzazione che raggiunge risultati significativamente migliori sotto una varietà di metodi di attacco."]} +{"source": "Electronic Health Records (EHR) comprise of longitudinal clinical observations portrayed with sparsity, irregularity, and high-dimensionality which become the major obstacles in drawing reliable downstream outcome.Despite greatly numbers of imputation methods are being proposed to tackle these issues, most of the existing methods ignore correlated features or temporal dynamics and entirely put aside the uncertainty.In particular, since the missing values estimates have the risk of being imprecise, it motivates us to pay attention to reliable and less certain information differently.In this work, we propose a novel variational-recurrent imputation network (V-RIN), which unified imputation and prediction network, by taking into account the correlated features, temporal dynamics, and further utilizing the uncertainty to alleviate the risk of biased missing values estimates.Specifically, we leverage the deep generative model to estimate the missing values based on the distribution among variables and a recurrent imputation network to exploit the temporal relations in conjunction with utilization of the uncertainty.We validated the effectiveness of our proposed model with publicly available real-world EHR dataset, PhysioNet Challenge 2012, and compared the results with other state-of-the-art competing methods in the literature.", "target": ["La nostra rete di imputazione variazionale-recorrente (V-RIN) prende in considerazione le caratteristiche correlate, le dinamiche temporali e utilizza ulteriormente l'incertezza per alleviare il rischio di stime distorte dei valori mancanti.", "Una imputation network dei dati mancanti per incorporare la correlazione, le relazioni temporali e l'incertezza dei dati per il problema della scarsità dei dati in EHRs, che produce un AUC più alto sui task di classificazione del tasso di mortalità.", "L'articolo ha presentato un metodo che combina VAE e uncertainty aware GRU per l'imputazione sequenziale dei dati mancanti e la predizione dei risultati."]} +{"source": "Despite the state-of-the-art accuracy of Deep Neural Networks (DNN) in various classification problems, their deployment onto resource constrained edge computing devices remains challenging due to their large size and complexity.Several recent studies 
have reported remarkable results in reducing this complexity through quantization of DNN models.However, these studies usually do not consider the changes in the loss function when performing quantization, nor do they take the different importances of DNN model parameters to the accuracy into account.We address these issues in this paper by proposing a new method, called adaptive quantization, which simplifies a trained DNN model by finding a unique, optimal precision for each network parameter such that the increase in loss is minimized.The optimization problem at the core of this method iteratively uses the loss function gradient to determine an error margin for each parameter and assigns it a precision accordingly.Since this problem uses linear functions, it is computationally cheap and, as we will show, has a closed-form approximate solution.Experiments on MNIST, CIFAR, and SVHN datasets showed that the proposed method can achieve near or better than state-of-the-art reduction in model size with similar error rates.Furthermore, it can achieve compressions close to floating-point model compression methods without loss of accuracy.", "target": ["Un metodo adattivo per la quantizzazione in virgola fissa delle reti neurali basato sull'analisi teorica piuttosto che sull'euristica.", "Propone un metodo per quantizzare le reti neurali che permette di quantizzare i pesi con una precisione diversa a seconda della loro importanza, tenendo conto della loss.", "L'articolo propone una tecnica per quantizzare i pesi di una rete neurale con profondità/precisione di bit variabile per ogni parametro."]} +{"source": "We study the problem of learning permutation invariant representations that can capture containment relations.We propose training a model on a novel task: predicting the size of the symmetric difference between pairs of multisets, sets which may contain multiple copies of the same object.With motivation from fuzzy set theory, we formulate both multiset representations and how to predict symmetric difference sizes given these representations.We model multiset elements as vectors on the standard simplex and multisets as the summations of such vectors, and we predict symmetric difference as the l1-distance between multiset representations.We demonstrate that our representations more effectively predict the sizes of symmetric differences than DeepSets-based approaches with unconstrained object representations.Furthermore, we demonstrate that the model learns meaningful representations, mapping objects of different classes to different standard basis vectors.", "target": ["Sulla base della teoria degli insiemi fuzzy, proponiamo un modello che, date solo le dimensioni delle differenze simmetriche tra coppie di multiset, impara le rappresentazioni di tali multiset e dei loro elementi.", "Questo articolo propone un nuovo task di apprendimento degli insiemi, prevedendo la dimensione della differenza simmetrica tra più insiemi, e fornisce un metodo per risolvere il task basato sulla teoria degli insiemi fuzzy."]} +{"source": "It is important to collect credible training samples $(x,y)$ for building data-intensive learning systems (e.g., a deep learning system).In the literature, there is a line of studies on eliciting distributional information from self-interested agents who hold a relevant information. 
Asking people to report complex distribution $p(x)$, though theoretically viable, is challenging in practice.This is primarily due to the heavy cognitive loads required for human agents to reason and report this high dimensional information.Consider the example where we are interested in building an image classifier via first collecting a certain category of high-dimensional image data.While classical elicitation results apply to eliciting a complex and generative (and continuous) distribution $p(x)$ for this image data, we are interested in eliciting samples $x_i \\sim p(x)$ from agents.This paper introduces a deep learning aided method to incentivize credible sample contributions from selfish and rational agents.The challenge to do so is to design an incentive-compatible score function to score each reported sample to induce truthful reports, instead of an arbitrary or even adversarial one.We show that with accurate estimation of a certain $f$-divergence function we are able to achieve approximate incentive compatibility in eliciting truthful samples.We then present an efficient estimator with theoretical guarantee via studying the variational forms of $f$-divergence function.Our work complements the literature of information elicitation via introducing the problem of \\emph{sample elicitation}. We also show a connection between this sample elicitation problem and $f$-GAN, and how this connection can help reconstruct an estimator of the distribution based on collected samples.", "target": ["Questo articolo propone un metodo basato su deep learning per ottenere campioni credibili da agenti auto-interessati.", "Gli autori propongono un quadro di elicitazione del campione per il problema di elicitare campioni credibili dagli agenti per distribuzioni complesse, suggeriscono che i quadri neurali deep possono essere applicati in questo quadro, e collegano l'elicitazione del campione e f-GAN.", "Questo articolo studia il problema dell'elicitazione del campione, proponendo un approccio di apprendimento deep che si basa sull'espressione duale della f-divergenza che scrive come massimo su un insieme di funzioni t."]} +{"source": "The celebrated Sequence to Sequence learning (Seq2Seq) technique and its numerous variants achieve excellent performance on many tasks.However, many machine learning tasks have inputs naturally represented as graphs; existing Seq2Seq models face a significant challenge in achieving accurate conversion from graph form to the appropriate sequence.To address this challenge, we introduce a general end-to-end graph-to-sequence neural encoder-decoder architecture that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors.Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.We further introduce an attention mechanism that aligns node embeddings and the decoding sequence to better cope with large graphs.Experimental results on bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms existing graph neural networks, Seq2Seq, and Tree2Seq models; using the proposed bi-directional node embedding aggregation strategy, the model can converge rapidly to the optimal performance.", "target": ["Grapht to Sequence Learning con reti neurali basate su attention", 
"Un'architettura graph2seq che combina un encoder di grafi che mescola componenti GGNN e GCN con un encoder di sequenze con attention e che mostra miglioramenti rispetto alle baseline.", "Questo lavoro propone un encoder di grafi end-to-end a modelli di decoder di sequenze con un meccanismo di attention nel mezzo."]} +{"source": "We address the problem of learning to discover 3D parts for objects in unseen categories.Being able to learn the geometry prior of parts and transfer this prior to unseen categories pose fundamental challenges on data-driven shape segmentation approaches.Formulated as a contextual bandit problem, we propose a learning-based iterative grouping framework which learns a grouping policy to progressively merge small part proposals into bigger ones in a bottom-up fashion.At the core of our approach is to restrict the local context for extracting part-level features, which encourages the generalizability to novel categories.On a recently proposed large-scale fine-grained 3D part dataset, PartNet, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen testing categories without seeing any annotated samples.Quantitative comparisons against four strong shape segmentation baselines show that we achieve the state-of-the-art performance.", "target": ["Un framework di segmentazione zero-shot per la segmentazione di parti di oggetti 3D. Modellare la segmentazione come un processo decisionale e risolvere come un contextual bandit problem", "Un metodo per la segmentazione di nuvole di punti 3D di oggetti in parti componenti, focalizzato sulla generalizzazione dei raggruppamenti di parti a nuove categorie di oggetti non visti durante il training, che mostra alte prestazioni rispetto alle baseline.", "Questo articolo propone un metodo per la segmentazione di parti in nuvole di punti di oggetti."]} +{"source": "This paper presents the ballistic graph neural network.Ballistic graph neural network tackles the weight distribution from a transportation perspective and has many different properties comparing to the traditional graph neural network pipeline.The ballistic graph neural network does not require to calculate any eigenvalue.The filters propagate exponentially faster($\\sigma^2 \\sim T^2$) comparing to traditional graph neural network($\\sigma^2 \\sim T$).We use a perturbed coin operator to perturb and optimize the diffusion rate.Our results show that by selecting the diffusion speed, the network can reach a similar accuracy with fewer parameters.We also show the perturbed filters act as better representations comparing to pure ballistic ones.We provide a new perspective of training graph neural network, by adjusting the diffusion rate, the neural network's performance can be improved.", "target": ["Una nuova prospettiva su come raccogliere la correlazione tra i nodi basata sulle proprietà di diffusione.", "Una nuova operazione di diffusione per le graph neural networks che non richiede il calcolo degli autovalori e può propagarsi esponenzialmente più velocemente rispetto alle graph neural networks tradizionali.", "L'articolo propone di affrontare il problema della velocità di diffusione introducendo la camminata balistica."]} +{"source": "In this paper, we propose a \\textit{weak supervision} framework for neural ranking tasks based on the data programming paradigm \\citep{Ratner2016}, which enables us to leverage multiple weak supervision signals from different sources.Empirically, we consider two sources of weak 
supervision signals, unsupervised ranking functions and semantic feature similarities.We train a BERT-based passage-ranking model (which achieves new state-of-the-art performances on two benchmark datasets with full supervision) in our weak supervision framework.Without using ground-truth training labels, BERT-PR models outperform BM25 baseline by a large margin on all three datasets and even beat the previous state-of-the-art results with full supervision on two of datasets.", "target": ["Proponiamo una pipeline di training con weak supervision basata sul framework di programmazione dei dati per task di classificazione, in cui addestriamo un modello di classificazione basato su BERT e stabiliamo il nuovo state-of-the-art.", "Gli autori propongono una combinazione di BERT e del framework di supervisione debole per affrontare il problema del passage ranking, ottenendo risultati migliori dello stato dell'arte fully supervised."]} +{"source": "We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective.We demonstrate a very universal Frequency Principle (F-Principle) --- DNNs often fit target functions from low to high frequencies --- on high-dimensional benchmark datasets, such as MNIST/CIFAR10, and deep networks, such as VGG16.This F-Principle of DNNs is opposite to the learning behavior of most conventional iterative numerical schemes (e.g., Jacobi method), which exhibits faster convergence for higher frequencies, for various scientific computing problems.With a naive theory, we illustrate that this F-Principle results from the regularity of the commonly used activation functions.The F-Principle implies an implicit bias that DNNs tend to fit training data by a low-frequency function.This understanding provides an explanation of good generalization of DNNs on most real datasets and bad generalization of DNNs on parity function or randomized dataset.", "target": ["Nei problemi reali, abbiamo trovato che le DNN spesso adattano le funzioni target dalle basse alle alte frequenze durante il processo di training.", "Questo articolo analizza la loss delle reti neurali nel dominio di Fourier e trova che le DNN tendono ad imparare le componenti a bassa frequenza prima di quelle ad alta frequenza.", "L'articolo studia il processo di training delle NN attraverso l'analisi di Fourier, concludendo che le NN imparano le componenti a bassa frequenza prima di quelle ad alta frequenza."]} +{"source": "The problem of accelerating drug discovery relies heavily on automatic tools to optimize precursor molecules to afford them with better biochemical properties.Our work in this paper substantially extends prior state-of-the-art on graph-to-graph translation methods for molecular optimization.In particular, we realize coherent multi-resolution representations by interweaving the encoding of substructure components with the atom-level encoding of the original molecular graph.Moreover, our graph decoder is fully autoregressive, and interleaves each step of adding a new substructure with the process of resolving its attachment to the emerging molecule.We evaluate our model on multiple molecular optimization tasks and show that our model significantly outperforms previous state-of-the-art baselines.", "target": ["Proponiamo un encoder-decoder multi-risoluzione e hierarchically coupled per graph-graph translation.", "Un modello gerarchico di graph-graph translation per generare grafi molecolari usando sottostrutture chimiche come elementi costitutivi che è completamente 
autoregressivo e impara rappresentazioni multirisoluzione coerenti, superando i modelli precedenti.", "Gli autori presentano un metodo gerarchico di graph-to-graph translation per generare nuove molecole organiche."]} +{"source": "Equivariance is a nice property to have as it produces much more parameter efficient neural architectures and preserves the structure of the input through the feature mapping.Even though some combinations of transformations might never appear (e.g. an upright face with a horizontal nose), current equivariant architectures consider the set of all possible transformations in a transformation group when learning feature representations.Contrarily, the human visual system is able to attend to the set of relevant transformations occurring in the environment and utilizes this information to assist and improve object recognition.Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of co-occurring transformations in data and generalize this notion to act on groups consisting of multiple symmetries.We show that our proposed co-attentive equivariant neural networks consistently outperform conventional rotation equivariant and rotation & reflection equivariant neural networks on rotated MNIST and CIFAR-10.", "target": ["Utilizziamo attention per limitare le reti neurali equivarianti all'insieme o alle trasformazioni co-occorrenti nei dati.", "Questo articolo combina l'attention con l'equivarianza di gruppo, in particolare per il gruppo p4m di rotazioni, traslazioni e capovolgimenti, e deriva una forma di self-attention che non distrugge la proprietà di equivarianza.", "Gli autori propongono un meccanismo di auto-attention per le reti neurali roto-equivarianti che migliora le prestazioni di classificazione rispetto alle reti roto-equivarianti regolari."]} +{"source": "The fast generation and refinement of protein backbones would constitute a major advancement to current methodology for the design and development of de novo proteins.In this study, we train Generative Adversarial Networks (GANs) to generate fixed-length full-atom protein backbones, with the goal of sampling from the distribution of realistic 3-D backbone fragments.We represent protein structures by pairwise distances between all backbone atoms, and present a method for directly recovering and refining the corresponding backbone coordinates in a differentiable manner.We show that interpolations in the latent space of the generator correspond to smooth deformations of the output backbones, and that test set structures not seen by the generator during training exist in its image.Finally, we perform sequence design, relaxation, and ab initio folding of a subset of generated structures, and show that in some cases we can recover the generated folds after forward-folding.Together, these results suggest a mechanism for fast protein structure refinement and folding using external energy functions.", "target": ["Addestriamo una GAN per generare e recuperare backbone di proteine full-atom, e mostriamo che in casi selezionati possiamo recuperare le proteine generate dopo la progettazione della sequenza e il forward-folding ab initio.", "Un modello generativo per proteine backbone che utilizza una GAN, una rete di tipo autoencoder e un processo di raffinamento, e una serie di valutazioni qualitative che suggeriscono risultati positivi.", "Questo articolo presenta un approccio end-to-end per la generazione di proteine backbone utilizzando generative 
adversarial networks."]} +{"source": "Few-Shot Learning (learning with limited labeled data) aims to overcome the limitations of traditional machine learning approaches which require thousands of labeled examples to train an effective model.Considered as a hallmark of human intelligence, the community has recently witnessed several contributions on this topic, in particular through meta-learning, where a model learns how to learn an effective model for few-shot learning.The main idea is to acquire prior knowledge from a set of training tasks, which is then used to perform (few-shot) test tasks.Most existing work assumes that both training and test tasks are drawn from the same distribution, and a large amount of labeled data is available in the training tasks.This is a very strong assumption which restricts the usage of meta-learning strategies in the real world where ample training tasks following the same distribution as test tasks may not be available.In this paper, we propose a novel meta-learning paradigm wherein a few-shot learning model is learnt, which simultaneously overcomes domain shift between the train and test tasks via adversarial domain adaptation.We demonstrate the efficacy the proposed method through extensive experiments.", "target": ["Il Meta Learning per few-shot learning presuppone che i task di training e i task di test siano tratti dalla stessa distribuzione. Cosa fare se non lo sono? Meta Learning con domain adaptation a livello di task!", "Questo articolo propone un modello che combina unsupervised adversarial domain adaptation con le prototypical networks che esegue meglio delle baseline few-shot su task di apprendimento few-shot con domain-shift", "Gli autori hanno proposto il meta domain adaptation per affrontare lo scenario del domain shift nella configurazione del meta learning, dimostrando miglioramenti delle prestazioni in diversi esperimenti."]} +{"source": "Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich and complex probabilistic models.However, this expressiveness comes at the cost of substantially complicating the process of drawing inferences from the model.In particular, inference can become challenging when the support of the model varies between executions.Though general-purpose inference engines have been designed to operate in such settings, they are typically inefficient, often relying on proposing from the prior to make transitions.To address this, we introduce a new inference framework: Divide, Conquer, and Combine (DCC).DCC divides the program into separate straight-line sub-programs, each of which has a fixed support allowing more powerful inference algorithms to be run locally, before recombining their outputs in a principled fashion.We show how DCC can be implemented as an automated and general-purpose PPS inference engine, and empirically confirm that it can provide substantial performance improvements over previous approaches.", "target": ["Divide, Conquer, and Combine è un nuovo schema di inferenza che può essere eseguito sui programmi probabilistici con supporto stocastico, cioè l'esistenza stessa delle variabili è stocastica."]} +{"source": "Detecting communities or the modular structure of real-life networks (e.g. 
a social network or a product purchase network) is an important task because the way a network functions is often determined by its communities.The traditional approaches to community detection involve modularity-based approaches, which generally speaking, construct partitions based on heuristics that seek to maximize the ratio of the edges within the partitions to those between them.Node embedding approaches, which represent each node in a graph as a real-valued vector, transform the problem of community detection in a graph to that of clustering a set of vectors.Existing node embedding approaches are primarily based on first initiating uniform random walks from each node to construct a context of a node and then seeks to make the vector representation of the node close to its context.However, standard node embedding approaches do not directly take into account the community structure of a network while constructing the context around each node.To alleviate this, we explore two different threads of work.First, we investigate the use of biased random walks (specifically, maximum entropy based walks) to obtain more centrality preserving embedding of nodes, which we hypothesize may lead to more effective clusters in the embedded space.Second, we propose a community structure aware node embedding approach where we incorporate modularity-based partitioning heuristics into the objective function of node embedding.We demonstrate that our proposed approach for community detection outperforms a number of modularity-based baselines as well as K-means on a standard node-embedded vector space (specifically, node2vec) on a wide range of real-life networks of different sizes and densities.", "target": ["Un algoritmo di embedding dei nodi che preserva la comunità e che risulta in un rilevamento più efficace delle comunità con un clustering sullo spazio degli embedding"]} +{"source": "A point cloud is an agile 3D representation, efficiently modeling an object's surface geometry.However, these surface-centric properties also pose challenges on designing tools to recognize and synthesize point clouds.This work presents a novel autoregressive model, PointGrow, which generates realistic point cloud samples from scratch or conditioned from given semantic contexts.Our model operates recurrently, with each point sampled according to a conditional distribution given its previously-generated points.Since point cloud object shapes are typically encoded by long-range interpoint dependencies, we augment our model with dedicated self-attention modules to capture these relations.Extensive evaluation demonstrates that PointGrow achieves satisfying performance on both unconditional and conditional point cloud generation tasks, with respect to fidelity, diversity and semantic preservation.Further, conditional PointGrow learns a smooth manifold of given images where 3D shape interpolation and arithmetic calculation can be performed inside.", "target": ["Un modello di apprendimento deep autoregressivo per la generazione di nuvole di punti diversi.", "Un approccio per la generazione di forme 3D come nuvole di punti che considera l'ordinamento lessicografico dei punti secondo le coordinate e addestra un modello per prevedere i punti in ordine.", "L'articolo introduce un modello generativo per le nuvole di punti utilizzando un modello auto-regressivo sui pixel simile a RNN e un modello di attention per gestire le interazioni a lungo raggio."]} +{"source": "Reinforcement learning and evolutionary algorithms can be used to create sophisticated
control solutions.Unfortunately explaining how these solutions work can be difficult to due to their \"black box\" nature.In addition, the time-extended nature of control algorithms often prevent direct applications of explainability techniques used for standard supervised learning algorithms.This paper attempts to address explainability of blackbox control algorithms through six different techniques:1) Bayesian rule lists,2) Function analysis,3) Single time step integrated gradients,4) Grammar-based decision trees,5) Sensitivity analysis combined with temporal modeling with LSTMs, and6) Explanation templates.These techniques are tested on a simple 2d domain, where a simulated rover attempts to navigate through obstacles to reach a goal.For control, this rover uses an evolved multi-layer perception that maps an 8d field of obstacle and goal sensors to an action determining where it should go in the next time step.Results show that some simple insights in explaining the neural network are possible, but that good explanations are difficult.", "target": ["Descrive una serie di tecniche di explainability applicate a un semplice controller con rete neurale utilizzato per la navigazione.", "Questo articolo fornisce intuizioni e spiegazioni sul problema di fornire spiegazioni per un perceptron multilayer usato come controller inverso per il movimento del rover, e idee su come spiegare un modello black-box."]} +{"source": "The Vision-and-Language Navigation (VLN) task entails an agent following navigational instruction in photo-realistic unknown environments.This challenging task demands that the agent be aware of which instruction was completed, which instruction is needed next, which way to go, and its navigation progress towards the goal.In this paper, we introduce a self-monitoring agent with two complementary components: (1) visual-textual co-grounding module to locate the instruction completed in the past, the instruction required for the next action, and the next moving direction from surrounding images and (2) progress monitor to ensure the grounded instruction correctly reflects the navigation progress.We test our self-monitoring agent on a standard benchmark and analyze our proposed approach through a series of ablation studies that elucidate the contributions of the primary components.Using our proposed method, we set the new state of the art by a significant margin (8% absolute increase in success rate on the unseen test set).Code is available at https://github.com/chihyaoma/selfmonitoring-agent.", "target": ["Proponiamo un agente di auto-monitoraggio per il task di navigazione con visione e lingua.", "Un metodo per la vision+language navigation che tiene traccia dei progressi sull'istruzione usando un monitor di progresso e un modulo di co-grounding visivo-testuale, e ha buone prestazioni sui benchmark standard.", "Questo articolo descrive un modello per la vision-and-language navigation con un'attention visiva panoramica e una loss ausiliaria di monitoraggio del progresso, dando risultati allo stato dell'arte."]} +{"source": "Environments in Reinforcement Learning (RL) are usually only partially observable.To address this problem, a possible solution is to provide the agent with information about past observations.While common methods represent this history using a Recurrent Neural Network (RNN), in this paper we propose an alternative representation which is based on the record of the past events observed in a given episode.Inspired by the human memory, these events describe only 
important changes in the environment and, in our approach, are automatically discovered using self-supervision. We evaluate our history representation method using two challenging RL benchmarks: some games of the Atari-57 suite and the 3D environment Obstacle Tower.Using these benchmarks we show the advantage of our solution with respect to common RNN-based approaches.", "target": ["scoperta di eventi per rappresentare la storia dell'agente in RL", "Gli autori studiano il problema di RL sotto impostazioni parzialmente osservate, e propongono una soluzione che utilizza una FFNN ma fornisce una rappresentazione della storia, superando PPO.", "Questo articolo propone un nuovo modo di rappresentare la storia passata come input per un agente RL, mostrando di avere prestazioni migliori di PPO e di una variante RNN di PPO."]} +{"source": "The unconditional generation of high fidelity images is a longstanding benchmark for testing the performance of image decoders.Autoregressive image models have been able to generate small images unconditionally, but the extension of these methods to large images where fidelity can be more readily assessed has remained an open problem.Among the major challenges are the capacity to encode the vast previous context and the sheer difficulty of learning a distribution that preserves both global semantic coherence and exactness of detail.To address the former challenge, we propose the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of image slices of equal size.The SPN compactly captures image-wide spatial dependencies and requires a fraction of the memory and the computation.To address the latter challenge, we propose to use multidimensional upscaling to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs.We evaluate SPNs on the unconditional generation of CelebAHQ of size 256 and of ImageNet from size 32 to 128.We achieve state-of-the-art likelihood results in multiple settings, set up new benchmark results in previously unexplored settings and are able to generate very high fidelity large scale samples on the basis of both datasets.", "target": ["Mostriamo che i modelli autoregressivi possono generare immagini ad alta fedeltà.", "Un'architettura che utilizza componenti decoder, size-upscaling decoder e depth-upscaling decoder per affrontare il problema dell'apprendimento delle dipendenze a lungo raggio nelle immagini al fine di ottenere immagini ad alta fedeltà.", "Questo articolo affronta il problema della generazione di immagini ad alta fedeltà, mostrando con successo campioni convincenti di Imagenet con risoluzione 128x128 per un likelihood density model."]} +{"source": "Real-world dynamical systems often consist of multiple stochastic subsystems that interact with each other.Modeling and forecasting the behavior of such dynamics are generally not easy, due to the inherent hardness in understanding the complicated interactions and evolutions of their constituents.This paper introduces the relational state-space model (R-SSM), a sequential hierarchical latent variable model that makes use of graph neural networks (GNNs) to simulate the joint state transitions of multiple correlated objects.By letting GNNs cooperate with SSM, R-SSM provides a flexible way to incorporate relational information into the modeling of multi-object dynamics.We further suggest augmenting the model with normalizing flows instantiated for vertex-indexed random variables and propose two auxiliary contrastive
objectives to facilitate the learning.The utility of R-SSM is empirically evaluated on synthetic and real time series datasets.", "target": ["Un modello di spazio di stato gerarchico deep in cui le transizioni di stato degli oggetti correlati sono coordinate da graph neural networks.", "Un modello gerarchico a variabili latenti di processi dinamici sequenziali di oggetti multipli quando ogni oggetto presenta una stocasticità significativa.", "L'articolo presenta un modello di spazio di stato relazionale che simula le transizioni di stato congiunte di oggetti correlati che sono gerarchicamente coordinati in una struttura a grafo."]} +{"source": "Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses).When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed.While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents.This paper proposes to add such inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated.Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.", "target": ["Introduciamo un nuovo bias induttivo che integra le strutture ad albero nelle reti neurali ricorrenti.", "Questo articolo propone ON-LSTM, una nuova unità RNN che integra la struttura ad albero latente nei modelli ricorrenti e che ha buoni risultati nella modellazione del linguaggio, nel parsing non supervisionato, nella valutazione sintattica mirata e nell'inferenza logica."]} +{"source": "Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures.A completely satisfactory explanation for their success remains elusive.Here, we present a novel explanation for the benefits of skip connections in training very deep networks.The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model.Several such singularities have been identified in previous works:(i) overlap singularities caused by the permutation symmetry of nodes in a given layer,(ii) elimination singularities corresponding to the elimination, i.e. 
consistent deactivation, of nodes,(iii) singularities generated by the linear dependence of the nodes.These singularities cause degenerate manifolds in the loss landscape that slow down learning.We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent.Moreover, for typical initializations, skip connections move the network away from the \"ghosts\" of these singularities and sculpt the landscape around them to alleviate the learning slow-down.These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.", "target": ["Varietà degeneri derivanti dalla non identificabilità del modello rallentano l'apprendimento nelle reti deep; le skip connections aiutano rompendo le degenerazioni.", "Gli autori mostrano che le singolarità di eliminazione e le singolarità di sovrapposizione impediscono l'apprendimento nelle reti neurali deep, e dimostrano che le skip connections possono ridurre la prevalenza di queste singolarità, accelerando l'apprendimento.", "Il documento esamina l'uso delle skip connections nelle reti deep come un modo per alleviare le singolarità nella matrice Hessiana durante l'allenamento."]} +{"source": "Representation learning is a central challenge across a range of machine learning areas.In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems.Most prior work on representation learning has focused on generative approaches, learning representations that capture all the underlying factors of variation in the observation space in a more disentangled or well-ordered manner.In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making -- that are \"actionable\".These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, eliminating the need for explicit reconstruction.We show how these learned representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks.We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning.", "target": ["Apprendimento di rappresentazioni di stato che catturano i fattori necessari per il controllo", "Un approccio al representation learning nel contesto del reinforcement learning che distingue due fasi in termini di azioni necessarie per raggiungerle.", "L'articolo presenta un metodo per imparare rappresentazioni in cui la vicinanza in distanza euclidea rappresenta stati che sono raggiunti da policy simili."]} +{"source": "We explore the behavior of a standard convolutional neural net in a setting that introduces classification tasks sequentially and requires the net to master new tasks while preserving mastery of previously learned tasks. 
This setting corresponds to that which human learners face as they acquire domain expertise, for example, as an individual reads a textbook chapter-by-chapter.Through simulations involving sequences of 10 related tasks, we find reason for optimism that nets will scale well as they advance from having a single skill to becoming domain experts.We observed two key phenomena.First, forward facilitation---the accelerated learning of task n+1 having learned n previous tasks---grows with n. Second, backward interference---the forgetting of the n previous tasks when learning task n+1---diminishes with n. Forward facilitation is the goal of research on metalearning, and reduced backward interference is the goal of research on ameliorating catastrophic forgetting.We find that both of these goals are attained simply through broader exposure to a domain.", "target": ["Studiamo il comportamento di una CNN mentre impara nuovi task conservando la padronanza per i task precedentemente appresi"]} +{"source": "We demonstrate a low effort method that unsupervisedly constructs task-optimized embeddings from existing word embeddings to gain performance on a supervised end-task.This avoids additional labeling or building more complex model architectures by instead providing specialized embeddings better fit for the end-task(s).Furthermore, the method can be used to roughly estimate whether a specific kind of end-task(s) can be learned form, or is represented in, a given unlabeled dataset, e.g. using publicly available probing tasks.We evaluate our method for diverse word embedding probing tasks and by size of embedding training corpus -- i.e. to explore its use in reduced (pretraining-resource) settings.", "target": ["Morty modifica pretrained word embeddings per: (a) migliorare le prestazioni complessive degli embedding (per setting multi-task) o migliorare le prestazioni single-task, richiedendo solo uno sforzo minimo."]} +{"source": "Data augmentation is commonly used to encode invariances in learning methods.However, this process is often performed in an inefficient manner, as artificial examples are created by applying a number of transformations to all points in the training set.The resulting explosion of the dataset size can be an issue in terms of storage and training costs, as well as in selecting and tuning the optimal set of transformations to apply.In this work, we demonstrate that it is possible to significantly reduce the number of data points included in data augmentation while realizing the same accuracy and invariance benefits of augmenting the entire dataset.We propose a novel set of subsampling policies, based on model influence and loss, that can achieve a 90% reduction in augmentation set size while maintaining the accuracy gains of standard data augmentation.", "target": ["Aumentando selettivamente i punti difficili da classificare si ottiene un training efficiente.", "Gli autori studiano il problema dell'identificazione delle strategie di sottocampionamento per l'aumento dei dati e propongono strategie basate sull'influenza e la loss del modello, così come il benchmarking empirico dei metodi proposti.", "Gli autori propongono di usare metodi basati sull'influenza o sulla loss per selezionare un sottoinsieme di punti da usare per aumentare i set di dati per il training di modelli in cui la loss è additiva sui data point."]} +{"source": "Over the last few years exciting work in deep generative models has produced models able to suggest new organic molecules by generating strings, trees, and 
graphs representing their structure.While such models are able to generate molecules with desirable properties, their utility in practice is limited due to the difficulty in knowing how to synthesize these molecules.We therefore propose a new molecule generation model, mirroring a more realistic real-world process, where reactants are selected and combined to form more complex molecules.More specifically, our generative model proposes a bag of initial reactants (selected from a pool of commercially-available molecules) and uses a reaction model to predict how they react together to generate new molecules.Modeling the entire process of constructing a molecule during generation offers a number of advantages.First, we show that such a model has the ability to generate a wide, diverse set of valid and unique molecules due to the useful inductive biases of modeling reactions.Second, modeling synthesis routes rather than final molecules offers practical advantages to chemists who are not only interested in new molecules but also suggestions on stable and safe synthetic routes.Third, we demonstrate the capabilities of our model to also solve one-step retrosynthesis problems, predicting a set of reactants that can produce a target product.", "target": ["Un modello generativo deep per le molecole organiche che genera prima i blocchi di costruzione dei reagenti prima di combinarli usando un predittore di reazione.", "Un modello generativo molecolare che genera molecole attraverso un processo a due fasi che fornisce percorsi di sintesi delle molecole generate, permettendo agli utenti di esaminare l'accessibilità sintetica dei composti generati."]} +{"source": "Deep neural networks are complex non-linear models used as predictive analytics tool and have demonstrated state-of-the-art performance on many classification tasks. However, they have no inherent capability to recognize when their predictions might go wrong.There have been several efforts in the recent past to detect natural errors i.e. misclassified inputs but these mechanisms pose additional energy requirements. To address this issue, we present a novel post-hoc framework to detect natural errors in an energy efficient way. We achieve this by appending relevant features based linear classifiers per class referred as Relevant features based Auxiliary Cells (RACs). 
The proposed technique makes use of the consensus between RACs appended at few selected hidden layers to distinguish the correctly classified inputs from misclassified inputs.The combined confidence of RACs is utilized to determine if classification should terminate at an early stage.We demonstrate the effectiveness of our technique on various image classification datasets such as CIFAR10, CIFAR100 and Tiny-ImageNet.Our results show that for CIFAR100 dataset trained on VGG16 network, RACs can detect 46% of the misclassified examples along with 12% reduction in energy compared to the baseline network while 69% of the examples are correctly classified.", "target": ["Migliorare la robustezza e l'efficienza energetica di una deep neural network usando le rappresentazioni nascoste.", "Questo articolo mira a ridurre gli errori di classificazione delle reti neurali deep in un modo efficiente dal punto di vista energetico aggiungendo celle ausiliarie basate su caratteristiche rilevanti dopo uno o più hidden layer per decidere se terminare la classificazione in anticipo."]} +{"source": "Many methods have been developed to represent knowledge graph data, which implicitly exploit low-rank latent structure in the data to encode known information and enable unknown facts to be inferred.To predict whether a relationship holds between entities, their embeddings are typically compared in the latent space following a relation-specific mapping.Whilst link prediction has steadily improved, the latent structure, and hence why such models capture semantic information, remains unexplained.We build on recent theoretical interpretation of word embeddings as a basis to consider an explicit structure for representations of relations between entities.For identifiable relation types, we are able to predict properties and justify the relative performance of leading knowledge graph representation methods, including their often overlooked ability to make independent predictions.", "target": ["Comprendere la struttura della rappresentazione dei knowledge graph usando l'idea dei word embedding.", "Questo articolo cerca di capire la struttura latente che sta alla base dei metodi di incorporazione dei knwoledge graph, e dimostra che la capacità di un modello di rappresentare un tipo di relazione dipende dai limiti dell'architettura del modello rispetto alle condizioni di relazione.", "Questo articolo propone uno studio dettagliato sulla explainability dei modelli di predizione dei link (LP) utilizzando una recente interpretazione dei word embeddings per fornire una migliore comprensione delle prestazioni dei modelli LP."]} +{"source": "Many real-world applications involve multivariate, geo-tagged time series data: at each location, multiple sensors record corresponding measurements.For example, air quality monitoring system records PM2.5, CO, etc.The resulting time-series data often has missing values due to device outages or communication errors.In order to impute the missing values, state-of-the-art methods are built on Recurrent Neural Networks (RNN), which process each time stamp sequentially, prohibiting the direct modeling of the relationship between distant time stamps.Recently, the self-attention mechanism has been proposed for sequence modeling tasks such as machine translation, significantly outperforming RNN because the relationship between each two time stamps can be modeled explicitly.In this paper, we are the first to adapt the self-attention mechanism for multivariate, geo-tagged time series data.In order to 
jointly capture the self-attention across different dimensions (i.e. time, location and sensor measurements) while keep the size of attention maps reasonable, we propose a novel approach called Cross-Dimensional Self-Attention (CDSA) to process each dimension sequentially, yet in an order-independent manner.On three real-world datasets, including one our newly collected NYC-traffic dataset, extensive experiments demonstrate the superiority of our approach compared to state-of-the-art methods for both imputation and forecasting tasks.", "target": ["Un nuovo meccanismo di self-attention per l'imputazione di serie temporali multivariate e geo-taggate.", "Questo articolo propone il problema di applicare i transformer ai dati spazio-temporali in un modo computazionalmente efficiente, e studia i modi di implementare l'attention 3D.", "Questo articolo studia empiricamente l'efficacia dei transformer per l'imputazione dei dati delle serie temporali attraverso le dimensioni dell'input."]} +{"source": "The conversion of scanned documents to digital forms is performed using an Optical Character Recognition (OCR) software.This work focuses on improving the quality of scanned documents in order to improve the OCR output.We create an end-to-end document enhancement pipeline which takes in a set of noisy documents and produces clean ones.Deep neural network based denoising auto-encoders are trained to improve the OCR quality.We train a blind model that works on different noise levels of scanned text documents.Results are shown for blurring and watermark noise removal from noisy scanned documents.", "target": ["Abbiamo progettato e testato un REDNET (ResNet Encoder-Decoder) con 8 skip connections per rimuovere il rumore dai documenti, compresa la sfocatura e le filigrane, ottenendo una rete deep ad alte prestazioni per la pulizia delle immagini dei documenti."]} +{"source": "The existence of adversarial examples, or intentional mis-predictions constructed from small changes to correctly predicted examples, is one of the most significant challenges in neural network research today.Ironically, many new defenses are based on a simple observation - the adversarial inputs themselves are not robust and small perturbations to the attacking input often recover the desired prediction.While the intuition is somewhat clear, a detailed understanding of this phenomenon is missing from the research literature.This paper presents a comprehensive experimental analysis of when and why perturbation defenses work and potential mechanisms that could explain their effectiveness (or ineffectiveness) in different settings.", "target": ["Identifichiamo una famiglia di tecniche di difesa e mostriamo che sia la compressione lossy deterministica che le perturbazioni randomizzate all'input portano a guadagni simili nella robustezza.", "Questo articolo discute i modi di destabilizzare un dato attacco avversario, cosa rende le immagini avversarie non robuste, e se è possibile per gli attaccanti usare un modello universale di perturbazioni per rendere i loro esempi avversari robusti contro tali perturbazioni.", "L'articolo studia la robustezza degli attacchi avversari alle trasformazioni del loro input."]} +{"source": "There is no consensus yet on the question whether adaptive gradient methods like Adam are easier to use than non-adaptive optimization methods like SGD.In this work, we fill in the important, yet ambiguous concept of ‘ease-of-use’ by defining an optimizer’s tunability: How easy is it to find good hyperparameter 
configurations using automatic random hyperparameter search?We propose a practical and universal quantitative measure for optimizer tunability that can form the basis for a fair optimizer benchmark. Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, we find that Adam is the most tunable for the majority of problems, especially with a low budget for hyperparameter tuning.", "target": ["Forniamo un metodo per il benchmark degli ottimizzatori che è consapevole del processo di tuning degli iperparametri.", "Introduzione di una nuova metrica per catturare l'adattabilità di un ottimizzatore, e un confronto empirico completo degli ottimizzatori di deep learning sotto diverse quantità di tuning degli iperparametri.", "Questo articolo introduce una semplice misura di adattabilità che permette di confrontare gli ottimizzatori sotto vincoli di risorse, scoprendo che il tuning del learning rate dell'ottimizzatore Adam è il più facile per trovare configurazioni di iperparametri ben performanti."]} +{"source": "The phase problem in diffraction physics is one of the oldest inverse problems in all of science.The central difficulty that any approach to solving this inverse problem must overcome is that half of the information, namely the phase of the diffracted beam, is always missing.In the context of electron microscopy, the phase problem is generally non-linear and solutions provided by phase-retrieval techniques are known to be poor approximations to the physics of electrons interacting with matter.Here, we show that a semi-supervised learning approach can effectively solve the phase problem in electron microscopy/scattering.In particular, we introduce a new Deep Neural Network (DNN), Y-net, which simultaneously learns a reconstruction algorithm via supervised training in addition to learning a physics-based regularization via unsupervised training.We demonstrate that this constrained, semi-supervised approach is an order of magnitude more data-efficient and accurate than the same model trained in a purely supervised fashion.In addition, the architecture of the Y-net model provides for a straightforward evaluation of the consistency of the model's prediction during inference and is generally applicable to the phase problem in other settings.", "target": ["Introduciamo una deep neural network semi-supervised per approssimare la soluzione del problema di fase nella microscopia elettronica"]} +{"source": "Word embeddings extract semantic features of words from large datasets of text.Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words.Here we propose word2net, a method that replaces their linear parametrization with neural networks.For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence.Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model.For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information.We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S. 
Senate speeches.Quantitatively, we found that word2net outperforms popular embedding methods on predicting held-out words and that sharing parameters based on part of speech further boosts performance.Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information.", "target": ["Word2net è un nuovo metodo per l'apprendimento di rappresentazioni di parole tramite reti neurali che possono usare le informazioni sintattiche per imparare migliori caratteristiche semantiche.", "Questo articolo estende SGNS con un cambiamento dell'architettura da un modello bag-of-words a un modello feedforward, e contribuisce a una nuova forma di regolarizzazione vincolando un sottoinsieme di layer tra diverse reti associate.", "Un metodo per utilizzare la combinazione non lineare di vettori di contesto per l'apprendimento della rappresentazione vettoriale delle parole, dove l'idea principale è quella di sostituire ogni word embedding con una rete neurale."]} +{"source": "A key goal in neuroscience is to understand brain mechanisms of cognitive functions.An emerging approach is to study “brain states” dynamics using functional magnetic resonance imaging (fMRI).So far in the literature, brain states have typically been studied using 30 seconds of fMRI data or more, and it is unclear to which extent brain states can be reliably identified from very short time series.In this project, we applied graph convolutional networks (GCN) to decode brain activity over short time windows in a task fMRI dataset, i.e. associate a given window of fMRI time series with the task used.Starting with a populational brain graph with nodes defined by a parcellation of cerebral cortex and the adjacent matrix extracted from functional connectome, GCN takes a short series of fMRI volumes as input, generates high-level domain-specific graph representations, and then predicts the corresponding cognitive state.We investigated the performance of this GCN \"cognitive state annotation\" in the Human Connectome Project (HCP) database, which features 21 different experimental conditions spanning seven major cognitive domains, and high temporal resolution task fMRI data.Using a 10-second window, the 21 cognitive states were identified with an excellent average test accuracy of 89% (chance level 4.8%).As the HCP task battery was designed to selectively activate a wide range of specialized functional networks, we anticipate the GCN annotation to be applicable as a base model for other transfer learning applications, for instance, adapting to new task domains.", "target": ["Utilizzando una finestra di 10s di segnali fMRI, il nostro modello GCN ha identificato 21 diverse condizioni di task dal dataset HCP con una precisione sul test set del 89%."]} +{"source": "Modern deep neural networks (DNNs) require high memory consumption and large computational loads. 
In order to deploy DNN algorithms efficiently on edge or mobile devices, a series of DNN compression algorithms have been explored, including the line of works on factorization methods.Factorization methods approximate the weight matrix of a DNN layer with multiplication of two or multiple low-rank matrices.However, it is hard to measure the ranks of DNN layers during the training process.Previous works mainly induce low-rank through implicit approximations or via costly singular value decomposition (SVD) process on every training step.The former approach usually induces a high accuracy loss while the latter prevents DNN factorization from efficiently reaching a high compression rate.In this work, we propose SVD training, which first applies SVD to decompose DNN's layers and then performs training on the full-rank decomposed weights.To improve the training quality and convergence, we add orthogonality regularization to the singular vectors, which ensure the valid form of SVD and avoid gradient vanishing/exploding.Low-rank is encouraged by applying sparsity-inducing regularizers on the singular values of each layer.Singular value pruning is applied at the end to reach a low-rank model.We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve higher reduction on computation load under the same accuracy, comparing to not only previous factorization methods but also state-of-the-art filter pruning methods.", "target": ["Induzione efficiente di reti neurali deep a basso rango tramite training via SVD con valori singolari sparsi e vettori singolari ortogonali.", "Questo articolo introduce un approccio alla compressione della rete incoraggiando la matrice dei pesi in ogni layer ad avere un rango basso e fattorizzando esplicitamente le matrici dei pesi in una fattorizzazione simile a SVD per il trattamento come nuovi parametri.", "Proposta di parametrizzare ogni layer di una deep neural network, prima del training, con una decomposizione di matrice a basso rango, di conseguenza sostituire le convoluzioni con due convoluzioni consecutive, e poi addestrare il metodo decomposto."]} +{"source": "The recent rise in popularity of few-shot learning algorithms has enabled models to quickly adapt to new tasks based on only a few training samples.Previous few-shot learning works have mainly focused on classification and reinforcement learning. 
In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks.Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of appropriate basis functions.This enables a few labelled samples to approximate the function.We design a Feature Extractor network to encode basis functions for a task distribution, and a Weights Generator to generate the weight vector for a novel task.We show that our model outperforms the current state of the art meta-learning methods in various regression tasks.", "target": ["Proponiamo un modello di few-shot learning che è fatto su misura per task di regressione", "Questo articolo propone un nuovo metodo di few-shot learning per problemi di regressione su piccoli campioni.", "Un metodo che impara un modello di regressione con pochi campioni e supera gli altri metodi."]} +{"source": "Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as distribution over a predetermined set of visual classes.However, such assumption implies unavoidable and often unnoticeable failures in presence of out-of-distribution (OOD) input.These failures are bound to happen in most real-life applications since current visual ontologies are far from being comprehensive.We propose to address this issue by discriminative detection of OOD pixels in input data.Different from recent approaches, we avoid to bring any decisions by only observing the training dataset of the primary model trained to solve the desired computer vision task.Instead, we train a dedicated OOD modelwhich discriminates the primary training set from a much larger \"background\" dataset which approximates the variety of the visual world.We perform our experiments on high resolution natural images in a dense prediction setup.We use several road driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset.We evaluate our approach on WildDash test, which is currently the only public test dataset with out-of-distribution images.The obtained results show that the proposed approach succeeds to identify out-of-distribution pixels while outperforming previous work by a wide margin.", "target": ["Presentiamo un nuovo approccio per rilevare i pixel out-of-distribution nella segmentazione semantica.", "Questo articolo affronta il rilevamento di out-of-distribution per aiutare il processo di segmentazione, e propone un approccio di training di un classificatore binario che distingue le patch dell'immagine da un insieme di classi conosciute da quelle di una sconosciuta.", "Questo documento mira a rilevare i pixel out-of-distribution per la segmentazione semantica, e questo lavoro utilizza i dati di altri domini per rilevare le classi indeterminate per modellare meglio l'incertezza."]} +{"source": "Network quantization is one of the most hardware friendly techniques to enable the deployment of convolutional neural networks (CNNs) on low-power mobile devices.Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy.The quantization bitwidth or bit number (QBN) directly decides the inference accuracy, latency, energy and hardware overhead.To effectively reduce the redundancy and accelerate CNN 
inferences, various weight kernels should be quantized with different QBNs.However, prior works use only one QBN to quantize each convolutional layer or the entire CNN, because the design space of searching a QBN for each weight kernel is too large.The hand-crafted heuristic of the kernel-wise QBN search is so sophisticated that domain experts can obtain only sub-optimal results.It is difficult for even deep reinforcement learning (DRL) DDPG-based agents to find a kernel-wise QBN configuration that can achieve reasonable inference accuracy.In this paper, we propose a hierarchical-DRL-based kernel-wise network quantization technique, AutoQ, to automatically search a QBN for each weight kernel, and choose another QBN for each activation layer.Compared to the models quantized by the state-of-the-art DRL-based schemes, on average, the same models quantized by AutoQ reduce the inference latency by 54.06%, and decrease the inference energy consumption by 50.69%, while achieving the same inference accuracy.", "target": ["Quantizzazione accurata, veloce e automatizzata della rete neurale Kernel-Wise con precisione mista usando Hierarchical Deep Reinforcement Learning", "Un metodo per quantizzare i pesi e le attivazioni delle reti neurali che utilizza reinforcement learning per selezionare la larghezza di bit per i singoli kernel in un layer e che raggiunge prestazioni migliori, o latenza, rispetto agli approcci precedenti.", "Questo articolo propone di cercare automaticamente schemi di quantizzazione per ogni kernel nella rete neurale, usando RL gerarchico per guidare la ricerca."]} +{"source": "Recent visual analytics systems make use of multiple machine learning models to better fit the data as opposed to traditional single, pre-defined model systems.However, while multi-model visual analytic systems can be effective, their added complexity poses usability concerns, as users are required to interact with the parameters of multiple models.Further, the advent of various model algorithms and associated hyperparameters creates an exhaustive model space to sample models from.This poses complexity to navigate this model space to find the right model for the data and the task.In this paper, we present Gaggle, a multi-model visual analytic system that enables users to interactively navigate the model space.Further translating user interactions into inferences, Gaggle simplifies working with multiple models by automatically finding the best model from the high-dimensional model space to support various user tasks.Through a qualitative user study, we show how our approach helps users to find a best model for a classification and ranking task.The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation and automated model selection without requiring any technical expertise from users.", "target": ["Gaggle, un sistema analitico visivo interattivo per aiutare gli utenti a navigare interattivamente nello spazio dei modelli per task di classificazione e ranking.", "Un nuovo sistema analitico visivo che mira a permettere agli utenti non esperti di navigare in modo interattivo in uno spazio modello utilizzando un approccio basato sulla dimostrazione.", "Un sistema di visual analytics che aiuta gli analisti alle prime armi a navigare nello spazio dei modelli nell'eseguire task di classificazione e ranking."]} +{"source": "Chinese text classification has received more and more attention today.However, the problem of Chinese text representation still hinders the 
improvement of Chinese text classification, especially the polyphone and the homophone in social media.To cope with it effectively, we propose a new structure, the Extractor, based on attention mechanisms and design novel attention networks named Extractor-attention network (EAN).Unlike most of previous works, EAN uses a combination of a word encoder and a Pinyin character encoder instead of a single encoder.It improves the capability of Chinese text representation.Moreover, compared with the hybrid encoder methods, EAN has more complex combination architecture and more reducing parameters structures.Thus, EAN can take advantage of a large amount of information that comes from multi-inputs and alleviates efficiency issues.The proposed model achieves the state of the art results on 5 large datasets for Chinese text classification.", "target": ["Proponiamo una nuova rete di attention con un encoder hybrid per risolvere il problema della rappresentazione del testo per text classification in lingua cinese, specialmente i fenomeni linguistici sulle pronunce come il polifono e l'omofono.", "Questo articolo propone un modello basato sull'attention composto da un encoder di parole e dall'encoder Pinyin per text classification in lingua cinese, ed estende l'architettura per l'encoder di caratteri Pinyin.", "Proposta di una rete di attention in cui sia la parola che il pinyin sono considerati per la rappresentazione del cinese, con risultati migliori mostrati in diversi dataset per text classification."]} +{"source": "Recent advances in learning from demonstrations (LfD) with deep neural networks have enabled learning complex robot skills that involve high dimensional perception such as raw image inputs. LfD algorithms generally assume learning from single task demonstrations.In practice, however, it is more efficient for a teacher to demonstrate a multitude of tasks without careful task set up, labeling, and engineering.Unfortunately in such cases, traditional imitation learning techniques fail to represent the multi-modal nature of the data, and often result in sub-optimal behavior.In this paper we present an LfD approach for learning multiple modes of behavior from visual data.Our approach is based on a stochastic deep neural network (SNN), which represents the underlying intention in the demonstration as a stochastic activation in the network.We present an efficient algorithm for training SNNs, and for learning with vision inputs, we also propose an architecture that associates the intention with a stochastic attention module.We demonstrate our method on real robot visual object reaching tasks, and show thatit can reliably learn the multiple behavior modes in the demonstration data.Video results are available at https://vimeo.com/240212286/fd401241b9.", "target": ["imitation learning multimodale da dimostrazioni non strutturate utilizzando l'intenzione di modellazione di rete neurale stocastica.", "Un nuovo approccio basato sul campionamento per l'inferenza nei modelli a variabile latente che si applica all'imitation learning multimodale e funziona meglio delle reti neurali deterministiche e delle reti neurali stocastiche per un task reale di robotica visiva.", "Questo articolo mostra come apprendere diverse modalità utilizzando l'apprendimento per imitazione da dati visivi utilizzando imitation learning, e un metodo per imparare da dimostrazioni in cui sono date diverse modalità dello stesso task."]} +{"source": "The interpretability of neural networks has become crucial for their applications 
in real world with respect to the reliability and trustworthiness.Existing explanation generation methods usually provide important features by scoring their individual contributions to the model prediction and ignore the interactions between features, which eventually provide a bag-of-words representation as explanation.In natural language processing, this type of explanations is challenging for human user to understand the meaning of an explanation and draw the connection between explanation and model prediction, especially for long texts.In this work, we focus on detecting the interactions between features, and propose a novel approach to build a hierarchy of explanations based on feature interactions.The proposed method is evaluated with three neural classifiers, LSTM, CNN, and BERT, on two benchmark text classification datasets.The generated explanations are assessed by both automatic evaluation measurements and human evaluators.Experiments show the effectiveness of the proposed method in providing explanations that are both faithful to models, and understandable to humans.", "target": ["Un nuovo approccio per costruire spiegazioni gerarchiche per text classification rilevando le interazioni delle feature.", "Un nuovo metodo per fornire spiegazioni per le predizioni fatte dai classificatori di testo che supera le baseline sui punteggi di importanza a livello di parola, e una nuova metrica, la loss di coesione, per valutare l'importanza a livello di span.", "Un metodo di interpretazione basato sulle interazioni delle feature e sul punteggio di importanza delle feature rispetto ai contributi indipendenti delle feature."]} +{"source": "Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources.In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time.FBS introduces small auxiliary connections to existing convolutional layers.In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels.FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs.We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification.Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.", "target": ["Facciamo funzionare più velocemente i layer convoluzionali potenziando e sopprimendo dinamicamente i canali nel calcolo delle feature.", "Un metodo di potenziamento e soppressione delle caratteristiche per la potatura dinamica dei canali che predice l'importanza di ogni canale e poi usa una funzione affine per amplificare/sopprimere l'importanza del canale.", "Proposta di un metodo di pruning dei canali per selezionare dinamicamente i canali durante il test."]} +{"source": "We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training.Our 
results indicate that convolutional neural networks can operate without any loss of accuracy at less than 0.5% classification layer connection density, or less than 5% overall network connection density.We also investigate the effects of pre-defining the sparsity of networks with only fully connected layers.Based on our sparsifying technique, we introduce the `scatter' metric to characterize the quality of a particular connection pattern.As proof of concept, we show results on CIFAR, MNIST and a new dataset on classifying Morse code symbols, which highlights some interesting trends and limits of sparse connection patterns.", "target": ["Le reti neurali possono essere pre-definite per avere una connettività sparsa senza degradazione delle prestazioni.", "Questo articolo esamina i modelli di connessione sparsi negli layer superiori delle reti di classificazione delle immagini convoluzionali, e introduce un'euristica per distribuire le connessioni tra finestre/gruppi e una misura chiamata scatter per costruire maschere di connettività.", "Proposta di ridurre il numero di parametri appresi da una rete deep impostando pesi di connessione sparsi nei layer di classificazione, e introduzione di un concetto di \"scatter\"."]} +{"source": "Deep neural networks are vulnerable to adversarial examples, which becomes one of the most important problems in the development of deep learning.While a lot of efforts have been made in recent years, it is of great significance to perform correct and complete evaluations of the adversarial attack and defense algorithms.In this paper, we establish a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks.After briefly reviewing plenty of representative attack and defense methods, we perform large-scale experiments with two robustness curves as the fair-minded evaluation criteria to fully understand the performance of these methods.Based on the evaluation results, we draw several important findings and provide insights for future research.", "target": ["Forniamo un benchmark completo, rigoroso e coerente per valutare la robustezza avversaria dei modelli di deep learning.", "Questo articolo presenta una valutazione di diversi tipi di modelli di classificazione sotto vari metodi di attacco avversario.", "Uno studio empirico su larga scala che confronta diverse tecniche di attacco e difesa avversaria, e l'uso della precisione rispetto al budget di perturbazione e della precisione rispetto alle curve di forza dell'attacco per valutare attacchi e difese."]} +{"source": "We propose a modification to traditional Artificial Neural Networks (ANNs), which provides the ANNs with new aptitudes motivated by biological neurons. Biological neurons work far beyond linearly summing up synaptic inputs and then transforming the integrated information. A biological neuron change firing modes accordingly to peripheral factors (e.g., neuromodulators) as well as intrinsic ones. Our modification connects a new type of ANN nodes, which mimic the function of biological neuromodulators and are termed modulators, to enable other traditional ANN nodes to adjust their activation sensitivities in run-time based on their input patterns. In this manner, we enable the slope of the activation function to be context dependent. 
This modification produces statistically significant improvements in comparison with traditional ANN nodes in the context of Convolutional Neural Networks and Long Short-Term Memory networks.", "target": ["Proponiamo una modifica alle reti neurali artificiali tradizionali motivata dalla biologia dei neuroni per permettere alla forma della funzione di attivazione di essere dipendente dal contesto.", "Un metodo per scalare le attivazioni di un layer di neuroni in una ANN a seconda degli ingressi a quel layer che riporta miglioramenti al di sopra delle baseline.", "Introduzione di un cambiamento nell'architettura dei neuroni di base in una rete neurale, e l'idea di moltiplicare l'uscita della combinazione lineare dei neuroni con un modulatore prima di passarla funzione di attivazione."]} +{"source": "In this work, we study how the large-scale pretrain-finetune framework changes the behavior of a neural language generator.We focus on the transformer encoder-decoder model for the open-domain dialogue response generation task.We find that after standard fine-tuning, the model forgets important language generation skills acquired during large-scale pre-training.We demonstrate the forgetting phenomenon through a detailed behavior analysis from the perspectives of context sensitivity and knowledge transfer.Adopting the concept of data mixing, we propose an intuitive fine-tuning strategy named \"mix-review''.We find that mix-review effectively regularize the fine-tuning process, and the forgetting problem is largely alleviated.Finally, we discuss interesting behavior of the resulting dialogue model and its implications.", "target": ["Identifichiamo il problema del forgetting nel fine-tuning dei modelli pre-trained per NLG, e proponiamo la strategia mix-review per affrontarlo.", "Questo articolo analizza il problema del forgetting nel framework del pretraining-finetuning dal punto di vista della sensibilità al contesto e del trasferimento della conoscenza, e propone una strategia di fine-tuning che supera il metodo del weight decay.", "Studio del problema del forgetting nel framework pretrain-finetune, in particolare nei task di dialogue response generation, e proposta di una strategia di revisione mista per alleviare il problema del forgetting."]} +{"source": "Combining domain knowledge models with neural models has been challenging. End-to-end trained neural models often perform better (lower Mean Square Error) than domain knowledge models or domain/neural combinations, and the combination is inefficient to train. In this paper, we demonstrate that by composing domain models with machine learning models, by using extrapolative testing sets, and invoking decorrelation objective functions, we create models which can predict more complex systems.The models are interpretable, extrapolative, data-efficient, and capture predictable but complex non-stochastic behavior such as unmodeled degrees of freedom and systemic measurement noise. We apply this improved modeling paradigm to several simulated systems and an actual physical system in the context of system identification. Several ways of composing domain models with neural models are examined for time series, boosting, bagging, and auto-encoding on various systems of varying complexity and non-linearity. 
Although this work is preliminary, we show that the ability to combine models is a very promising direction for neural modeling.", "target": ["Una migliore modellazione di sistemi complessi utilizza una composizione ibrida di modelli neurali/di dominio, nuove funzioni di loss di decorrelazione e test set estrapolativi", "Questo articolo conduce esperimenti per confrontare le previsioni estrapolative di vari modelli ibridi che compongono modelli fisici, reti neurali e modelli stocastici, e affronta la sfida della dinamica non modellata che è un collo di bottiglia.", "Questo articolo presenta approcci per la combinazione di reti neurali con modelli non-NN per prevedere il comportamento di sistemi fisici complessi."]} +{"source": "Humans can learn task-agnostic priors from interactive experience and utilize the priors for novel tasks without any finetuning.In this paper, we propose Scoring-Aggregating-Planning (SAP), a framework that can learn task-agnostic semantics and dynamics priors from arbitrary quality interactions as well as the corresponding sparse rewards and then plan on unseen tasks in zero-shot condition.The framework finds a neural score function for local regional state and action pairs that can be aggregated to approximate the quality of a full trajectory; moreover, a dynamics model that is learned with self-supervision can be incorporated for planning.Many of previous works that leverage interactive data for policy learning either need massive on-policy environmental interactions or assume access to expert data while we can achieve a similar goal with pure off-policy imperfect data.Instantiating our framework results in a generalizable policy to unseen tasks.Experiments demonstrate that the proposed method can outperform baseline methods on a wide range of applications including gridworld, robotics tasks and video games.", "target": ["Impariamo i punteggi densi e il modello dinamico come priori dai dati di esplorazione e li usiamo per indurre una buona policy nei nuovi task in condizione zero-shot.", "Questo articolo discute la generalizzazione dello zero shot in nuovi ambienti, e propone un approccio con risultati su Grid-World, Super Mario Bros, e 3D Robotics.", "Un metodo che mira ad apprendere priori task-agnostic per la generalizzazione zero-shot, con l'idea di impiegare un approccio di modellazione in cima al framework di RL basato sul modello."]} +{"source": "Particle-based inference algorithm is a promising method to efficiently generate samples for an intractable target distribution by iteratively updating a set of particles.As a noticeable example, Stein variational gradient descent (SVGD) provides a deterministic and computationally efficient update, but it is known to underestimate the variance in high dimensions, the mechanism of which is poorly understood.In this work we explore a connection between SVGD and MMD-based inference algorithm via Stein's lemma.By comparing the two update rules, we identify the source of bias in SVGD as a combination of high variance and deterministic bias, and empirically demonstrate that the removal of either factors leads to accurate estimation of the variance.In addition, for learning high-dimensional Gaussian target, we analytically derive the converged variance for both algorithms, and confirm that only SVGD suffers from the \"curse of dimensionality\".", "target": ["Analizzare i meccanismi sottostanti al collasso della varianza di SVGD in dimensioni elevate."]} +{"source": "We describe an approach to understand the peculiar and 
counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.", "target": ["Ripensare la generalizzazione richiede la rivisitazione di vecchie idee: approcci di meccanica statistica e comportamento di apprendimento complesso", "Gli autori suggeriscono che le idee di meccanica statistica aiuteranno a capire le proprietà di generalizzazione delle reti neurali deep, e danno un approccio che fornisce forti descrizioni qualitative dei risultati empirici riguardanti le reti neurali deep e gli algoritmi di apprendimento.", "Un insieme di idee legate alla comprensione teorica delle proprietà di generalizzazione delle reti neurali multilayer, e un'analogia qualitativa tra i comportamenti nel deep learning e i risultati dell'analisi fisica statistica quantitativa delle reti neurali a uno o due layer."]} +{"source": "Computations for the softmax function in neural network models are expensive when the number of output classes is large.This can become a significant issue in both training and inference for such models.In this paper, we present Doubly Sparse Softmax (DS-Softmax), Sparse Mixture of Sparse of Sparse Experts, to improve the efficiency for softmax inference.During training, our method learns a two-level class hierarchy by dividing entire output class space into several partially overlapping experts.Each expert is responsible for a learned subset of the output class space and each output class only belongs to a small number of those experts.During inference, our method quickly locates the most probable expert to compute small-scale softmax.Our method is learning-based and requires no knowledge of the output class partition space a priori.We empirically evaluate our method on several real-world tasks and demonstrate that we can achieve significant computation reductions without loss of performance.", "target": ["Presentiamo una softmax doppiamente sparsa, la miscela sparsa di esperti sparsi, per migliorare l'efficienza dell'inferenza softmax attraverso lo sfruttamento della gerarchia a due livelli di sovrapposizione.", "Questo articolo propone un'approssimazione veloce al calcolo di softmax quando il numero di classi è molto grande.", "Questo articolo propone una miscela di esperti sparsi che impara una gerarchia di classi a due livelli per un'inferenza softmax efficiente."]} +{"source": "Supervised machine learning models for high-value computer vision applications such as medical image classification often require large datasets labeled by domain experts, which are slow to collect, expensive to maintain, and static with respect 
to changes in the data distribution.In this context, we assess the utility of observational supervision, where we take advantage of passively-collected signals such as eye tracking or “gaze” data, to reduce the amount of hand-labeled data needed for model training.Specifically, we leverage gaze information to directly supervise a visual attention layer by penalizing disagreement between the spatial regions the human labeler looked at the longest and those that most heavily influence model output.We present evidence that constraining the model in this way can reduce the number of labeled examples required to achieve a given performance level by as much as 50%, and that gaze information is most helpful on more difficult tasks.", "target": ["Esploriamo l'uso di dati di eye-tracking raccolti passivamente per ridurre la quantità di dati etichettati necessari durante la formazione.", "Un metodo per utilizzare le informazioni sullo sguardo per ridurre la complessità del campione di un modello e lo sforzo di annotazione necessario per ottenere una performance target, con risultati migliori nei campioni di medie dimensioni e nei task più difficili.", "Un metodo per incorporare i segnali di sguardo nelle CNN standard per la classificazione delle immagini, aggiungendo un termine di funzione di loss basato sulla differenza tra la Class Activation Map del modello e la mappa costruita dalle informazioni di eye tracking."]} +{"source": "We study the robustness to symmetric label noise of GNNs training procedures.By combining the nonlinear neural message-passing models (e.g. Graph Isomorphism Networks, GraphSAGE, etc.) with loss correction methods, we present a noise-tolerant approach for the graph classification task.Our experiments show that test accuracy can be improved under the artificial symmetric noisy setting.", "target": ["Applichiamo la correzione delle loss alle graph neural networks per addestrare un modello più robusto al rumore.", "Questo articolo introduce la correzione delle loss per le reti neurali dei grafi per affrontare il rumore simmetrico delle label dei grafi, concentrandosi su un task di classificazione dei grafi.", "Questo articolo propone l'uso di una loss di correzione del rumore nel contesto delle graph neural networks per affrontare le label rumorose."]} +{"source": "Through many recent advances in graph representation learning, performance achieved on tasks involving graph-structured data has substantially increased in recent years---mostly on tasks involving node-level predictions.The setup of prediction tasks over entire graphs (such as property prediction for a molecule, or side-effect prediction for a drug), however, proves to be more challenging, as the algorithm must combine evidence about several structurally relevant patches of the graph into a single prediction.Most prior work attempts to predict these graph-level properties while considering only one graph at a time---not allowing the learner to directly leverage structural similarities and motifs across graphs.Here we propose a setup in which a graph neural network receives pairs of graphs at once, and extend it with a co-attentional layer that allows node representations to easily exchange structural information across them.We first show that such a setup provides natural benefits on a pairwise graph classification task (drug-drug interaction prediction), and then expand to a more generic graph regression setup: enhancing predictions over QM9, a standard molecular prediction benchmark.Our setup is flexible, 
powerful and makes no assumptions about the underlying dataset properties, beyond anticipating the existence of multiple training graphs.", "target": ["Usiamo la co-attention dei grafi in un sistema di training dei grafi accoppiati per la classificazione e la regressione dei grafi.", "Questo articolo inietta un meccanismo di co-attention a più teste in GCN che permette a un farmaco di osservare un altro farmaco durante la predizione degli effetti collaterali.", "Un metodo per estendere l'apprendimento basato sui grafi con uno layer co-attenzionale, che supera altri precedenti su pairwise graph classification task."]} +{"source": "In this paper we study image captioning as a conditional GAN training, proposing both a context-aware LSTM captioner and co-attentive discriminator, which enforces semantic alignment between images and captions.We investigate the viability of two discrete GAN training methods: Self-critical Sequence Training (SCST) and Gumbel Straight-Through (ST) and demonstrate that SCST shows more stable gradient behavior and improved results over Gumbel ST.", "target": ["Image captioning come un training di una GAN condizionale con nuove architetture, anche studiare due metodi discreti di training di GAN .", "Un modello GAN migliorato per image captioning che propone un captioner LSTM context-aware, introduce un discriminatore co-attentive più forte con prestazioni migliori, e usa SCST per il training della GAN."]} +{"source": "We present Newtonian Monte Carlo (NMC), a method to improve Markov Chain Monte Carlo (MCMC) convergence by analyzing the first and second order gradients of the target density to determine a suitable proposal density at each point.Existing first order gradient-based methods suffer from the problem of determining an appropriate step size.Too small a step size and it will take a large number of steps to converge, while a very large step size will cause it to overshoot the high density region.NMC is similar to the Newton-Raphson update in optimization where the second order gradient is used to automatically scale the step size in each dimension.However, our objective is not to find a maxima but instead to find a parameterized density that can best match the local curvature of the target density. 
This parameterized density is then used as a single-site Metropolis-Hastings proposal.As a further improvement on first order methods, we show that random variables with constrained supports don't need to be transformed before taking a gradient step.NMC directly matches constrained random variables to a proposal density with the same support thus keeping the curvature of the target density intact.We demonstrate the efficiency of NMC on a number of different domains.For statistical models where the prior is conjugate to the likelihood, our method recovers the posterior quite trivially in one step.However, we also show results on fairly large non-conjugate models, where NMC performs better than adaptive first order methods such as NUTS or other inexact scalable inference methods such as Stochastic Variational Inference or bootstrapping.", "target": ["Sfruttare la curvatura per far convergere i metodi MCMC più velocemente dello stato dell'arte."]} +{"source": "Neural Tangents is a library designed to enable research into infinite-width neural networks.It provides a high-level API for specifying complex and hierarchical neural network architectures.These networks can then be trained and evaluated either at finite-width as usual or in their infinite-width limit.Infinite-width networks can be trained analytically using exact Bayesian inference or using gradient descent via the Neural Tangent Kernel.Additionally, Neural Tangents provides tools to study gradient descent training dynamics of wide but finite networks in either function space or weight space. The entire library runs out-of-the-box on CPU, GPU, or TPU.All computations can be automatically distributed over multiple accelerators with near-linear scaling in the number of devices. Neural Tangents is available at https://www.github.com/google/neural-tangents. We also provide an accompanying interactive Colab notebook at https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb", "target": ["Keras per reti neurali infinite."]} +{"source": "Deep neural networks have achieved great success in classification tasks during the last years.However, one major problem to the path towards artificial intelligence is the inability of neural networks to accurately detect samples from novel class distributions and therefore, most of the existent classification algorithms assume that all classes are known prior to the training stage.In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on the test examples from known classes.Based on the Outlier Exposure (OE) technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE both on image and text classification tasks.Additionally, the way this method was constructed makes it suitable for training any classification algorithm that is based on Maximum Likelihood methods.", "target": ["Proponiamo una nuova funzione di loss che raggiunge risultati allo stato dell'arte nel rilevamento di out-of-distribution con Outlier Exposure sia su task di classificazione di immagini che di testo.", "Questo articolo affronta i problemi del out-of-distribution detection e della calibrazione del modello adattando la funzione di loss della tecnica Outlier Exposure, con risultati che dimostrano un aumento delle prestazioni rispetto a OE su benchmark di visione e testo e una 
migliore calibrazione del modello.", "Proposta di una nuova funzione di loss per addestrare la rete con Outlier Exposure che porta a una migliore individuazione di OOD rispetto alle semplici funzioni di loss che utilizzano la divergenza KL."]} +{"source": "Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map.The precise form of this representation is often considered to be a metric representation of space.An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks.Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment.To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q learning stage that controls the network's output.We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state.These structures induce bias onto the Q-Learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities.Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes -- which can be shaped by pre-training and analyzed using dynamical systems methods.Furthermore, we demonstrate that non-metric representations are useful for navigation tasks.", "target": ["Il pre-training agnostico può modellare la configurazione degli attrattori di RNN e formare diversi bias induttivi per diversi task di navigazione", "Questo articolo studia le rappresentazioni interne delle reti neurali ricorrenti addestrate su task di navigazione, e trova che le RNN pre-addestrate a usare path integration contengono attrattori continui 2D mentre le RNN pre-addestrate per landmark memory contengono attrattori discreti.", "Questo articolo esplora come il pre-training delle reti ricorrenti su diversi obiettivi di navigazione conferisce diversi benefici per la risoluzione dei task a valle, e mostra come il diverso pre-training si manifesta come diverse strutture dinamiche nelle reti dopo il pre-training."]} +{"source": "Formal verification of machine learning models has attracted attention recently, and significant progress has been made on proving simple properties like robustness to small perturbations of the input features.In this context, it has also been observed that folding the verification procedure into training makes it easier to train verifiably robust models.In this paper, we extend the applicability of verified training by extending it to (1) recurrent neural network architectures and (2) complex specifications that go beyond simple adversarial robustness, particularly specifications that capture temporal properties like requiring that a robot periodically visits a charging station or that a language model always produces sentences of bounded length.Experiments show that while models trained using standard training often violate desired specifications, our verified training method produces models that both perform well (in terms of test error or reward) and can be shown to be provably consistent with specifications.", "target": ["Verifica della rete neurale per proprietà temporali e modelli di generazione di sequenze", "Questo articolo estende l'interval bound propagation alla computazione 
ricorrente e ai modelli auto-regressivi, introduce ed estende la Signal Temporal Logic per specificare i vincoli temporali, e fornisce la prova che STL con bound propagation può garantire che i modelli neurali siano conformi alle specifiche temporali.", "Un modo per addestrare regressori di serie temporali in modo verificabile rispetto a un insieme di regole definite dalla logica temporale del segnale, e un lavoro di derivazione delle regole di bound propagation per il linguaggio STL."]} +{"source": "Neural Network (NN) has achieved state-of-the-art performances in many tasks within image, speech, and text domains.Such great success is mainly due to special structure design to fit the particular data patterns, such as CNN capturing spatial locality and RNN modeling sequential dependency.Essentially, these specific NNs achieve good performance by leveraging the prior knowledge over corresponding domain data.Nevertheless, there are many applications with all kinds of tabular data in other domains.Since there are no shared patterns among these diverse tabular data, it is hard to design specific structures to fit them all.Without careful architecture design based on domain knowledge, it is quite challenging for NN to reach satisfactory performance in these tabular data domains.To fill the gap of NN in tabular data learning, we propose a universal neural network solution, called TabNN, to derive effective NN architectures for tabular data in all kinds of tasks automatically.Specifically, the design of TabNN follows two principles: \\emph{to explicitly leverages expressive feature combinations} and \\emph{to reduce model complexity}.Since GBDT has empirically proven its strength in modeling tabular data, we use GBDT to power the implementation of TabNN.Comprehensive experimental analysis on a variety of tabular datasets demonstrate that TabNN can achieve much better performance than many baseline solutions.", "target": ["Proponiamo una soluzione di rete neurale universale per derivare automaticamente delle architetture NN efficaci per i dati tabulari.", "Una nuova procedura di training delle reti neurali, progettata per i dati tabulari, che cerca di sfruttare i cluster di feature estratte dai GBDT.", "Proposta di un algoritmo ibrido di machine learning che utilizza alberi decisionali Gradient Boosted potenziati dal gradiente e reti neurali deep, con una direzione di ricerca pensata per i dati tabulari."]} +{"source": "Knowledge Bases (KBs) are becoming increasingly large, sparse and probabilistic.These KBs are typically used to perform query inferences and rule mining.But their efficacy is only as high as their completeness.Efficiently utilizing incomplete KBs remains a major challenge as the current KB completion techniques either do not take into account the inherent uncertainty associated with each KB tuple or do not scale to large KBs.Probabilistic rule learning not only considers the probability of every KB tuple but also tackles the problem of KB completion in an explainable way.For any given probabilistic KB, it learns probabilistic first-order rules from its relations to identify interesting patterns.But, the current probabilistic rule learning techniques perform grounding to do probabilistic inference for evaluation of candidate rules.It does not scale well to large KBs as the time complexity of inference using grounding is exponential over the size of the KB.In this paper, we present SafeLearner -- a scalable solution to probabilistic KB completion that performs probabilistic rule 
learning using lifted probabilistic inference -- as faster approach instead of grounding. We compared SafeLearner to the state-of-the-art probabilistic rule learner ProbFOIL+ and to its deterministic contemporary AMIE+ on standard probabilistic KBs of NELL (Never-Ending Language Learner) and Yago.Our results demonstrate that SafeLearner scales as good as AMIE+ when learning simple rules and is also significantly faster than ProbFOIL+.", "target": ["Sistema di apprendimento probabilistico delle regole utilizzando Lifted Inference", "Un modello per l'apprendimento di regole probabilistiche per automatizzare il completamento di database probabilistici che usa AMIE+ e lifted inference per migliorare l'efficienza computazionale."]} +{"source": "Recent efforts in Dialogue State Tracking (DST) for task-oriented dialogues have progressed toward open-vocabulary or generation-based approaches where the models can generate slot value candidates from the dialogue history itself.These approaches have shown good performance gain, especially in complicated dialogue domains with dynamic slot values.However, they fall short in two aspects: (1) they do not allow models to explicitly learn signals across domains and slots to detect potential dependencies among \\textit{(domain, slot)} pairs; and (2) existing models follow auto-regressive approaches which incur high time cost when the dialogue evolves over multiple domains and multiple turns.In this paper, we propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST) which can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots.In particular, the non-autoregressive nature of our method not only enables decoding in parallel to significantly reduce the latency of DST for real-time dialogue response generation, but also detect dependencies among slots at token level in addition to slot and domain level.Our empirical results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus, and the latency of our model is an order of magnitude lower than the previous state of the art as the dialogue history extends over time.", "target": ["Proponiamo il primo modello neurale non autoregressivo per il Dialogue State Tracking (DST), raggiungendo la precisione SOTA (49,04%) sul benchmark MultiWOZ2.1, e riducendo la latenza di inferenza di un ordine di grandezza.", "Un nuovo modello per il task DST che riduce la complessità del tempo di inferenza con un decoder non autoregressivo, ottiene una precisione DST competitiva e mostra miglioramenti rispetto ad altre baseline.", "Proposta di un modello capace di seguire gli stati di dialogo in modo non ricorsivo."]} +{"source": "The 3D-zoom operation is the positive translation of the camera in the Z-axis, perpendicular to the image plane.In contrast, the optical zoom changes the focal length and the digital zoom is used to enlarge a certain region of an image to the original image size.In this paper, we are the first to formulate an unsupervised 3D-zoom learning problem where images with an arbitrary zoom factor can be generated from a given single image.An unsupervised framework is convenient, as it is a challenging task to obtain a 3D-zoom dataset of natural scenes due to the need for special equipment to ensure camera movement is restricted to the Z-axis.Besides, the objects in the scenes should not move when being captured, which hinders the 
construction of a large dataset of outdoor scenes.We present a novel unsupervised framework to learn how to generate arbitrarily 3D-zoomed versions of a single image, not requiring a 3D-zoom ground truth, called the Deep 3D-Zoom Net.The Deep 3D-Zoom Net incorporates the following features:(i) transfer learning from a pre-trained disparity estimation network via a back re-projection reconstruction loss;(ii) a fully convolutional network architecture that models depth-image-based rendering (DIBR), taking into account high-frequency details without the need for estimating the intermediate disparity; and(iii) incorporating a discriminator network that acts as a no-reference penalty for unnaturally rendered areas.Even though there is no baseline to fairly compare our results, our method outperforms previous novel view synthesis research in terms of realistic appearance on large camera baselines.We performed extensive experiments to verify the effectiveness of our method on the KITTI and Cityscapes datasets.", "target": ["Una nuova architettura di rete per eseguire Deep 3D Zoom o close-up.", "Un metodo per creare una \"immagine ingrandita\" per una data immagine di input, e una nuova loss di ricostruzione della retroproiezione che permette alla rete di imparare la struttura 3D sottostante e mantenere un aspetto naturale.", "Un algoritmo per sintetizzare il comportamento dello zoom 3D quando la telecamera si muove in avanti, una struttura di rete che incorpora la stima della disparità in un framework con GANs per sintetizzare nuove viste, e un nuovo task di computer vision viene proposto."]} +{"source": "The universal approximation theorem, in one of its most general versions, says that if we consider only continuous activation functions σ, then a standard feedforward neural network with one hidden layer is able to approximate any continuous multivariate function f to any given approximation threshold ε, if and only if σ is non-polynomial.In this paper, we give a direct algebraic proof of the theorem.Furthermore we shall explicitly quantify the number of hidden units required for approximation.Specifically, if X in R^n is compact, then a neural network with n input units, m output units, and a single hidden layer with {n+d choose d} hidden units (independent of m and ε), can uniformly approximate any polynomial function f:X -> R^m whose total degree is at most d for each of its m coordinate functions.In the general case that f is any continuous function, we show there exists some N in O(ε^{-n}) (independent of m), such that N hidden units would suffice to approximate f.We also show that this uniform approximation property (UAP) still holds even under seemingly strong conditions imposed on the weights.We highlight several consequences:(i) For any δ > 0, the UAP still holds if we restrict all non-bias weights w in the last layer to satisfy |w| < δ.(ii) There exists some λ>0 (depending only on f and σ), such that the UAP still holds if we restrict all non-bias weights w in the first layer to satisfy |w|>λ.(iii) If the non-bias weights in the first layer are *fixed* and randomly chosen from a suitable range, then the UAP holds with probability 1.", "target": ["Un perfezionamento quantitativo del teorema di approssimazione universale attraverso un approccio algebrico.", "Gli autori derivano le dimostrazioni della proprietà di approssimazione universale in modo algebrico e affermano che i risultati sono generali per altri tipi di reti neurali e learner simili.", "Una nuova dimostrazione della versione 
di Leshno della proprietà di approssimazione universale per le reti neurali, e nuove intuizioni sulla proprietà di approssimazione universale."]} +{"source": "In this paper, we design a generic framework for learning a robust text classification model that achieves accuracy comparable to standard full models under test-time budget constraints.We take a different approach from existing methods and learn to dynamically delete a large fraction of unimportant words by a low-complexity selector such that the high-complexity classifier only needs to process a small fraction of important words.In addition, we propose a new data aggregation method to train the classifier, allowing it to make accurate predictions even on fragmented sequence of words.Our end-to-end method achieves state-of-the-art performance while its computational complexity scales linearly with the small fraction of important words in the whole corpus.Besides, a single deep neural network classifier trained by our framework can be dynamically tuned to different budget levels at inference time.", "target": ["Struttura modulare per la classificazione dei documenti e tecnica di aggregazione dei dati per rendere il framework robusto a varie distorsioni e rumore e concentrarsi solo sulle parole importanti.", "Gli autori considerano il training di una text classification basata su RNN dove c'è una restrizione di risorse a test-time prediction, e forniscono un approccio che utilizza un meccanismo di mascheramento per ridurre le parole/frasi/sentenze utilizzate nella predizione seguito da un classificatore per gestire tali componenti."]} +{"source": "Differentiable architecture search (DARTS) provided a fast solution in finding effective network architectures, but suffered from large memory and computing overheads in jointly training a super-net and searching for an optimal architecture.In this paper, we present a novel approach, namely Partially-Connected DARTS, by sampling a small part of super-net to reduce the redundancy in exploring the network space, thereby performing a more efficient search without compromising the performance.In particular, we perform operation search in a subset of channels while bypassing the held out part in a shortcut.This strategy may suffer from an undesired inconsistency on selecting the edges of super-net caused by sampling different channels.We solve it by introducing edge normalization, which adds a new set of edge-level hyper-parameters to reduce uncertainty in search.Thanks to the reduced memory cost, PC-DARTS can be trained with a larger batch size and, consequently, enjoy both faster speed and higher training stability.Experiment results demonstrate the effectiveness of the proposed method.Specifically, we achieve an error rate of 2.57% on CIFAR10 within merely 0.1 GPU-days for architecture search, and a state-of-the-art top-1 error rate of 24.2% on ImageNet (under the mobile setting) within 3.8 GPU-days for search.Our code has been made available at https://www.dropbox.com/sh/on9lg3rpx1r6dkf/AABG5mt0sMHjnEJyoRnLEYW4a?dl=0.", "target": ["Consentire la connessione parziale dei canali nelle super-reti per regolarizzare e accelerare la ricerca di architetture differenziabili", "Un'estensione del metodo di ricerca dell'architettura neurale DARTS che affronta il suo difetto di immenso costo di memoria utilizzando un sottoinsieme casuale di canali e un metodo per normalizzare i bordi.", "Questo articolo propone di migliorare DARTS in termini di efficienza di training, da grandi costi di memoria e di
calcolo, e propone un DARTS parzialmente connesso con connessione parziale del canale e edge normalization."]} +{"source": "Dialogue research tends to distinguish between chit-chat and goal-oriented tasks.While the former is arguably more naturalistic and has a wider use of language, the latter has clearer metrics and a more straightforward learning signal.Humans effortlessly combine the two, and engage in chit-chat for example with the goal of exchanging information or eliciting a specific response.Here, we bridge the divide between these two domains in the setting of a rich multi-player text-based fantasy environment where agents and humans engage in both actions and dialogue.Specifically, we train a goal-oriented model with reinforcement learning via self-play against an imitation-learned chit-chat model with two new approaches: the policy either learns to pick a topic or learns to pick an utterance given the top-k utterances.We show that both models outperform a strong inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals.", "target": ["Gli agenti interagiscono (parlano, agiscono) e possono raggiungere obiettivi in un mondo complesso con un linguaggio diverso, colmando il divario tra il chit-chat e il dialogo orientato agli obiettivi.", "Questo articolo studia un task di dialogo multiagente in cui l'agente che apprende mira a generare azioni in linguaggio naturale che elicitano una particolare azione dall'altro agente, e mostra che gli agenti RL possono raggiungere livelli più alti di completamento del task rispetto alle baseline di imitation learning.", "Questo articolo esplora il setting del dialogo goal-oriented con reinforcement learning in un Fantasy Text Adventure Game e osserva che gli approcci RL superano i modelli di supervised learning."]} +{"source": "We consider off-policy policy evaluation when the trajectory data are generated by multiple behavior policies.Recent work has shown the key role played by the state or state-action stationary distribution corrections in the infinite horizon context for off-policy policy evaluation.We propose estimated mixture policy (EMP), a novel class of partially policy-agnostic methods to accurately estimate those quantities.With careful analysis, we show that EMP gives rise to estimates with reduced variance for estimating the state stationary distribution correction while it also offers a useful induction bias for estimating the state-action stationary distribution correction.In extensive experiments with both continuous and discrete environments, we demonstrate that our algorithm offers significantly improved accuracy compared to the state-of-the-art methods.", "target": ["Un nuovo metodo parzialmente indipendente dalla policy per la valutazione infinite-horizon off-policy della policy con più policy di comportamento conosciute o sconosciute.", "Una stima della policy mista che prende le idee dagli stimatori infinite-horizon di valutazione della politica off-policy e dal regression importance sampling per l'importance weight, e li estende a molte policy e a policy sconosciute.", "Un algoritmo per risolvere la valutazione infinite horizon off-policy con policy di comportamento multiple stimando una policy mista sotto regressione, e la prova teorica che un rapporto di policy stimato può ridurre la varianza."]} +{"source": "We introduce a more efficient neural architecture for amortized inference, which combines continuous and conditional normalizing flows using a principled choice of 
structure.Our gradient flow derives its sparsity pattern from the minimally faithful inverse of its underlying graphical model.We find that this factorization reduces the necessary numbers both of parameters in the neural network and of adaptive integration steps in the ODE solver.Consequently, the throughput at training time and inference time is increased, without decreasing performance in comparison to unconstrained flows.By expressing the structural inversion and the flow construction as compilation passes of a probabilistic programming language, we demonstrate their applicability to the stochastic inversion of realistic models such as convolutional neural networks (CNN).", "target": ["Introduciamo un'architettura neurale più efficiente per l'inferenza ammortizzata, che combina normalizing flows continui e condizionali utilizzando una scelta con principio di sparsità."]} +{"source": "We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way.By defining the combinatorial search space of NAS to be the set of different edge-partitionings (colorings) into same-weight classes, we represent compact architectures via efficient learned edge-partitionings.For several RL tasks, we manage to learn colorings translating to effective policies parameterized by as few as 17 weight parameters, providing >90 % compression over vanilla policies and 6x compression over state-of-the-art compact policies based on Toeplitz matrices, while still maintaining good reward.We believe that our work is one of the first attempts to propose a rigorous approach to training structured neural network architectures for RL problems that are of interest especially in mobile robotics with limited storage and computational resources.", "target": ["Mostriamo che ENAS con ottimizzazione ES in RL è altamente scalabile, e lo usiamo per compattare le policy delle reti neurali attraverso la condivisione dei pesi.", "Gli autori costruiscono policy di reinforcement learning con pochissimi parametri comprimendo una rete neurale feed-forward, forzandola a condividere i pesi e usando un metodo di reinforcement learning per imparare la mappatura dei pesi condivisi.", "Questo articolo combina idee dai metodi ENAS e ES per l'ottimizzazione, e introduce l'architettura della rete cromatica, che partiziona i pesi della rete RL in sottogruppi legati."]} +{"source": "Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets.Typically anomaly detection is treated as an unsupervised learning problem.In practice however, one may have---in addition to a large set of unlabeled samples---access to a small pool of labeled samples, e.g. 
a subset verified by some domain expert as being normal or anomalous.Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples.Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific.In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection.Using an information-theoretic perspective on anomaly detection, we derive a loss motivated by the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution.We demonstrate in extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, that our method is on par or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data.", "target": ["Introduciamo Deep SAD, un metodo deep per il rilevamento generale semi-supervised delle anomalie che sfrutta soprattutto le anomalie etichettate.", "Un nuovo metodo per trovare dati anomali, quando alcune anomalie etichettate sono date, che applica la loss derivata dalla teoria dell'informazione basata sui dati normali che hanno solitamente un'entropia più bassa dei dati anomali.", "Proposta per un framework di rilevamento delle anomalie in impostazioni in cui sono disponibili dati non etichettati, dati positivi etichettati e dati negativi etichettati, e proposta di approccio all'AD semi-supervised da una prospettiva della teoria dell'informazione."]} +{"source": "To analyze deep ReLU network, we adopt a student-teacher setting in which an over-parameterized student network learns from the output of a fixed teacher network of the same depth, with Stochastic Gradient Descent (SGD).Our contributions are two-fold.First, we prove that when the gradient is zero (or bounded above by a small constant) at every data point in training, a situation called \\emph{interpolation setting}, there exists many-to-one \\emph{alignment} between student and teacher nodes in the lowest layer under mild conditions.This suggests that generalization in unseen dataset is achievable, even the same condition often leads to zero training error.Second, analysis of noisy recovery and training dynamics in 2-layer network shows that strong teacher nodes (with large fan-out weights) are learned first and subtle teacher nodes are left unlearned until late stage of training.As a result, it could take a long time to converge into these small-gradient critical points.Our analysis shows that over-parameterization plays two roles: (1) it is a necessary condition for alignment to happen at the critical points, and (2) in training dynamics, it helps student nodes cover more teacher nodes with fewer iterations.Both improve generalization.Experiments justify our finding.", "target": ["Questo articolo analizza le dinamiche di training e i punti critici della formazione della rete ReLU deep tramite SGD nel setting teacher-student.", "Studio della sovra-parametrizzazione nelle reti ReLU multilayer teacher-student, una parte teorica sui punti critici SGD per l'impostazione teacher-student, e una parte euristica ed empirica sulla dinamica dell'algoritmo SDG in funzione delle teacher networks."]} +{"source": "We study the convergence of gradient descent (GD) and stochastic gradient descent (SGD) for training $L$-hidden-layer linear residual networks 
(ResNets).We prove that for training deep residual networks with certain linear transformations at input and output layers, which are fixed throughout training, both GD and SGD with zero initialization on all hidden weights can converge to the global minimum of the training loss.Moreover, when specializing to appropriate Gaussian random linear transformations, GD and SGD provably optimize wide enough deep linear ResNets.Compared with the global convergence result of GD for training standard deep linear networks \\citep{du2019width}, our condition on the neural network width is sharper by a factor of $O(\\kappa L)$, where $\\kappa$ denotes the condition number of the covariance matrix of the training data.In addition, for the first time we establish the global convergence of SGD for training deep linear ResNets and prove a linear convergence rate when the global minimum is $0$.", "target": ["Sotto certe condizioni sulle trasformazioni lineari in input e output, sia GD sia SGD possono raggiungere la convergenza globale per il training di ResNet lineari deep.", "Gli autori studiano la convergenza della discesa del gradiente nel training di residual networks lineari deep, e stabiliscono una convergenza globale di GD/SGD e tassi di convergenza lineare di GD/SGD.", "Studio delle proprietà di convergenza di GD e SGD su reti lineari deep, e dimostrazione che sotto certe condizioni sulle trasformazioni di input e output e con inizializzazione zero, GD e SGD convergono ai minimi globali."]} +{"source": "In this paper, we empirically investigate the training journey of deep neural networks relative to fully trained shallow machine learning models.We observe that the deep neural networks (DNNs) train by learning to correctly classify shallow-learnable examples in the early epochs before learning the harder examples.We build on this observation to suggest a way for partitioning the dataset into hard and easy subsets that can be used for improving the overall training process.Incidentally, we also found evidence of a subset of intriguing examples across all the datasets we considered, that were shallow learnable but not deep-learnable.In order to aid reproducibility, we also duly release our code for this work at https://github.com/karttikeya/Shallow_to_Deep/", "target": ["Analizziamo il processo di training delle reti deep e mostriamo che partono da un rapido apprendimento di esempi classificabili poco profondi e generalizzano lentamente a datapoint più difficili."]} +{"source": "While much recent work has targeted learning deep discrete latent variable models with variational inference, this setting remains challenging, and it is often necessary to make use of potentially high-variance gradient estimators in optimizing the ELBO.As an alternative, we propose to optimize a non-ELBO objective derived from the Bethe free energy approximation to an MRF's partition function.This objective gives rise to a saddle-point learning problem, which we train inference networks to approximately optimize.The derived objective requires no sampling, and can be efficiently computed for many MRFs of interest.We evaluate the proposed approach in learning high-order neural HMMs on text, and find that it often outperforms other approximate inference schemes in terms of true held-out log likelihood.At the same time, we find that all the approximate inference-based approaches to learning high-order neural HMMs we consider underperform learning with exact inference by a significant margin.", "target": ["Apprendimento di
MRF latenti deep con un obiettivo saddle-point derivato dall'approssimazione della funzione di partizione di Bethe.", "Un metodo per l'apprendimento di MRF deep a variabili latenti con un obiettivo di ottimizzazione che utilizza l'energia libera di Bethe, che risolve anche i vincoli sottostanti alle ottimizzazioni dell'energia libera di Bethe.", "Un obiettivo per l'apprendimento di MRF a variabili latenti basato sull'energia libera di Bethe e sull'inferenza ammortizzata, diverso dall'ottimizzazione dell'ELBO standard."]} +{"source": "In an explanation generation problem, an agent needs to identify and explain the reasons for its decisions to another agent.Existing work in this area is mostly confined to planning-based systems that use automated planning approaches to solve the problem.In this paper, we approach this problem from a new perspective, where we propose a general logic-based framework for explanation generation.In particular, given a knowledge base $KB_1$ that entails a formula $\\phi$ and a second knowledge base $KB_2$ that does not entail $\\phi$, we seek to identify an explanation $\\epsilon$ that is a subset of $KB_1$ such that the union of $KB_2$ and $\\epsilon$ entails $\\phi$.We define two types of explanations, model- and proof-theoretic explanations, and use cost functions to reflect preferences between explanations.Further, we present our algorithm implemented for propositional logic that computes such explanations and empirically evaluate it in random knowledge bases and a planning domain.", "target": ["Un framework generale per la generazione di spiegazioni utilizzando la logica.", "Questo articolo studia la generazione di spiegazioni da un punto di vista KR e conduce esperimenti che misurano la dimensione delle spiegazioni e il tempo di esecuzione su formule casuali e formule da un'istanza di Blocksworld.", "Questo articolo fornisce una prospettiva sulle spiegazioni tra due basi di conoscenza, ed è parallelo al lavoro sulla riconciliazione dei modelli nella letteratura sulla pianificazione."]} +{"source": "Recent theoretical work has demonstrated that deep neural networks have superior performance over shallow networks, but their training is more difficult, e.g., they suffer from the vanishing gradient problem.This problem can be typically resolved by the rectified linear unit (ReLU) activation.However, here we show that even for such activation, deep and narrow neural networks (NNs) will converge to erroneous mean or median states of the target function depending on the loss with high probability.Deep and narrow NNs are encountered in solving partial differential equations with high-order derivatives.We demonstrate this collapse of such NNs both numerically and theoretically, and provide estimates of the probability of collapse.We also construct a diagram of a safe region for designing NNs that avoid the collapse to erroneous states.Finally, we examine different ways of initialization and normalization that may avoid the collapse problem.Asymmetric initializations may reduce the probability of collapse but do not totally eliminate it.", "target": ["Le reti neurali deep e strette convergeranno verso stati medi o mediani errati della funzione obiettivo a seconda della loss con alta probabilità.", "Questo articolo studia le modalità di fallimento delle reti deep e strette, concentrandosi sui modelli più piccoli possibili per i quali si verifica il comportamento indesiderato.", "Questo articolo mostra che il training delle reti neurali ReLU deep convergerà a un
classificatore costante con alta probabilità rispetto all'inizializzazione casuale se le larghezze degli hidden layer sono troppo piccole."]} +{"source": "We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary.Our study shows that maximizing margins can be achieved by minimizing the adversarial loss on the decision boundary at the \"shortest successful perturbation\", demonstrating a close connection between adversarial losses and the margins.We propose Max-Margin Adversarial (MMA) training to directly maximize the margins to achieve adversarial robustness. Instead of adversarial training with a fixed $\\epsilon$, MMA offers an improvement by enabling adaptive selection of the \"correct\" $\\epsilon$ as the margin individually for each datapoint.In addition, we rigorously analyze adversarial training with the perspective of margin maximization, and provide an alternative interpretation for adversarial training, maximizing either a lower or an upper bound of the margins.Our experiments empirically confirm our theory and demonstrate MMA training's efficacy on the MNIST and CIFAR10 datasets w.r.t. $\\ell_\\infty$ and $\\ell_2$ robustness.", "target": ["Proponiamo il training MMA per massimizzare direttamente il margine dello spazio di input al fine di migliorare la robustezza avversaria principalmente rimuovendo il requisito di specificare un limite di distorsione fisso.", "Un approccio di training adversariale basato sul margine adattivo per addestrare DNN robuste, massimizzando il margine più stretto degli input al decision boundary, che rende possibile il training adversariale con grandi perturbazioni.", "Viene introdotto un metodo per l'apprendimento robusto contro gli attacchi avversari in cui il margine dello spazio di input è direttamente massimizzato e una variante softmax del max-margin."]} +{"source": "Many anomaly detection methods exist that perform well on low-dimensional problems however there is a notable lack of effective methods for high-dimensional spaces, such as images.Inspired by recent successes in deep learning we propose a novel approach to anomaly detection using generative adversarial networks.Given a sample under consideration, our method is based on searching for a good representation of that sample in the latent space of the generator; if such a representation is not found, the sample is deemed anomalous.
We achieve state-of-the-art performance on standard image benchmark datasets and visual inspection of the most anomalous samples reveals that our method does indeed return anomalies.", "target": ["Proponiamo un metodo per il rilevamento delle anomalie con le GAN cercando nello spazio latente del generatore delle buone rappresentazioni del campione.", "Gli autori propongono di utilizzare GAN per il rilevamento delle anomalie, un metodo basato sulla discesa del gradiente per aggiornare iterativamente le rappresentazioni latenti, e un nuovo tipo di aggiornamento dei parametri dei generatori.", "Un approccio basato su GAN per fare il rilevamento delle anomalie per immagini dove lo spazio latente del generatore viene esplorato per trovare una rappresentazione per un'immagine di test."]} +{"source": "Variational inference (VI) and Markov chain Monte Carlo (MCMC) are approximate posterior inference algorithms that are often said to have complementary strengths, with VI being fast but biased and MCMC being slower but asymptotically unbiased.In this paper, we analyze gradient-based MCMC and VI procedures and find theoretical and empirical evidence that these procedures are not as different as one might think.In particular, a close examination of the Fokker-Planck equation that governs the Langevin dynamics (LD) MCMC procedure reveals that LD implicitly follows a gradient flow that corresponds to a variational inference procedure based on optimizing a nonparametric normalizing flow.This result suggests that the transient bias of LD (due to too few warmup steps) may track that of VI (due to too few optimization steps), up to differences due to VI’s parameterization and asymptotic bias.Empirically, we find that the transient biases of these algorithms (and momentum-accelerated versions) do evolve similarly.This suggests that practitioners with a limited time budget may get more accurate results by running an MCMC procedure (even if it’s far from burned in) than a VI procedure, as long as the variance of the MCMC estimator can be dealt with (e.g., by running many parallel chains).", "target": ["Il comportamento transitorio degli algoritmi MCMC basati sul gradiente e di variational inference è più simile di quanto si possa pensare, mettendo in discussione l'affermazione che variational inference è più veloce di MCMC."]} +{"source": "Graph Convolutional Networks (GCNs) have recently been shown to be quite successful in modeling graph-structured data.However, the primary focus has been on handling simple undirected graphs.Multi-relational graphs are a more general and prevalent form of graphs where each edge has a label and direction associated with it.Most of the existing approaches to handle such graphs suffer from over-parameterization and are restricted to learning representations of nodes only.In this paper, we propose CompGCN, a novel Graph Convolutional framework which jointly embeds both nodes and relations in a relational graph.CompGCN leverages a variety of entity-relation composition operations from Knowledge Graph Embedding techniques and scales with the number of relations.It also generalizes several of the existing multi-relational GCN methods.We evaluate our proposed method on multiple tasks such as node classification, link prediction, and graph classification, and achieve demonstrably superior results.We make the source code of CompGCN available to foster reproducible research.", "target": ["Un framework di convoluzione dei grafi basato sulla composizione per i grafi multirelazionali.", 
"Gli autori sviluppano GCN su grafi multi-relazionali e propongono CompGCN, che sfrutta le intuizioni dei knowledge graph embeddings e impara le rappresentazioni dei nodi e delle relazioni per alleviare il problema dell'iper-parametrizzazione.", "Questo articolo introduce un framework GCN per i grafi multi-relazionali e generalizza diversi approcci esistenti per Knowledge Graph embeddings in un framework."]} +{"source": "State-of-the-art neural machine translation methods employ massive amounts of parameters.Drastically reducing computational costs of such methods without affecting performance has been up to this point unsolved.In this work, we propose a quantization strategy tailored to the Transformer architecture.We evaluate our method on the WMT14 EN-FR and WMT14 EN-DE translation tasks and achieve state-of-the-art quantization results for the Transformer, obtaining no loss in BLEU scores compared to the non-quantized baseline.We further compress the Transformer by showing that, once the model is trained, a good portion of the nodes in the encoder can be removed without causing any loss in BLEU.", "target": ["Quantizziamo completamente il transformer a 8 bit e miglioriamo la qualità della traduzione rispetto al modello a precisione completa.", "Un metodo di quantizzazione a 8 bit per quantizzare il modello di traduzione automatica Transformer, proponendo di usare una quantizzazione uniforme min-max durante l'inferenza e di usare i bucketing weights prima della quantizzazione per ridurre l'errore di quantizzazione.", "Un metodo per ridurre lo spazio di memoria richiesto da una tecnica di quantizzazione, focalizzato sulla riduzione per l'architettura Transformer."]} +{"source": "Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems.However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes.We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space.The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters.Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks.Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.", "target": ["Latent Embedding Optimization (LEO) è un nuovo meta-learner basato sul gradiente con prestazioni allo stato dell'arte sui difficili task di classificazione 5-way 1-shot e 5-shot miniImageNet e tieredImageNet.", "Un nuovo framework di meta-learning che impara lo spazio latente dipendente dai dati, esegue un adattamento veloce nello spazio latente, è efficace per il few-shot learning, ha un'inizializzazione dipendente dal task per l'adattamento, e funziona bene per la distribuzione multimodale del task.", "Questo articolo propone un metodo di ottimizzazione di embedding latente per il meta-learning, e sostiene che il contributo è quello di disaccoppiare le tecniche di meta-learning basate sull'ottimizzazione dallo spazio ad alta densità dei parametri del modello."]} +{"source": "We introduce an approach for augmenting model-free deep reinforcement 
learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability.Our architecture encodes an image as a set of vectors, and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene.In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and surpassed human grandmaster-level on four.In a novel navigation and planning task, our agent's performance and learning efficiency far exceeded non-relational baselines, and it was able to generalize to more complex scenes than it had experienced during training.Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent's intentions.The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases.Our experiments show this approach can offer advantages in efficiency, generalization, and interpretability, and can scale up to meet some of the most challenging test environments in modern artificial intelligence.", "target": ["I bias induttivi relazionali migliorano le capacità di generalizzazione fuori distribuzione negli agenti di reinforcement learning senza modello", "Un'architettura di rete relazionale condivisa per parametrizzare la rete actor and critic, focalizzata su algoritmi actor-critic advantage distribuiti, che migliora le tecniche di deep reinforcement learning senza modello con la conoscenza relazionale dell'ambiente in modo che gli agenti possano imparare rappresentazioni di stato interpretabili.", "Un'analisi e una valutazione quantitativa e qualitativa del meccanismo di self-attention combinato con le relation network nel contesto del RL senza modello."]} +{"source": "Image translation between two domains is a class of problems aiming to learn mapping from an input image in the source domain to an output image in the target domain.It has been applied to numerous applications, such as data augmentation, domain adaptation, and unsupervised training.When paired training data is not accessible, image translation becomes an ill-posed problem.We constrain the problem with the assumption that the translated image needs to be perceptually similar to the original image and also appears to be drawn from the new domain, and propose a simple yet effective image translation model consisting of a single generator trained with a self-regularization term and an adversarial term.We further notice that existing image translation techniques are agnostic to the subjects of interest and often introduce unwanted changes or artifacts to the input.Thus we propose to add an attention module to predict an attention map to guide the image translation process.The module learns to attend to key parts of the image while keeping everything else unaltered, essentially avoiding undesired artifacts or changes.The predicted attention map also opens door to applications such as unsupervised segmentation and saliency detection.Extensive experiments and evaluations show that our model while being simpler, achieves significantly better performance than existing image translation methods.", "target": ["Proponiamo un semplice modello generativo per unsupervised image translation e il rilevamento della saliency."]} +{"source": "Building deep neural networks to control autonomous
agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture.The central question of this work is, how the temporal nature of reality should be reflected in the execution of a deep neural network and its components.Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units.For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner.In contrast, all elements of a biological neural network are processed in parallel.In this paper, we define a class of networks between these two extreme cases.These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing.Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections.We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity.We lay out basic properties and discuss major challenges for layerwise-parallel networks.Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.", "target": ["Definiamo un concetto di reti neurali deep a layer model-parallel, per le quali i layer operano in parallelo, e forniamo un toolbox per progettare, addestrare, valutare e interagire online con queste reti.", "Un toolbox accelerato dalla GPU per l'aggiornamento parallelo dei neuroni, scritto in Theano, che supporta diversi ordini di aggiornamento in reti ricorrenti e reti con connessioni che saltano i layer.", "Un nuovo toolbox per l'apprendimento e la valutazione delle reti neurali deep, e proposta per un cambio di paradigma dalle reti sequenziali a layer alle reti parallele a layer."]} +{"source": "Deep neural networks are known to be vulnerable to adversarial perturbations.In this paper, we bridge adversarial robustness of neural nets with Lyapunov stability of dynamical systems.From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin's maximum principle, to train neural nets.This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust.The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently.Experiments show that our method effectively improves deep model's adversarial robustness.", "target": ["Un metodo di difesa avversaria che collega la robustezza delle reti neurali deep con la stabilità di Lyapunov", "Gli autori formulano il training delle NN come la ricerca di un controllore ottimale per un sistema dinamico discreto, permettendo loro di utilizzare il metodo delle approssimazioni successive per addestrare una NN in modo da essere più robusta agli attacchi avversari.", "Questo
articolo usa la visione teorica di una rete neurale come un ODE discretizzato per sviluppare una teoria del controllo robusto che mira al training della rete imponendo al tempo stesso la robustezza."]} +{"source": "In this paper, we propose a method named Dimensional reweighting Graph Convolutional Networks (DrGCNs), to tackle the problem of variance between dimensional information in the node representations of GCNs.We prove that DrGCNs can reduce the variance of the node representations by connecting our problem to the theory of the mean field.However, practically, we find that the degrees DrGCNs help vary severely on different datasets.We revisit the problem and develop a new measure K to quantify the effect.This measure guides when we should use dimensional reweighting in GCNs and how much it can help.Moreover, it offers insights to explain the improvement obtained by the proposed DrGCNs.The dimensional reweighting block is light-weighted and highly flexible to be built on most of the GCN variants.Carefully designed experiments, including several fixes on duplicates, information leaks, and wrong labels of the well-known node classification benchmark datasets, demonstrate the superior performances of DrGCNs over the existing state-of-the-art approaches.Significant improvements can also be observed on a large scale industrial dataset.", "target": ["Proponiamo uno schema di reweighting semplice ma efficace per le GCN, supportato teoricamente dalla teoria del campo medio.", "Un metodo, noto come DrGCN, per ripesare le diverse dimensioni delle rappresentazioni dei nodi nelle graph convolutional networks riducendo la varianza tra le dimensioni."]} +{"source": "Knowledge-grounded dialogue is a task of generating an informative response based on both discourse context and external knowledge.As we focus on better modeling the knowledge selection in the multi-turn knowledge-grounded dialogue, we propose a sequential latent variable model as the first approach to this matter.The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge; as a result, it can not only reduce the ambiguity caused from the diversity in knowledge selection of conversation but also better leverage the response information for proper choice of knowledge.Our experimental results show that the proposed model improves the knowledge selection accuracy and subsequently the performance of utterance generation.We achieve the new state-of-the-art performance on Wizard of Wikipedia (Dinan et al., 2019) as one of the most large-scale and challenging benchmarks.We further validate the effectiveness of our model over existing conversation methods in another knowledge-based dialogue Holl-E dataset (Moghe et al., 2018).", "target": ["Il nostro approccio è il primo tentativo di sfruttare un modello di variabile latente sequenziale per knowledge selection nel dialogo multi-turn knowledge-grounded. 
Raggiunge il nuovo stato dell'arte delle prestazioni sul benchmark Wizard of Wikipedia.", "Un modello sequenziale con variabili latenti per knowledge selection in dialogue generation che estende il modello di attention posteriore al latent knowledge selection problem e raggiunge prestazioni più elevate rispetto ai precedenti modelli allo stato dell'arte.", "Una nuova architettura per knowledge-grounded multi-turn dialogue selection che rende lo stato dell'arte sui benchmark di riferimento e ottiene punteggi più alti nelle valutazioni umane."]} +{"source": "Meta-learning, or learning-to-learn, has proven to be a successful strategy in attacking problems in supervised learning and reinforcement learning that involve small amounts of data.State-of-the-art solutions involve learning an initialization and/or learning algorithm using a set of training episodes so that the meta learner can generalize to an evaluation episode quickly.These methods perform well but often lack good quantification of uncertainty, which can be vital to real-world applications when data is lacking.We propose a meta-learning method which efficiently amortizes hierarchical variational inference across tasks, learning a prior distribution over neural network weights so that a few steps of Bayes by Backprop will produce a good task-specific approximate posterior.We show that our method produces good uncertainty estimates on contextual bandit and few-shot learning benchmarks.", "target": ["Proponiamo un metodo di meta-learning che ammortizza in modo efficiente la variational inference gerarchica attraverso gli episodi di allenamento.", "Un adattamento ai modelli di tipo MAML che tiene conto dell'incertezza posteriore nelle variabili latenti specifiche del task impiegando la variational inference per i parametri specifici del task in una visione gerarchica bayesiana di MAML.", "Gli autori considerano il meta-learning per imparare un prior sui pesi delle reti neurali, fatto tramite variational inference ammortizzata."]} +{"source": "Often we wish to transfer representational knowledge from one neural network to another.Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator.Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network.We demonstrate that this objective ignores important structural knowledge of the teacher network.This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data.We formulate this objective as contrastive learning.Experiments demonstrate that our resulting new objective outperforms knowledge distillation on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer.When combined with knowledge distillation, our method sets a state of the art in many transfer tasks, sometimes even outperforming the teacher network.", "target": ["Representation/knowledge distillation massimizzando l'informazione mutua tra teacher e student", "Questo articolo combina un obiettivo contrastive che misura l'informazione mutua tra le rappresentazioni apprese dalle reti di teacher e student per la distillazione dei modelli, e propone un modello con un miglioramento rispetto alle alternative esistenti sui task di distillazione."]} +{"source":
"Developing effective biologically plausible learning rules for deep neural networks is important for advancing connections between deep learning and neuroscience.To date, local synaptic learning rules like those employed by the brain have failed to match the performance of backpropagation in deep networks.In this work, we employ meta-learning to discover networks that learn using feedback connections and local, biologically motivated learning rules.Importantly, the feedback connections are not tied to the feedforward weights, avoiding any biologically implausible weight transport.It can be shown mathematically that this approach has sufficient expressivity to approximate any online learning algorithm.Our experiments show that the meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.Moreover, we demonstrate empirically that this model outperforms a state-of-the-art gradient-based meta-learning algorithm for continual learning on regression and classification benchmarks.This approach represents a step toward biologically plausible learning mechanisms that can not only match gradient descent-based learning, but also overcome its limitations.", "target": ["Le reti che imparano con connessioni di feedback e regole di plasticità locale possono essere ottimizzate per l'uso del meta learning."]} +{"source": "In the visual system, neurons respond to a patch of the input known as their classical receptive field (RF), and can be modulated by stimuli in the surround.These interactions are often mediated by lateral connections, giving rise to extra-classical RFs.We use supervised learning via backpropagation to learn feedforward connections, combined with an unsupervised learning rule to learn lateral connections between units within a convolutional neural network.These connections allow each unit to integrate information from its surround, generating extra-classical receptive fields for the units in our new proposed model (CNNEx).We demonstrate that these connections make the network more robust and achieve better performance on noisy versions of the MNIST and CIFAR-10 datasets.Although the image statistics of MNIST and CIFAR-10 differ greatly, the same unsupervised learning rule generalized to both datasets.Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding the computations implemented by feedforward connections when the input is unreliable.", "target": ["Le CNN con connessioni laterali biologicamente ispirate apprese in modo unsupervised sono più robuste agli input rumorosi."]} +{"source": "Deep learning (DL) has in recent years been widely used in naturallanguage processing (NLP) applications due to its superiorperformance.However, while natural languages are rich ingrammatical structure, DL has not been able to explicitlyrepresent and enforce such structures.This paper proposes a newarchitecture to bridge this gap by exploiting tensor productrepresentations (TPR), a structured neural-symbolic frameworkdeveloped in cognitive science over the past 20 years, with theaim of integrating DL with explicit language structures and rules.We call it the Tensor Product Generation Network(TPGN), and apply it to image captioning.The keyideas of TPGN are:1) unsupervised learning ofrole-unbinding vectors of words via a TPR-based deep neuralnetwork, and2) integration of TPR with typical DL architecturesincluding Long Short-Term Memory (LSTM) models.The novelty of ourapproach 
lies in its ability to generate a sentence and extract partial grammatical structure of the sentence by using role-unbinding vectors, which are obtained in an unsupervised manner.Experimental results demonstrate the effectiveness of the proposed approach.", "target": ["Questo articolo ha lo scopo di sviluppare un approccio di rappresentazione del prodotto tensoriale per applicazioni di natural language processing con deep learning."]} +{"source": "It is well-known that classifiers are vulnerable to adversarial perturbations.To defend against adversarial perturbations, various certified robustness results have been derived.However, existing certified robustnesses are limited to top-1 predictions.In many real-world applications, top-$k$ predictions are more relevant.In this work, we aim to derive certified robustness for top-$k$ predictions.In particular, our certified robustness is based on randomized smoothing, which turns any classifier to a new classifier via adding noise to an input example.We adopt randomized smoothing because it is scalable to large-scale neural networks and applicable to any classifier.We derive a tight robustness in $\\ell_2$ norm for top-$k$ predictions when using randomized smoothing with Gaussian noise.We find that generalizing the certified robustness from top-1 to top-$k$ predictions faces significant technical challenges.We also empirically evaluate our method on CIFAR10 and ImageNet.For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8\\% when the $\\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255).Our code is publicly available at: \\url{https://github.com/jjy1994/Certify_Topk}.", "target": ["Studiamo la robustezza certificata per le previsioni top-k attraverso lo smoothing randomizzato sotto il rumore gaussiano e deriviamo un limite di robustezza stretto nella norma L_2.", "Questo articolo estende il lavoro sulla deduzione di un raggio certificato usando lo smoothing randomizzato, e mostra il raggio al quale un classificatore smoothed sotto perturbazioni gaussiane è certificato per le prime k previsioni.", "Questo articolo si basa sulla tecnica di random smoothing per la predizione top-1, e mira a fornire una certificazione sulle predizioni top-k."]} +{"source": "Recent work has shown increased interest in using the Variational Autoencoder (VAE) framework to discover interpretable representations of data in an unsupervised way.These methods have focussed largely on modifying the variational cost function to achieve this goal.However, we show that methods like beta-VAE simplify the tendency of variational inference to underfit causing pathological over-pruning and over-orthogonalization of learned components.In this paper we take a complementary approach: to modify the probabilistic model to encourage structured latent variable representations to be discovered.Specifically, the standard VAE probabilistic model is unidentifiable: the likelihood of the parameters is invariant under rotations of the latent space.This means there is no pressure to identify each true factor of variation with a latent variable.We therefore employ a rich prior distribution, akin to the ICA model, that breaks the rotational symmetry.Extensive quantitative and qualitative experiments demonstrate that the proposed prior mitigates the trade-off introduced by modified cost functions like beta-VAE and TCVAE between reconstruction loss and disentanglement.The proposed prior allows to improve these approaches with respect to
both disentanglement and reconstruction quality significantly over the state of the art.", "target": ["Presentiamo prior strutturati per l'apprendimento unsupervised di rappresentazioni disentangled in VAEs che mitigano significativamente il trade-off tra disentanglement e loss di ricostruzione.", "Un framework generale per utilizzare la famiglia di distribuzioni L^p-nested come prior per il code vector di un VAE, dimostrando un MIG superiore.", "Gli autori sottolineano i problemi negli attuali approcci VAE e forniscono una nuova prospettiva sul compromesso tra ricostruzione e ortogonalizzazione per VAE, beta-VAE e beta-TCVAE."]} +{"source": "Due to the success of residual networks (resnets) and related architectures, shortcut connections have quickly become standard tools for building convolutional neural networks.The explanations in the literature for the apparent effectiveness of shortcuts are varied and often contradictory.We hypothesize that shortcuts work primarily because they act as linear counterparts to nonlinear layers.We test this hypothesis by using several variations on the standard residual block, with different types of linear connections, to build small (100k--1.2M parameter) image classification networks.Our experiments show that other kinds of linear connections can be even more effective than the identity shortcuts.Our results also suggest that the best type of linear connection for a given application may depend on both network width and depth.", "target": ["Generalizziamo i residual blocks ai tandem blocks, che usano mappe lineari arbitrarie invece di shortcut, e migliorano le prestazioni rispetto alle ResNet.", "Questo articolo esegue un'analisi delle shortcut connections nelle architetture ResNet-like, e propone di sostituire le identity shortcut con una alternativa con la convoluzione chiamata blocco tandem.", "Questo articolo studia l'effetto della sostituzione delle skip connections con identità con skip connections convoluzionali addestrabili in ResNet e trova che le prestazioni migliorino."]} +{"source": "Adam-typed optimizers, as a class of adaptive moment estimation methods with the exponential moving average scheme, have been successfully used in many applications of deep learning.Such methods are appealing for capability on large-scale sparse datasets.On top of that, they are computationally efficient and insensitive to the hyper-parameter settings.In this paper, we present a new framework for adapting Adam-typed methods, namely AdamT.Instead of applying a simple exponential weighted average, AdamT also includes the trend information when updating the parameters with the adaptive step size and gradients.The newly added term is expected to efficiently capture the non-horizontal moving patterns on the cost surface, and thus converge more rapidly.We show empirically the importance of the trend component, where AdamT outperforms the conventional Adam method constantly in both convex and non-convex settings.", "target": ["Presentiamo un nuovo framework per adattare i metodi di tipo Adam, cioè AdamT, per includere le informazioni sulla tendenza quando si aggiornano i parametri con step size e i gradienti adattivi.", "Un nuovo tipo di variante di Adam che usa il metodo lineare di Holt per calcolare il momento smoothed del primo ordine e del secondo ordine invece di usare la media pesata esponenziale."]} +{"source": "As machine learning methods see greater adoption and implementation in high stakes applications such as medical image diagnosis, the need for model
interpretability and explanation has become more critical.Classical approaches that assess feature importance (e.g. saliency maps) do not explain how and why a particular region of an image is relevant to the prediction.We propose a method that explains the outcome of a classification black-box by gradually exaggerating the semantic effect of a given class.Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually change the posterior probability from its original class to its negation.These counter-factually generated samples preserve features unrelated to the classification decision, such that a user can employ our method as a ``tuning knob'' to traverse a data manifold while crossing the decision boundary. Our method is model agnostic and only requires the output value and gradient of the predictor with respect to its input.", "target": ["Un metodo per spiegare un classificatore, generando una perturbazione visiva di un'immagine esagerando o diminuendo le feature semantiche che il classificatore associa a un'etichetta target.", "Un modello che quando viene data una query in input a una black-box, mira a spiegare il risultato fornendo variazioni plausibili e progressive alla query che possono portare a un cambiamento dell'output.", "Un metodo per spiegare l'output della classificazione black box delle immagini, che genera una perturbazione graduale degli output in risposta a richieste di input gradualmente perturbate."]} +{"source": "We study the problem of explaining a rich class of behavioral properties of deep neural networks.Our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on the property of interest using an axiomatically justified influence measure, and then providing an interpretation for the concepts these neurons represent.We evaluate our approach by training convolutional neural networks on Pubfig, ImageNet, and Diabetic Retinopathy datasets.
Our evaluation demonstrates that influence-directed explanations (1) localize features used by the network, (2) isolate features distinguishing related instances, (3) help extract the essence of what the network learned about the class, and (4) assist in debugging misclassifications.", "target": ["Noi presentiamo un approccio influence-directed alla costruzione di spiegazioni per il comportamento delle reti convoluzionali deep, e mostriamo come può essere usato per rispondere a un'ampia serie di domande che non potevano essere affrontate dal lavoro precedente.", "Un modo di misurare l'influenza che soddisfa certi assiomi, e una nozione di influenza che può essere utilizzata per identificare quale parte di input è più influente per l'output di un neurone in una deep neural network.", "Questo articolo propone di misurare l'influenza di singoli neuroni rispetto a una quantità di interesse rappresentata da un altro neurone."]} +{"source": "Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily.By contrast, humans have an incredible ability to do one-shot or few-shot learning.For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us.Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data.This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.", "target": ["Evidenziamo una tecnica con cui i sistemi di elaborazione del linguaggio naturale possono imparare una nuova parola dal contesto, permettendo loro di essere molto più flessibili.", "Una tecnica per sfruttare la conoscenza precedente per imparare embeddings per nuove parole con dati minimi."]} +{"source": "Recent research developing neural network architectures with external memory has often used the benchmark bAbI question and answering dataset which provides a challenging number of tasks requiring reasoning.Here we employed a classic associative inference task from the human neuroscience literature in order to more carefully probe the reasoning capacity of existing memory-augmented architectures.This task is thought to capture the essence of reasoning -- the appreciation of distant relationships among elements distributed across multiple facts or memories.Surprisingly, we found that current architectures struggle to reason over long distance associations.Similar results were obtained on a more complex task involving finding the shortest path between nodes in a path.We therefore developed a novel architecture, MEMO, endowed with the capacity to reason over longer distances.This was accomplished with the addition of two novel components.First, it introduces a separation between memories/facts stored in external memory and the items that comprise these facts in external memory.Second, it makes use of an adaptive retrieval mechanism, allowing a variable number of ‘memory hops’ before the answer is produced.MEMO is capable of solving our novel reasoning tasks, as well as all 20 tasks in bAbI.", "target": ["Un'architettura con memoria che supporta il ragionamento inferenziale.", "Questo articolo propone modifiche all'architettura End2End Memory Network, introduce un nuovo task Paired Associative Inference che la maggior
parte dei modelli esistenti fatica a risolvere, e mostra che l'architettura proposta risolve meglio il task.", "Un nuovo task (paired associate inference) tratto dalla psicologia cognitiva, e la proposta di una nuova architettura con memoria con caratteristiche che permettono una migliore performance sul task paired associate."]} +{"source": "Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency.They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count (the Xception architecture) and considerably reducing the number of parameters required to perform at a given level (the MobileNets family of architectures).Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results.In this work, we study how depthwise separable convolutions can be applied to neural machine translation.We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results.In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation.We also introduce a new super-separable convolution operation that further reduces the number of parameters and computational cost of the models.", "target": ["Le convoluzioni separabili in profondità migliorano la traduzione automatica neurale: più sono separabili, meglio è.", "Questo articolo propone di usare layer di convoluzione separabili in profondità in un modello di traduzione automatica neurale completamente convoluzionale, e introduce un nuovo layer di convoluzione super-separabile che riduce ulteriormente il costo computazionale."]} +{"source": "Interpreting generative adversarial network (GAN) training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has led to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs.For both classic GANs and f-GANs, there is an original variant of training and a \"non-saturating\" variant which uses an alternative form of generator gradient.The original variant is theoretically easier to study, but for GANs the alternative variant performs better in practice.The non-saturating scheme is often regarded as a simple modification to deal with optimization issues, but we show that in fact the non-saturating scheme for GANs is effectively optimizing a reverse KL-like f-divergence.We also develop a number of theoretical tools to help compare and classify f-divergences.We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training.", "target": ["Il training non saturante delle GAN minimizza efficacemente una f-divergenza inversa simile a KL.", "Questo articolo propone un'utile espressione della classe delle f-divergenze, indaga le proprietà teoriche delle f-divergenze popolari in strumenti di recente sviluppo, e studia le GAN con lo schema di training non saturante."]} +{"source": "We introduce a novel method for
converting text data into abstract image representations, which allows image-based processing techniques (e.g. image classification networks) to be applied to text-based comparison problems.We apply the technique to entity disambiguation of inventor names in US patents.The method involves converting text from each pairwise comparison between two inventor name records into a 2D RGB (stacked) image representation.We then train an image classification neural network to discriminate between such pairwise comparison images, and use the trained network to label each pair of records as either matched (same inventor) or non-matched (different inventors), obtaining highly accurate results (F1: 99.09%, precision: 99.41%, recall: 98.76%).Our new text-to-image representation method could potentially be used more broadly for other NLP comparison problems, such as disambiguation of academic publications, or for problems that require simultaneous classification of both text and images.", "target": ["Introduciamo un nuovo metodo di rappresentazione del testo che permette ai classificatori di immagini di essere applicati a problemi di text classification, e applichiamo il metodo alla disambiguazione dei nomi degli inventori.", "Un metodo per mappare una coppia di informazioni testuali in un'immagine RGB 2D che può essere alimentata a reti neurali convoluzionali 2D (classificatori di immagini).", "Gli autori considerano il problema della disambiguazione dei nomi per gli inventori di nomi di brevetti e propongono di costruire una rappresentazione di pagina come immagine delle due stringhe di nomi da confrontare e di applicare un classificatore di immagini."]} +{"source": "We propose a novel algorithm, Difference-Seeking Generative Adversarial Network (DSGAN), developed from traditional GAN.DSGAN considers the scenario that the training samples of target distribution, $p_{t}$, are difficult to collect.Suppose there are two distributions $p_{\\bar{d}}$ and $p_{d}$ such that the density of the target distribution can be the differences between the densities of $p_{\\bar{d}}$ and $p_{d}$.We show how to learn the target distribution $p_{t}$ only via samples from $p_{d}$ and $p_{\\bar{d}}$ (relatively easy to obtain).DSGAN has the flexibility to produce samples from various target distributions (e.g. 
the out-of-distribution).Two key applications, semi-supervised learning and adversarial training, are taken as examples to validate the effectiveness of DSGAN.We also provide theoretical analyses about the convergence of DSGAN.", "target": ["Abbiamo proposto il modello \"Difference-Seeking Generative Adversarial Network\" (DSGAN) per imparare la distribuzione target per la quale è difficile raccogliere dati di formazione.", "Questo articolo presenta DS-GAN, che mira ad apprendere la differenza tra due distribuzioni qualsiasi i cui campioni sono difficili o impossibili da raccogliere, e mostra la sua efficacia nell'apprendimento semi-supervised e nei task di adversarial training.", "Questo articolo considera il problema dell'apprendimento di una GAN per catturare una distribuzione target con solo pochi campioni di training disponibili per quella distribuzione ."]} +{"source": "Recently, Generative Adversarial Network (GAN) and numbers of its variants have been widely used to solve the image-to-image translation problem and achieved extraordinary results in both a supervised and unsupervised manner.However, most GAN-based methods suffer from the imbalance problem between the generator and discriminator in practice.Namely, the relative model capacities of the generator and discriminator do not match, leading to mode collapse and/or diminished gradients.To tackle this problem, we propose a GuideGAN based on attention mechanism.More specifically, we arm the discriminator with an attention mechanism so not only it estimates the probability that its input is real, but also does it create an attention map that highlights the critical features for such prediction.This attention map then assists the generator to produce more plausible and realistic images.We extensively evaluate the proposed GuideGAN framework on a number of image transfer tasks.Both qualitative results and quantitative comparison demonstrate the superiority of our proposed approach.", "target": ["Un metodo generale che migliora le prestazioni di image translation del framework GAN utilizzando un discriminatore integrato con attention", "Un meccanismo di feedback nel framework GAN che migliora la qualità delle immagini generate nella image-to-image translation, e il cui discriminatore produce una mappa che indica dove il generatore dovrebbe concentrarsi per rendere i suoi risultati più convincenti.", "Proposta di una GAN con un discriminatore basato sull'attention per la traduzione I2I che fornisce la probabilità di vero/falso e una mappa di attention che riflette la salienza per la generazione dell'immagine."]} +{"source": "The problem of verifying whether a textual hypothesis holds based on the given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation.However, existing studies are mainly restricted to dealing with unstructured evidence (e.g., natural language sentences and documents, news, etc), while verification under structured evidence, such as tables, graphs, and databases, remains unexplored.This paper specifically aims to study the fact verification given semi-structured data as evidence.To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED.TabFact is challenging since it involves both soft linguistic reasoning and hard symbolic reasoning.To address these reasoning challenges, we 
design two different models: Table-BERT and Latent Program Algorithm (LPA).Table-BERT leverages the state-of-the-art pre-trained language model to encode the linearized tables and statements into continuous vectors for verification.LPA parses statements into LISP-like programs and executes them against the tables to obtain the returned binary value for verification.Both methods achieve similar accuracy but still lag far behind human performance.We also perform a comprehensive analysis to demonstrate great future opportunities.", "target": ["Proponiamo un nuovo dataset per investigare il problema di entailment sotto la tabella semi-strutturata come premessa", "Questo articolo propone un nuovo dataset per fact verification basata sulle tabelle e introduce metodi per questo task.", "Gli autori propongono il problema di fact verification con fonti di dati semi-strutturati come le tabelle, creano un nuovo set di dati e valutano le baseline con variazioni."]} +{"source": "This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs.First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes.Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs.We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process.Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently.We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art.", "target": ["Sviluppiamo un'architettura di deep graph matching che raffina le corrispondenze iniziali al fine di raggiungere il consenso del neighborhood.", "Un framework per rispondere alle domande di graph matching che consiste in embeddings di nodi locali con una fase di raffinamento con message passing.", "Un'architettura basata su GNN a due stadi per stabilire le corrispondenze tra due grafi che si comporta bene nei task reali di image matching e di entity alignment nei knowledge graph."]} +{"source": "This paper extends the proof of density of neural networks in the space of continuous (or even measurable) functions on Euclidean spaces to functions on compact sets of probability measures.By doing so the work parallels a more than a decade old results on mean-map embedding of probability measures in reproducing kernel Hilbert spaces. 
The work has wide practical consequences for multi-instance learning, where it theoretically justifies some recently proposed constructions.The result is then extended to Cartesian products, yielding universal approximation theorem for tree-structured domains, which naturally occur in data-exchange formats like JSON, XML, YAML, AVRO, and ProtoBuffer.This has important practical implications, as it enables to automatically create an architecture of neural networks for processing structured data (AutoML paradigms), as demonstrated by an accompanied library for JSON format.", "target": ["Questo articolo estende la prova di densità delle reti neurali nello spazio delle funzioni continue (o anche misurabili) su spazi euclidei alle funzioni su insiemi compatti di misure di probabilità.", "Questo articolo studia le proprietà di approssimazione di una famiglia di reti neurali progettate per affrontare problemi di learning multistanza, e mostra che i risultati per le architetture standard ad un layer si estendono a questi modelli.", "Questo articolo generalizza il teorema di approssimazione universale alle funzioni reali sullo spazio delle misure."]} +{"source": "Interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models.We propose Mahé, a novel approach to provide Model-Agnostic Hierarchical Explanations of how powerful machine learning models, such as deep neural networks, capture these interactions as either dependent on or free of the context of data instances.Specifically, Mahé provides context-dependent explanations by a novel local interpretation algorithm that effectively captures any-order interactions, and obtains context-free explanations through generalizing context-dependent interactions to explain global behaviors.Experimental results show that Mahé obtains improved local interaction interpretations over state-of-the-art methods and successfully provides explanations of interactions that are context-free.", "target": ["Un nuovo framework per le spiegazioni delle predizioni dipendenti dal contesto e senza contesto", "Gli autori estendono il metodo di attribuzione locale lineare LIME per interpretare i modelli black box, e propongono un metodo per discernere tra interazioni dipendenti dal contesto e senza contesto.", "Un metodo che può fornire spiegazioni gerarchiche per un modello, includendo sia spiegazioni dipendenti dal contesto che libere dal contesto tramite un algoritmo di interpretazione locale."]} +{"source": "To realize the promise of ubiquitous embedded deep network inference, it is essential to seek limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. 
Here, for the first time, we demonstrate ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3, densenet-161, and VGG-16bn networks on the ImageNet classification benchmark that, at 8-bit precision exceed the accuracy of the full-precision baseline networks after one epoch of finetuning, thereby leveraging the availability of pretrained models.We also demonstrate ResNet-18, ResNet-34, and ResNet-50 4-bit models that match the accuracy of the full-precision baseline networks -- the highest scores to date.Surprisingly, the weights of the low-precision networks are very close (in cosine similarity) to the weights of the corresponding baseline networks, making training from scratch unnecessary.We find that gradient noise due to quantization during training increases with reduced precision, and seek ways to overcome this noise.The number of iterations required by stochastic gradient descent to achieve a given training error is related to the square of (a) the distance of the initial solution from the final plus (b) the maximum variance of the gradient estimates. By drawing inspiration from this observation, we (a) reduce solution distance by starting with pretrained fp32 precision baseline networks and fine-tuning, and (b) combat noise introduced by quantizing weights and activations during training, by using larger batches along with matched learning rate annealing. Sensitivity analysis indicates that these techniques, coupled with proper activation function range calibration, offer a promising heuristic to discover low-precision networks, if they exist, close to fp32 precision baseline networks.", "target": ["Il finetuning dopo la quantizzazione corrisponde o supera le reti a piena precisione allo stato dell'arte sia a 8 che a 4 bit.", "Questo articolo propone di migliorare le prestazioni dei modelli a bassa precisione facendo la quantizzazione sui modelli pre-trained, usando grandi batch size, e usando un adeguato annealing del learning rate con un tempo di training più lungo.", "Un metodo per una bassa quantizzazione dei bit per permettere l'inferenza su un hardware efficiente che raggiunge la piena accuratezza su ResNet50 con pesi e attivazioni a 4 bit, basato sull'osservazione che il fine-tuning a bassa precisione introduce rumore nel gradiente."]} +{"source": "Analysis methods which enable us to better understand the representations and functioning of neural models of language are increasingly needed as deep learning becomes the dominant approach in NLP.Here we present two methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) which allow us to directly quantify how strongly the information encoded in neural activation patterns corresponds to information represented by symbolic structures such as syntax trees.We first validate our methods on the case of a simple synthetic language for arithmetic expressions with clearly defined syntax and semantics, and show that they exhibit the expected pattern of results.We then apply our methods to correlate neural representations of English sentences with their constituency parse trees.", "target": ["Due metodi basati su Representational Similarity Analysis (RSA) e Tree Kernels (TK) che quantificano direttamente quanto l'informazione codificata nei modelli di attivazione neurale corrisponde all'informazione rappresentata da strutture simboliche."]} +{"source": "Supervised deep learning requires a large amount of training samples with annotations (e.g. 
label class for classification task, pixel- or voxel-wised label map for segmentation tasks), which are expensive and time-consuming to obtain.During the training of a deep neural network, the annotated samples are fed into the network in a mini-batch way, where they are often regarded of equal importance.However, some of the samples may become less informative during training, as the magnitude of the gradient start to vanish for these samples.In the meantime, other samples of higher utility or hardness may be more demanded for the training process to proceed and require more exploitation.To address the challenges of expensive annotations and loss of sample informativeness, here we propose a novel training framework which adaptively selects informative samples that are fed to the training process.The adaptive selection or sampling is performed based on a hardness-aware strategy in the latent space constructed by a generative model.To evaluate the proposed training framework, we perform experiments on three different datasets, including MNIST and CIFAR-10 for image classification task and a medical image dataset IVUS for biophysical simulation task.On all three datasets, the proposed framework outperforms a random sampling method, which demonstrates the effectiveness of our framework.", "target": ["Questo articolo introduce un framework per l'apprendimento data-efficient di rappresentazioni attraverso il campionamento adattivo nello spazio latente.", "Un metodo per la selezione sequenziale e adattiva dei training examples da presentare all'algoritmo di training, dove la selezione avviene nello spazio latente basato sulla scelta dei campioni nella direzione del gradiente della loss.", "Un metodo per selezionare in modo efficiente i campioni difficili durante il training delle reti neurali, ottenuto tramite un variational auto-encoder che codifica i campioni in uno spazio latente."]} +{"source": "Existing methods for AI-generated artworks still struggle with generating high-quality stylized content, where high-level semantics are preserved, or separating fine-grained styles from various artists.We propose a novel Generative Adversarial Disentanglement Network which can disentangle two complementary factors of variations when only one of them is labelled in general, and fully decompose complex anime illustrations into style and content in particular.Training such model is challenging, since given a style, various content data may exist but not the other way round.Our approach is divided into two stages, one that encodes an input image into a style independent content, and one based on a dual-conditional generator.We demonstrate the ability to generate high-fidelity anime portraits with a fixed content and a large variety of styles from over a thousand artists, and vice versa, using a single end-to-end network and with applications in style transfer.We show this unique capability as well as superior output to the current state-of-the-art.", "target": ["Un metodo basato sull'adversarial training per distinguere due insiemi complementari di variazioni in un dataset in cui solo uno di essi è etichettato, testato su stile vs. 
contenuto nelle illustrazioni di anime.", "Un metodo di generazione di immagini che combina GAN condizionali e VAE condizionali che genera immagini di anime ad alta fedeltà con vari stili di vari artisti.", "Proposta di un metodo per imparare rappresentazioni disgiunte di stile (artista) e contenuto negli anime."]} +{"source": "Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns.Inspired by the intuition that humans are more sensitive to the lower-frequency (larger-scale) patterns we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel.We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness.Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.", "target": ["Introduciamo una regolarizzazione di smoothness per i kernel convoluzionali di CNN che può aiutare a migliorare la robustezza avversaria e portare a gradienti percettivamente allineati", "Questo articolo propone un nuovo schema di regolarizzazione che incoraggia i kernel convoluzionali ad essere più smooth, sostenendo che ridurre la dipendenza della rete neurale dalle componenti ad alta frequenza aiuta la robustezza contro gli esempi adversarial.", "Gli autori propongono un metodo per l'apprendimento di kernel convoluzionali più smooth, in particolare, un regolarizzatore che penalizza i grandi cambiamenti tra pixel consecutivi del kernel con l'intuizione di penalizzare l'uso di componenti di input ad alta frequenza."]} +{"source": "Despite an ever growing literature on reinforcement learning algorithms and applications, much less is known about their statistical inference.In this paper, we investigate the large-sample behaviors of the Q-value estimates with closed-form characterizations of the asymptotic variances.This allows us to efficiently construct confidence regions for Q-value and optimal value functions, and to develop policies to minimize their estimation errors.This also leads to a policy exploration strategy that relies on estimating the relative discrepancies among the Q estimates.Numerical experiments show superior performances of our exploration strategy than other benchmark approaches.", "target": ["Indaghiamo il comportamento su grandi campioni delle stime dei valori Q e abbiamo proposto una strategia di esplorazione efficiente che si basa sulla stima delle discrepanze relative tra le stime Q."]} +{"source": "Entailment vectors are a principled way to encode in a vector what information is known and what is unknown. They are designed to model relations where one vector should include all the information in another vector, called entailment. This paper investigates the unsupervised learning of entailment vectors for the semantics of words. 
Using simple entailment-based models of the semantics of words in text (distributional semantics), we induce entailment-vector word embeddings which outperform the best previous results for predicting entailment between words, in unsupervised and semi-supervised experiments on hyponymy.", "target": ["Addestriamo word embeddings basati sull'entailment invece che sulla somiglianza, prevedendo con successo l'entailment lessicale.", "L'articolo presenta un algoritmo di word embedding per l'entailment lessicale che segue il lavoro di Henderson e Popa (ACL, 2016)."]} +{"source": "We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner.Our scheme pits two versions of the same agent, Alice and Bob, against one another.Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on two kinds of environments: (nearly) reversible environments and environments that can be reset.Alice will \"propose\" the task by doing a sequence of actions and then Bob must undo or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent.When Bob is deployed on an RL task within the environment, this unsupervised training reduces the number of supervised episodes needed to learn, and in some cases converges to a higher reward.", "target": ["Unsupervised learning per reinforcement learning utilizzando un curriculum automatico di self-play", "Una nuova formulazione per esplorare l'ambiente in modo unsupervised per aiutare un task specifico in seguito, dove un agente propone task sempre più difficili e l'agente che impara cerca di realizzarli.", "Un modello di self-play in cui un agente impara a proporre task che sono facili per lui ma difficili per un avversario, creando un moving target di obiettivi di self-play e curriculum di apprendimento."]} +{"source": "Many real-world data sets are represented as graphs, such as citation links, social media, and biological interaction.The volatile graph structure makes it non-trivial to employ convolutional neural networks (CNN's) for graph data processing.Recently, graph attention network (GAT) has proven a promising attempt by combining graph neural networks with attention mechanism, so as to achieve message passing in graphs with arbitrary structures.However, the attention in GAT is computed mainly based on the similarity between the node content, while the structures of the graph remains largely unemployed (except in masking the attention out of one-hop neighbors).In this paper, we propose an \"ADaptive Structural Fingerprint\" (ADSF) model to fully exploit both topological details of the graph and content features of the nodes.The key idea is to contextualize each node with a weighted, learnable receptive field encoding rich and diverse local graph structures.By doing this, structural interactions between the nodes can be inferred accurately, thus improving subsequent attention layer as well as the convergence of learning.Furthermore, our model provides a useful platform for different subspaces of node features and various scales of graph structures to ``cross-talk'' with each other through the learning of multi-head attention, being particularly useful in handling complex real-world data. 
Encouraging performance is observed on a number of benchmark data sets in node classification.", "target": ["Sfruttare i ricchi dettagli strutturali nei dati strutturati a grafo tramite \"impronte digitali strutturali\" adattive", "Una metodologia basata sulla struttura del grafo per arricchire il meccanismo di attention delle graph neural networks, con l'idea principale di esplorare le interazioni tra diversi tipi di nodi nelle vicinanze di un nodo radice.", "Questo articolo estende l'idea di self-attention nelle NN a grafo, che è tipicamente basata sulla somiglianza delle caratteristiche tra i nodi, per includere la somiglianza strutturale."]} +{"source": "Informed and robust decision making in the face of uncertainty is critical for robots that perform physical tasks alongside people.We formulate this as a Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs).While Bayes-optimality is theoretically the gold standard, existing algorithms do not scale well to continuous state and action spaces.We propose a scalable solution that builds on the following insight: in the absence of uncertainty, each latent MDP is easier to solve.We split the challenge into two simpler components.First, we obtain an ensemble of clairvoyant experts and fuse their advice to compute a baseline policy.Second, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty.Our algorithm, Bayesian Residual Policy Optimization (BRPO), imports the scalability of policy gradient methods as well as the initialization from prior models.BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.", "target": ["Proponiamo un algoritmo scalabile di Bayesian Reinforcement Learning che impara una correzione bayesiana su un ensemble di esperti chiaroveggenti per risolvere problemi con reward e dinamiche latenti complesse.", "Questo articolo considera il problema del reinforcement learning bayesiano su processi decisionali di Markov latenti (MDP) prendendo decisioni con esperti.", "In questo articolo, gli autori motivano e propongono un algoritmo di apprendimento, chiamato Bayesian Residual Policy Optimization (BRPO), per problemi di reinforcement learning bayesiani."]} +{"source": "One of the mysteries in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth.This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks.For an $m$ hidden node shallow neural network with ReLU activation and $n$ training data, we show as long as $m$ is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function.Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum.We believe these insights are also useful in analyzing deep models and other first order methods.", "target": ["Dimostriamo che gradient descent raggiunge un valore di zero nella loss di training con un rate lineare su reti neurali sovra-parametrizzate.", "Questo lavoro 
considera l'ottimizzazione di una rete ReLU a due layer sovraparametrizzata con la loss quadratica e con un dataset con etichette arbitrarie.", "Questo articolo studia le reti neurali con un hidden layer con loss quadratica, dove dimostrano che nell'impostazione sovra-parametrizzata, l'inizializzazione casuale e la gradient descent arrivano a loss zero."]} +{"source": "For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements.Often, the forward process from parameter- to measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement.To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined.We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs).Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost.Due to invertibility, a model of the corresponding inverse process is learned implicitly.Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space.We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters.", "target": ["Analizzare problemi inversi con reti neurali invertibili", "L'autore propone di utilizzare le reti invertibili per risolvere problemi inversi ambigui e suggerisce di non addestrare solo il modello forward, ma anche il modello inverso con un critico MMD.", "Il paper propone una rete invertibile con osservazioni per la probabilità posteriore di distribuzioni di input complesse con uno schema di training bidirezionale teoricamente valido."]} +{"source": "Decisions made by machine learning systems have increasing influence on the world.Yet it is common for machine learning algorithms to assume that no such influence exists.An example is the use of the i.i.d.assumption in online learning for applications such as content recommendation, where the (choice of) content displayed can change users' perceptions and preferences, or even drive them away, causing a shift in the distribution of users.Generally speaking, it is possible for an algorithm to change the distribution of its own inputs.We introduce the term self-induced distributional shift (SIDS) to describe this phenomenon.A large body of work in reinforcement learning and causal machine learning aims to deal with distributional shift caused by deploying learning systems previously trained offline.Our goal is similar, but distinct: we point out that changes to the learning algorithm, such as the introduction of meta-learning, can reveal hidden incentives for distributional shift (HIDS), and aim to diagnose and prevent problems associated with hidden incentives.We design a simple environment as a \"unit test\" for HIDS, as well as a content recommendation environment which allows us to disentangle different types of SIDS.  
We demonstrate the potential for HIDS to cause unexpected or undesirable behavior in these environments, and propose and test a mitigation strategy.", "target": ["Le metriche di performance sono specifiche incomplete; il fine non sempre giustifica i mezzi.", "Gli autori mostrano come il meta-learning rivela gli incentivi nascosti per il distributional shift e propongono un approccio basato sullo scambio di learner tra gli ambienti per ridurre il distributional shift introdotto da loro stessi.", "L'articolo generalizza l'incentivo intrinseco per il learner a vincere rendendo il task più facile nel meta-learning a una classe più ampia di problemi."]} +{"source": "In one-class-learning tasks, only the normal case can be modeled with data, whereas the variation of all possible anomalies is too large to be described sufficiently by samples.Thus, due to the lack of representative data, the wide-spread discriminative approaches cannot cover such learning tasks, and rather generative models, which attempt to learn the input density of the normal cases, are used.However, generative models suffer from a large input dimensionality (as in images) and are typically inefficient learners.We propose to learn the data distribution more efficiently with a multi-hypotheses autoencoder.Moreover, the model is criticized by a discriminator, which prevents artificial data modes not supported by data, and which enforces diversity across hypotheses.This consistency-based anomaly detection (ConAD) framework allows the reliable identification of out-of-distribution samples.For anomaly detection on CIFAR-10, it yields up to 3.9% points improvement over previously reported results.On a real anomaly detection task, the approach reduces the error of the baseline models from 6.8% to 1.5%.", "target": ["Proponiamo un approccio di rilevamento delle anomalie che combina la modellazione della classe di primo piano tramite densità locali multiple con adversarial training.", "L'articolo propone una tecnica per rendere i modelli generativi più robusti rendendoli coerenti con la densità locale."]} +{"source": "Generative Adversarial Networks (GAN) can achieve promising performance on learning complex data distributions on different types of data.In this paper, we first show that a straightforward extension of an existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data.We propose a two fold modification to a GAN algorithm to be able to generate point clouds (PC-GAN).First, we combine ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process.A key component of our method is that we train a posterior inference network for the hidden variables.Second, PC-GAN defines a generic framework that can incorporate many existing GAN algorithms.We further propose a sandwiching objective, which results in a tighter Wasserstein distance estimate than the commonly used dual form in WGAN.We validate our claims on the ModelNet40 benchmark dataset and observe that PC-GAN trained by the sandwiching objective achieves better results on test data than existing methods.We also conduct studies on several tasks, including generalization on unseen point clouds, latent space interpolation, classification, and image to point clouds transformation, to demonstrate the versatility of the proposed PC-GAN algorithm.", "target": ["Proponiamo una variante GAN che impara a generare nuvole di punti. 
Diversi studi sono stati esplorati, tra cui una stima più precisa della distanza di Wasserstein, la generazione condizionale, la generalizzazione a nuvole di punti non visti e l'image to point cloud.", "Questo articolo propone di utilizzare GAN per generare nuvole di punti 3D e introduce un obiettivo sandwich, facendo la media tra il limite superiore e inferiore della distanza di Wasserstein tra le distribuzioni.", "Questo articolo propone un nuovo modello generativo per dati non ordinati, con una particolare applicazione alle nuvole di punti, che include un metodo di inferenza e una nuova funzione obiettivo."]} +{"source": "Existing attention mechanisms, are mostly item-based in that a model is trained to attend to individual items in a collection (the memory) where each item has a predefined, fixed granularity, e.g., a character or a word.Intuitively, an area in the memory consisting of multiple items can be worth attending to as a whole.We propose area attention: a way to attend to an area of the memory, where each area contains a group of items that are either spatially adjacent when the memory has a 2-dimensional structure, such as images, or temporally adjacent for 1-dimensional memory, such as natural language sentences.Importantly, the size of an area, i.e., the number of items in an area or the level of aggregation, is dynamically determined via learning, which can vary depending on the learned coherence of the adjacent items.By giving the model the option to attend to an area of items, instead of only individual items, a model can attend to information with varying granularity.Area attention can work along multi-head attention for attending to multiple areas in the memory.We evaluate area attention on two tasks: neural machine translation (both character and token-level) and image captioning, and improve upon strong (state-of-the-art) baselines in all the cases.These improvements are obtainable with a basic form of area attention that is parameter free.In addition to proposing the novel concept of area attention, we contribute an efficient way for computing it by leveraging the technique of summed area tables.", "target": ["L'articolo presenta un nuovo approccio per i meccanismi di attention che può beneficiare una serie di task come la traduzione automatica e la didascalia delle immagini.", "Questo articolo estende gli attuali modelli di attention dal livello di parola alla combinazione di parole adiacenti, applicando i modelli agli oggetti composti da parole adiacenti fuse."]} +{"source": "We identify a phenomenon, which we refer to as *multi-model forgetting*, that occurs when sequentially training multiple deep networks with partially-shared parameters; the performance of previously-trained models degrades as one optimizes a subsequent one, due to the overwriting of shared parameters.To overcome this, we introduce a statistically-justified weight plasticity loss that regularizes the learning of a model's shared parameters according to their importance for the previous models, and demonstrate its effectiveness when training two models sequentially and for neural architecture search.Adding weight plasticity in neural architecture search preserves the best models to the end of the search and yields improved results in both natural language processing and computer vision tasks.", "target": ["Identifichiamo un fenomeno, il lavaggio del cervello neurale, e introduciamo una loss di plasticità del peso statisticamente giustificata per superarlo.", "Questo articolo discute il fenomeno 
del \"lavaggio del cervello neurale\", che si riferisce al fatto che le prestazioni di un modello sono influenzate da un altro modello che ne condivide i parametri."]} +{"source": "Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery.However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations.To address this issue we introduce Morpho-MNIST, a framework that aims to answer: \"to what extent has my model learned to represent specific factors of variation in the data?\"We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity.We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation.", "target": ["Questo articolo introduce Morpho-MNIST, una collezione di metriche di forma e perturbazioni, in un passo verso la valutazione quantitativa del representation learning.", "Questo articolo discute il problema della valutazione e della diagnosi delle rappresentazioni apprese utilizzando un modello generativo.", "Gli autori presentano un insieme di criteri per categorizzare le cifre di MNIST e un insieme di perturbazioni interessanti per modificare il dataset MNIST."]} +{"source": "Exploration in environments with sparse rewards is a key challenge for reinforcement learning.How do we design agents with generic inductive biases so that they can explore in a consistent manner instead of just using local exploration schemes like epsilon-greedy?We propose an unsupervised reinforcement learning agent which learns a discrete pixel grouping model that preserves spatial geometry of the sensors and implicitly of the environment as well.We use this representation to derive geometric intrinsic reward functions, like centroid coordinates and area, and learn policies to control each one of them with off-policy learning.These policies form a basis set of behaviors (options) which allows us explore in a consistent way and use them in a hierarchical reinforcement learning setup to solve for extrinsically defined rewards.We show that our approach can scale to a variety of domains with competitive performance, including navigation in 3D environments and Atari games with sparse rewards.", "target": ["esplorazione strutturata nel deep reinforcement learning attraverso la scoperta e il controllo dell'astrazione visiva unsupervised", "L'articolo introduce astrazioni visive che sono utilizzate per il reinforcement learning, dove un algoritmo impara a \"controllare\" ogni astrazione così come a selezionare le opzioni per raggiungere il task complessivo."]} +{"source": "Combinatorial optimization is a common theme in computer science.While in general such problems are NP-Hard, from a practical point of view, locally optimal solutions can be useful.In some combinatorial problems however, it can be hard to define meaningful solution neighborhoods that connect large portions of the search space, thus hindering methods that search this space directly.We suggest to circumvent such cases by utilizing a policy gradient algorithm that transforms the problem to the continuous domain, and to 
optimize a new surrogate objective that renders the former as generic stochastic optimizer.This is achieved by producing a surrogate objective whose distribution is fixed and predetermined, thus removing the need to fine-tune various hyper-parameters in a case by case manner.Since we are interested in methods which can successfully recover locally optimal solutions, we use the problem of finding locally maximal cliques as a challenging experimental benchmark, and we report results on a large dataset of graphs that is designed to test clique finding algorithms.Notably, we show in this benchmark that fixing the distribution of the surrogate is key to consistently recovering locally optimal solutions, and that our surrogate objective leads to an algorithm that outperforms other methods we have tested in a number of measures.", "target": ["Un nuovo algoritmo policy gradient progettato per affrontare problemi di ottimizzazione combinatoria black-box. L'algoritmo si basa solo su valutazioni di funzioni e restituisce soluzioni localmente ottimali con alta probabilità.", "L'articolo propone un approccio per costruire obiettivi surrogati per l'applicazione dei metodi policy gradient all'ottimizzazione combinatoria con lo scopo di ridurre la necessità di sintonizzazione degli iperparametri.", "L'articolo propone di sostituire il termine di reward nell'algoritmo policy gradient con la sua distribuzione cumulativa empirica centrata."]} +{"source": "Deterministic neural networks (NNs) are increasingly being deployed in safety critical domains, where calibrated, robust and efficient measures of uncertainty are crucial.While it is possible to train regression networks to output the parameters of a probability distribution by maximizing a Gaussian likelihood function, the resulting model remains oblivious to the underlying confidence of its predictions.In this paper, we propose a novel method for training deterministic NNs to not only estimate the desired target but also the associated evidence in support of that target.We accomplish this by placing evidential priors over our original Gaussian likelihood function and training our NN to infer the hyperparameters of our evidential distribution.We impose priors during training such that the model is penalized when its predicted evidence is not aligned with the correct output.Thus the model estimates not only the probabilistic mean and variance of our target but also the underlying uncertainty associated with each of those parameters.We observe that our evidential regression method learns well-calibrated measures of uncertainty on various benchmarks, scales to complex computer vision tasks, and is robust to adversarial input perturbations.", "target": ["Stima dell'incertezza veloce e calibrata per reti neurali senza campionamento", "Questo articolo propone un nuovo approccio per stimare la fiducia delle predizioni in un ambiente di regressione, aprendo la porta ad applicazioni online con stime di incertezza completamente integrate.", "Questo articolo ha proposto la regressione evidenziale deep, un metodo per il training delle reti neurali per stimare non solo l'output ma anche le prove associate a sostegno di quell'output."]} +{"source": "The Lottery Ticket Hypothesis from Frankle & Carbin (2019) conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and yield superior performance than their original counterparts.The proposed algorithm to search for such sub-networks (winning tickets), Iterative 
Magnitude Pruning (IMP), consistently finds sub-networks with 90-95% less parameters which indeed train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning.In this paper, we propose a new algorithm to search for winning tickets, Continuous Sparsification, which continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies.We show empirically that our method is capable of finding tickets that outperforms the ones learned by Iterative Magnitude Pruning, and at the same time providing up to 5 times faster search, when measured in number of training epochs.", "target": ["Proponiamo un nuovo algoritmo che trova rapidamente i winning ticket nelle reti neurali.", "Questo articolo propone una nuova funzione obiettivo che può essere usata per ottimizzare congiuntamente un obiettivo di classificazione mentre incoraggia la sparsificazione in una rete che funziona con alta precisione.", "Questo lavoro propone un nuovo metodo di pruning iterativo chiamato Continuous Sparsification, che riduce continuamente il peso corrente fino a raggiungere il rapporto target."]} +{"source": "In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence.However, the growing complexity of machine learning datasets and models may violate such assumptions.Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints.Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training.We analyze the following problem: \"given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?\"We focus on the number of optimization iterations as the representative resource.Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget.Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing.We support our claim through extensive experiments with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation).We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget.We also revisit existing approaches for fast convergence and show that budget-aware learning schedules readily outperform such approaches under (the practical but under-explored) budgeted training setting.", "target": ["Introdurre un'impostazione formale per un training con budget e proporre uno schedule lineare del learning rate consapevole del budget", "Questo lavoro presenta una tecnica per adattare il learning rate per il training delle reti neurali sotto un numero fisso di epoche.", "Questo articolo ha analizzato quale programma di learning rate dovrebbe essere usato quando il numero di iterazioni è limitato usando un nuovo concetto di BAS (Budget-Aware Schedule)."]} +{"source": "We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives.Our approach uses intrinsic rewards that are based 
on a weighted distance of nearest neighbors in the low dimensional representational space to gauge novelty.We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space.One key element of our approach is that we perform more gradient steps in-between every environment step in order to ensure the model accuracy.We test our approach on a number of maze tasks, as well as a control problem and show that our exploration approach is more sample-efficient compared to strong baselines.", "target": ["Conduciamo l'esplorazione usando ricompense intrinseche che si basano su una distanza ponderata dei nearest neighbor nello spazio delle rappresentazioni.", "Questo articolo propone un metodo per un'esplorazione efficiente in MDP tabulari e anche un semplice ambiente di controllo, usando encoder deterministici per imparare una rappresentazione a bassa dimensione della dinamica dell'ambiente.", "Questo articolo propone un metodo di esplorazione efficiente per agenti RL usando una combinazione di approcci basati sul modello e senza modello con una metrica di novità."]} +{"source": "Neural networks are vulnerable to small adversarial perturbations.While existing literature largely focused on the vulnerability of learned models, we demonstrate an intriguing phenomenon that adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution.Even a semantics-preserving transformations on the input data distribution can cause a significantly different robustness for the adversarially trained model that is both trained and evaluated on the new distribution.We show this by constructing semantically-identical variants for MNIST and CIFAR10 respectively, and show that standardly trained models achieve similar clean accuracies on them, but adversarially trained models achieve significantly different robustness accuracies.This counter-intuitive phenomenon indicates that input data distribution alone can affect the adversarial robustness of trained neural networks, not necessarily the tasks themselves.Lastly, we discuss the practical implications on evaluating adversarial robustness, and make initial attempts to understand this complex phenomenon.", "target": ["Le prestazioni di robustezza dei modelli PGD trained sono sensibili alle trasformazioni semantics-preserving dei dataset di immagini, il che implica la difficoltà della valutazione degli algoritmi di robust learning nella pratica."]} +{"source": "Sample inefficiency is a long-lasting problem in reinforcement learning (RL). The state-of-the-art uses action value function to derive policy while it usually involves an extensive search over the state-action space and unstable optimization.Towards the sample-efficient RL, we propose ranking policy gradient (RPG), a policy gradient method that learns the optimal rank of a set of discrete actions. 
To accelerate the learning of policy gradient methods, we establish the equivalence between maximizing the lower bound of return and imitating a near-optimal policy without accessing any oracles.These results lead to a general off-policy learning framework, which preserves the optimality, reduces variance, and improves the sample-efficiency.We conduct extensive experiments showing that when consolidating with the off-policy learning framework, RPG substantially reduces the sample complexity, comparing to the state-of-the-art.", "target": ["Proponiamo il ranking policy gradient che impara il rango ottimale delle azioni per massimizzare il rendimento. Proponiamo un framework generale di apprendimento off-policy con le proprietà di conservazione dell'ottimalità, riduzione della varianza e sample-efficiency.", "Questo articolo propone di riparametrizzare la politica utilizzando una forma di ranking per convertire il problema RL in un problema di supervised learning.", "Questo articolo presenta un nuovo punto di vista sui metodi policy gradient dal punto di vista del ranking."]} +{"source": "We introduce MultiGrain, a neural network architecture that generates compact image embedding vectors that solve multiple tasks of different granularity: class, instance, and copy recognition.MultiGrain is trained jointly for classification by optimizing the cross-entropy loss and for instance/copy recognition by optimizing a self-supervised ranking loss.The self-supervised loss only uses data augmentation and thus does not require additional labels.Remarkably, the unified embeddings are not only much more compact than using several specialized embeddings, but they also have the same or better accuracy.When fed to a linear classifier, MultiGrain using ResNet-50 achieves 79.4% top-1 accuracy on ImageNet, a +1.8% absolute improvement over the the current state-of-the-art AutoAugment method.The same embeddings perform on par with state-of-the-art instance retrieval with images of moderate resolution.An ablation study shows that our approach benefits from the self-supervision, the pooling method and the mini-batches with repeated augmentations of the same image.", "target": ["Combinando la classificazione e il recupero delle immagini in un'architettura di rete neurale, otteniamo un miglioramento per entrambi i task.", "Questo articolo propone un embedding unificato per la classificazione delle immagini e il recupero delle istanze per migliorare le prestazioni di entrambi i task.", "L'articolo propone di addestrare congiuntamente una deep neural network per la classificazione delle immagini, l'istanza e il riconoscimento delle copie."]} +{"source": "In this paper, we investigate mapping the hyponymy relation of wordnet to feature vectors. We aim to model lexical knowledge in such a way that it can be used as input in generic machine-learning models, such as phrase entailment predictors. We propose two models.The first one leverages an existing mapping of words to feature vectors (fasttext), and attempts to classify such vectors as within or outside of each class.The second model is fully supervised, using solely wordnet as a ground truth.It maps each concept to an interval or a disjunction thereof. 
On the first model, we approach, but not quite attain state of the art performance.The second model can achieve near-perfect accuracy.", "target": ["Studiamo la mappatura della relazione di iponimia di wordnet ai vettori di feature", "Questo articolo studia come l'iponimia tra le parole può essere mappata nelle rappresentazioni delle feature.", "Questo articolo esplora la nozione di iponimia nelle rappresentazioni vettoriali di parole e descrive un metodo per organizzare le relazioni WordNet in una struttura ad albero per definire l'iponimia."]} +{"source": "Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims.In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation.Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.", "target": ["Costruiamo un generatore di linguaggio naturale più forte addestrando in modo discriminatorio funzioni di scoring che classificano le generazioni candidate rispetto a varie qualità di buona scrittura.", "Questo articolo propone di riunire più bias induttivi che sperano di correggere le incongruenze nella decodifica delle sequenze e propone di ottimizzare per i parametri di una combinazione predefinita di vari sotto-obiettivi.", "Questo documento combina il modello linguistico RNN con diversi modelli trained in modo discriminatorio per migliorare la generazione del linguaggio.", "Questo articolo propone di migliorare la generazione di modelli linguistici RNN utilizzando obiettivi aumentati ispirati alle massime di comunicazione di Grice."]} +{"source": "In recent years, the efficiency and even the feasibility of traditional load-balancing policies are challenged by the rapid growth of cloud infrastructure with increasing levels of server heterogeneity and increasing size of cloud services and applications.In such many software-load-balancers heterogeneous systems, traditional solutions, such as JSQ, incur an increasing communication overhead, whereas low-communication alternatives, such as JSQ(d) and the recently proposed JIQ scheme are either unstable or provide poor performance.We argue that a better low-communication load balancing scheme can be established by allowing each dispatcher to have a different view of the system and keep using JSQ, rather than greedily trying to avoid starvation on a per-decision basis. 
Accordingly, we introduce the Loosely-Shortest-Queue family of load balancing algorithms.Roughly speaking, in Loosely-Shortest-Queue, each dispatcher keeps a different approximation of the server queue lengths and routes jobs to the shortest among them.Communication is used only to update the approximations and make sure that they are not too far from the real queue lengths in expectation.We formally establish the strong stability of any Loosely-Shortest-Queue policy and provide an easy-to-verify sufficient condition for verifying that a policy is Loosely-Shortest-Queue.We further demonstrate that the Loosely-Shortest-Queue approach allows constructing throughput optimal policies with an arbitrarily low communication budget.Finally, using extensive simulations that consider homogeneous, heterogeneous and highly skewed heterogeneous systems in scenarios with a single dispatcher as well as with multiple dispatchers, we show that the examined Loosely-Shortest-Queue example policies are always stable as dictated by theory.Moreover, they exhibit an appealing performance and significantly outperform well-known low-communication policies, such as JSQ(d) and JIQ, while using a similar communication budget.", "target": ["Soluzione di load balancing scalabile e a bassa comunicazione per sistemi multi-dispatcher a server eterogenei con forti garanzie teoriche e promettenti risultati empirici."]} +{"source": "We propose a novel quantitative measure to predict the performance of a deep neural network classifier, where the measure is derived exclusively from the graph structure of the network.We expect that this measure is a fundamental first step in developing a method to evaluate new network architectures and reduce the reliance on the computationally expensive trial and error or \"brute force\" optimisation processes involved in model selection.The measure is derived in the context of multi-layer perceptrons (MLPs), but the definitions are shown to be useful also in the context of deep convolutional neural networks (CNN), where it is able to estimate and compare the relative performance of different types of neural networks, such as VGG, ResNet, and DenseNet.Our measure is also used to study the effects of some important \"hidden\" hyper-parameters of the DenseNet architecture, such as number of layers, growth rate and the dimension of 1x1 convolutions in DenseNet-BC.Ultimately, our measure facilitates the optimisation of the DenseNet design, which shows improved results compared to the baseline.", "target": ["Una misura quantitativa per prevedere le prestazioni dei modelli di deep neural network.", "L'articolo propone una nuova quantità che conta il numero di percorsi nella rete neurale che è predittiva delle prestazioni delle reti neurali con lo stesso numero di parametri.", "L'articolo presenta un metodo per contare i percorsi nelle reti neurali deep che plausibilmente può essere utilizzato per misurare le prestazioni della rete."]} +{"source": "There is a stark disparity between the learning rate schedules used in the practice of large scale machine learning and what are considered admissible learning rate schedules prescribed in the theory of stochastic approximation.Recent results, such as in the 'super-convergence' methods which use oscillating learning rates, serve to emphasize this point even more.One plausible explanation is that non-convex neural network training procedures are better suited to the use of fundamentally different learning rate schedules, such as the ``cut the learning 
rate every constant number of epochs'' method (which more closely resembles an exponentially decaying learning rate schedule); note that this widely used schedule is in stark contrast to the polynomial decay schemes prescribed in the stochastic approximation literature, which are indeed shown to be (worst case) optimal for classes of convex optimization problems.The main contribution of this work shows that the picture is far more nuanced, where we do not even need to move to non-convex optimization to show other learning rate schemes can be far more effective.In fact, even for the simple case of stochastic linear regression with a fixed time horizon, the rate achieved by any polynomial decay scheme is sub-optimal compared to the statistical minimax rate (by a factor of condition number); in contrast the ``cut the learning rate every constant number of epochs'' provides an exponential improvement (depending only logarithmically on the condition number) compared to any polynomial decay scheme. Finally, it is important to ask if our theoretical insights are somehow fundamentally tied to quadratic loss minimization (where we have circumvented minimax lower bounds for more general convex optimization problems)?Here, we conjecture that recent results which make the gradient norm small at a near optimal rate, for both convex and non-convex optimization, may also provide more insights into learning rate schedules used in practice.", "target": ["Questo articolo presenta uno studio rigoroso del perché le learning rate schedule usati praticamente (per un dato budget computazionale) offrono vantaggi significativi anche se questi schemi non sono sostenuti dalla teoria classica dell'approssimazione stocastica.", "Questo articolo presenta uno studio teorico di diversi learning rate schedule che ha portato a limiti minimi statistici minimax per entrambi gli schemi polinomiali e constant-and-cut.", "L'articolo studia l'effetto delle scelte di learning-rate per l'ottimizzazione stocastica, concentrandosi sul least-mean-squares con stepsize decrescenti"]} +{"source": "We present Value Propagation (VProp), a set of parameter-efficient differentiable planning modules built on Value Iteration which can successfully be trained using reinforcement learning to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments.We show that the modules enable learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems.We evaluate on static and dynamic configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario, with more complex dynamics, and pixels as input.", "target": ["Presentiamo pianificatori basati su convnet che sono sample-efficient e che si generalizzano a istanze più grandi di problemi di navigazione e pathfinding.", "Propone metodi che possono essere visti come modifiche delle Value Iteration Network (VIN), con alcuni miglioramenti volti a migliorare la sample efficiency e la generalizzazione a grandi dimensioni dell'ambiente.", "L'articolo presenta un'estensione delle value iteration network (VIN) considerando una funzione di transizione dipendente dallo stato."]} +{"source": "Learning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks.On one 
hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks.On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings.We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings.In this paper, we formulate the learning of word embeddings as a lifelong learning process.Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain-level.Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledge\\footnote{We will release the code after final revisions.}.We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks.", "target": ["imparare migliori embedding di dominio attraverso lifelong learning e meta-learning", "Presenta un metodo di lifelong learning per l'apprendimento di word embedding.", "Questo articolo propone un approccio per imparare gli embedding in nuovi domini e batte significativamente la baseline su un task di aspect extraction."]} +{"source": "Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance loss.Despite its effectiveness, existing regularization-based parameter pruning methods usually drive weights towards zero with large and constant regularization factors, which neglects the fact that the expressiveness of CNNs is fragile and needs a more gentle way of regularization for the networks to adapt during pruning.To solve this problem, we propose a new regularization-based pruning method (named IncReg) to incrementally assign different regularization factors to different weight groups based on their relative importance, whose effectiveness is proved on popular CNNs compared with state-of-the-art methods.", "target": ["proponiamo un nuovo metodo di pruning basato sulla regolarizzazione (chiamato IncReg) per assegnare in modo incrementale diversi fattori di regolarizzazione a diversi gruppi di pesi in base alla loro importanza relativa.", "Questo articolo propone un metodo di pruning basato sulla regolarizzazione per assegnare in modo incrementale diversi fattori di regolarizzazione a diversi gruppi di pesi in base alla loro importanza relativa."]} +{"source": "Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov's accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD).Rigorously speaking, fast gradient methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact.In the stochastic case, the popular explanation for their wide applicability is that when these fast gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain.This work provides a counterpoint to this belief by 
proving that there exist simple problem instances where these methods cannot outperform SGD despite the best setting of its parameters.These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances.These results suggest (along with empirical evidence) that HB or NAG's practical performance gains are a by-product of minibatching.Furthermore, this work provides a viable (and provable) alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance.This algorithm, referred to as Accelerated Stochastic Gradient Descent (ASGD), is a simple to implement stochastic algorithm, based on a relatively less popular variant of Nesterov's Acceleration.Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD.The code for implementing the ASGD Algorithm can be found at https://github.com/rahulkidambi/AccSGD.", "target": ["Gli schemi esistenti di momento/accelerazione come il metodo heavy ball e l'accelerazione di Nesterov impiegati con gradienti stocastici non migliorano rispetto al stochastic gradient descent standard, specialmente quando sono impiegati con batch size piccole."]} +{"source": "Oversubscription planning (OSP) is the problem of finding plans that maximize the utility value of their end state while staying within a specified cost bound.Recently, it has been shown that OSP problems can be reformulated as classical planning problems with multiple cost functions but no utilities. Here we take advantage of this reformulation to show that OSP problems can be solved optimally using the A* search algorithm, in contrast to previous approaches that have used variations on branch-and-bound search.This allows many powerful techniques developed for classical planning to be applied to OSP problems.We also introduce novel bound-sensitive heuristics, which are able to reason about the primary cost of a solution while taking into account secondary cost functions and bounds, to provide superior guidance compared to heuristics that do not take these bounds into account.We implement two such bound-sensitive variants of existing classical planning heuristics, and show experimentally that the resulting search is significantly more informed than comparable heuristics that do not consider bounds.", "target": ["Mostriamo che i task di oversubscription planning possono essere risolti usando A* e introduciamo nuove euristiche sensibili ai bound per i task di oversubscription planning.", "Presenta un approccio per risolvere in modo ottimale i task di oversubscription planning (OSP) utilizzando una traduzione alla pianificazione classica con funzioni di costo multiple.", "L'articolo propone delle modifiche all'euristiche ammissibili per renderle meglio informate in un ambiente multi-criterio dove."]} +{"source": "Previous work on adversarially robust neural networks requires large training sets and computationally expensive training procedures. On the other hand, few-shot learning methods are highly vulnerable to adversarial examples. The goal of our work is to produce networks which both perform well at few-shot tasks and are simultaneously robust to adversarial examples. 
We adapt adversarial training for meta-learning, we adapt robust architectural features to small networks for meta-learning, we test pre-processing defenses as an alternative to adversarial training for meta-learning, and we investigate the advantages of robust meta-learning over robust transfer-learning for few-shot tasks. This work provides a thorough analysis of adversarially robust methods in the context of meta-learning, and we lay the foundation for future work on defenses for few-shot tasks.", "target": ["Sviluppiamo metodi di meta-learning per adversarially robust few-shot learning.", "Questo articolo presenta un metodo che migliora la robustezza del few-shot learning introducendo l'adversarial attack di dati di query nella fase di fine-tuning dell'inner-task di un algoritmo di meta-learning.", "Gli autori di questo articolo propongono un nuovo approccio per il training di un modello few-shot robusto."]} +{"source": "Many of our core assumptions about how neural networks operate remain empirically untested.One common assumption is that convolutional neural networks need to be stable to small translations and deformations to solve image recognition tasks.For many years, this stability was baked into CNN architectures by incorporating interleaved pooling layers.Recently, however, interleaved pooling has largely been abandoned.This raises a number of questions: Are our intuitions about deformation stability right at all?Is it important?Is pooling necessary for deformation invariance?If not, how is deformation invariance achieved in its absence?In this work, we rigorously test these questions, and find that deformation stability in convolutional networks is more nuanced than it first appears: (1) Deformation invariance is not a binary property, but rather that different tasks require different degrees of deformation stability at different layers.(2) Deformation stability is not a fixed property of a network and is heavily adjusted over the course of training, largely through the smoothness of the convolutional filters.(3) Interleaved pooling layers are neither necessary nor sufficient for achieving the optimal form of deformation stability for natural image classification.(4) Pooling confers \\emph{too much} deformation stability for image classification at initialization, and during training, networks have to learn to \\emph{counteract} this inductive bias.Together, these findings provide new insights into the role of interleaved pooling and deformation invariance in CNNs, and demonstrate the importance of rigorous empirical testing of even our most basic assumptions about the working of neural networks.", "target": ["Troviamo che il pooling da solo non determina la stabilità della deformazione nelle CNN e che la smoothness del filtro gioca un ruolo importante nel determinare la stabilità."]} +{"source": "Deep neural networks (DNNs) have been shown to over-fit a dataset when being trained with noisy labels for a long enough time.To overcome this problem, we present a simple and effective method self-ensemble label filtering (SELF) to progressively filter out the wrong labels during training.Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stops learning on the filtered noisy labels.For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs.We show that these ensemble estimates yield more accurate identification of 
inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch.While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss.We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios.It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.", "target": ["Proponiamo un framework di self-ensemble per addestrare modelli di deep learning più robusti sotto dataset con label rumorose.", "Questo articolo ha proposto il \"self-ensemble label filtering\" per l'apprendimento con label rumorose, dove il rumore della label è indipendente dall'istanza, che produce un'identificazione più accurata delle predizioni incoerenti. ", "Questo articolo propone un algoritmo per l'apprendimento da dati con label rumorose che alterna l'aggiornamento del modello alla rimozione dei campioni che sembrano avere label rumorose."]} +{"source": "Long training times of deep neural networks are a bottleneck in machine learning research.The major impediment to fast training is the quadratic growth of both memory and compute requirements of dense and convolutional layers with respect to their information bandwidth.Recently, training `a priori' sparse networks has been proposed as a method for allowing layers to retain high information bandwidth, while keeping memory and compute low.However, the choice of which sparse topology should be used in these networks is unclear.In this work, we provide a theoretical foundation for the choice of intra-layer topology.First, we derive a new sparse neural network initialization scheme that allows us to explore the space of very deep sparse networks.Next, we evaluate several topologies and show that seemingly similar topologies can often have a large difference in attainable accuracy.To explain these differences, we develop a data-free heuristic that can evaluate a topology independently from the dataset the network will be trained on.We then derive a set of requirements that make a good topology, and arrive at a single topology that satisfies all of them.", "target": ["Indaghiamo il pruning delle DNN prima del training e forniamo una risposta su quale topologia dovrebbe essere usata per il training di reti sparse a priori.", "Gli autori propongono di sostituire i layer densi con layer lineari scarsamente connessi e un approccio per trovare la migliore topologia misurando quanto bene i layer scarsi approssimino i pesi casuali delle loro controparti dense.", "L'articolo propone un'architettura sparsa a cascata che è una moltiplicazione di diverse matrici sparse e un modello di connettività specifico che supera altre considerazioni fornite."]} +{"source": "Deep learning models require extensive architecture design exploration and hyperparameter optimization to perform well on a given task.The exploration of the model design space is often made by a human expert, and optimized using a combination of grid search and search heuristics over a large space of possible choices.Neural Architecture Search (NAS) is a Reinforcement Learning approach that has been proposed to automate architecture design.NAS has been successfully applied to generate Neural Networks that rival the best human-designed architectures.However, NAS requires 
sampling, constructing, and training hundreds to thousands of models to achieve well-performing architectures.This procedure needs to be executed from scratch for each new task.The application of NAS to a wide set of tasks currently lacks a way to transfer generalizable knowledge across tasks.In this paper, we present the Multitask Neural Model Search (MNMS) controller.Our goal is to learn a generalizable framework that can condition model construction on successful model searches for previously seen tasks, thus significantly speeding up the search for new tasks.We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task.We then show that pre-trained MNMS controllers can transfer learning to new tasks.By leveraging knowledge from previous searches, we find that pre-trained MNMS models start from a better location in the search space and reduce search time on unseen tasks, while still discovering models that outperform published human-designed models.", "target": ["Presentiamo Multitask Neural Model Search, un meta-learner che può progettare modelli per più task simultaneamente e trasferire l'apprendimento a task non visti prima.", "Questo articolo estende la ricerca dell'architettura neurale al problema dell'apprendimento multitasking in cui un controllore di ricerca del modello condizionato dal task viene appreso per gestire più task contemporaneamente.", "In questo articolo, gli autori riassumono il loro lavoro sulla costruzione di un framework, chiamato Multitask Neural Model Search controller, per la costruzione automatizzata di reti neurali attraverso più task contemporaneamente."]} +{"source": "This work studies the problem of modeling non-linear visual processes by leveraging deep generative architectures for learning linear, Gaussian models of observed sequences.We propose a joint learning framework, combining a multivariate autoregressive model and deep convolutional generative networks.After justification of theoretical assumptions of linearization, we propose an architecture that allows Variational Autoencoders and Generative Adversarial Networks to simultaneously learn the non-linear observation as well as the linear state-transition model from a sequence of observed frames.Finally, we demonstrate our approach on conceptual toy examples and dynamic textures.", "target": ["Modelliamo i processi visivi non lineari come rumore autoregressivo attraverso generative deep learning.", "Propone un nuovo metodo che modella il processo visivo non lineare con una versione deep di un processo lineare (processo di Markov).", "Questo articolo propone un nuovo modello generativo deep per sequenze, in particolare sequenze di immagini e video, che utilizza una struttura lineare in una parte del modello."]} +{"source": "Partial differential equations (PDEs) play a prominent role in many disciplines such as applied mathematics, physics, chemistry, material science, computer science, etc.PDEs are commonly derived based on physical laws or empirical observations.However, the governing equations for many complex systems in modern applications are still not fully known.With the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and efficiently stored.Such vast quantity of data offers new opportunities for data-driven discovery of hidden physical laws.Inspired by the latest development of neural network 
designs in deep learning, we propose a new feed-forward deep network, called PDE-Net, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models.The basic idea of the proposed PDE-Net is to learn differential operators by learning convolution kernels (filters), and apply neural networks or other machine learning methods to approximate the unknown nonlinear responses.Compared with existing approaches, which either assume the form of the nonlinear response is known or fix certain finite difference approximations of differential operators, our approach has the most flexibility by learning both differential operators and the nonlinear responses.A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network.These constraints are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originated from wavelet theory).We also discuss relations of the PDE-Net with some existing networks in computer vision such as Network-In-Network (NIN) and Residual Neural Network (ResNet).Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.", "target": ["Questo articolo propone una nuova rete feed-forward, chiamata PDE-Net, per imparare le PDE dai dati.", "L'articolo espone l'uso di architetture di deep learning allo scopo di identificare sistemi dinamici specificati da PDE.", "L'articolo propone un algoritmo basato su reti neurali per l'apprendimento da dati che derivano da sistemi dinamici con equazioni che li governano che possono essere scritte come equazioni differenziali parziali.", "Questo articolo affronta la modellazione di sistemi dinamici complessi attraverso equazioni differenziali parziali non parametriche utilizzando architetture neurali, dove l'idea più importante del paper (PDE-net) è quella di imparare sia gli operatori differenziali che la funzione che governa la PDE."]} +{"source": "Each training step for a variational autoencoder (VAE) requires us to sample from the approximate posterior, so we usually choose simple (e.g. factorised) approximate posteriors in which sampling is an efficient computation that fully exploits GPU parallelism. However, such simple approximate posteriors are often insufficient, as they eliminate statistical dependencies in the posterior. While it is possible to use normalizing flow approximate posteriors for continuous latents, there is nothing analogous for discrete latents.The most natural approach to model discrete dependencies is an autoregressive distribution, but sampling from such distributions is inherently sequential and thus slow. We develop a fast, parallel sampling procedure for autoregressive distributions based on fixed-point iterations which enables efficient and accurate variational inference in discrete state-space models. To optimize the variational bound, we considered two ways to evaluate probabilities: inserting the relaxed samples directly into the pmf for the discrete distribution, or converting to continuous logistic latent variables and interpreting the K-step fixed-point iterations as a normalizing flow. 
We found that converting to continuous latent variables gave considerable additional scope for mismatch between the true and approximate posteriors, which resulted in biased inferences, we thus used the former approach. We tested our approach on the neuroscience problem of inferring discrete spiking activity from noisy calcium-imaging data, and found that it gave accurate connectivity estimates in an order of magnitude less time.", "target": ["Presentiamo una procedura di sampling veloce simile al normalizing flow per modelli di variabili latenti discreti.", "Questo articolo usa un'approssimazione variazionale di filtraggio autoregressivo per la stima dei parametri nei sistemi dinamici discreti usando iterazioni a virgola fissa.", "Gli autori pongono una famiglia di probabilità posteriori autoregressiva generale per le variabili discrete o i loro rilassamenti continui.", "Questo articolo ha due contributi principali: estende i normalizing flow alle impostazioni discrete e presenta una regola di aggiornamento approssimativa a virgola fissa per le serie temporali autoregressive che può sfruttare il parallelismo delle GPU."]} +{"source": "Deep neural networks (DNNs) had great success on NLP tasks such as language modeling, machine translation and certain question answering (QA) tasks.However, the success is limited at more knowledge intensive tasks such as QA from a big corpus.Existing end-to-end deep QA models (Miller et al., 2016; Weston et al., 2014) need to read the entire text after observing the question, and therefore their complexity in responding a question is linear in the text size.This is prohibitive for practical tasks such as QA from Wikipedia, a novel, or the Web.We propose to solve this scalability issue by using symbolic meaning representations, which can be indexed and retrieved efficiently with complexity that is independent of the text size.More specifically, we use sequence-to-sequence models to encode knowledge symbolically and generate programs to answer questions from the encoded knowledge.We apply our approach, called the N-Gram Machine (NGM), to the bAbI tasks (Weston et al., 2015) and a special version of them (“life-long bAbI”) which has stories of up to 10 million sentences.Our experiments show that NGM can successfully solve both of these tasks accurately and efficiently.Unlike fully differentiable memory models, NGM’s time complexity and answering quality are not affected by the story length.The whole system of NGM is trained end-to-end with REINFORCE (Williams, 1992).To avoid high variance in gradient estimation, which is typical in discrete latent variable models, we use beam search instead of sampling.To tackle the exponentially large search space, we use a stabilized auto-encoding objective and a structure tweak procedure to iteratively reduce and refine the search space.", "target": ["Proponiamo un framework che impara a codificare la conoscenza simbolicamente e a generare programmi per ragionare sulla conoscenza codificata.", "Gli autori propongono la N-Gram machine per rispondere a domande su documenti lunghi.", "Questo articolo presenta la n-gram machine, un modello che codifica le frasi in semplici rappresentazioni simboliche che possono essere interrogate in modo efficiente."]} +{"source": "We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge.In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the 
causal directions.We explain when this can work, using the assumption that the changes in distributions are localized (e.g. to one of the marginals, for example due to an intervention on one of the variables).We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e. that need to be adapted (those of the modified variables).We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs.Finally, motivated by the AI agent point of view (e.g. of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning.Experiments in the two-variable case validate the proposed ideas and theoretical results.", "target": ["Questo articolo propone un obiettivo di meta-learning basato sulla velocità di adattamento alle distribuzioni di trasferimento per scoprire una decomposizione modulare e variabili causali.", "L'articolo mostra che un modello con la struttura di base corretta si adatterà più velocemente a un intervento causale rispetto a un modello con una struttura non corretta.", "In questo lavoro, gli autori hanno proposto un framework generale e sistematico dell'obiettivo di meta-transfer che incorpora l'apprendimento della struttura causale sotto interventi sconosciuti."]} +{"source": "Continual learning is a longstanding goal of artificial intelligence, but is often confounded by catastrophic forgetting that prevents neural networks from learning tasks sequentially.Previous methods in continual learning have demonstrated how to mitigate catastrophic forgetting, and learn new tasks while retaining performance on the previous tasks.We analyze catastrophic forgetting from the perspective of change in classifier likelihood and propose a simple L1 minimization criterion which can be adapted to different use cases.We further investigate two ways to minimize forgetting as quantified by this criterion and propose strategies to achieve finer control over forgetting.Finally, we evaluate our strategies on 3 datasets of varying difficulty and demonstrate improvements over previously known L2 strategies for mitigating catastrophic forgetting.", "target": ["Un'altra prospettiva sul catastrophic forgetting", "Questo articolo introduce un framework per combattere il catastrophic forgetting basato sulla modifica del termine della loss per minimizzare i cambiamenti nella probabilità del classificatore, ottenuta tramite un'approssimazione in serie di Taylor.", "Questo articolo cerca di risolvere il problema del continual learning concentrandosi sugli approcci di regolarizzazione, e propone una strategia L_1 per mitigare il problema."]} +{"source": "We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow.Current face modeling methods using 3DMM suffer from the lack of local control.We thus create a 3DMM by combining local part-based 3DMM for the eyes, nose, mouth, ears, and facial mask regions.Our local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMM to ensure that the combined 3DMM is expressive while allowing accurate reconstruction.The editing controls we provide to 
the user are intuitive as they are extracted from anthropometric measurements found in the literature.Out of a large set of possible anthropometric measurements, we filter the ones that have meaningful generative power given the face data set.We bind the measurements to the part-based 3DMM through mapping matrices derived from our data set of facial scans.Our part-based 3DMM is compact yet accurate, and compared to other 3DMM methods, it provides a new trade-off between local and global control.We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation.The results show that our part-based 3DMM approach has excellent generative properties and allows intuitive local control to the user.", "target": ["Proponiamo un approccio per costruire modelli realistici 3D morphable facial (3DMM) che permette un flusso di lavoro intuitivo di editing degli attributi facciali selezionando i migliori set di autovettori e misure antropometriche.", "Propone un modello frammentario e morfabile per le mesh dei volti umani e propone anche una mappatura tra le misure antropometriche del volto e i parametri del modello al fine di sintetizzare e modificare i volti con gli attributi desiderati.", "Questo articolo descrive un metodo di modello facciale morphable basato su parti che permette il controllo localizzato dell'utente."]} +{"source": "We review eight machine learning classification algorithms to analyze Electroencephalographic (EEG) signals in order to distinguish EEG patterns associated with five basic educational tasks.There is a large variety of classifiers being used in this EEG-based Brain-Computer Interface (BCI) field.While previous EEG experiments used several classifiers in the same experiments or reviewed different algorithms on datasets from different experiments, our approach focuses on reviewing eight classifier categories on the same dataset, including linear classifiers, non-linear Bayesian classifiers, nearest neighbour classifiers, ensemble methods, adaptive classifiers, tensor classifiers, transfer learning and deep learning.Besides, we intend to find an approach which can run smoothly on the current mainstream personal computers and smartphones. The empirical evaluation demonstrated that Random Forest and LSTM (Long Short-Term Memory) outperform other approaches.We used a data set in which users were conducting five frequently-conducted learning-related tasks, including reading, writing, and typing.Results showed that these best two algorithms could correctly classify different users with an accuracy increase of 5% to 9%, using each task independently.Within each subject, the tasks could be recognized with an accuracy increase of 4% to 7%, compared with other approaches.This work suggests that Random Forest could be a recommended approach (fast and accurate) for current mainstream hardware, while LSTM has the potential to be the first-choice approach when the mainstream computers and smartphones can process more data in a shorter time.", "target": ["Due algoritmi hanno superato altri otto su un esperimento BCI basato su EEG"]} +{"source": "Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems.In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction.We introduce two communication protocols - one grounded in the semantics of the game, and one which is a priori ungrounded. 
We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded, cheap talk channel to do the same. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge.We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.", "target": ["Insegniamo agli agenti a negoziare usando solo reinforcement learning; gli agenti egoisti possono farlo, ma solo usando un canale di comunicazione affidabile, e gli agenti prosociali possono negoziare usando il cheap talk.", "Gli autori descrivono una variante del gioco di negoziazione con la considerazione di un canale di comunicazione secondario per il cheap talk, trovando che il canale secondario migliora i risultati di negoziazione.", "Questo articolo esplora come gli agenti possono imparare a comunicare per risolvere un task di negoziazione e trovano che gli agenti prosociali sono in grado di imparare a basare i simboli usando RL, mentre gli agenti self-interested no.", "Esamina i problemi di come gli agenti possono usare la comunicazione per massimizzare le loro reward in un semplice gioco di negoziazione."]} +{"source": "The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class.The recently introduced meta-learning approaches tackle this problem by learning a generic classifier across a large number of multiclass classification tasks and generalizing the model to a new task.Yet, even with such meta-learning, the low-data problem in the novel classification task still remains.In this paper, we propose Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.Specifically, we propose to learn to propagate labels from labeled instances to unlabeled test instances, by learning a graph construction module that exploits the manifold structure in the data.TPN jointly learns both the parameters of feature embedding and the graph construction in an end-to-end manner. 
We validate TPN on multiple benchmark datasets, on which it largely outperforms existing few-shot learning approaches and achieves the state-of-the-art results.", "target": ["Proponiamo un nuovo framework di meta-learning per l'inferenza trasduttiva che classifica l'intero test set in una sola volta per alleviare il problema dei pochi dati.", "Questo articolo propone di affrontare few-shot learning in modo trasduttivo imparando un modello di propagazione delle label in una maniera end-to-end, è il primo paper dove si impara label propagation few-shot per l'apprendimento trasduttivo e ha prodotto utili risultati empirici.", "Questo articolo propone un framework di meta-learning che sfrutta i dati non labelled imparando la propagazione delle label basata sul grafo in modo end-to-end.", "Studia l'apprendimento few-shot in un'impostazione trasduttiva: usando il meta learning per imparare a propagare le label dai training sample ai test sample."]} +{"source": "We describe the use of an automated scheduling system for observation policy design and to schedule operations of the NASA (National Aeronautics and Space Administration) ECOSystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS).We describe the adaptation of the Compressed Large-scale Activity Scheduler and Planner (CLASP) scheduling system to the ECOSTRESS scheduling problem, highlighting multiple use cases for automated scheduling and several challenges for the scheduling technology: handling long-term campaigns with changing information, Mass Storage Unit Ring Buffer operations challenges, and orbit uncertainty.The described scheduling system has been used for operations of the ECOSTRESS instrument since its nominal operations start July 2018 and is expected to operate until mission end in Summer 2019.", "target": ["Descriviamo l'uso di un sistema di pianificazione automatizzato per la progettazione della policy di osservazione e per programmare le operazioni della missione ECOSTRESS della NASA.", "Questo articolo presenta un adattamento di un sistema di programmazione automatica, CLASP, per indirizzare un esperimento EO (ECOSTRESS) sulla ISS."]} +{"source": "Adversarial examples are modified samples that preserve original image structures but deviate classifiers.Researchers have put efforts into developing methods for generating adversarial examples and finding out origins.Past research put much attention on decision boundary changes caused by these methods.This paper, in contrast, discusses the origin of adversarial examples from a more underlying knowledge representation point of view.Human beings can learn and classify prototypes as well as transformations of objects.While neural networks store learned knowledge in a more hybrid way of combining all prototypes and transformations as a whole distribution.Hybrid storage may lead to lower distances between different classes so that small modifications can mislead the classifier.A one-step distribution imitation method is designed to imitate distribution of the nearest different class neighbor.Experiments show that simply by imitating distributions from a training set without any knowledge of the classifier can still lead to obvious impacts on classification results from deep networks.It also implies that adversarial examples can be in more forms than small perturbations.Potential ways of alleviating adversarial examples are discussed from the representation point of view.The first path is to change the encoding of data sent to the training step.Training data that 
are more prototypical can help seize more robust and accurate structural knowledge.The second path requires constructing learning frameworks with improved representations.", "target": ["La memorizzazione e la rappresentazione hybrid della conoscenza appresa può essere una ragione per gli adversarial sample."]} +{"source": "Differently from the popular Deep Q-Network (DQN) learning, Alternating Q-learning (AltQ) does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient.Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance.Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before.In this paper, we first provide a solid exploration on how well AltQ performs with Adam.We then take a further step to improve the implementation by adopting the technique of parameter restart.More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance than the DQN learning method.The convergence rate of the slightly modified version of the proposed algorithms is characterized under the linear function approximation.To the best of our knowledge, this is the first theoretical study on the Adam-type algorithms in Q-learning.", "target": ["Nuovi esperimenti e teoria per il Q-Learning basato su Adam", "Questo articolo fornisce un risultato di convergenza per il tradizionale Q-learning con approssimazione di funzione lineare quando si usa un aggiornamento simile ad Adam.", "Questo articolo descrive un metodo per migliorare l'algoritmo AltQ utilizzando una combinazione di un ottimizzatore Adam e riavviando regolarmente i parametri interni all'ottimizzatore Adam."]} +{"source": "In search for more accurate predictive models, we customize capsule networks for the learning to diagnose problem.We also propose Spectral Capsule Networks, a novel variation of capsule networks, that converge faster than capsule network with EM routing.Spectral capsule networks consist of spatial coincidence filters that detect entities based on the alignment of extracted features on a one-dimensional linear subspace.Experiments on a public benchmark learning to diagnose dataset not only shows the success of capsule networks on this task, but also confirm the faster convergence of the spectral capsule networks.", "target": ["Una nuova capsule network che converge più velocemente sui nostri esperimenti di benchmark nell'ambito sanitario.", "Presenta una variante delle capsule network che invece di usare il routing EM impiega un sottospazio lineare coperto dall'autovettore dominante sulla matrice dei voti pesati della capsule precedente.", "L'articolo propone un metodo di routing migliorato, che impiega strumenti di decomposizione agli autovalori per trovare l'attivazione e la posa della capsule."]} +{"source": "One of the big challenges in machine learning applications is that training data can be different from the real-world data faced by the algorithm.In language modeling, users’ language (e.g. in private messaging) could change in a year and be completely different from what we observe in publicly available data.At the same time, public data can be used for obtaining general knowledge (i.e. 
general model of English).We study approaches to distributed fine-tuning of a general model on user private data with the additional requirements of maintaining the quality on the general data and minimization of communication costs.We propose a novel technique that significantly improves prediction quality on users’ language compared to a general model and outperforms gradient compression methods in terms of communication efficiency.The proposed procedure is fast and leads to an almost 70% perplexity reduction and 8.7 percentage point improvement in keystroke saving rate on informal English texts.Finally, we propose an experimental framework for evaluating differential privacy of distributed training of language models and show that our approach has good privacy guarantees.", "target": ["Proponiamo un metodo di fine-tuning distribuito dei language model sui dispositivi degli utenti senza raccolta di dati privati", "Questo articolo si occupa di migliorare i language model su apparecchiature mobili basati su una piccola porzione di testo che l'utente ha immesso utilizzando un obiettivo interpolato linearmente tra il testo specifico dell'utente e l'inglese generale."]} +{"source": "We propose that approximate Bayesian algorithms should optimize a new criterion, directly derived from the loss, to calculate their approximate posterior which we refer to as pseudo-posterior.Unlike standard variational inference which optimizes a lower bound on the log marginal likelihood, the new algorithms can be analyzed to provide loss guarantees on the predictions with the pseudo-posterior.Our criterion can be used to derive new sparse Gaussian process algorithms that have error guarantees applicable to various likelihoods.", "target": ["Questo articolo utilizza l'analisi della loss di Lipschitz su uno spazio di ipotesi delimitato per derivare nuovi algoritmi di tipo ERM con forti garanzie di performance che possono essere applicati al modello GP sparso non coniugato."]} +{"source": "In this paper, we propose a novel regularization method, RotationOut, for neural networks. Different from Dropout that handles each neuron/channel independently, RotationOut regards its input layer as an entire vector and introduces regularization by randomly rotating the vector. RotationOut can also be used in convolutional layers and recurrent layers with a small modification.We further use a noise analysis method to interpret the difference between RotationOut and Dropout in co-adaptation reduction. Using this method, we also show how to use RotationOut/Dropout together with Batch Normalization. Extensive experiments in vision and language tasks are conducted to show the effectiveness of the proposed method. 
Code will be available.", "target": ["Proponiamo un metodo di regolarizzazione per la rete neurale e un metodo di analisi del rumore", "Questo articolo propone un nuovo metodo di regolarizzazione per mitigare il problema dell'overfitting delle reti neurali deep ruotando le caratteristiche con una matrice di rotazione casuale per ridurre il co-adattamento.", "Questo articolo propone un nuovo metodo di regolarizzazione per il training delle reti neurali, che aggiunge neuroni di rumore in modo interdipendente."]} +{"source": "Formulating the reinforcement learning (RL) problem in the framework of probabilistic inference not only offers a new perspective about RL, but also yields practical algorithms that are more robust and easier to train.While this connection between RL and probabilistic inference has been extensively studied in the single-agent setting, it has not yet been fully understood in the multi-agent setup.In this paper, we pose the problem of multi-agent reinforcement learning as the problem of performing inference in a particular graphical model.We model the environment, as seen by each of the agents, using separate but related Markov decision processes.We derive a practical off-policy maximum-entropy actor-critic algorithm that we call Multi-agent Soft Actor-Critic (MA-SAC) for performing approximate inference in the proposed model using variational inference.MA-SAC can be employed in both cooperative and competitive settings.Through experiments, we demonstrate that MA-SAC outperforms a strong baseline on several multi-agent scenarios.While MA-SAC is one resultant multi-agent RL algorithm that can be derived from the proposed probabilistic framework, our work provides a unified view of maximum-entropy algorithms in the multi-agent setting.", "target": ["Un framework probabilistico per multi-agent reinforcement learning", "Questo articolo propone un nuovo algoritmo chiamato Multi-Agent Soft Actor-Critic (MA-SAC) basato sull'algoritmo off-policy maximum-entropy actor critic Soft Actor-Critic (SAC)"]} +{"source": "Sorting input objects is an important step in many machine learning pipelines.However, the sorting operator is non-differentiable with respect to its inputs, which prohibits end-to-end gradient-based optimization.In this work, we propose NeuralSort, a general-purpose continuous relaxation of the output of the sorting operator from permutation matrices to the set of unimodal row-stochastic matrices, where every row sums to one and has a distinct argmax.This relaxation permits straight-through optimization of any computational graph involving a sorting operation.Further, we use this relaxation to enable gradient-based stochastic optimization over the combinatorially large space of permutations by deriving a reparameterized gradient estimator for the Plackett-Luce family of distributions over permutations.We demonstrate the usefulness of our framework on three tasks that require learning semantic orderings of high-dimensional objects, including a fully differentiable, parameterized extension of the k-nearest neighbors algorithm", "target": ["Forniamo un rilassamento continuo all'operatore di ordinamento, consentendo un'ottimizzazione stocastica basata sul gradiente end-to-end.", "L'articolo considera come ordinare un certo numero di elementi senza imparare esplicitamente e necessariamente i loro significati o valori reali e propone un metodo per eseguire l'ottimizzazione attraverso un rilassamento continuo.", "Questo lavoro si basa su un'identità sum(top k) per derivare un 
campionatore differenziabile pathwise di matrici 'unimodal row stochastic'.", "Introduce un rilassamento continuo dell'operatore di ordinamento per costruire un'ottimizzazione basata sul gradiente end-to-end e introduce un'estensione stocastica del suo metodo usando distribuzioni Plackett-Luce e Monte Carlo."]} +{"source": "Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the area of global optimization algorithms.Readily available algorithms are typically designed to be universal optimizers and, thus, often suboptimal for specific tasks.We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian processes.Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency.We present experiments on a sim-to-real transfer task as well as on several simulated functions and two hyperparameter search problems.The results show that our algorithm (1) automatically identifies structural properties of objective functions from available source tasks or simulations, (2) performs favourably in settings with both scarce and abundant source data, and (3) falls back to the performance level of general AFs if no structure is present.", "target": ["Eseguiamo un transfer learning efficiente e flessibile nel framework dell'ottimizzazione bayesiana attraverso funzioni di acquisizione neurale meta-learned.", "Gli autori presentano MetaBO che usa il reinforcement learning per apprendere tramite meta-learning la funzione di acquisizione per l'ottimizzazione bayesiana, mostrando una crescente sample-efficiency su nuovi task.", "Gli autori propongono un'alternativa basata sul meta-learning alle funzioni di acquisizione standard (AF), in cui una rete neurale pretrained produce valori di acquisizione in funzione di feature scelte a mano."]} +{"source": "We study the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory.The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information I(X;T) between the input X and internal representations T decreases.Several papers observe compression of estimated mutual information on different DNN models, but the true I(X;T) over these networks is provably either constant (discrete X) or infinite (continuous X).This work explains the discrepancy between theory and experiments, and clarifies what was actually measured by these past works.To this end, we introduce an auxiliary (noisy) DNN framework for which I(X;T) is a meaningful quantity that depends on the network's parameters.This noisy framework is shown to be a good proxy for the original (deterministic) DNN both in terms of performance and the learned representations.We then develop a rigorous estimator for I(X;T) in noisy DNNs and observe compression in various models.By relating I(X;T) in the noisy DNN to an information-theoretic communication problem, we show that compression is driven by the progressive clustering of hidden representations of inputs from the same class.Several methods to directly monitor clustering of hidden representations, both in noisy and deterministic DNNs, are used to show
that meaningful clusters form in the T space.Finally, we return to the estimator of I(X;T) employed in past works, and demonstrate that while it fails to capture the true (vacuous) mutual information, it does serve as a measure for clustering.This clarifies the past observations of compression and isolates the geometric clustering of hidden representations as the true phenomenon of interest.", "target": ["Le reti neurali deep deterministiche non scartano le informazioni, ma raggruppano i loro input.", "Questo articolo fornisce un principio per esaminare la fase di compressione nelle reti neurali deep, fornendo uno stimatore di entropia supportato dalla teoria per stimare l'informazione mutua."]} +{"source": "A central challenge in multi-agent reinforcement learning is the induction of coordination between agents of a team.In this work, we investigate how to promote inter-agent coordination using policy regularization and discuss two possible avenues respectively based on inter-agent modelling and synchronized sub-policy selection.We test each approach in four challenging continuous control tasks with sparse rewards and compare them against three baselines including MADDPG, a state-of-the-art multi-agent reinforcement learning algorithm.To ensure a fair comparison, we rely on a thorough hyper-parameter selection and training methodology that allows a fixed hyper-parameter search budget for each algorithm and environment.We consequently assess both the hyper-parameter sensitivity, sample-efficiency and asymptotic performance of each learning method.Our experiments show that the proposed methods lead to significant improvements on cooperative problems.We further analyse the effects of the proposed regularizations on the behaviors learned by the agents.", "target": ["Proponiamo obiettivi di regolarizzazione per algoritmi RL multi-agente che favoriscono la coordinazione nei task cooperativi.", "Questo articolo propone due metodi per indirizzare gli agenti verso l'apprendimento di comportamenti coordinati e valuta entrambi rigorosamente attraverso domini multi-agente di adeguata complessità.", "Questo articolo propone due metodi basati su MADDPG per incoraggiare la collaborazione tra agenti MARL decentralizzati."]} +{"source": "Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed from the language, visual, and acoustic modalities.The central challenge in multimodal learning involves inferring joint representations that can process and relate information from these modalities.However, existing work learns joint representations using multiple modalities as input and may be sensitive to noisy or missing modalities at test time.With the recent success of sequence to sequence models in machine translation, there is an opportunity to explore new ways of learning joint representations that may not require all input modalities at test time.In this paper, we propose a method to learn robust joint representations by translating between modalities.Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input.We augment modality translations with a cycle consistency loss to ensure that our joint representations retain maximal information from all modalities.Once our translation model is trained with paired multimodal data, we only need data from the source modality at test-time for prediction.This ensures that our model remains robust from 
perturbations or missing target modalities.We train our model with a coupled translation-prediction objective and it achieves new state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI, ICT-MMMO, and YouTube.Additional experiments show that our model learns increasingly discriminative joint representations with more input modalities while maintaining robustness to perturbations of all other modalities.", "target": ["Presentiamo un modello che impara rappresentazioni congiunte robuste eseguendo traduzioni cicliche gerarchiche tra modalità multiple.", "Questo articolo presenta il Multimodal Cyclic Translation Network (MCTN) e lo valuta per l'analisi multimodale del sentimento."]} +{"source": "The geometric properties of loss surfaces, such as the local flatness of a solution, are associated with generalization in deep learning.The Hessian is often used to understand these geometric properties.We investigate the differences between the eigenvalues of the neural network Hessian evaluated over the empirical dataset, the Empirical Hessian, and the eigenvalues of the Hessian under the data generating distribution, which we term the True Hessian.Under mild assumptions, we use random matrix theory to show that the True Hessian has eigenvalues of smaller absolute value than the Empirical Hessian.We support these results for different SGD schedules on both a 110-Layer ResNet and VGG-16.To perform these experiments we propose a framework for spectral visualization, based on GPU accelerated stochastic Lanczos quadrature.This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.", "target": ["Comprendere gli autovalori dell'Hessiana delle reti neurali sotto la distribuzione generatrice di dati.", "Questo articolo analizza lo spettro della matrice Hessiana di grandi reti neurali, con un'analisi max/min degli autovalori e la visualizzazione degli spettri utilizzando un approccio di quadratura Lanczos.", "Questo articolo usa la teoria delle matrici casuali per studiare la distribuzione dello spettro dell'Hessiana empirica e dell'Hessiana vera per deep learning, e propone un metodo efficiente di visualizzazione dello spettro."]} +{"source": "Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input.Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text.In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models as well as pure graph models on a range of summarization tasks.", "target": ["Un semplice trucco per migliorare i modelli che processano le sequenze componendoli con un modello a grafo", "Questo articolo presenta un modello di summarization strutturale con un encoder basato su grafi esteso da RNN.", "Questo lavoro combina le Graph Neural Networks con un approccio sequenziale all'abstractive summarization, efficace su tutti i dataset rispetto alle baseline esterne."]} +{"source": "In probabilistic classification, a discriminative model based on Gaussian mixture exhibits flexible fitting capability.Nevertheless, it is difficult to determine the number of components.We propose a sparse 
classifier based on a discriminative Gaussian mixture model (GMM), which is named sparse discriminative Gaussian mixture (SDGM).In the SDGM, a GMM-based discriminative model is trained by sparse Bayesian learning.This learning algorithm improves the generalization capability by obtaining a sparse solution and automatically determines the number of components by removing redundant components.The SDGM can be embedded into neural networks (NNs) such as convolutional NNs and can be trained in an end-to-end manner.Experimental results indicated that the proposed method prevented overfitting by obtaining sparsity.Furthermore, we demonstrated that the proposed method outperformed a fully connected layer with the softmax function in certain cases when it was used as the last layer of a deep NN.", "target": ["Un classificatore sparso basato su un modello discriminativo a miscela gaussiana, che può anche essere incorporato in una rete neurale.", "L'articolo presenta un modello di mistura gaussiana trained tramite gradient descent che permette di indurre la sparsità e di ridurre i parametri trainable di layer del modello.", "Questo articolo propone un classificatore, chiamato SDGM, basato sulla mistura discriminativa gaussiana e la sua stima dei parametri sparsi."]} +{"source": "We recently observed that convolutional filters initialized farthest apart from each other using off-the-shelf pre-computed Grassmannian subspace packing codebooks performed surprisingly well across many datasets.Through this short paper, we’d like to disseminate some initial results in this regard in the hope that we stimulate the curiosity of the deep-learning community towards considering classical Grassmannian subspace packing results as a source of new ideas for more efficient initialization strategies.", "target": ["Inizializzare i pesi utilizzando i codebook di Grassmann, ottenere un training più veloce e una migliore accuratezza"]} +{"source": "Domain adaptation is critical for success in new, unseen environments.Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts.Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs.We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model.CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs.
Our model can be applied in a variety of visual recognition and prediction settings.We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.", "target": ["Un approccio di domain adaptation unsupervised che si adatta sia a livello di pixel che di feature", "Questo articolo propone un approccio di domain adaptation estendendo la CycleGAN con funzioni di loss specifiche del task e loss imposte sia sui pixel che sulle feature.", "Questo articolo propone l'uso di CycleGAN per domain adaptation", "Questo articolo fa una nuova estensione al lavoro precedente su CycleGAN accoppiandolo con approcci di adversarial adaptation, includendo una nuova feature e una loss semantica nell'obiettivo generale del CycleGAN, con chiari benefici."]} +{"source": "Stemming is the process of removing affixes (i.e. prefixes, infixes and suffixes), which improves the accuracy and performance of information retrieval systems.This paper presents the reduction of Amharic words to their corresponding stems with the intention of preserving semantic information.The proposed approach efficiently removes affixes from an Amharic word.The process of removing such affixes (prefixes, infixes and suffixes) from a word to its base form is called stemming.While many stemmers exist for dominant languages such as English, under-resourced languages such as Amharic lack such powerful tool support.In this paper, we design a light Amharic stemmer relying on rules: it receives an Amharic word, matches the beginning of the word against the possible prefixes and its ending against the possible suffixes, and finally checks whether it has an infix.The final result is the stem if there is any prefix, infix and/or suffix, otherwise it remains in one of the earlier states.The technique does not rely on any additional resource (e.g.
dictionary) to verify the generated stem.The performance of the generated stemmer is evaluated using manually annotated Amharic words.The result is compared with current state-of-the-art stemmer for Amharic showing an increase of 7% in stemmer correctness.", "target": ["Amharic Light Stemmer è progettato per migliorare le prestazioni di Amharic Sentiment Classification.", "Questo articolo studia lo stemming per le lingue morfologicamente ricche con uno stemmer leggero che rimuove solo gli affissi nella misura in cui l'informazione semantica originale della parola viene mantenuta.", "Questo articolo propone una tecnica di stemming leggera per la lingua amarica utilizzando una cascata di trasformazioni che standardizzano la forma, rimuovono i suffissi, i prefissi e gli infissi."]} +{"source": "Place and grid-cells are known to aid navigation in animals and humans.Together with concept cells, they allow humans to form an internal representation of the external world, namely the concept space.We investigate the presence of such a space in deep neural networks by plotting the activation profile of its hidden layer neurons.Although place cell and concept-cell like properties are found, grid-cell like firing patterns are absent thereby indicating a lack of path integration or feature transformation functionality in trained networks.Overall, we present a plausible inadequacy in current deep learning practices that restrict deep networks from performing analogical reasoning and memory retrieval tasks.", "target": ["Abbiamo studiato se le reti deep semplici possiedono neuroni artificiali simili a celle di griglia durante il recupero della memoria nello spazio concettuale appreso."]} +{"source": "We develop a comprehensive description of the active inference framework, as proposed by Friston (2010), under a machine-learning compliant perspective.Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies. Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data.The pros and cons of the control policy are analyzed in detail, showing interesting promises in terms of processing compression.Though optimizing future posterior entropy over the actions set is shown enough to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown better for it saves processing costs through saccades pathways pre-processing, with a negligible effect on the recognition/compression rates.", "target": ["Pro e contro della saccade-based computer vision in una prospettiva di codifica predittiva", "Presenta un framework computazionale per il problema della visione attiva e spiega come la policy di controllo può essere appresa per ridurre l'entropia della credenza a posteriori."]} +{"source": "Graphs possess exotic features like variable size and absence of natural ordering of the nodes that make them difficult to analyze and compare.To circumvent this problem and learn on graphs, graph feature representation is required.Main difficulties with feature extraction lie in the trade-off between expressiveness, consistency and efficiency, i.e. 
the capacity to extract features that represent the structural information of the graph while being deformation-consistent and isomorphism-invariant.While state-of-the-art methods enhance expressiveness with powerful graph neural-networks, we propose to leverage natural spectral properties of graphs to study a simple graph feature: the graph Laplacian spectrum (GLS).We analyze the representational power of this object that satisfies both isomorphism-invariance, expressiveness and deformation-consistency.In particular, we propose a theoretical analysis based on graph perturbation to understand what kind of comparison between graphs we do when comparing GLS.To do so, we derive bounds for the distance between GLS that are related to the divergence to isomorphism, a standard computationally expensive graph divergence.Finally, we experiment GLS as graph representation through consistency tests and classification tasks, and show that it is a strong graph feature representation baseline.", "target": ["Studiamo teoricamente la consistenza dello spettro del Laplaciano e lo usiamo come embedding di interi grafi", "Questo articolo si basa sullo spettro del laplaciano di un grafo come mezzo per generare una rappresentazione da utilizzare per confrontare i grafi e classificarli.", "Questo lavoro ha proposto di usare lo spettro del Graph Laplacian per imparare la rappresentazione dei grafi."]} +{"source": "Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient descent (PGD). In this paper, we make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach that was previously believed to be ineffective, rendering the method no more costly than standard training in practice. Specifically, we show that adversarial training with the fast gradient sign method (FGSM), when combined with random initialization, is as effective as PGD-based training but has significantly lower cost.
Furthermore we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy at epsilon=8/255 in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at epsilon=2/255 in 12 hours, in comparison to past work based on ``free'' adversarial training which took 10 and 50 hours to reach the same respective thresholds.", "target": ["Il training avversario basato su FGSM, con randomizzazione, funziona altrettanto bene dell'adversarial training basato su PGD: possiamo usarlo per addestrare un classificatore robusto in 6 minuti su CIFAR10, e 12 ore su ImageNet, su una singola macchina.", "Questo articolo rivisita il metodo Random+FGSM per addestrare modelli robusti contro forti attacchi di evasione PGD più velocemente dei metodi precedenti.", "L'affermazione principale di questo articolo è che una semplice strategia di randomizzazione più il gradient sign method adversarial training produce reti neurali robuste."]} +{"source": "In seeking for sparse and efficient neural network models, many previous works investigated on enforcing L1 or L0 regularizers to encourage weight sparsity during training.The L0 regularizer measures the parameter sparsity directly and is invariant to the scaling of parameter values.But it cannot provide useful gradients and therefore requires complex optimization techniques.The L1 regularizer is almost everywhere differentiable and can be easily optimized with gradient descent.Yet it is not scale-invariant and causes the same shrinking rate to all parameters, which is inefficient in increasing sparsity.Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant.Our experiments show that enforcing DeepHoyer regularizers can produce even sparser neural network models than previous works, under the same accuracy level.We also show that DeepHoyer can be applied to both element-wise and structural pruning.", "target": ["Proponiamo regolatori quasi ovunque differenziabili e invarianti rispetto alla scala per il pruning DNN, che possono portare alla sparsità suprema attraverso il training standard SGD.", "L'articolo propone un regolarizzatore invariante rispetto alla scala (DeepHoyer) ispirato alla misura Hoyer per indurre la sparsità nelle reti neurali."]} +{"source": "Self-supervision, in which a target task is improved without external supervision, has primarily been explored in settings that assume the availability of additional data.However, in many cases, particularly in healthcare, one may not have access to additional data (labeled or otherwise).In such settings, we hypothesize that self-supervision based solely on the structure of the data at-hand can help.We explore a novel self-supervision framework for time-series data, in which multiple auxiliary tasks (e.g., forecasting) are included to improve overall performance on a sequence-level target task without additional training data.We call this approach limited self-supervision, as we limit ourselves to only the data at-hand.We demonstrate the utility of limited self-supervision on three sequence-level classification tasks, two pertaining to real clinical data and one using synthetic data.Within this framework, we introduce novel forms of self-supervision and demonstrate their 
utility in improving performance on the target task.Our results indicate that limited self-supervision leads to a consistent improvement over a supervised baseline, across a range of domains.In particular, for the task of identifying atrial fibrillation from small amounts of electrocardiogram data, we observe a nearly 13% improvement in the area under the receiver operating characteristics curve (AUC-ROC) relative to the baseline (AUC-ROC=0.55 vs. AUC-ROC=0.62).Limited self-supervision applied to sequential data can aid in learning intermediate representations, making it particularly applicable in settings where data collection is difficult.", "target": ["Dimostriamo che i dati extra senza label non sono necessari affinché i task ausiliari self-supervised siano utili per la classificazione delle serie temporali, e presentiamo nuovi ed efficaci task ausiliari.", "Questo articolo propone un metodo self-supervised per l'apprendimento da dati di serie temporali in ambienti sanitari attraverso la progettazione di task ausiliari basati sulla struttura interna dei dati per creare training task ausiliari con più label.", "Questo articolo propone un approccio per l'apprendimento self-supervised su serie temporali."]} +{"source": "Are neural networks biased toward simple functions?Does depth always help learn more complex features?Is training the last layer of a network as good as training all layers?These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective.We will study the spectra of the *Conjugate Kernel, CK,* (also called the *Neural Network-Gaussian Process Kernel*), and the *Neural Tangent Kernel, NTK*.Roughly, the CK and the NTK tell us respectively ``\"what a network looks like at initialization\" and \"``what a network looks like during and after training.\"Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks.By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments of neural networks.We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks.We have open-sourced the code for it and for generating the plots in this paper at github.com/jxVmnLgedVwv6mNcGCBy/NNspectra.", "target": ["Gli autovalori di Conjugate (aka NNGP) e Neural Tangent Kernel possono essere calcolati in forma chiusa sul cubo booleano e rivelano gli effetti degli iperparametri sul bias induttivo, training e generalizzazione delle reti neurali.", "Questo articolo fornisce un'analisi spettrale sul kernel coniugato delle reti neurali e sul kernel tangente neurale sul cubo booleano per risolvere il motivo per cui le reti deep sono orientate verso funzioni semplici."]} +{"source": "To communicate, to ground hypotheses, to analyse data, neuroscientists often refer to divisions of the brain.Here we consider atlases used to parcellate the brain when studying brain function.We discuss the meaning and the validity of these parcellations, from a conceptual point of view as well as by running various analytical tasks on popular functional brain parcellations.", "target": ["Tutte le parcellizzazioni funzionali del cervello sono sbagliate, ma alcune sono utili"]} +{"source": "High-dimensional sparse reward tasks present major challenges for reinforcement learning agents. 
In this work we use imitation learning to address two of these challenges: how to learn a useful representation of the world e.g. from pixels, and how to explore efficiently given the rarity of a reward signal?We show that adversarial imitation can work well even in this high dimensional observation space.Surprisingly the adversary itself, acting as the learned reward function, can be tiny, comprising as few as 128 parameters, and can be easily trained using the most basic GAN formulation.Our approach removes limitations present in most contemporary imitation approaches: requiring no demonstrator actions (only video), no special initial conditions or warm starts, and no explicit tracking of any single demo.The proposed agent can solve a challenging robot manipulation task of block stacking from only video demonstrations and sparse reward, in which the non-imitating agents fail to learn completely. Furthermore, our agent learns much faster than competing approaches that depend on hand-crafted, staged dense reward functions, and also better compared to standard GAIL baselines.Finally, we develop a new adversarial goal recognizer that in some cases allows the agent to learn stacking without any task reward, purely from imitation.", "target": ["Imitazione dai pixel, con reward scarsa o nulla, usando RL off-policy e una piccola funzione di reward adversarially-learned.", "L'articolo propone di utilizzare un \"avversario minimo\" nel generative adversarial imitation learning in spazi visivi ad alta densità.", "Questo articolo mira a risolvere il problema della stima di reward sparse in un ambiente di input ad alta densità."]} +{"source": "In this paper we show strategies to easily identify fake samples generated with the Generative Adversarial Network framework.One strategy is based on the statistical analysis and comparison of raw pixel values and features extracted from them.The other strategy learns formal specifications from the real data and shows that fake samples violate the specifications of the real data.We show that fake samples produced with GANs have a universal signature that can be used to identify fake samples.We provide results on MNIST, CIFAR10, music and speech data.", "target": ["Mostriamo strategie per identificare facilmente i fake sample generati con il framework Generative Adversarial Network.", "Mostrare che i fake sample creati con le comuni implementazioni di reti generative avversarie (GAN) sono facilmente identificabili utilizzando varie tecniche statistiche.", "L'articolo propone delle statistiche per identificare i dati falsi generati usando le GAN basate su semplici statistiche marginali o specifiche formali generate automaticamente dai dati reali."]} +{"source": "Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators for partial sums in inner-product operations to preserve the quality of convergence.The absence of any framework to analyze the precision requirements of partial sum accumulations results in conservative design choices.This imposes an upper-bound on the reduction of complexity of multiply-accumulate units.We present a statistical approach to analyze the impact of reduced accumulation precision on deep learning training.Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of 
equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation.We apply our analysis to three benchmark networks: CIFAR-10 ResNet 32, ImageNet ResNet 18 and ImageNet AlexNet.In each case, with accumulation precision set in accordance with our proposed equations, the networks successfully converge to the single precision floating-point baseline.We also show that reducing accumulation precision further degrades the quality of the trained network, proving that our equations produce tight bounds.Overall this analysis enables precise tailoring of computation hardware to the application, yielding area- and power-optimal systems.", "target": ["Presentiamo un framework analitico per determinare i requisiti di bit-width di accumulazione in tutti e tre i GEMM di deep learning e verifichiamo la validità e la precisione del nostro metodo tramite esperimenti di benchmarking.", "Gli autori propongono un metodo analitico per prevedere il numero di bit di mantissa necessari per le sommatorie parziali per i layer convoluzionali e fully connected", "Gli autori conducono un'analisi approfondita della precisione numerica richiesta per le operazioni di accumulazione nella formazione delle reti neurali e mostrano l'impatto teorico della riduzione del numero di bit nell'accumulatore in virgola mobile."]} +{"source": "Unsupervised domain adaptation is a promising avenue to enhance the performance of deep neural networks on a target domain, using labels only from a source domain.However, the two predominant methods, domain discrepancy reduction learning and semi-supervised learning, are not readily applicable when source and target domains do not share a common label space.This paper addresses the above scenario by learning a representation space that retains discriminative power on both the (labeled) source and (unlabeled) target domains while keeping representations for the two domains well-separated.Inspired by a theoretical analysis, we first reformulate the disjoint classification task, where the source and target domains correspond to non-overlapping class labels, to a verification one.To handle both within and cross domain verifications, we propose a Feature Transfer Network (FTN) to separate the target feature space from the original source space while aligned with a transformed source space.Moreover, we present a non-parametric multi-class entropy minimization loss to further boost the discriminative power of FTNs on the target domain.In experiments, we first illustrate how FTN works in a controlled setting of adapting from MNIST-M to MNIST with disjoint digit classes between the two domains and then demonstrate the effectiveness of FTNs through state-of-the-art performances on a cross-ethnicity face recognition problem.", "target": ["Una nuova teoria di unsupervised domain adaptation per il distance metric learning e la sua applicazione al riconoscimento dei volti attraverso diverse variazioni di etnia.", "Propone una nuova rete di trasferimento delle feature che ottimizza la domain adversarial loss e la domain separation loss.."]} +{"source": "In this paper, we consider the problem of training neural networks (NN).To promote a NN with specific structures, we explicitly take into consideration the nonsmooth regularization (such as L1-norm) and constraints (such as interval constraint).This is formulated as a constrained nonsmooth nonconvex optimization problem, and we propose a convergent proximal-type stochastic gradient descent (Prox-SGD) 
algorithm.We show that under properly selected learning rates, momentum eventually resembles the unknown real gradient and thus is crucial in analyzing the convergence.We establish that with probability 1, every limit point of the sequence generated by the proposed Prox-SGD is a stationary point.Then the Prox-SGD is tailored to train a sparse neural network and a binary neural network, and the theoretical analysis is also supported by extensive numerical tests.", "target": ["Proponiamo un algoritmo di stochastic gradient descent di tipo prossimale convergente per problemi di ottimizzazione non smooth non convessi vincolati", "Questo articolo propone Prox-SGD, un framework teorico per algoritmi di ottimizzazione stocastica che convergono asintoticamente alla stazionarietà per loss smooth non convesse + vincoli/regolarizzatori convessi.", "L'articolo propone un nuovo algoritmo di ottimizzazione stocastica basato sul gradiente con gradient averaging adattando la teoria degli algoritmi prossimali ad un setting non convesso."]} +{"source": "The loss of a few neurons in a brain rarely results in any visible loss of function.However, the insight into what “few” means in this context is unclear.How many random neuron failures will it take to lead to a visible loss of function?In this paper, we address the fundamental question of the impact of the crash of a random subset of neurons on the overall computation of a neural network and the error in the output it produces.We study fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting.We give provable guarantees on the robustness of the network to these crashes.Our main contribution is a bound on the error in the output of a network under small random Bernoulli crashes proved by using a Taylor expansion in the continuous limit, where close-by neurons at a layer are similar.The failure mode we adopt in our model is characteristic of neuromorphic hardware, a promising technology to speed up artificial neural networks, as well as of biological networks.We show that our theoretical bounds can be used to compare the fault tolerance of different architectures and to design a regularizer improving the fault tolerance of a given architecture.We design an algorithm achieving fault tolerance using a reasonable number of neurons.In addition to the theoretical proof, we also provide experimental validation of our results and suggest a connection to the generalization capacity problem.", "target": ["Diamo un bound per le NN sull'errore di output in caso di fallimenti dei pesi casuali usando un'espansione di Taylor nel limite continuo in cui i neuroni vicini sono simili", "Questo articolo considera il problema del dropout dei neuroni da una rete neurale, mostrando che se l'obiettivo è quello di diventare robusto ai neuroni randomly dropped durante l'evaluation, allora è sufficiente fare training con dropout.", "Questo contributo studia l'impatto delle cancellazioni di neuroni casuali sulla precisione di predizione dell'architettura addestrata, con l'applicazione all'analisi degli errori e al contesto specifico dell'hardware neuromorfo."]} +{"source": "Truly intelligent agents need to capture the interplay of all their senses to build a rich physical understanding of their world.In robotics, we have seen tremendous progress in using visual and tactile perception; however we have often ignored a key sense: sound.This is primarily due to lack of data that captures the interplay of action and sound.In this 
work, we perform the first large-scale study of the interactions between sound and robotic action.To do this, we create the largest available sound-action-vision dataset with 15,000 interactions on 60 objects using our robotic platform Tilt-Bot.By tilting objects and allowing them to crash into the walls of a robotic tray, we collect rich four-channel audio information.Using this data, we explore the synergies between sound and action, and present three key insights.First, sound is indicative of fine-grained object class information, e.g., sound can differentiate a metal screwdriver from a metal wrench.Second, sound also contains information about the causal effects of an action, i.e. given the sound produced, we can predict what action was applied on the object.Finally, object representations derived from audio embeddings are indicative of implicit physical properties.We demonstrate that on previously unseen objects, audio embeddings generated through interactions can predict forward models 24% better than passive visual embeddings.", "target": ["Esploriamo e studiamo le sinergie tra suono e azione.", "Questo articolo esplora le connessioni tra azione e suono costruendo un dataset sound-action-vision con un tilt-bot.", "Questo articolo studia il ruolo dell'audio nella percezione degli oggetti e delle azioni, e anche il modo in cui le informazioni uditive possono aiutare l'apprendimento di modelli dinamici forward e inversi."]} +{"source": "Hierarchical label structures widely exist in many machine learning tasks, ranging from those with explicit label hierarchies such as image classification to the ones that have latent label hierarchies such as semantic segmentation.Unfortunately, state-of-the-art methods often utilize cross-entropy loss which in-explicitly assumes the independence among class labels.Motivated by the fact that class members from the same hierarchy need to be similar to each others, we design a new training diagram called Hierarchical Complement Objective Training (HCOT).In HCOT, in addition to maximizing the probability of the ground truth class, we also neutralize the probabilities of rest of the classes in a hierarchical fashion, making the model take advantage of the label hierarchy explicitly.We conduct our method on both image classification and semantic segmentation.Results show that HCOT outperforms state-of-the-art models in CIFAR100, Imagenet, and PASCAL-context.Our experiments also demonstrate that HCOT can be applied on tasks with latent label hierarchies, which is a common characteristic in many machine learning tasks.", "target": ["Proponiamo Hierarchical Complement Objective Training, un nuovo paradigma di training per sfruttare efficacemente la gerarchia delle categorie nello spazio delle label sia nella classificazione delle immagini che nella segmentazione semantica.", "Un metodo che regolarizza l'entropia della distribuzione posteriore sulle classi che può essere utile per i task di classificazione e segmentazione delle immagini"]} +{"source": "There is a growing interest in automated neural architecture search (NAS).To improve the efficiency of NAS, previous approaches adopt weight sharing method to force all models share the same set of weights. 
However, it has been observed that a model performing better with shared weights does not necessarily perform better when trained alone.In this paper, we analyse existing weight sharing one-shot NAS approaches from a Bayesian point of view and identify the posterior fading problem, which compromises the effectiveness of shared weights.To alleviate this problem, we present a practical approach to guide the parameter posterior towards its true distribution.Moreover, a hard latency constraint is introduced during the search so that the desired latency can be achieved.The resulting method, namely Posterior Convergent NAS (PC-NAS), achieves state-of-the-art performance under standard GPU latency constraint on ImageNet.In our small search space, our model PC-NAS-S attains 76.8% top-1 accuracy, 2.1% higher than MobileNetV2 (1.4x) with the same latency.When adopted to our large search space, PC-NAS-L achieves 78.1% top-1 accuracy within 11ms.The discovered architecture also transfers well to other computer vision applications such as object detection and person re-identification.", "target": ["Il nostro articolo identifica il problema dell'approccio di weight sharing esistente nella ricerca dell'architettura neurale e propone un metodo pratico, ottenendo importanti risultati.", "L'autore identifica un problema con il NAS chiamato posterior fading e introduce il Posterior Convergent NAS per mitigare questo effetto"]} +{"source": "Noisy labels are very common in real-world training data, which lead to poor generalization on test data because of overfitting to the noisy labels.In this paper, we claim that such overfitting can be avoided by \"early stopping\" training a deep neural network before the noisy labels are severely memorized.Then, we resume training the early stopped network using a \"maximal safe set,\" which maintains a collection of almost certainly true-labeled samples at each epoch since the early stop point.Putting them all together, our novel two-phase training method, called Prestopping, realizes noise-free training under any type of label noise for practical use.Extensive experiments using four image benchmark data sets verify that our method significantly outperforms four state-of-the-art methods in test error by 0.4–8.2 percent points under existence of real-world noise.", "target": ["Proponiamo un nuovo approccio di training in due fasi basato sull'early stopping per un training robusto su label rumorose.", "Il documento propone di studiare come l'arresto precoce nell'ottimizzazione aiuta a trovare esempi sicuri", "Questo articolo propone un metodo di training in due fasi per l'apprendimento con rumore di label."]} +{"source": "Learning when to communicate and doing that effectively is essential in multi-agent tasks.Recent works show that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but have been restricted to fully-cooperative tasks.In this paper, we present Individualized Controlled Continuous Communication Model (IC3Net) which has better training efficiency than simple continuous communication model, and can be applied to semi-cooperative and competitive settings along with the cooperative settings.IC3Net controls continuous communication with a gating mechanism and uses individualized rewards for each agent to gain better performance and scalability while fixing credit assignment issues.Using a variety of tasks including StarCraft BroodWars explore and combat scenarios, we show that our network yields improved performance and
convergence rates than the baselines as the scale increases.Our results convey that IC3Net agents learn when to communicate based on the scenario and profitability.", "target": ["Introduciamo IC3Net, una singola rete che può essere usata per addestrare gli agenti in scenari cooperativi, competitivi e misti. Mostriamo anche che gli agenti possono imparare quando comunicare usando il nostro modello.", "L'autore propone una nuova architettura per l'reinforcement learning multi-agente che utilizza diversi controllori LSTM con pesi legati che trasmettono un vettore continuo l'uno all'altro", "Gli autori propongono un interessante schema di gating che permette agli agenti di comunicare in un ambiente RL multi-agente."]} +{"source": "Neural sequence-to-sequence models are a recently proposed family of approaches used in abstractive summarization of text documents, useful for producing condensed versions of source text narratives without being restricted to using only words from the original text.Despite the advances in abstractive summarization, custom generation of summaries (e.g. towards a user's preference) remains unexplored.In this paper, we present CATS, an abstractive neural summarization model, that summarizes content in a sequence-to-sequence fashion but also introduces a new mechanism to control the underlying latent topic distribution of the produced summaries.Our experimental results on the well-known CNN/DailyMail dataset show that our model achieves state-of-the-art performance.", "target": ["Presentiamo il primo modello di riassunto neurale astrattivo in grado di personalizzare i riassunti generati."]} +{"source": "We propose a software framework based on ideas of the Learning-Compression algorithm , that allows one to compress any neural network by different compression mechanisms (pruning, quantization, low-rank, etc.).By design, the learning of the neural net (handled by SGD) is decoupled from the compression of its parameters (handled by a signal compression function), so that the framework can be easily extended to handle different combinations of neural net and compression type.In addition, it has other advantages, such as easy integration with deep learning frameworks, efficient training time, competitive practical performance in the loss-compression tradeoff, and reasonable convergence guarantees.Our toolkit is written in Python and Pytorch and we plan to make it available by the workshop time, and eventually open it for contributions from the community.", "target": ["Proponiamo una framework software basata sulle idee dell'algoritmo Learning-Compression, che permette di comprimere qualsiasi rete neurale con diversi meccanismi di compressione (pruning, quantizzazione, low-rank, ecc.).", "Questo articolo presenta il progetto di una libreria software che rende più facile per l'utente comprimere le loro reti nascondendo i dettagli dei metodi di compressione."]} +{"source": "This work seeks the possibility of generating the human face from voice solely based on the audio-visual data without any human-labeled annotations.To this end, we propose a multi-modal learning framework that links the inference stage and generation stage.First, the inference networks are trained to match the speaker identity between the two different modalities.Then the pre-trained inference networks cooperate with the generation network by giving conditional information about the voice.", "target": ["Questo articolo propone un metodo di generazione multimodale end-to-end del volto umano dal discorso 
basato su un framework di apprendimento auto-supervisionato.", "Questo articolo presenta un framework di apprendimento multimodale che collega la fase di inferenza e la fase di generazione per cercare la possibilità di generare il volto umano dalla sola voce.", "Questo lavoro mira a costruire un framework condizionale per la generazione di immagini del viso dal segnale audio."]} +{"source": "We present a simple neural model that given a formula and a property tries to answer the question whether the formula has the given property, for example whether a propositional formula is always true.The structure of the formula is captured by a feedforward neural network recursively built for the given formula in a top-down manner.The results of this network are then processed by two recurrent neural networks.One of the interesting aspects of our model is how propositional atoms are treated.For example, the model is insensitive to their names, it only matters whether they are the same or distinct.", "target": ["Viene presentato un approccio top-down per rappresentare ricorsivamente le formule proposizionali tramite reti neurali.", "Questo articolo fornisce un nuovo modello a rete neurale di formule logiche che raccoglie informazioni su una data formula attraversando il suo albero di parse dall'alto verso il basso.", "L'articolo persegue il percorso di una rete strutturata ad albero isomorfa all'albero di parse di una formula del calcolo proposizionale, ma passando informazioni dall'alto verso il basso piuttosto che dal basso verso l'alto."]} +{"source": "Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games.We identify three key challenges that any algorithm needs to master in order to perform well on all games: processing diverse reward distributions, reasoning over long time horizons, and exploring efficiently.
In this paper, we propose an algorithm that addresses each of these challenges and is able to learn human-level policies on nearly all Atari games.A new transformed Bellman operator allows our algorithm to process rewards of varying densities and scales; an auxiliary temporal consistency loss allows us to train stably using a discount factor of 0.999 (instead of 0.99) extending the effective planning horizon by an order of magnitude; and we ease the exploration problem by using human demonstrations that guide the agent towards rewarding states.When tested on a set of 42 Atari games, our algorithm exceeds the performance of an average human on 40 games using a common set of hyper parameters.", "target": ["Ape-X DQfD = DQN distribuito (molti attori + un discente + replay prioritario) con dimostrazioni che ottimizzano il ritorno non scontato di 0,999 su Atari.", "L'articolo propone tre estensioni (aggiornamento di Bellman, loss di coerenza temporale e dimostrazione esperta) a DQN per migliorare le prestazioni di apprendimento sui giochi Atari, raggiungendo prestazioni superiori ai risultati dello stato dell'arte per i giochi Atari.", "Questo articolo propone un operatore di Bellman trasformato che mira a risolvere la sensibilità alla ricompensa non presa, la robustezza al valore del fattore di sconto e il problema dell'esplorazione."]} +{"source": "The knowledge that humans hold about a problem often extends far beyond a set of training data and output labels.While the success of deep learning mostly relies on supervised training, important properties cannot be inferred efficiently from end-to-end annotations alone, for example causal relations or domain-specific invariances.We present a general technique to supplement supervised training with prior knowledge expressed as relations between training instances.We illustrate the method on the task of visual question answering to exploit various auxiliary annotations, including relations of equivalence and of logical entailment between questions.Existing methods to use these annotations, including auxiliary losses and data augmentation, cannot guarantee the strict inclusion of these relations into the model since they require a careful balancing against the end-to-end objective.Our method uses these relations to shape the embedding space of the model, and treats them as strict constraints on its learned representations.%The resulting model encodes relations that better generalize across instances.In the context of VQA, this approach brings significant improvements in accuracy and robustness, in particular over the common practice of incorporating the constraints as a soft regularizer.We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently from the amount of supervised data used.It demonstrates the value of an additional training signal that is otherwise difficult to extract from end-to-end annotations alone.", "target": ["Metodo di training per imporre vincoli rigorosi sulle embeddings apprese durante la training supervisionata. 
Applicato alla risposta alle domande visive.", "Gli autori propongono un framework per incorporare una conoscenza preliminare semantica aggiuntiva nel training tradizionale dei modelli di apprendimento deep per regolarizzare lo spazio di incorporazione invece dello spazio dei parametri.", "L'articolo sostiene la necessità di codificare la conoscenza esterna nel layer di incorporazione linguistica di una rete neurale multimodale, come un insieme di vincoli rigidi."]} +{"source": "Artificial neural networks revolutionized many areas of computer science in recent years since they provide solutions to a number of previously unsolved problems.On the other hand, for many problems, classic algorithms exist, which typically exceed the accuracy and stability of neural networks.To combine these two concepts, we present a new kind of neural networks—algorithmic neural networks (AlgoNets).These networks integrate smooth versions of classic algorithms into the topology of neural networks.Our novel reconstructive adversarial network (RAN) enables solving inverse problems without or with only weak supervision.", "target": ["Risolvere problemi inversi usando approssimazioni smooth degli algoritmi forward per addestrare i modelli inversi."]} +{"source": "Pointwise localization allows more precise localization and accurate interpretability, compared to bounding box, in applications where objects are highly unstructured such as in medical domain.In this work, we focus on weakly supervised localization (WSL) where a model is trained to classify an image and localize regions of interest at pixel-level using only global image annotation.Typical convolutional attention maps are prone to high false positive regions.To alleviate this issue, we propose a new deep learning method for WSL, composed of a localizer and a classifier, where the localizer is constrained to determine relevant and irrelevant regions using conditional entropy (CE) with the aim to reduce false positive regions.Experimental results on a public medical dataset and two natural datasets, using Dice index, show that, compared to state of the art WSL methods, our proposal can provide significant improvements in terms of image-level classification and pixel-level localization (low false positive) with robustness to overfitting.A public reproducible PyTorch implementation is provided.", "target": ["Un metodo di apprendimento deep per la localizzazione puntiforme weakly-supervised che impara usando solo le label a livello di immagine.
Si basa sull'entropia condizionale per localizzare le regioni rilevanti e irrilevanti con l'obiettivo di minimizzare le regioni false positive.", "Questo lavoro esplora il problema del WSL usando un nuovo design di termini di regolarizzazione e un algoritmo di cancellazione ricorsivo.", "Questo articolo presenta un nuovo approccio weakly-supervised per l'apprendimento della segmentazione degli oggetti con label di classe a livello di immagine."]} +{"source": "Model-based reinforcement learning has been empirically demonstrated as a successful strategy to improve sample efficiency.Particularly, Dyna architecture, as an elegant model-based architecture integrating learning and planning, provides huge flexibility of using a model.One of the most important components in Dyna is called search-control, which refers to the process of generating state or state-action pairs from which we query the model to acquire simulated experiences.Search-control is critical to improve learning efficiency.In this work, we propose a simple and novel search-control strategy by searching high frequency region on value function.Our main intuition is built on Shannon sampling theorem from signal processing, which indicates that a high frequency signal requires more samples to reconstruct.We empirically show that a high frequency function is more difficult to approximate.This suggests a search-control strategy: we should use states in high frequency region of the value function to query the model to acquire more samples.We develop a simple strategy to locally measure the frequency of a function by gradient norm, and provide theoretical justification for this approach.We then apply our strategy to search-control in Dyna, and conduct experiments to show its property and effectiveness on benchmark domains.", "target": ["Acquisire stati dalla regione ad alta frequenza per il search-control in Dyna.", "Gli autori propongono di fare il campionamento nel dominio delle alte frequenze per aumentare la sample efficiency", "Questo articolo propone un nuovo modo di selezionare gli stati da cui fare le transizioni nell'algoritmo dyna."]} +{"source": "We propose a new architecture for distributed image compression from a group of distributed data sources.The work is motivated by practical needs of data-driven codec design, low power consumption, robustness, and data privacy.The proposed architecture, which we refer to as Distributed Recurrent Autoencoder for Scalable Image Compression (DRASIC), is able to train distributed encoders and one joint decoder on correlated data sources.Its compression capability is much better than the method of training codecs separately.Meanwhile, for 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio (PSNR) of that of a single codec trained with all data sources.We experiment distributed sources with different correlations and show how our methodology well matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC).Our method is also shown to be robust to the lack of presence of encoded data from a number of distributed sources.Moreover, it is scalable in the sense that codes can be decoded simultaneously at more than one compression quality level.To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with deep learning.", "target": ["Introduciamo un framework di codifica delle fonti distribuite basato su Distributed Recurrent Autoencoder for Scalable Image Compression (DRASIC).", 
"L'articolo ha proposto un autoencoder ricorrente distribuito per la compressione delle immagini che utilizza un ConvLSTM per imparare codici binari che sono costruiti progressivamente dai residui delle informazioni precedentemente codificate", "Gli autori propongono un metodo per addestrare modelli di compressione delle immagini su fonti multiple, con un encoder separato su ogni fonte e un decoder condiviso."]} +{"source": "Long short-term memory networks (LSTMs) were introduced to combat vanishing gradients in simple recurrent neural networks (S-RNNs) by augmenting them with additive recurrent connections controlled by gates.We present an alternate view to explain the success of LSTMs: the gates themselves are powerful recurrent models that provide more representational power than previously appreciated.We do this by showing that the LSTM's gates can be decoupled from the embedded S-RNN, producing a restricted class of RNNs where the main recurrence computes an element-wise weighted sum of context-independent functions of the inputs.Experiments on a range of challenging NLP problems demonstrate that the simplified gate-based models work substantially better than S-RNNs, and often just as well as the original LSTMs, strongly suggesting that the gates are doing much more in practice than just alleviating vanishing gradients.", "target": ["I gate fanno tutto il lavoro pesante negli LSTM calcolando somme pesate in base agli elementi, e la rimozione della RNN semplice interna non degrada le prestazioni del modello.", "Questo articolo propone una variante semplificata di LSTM rimuovendo la non linearità dell'elemento contenuto e del gate di uscita", "Questo articolo presenta un'analisi delle LSTMS, mostrando che hanno una forma in cui il contenuto della cella di memoria ad ogni passo è una combinazione pesata dei valori di content update calcolati ad ogni passo temporale e offre una semplificazione delle LSTM che calcola il valore della cella di memoria ad ogni passo temporale in termini di una funzione deterministica dell'input piuttosto che una funzione dell'input e del contesto corrente.", "L'articolo propone una nuova visione di LSTM in cui il focus è una somma pesata element-wise e sostiene che LSTM è ridondante mantenendo solo input gate e forget gate per calcolare i pesi"]} +{"source": "Machine learning algorithms designed to characterize, monitor, and intervene on human health (ML4H) are expected to perform safely and reliably when operating at scale, potentially outside strict human supervision.This requirement warrants a stricter attention to issues of reproducibility than other fields of machine learning.In this work, we conduct a systematic evaluation of over 100 recently published ML4H research papers along several dimensions related to reproducibility we identified.We find that the field of ML4H compares poorly to more established machine learning fields, particularly concerning data accessibility and code accessibility. 
Finally, drawing from success in other fields of science, we propose recommendations to data providers, academic publishers, and the ML4H research community in order to promote reproducible research moving forward.", "target": ["Analizzando più di 300 articoli nelle recenti conferenze sul machine learning, abbiamo scoperto che le applicazioni di Machine Learning for Health (ML4H) sono in ritardo rispetto ad altri campi di apprendimento automatico in termini di metriche di riproducibilità.", "Questo documento conduce una revisione quantitativa e qualitativa dello stato della riproducibilità per le applicazioni sanitarie ML e propone raccomandazioni per rendere la ricerca più riproducibile."]} +{"source": "We propose a solution for evaluation of mathematical expression.However, instead of designing a single end-to-end model we propose a Lego bricks style architecture.In this architecture instead of training a complex end-to-end neural network, many small networks can be trained independently each accomplishing one specific operation and acting a single lego brick.More difficult or complex task can then be solved using a combination of these smaller network.In this work we first identify 8 fundamental operations that are commonly used to solve arithmetic operations (such as 1 digit multiplication, addition, subtraction, sign calculator etc).These fundamental operations are then learned using simple feed forward neural networks.We then shows that different operations can be designed simply by reusing these smaller networks.As an example we reuse these smaller networks to develop larger and a more complex network to solve n-digit multiplication, n-digit division, and cross product.This bottom-up strategy not only introduces reusability, we also show that it allows to generalize for computations involving n-digits and we show results for up to 7 digit numbers.Unlike existing methods, our solution also generalizes for both positive as well as negative numbers.", "target": ["Addestriamo molte piccole reti ognuna per una specifica operazione, queste sono poi combinate per eseguire operazioni complesse", "Questo articolo propone di usare le reti neurali per valutare le espressioni matematiche progettando 8 piccoli building block per 8 operazioni fondamentali, ad esempio, addizione, sottrazione, ecc. e poi progettando la moltiplicazione e la divisione a più cifre usando questi piccoli blocchi.", "L'articolo propone un metodo per progettare un motore di valutazione delle espressioni matematiche basato su NN."]} +{"source": "In standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real.The generator is trained to increase the probability that fake data is real.We argue that it should also simultaneously decrease the probability that real data is real because1) this would account for a priori knowledge that half of the data in the mini-batch is fake,2) this would be observed with divergence minimization, and3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. 
We show that this property can be induced by using a relativistic discriminator which estimate the probability that the given real data is more realistic than a randomly sampled fake data.We also present a variant in which the discriminator estimate the probability that the given real data is more realistic than fake data, on average.We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs).We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts,2) Standard RaGAN with gradient penalty generate data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state-of-the-art by 400%), and3) RaGANs are able to generate plausible high resolutions images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.The code is freely available on https://github.com/AlexiaJM/RelativisticGAN.", "target": ["Migliorare la qualità e la stabilità delle GAN usando un discriminatore relativistico; le GAN IPM (come WGAN-GP) sono un caso speciale.", "L'articolo propone un \"discriminatore relativistico\", che aiuta in alcune setting, anche se un po' sensibile a iperparametri, architetture e dataset.", "In questo lavoro, gli autori considerano una variante di GAN diminuendo simultaneamente la probabilità che i dati reali siano reali per il generatore."]} +{"source": "Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting.However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse.As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function.We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters.On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported.V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.", "target": ["Una versione di MPO basata sulla funzione stato-valore che raggiunge buoni risultati in una vasta gamma di task nel controllo discreto e continuo.", "Questo articolo presenta un algoritmo per il reinforcement learning on-policy che può gestire sia il controllo continuo/discreto, l'apprendimento singolo/multi-task e usare sia stati che pixel a bassa dimensione.", "L'articolo 
propone una variante online di MPO, V-MPO, che impara la funzione V e aggiorna la distribuzione non parametrica verso i vantaggi."]} +{"source": "Turing complete computation and reasoning are often regarded as necessary pre- cursors to general intelligence.There has been a significant body of work studying neural networks that mimic general computation, but these networks fail to generalize to data distributions that are outside of their training set.We study this problem through the lens of fundamental computer science problems: sorting and graph processing.We modify the masking mechanism of a transformer in order to allow them to implement rudimentary functions with strong generalization.We call this model the Neural Execution Engine, and show that it learns, through supervision, to numerically compute the basic subroutines comprising these algorithms with near perfect accuracy.Moreover, it retains this level of accuracy while generalizing to unseen data and long sequences outside of the training distribution.", "target": ["Proponiamo motori di esecuzione neurali (NEE), che sfruttano una maschera appresa e tracce di esecuzione supervised per imitare la funzionalità delle subroutine e dimostrare una forte generalizzazione.", "Questo articolo indaga un problema di costruzione di un motore di esecuzione di programmi con reti neurali e propone un modello basato su trasformer per imparare le subroutine di base e le applica in diversi algoritmi standard.", "Questo articolo affronta il problema della progettazione di architetture di reti neurali che possono imparare e implementare programmi generali."]} +{"source": "Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks.However, the meta-learning literature thus far has focused on the task segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task.In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task.We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme.The framework allows both training and testing directly on time series data without segmenting it into discrete tasks.We demonstrate the utility of this approach on a nonlinear meta-regression benchmark as well as two meta-image-classification benchmarks.", "target": ["Il rilevamento bayesiano dei punti di cambiamento permette il meta learning direttamente dai dati delle serie temporali.", "L'articolo considera il meta learning nel setting del task non segmentato e applica il rilevamento bayesiano online del punto di cambiamento con il meta learning.", "Questo articolo spinge il meta learning verso setting non segmentati, dove il framework MOCA adotta uno schema di stima bayesiana dei punti di cambiamento per il rilevamento dei cambiamenti di attività."]} +{"source": "People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms.These algorithms shift some of the sounds from the high frequency band to the lower frequency band where the sounds become more perceptible for the people with the condition.Fricative phonemes have an important part of their content concentrated in high frequency 
bands.It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme, and kept off at all other times.Therefore, timely (with zero delay) and accurate fricative phoneme detection is a key problem for high quality hearing aids.In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus.All reported results are reproducible and come with easy to use code that could serve as a baseline for future research.", "target": ["Un approccio basato sul deep learning per il rilevamento di fonemi fricativi a ritardo zero", "Questo articolo applica metodi di deep learning supervised per rilevare la durata esatta di un fonema fricativo al fine di migliorare l'algoritmo pratico di abbassamento della frequenza."]} +{"source": "Sequence-to-sequence models with soft attention have been successfully applied to a wide variety of problems, but their decoding process incurs a quadratic time and space cost and is inapplicable to real-time sequence transduction.To address these issues, we propose Monotonic Chunkwise Attention (MoChA), which adaptively splits the input sequence into small chunks over which soft attention is computed.We show that models utilizing MoChA can be trained efficiently with standard backpropagation while allowing online and linear-time decoding at test time.When applied to online speech recognition, we obtain state-of-the-art results and match the performance of a model using an offline soft attention mechanism.In document summarization experiments where we do not expect monotonic alignments, we show significantly improved performance compared to a baseline monotonic attention-based model.", "target": ["Un meccanismo di attention online e a tempo lineare che esegue un'attention soft su pezzi della sequenza di input localizzati in modo adattivo.", "Questo articolo propone una piccola modifica all'attention monotonica in [1] aggiungendo un'attention soft al segmento previsto dall'attention monotonica.", "L'articolo propone un'estensione di un precedente modello di attention monotona (Raffel et al 2017) per usare attentetion su una finestra di dimensioni fisse fino alla posizione di allineamento."]} +{"source": "We present a framework for automatically ordering image patches that enables in-depth analysis of dataset relationship to learnability of a classification task using convolutional neural network.An image patch is a group of pixels residing in a continuous area contained in the sample.Our preliminary experimental results show that an informed smart shuffling of patches at a sample level can expedite training by exposing important features at early stages of training.In addition, we conduct systematic experiments and provide evidence that CNN’s generalization capabilities do not correlate with human recognizable features present in training samples.We utilized the framework not only to show that spatial locality of features within samples do not correlate with generalization, but also to expedite convergence while achieving similar generalization performance.Using multiple network architectures and datasets, we show that ordering image regions using mutual information measure between adjacent patches, enables CNNs to converge in a third of the total steps required to train the same network without patch ordering.", "target": ["Sviluppare nuove tecniche che si basino sul riordino delle patch 
per consentire un'analisi dettagliata della relazione tra i dataset e le prestazioni di training e generalizzazione."]} +{"source": "Producing agents that can generalize to a wide range of environments is a significant challenge in reinforcement learning.One method for overcoming this issue is domain randomization, whereby at the start of each training episode some parameters of the environment are randomized so that the agent is exposed to many possible variations.However, domain randomization is highly inefficient and may lead to policies with high variance across domains.In this work, we formalize the domain randomization problem, and show that minimizing the policy's Lipschitz constant with respect to the randomization parameters leads to low variance in the learned policies.We propose a method where the agent only needs to be trained on one variation of the environment, and its learned state representations are regularized during training to minimize this constant.We conduct experiments that demonstrate that our technique leads to more efficient and robust learning than standard domain randomization, while achieving equal generalization scores.", "target": ["Produciamo agenti di reinforcement learning che generalizzano bene ad una vasta gamma di ambienti utilizzando una nuova tecnica di regolarizzazione.", "L'articolo introduce la sfida delle policy ad alta varianza nella randomizzazione del dominio per il reinforcement learning e si concentra principalmente sul problema della randomizzazione visiva, dove i diversi domini randomizzati differiscono solo nello spazio di stato e le reward e le dinamiche sottostanti sono le stesse.", "Per migliorare la capacità di generalizzazione degli agenti di deep RL attraverso i task con diversi modelli visivi, questo articolo ha proposto una semplice tecnica di regolarizzazione per la randomizzazione del dominio."]} +{"source": "Claims from the fields of network neuroscience and connectomics suggest that topological models of the brain involving complex networks are of particular use and interest.The field of deep neural networks has mostly left inspiration from these claims out.In this paper, we propose three architectures and use each of them to explore the intersection of network neuroscience and deep learning in an attempt to bridge the gap between the two fields.Using the teachings from network neuroscience and connectomics, we show improvements over the ResNet architecture, we show a possible connection between early training and the spectral properties of the network, and we show the trainability of a DNN based on the neuronal network of C.Elegans.", "target": ["Esploriamo l'intersezione tra le neuroscienze di rete e il deep learning."]} +{"source": "Creating a knowledge base that is accurate, up-to-date and complete remains a significant challenge despite substantial efforts in automated knowledge base construction. In this paper, we present Alexandria -- a system for unsupervised, high-precision knowledge base construction.Alexandria uses a probabilistic program to define a process of converting knowledge base facts into unstructured text. 
Using probabilistic inference, we can invert this program and so retrieve facts, schemas and entities from web text.The use of a probabilistic program allows uncertainty in the text to be propagated through to the retrieved facts, which increases accuracy and helps merge facts from multiple sources.Because Alexandria does not require labelled training data, knowledge bases can be constructed with the minimum of manual input.We demonstrate this by constructing a high precision (typically 97\\%+) knowledge base for people from a single seed fact.", "target": ["Questo articolo presenta un sistema per la costruzione unsupervised e ad alta precisione di knowledge base, utilizzando un programma probabilistico per definire un processo di conversione dei fatti della knowledge base in testo non strutturato.", "Panoramica sulla knowledge base esistente che è costruita con un modello probabilistico, con l'approccio di costruzione della knowledge base valutato contro altri approcci di knowledge base YAGO2, NELL, Knowledge Vault, e DeepDive.", "Questo articolo usa un programma probabilistico che descrive il processo attraverso il quale i fatti che descrivono le entità possono essere realizzati nel testo e in un gran numero di pagine web, per imparare ad eseguire l'estrazione di fatti sulle persone usando un singolo fatto come seed."]} +{"source": "Recent advances have made it possible to create deep complex-valued neural networks.Despite this progress, many challenging learning tasks have yet to leverage the power of complex representations.Building on recent advances, we propose a new deep complex-valued method for signal retrieval and extraction in the frequency domain.As a case study, we perform audio source separation in the Fourier domain.Our new method takes advantage of the convolution theorem which states that the Fourier transform of two convolved signals is the elementwise product of their Fourier transforms.Our novel method is based on a complex-valued version of Feature-Wise Linear Modulation (FiLM) and serves as the keystone of our proposed signal extraction method.We also introduce a new and explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram.Using the Wall Street Journal Dataset, we compared our phase-aware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and proposed loss.", "target": ["Nuovo metodo di estrazione del segnale nel dominio di Fourier"]} +{"source": "We propose an implementation of GNN that predicts and imitates the motion be- haviors from observed swarm trajectory data.The network’s ability to capture interaction dynamics in swarms is demonstrated through transfer learning.We finally discuss the inherent availability and challenges in the scalability of GNN, and proposed a method to improve it with layer-wise tuning and mixing of data enabled by padding.", "target": ["Migliorare la scalabilità delle graph neural networks sull'imitation learning e la previsione del movimento dello swarm", "L'articolo propone un nuovo modello di serie temporali per l'apprendimento di una sequenza di grafi.", "Questo lavoro considera i problemi di predizione delle sequenze in un sistema multi-agente."]} +{"source": "Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings.Despite their effectiveness, the number of parameters in an 
embedding layer increases linearly with the number of symbols and poses a critical challenge on memory and storage constraints.In this work, we propose a generic and end-to-end learnable compression framework termed differentiable product quantization (DPQ).We present two instantiations of DPQ that leverage different approximation techniques to enable differentiability in end-to-end learning.Our method can readily serve as a drop-in alternative for any existing embedding layer.Empirically, DPQ offers significant compression ratios (14-238x) at negligible or no performance cost on 10 datasets across three different language tasks.", "target": ["Proponiamo un framework di quantizzazione del prodotto differenziabile che può ridurre la dimensione del layer di embedding in un training end-to-end senza costi di performance.", "Questo articolo lavora sui metodi per comprimere i layer di embedding per l'inferenza a basso costo di memoria, dove gli embedding compressi sono appresi insieme ai modelli specifici del task in un modo differenziabile end-to-end."]} +{"source": "For multi-valued functions---such as when the conditional distribution on targets given the inputs is multi-modal---standard regression approaches are not always desirable because they provide the conditional mean.Modal regression approaches aim to instead find the conditional mode, but are restricted to nonparametric approaches.Such approaches can be difficult to scale, and make it difficult to benefit from parametric function approximation, like neural networks, which can learn complex relationships between inputs and targets.In this work, we propose a parametric modal regression algorithm, by using the implicit function theorem to develop an objective for learning a joint parameterized function over inputs and targets.We empirically demonstrate on several synthetic problems that our method(i) can learn multi-valued functions and produce the conditional modes,(ii) scales well to high-dimensional inputs and(iii) is even more effective for certain unimodal problems, particularly for high frequency data where the joint function over inputs and targets can better capture the complex relationship between them.We conclude by showing that our method provides small improvements on two regression datasets that have asymmetric distributions over the targets.", "target": ["Introduciamo un semplice e nuovo algoritmo di regressione modale che è facile da scalare a problemi di grossa taglia.", "L'articolo propone un approccio di funzione implicita per imparare le modalità di regressione multimodale.", "Il presente lavoro propone un approccio parametrico per stimare il modo condizionale usando il Teorema della Funzione Implicita per distribuzioni multimodali."]} +{"source": "Deep reinforcement learning algorithms require large amounts of experience to learn an individual task.While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality.Current methods rely heavily on on-policy experience, limiting their sample efficiency.They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems.In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control.In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small 
amounts of experience.This probabilistic interpretation enables posterior sampling for structured and efficient exploration.We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency.Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.", "target": ["Meta-RL sample-efficient combinando l'inferenza variazionale delle variabili probabilistiche del task con RL off-policy", "Questo articolo propone di usare RL off-policy durante il meta-training per migliorare notevolmente la sample-efficiency dei metodi Meta-RL."]} +{"source": "Knowledge bases, massive collections of facts (RDF triples) on diverse topics, support vital modern applications.However, existing knowledge bases contain very little data compared to the wealth of information on the Web.This is because the industry standard in knowledge base creation and augmentation suffers from a serious bottleneck: they rely on domain experts to identify appropriate web sources to extract data from.Efforts to fully automate knowledge extraction have failed to improve this standard: these automated systems are able to retrieve much more data and from a broader range of sources, but they suffer from very low precision and recall.As a result, these large-scale extractions remain unexploited.In this paper, we present MIDAS, a system that harnesses the results of automated knowledge extraction pipelines to repair the bottleneck in industrial knowledge creation and augmentation processes.MIDAS automates the suggestion of good-quality web sources and describes what to extract with respect to augmenting an existing knowledge base.We make three major contributions.First, we introduce a novel concept, web source slices, to describe the contents of a web source.Second, we define a profit function to quantify the value of a web source slice with respect to augmenting an existing knowledge base.Third, we develop effective and highly-scalable algorithms to derive high-profit web source slices.We demonstrate that MIDAS produces high-profit results and outperforms the baselines significantly on both real-word and synthetic datasets.", "target": ["Questo articolo si concentra sull'identificazione di fonti web di alta qualità per la pipeline di arricchimento della knowledge base industriale."]} +{"source": "We explore the match prediction problem where one seeks to estimate the likelihood of a group of M items preferred over another, based on partial group comparison data.Challenges arise in practice.As existing state-of-the-art algorithms are tailored to certain statistical models, we have different best algorithms across distinct scenarios.Worse yet, we have no prior knowledge on the underlying model for a given scenario.These call for a unified approach that can be universally applied to a wide range of scenarios and achieve consistently high performances.To this end, we incorporate deep learning architectures so as to reflect the key structural features that most state-of-the-art algorithms, some of which are optimal in certain settings, share in common.This enables us to infer hidden models underlying a given dataset, which govern in-group interactions and statistical patterns of comparisons, and hence to devise the best algorithm tailored to the dataset at hand.Through extensive experiments on synthetic and real-world datasets, we evaluate our framework in comparison to state-of-the-art 
algorithms.It turns out that our framework consistently leads to the best performance across all datasets in terms of cross entropy loss and prediction accuracy, while the state-of-the-art algorithms suffer from inconsistent performances across different datasets.Furthermore, we show that it can be easily extended to attain satisfactory performances in rank aggregation tasks, suggesting that it can be adaptable for other tasks as well.", "target": ["Indaghiamo i meriti dell'impiego di reti neurali nel problema della predizione delle corrispondenze, dove si cerca di stimare la probabilità che un gruppo di M oggetti sia preferito ad un altro, sulla base di dati parziali di confronto di gruppo.", "Questo articolo propone una soluzione di deep neural network al problema della classificazione degli insiemi e progetta un'architettura per questo task ispirata da precedenti algoritmi progettati manualmente.", "Questo articolo fornisce una tecnica per risolvere il problema della predizione delle partite utilizzando un'architettura di deep learning."]} +{"source": "Recurrent Neural Networks (RNNs) are designed to handle sequential data but suffer from vanishing or exploding gradients. Recent work on Unitary Recurrent Neural Networks (uRNNs) have been used to address this issue and in some cases, exceed the capabilities of Long Short-Term Memory networks (LSTMs). We propose a simpler and novel update scheme to maintain orthogonal recurrent weight matrices without using complex valued matrices.This is done by parametrizing with a skew-symmetric matrix using the Cayley transform.Such a parametrization is unable to represent matrices with negative one eigenvalues, but this limitation is overcome by scaling the recurrent weight matrix by a diagonal matrix consisting of ones and negative ones. 
The proposed training scheme involves a straightforward gradient calculation and update step.In several experiments, the proposed scaled Cayley orthogonal recurrent neural network (scoRNN) achieves superior results with fewer trainable parameters than other unitary RNNs.", "target": ["Un nuovo approccio per mantenere matrici di peso ricorrenti ortogonali in una RNN.", "Introduce uno schema per l'apprendimento della matrice dei parametri ricorrenti in una rete neurale che usa la trasformata di Cayley e una matrice di peso di scaling.", "Questo articolo suggerisce una riparametrizzazione dei pesi ricorrenti di una RNN con una matrice asimmetrica utilizzando la trasformata di Cayley per mantenere la matrice dei pesi ricorrenti ortogonale.", "Una nuova parametrizzazione delle RNN permette di rappresentare le matrici di peso ortogonali in modo relativamente semplice."]} +{"source": "A large number of natural language processing tasks exist to analyze syntax, semantics, and information content of human language.These seemingly very different tasks are usually solved by specially designed architectures.In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, thus a single task-independent model can be used across different tasks.We perform extensive experiments to test this insight on 10 disparate tasks as broad as dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect based sentiment analysis (sentiment), and many others, achieving comparable performance as state-of-the-art specialized models.We further demonstrate benefits in multi-task learning.We convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.", "target": ["Usiamo un unico modello per risolvere una grande varietà di task di analisi del linguaggio naturale formulandoli in un formato unificato di span-relation.", "Questo articolo generalizza una vasta gamma di task di natural language processing in un unico framework basato sullo span e propone un'architettura generale per risolvere tutti questi problemi.", "Questo lavoro presenta una formulazione unificata di vari task NLP a livello di frase e di token."]} +{"source": "Large matrix inversions have often been cited as a major impediment to scaling Gaussian process (GP) models.With the use of GPs as building blocks for ever more sophisticated Bayesian deep learning models, removing these impediments is a necessary step for achieving large scale results.We present a variational approximation for a wide range of GP models that does not require a matrix inverse to be performed at each optimisation step.Our bound instead directly parameterises a free matrix, which is an additional variational parameter.At the local maxima of the bound, this matrix is equal to the matrix inverse.We prove that our bound gives the same guarantees as earlier variational approximations.We demonstrate some beneficial properties of the bound experimentally, although significant wall clock time speed improvements will require future improvements in optimisation and implementation.", "target": ["Presentiamo un limite inferiore variazionale per i modelli GP che può essere ottimizzato senza calcolare operazioni matriciali costose come le inverse, fornendo al tempo stesso le stesse garanzie delle approssimazioni variazionali 
esistenti."]} +{"source": "It has been shown that using geometric spaces with non-zero curvature instead of plain Euclidean spaces with zero curvature improves performance on a range of Machine Learning tasks for learning representations.Recent work has leveraged these geometries to learn latent variable models like Variational Autoencoders (VAEs) in spherical and hyperbolic spaces with constant curvature.While these approaches work well on particular kinds of data that they were designed for e.g.~tree-like data for a hyperbolic VAE, there exists no generic approach unifying all three models.We develop a Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant curvature Riemannian manifolds, where the per-component curvature can be learned.This generalizes the Euclidean VAE to curved latent spaces, as the model essentially reduces to the Euclidean VAE if curvatures of all latent space components go to 0.", "target": ["Gli autoencoder variazionali con spazi latenti modellati come prodotti di manifold Riemanni a curvatura costante migliorano la ricostruzione delle immagini rispetto alle varianti a singolo manifold.", "Questo articolo introduce una formulazione generale della nozione di VAE con uno spazio latente composto da un manifold curvo.", "Questo articolo riguarda lo sviluppo di VAE in spazi non euclidei."]} +{"source": "Machine learning algorithms for generating molecular structures offer a promising new approach to drug discovery.We cast molecular optimization as a translation problem, where the goal is to map an input compound to a target compound with improved biochemical properties.Remarkably, we observe that when generated molecules are iteratively fed back into the translator, molecular compound attributes improve with each step.We show that this finding is invariant to the choice of translation model, making this a \"black box\" algorithm.We call this method Black Box Recursive Translation (BBRT), a new inference method for molecular property optimization.This simple, powerful technique operates strictly on the inputs and outputs of any translation model.We obtain new state-of-the-art results for molecular property optimization tasks using our simple drop-in replacement with well-known sequence and graph-based models.Our method provides a significant boost in performance relative to its non-recursive peers with just a simple \"``for\" loop.Further, BBRT is highly interpretable, allowing users to map the evolution of newly discovered compounds from known starting points.", "target": ["Introduciamo un algoritmo black box per l'ottimizzazione ripetuta dei composti utilizzando un framework di traduzione.", "Gli autori inquadrano l'ottimizzazione delle molecole come un problema sequence-to-sequence, ed estendono i metodi esistenti per migliorare le molecole, mostrando che è vantaggioso per ottimizzare il logP ma non il QED.", "L'articolo si basa su modelli di traduzione esistenti sviluppati per l'ottimizzazione molecolare, facendo un uso iterativo di modelli di traduzione da sequenza a sequenza o da grafo a grafo."]} +{"source": "Deep Neural Networks (DNNs) are increasingly deployed in cloud servers and autonomous agents due to their superior performance.The deployed DNN is either leveraged in a white-box setting (model internals are publicly known) or a black-box setting (only model outputs are known) depending on the application.A practical concern in the rush to adopt DNNs is protecting the models against Intellectual 
Property (IP) infringement.We propose BlackMarks, the first end-to-end multi-bit watermarking framework that is applicable in the black-box scenario.BlackMarks takes the pre-trained unmarked model and the owner’s binary signature as inputs.The output is the corresponding marked model with specific keys that can be later used to trigger the embedded watermark.To do so, BlackMarks first designs a model-dependent encoding scheme that maps all possible classes in the task to bit ‘0’ and bit ‘1’.Given the owner’s watermark signature (a binary string), a set of key image and label pairs is designed using targeted adversarial attacks.The watermark (WM) is then encoded in the distribution of output activations of the DNN by fine-tuning the model with a WM-specific regularized loss.To extract the WM, BlackMarks queries the model with the WM key images and decodes the owner’s signature from the corresponding predictions using the designed encoding scheme.We perform a comprehensive evaluation of BlackMarks’ performance on MNIST, CIFAR-10, ImageNet datasets and corroborate its effectiveness and robustness.BlackMarks preserves the functionality of the original DNN and incurs negligible WM embedding overhead as low as 2.054%.", "target": ["Proponiamo il primo framework di watermarking per l'embedding e l'estrazione della firma multi-bit usando le uscite della DNN.", "Propone un metodo per il watermarking multi-bit delle reti neurali in un ambiente black-box e dimostra che le predizioni dei modelli esistenti possono trasportare una stringa multi-bit che può essere utilizzata in seguito per verificare l'appartenenza.", "L'articolo propone un approccio per il watermarking del modello in cui la filigrana è una stringa di bit incorporata nel modello come parte di una procedura di fine-tuning"]} +{"source": "Adversarial training provides a principled approach for training robust neural networks.From an optimization perspective, the adversarial training is essentially solving a minmax robust optimization problem.The outer minimization is trying to learn a robust classifier, while the inner maximization is trying to generate adversarial samples.Unfortunately, such a minmax problem is very difficult to solve due to the lack of convex-concave structure.This work proposes a new adversarial training method based on a general learning-to-learn framework.Specifically, instead of applying the existing hand-design algorithms for the inner problem, we learn an optimizer, which is parametrized as a convolutional neural network.At the same time, a robust classifier is learned to defense the adversarial attack generated by the learned optimizer.From the perspective of generative learning, our proposed method can be viewed as learning a deep generative model for generating adversarial samples, which is adaptive to the robust classification.Our experiments demonstrate that our proposed method significantly outperforms existing adversarial training methods on CIFAR-10 and CIFAR-100 datasets.", "target": ["Non sai come ottimizzare? 
Allora impara a ottimizzare!", "Questo articolo propone un modo per addestrare i modelli di classificazione delle immagini per essere resistenti agli attacchi di perturbazione L-infinity.", "Questo articolo propone di utilizzare un framework learning-to-learn per imparare un attacker."]} +{"source": "In this work we introduce a new framework for performing temporal predictionsin the presence of uncertainty.It is based on a simple idea of disentangling com-ponents of the future state which are predictable from those which are inherentlyunpredictable, and encoding the unpredictable components into a low-dimensionallatent variable which is fed into the forward model.Our method uses a simple su-pervised training objective which is fast and easy to train.We evaluate it in thecontext of video prediction on multiple datasets and show that it is able to consi-tently generate diverse predictions without the need for alternating minimizationover a latent space or adversarial training.", "target": ["Un metodo semplice e facile da addestrare per la predizione multimodale nelle serie temporali.", "Questo articolo introduce un modello di previsione di serie temporali che impara una mappatura deterministica e allena un'altra rete per prevedere i fotogrammi futuri dato l'input e l'errore residuo della prima rete.", "L'articolo propone un modello per la predizione sotto incertezza in cui si separano la predizione deterministica dei componenti e la predizione incerta dei componenti."]} +{"source": "Conducting reinforcement-learning experiments can be a complex and timely process.A full experimental pipeline will typically consist of a simulation of an environment, an implementation of one or many learning algorithms, a variety of additional components designed to facilitate the agent-environment interplay, and any requisite analysis, plotting, and logging thereof.In light of this complexity, this paper introduces simple rl, a new open source library for carrying out reinforcement learning experiments in Python 2 and 3 with a focus on simplicity.The goal of simple_rl is to support seamless, reproducible methods for running reinforcement learning experiments.This paper gives an overview of the core design philosophy of the package, how it differs from existing libraries, and showcases its central features.", "target": ["Questo articolo introduce e motiva simple_rl, una nuova libreria open source per realizzare esperimenti di reinforcement learning in Python 2 e 3 con un focus sulla semplicità."]} +{"source": "Wasserstein GAN(WGAN) is a model that minimizes the Wasserstein distance between a data distribution and sample distribution.Recent studies have proposed stabilizing the training process for the WGAN and implementing the Lipschitz constraint.In this study, we prove the local stability of optimizing the simple gradient penalty $\\mu$-WGAN(SGP $\\mu$-WGAN) under suitable assumptions regarding the equilibrium and penalty measure $\\mu$.The measure valued differentiation concept is employed to deal with the derivative of the penalty terms, which is helpful for handling abstract singular measures with lower dimensional support.Based on this analysis, we claim that penalizing the data manifold or sample manifold is the key to regularizing the original WGAN with a gradient penalty.Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support our theoretical results.", "target": ["Questo articolo si occupa della stabilità di una semplice ottimizzazione 
con gradient penalty $\\mu$-WGAN introducendo un concetto di differenziazione valutato da una misura.", "Si studia la WGAN con squared zero centered gradient penalty per una misura generale.", "Caratterizza la convergenza di Wasserstein GAN penalizzata dal gradiente."]} +{"source": "We present Random Partition Relaxation (RPR), a method for strong quantization of the parameters of convolutional neural networks to binary (+1/-1) and ternary (+1/0/-1) values.Starting from a pretrained model, we first quantize the weights and then relax random partitions of them to their continuous values for retraining before quantizing them again and switching to another weight partition for further adaptation. We empirically evaluate the performance of RPR with ResNet-18, ResNet-50 and GoogLeNet on the ImageNet classification task for binary and ternary weight networks.We show accuracies beyond the state-of-the-art for binary- and ternary-weight GoogLeNet and competitive performance for ResNet-18 and ResNet-50 using a SGD-based training method that can easily be integrated into existing frameworks.", "target": ["Metodo di training allo stato dell'arte per reti a pesi binari e ternari basato sull'ottimizzazione alternata di partizioni di peso con rilassamento continuo random.", "L'articolo propone un nuovo schema di training per ottimizzare una rete neurale ternaria.", "Gli autori propongono RPR, un modo per partizionare e quantizzare i pesi in modo casuale e addestrare i parametri rimanenti seguiti dal rilassamento a cicli alternati per addestrare i modelli quantizzati."]} +{"source": "Learning long-term dependencies is a key long-standing challenge of recurrent neural networks (RNNs).Hierarchical recurrent neural networks (HRNNs) have been considered a promising approach as long-term dependencies are resolved through shortcuts up and down the hierarchy.Yet, the memory requirements of Truncated Backpropagation Through Time (TBPTT) still prevent training them on very long sequences.In this paper, we empirically show that in (deep) HRNNs, propagating gradients back from higher to lower levels can be replaced by locally computable losses, without harming the learning capability of the network, over a wide range of tasks.This decoupling by local losses reduces the memory requirements of training by a factor exponential in the depth of the hierarchy in comparison to standard TBPTT.", "target": ["Sostituiamo alcuni percorsi dei gradienti nelle RNN gerarchiche con una loss ausiliaria. 
Mostriamo che questo può ridurre il costo della memoria preservando le prestazioni.", "L'articolo introduce un'architettura RNN gerarchica che potrebbe essere addestrata in modo più efficiente.", "Il documento proposto suggerisce di disaccoppiare i diversi livelli di gerarchia in RNN utilizzando loss ausiliarie."]} +{"source": "In a typical deep learning approach to a computer vision task, Convolutional Neural Networks (CNNs) are used to extract features at varying levels of abstraction from an image and compress a high dimensional input into a lower dimensional decision space through a series of transformations.In this paper, we investigate how a class of input images is eventually compressed over the course of these transformations.In particular, we use singular value decomposition to analyze the relevant variations in feature space.These variations are formalized as the effective dimension of the embedding.We consider how the effective dimension varies across layers within class.We show that across datasets and architectures, the effective dimension of a class increases before decreasing further into the network, suggesting some sort of initial whitening transformation.Further, the decrease rate of the effective dimension deeper in the network corresponds with training performance of the model.", "target": ["Le reti neurali che fanno un buon lavoro di classificazione proiettano i punti in forme più sferiche prima di comprimerli in meno dimensioni."]} +{"source": "Deep learning methods have achieved high performance in sound recognition tasks.Deciding how to feed the training data is important for further performance improvement.We propose a novel learning method for deep sound recognition: Between-Class learning (BC learning).Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds.We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio.We then input the mixed sound to the model and train the model to output the mixing ratio.The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes.The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial.Furthermore, we construct a new deep sound recognition network (EnvNet-v2) and train it with BC learning.As a result, we achieved a performance surpasses the human level.", "target": ["Proponiamo un nuovo metodo di deep sound recognition chiamato apprendimento BC.", "Gli autori hanno definito un nuovo task di apprendimento che richiede una DNN per prevedere il rapporto di mistura tra i suoni di due classi diverse per aumentare il potere discriminatorio della rete finale.", "Propone un metodo per migliorare le prestazioni di un metodo di apprendimento generico generando sample di training \"tra le classi\" e presenta l'intuizione di base e la necessità della tecnica proposta."]} +{"source": "Spatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate modeling, traffic prediction, video caching predictions, and so on.While numerous studies have been conducted, most existing works 
assume that the data from different sources or across different locations are equally reliable.Due to cost, accessibility, or other factors, it is inevitable that the data quality could vary, which introduces significant biases into the model and leads to unreliable prediction results.The problem could be exacerbated in black-box prediction models, such as deep neural networks.In this paper, we propose a novel solution that can automatically infer data quality levels of different sources through local variations of spatiotemporal signals without explicit labels.Furthermore, we integrate the estimate of data quality level with graph convolutional networks to exploit their efficient structures.We evaluate our proposed method on forecasting temperatures in Los Angeles.", "target": ["Proponiamo un metodo che deduce il livello di qualità dei dati variabili nel tempo per la previsione spazio-temporale senza label esplicitamente assegnate.", "Introduce una nuova definizione di qualità dei dati che si basa sulla nozione di variazione locale definita in (Zhou e Scholkopf) e la estende a più fonti di dati eterogenee.", "Questo lavoro ha proposto un nuovo modo di valutare la qualità delle diverse fonti di dati con il modello a grafo variabile nel tempo, con il livello di qualità usato come termine di regolarizzazione nella funzione obiettivo"]} +{"source": "Human perception of 3D shapes goes beyond reconstructing them as a set of points or a composition of geometric primitives: we also effortlessly understand higher-level shape structure such as the repetition and reflective symmetry of object parts.In contrast, recent advances in 3D shape sensing focus more on low-level geometry but less on these higher-level relationships.In this paper, we propose 3D shape programs, integrating bottom-up recognition systems with top-down, symbolic program structure to capture both low-level geometry and high-level structural priors for 3D shapes.Because there are no annotations of shape programs for real shapes, we develop neural modules that not only learn to infer 3D shape programs from raw, unannotated shapes, but also to execute these programs for shape reconstruction.After initial bootstrapping, our end-to-end differentiable model learns 3D shape programs by reconstructing shapes in a self-supervised manner.Experiments demonstrate that our model accurately infers and executes 3D shape programs for highly complex shapes from various categories.It can also be integrated with an image-to-shape module to infer 3D shape programs directly from an RGB image, leading to 3D shape reconstructions that are both more accurate and more physically plausible.", "target": ["Proponiamo programmi di 3D shape, una rappresentazione strutturata e composita delle forme. 
Il nostro modello impara a dedurre ed eseguire shape program per spiegare le forme 3D.", "Un approccio per dedurre shape program dati modelli 3D, con un'architettura composta da una rete ricorrente che codifica una forma 3D e fornisce istruzioni, e un secondo modulo che renderizza il programma in 3D.", "Questo articolo introduce una descrizione semantica di alto livello per le forme 3D, data dallo ShapeProgram."]} +{"source": "Deep Reinforcement Learning (Deep RL) has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks.Yet, conventional regularization techniques in training neural networks (e.g., $L_2$ regularization, dropout) have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment.In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks.Interestingly, we find conventional regularization techniques on the policy networks can often bring large improvement on the task performance, and the improvement is typically more significant when the task is more difficult.We also compare with the widely used entropy regularization and find $L_2$ regularization is generally better.Our findings are further confirmed to be robust against the choice of training hyperparameters.We also study the effects of regularizing different components and find that only regularizing the policy network is typically enough.We hope our study provides guidance for future practices in regularizing policy optimization algorithms.", "target": ["Mostriamo che i metodi di regolarizzazione convenzionali (ad esempio, $L_2$, dropout), che sono stati ampiamente ignorati nei metodi RL, possono essere molto efficaci nell'ottimizzazione delle policy.", "Gli autori studiano una serie di metodi di ottimizzazione della policy diretta esistenti nel campo del reinforcement learning e forniscono un'indagine dettagliata sull'effetto dei regolamenti sulle prestazioni e sul comportamento degli agenti che seguono questi metodi.", "Questo articolo fornisce uno studio sull'effetto della regolarizzazione sulle prestazioni in ambienti di training in metodi di ottimizzazione delle policy in task di controllo continuo multipli."]} +{"source": "We introduce FigureQA, a visual reasoning corpus of over one million question-answer pairs grounded in over 100,000 images.The images are synthetic, scientific-style figures from five classes: line plots, dot-line plots, vertical and horizontal bar graphs, and pie charts.We formulate our reasoning task by generating questions from 15 templates; questions concern various relationships between plot elements and examine characteristics like the maximum, the minimum, area-under-the-curve, smoothness, and intersection.To resolve, such questions often require reference to multiple plot elements and synthesis of information distributed spatially throughout a figure.To facilitate the training of machine learning systems, the corpus also includes side data that can be used to formulate auxiliary objectives.In particular, we provide the numerical data used to generate each figure as well as bounding-box annotations for all plot elements.We study the proposed visual reasoning task by training several models, including the recently proposed Relation Network as strong baseline.Preliminary results indicate that the task poses a significant machine learning challenge.We envision FigureQA as a first 
step towards developing models that can intuitively recognize patterns from visual representations of data.", "target": ["Presentiamo un dataset di question-answering, FigureQA, come un primo passo verso lo sviluppo di modelli che possono riconoscere intuitivamente i pattern dalle rappresentazioni visive dei dati.", "Questo articolo introduce un dataset di question-answering con template sulle figure, che coinvolgono il ragionamento sugli elementi delle figure.", "L'articolo introduce un nuovo dataset di visual reasoning chiamato Figure-QA che consiste in 140K immagini di figure e 1.55M di coppie QA, che può aiutare nello sviluppo di modelli che possono estrarre informazioni utili dalle rappresentazioni visive dei dati."]} +{"source": "In this paper, I discuss some varieties of explanation that can arise in intelligent agents.I distinguish between process accounts, which address the detailed decisions made during heuristic search, and preference accounts, which clarify the ordering of alternatives independent of how they were generated.I also hypothesize which types of users will appreciate which types of explanation.In addition, I discuss three facets of multi-step decision making -- conceptual inference, plan generation, and plan execution -- in which explanations can arise.I also consider alternative ways to present questions to agents and for them to provide their answers.", "target": ["Questo position paper analizza diversi tipi di self explanation che possono sorgere nella pianificazione e nei sistemi correlati.", "Discute diversi aspetti delle explanation, in particolare nel contesto del sequential decision making."]} +{"source": "Generative deep learning has sparked a new wave of Super-Resolution (SR) algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details.Multi-frame Super-Resolution (MFSR) offers a more grounded approach to the ill-posed problem, by conditioning on multiple low-resolution views.This is important for satellite monitoring of human impact on the planet -- from deforestation, to human rights violations -- that depend on reliable imagery.To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss.Co-registration of low-res views is learned implicitly through a reference-frame channel, with no explicit registration mechanism.We learn a global fusion operator that is applied recursively on an arbitrary number of low-res pairs.We introduce a registered loss, by learning to align the SR output to a ground-truth through ShiftNet.We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth observation data at scale.Our approach recently topped the European Space Agency's MFSR competition on real-world satellite imagery.", "target": ["Il primo approccio di deep learning a MFSR per risolvere la registrazione, fusione, up-sampling in modo end-to-end.", "Questo articolo propone un algoritmo di super-risoluzione end-to-end multi-frame, che si basa su una co-registrazione a coppie e su fusing block (convolutional residual block), incorporati in una rete di encoder-decoder \"HighRes-net\" che stima l'immagine con super-risoluzione.", "Questo articolo propone un framework che include la fusione ricorsiva alla loss di co-registrazione per risolvere il problema dei risultati con super-risoluzione e delle label ad alta risoluzione
che non sono allineate ai pixel."]} +{"source": "Large mini-batch parallel SGD is commonly used for distributed training of deep networks.Approaches that use tightly-coupled exact distributed averaging based on AllReduce are sensitive to slow nodes and high-latency communication.In this work we show the applicability of Stochastic Gradient Push (SGP) for distributed training.SGP uses a gossip algorithm called PushSum for approximate distributed averaging, allowing for much more loosely coupled communications which can be beneficial in high-latency or high-variability scenarios.The tradeoff is that approximate distributed averaging injects additional noise in the gradient which can affect the train and test accuracies.We prove that SGP converges to a stationary point of smooth, non-convex objective functions.Furthermore, we validate empirically the potential of SGP.For example, using 32 nodes with 8 GPUs per node to train ResNet-50 on ImageNet, where nodes communicate over 10Gbps Ethernet, SGP completes 90 epochs in around 1.5 hours while AllReduce SGD takes over 5 hours, and the top-1 validation accuracy of SGP remains within 1.2% of that obtained using AllReduce SGD.", "target": ["Per il training distribuito su reti ad alta latenza, si usa una media distribuita approssimata gossip-based invece di una media distribuita esatta come AllReduce.", "Gli autori propongono di usare algoritmi di gossip come metodo generale per calcolare approssimativamente la media su un insieme di worker", "L'articolo dimostra la convergenza di SGP per funzioni smooth non convesse e mostra che SGP può raggiungere uno speed-up significativo nell'ambiente a bassa latenza senza sacrificare troppo in prestazioni predittive."]} +{"source": "In this paper, we extend the persona-based sequence-to-sequence (Seq2Seq) neural network conversation model to a multi-turn dialogue scenario by modifying the state-of-the-art hredGAN architecture to simultaneously capture utterance attributes such as speaker identity, dialogue topic, speaker sentiments and so on.The proposed system, phredGAN has a persona-based HRED generator (PHRED) and a conditional discriminator.We also explore two approaches to accomplish the conditional discriminator: (1) $phredGAN_a$, a system that passes the attribute representation as an additional input into a traditional adversarial discriminator, and (2) $phredGAN_d$, a dual discriminator system which in addition to the adversarial discriminator, collaboratively predicts the attribute(s) that generated the input utterance.To demonstrate the superior performance of phredGAN over the persona SeqSeq model, we experiment with two conversational datasets, the Ubuntu Dialogue Corpus (UDC) and TV series transcripts from the Big Bang Theory and Friends.Performance comparison is made with respect to a variety of quantitative measures as well as crowd-sourced human evaluation.We also explore the trade-offs from using either variant of $phredGAN$ on datasets with many but weak attribute modalities (such as with Big Bang Theory and Friends) and ones with few but strong attribute modalities (customer-agent interactions in Ubuntu dataset).", "target": ["Questo articolo sviluppa un framework di adversarial learning per modelli di conversazione neurali con persona", "Questo articolo propone un'estensione di hredGAN per imparare simultaneamente un insieme di embeddings di attributi che rappresentano la persona di ogni speaker e generare risposte basate sulla persona"]} +{"source": "We introduce bio-inspired artificial 
neural networks consisting of neurons that are additionally characterized by spatial positions.To simulate properties of biological systems we add the costs penalizing long connections and the proximity of neurons in a two-dimensional space.Our experiments show that in the case where the network performs two different tasks, the neurons naturally split into clusters, where each cluster is responsible for processing a different task.This behavior not only corresponds to the biological systems, but also allows for further insight into interpretability or continual learning.", "target": ["Le reti neurali artificiali ispirate biologicamente, composte da neuroni posizionati in uno spazio bidimensionale, sono in grado di formare gruppi indipendenti per eseguire diversi task."]} +{"source": "The transformer has become a central model for many NLP tasks from translation to language modeling to representation learning.Its success demonstrates the effectiveness of stacked attention as a replacement for recurrence for many tasks.In theory attention also offers more insights into the model’s internal decisions; however, in practice when stacked it quickly becomes nearly as fully-connected as recurrent models.In this work, we propose an alternative transformer architecture, discrete transformer, with the goal of better separating out internal model decisions.The model uses hard attention to ensure that each step only depends on a fixed context.Additionally, the model uses a separate “syntactic” controller to separate out network structure from decision making.Finally we show that this approach can be further sparsified with direct regularization.Empirically, this approach is able to maintain the same level of performance on several datasets, while discretizing reasoning decisions over the data.", "target": ["Trasformer discreto che usa hard attention per assicurare che ogni step dipenda solo da un contesto fisso.", "Questo articolo presenta delle modifiche all'architettura standard del transformer con l'obiettivo di migliorare l'interpretabilità mantenendo le prestazioni nei task NLP.", "Questo articolo propone tre transformer discreti: un modulo di attention discreto e stocastico basato su Gumbel-softmax, un transformer sintattico e semantico a due flussi e la regolarizzazione della sparsità."]} +{"source": "Deep predictive coding networks are neuroscience-inspired unsupervised learning models that learn to predict future sensory states.We build upon the PredNet implementation by Lotter, Kreiman, and Cox (2016) to investigate if predictive coding representations are useful to predict brain activity in the visual cortex.We use representational similarity analysis (RSA) to compare PredNet representations to functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) data from the Algonauts Project (Cichy et al., 2019).In contrast to previous findings in the literature (Khaligh-Razavi & Kriegeskorte, 2014), we report empirical data suggesting that unsupervised models trained to predict frames of videos without further fine-tuning may outperform supervised image classification baselines in terms of correlation to spatial (fMRI) and temporal (MEG) data.", "target": ["Mostriamo evidenza empirica che i modelli di predictive encoding producono rappresentazioni più correlate ai dati del cervello rispetto ai modelli supervised di riconoscimento delle immagini."]} +{"source": "The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy 
samples.Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest.Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion.We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task.Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations.The method deals with meta-learning (such as domain adaptation, transfer and multi-task learning) in a unified fashion, and can easily deal with data arising from different types of sources.Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network.", "target": ["Un framework generico per gestire il transfer e multi-task learning usando coppie di autoencoder con pesi specifici del task e pesi condivisi.", "Propone un framework generico per il transfer learning end-to-end / domain adaptation con deep neural network.", "Questo articolo propone un modello per permettere alle architetture di deep neural network di condividere i parametri tra diversi dataset, e lo applica al transfer learning.", "L'articolo si concentra sull'apprendimento di feature comuni da dati di domini multipli e finisce con un'architettura generale per il multi-task learning, semi-supervised learning e transfer learning."]} +{"source": "Deep neural networks and decision trees operate on largely separate paradigms; typically, the former performs representation learning with pre-specified architectures, while the latter is characterised by learning hierarchies over pre-specified features with data-driven architectures.We unite the two via adaptive neural trees (ANTs), a model that incorporates representation learning into edges, routing functions and leaf nodes of a decision tree, along with a backpropagation-based training algorithm that adaptively grows the architecture from primitive modules (e.g., convolutional layers).ANTs allow increased interpretability via hierarchical clustering, e.g., learning meaningful class associations, such as separating natural vs. 
man-made objects.We demonstrate this on classification and regression tasks, achieving over 99% and 90% accuracy on the MNIST and CIFAR-10 datasets, and outperforming standard neural networks, random forests and gradient boosted trees on the SARCOS dataset.Furthermore, ANT optimisation naturally adapts the architecture to the size and complexity of the training data.", "target": ["Proponiamo una framework per combinare alberi decisionali e reti neurali, e dimostriamo su task di classificazione di immagini che gode dei benefici complementari dei due approcci, mentre affronta le limitazioni del lavoro precedente.", "Gli autori hanno proposto un nuovo modello, Adaptive Neural Trees, combinando l'apprendimento della rappresentazione e l'ottimizzazione del gradiente delle reti neurali con l'apprendimento dell'architettura degli alberi decisionali", "Questo articolo propone l'approccio Adaptive Neural Trees per combinare i due paradigmi di apprendimento delle deep neural network e degli alberi decisionali"]} +{"source": "While natural language processing systems often focus on a single language, multilingual transfer learning has the potential to improve performance, especially for low-resource languages. We introduce XLDA, cross-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language.XLDA enhances performance of all 14 tested languages of the cross-lingual natural language inference (XNLI) benchmark.With improvements of up to 4.8, training with XLDA achieves state-of-the-art performance for Greek, Turkish, and Urdu.XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language.On the SQuAD question answering task, we see that XLDA provides a 1.0 performance increase on the English evaluation set.Comprehensive experiments suggest that most languages are effective as cross-lingual augmentors, that XLDA is robust to a wide range of translation quality, and that XLDA is even more effective for randomly initialized models than for pretrained models.", "target": ["La traduzione di porzioni dell'input durante l'training può migliorare le prestazioni multilingue.", "L'articolo propone un metodo di augmentation dei dati multilingue per migliorare l'inferenza linguistica e i task di question answering.", "Questo articolo propone di aumentare i dati crosslinguali con scambi euristici usando traduzioni allineate, come fanno gli umani bilingui nel code-switching."]} +{"source": "Training conditional generative latent-variable models is challenging in scenarios where the conditioning signal is very strong and the decoder is expressive enough to generate a plausible output given only the condition; the generative model tends to ignore the latent variable, suffering from posterior collapse. We find, and empirically show, that one of the major reasons behind posterior collapse is rooted in the way that generative models are conditioned, i.e., through concatenation of the latent variable and the condition. To mitigate this problem, we propose to explicitly make the latent variables depend on the condition by unifying the conditioning and latent variable sampling, thus coupling them so as to prevent the model from discarding the root of variations. 
To achieve this, we develop a conditional Variational Autoencoder architecture that learns a distribution not only of the latent variables, but also of the condition, the latter acting as prior on the former. Our experiments on the challenging tasks of conditional human motion prediction and image captioning demonstrate the effectiveness of our approach at avoiding posterior collapse. Video results of our approach are anonymously provided in http://bit.ly/iclr2020", "target": ["Proponiamo un framework di autoencoder condizionale variazionale che mitiga il posterior collapse negli scenari in cui il segnale di condizionamento è abbastanza forte per un decoder espressivo per generare da esso un output plausibile.", "Questo articolo considera i modelli generativi fortemente condizionati, e propone una funzione obiettivo e una parametrizzazione della distribuzione variazionale tale che le variabili latenti dipendano esplicitamente dalle condizioni di input.", "Questo articolo sostiene che quando il decoder è condizionato dalla concatenazione di variabili latenti e informazioni ausiliarie, allora il collasso posteriore è più probabile che nella VAE standard."]} +{"source": "We propose a study of the stability of several few-shot learning algorithms subject to variations in the hyper-parameters and optimization schemes while controlling the random seed. We propose a methodology for testing for statistical differences in model performances under several replications.To study this specific design, we attempt to reproduce results from three prominent papers: Matching Nets, Prototypical Networks, and TADAM.We analyze on the miniImagenet dataset on the standard classification task in the 5-ways, 5-shots learning setting at test time.We find that the selected implementations exhibit stability across random seed, and repeats.", "target": ["Proponiamo uno studio della stabilità di diversi algoritmi di few-shot learning soggetti a variazioni negli iper-parametri e negli schemi di ottimizzazione mentre controlliamo il random seed.", "Questo articolo studia la riproducibilità per few-shot learning"]} +{"source": "We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning.In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach.Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial.To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation.We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice.Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods.", "target": ["Traduciamo un bound sulla sub-ottimalità delle rappresentazioni in un obiettivo pratico di training nel contesto del reinforcement learning gerarchico.", "Gli autori propongono un nuovo approccio nell'apprendimento di una rappresentazione per HRL e dichiarano un'intrigante connessione tra representation learning e il contenimento della sub-ottimalità che risulta in un algoritmo basato sul gradiente", "Questo articolo propone un modo per gestire la 
sub-ottimalità nel contesto del representaton learning che si riferiscono alla sub-ottimalità della policy gerarchica rispetto alla reward del task."]} +{"source": "Heuristic search research often deals with finding algorithms for offline planning which aim to minimize the number of expanded nodes or planning time.In online planning, algorithms for real-time search or deadline-aware search have been considered before.However, in this paper, we are interested in the problem of {\\em situated temporal planning} in which an agent's plan can depend on exogenous events in the external world, and thus it becomes important to take the passage of time into account during the planning process. Previous work on situated temporal planning has proposed simple pruning strategies, as well as complex schemes for a simplified version of the associated metareasoning problem. In this paper, we propose a simple metareasoning technique, called the crude greedy scheme, which can be applied in a situated temporal planner.Our empirical evaluation shows that the crude greedy scheme outperforms standard heuristic search based on cost-to-go estimates.", "target": ["Metareasoning in un Situated Temporal Planner", "Questo articolo affronta il problema del Situated Temporal Planning, proponendo un'ulteriore semplificazione sulle strategie greedy precedentemente proposte da Shperberg."]} +{"source": "Neural networks are vulnerable to small adversarial perturbations.Existing literature largely focused on understanding and mitigating the vulnerability of learned models.In this paper, we demonstrate an intriguing phenomenon about the most popular robust training method in the literature, adversarial training: Adversarial robustness, unlike clean accuracy, is sensitive to the input data distribution.Even a semantics-preserving transformations on the input data distribution can cause a significantly different robustness for the adversarial trained model that is both trained and evaluated on the new distribution.Our discovery of such sensitivity on data distribution is based on a study which disentangles the behaviors of clean accuracy and robust accuracy of the Bayes classifier.Empirical investigations further confirm our finding.We construct semantically-identical variants for MNIST and CIFAR10 respectively, and show that standardly trained models achieve comparable clean accuracies on them, but adversarially trained models achieve significantly different robustness accuracies.This counter-intuitive phenomenon indicates that input data distribution alone can affect the adversarial robustness of trained neural networks, not necessarily the tasks themselves.Lastly, we discuss the practical implications on evaluating adversarial robustness, and make initial attempts to understand this complex phenomenon.", "target": ["Le prestazioni di robustezza dei modelli trained da PGD sono sensibili alle trasformazioni semantics-preserving dei dataset di immagini, il che implica la difficoltà nella pratica della valutazione degli algoritmi di learning robusto.", "Il documento chiarisce la differenza tra clean e robust accuracy e mostra che cambiare la distribuzione marginale dei dati di input P(x) conservando la sua semantica P(y|x) influisce sulla robustezza del modello.", "Questo articolo indaga l'origine della mancanza di robustezza dei classificatori alle perturbazioni degli adversarial input sotto perturbazioni l-inf bounded."]} +{"source": "Many tasks in natural language processing involve comparing two sentences to compute some 
notion of relevance, entailment, or similarity.Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the inherent structure of the sentence.When sentence structure is used for comparison, it is obtained during a non-differentiable pre-processing step, leading to propagation of errors.We introduce a model of structured alignments between sentences, showing how to compare two sentences by matching their latent structures.Using a structured attention mechanism, our model matches possible spans in the first sentence to possible spans in the second sentence, simultaneously discovering the tree structure of each sentence and performing a comparison, in a model that is fully differentiable and is trained only on the comparison objective.We evaluate this model on two sentence comparison tasks: the Stanford natural language inference dataset and the TREC-QA dataset.We find that comparing spans results in superior performance to comparing words individually, and that the learned trees are consistent with actual linguistic structures.", "target": ["Corrispondenza delle frasi tramite l'apprendimento delle strutture latenti dell'albero dei costituenti con una variante dell'algoritmo inside-outside incorporato come layer della rete neurale.", "Questo articolo introduce un meccanismo di attention strutturato per calcolare i punteggi di allineamento tra tutti i possibili span in due frasi date", "Questo articolo propone un modello di allineamenti strutturati tra le frasi come mezzo per confrontare le frasi abbinando le loro strutture latenti."]} +{"source": "Learning disentangled representation from any unlabelled data is a non-trivial problem.In this paper we propose Information Maximising Autoencoder (InfoAE) where the encoder learns powerful disentangled representation through maximizing the mutual information between the representation and given information in an unsupervised fashion.We have evaluated our model on MNIST dataset and achieved approximately 98.9 % test accuracy while using complete unsupervised training.", "target": ["Imparare le rappresentazioni disaccoppiate in modo unsupervised.", "Gli autori presentano un framework in cui un encoder automatico (E, D) è regolarizzato in modo che la sua rappresentazione latente condivida informazioni reciproche con una rappresentazione generata dello spazio latente."]} +{"source": "Effective training of neural networks requires much data.In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly.Data Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively.However standard data augmentation produces only limited plausible alternative data.Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation.The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items.As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data.We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well.We also show a DAGAN can enhance few-shot learning systems such as Matching Networks.We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data.In our experiments we can see over 13% increase
in accuracy in the low-data regime experiments in Omniglot (from 69% to 82%), EMNIST (73.9% to 76%) and VGG-Face (4.5% to 12%); in Matching Networks for Omniglot we observe an increase of 0.5% (from 96.9% to 97.4%) and an increase of 1.8% in EMNIST (from 59.5% to 61.3%).", "target": ["Le GAN condizionali addestrate per generare sample di dati arricchite dei loro input condizionali utilizzati per migliorare la classificazione standard e i sistemi di apprendimento one shot come le matching network e la distanza tra pixel", "Gli autori propongono un metodo per condurre data augmentation in cui le trasformazioni tra classi sono mappate in uno spazio latente a bassa dimensione utilizzando GAN condizionali"]} +{"source": "Answering questions about data can require understanding what parts of an input X influence the response Y. Finding such an understanding can be built by testing relationships between variables through a machine learning model.For example, conditional randomization tests help determine whether a variable relates to the response given the rest of the variables.However, randomization tests require users to specify test statistics.We formalize a class of proper test statistics that are guaranteed to select a feature when it provides information about the response even when the rest of the features are known.We show that f-divergences provide a broad class of proper test statistics.In the class of f-divergences, the KL-divergence yields an easy-to-compute proper test statistic that relates to the AMI.Questions of feature importance can be asked at the level of an individual sample. We show that estimators from the same AMI test can also be used to find important features in a particular instance.We provide an example to show that perfect predictive models are insufficient for instance-wise feature selection.We evaluate our method on several simulation experiments, on a genomic dataset, a clinical dataset for hospital readmission, and on a subset of classes in ImageNet.Our method outperforms several baselines in various simulated datasets, is able to identify biologically significant genes, can select the most important predictors of a hospital readmission event, and is able to identify distinguishing features in an image-classification task.", "target": ["Sviluppiamo un semplice metodo di selezione delle feature basato sulla regressione model-agnostic per interpretare i processi di generazione dei dati con il controllo FDR, e superiamo diverse baseline popolari su diversi dataset simulati, di ambito medico e di immagini.", "Questo articolo propone un miglioramento pratico del test di randomizzazione condizionale e una nuova statistica di test, dimostra che la f-divergenza è una scelta possibile, e mostra che la divergenza KL annulla alcune distribuzioni condizionali.", "Questo articolo affronta il problema di trovare feature utili in un input che sono dipendenti da una variabile di risposta anche quando si condizionano tutte le altre variabili di input.", "Un metodo agnostico per fornire un'interpretazione dell'influenza delle feature di input sulla risposta di un modello dal livello di macchina fino al livello di istanza, e statistiche di test adeguate per la selezione delle feature agnostiche del modello."]} +{"source": "Supervised learning depends on annotated examples, which are taken to be the ground truth.But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk.Practitioners typically collect multiple labels per example and aggregate the
results to mitigate noise (the classic crowdsourcing problem).Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples.This raises two fundamental questions: (1) How can we best learn from noisy workers?(2) How should we allocate our labeling budget to maximize the performance of a classifier?We propose a new algorithm for jointly modeling labels and worker quality from noisy crowd-sourced data.The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality.Unlike previous approaches, even with only one annotation per example, our algorithm can estimate worker quality.We establish a generalization error bound for models learned with our algorithm and establish theoretically that it's better to label many examples once (vs less multiply) when worker quality exceeds a threshold.Experiments conducted on both ImageNet (with simulated noisy workers) and MS-COCO (using the real crowdsourced labels) confirm our algorithm's benefits.", "target": ["Un nuovo approccio per l'apprendimento di un modello da annotazioni rumorose in crowdsourcing.", "Questo articolo propone un metodo per l'apprendimento da label rumorose, concentrandosi sul caso in cui i dati non sono annotati in modo ridondante con una validazione teorica e sperimentale", "Questo articolo si concentra sul learning-from-crowds problem, dove l'aggiornamento congiunto dei pesi del classificatore e delle matrici di confusione dei worker può aiutare nel problema della stima con label rare in crowdsourcing.", "Propone un algoritmo di apprendimento supervised per modellare la qualità delle label e dei lavoratori e utilizza l'algoritmo per studiare quanta ridondanza è necessaria nel crowdsourcing e se una bassa ridondanza con abbondanti esempi di rumore porta a label migliori."]} +{"source": "Neural networks make mistakes.The reason why a mistake is made often remains a mystery.As such neural networks often are considered a black box.It would be useful to have a method that can give an explanation that is intuitive to a user as to why an image is misclassified.In this paper we develop a method for explaining the mistakes of a classifier model by visually showing what must be added to an image such that it is correctly classified.Our work combines the fields of adversarial examples, generative modeling and a correction technique based on difference target propagation to create an technique that creates explanations of why an image is misclassified.In this paper we explain our method and demonstrate it on MNIST and CelebA.This approach could aid in demystifying neural networks for a user.", "target": ["Nuovo modo per spiegare perché una rete neurale ha classificato male un'immagine", "Questo articolo propone un metodo per spiegare gli errori di classificazione delle reti neurali.", "Mira a comprendere meglio la classificazione delle reti neurali ed esplora lo spazio latente di un autoencoder variazionale e considera le perturbazioni dello spazio latente per ottenere la classificazione corretta."]} +{"source": "In the context of multi-task learning, neural networks with branched architectures have often been employed to jointly tackle the tasks at hand.Such ramified networks typically start with a number of shared layers, after which different tasks branch out into their own sequence of layers.Understandably, as 
the number of possible network configurations is combinatorially large, deciding what layers to share and where to branch out becomes cumbersome.Prior works have either relied on ad hoc methods to determine the level of layer sharing, which is suboptimal, or utilized neural architecture search techniques to establish the network design, which is considerably expensive.In this paper, we go beyond these limitations and propose a principled approach to automatically construct branched multi-task networks, by leveraging the employed tasks' affinities.Given a specific budget, i.e. number of learnable parameters, the proposed approach generates architectures, in which shallow layers are task-agnostic, whereas deeper ones gradually grow more task-specific.Extensive experimental analysis across numerous, diverse multi-tasking datasets shows that, for a given budget, our method consistently yields networks with the highest performance, while for a certain performance threshold it requires the least amount of learnable parameters.", "target": ["Un metodo per la costruzione automatica di reti multi-task ramificate con una forte valutazione sperimentale su diversi dataset multi-tasking.", "Questo articolo propone un nuovo framework di apprendimento multi-task con condivisione soft dei parametri basato su un framework ad albero.", "Questo articolo presenta un metodo per dedurre l'architettura delle reti multi-task per determinare quale parte della rete dovrebbe essere condivisa tra diversi task."]} +{"source": "Typical recent neural network designs are primarily convolutional layers, but the tricks enabling structured efficient linear layers (SELLs) have not yet been adapted to the convolutional setting.We present a method to express the weight tensor in a convolutional layer using diagonal matrices, discrete cosine transforms (DCTs) and permutations that can be optimised using standard stochastic gradient methods.A network composed of such structured efficient convolutional layers (SECL) outperforms existing low-rank networks and demonstrates competitive computational efficiency.", "target": ["È possibile sostituire la matrice dei pesi in un layer convoluzionale per addestrarlo come un layer efficiente strutturato; con le stesse prestazioni della decomposizione low-rank.", "Questo lavoro applica i precedenti Structured Efficient Linear Layer ai conv layer e propone Structured Efficient Convolutional Layer come sostituzione dei conv layer originali."]} +{"source": "Blind document deblurring is a fundamental task in the field of document processing and restoration, having wide enhancement applications in optical character recognition systems, forensics, etc.Since this problem is highly ill-posed, supervised and unsupervised learning methods are well suited for this application.Using various techniques, extensive work has been done on natural-scene deblurring.However, these extracted features are not suitable for document images.We present SVDocNet, an end-to-end trainable U-Net based spatial recurrent neural network (RNN) for blind document deblurring where the weights of the RNNs are determined by different convolutional neural networks (CNNs).This network achieves state of the art performance in terms of both quantitative measures and qualitative results.", "target": ["Presentiamo SVDocNet, una rete neurale ricorrente spaziale (RNN) addestrabile end-to-end basata su U-Net per blind document deblurring."]} +{"source": "In contrast to the monolithic deep architectures used in deep learning today for 
computer vision, the visual cortex processes retinal images via two functionally distinct but interconnected networks: the ventral pathway for processing object-related information and the dorsal pathway for processing motion and transformations.Inspired by this cortical division of labor and properties of the magno- and parvocellular systems, we explore an unsupervised approach to feature learning that jointly learns object features and their transformations from natural videos.We propose a new convolutional bilinear sparse coding model that (1) allows independent feature transformations and (2) is capable of processing large images.Our learning procedure leverages smooth motion in natural videos.Our results show that our model can learn groups of features and their transformations directly from natural videos in a completely unsupervised manner.The learned \"dynamic filters\" exhibit certain equivariance properties, resemble cortical spatiotemporal filters, and capture the statistics of transitions between video frames.Our model can be viewed as one of the first approaches to demonstrate unsupervised learning of primary \"capsules\" (proposed by Hinton and colleagues for supervised learning) and has strong connections to the Lie group approach to visual perception.", "target": ["Estendiamo lo sparse coding bilineare e sfruttiamo le sequenze video per imparare i filtri dinamici."]} +{"source": "Conventional out-of-distribution (OOD) detection schemes based on variational autoencoder or Random Network Distillation (RND) are known to assign lower uncertainty to the OOD data than the target distribution.In this work, we discover that such conventional novelty detection schemes are also vulnerable to the blurred images.Based on the observation, we construct a novel RND-based OOD detector, SVD-RND, that utilizes blurred images during training.Our detector is simple, efficient in test time, and outperforms baseline OOD detectors in various domains.Further results show that SVD-RND learns a better target distribution representation than the baselines.Finally, SVD-RND combined with geometric transform achieves near-perfect detection accuracy in CelebA domain.", "target": ["Proponiamo un nuovo rilevatore di OOD che utilizza immagini sfocate come adversarial sample. 
Il nostro modello raggiunge prestazioni significative di rilevamento OOD in vari domini.", "Questo articolo presenta l'idea di utilizzare immagini sfocate come esempi di regolarizzazione per migliorare le prestazioni di rilevamento di fuori della distribuzione basate su Random Network Distillation.", "Questo articolo affronta la out-of-data distribution sfruttando la RND applicata alla data augmentation, addestrando un modello per abbinare gli output di una rete casuale con una augmentation come input."]} +{"source": "Training large deep neural networks on massive datasets is  computationally very challenging.There has been recent surge in interest in using large batch stochastic optimization methods to tackle this issue.The most prominent algorithm in this line of research is LARS, which by  employing layerwise adaptive learning rates trains ResNet on ImageNet in a few minutes.However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks.In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches.Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings.Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning.In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance.  By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1).", "target": ["Un ottimizzatore veloce per applicazioni generali e per il training con grandi batch size.", "In questo articolo, gli autori hanno fatto uno studio sul training con grandi batch size per BERT, e hanno addestrato con successo un modello BERT in 76 minuti.", "Questo articolo sviluppa una strategia di layerwise adaptation che permette di addestrare i modelli BERT con grandi mini-batch da 32k contro i 512 di base."]} +{"source": "Model-agnostic meta-learning (MAML) is known as a powerful meta-learning method.However, MAML is notorious for being hard to train because of the existence of two learning rates.Therefore, in this paper, we derive the conditions that inner learning rate $\\alpha$ and meta-learning rate $\\beta$ must satisfy for MAML to converge to minima with some simplifications.We find that the upper bound of $\\beta$ depends on $ \\alpha$, in contrast to the case of using the normal gradient descent method.Moreover, we show that the threshold of $\\beta$ increases as $\\alpha$ approaches its own upper bound.This result is verified by experiments on various few-shot tasks and architectures; specifically, we perform sinusoid regression and classification of Omniglot and MiniImagenet datasets with a multilayer perceptron and a convolutional neural network.Based on this outcome, we present a guideline for determining the learning rates: first, search for the largest possible $\\alpha$; next, tune $\\beta$ based on the chosen value of $\\alpha$.", "target": ["Abbiamo analizzato il ruolo di due learning rate nel meta learning model-agnostic nella convergenza.", "Gli autori hanno affrontato il problema dell'instabilità dell'ottimizzazione in MAML studiando i due 
learning rate.", "Questo articolo studia un metodo per aiutare a sintonizzare i due learning rate utilizzati nell'algoritmo di training MAML."]} +{"source": "We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object (SVO) structures.Our model induces a joint function-specific word vector space, where vectors of e.g. plausible SVO compositions lie close together.The model retains information about word group membership even in the joint space, and can thereby effectively be applied to a number of tasks reasoning over the SVO structure.We show the robustness and versatility of the proposed framework by reporting state-of-the-art results on the tasks of estimating selectional preference (i.e., thematic fit) and event similarity.The results indicate that the combinations of representations learned with our task-independent model outperform task-specific architectures from prior work, while reducing the number of parameters by up to 95%.The proposed framework is versatile and holds promise to support learning function-specific representations beyond the SVO structures.", "target": ["Modello neurale indipendente dal task per l'apprendimento di associazioni tra gruppi di parole correlate.", "L'articolo ha proposto un metodo per addestrare word vector specifici per una funzione, in cui ogni parola è rappresentata con tre vettori ciascuno in una categoria diversa (Soggetto-Verbo-Oggetto).", "Questo articolo propone una rete neurale per imparare work representation specifiche per una funzione e dimostra il vantaggio rispetto alle alternative."]} +{"source": "The fabrication of semiconductor involves etching process to remove selected areas from wafers.However, the measurement of etched structure in micro-graph heavily relies on time-consuming manual routines.Traditional image processing usually demands on large number of annotated data and the performance is still poor.We treat this challenge as segmentation problem and use deep learning approach to detect masks of objects in etched structure of wafer.Then, we use simple image processing to carry out automatic measurement on the objects.We attempt Generative Adversarial Network (GAN) to generate more data to overcome the problem of very limited dataset.We download 10 SEM (Scanning Electron Microscope) images of 4 types from Internet, based on which we carry out our experiments.Our deep learning based method demonstrates superiority over image processing approach with mean accuracy reaching over 96% for the measurements, compared with the ground truth.To the best of our knowledge, it is the first time that deep learning has been applied in semiconductor industry for automatic measurement.", "target": ["Utilizzo del metodo di deep learning per effettuare la misurazione automatica delle immagini SEM nell'industria dei semiconduttori"]} +{"source": "Generating and scheduling activities is particularly challengingwhen considering both consumptive resources andcomplex resource interactions such as time-dependent resourceusage.We present three methods of determining validtemporal placement intervals for an activity in a temporallygrounded plan in the presence of such constraints.We introducethe Max Duration and Probe algorithms which aresound, but incomplete, and the Linear algorithm which issound and complete for linear rate resource consumption.We apply these techniques to the problem of schedulingawakes for a planetary rover where the awake durationsare affected by existing 
activities.We demonstrate how theProbe algorithm performs competitively with the Linear algorithmgiven an advantageous problem space and well-definedheuristics.We show that the Probe and Linear algorithmsoutperform the Max Duration algorithm empirically.We then empirically present the runtime differences betweenthe three algorithms.The Probe algorithm is currently base-linedfor use in the onboard scheduler for NASA’s next planetaryrover, the Mars 2020 rover.", "target": ["Questo articolo descrive e analizza tre metodi per programmare attività a durata non fissa in presenza di risorse consumabili.", "L'articolo presenta tre approcci per la programmazione di attività a bordo di un rover planetario sotto vincoli di risorse di riserva."]} +{"source": "A disentangled representation of a data set should be capable of recovering the underlying factors that generated it.One question that arises is whether using Euclidean space for latent variable models can produce a disentangled representation when the underlying generating factors have a certain geometrical structure.Take for example the images of a car seen from different angles.The angle has a periodic structure but a 1-dimensional representation would fail to capture this topology.How can we address this problem?The submissions presented for the first stage of the NeurIPS2019 Disentanglement Challenge consist of a Diffusion Variational Autoencoder ($\\Delta$VAE) with a hyperspherical latent space which can for example recover periodic true factors.The training of the $\\Delta$VAE is enhanced by incorporating a modified version of the Evidence Lower Bound (ELBO) for tailoring the encoding capacity of the posterior approximate.", "target": ["Descrizione della submission a NeurIPS2019 Disentanglement Challenge basata su autoencoder ipersferici variazionali"]} +{"source": "Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence.Recently, classification-based methods were shown to achieve superior results on this task.In this work, we present a unifying view and propose an open-set method to relax current generalization assumptions.Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations.Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types.The strong performance of our method is extensively validated on multiple datasets from different domains.", "target": ["Un rilevamento delle anomalie che utilizza la classificazione a trasformazione casuale per generalizzare ai dati che non sono immagini.", "Questo articolo propone un metodo deep per il rilevamento delle anomalie che unifica la recente classificazione deep one class e gli approcci di classificazione basati su trasformazioni.", "Questo articolo propone un approccio al rilevamento delle anomalie basato sulla classificazione per dati generali utilizzando la trasformazione affine y = Wx+b."]} +{"source": "Recent improvements in large-scale language models have driven progress on automatic generation of syntactically and semantically consistent text for many real-world applications.Many of these advances leverage the availability of large corpora.While training on such corpora encourages the model to understand long-range dependencies in text, it can also result in the models internalizing the social biases present in the corpora.This paper aims to quantify and reduce biases exhibited by 
language models.Given a conditioning context (e.g. a writing prompt) and a language model, we analyze if (and how) the sentiment of the generated text is affected by changes in values of sensitive attributes (e.g. country names, occupations, genders, etc.) in the conditioning context, a.k.a. counterfactual evaluation.We quantify these biases by adapting individual and group fairness metrics from the fair machine learning literature.Extensive evaluation on two different corpora (news articles and Wikipedia) shows that state-of-the-art Transformer-based language models exhibit biases learned from data.We propose embedding-similarity and sentiment-similarity regularization methods that improve both individual and group fairness metrics without sacrificing perplexity and semantic similarity---a positive step toward development and deployment of fairer language models for real-world applications.", "target": ["Riduciamo i sentiment bias basandoci sulla valutazione controfattuale della text generation utilizzando language model.", "Questo articolo misura i sentiment bias nei language model come riflesso del testo generato dai modelli, e aggiunge altri termini al solito obiettivo di language modelling per ridurre i bias.", "Questo articolo propone di valutare i bias nei language model pre-trained utilizzando un sistema di fixed sentiment e prova diversi modelli di prefisso.", "Un metodo basato sulla somiglianza semantica e un metodo basato sulla somiglianza del sentimento per il debiasing dei language model addestrati su grandi dataset."]} +{"source": "Topic modeling of text documents is one of the most important tasks in representation learning.In this work, we propose iTM-VAE, which is a Bayesian nonparametric (BNP) topic model with variational auto-encoders.On one hand, as a BNP topic model, iTM-VAE potentially has infinite topics and can adapt the topic number to data automatically.On the other hand, different with the other BNP topic models, the inference of iTM-VAE is modeled by neural networks, which has rich representation capacity and can be computed in a simple feed-forward manner.Two variants of iTM-VAE are also proposed in this paper, where iTM-VAE-Prod models the generative process in products-of-experts fashion for better performance and iTM-VAE-G places a prior over the concentration parameter such that the model can adapt a suitable concentration parameter to data automatically.Experimental results on 20News and Reuters RCV1-V2 datasets show that the proposed models outperform the state-of-the-arts in terms of perplexity, topic coherence and document retrieval tasks.Moreover, the ability of adjusting the concentration parameter to data is also confirmed by experiments.", "target": ["Un modello nonparametrico bayesiano di topic con autoencoder variazionali che raggiunge lo stato dell'arte su benchmark pubblici in termini di perplexity, coerenza di argomenti e task di retrieval.", "Questo articolo costruisce un modello infinito di topic con autoencoder variazionali combinando l'autoencoder variazionale di Nalisnick & Smith con l'allocazione latente di Dirichlet e diverse tecniche di inferenza usate in Miao."]} +{"source": "Knowledge Distillation (KD) is a widely used technique in recent deep learning research to obtain small and simple models whose performance is on a par with their large and complex counterparts.Standard Knowledge Distillation tends to be time-consuming because of the training time spent to obtain a teacher model that would then provide guidance for the student 
model.It might be possible to cut short the time by training a teacher model on the fly, but it is not trivial to have such a high-capacity teacher that gives quality guidance to student models this way.To improve this, we present a novel framework of Knowledge Distillation exploiting dark knowledge from the whole training set.In this framework, we propose a simple and effective implementation named Distillation by Utilizing Peer Samples (DUPS) in one generation.We verify our algorithm on numerous experiments.Compared with standard training on modern architectures, DUPS achieves an average improvement of 1%-2% on various tasks with nearly zero extra cost.Considering some typical Knowledge Distillation methods which are much more time-consuming, we also get comparable or even better performance using DUPS.", "target": ["Presentiamo un nuovo framework di Knowledge Distillation che utilizza peer sample come teacher", "Propone un metodo per migliorare l'efficacia della knowledge distillation rendendo soft le label utilizzate e utilizzando un dataset invece di un singolo sample.", "Questo articolo propone di affrontare il costo computazionale extra del training con la knowledge distillation, basandosi sulla tecnica Snapshot Distillation recentemente proposta."]} +{"source": "We develop a metalearning approach for learning hierarchically structured poli- cies, improving sample efficiency on unseen tasks through the use of shared primitives—policies that are executed for large numbers of timesteps.Specifi- cally, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies.We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks.We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies.We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes.We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.", "target": ["imparare le sotto-policy gerarchiche attraverso il training end-to-end su una distribuzione di task", "Gli autori considerano il problema dell'apprendimento di un utile insieme di \"policy secondarie\" che possono essere condivise tra i task in modo da avviare l'apprendimento su nuovi task tratti dalla distribuzione dei task. 
", "Questo articolo propone un nuovo metodo per indurre un framework gerarchico temporale in un ambiente specializzato multi-task."]} +{"source": "This paper proposes a new model for document embedding.Existing approaches either require complex inference or use recurrent neural networks that are difficult to parallelize.We take a different route and use recent advances in language modeling to develop a convolutional neural network embedding model.This allows us to train deeper architectures that are fully parallelizable.Stacking layers together increases the receptive filed allowing each successive layer to model increasingly longer range semantic dependences within the document.Empirically we demonstrate superior results on two publicly available benchmarks.Full code will be released with the final version of this paper.", "target": ["Modello di rete neurale convoluzionale per document embedding unsupervised.", "Introduce un nuovo modello per il task generale di indurre rappresentazioni di documenti (embeddings) che utilizza un'architettura CNN per migliorare l'efficienza computazionale.", "Questo articolo propone di usare CNN con un obiettivo simile a skip-gram come un modo veloce per produrre embedding di documenti"]} +{"source": "We prove bounds on the generalization error of convolutional networks.The bounds are in terms of the training loss, the number ofparameters, the Lipschitz constant of the loss and the distance fromthe weights to the initial weights. They are independent of thenumber of pixels in the input, and the height and width of hiddenfeature maps. We present experiments with CIFAR-10, along with varyinghyperparameters of a deep convolutional network, comparing our boundswith practical generalization gaps.", "target": ["Dimostriamo i limiti di generalizzazione per le reti neurali convoluzionali che tengono conto del weight-tying", "Studia il potere di generalizzazione delle CNN e migliora i limiti superiori degli errori di generalizzazione, mostrando la correlazione tra l'errore di generalizzazione delle CNN apprese e il termine dominante del limite superiore.", "Questo articolo presenta un limite di generalizzazione per le reti neurali convoluzionali basato sul numero di parametri, la costante Lipschitz e la distanza dei pesi finali dall'inizializzazione."]} +{"source": "MobileNets family of computer vision neural networks have fueled tremendous progress in the design and organization of resource-efficient architectures in recent years.New applications with stringent real-time requirements in highly constrained devices require further compression of MobileNets-like already computeefficient networks.Model quantization is a widely used technique to compress and accelerate neural network inference and prior works have quantized MobileNets to 4 − 6 bits albeit with a modest to significant drop in accuracy.While quantization to sub-byte values (i.e. 
precision ≤ 8 bits) has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs.Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization, we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets.The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets.Using this proposed quantization method, we quantized a substantial portion of weight filters of MobileNets to ternary values resulting in 27.98% savings in energy, and a 51.07% reduction in the model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.", "target": ["Un risparmio di un fattore 2 nella dimensione del modello, riduzione del 28% dell'energia per MobileNet su ImageNet senza perdita di precisione utilizzando layer ibridi composti da filtri convenzionali a piena precisione e filtri ternari", "Si concentra sulla quantizzazione dell'architettura MobileNet a valori ternari, abbassando lo spazio richiesto e il calcolo per rendere le reti neurali più efficienti dal punto di vista energetico.", "L'articolo propone un banco di filtri ibrido a layer che quantizza a valori ternari solo una frazione dei filtri convoluzionali verso l'architettura MobileNet."]} +{"source": "Performing controlled experiments on noisy data is essential in thoroughly understanding deep learning across a spectrum of noise levels.Due to the lack of suitable datasets, previous research have only examined deep learning on controlled synthetic noise, and real-world noise has never been systematically studied in a controlled setting.To this end, this paper establishes a benchmark of real-world noisy labels at 10 controlled noise levels.As real-world noise possesses unique properties, to understand the difference, we conduct a large-scale study across a variety of noise levels and types, architectures, methods, and training settings.Our study shows that: (1) Deep Neural Networks (DNNs) generalize much better on real-world noise.(2) DNNs may not learn patterns first on real-world noisy data.(3) When networks are fine-tuned, ImageNet architectures generalize well on noisy data.(4) Real-world noise appears to be less harmful, yet it is more difficult for robust DNN methods to improve.(5) Robust learning methods that work well on synthetic noise may not work as well on real-world noise, and vice versa.We hope our benchmark, as well as our findings, will facilitate deep learning research on noisy data.", "target": ["Stabiliamo un benchmark di rumore reale controllato e riveliamo diversi risultati interessanti sui dati rumorosi reali.", "Questo articolo confronta 6 metodi esistenti per l'apprendimento di label rumorose in due setting di training: da zero e finetuning.", "Gli autori stabiliscono un grande dataset e un benchmark di rumore reale controllato per eseguire esperimenti controllati su dati rumorosi nel deep learning."]} +{"source": "Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in 
regulatory processes for transcription, epigenetics and translation.Since an RNA's function depends on its structural properties, the RNA Design problem is to find an RNA sequence which satisfies given structural constraints.Here, we propose a new algorithm for the RNA Design problem, dubbed LEARNA.LEARNA uses deep reinforcement learning to train a policy network to sequentially design an entire RNA sequence given a specified target structure.By meta-learning across 65000 different RNA Design tasks for one hour on 20 CPU cores, our extension Meta-LEARNA constructs an RNA Design policy that can be applied out of the box to solve novel RNA Design tasks.Methodologically, for what we believe to be the first time, we jointly optimize over a rich space of architectures for the policy network, the hyperparameters of the training procedure and the formulation of the decision process.Comprehensive empirical results on two widely-used RNA Design benchmarks, as well as a third one that we introduce, show that our approach achieves new state-of-the-art performance on the former while also being orders of magnitudes faster in reaching the previous state-of-the-art performance.In an ablation study, we analyze the importance of our method's different components.", "target": ["Impariamo a risolvere il problema RNA Design con il reinforcement learning usando approcci di meta learning e autoML.", "Ha utilizzato policy gradient optimization per generare sequenze di RNA che si ripiegano in un framework target secondario, ottenendo chiari miglioramenti in termini di precisione e tempo di esecuzione."]} +{"source": "Pruning is a popular technique for compressing a neural network: a large pre-trained network is fine-tuned while connections are successively removed.However, the value of pruning has largely evaded scrutiny.In this extended abstract, we examine residual networks obtained through Fisher-pruning and make two interesting observations.First, when time-constrained, it is better to train a simple, smaller network from scratch than prune a large network.Second, it is the architectures obtained through the pruning process --- not the learnt weights --- that prove valuable.Such architectures are powerful when trained from scratch.Furthermore, these architectures are easy to approximate without any further pruning: we can prune once and obtain a family of new, scalable network architectures for different memory requirements.", "target": ["Addestrare piccole reti batte il pruning, ma il pruning trova buone piccole reti da addestrare che sono facili da copiare."]} +{"source": "Supervised learning problems---particularly those involving social data---are often subjective.That is, human readers, looking at the same data, might come to legitimate but completely different conclusions based on their personal experiences.Yet in machine learning settings feedback from multiple human annotators is often reduced to a single ``ground truth'' label, thus hiding the true, potentially rich and diverse interpretations of the data found across the social spectrum.We explore the rewards and challenges of discovering and learning representative distributions of the labeling opinions of a large human population.A major, critical cost to this approach is the number of humans needed to provide enough labels not only to obtain representative samples but also to train a machine to predict representative distributions on unlabeled data.We propose aggregating label distributions over, not just individuals, but also data items, in 
order to maximize the costs of humans in the loop.We test different aggregation approaches on state-of-the-art deep learning models.Our results suggest that careful label aggregation methods can greatly reduce the number of samples needed to obtain representative distributions.", "target": ["Studiamo il problema dell'apprendimento per prevedere la diversità sottostante dei belief presenti nei domini di apprendimento supervised."]} +{"source": "Recent advancements in deep learning techniques such as Convolutional Neural Networks(CNN) and Generative Adversarial Networks(GAN) have achieved breakthroughs in the problem of semantic image inpainting, the task of reconstructing missing pixels in given images.While much more effective than conventional approaches, deep learning models require large datasets and great computational resources for training, and inpainting quality varies considerably when training data vary in size and diversity.To address these problems, we present in this paper a inpainting strategy of \\textit{Comparative Sample Augmentation}, which enhances the quality of training set by filtering out irrelevant images and constructing additional images using information about the surrounding regions of the images to be inpainted.Experiments on multiple datasets demonstrate that our method extends the applicability of deep inpainting models to training sets with varying sizes, while maintaining inpainting quality as measured by qualitative and quantitative metrics for a large class of deep models, with little need for model-specific consideration.", "target": ["Abbiamo introdotto una strategia che permette l'inpainting dei modelli su dataset di varie dimensioni", "Migliora l'inpainting dell'immagine usando GAN utilizzando un comparative augmenting filter e aggiungendo rumore casuale ad ogni pixel."]} +{"source": "Generative adversarial networks (GANs) are a family of generative models that do not minimize a single training criterion.Unlike other generative models, the data distribution is learned via a game between a generator (the generative model) and a discriminator (a teacher providing training signal) that each minimize their own cost.GANs are designed to reach a Nash equilibrium at which each player cannot reduce their cost without changing the other players’ parameters.One useful approach for the theory of GANs is to show that a divergence between the training distribution and the model distribution obtains its minimum value at equilibrium.Several recent research directions have been motivated by the idea that this divergence is the primary guide for the learning process and that every step of learning should decrease the divergence.We show that this view is overly restrictive.During GAN training, the discriminator provides learning signal in situations where the gradients of the divergences between distributions would not be useful.We provide empirical counterexamples to the view of GAN training as divergence minimization.Specifically, we demonstrate that GANs are able to learn distributions in situations where the divergence minimization point of view predicts they would fail.We also show that gradient penalties motivated from the divergence minimization perspective are equally helpful when applied in other contexts in which the divergence minimization perspective does not predict they would be helpful.This contributes to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily 
minimize a specific divergence at each step.", "target": ["Troviamo prove che la minimizzazione della divergenza potrebbe non essere una caratterizzazione accurata del training delle GAN.", "La submission mira a presentare l'evidenza empirica che la teoria della minimizzazione della divergenza è più uno strumento per comprendere il risultato del training delle GAN che una condizione necessaria da far rispettare durante il training stesso", "Questo articolo studia le GAN non saturanti e l'effetto di due approcci con penalty del gradiente, considerando diversi thought experiment per dimostrare le osservazioni e validarle su esperimenti di dati reali."]} +{"source": "Measuring Mutual Information (MI) between high-dimensional, continuous, random variables from observed samples has wide theoretical and practical applications.Recent works have developed accurate MI estimators through provably low-bias approximations and tight variational lower bounds assuming abundant supply of samples, but require an unrealistic number of samples to guarantee statistical significance of the estimation.In this work, we focus on improving data efficiency and propose a Data-Efficient MINE Estimator (DEMINE) that can provide a tight lower confident interval of MI under limited data, through adding cross-validation to the MINE lower bound (Belghazi et al., 2018).Hyperparameter search is employed and a novel meta-learning approach with task augmentation is developed to increase robustness to hyperparamters, reduce overfitting and improve accuracy.With improved data-efficiency, our DEMINE estimator enables statistical testing of dependency at practical dataset sizes.We demonstrate the effectiveness of DEMINE on synthetic benchmarks and a real world fMRI dataset, with application of inter-subject correlation analysis.", "target": ["Un nuovo e pratico test statistico di dipendenza usando le reti neurali, con un benchmark su dataset sintetici e reali di fMRI.", "Propone una stima dell'informazione reciproca basata su reti neurali che può funzionare in modo affidabile con piccoli dataset, riducendo la sample complexity disaccoppiando il problema dell'apprendimento della rete e il problema della stima."]} +{"source": "Language and vision are processed as two different modal in current work for image captioning.However, recent work on Super Characters method shows the effectiveness of two-dimensional word embedding, which converts text classification problem into image classification problem.In this paper, we propose the SuperCaptioning method, which borrows the idea of two-dimensional word embedding from Super Characters method, and processes the information of language and vision together in one single CNN model.The experimental results on Flickr30k data shows the proposed method gives high quality image captions.An interactive demo is ready to show at the workshop.", "target": ["Image captioning utilizzando word embedding bidimensionali."]} +{"source": "Determining the optimal order in which data examples are presented to Deep Neural Networks during training is a non-trivial problem.However, choosing a non-trivial scheduling method may drastically improve convergence.In this paper, we propose a Self-Paced Learning (SPL)-fused Deep Metric Learning (DML) framework, which we call Learning Embeddings for Adaptive Pace (LEAP).Our method parameterizes mini-batches dynamically based on the \\textit{easiness} and \\textit{true diverseness} of the sample within a salient feature representation space.In LEAP, we train an 
\\textit{embedding} Convolutional Neural Network (CNN) to learn an expressive representation space by adaptive density discrimination using the Magnet Loss.The \\textit{student} CNN classifier dynamically selects samples to form a mini-batch based on the \\textit{easiness} from cross-entropy losses and \\textit{true diverseness} of examples from the representation space sculpted by the \\textit{embedding} CNN.We evaluate LEAP using deep CNN architectures for the task of supervised image classification on MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and SVHN.We show that the LEAP framework converges faster with respect to the number of mini-batch updates required to achieve a comparable or better test performance on each of the datasets.", "target": ["LEAP combina la forza dell'adaptive sampling con quella dell'apprendimento online in mini-batch e dell'apprendimento adattivo della rappresentazione per formulare una strategia rappresentativa self-paced in un protocollo di training DNN end-to-end.", "Introduce un metodo per creare mini batch per una rete di student utilizzando un secondo spazio di rappresentazione appreso per selezionare dinamicamente gli esempi in base alla loro \"facilità e reale diversità\".", "Sperimenta l'accuratezza di classificazione su MNIST, FashionMNIST, e CIFAR-10 dataset per imparare una rappresentazione con la selezione di minibatch tramite curriculum learning in un framework end-to-end."]} +{"source": "Conventional deep reinforcement learning typically determines an appropriate primitive action at each timestep, which requires enormous amount of time and effort for learning an effective policy, especially in large and complex environments.To deal with the issue fundamentally, we incorporate macro actions, defined as sequences of primitive actions, into the primitive action space to form an augmented action space.The problem lies in how to find an appropriate macro action to augment the primitive action space. The agent using a proper augmented action space is able to jump to a farther state and thus speed up the exploration process as well as facilitate the learning procedure.In previous researches, macro actions are developed by mining the most frequently used action sequences or repeating previous actions.However, the most frequently used action sequences are extracted from a past policy, which may only reinforce the original behavior of that policy.On the other hand, repeating actions may limit the diversity of behaviors of the agent.Instead, we propose to construct macro actions by a genetic algorithm, which eliminates the dependency of the macro action derivation procedure from the past policies of the agent. Our approach appends a macro action to the primitive action space once at a time and evaluates whether the augmented action space leads to promising performance or not. 
We perform extensive experiments and show that the constructed macro actions are able to speed up the learning process for a variety of deep reinforcement learning methods.Our experimental results also demonstrate that the macro actions suggested by our approach are transferable among deep reinforcement learning methods and similar environments.We further provide a comprehensive set of ablation analysis to validate our methodology.", "target": ["Proponiamo di costruire le macro azioni con un algoritmo genetico, che elimina la dipendenza della procedura di derivazione delle macro azioni dalle policy passate dell'agente.", "Questo articolo propone un algoritmo generico per la costruzione di macro azioni per il deep reinforcement learning aggiungendo una macro azione allo spazio delle azioni primitive."]} +{"source": "A key problem in neuroscience and life sciences more generally is that the data generation process is often best thought of as a hierarchy of dynamic systems.One example of this is in-vivo calcium imaging data, where observed calcium transients are driven by a combination of electro-chemical kinetics where hypothesized trajectories around manifolds determining the frequency of these transients.A recent approach using sequential variational auto-encoders demonstrated it was possible to learn the latent dynamic structure of reaching behaviour from spiking data modelled as a Poisson process.Here we extend this approach using a ladder method to infer the spiking events driving calcium transients along with the deeper latent dynamic system.We show strong performance of this approach on a benchmark synthetic dataset against a number of alternatives.", "target": ["Proponiamo un'estensione di LFADS in grado di dedurre gli spike train per ricostruire le tracce di fluorescenza del calcio usando i VAE gerarchici."]} +{"source": "In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs.There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal.In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora.Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation.Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation.The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively.Our implementation is released as an open source project.", "target": ["Introduciamo il primo metodo di successo per neural machine translation in modo unsupervised, usando solo corpora monolingue", "Gli autori presentano un modello di NMT unsupervised che non richiede corpora paralleli tra le due lingue di interesse.", "Questo è un articolo sulla machine translation unsupervised che addestra un'architettura standard utilizzando word embedding in uno spazio di embedding condiviso solo con word paper bilingui e un encoder-decoder addestrato utilizzando dati monolingua."]} 
+{"source": "We describe a new training methodology for generative adversarial networks.The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses.This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2.We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10.Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator.Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation.As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "target": ["Addestriamo le generative adversarial network in modo progressivo, permettendoci di generare immagini ad alta risoluzione con alta qualità.", "Introduce la crescita progressiva e una semplice funzione statistica di riepilogo dei minibatch senza parametri da usare nel training delle GAN per consentire la sintesi di immagini ad alta risoluzione."]} +{"source": "Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance.DeepSphere, a method based on a graph representation of the discretized sphere, strikes a controllable balance between these two desiderata.This contribution is twofold.First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of pixels and neighbors.Second, we evaluate DeepSphere on relevant problems.Experiments show state-of-the-art performance and demonstrates the efficiency and flexibility of this formulation.Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay.", "target": ["Una CNN sferica basata su grafi che trova un interessante equilibrio di compromessi per un'ampia varietà di applicazioni.", "Combina le strutture CNN esistenti basate sulla discretizzazione di una sfera come un grafo per mostrare un risultato di convergenza che è legato all'equivalenza rispetto alla rotazione su una sfera.", "Gli autori usano la formulazione esistente di CNN a grafo e una strategia di pooling che sfrutta le pixelature gerarchiche della sfera per imparare dalla sfera discretizzata."]} +{"source": "The notion of the stationary equilibrium ensemble has played a central role in statistical mechanics.In machine learning as well, training serves as generalized equilibration that drives the probability distribution of model parameters toward stationarity.Here, we derive stationary fluctuation-dissipation relations that link measurable quantities and hyperparameters in the stochastic gradient descent algorithm.These relations hold exactly for any stationary state and can in particular be used to adaptively set training schedule.We can further use the relations to efficiently extract information pertaining to a loss-function landscape such as the magnitudes of its Hessian and anharmonicity.Our claims are empirically verified.", "target": ["Dimostriamo le relazioni di fluttuazione-dissipazione per SGD, che possono essere utilizzate per (i) impostare in modo adattivo i learning rate e (ii) sondare le superfici della loss.", "I concetti del paper rientrano nel formalismo a 
tempo discreto, usano l'equazione master ed eliminano la dipendenza da un'approssimazione localmente quadratica della funzione di loss o da qualsiasi ipotesi gaussiana del rumore SGD.", "Gli autori derivano le relazioni di fluttuazione-dissipazione stazionarie che collegano le quantità misurabili e gli iperparametri in SGD e usano le relazioni per impostare la schedule di training in modo adattivo e analizzare il landscape della funzione di loss."]} +{"source": "Recurrent neural networks (RNNs) are difficult to train on sequence processing tasks, not only because input noise may be amplified through feedback, but also because any inaccuracy in the weights has similar consequences as input noise.We describe a method for denoising the hidden state during training to achieve more robust representations thereby improving generalization performance.Attractor dynamics are incorporated into the hidden state to `clean up' representations at each step of a sequence.The attractor dynamics are trained through an auxillary denoising loss to recover previously experienced hidden states from noisy versions of those states.This state-denoised recurrent neural network (SDRNN) performs multiple steps of internal processing for each external sequence step.On a range of tasks, we show that the SDRNN outperforms a generic RNN as well as a variant of the SDRNN with attractor dynamics on the hidden state but without the auxillary loss.We argue that attractor dynamics---and corresponding connectivity constraints---are an essential component of the deep learning arsenal and should be invoked not only for recurrent networks but also for improving deep feedforward nets and intertask transfer.", "target": ["Proponiamo un meccanismo di denoising dello stato interno di una RNN per migliorare le prestazioni di generalizzazione."]} +{"source": "We consider reinforcement learning in input-driven environments, where an exogenous, stochastic input process affects the dynamics of the system.Input processes arise in many applications, including queuing systems, robotics control with disturbances, and object tracking.Since the state dynamics and rewards depend on the input process, the state alone provides limited information for the expected future returns.Therefore, policy gradient methods with standard state-dependent baselines suffer high variance during training.We derive a bias-free, input-dependent baseline to reduce this variance, and analytically show its benefits over state-dependent baselines.We then propose a meta-learning approach to overcome the complexity of learning a baseline that depends on a long sequence of inputs.Our experimental results show that across environments from queuing systems, computer networks, and MuJoCo robotic locomotion, input-dependent baselines consistently improve training stability and result in better eventual policies.", "target": ["Per environment dettati parzialmente da processi di input esterni, deriviamo una baseline dipendente dall'input che riduce provatamente la varianza per i metod policy gradient e migliora le prestazioni della policy in una vasta gamma di task RL.", "Gli autori considerano il problema dell'apprendimento in environment guidati dall'input, mostrano come il teorema PG si applichi ancora per un critic consapevole dell'input, e mostrano che le baseline dipendenti dall'input sono le migliori da usare nella congettura con quel critic.", "Questo articolo introduce la nozione di baseline dipendente dall'input nei metodi di policy gradient in RL, e propone diversi 
metodi per addestrare la funzione di baseline dipendente dall'input per aiutare a cancellare la varianza dalla perturbazione dei fattori esterni."]} +{"source": "Deep networks have shown great performance in classification tasks.However, the parameters learned by the classifier networks usually discard stylistic information of the input, in favour of information strictly relevant to classification.We introduce a network that has the capacity to do both classification and reconstruction by adding a \"style memory\" to the output layer of the network.We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification and reconstruction losses.The generative capacity of our network demonstrates that the combination of style-memory neurons with the classifier neurons yield good reconstructions of the inputs when the classification is correct.We further investigate the nature of the style memory, and how it relates to composing digits and letters.", "target": ["Aumentare il layer superiore di una rete classificatrice con una style memory le permette di essere generativa.", "Questo articolo propone di addestrare una rete neurale classificatrice non solo per classificare, ma anche per ricostruire una rappresentazione del suo input, al fine di fattorizzare le informazioni della classe dall'aspetto.", "L'articolo propone di addestrare un autoencoder in modo tale che la rappresentazione del layer intermedio consista nella label di classe dell'input e in una hidden representation."]} +{"source": "Routing models, a form of conditional computation where examples are routed through a subset of components in a larger network, have shown promising results in recent works.Surprisingly, routing models to date have lacked important properties, such as architectural diversity and large numbers of routing decisions.Both architectural diversity and routing depth can increase the representational power of a routing network.In this work, we address both of these deficiencies.We discuss the significance of architectural diversity in routing models, and explain the tradeoffs between capacity and optimization when increasing routing depth.In our experiments, we find that adding architectural diversity to routing models significantly improves performance, cutting the error rates of a strong baseline by 35% on an Omniglot setup.However, when scaling up routing depth, we find that modern routing techniques struggle with optimization.We conclude by discussing both the positive and negative results, and suggest directions for future research.", "target": ["I modelli di routing che lavorano sul singolo esempio beneficiano della diversità dell'architettura, ma faticano ancora nello scalare a un gran numero di decisioni di routing.", "Aggiunge diversità al tipo di unità architetturale disponibile per il router ad ogni decisione e scalando a reti più profonde, raggiungendo prestazioni allo stato dell'arte su Omniglot.", "Questo lavoro estende le reti di routing per utilizzare diverse architetture attraverso i moduli instradati"]} +{"source": "Across numerous applications, forecasting relies on numerical solvers for partial differential equations (PDEs).Although the use of deep-learning techniques has been proposed, the uses have been restricted by the fact the training data are obtained using PDE solvers.Thereby, the uses were limited to domains, where the PDE solver was applicable, but no further. 
We present methods for training on small domains, while applying the trained models on larger domains, with consistency constraints ensuring the solutions are physically meaningful even at the boundary of the small domains.We demonstrate the results on an air-pollution forecasting model for Dublin, Ireland.", "target": ["Presentiamo RNN per il training di modelli surrogati di PDE, in cui i vincoli di coerenza assicurano che le soluzioni siano fisicamente significative, anche quando il training utilizza domini molto più piccoli di quelli a cui viene applicato il modello addestrato."]} +{"source": "We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Decent (OMD) for training Wasserstein GANs.Recent theoretical results have shown that optimistic mirror decent (OMD) can enjoy faster regret rates in the context of zero-sum games.WGANs is exactly a context of solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror decent addresses the limit cycling problem in training WGANs.We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle.We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum.We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences.We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution, than models trained with GD variants.Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam.We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.", "target": ["Proponiamo l'uso di mirror ottimistici sufficienti per affrontare i problemi di cycling nel training delle GAN. 
Introduciamo anche l'algoritmo Optimistic Adam", "Questo articolo propone l'uso della mirror descent ottimistica per addestrare le WGAN", "L'articolo propone di usare la gradient descent ottimistica per il training GAN che evita il cycling osservato con SGD e le sue varianti e fornisce risultati promettenti nel training GAN.", "Questo articolo propone una semplice modifica del gradient descent standard, sostenendo di migliorare la convergenza delle GAN e di altri problemi di ottimizzazione min-max."]} +{"source": "Learning good representations of users and items is crucially important to recommendation with implicit feedback.Matrix factorization is the basic idea to derive the representations of users and items by decomposing the given interaction matrix.However, existing matrix factorization based approaches share the limitation in that the interaction between user embedding and item embedding is only weakly enforced by fitting the given individual rating value, which may lose potentially useful information.In this paper, we propose a novel Augmented Generalized Matrix Factorization (AGMF) approach that is able to incorporate the historical interaction information of users and items for learning effective representations of users and items.Despite the simplicity of our proposed approach, extensive experiments on four public implicit feedback datasets demonstrate that our approach outperforms state-of-the-art counterparts.Furthermore, the ablation study demonstrates that by using multi-hot encoding to enrich user embedding and item embedding for Generalized Matrix Factorization, better performance, faster convergence, and lower training loss can be achieved.", "target": ["Una semplice estensione della fattorizzazione matriciale generalizzata può superare gli approcci state-of-the-art per recommendation.", "Il lavoro presenta un framework di fattorizzazione della matrice per imporre l'effetto dei dati storici quando si imparano le preferenze degli utenti nei setting di filtraggio collaborativo."]} +{"source": "We propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions.The method simultaneously acquires representations of input data and its dynamics.It is based on a hierarchical generative model composed of two levels.In the first level, a model learns representations to generate observed data.In the second level, representational states encode the dynamics of the lower one.The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models.The method actively explores the latent space guided by its knowledge and the uncertainty about it.That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space.So, no encoder or inference models are used since the generators also serve as their inverse transformations.The method is evaluated in two scenarios, with static images and with videos.The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders.With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed.", "target": ["Un metodo che costruisce rappresentazioni di dati sequenziali e le sue dinamiche attraverso modelli generativi con un processo attivo", "Combina reti neurali e distribuzioni gaussiane per 
creare un'architettura e un modello generativo per immagini e video che minimizza l'errore tra le immagini generate e quelle fornite.", "L'articolo propone un modello di rete bayesiana, realizzato come una rete neurale, che apprende diversi dati sotto forma di un sistema dinamico lineare"]} +{"source": "Activation is a nonlinearity function that plays a predominant role in the convergence and performance of deep neural networks.While Rectified Linear Unit (ReLU) is the most successful activation function, its derivatives have shown superior performance on benchmark datasets.In this work, we explore the polynomials as activation functions (order ≥ 2) that can approximate continuous real valued function within a given interval.Leveraging this property, the main idea is to learn the nonlinearity, accepting that the ensuing function may not be monotonic.While having the ability to learn more suitable nonlinearity, we cannot ignore the fact that it is a challenge to achieve stable performance due to exploding gradients - which is prominent with the increase in order.To handle this issue, we introduce dynamic input scaling, output scaling, and lower learning rate for the polynomial weights.Moreover, lower learning rate will control the abrupt fluctuations of the polynomials between weight updates.In experiments on three public datasets, our proposed method matches the performance of prior activation functions, thus providing insight into a network’s nonlinearity preference.", "target": ["Proponiamo polinomi come funzioni di attivazione.", "Gli autori introducono funzioni di attivazione learnable che sono parametrizzate da funzioni polinomiali e mostrano risultati leggermente migliori di ReLU."]} +{"source": "We introduce CBF, an exploration method that works in the absence of rewards or end of episode signal.CBF is based on intrinsic reward derived from the error of a dynamics model operating in feature space.It was inspired by (Pathak et al., 2017), is easy to implement, and can achieve results such as passing four levels of Super Mario Bros, navigating VizDoom mazes and passing two levels of SpaceInvaders.We investigated the effect of combining the method with several auxiliary tasks, but find inconsistent improvements over the CBF baseline.", "target": ["Un semplice metodo di motivazione intrinseca utilizzando l'errore del modello di dinamica forward nello spazio delle feature della policy."]} +{"source": "This paper is concerned with the robustness of VAEs to adversarial attacks.We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as β-TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al.(2018)).This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions.", "target": ["Mostriamo che i VAE disentangled sono più robusti dei VAE standard agli adversarial attack che mirano a ingannarli nella decodifica dell'input avversario verso un obiettivo scelto. 
Sviluppiamo poi un VAE gerarchico disentangled ancora più robusto, Seatbelt-VAE.", "Gli autori propongono un nuovo modello VAE chiamato seatbelt-VAE, che dimostra di essere più robusto per gli attacchi latenti rispetto ai benchmark."]} +{"source": "The backpropagation algorithm is the de-facto standard for credit assignment in artificial neural networks due to its empirical results.Since its conception, variants of the backpropagation algorithm have emerged.More specifically, variants that leverage function changes in the backpropagation equations to satisfy their specific requirements.Feedback Alignment is one such example, which replaces the weight transpose matrix in the backpropagation equations with a random matrix in search of a more biologically plausible credit assignment algorithm.In this work, we show that function changes in the backpropagation procedure is equivalent to adding an implicit learning rate to an artificial neural network.Furthermore, we learn activation function derivatives in the backpropagation equations to demonstrate early convergence in these artificial neural networks.Our work reports competitive performances with early convergence on MNIST and CIFAR10 on sufficiently large deep neural network architectures.", "target": ["Dimostriamo che i cambiamenti di funzione nella backpropagation sono equivalenti a un learning rate implicito"]} +{"source": "Unsupervised text style transfer is the task of re-writing text of a given style into a target style without using a parallel corpus of source style and target style sentences for training.Style transfer systems are evaluated on their ability to generate sentences that1) possess the target style,2) are fluent and natural sounding, and3) preserve the non-stylistic parts (content) of the source sentence.We train a reinforcement learning (RL) based unsupervised style transfer system that incorporates rewards for the above measures, and describe novel rewards shaping methods for the same.Our approach does not attempt to disentangle style and content, and leverages the power of massively pre-trained language models as well as the Transformer.Our system significantly outperforms existing state-of-art systems based on human as well as automatic evaluations on target style, fluency and content preservation as well as on overall success of style transfer, on a variety of datasets.", "target": ["Un approccio di reinforcement learning al text style transfer", "Introduce un metodo basato su RL che fa leva su un language model pre-trained per trasferire lo stile del testo, senza un obiettivo di disentanglement, utilizzando style-transfer generation da un altro modello.", "Gli autori propongono una reward combinata composta da fluidità, contenuto e stile per il text style transfer."]} +{"source": "Despite the success of Generative Adversarial Networks (GANs) in image synthesis, there lacks enough understanding on what networks have learned inside the deep generative representations and how photo-realistic images are able to be composed from random noises.In this work, we show that highly-structured semantic hierarchy emerges from the generative representations as the variation factors for synthesizing scenes.By probing the layer-wise representations with a broad set of visual concepts at different abstraction levels, we are able to quantify the causality between the activations and the semantics occurring in the output image.Such a quantification identifies the human-understandable variation factors learned by GANs to compose 
scenes.The qualitative and quantitative results suggest that the generative representations learned by GAN are specialized to synthesize different hierarchical semantics: the early layers tend to determine the spatial layout and configuration, the middle layers control the categorical objects, and the later layers finally render the scene attributes as well as color scheme.Identifying such a set of manipulatable latent semantics facilitates semantic scene manipulation.", "target": ["Mostriamo che la gerarchia semantica altamente strutturata emerge nelle rappresentazioni generative deep come risultato per sintetizzare le scene.", "Il paper studia gli aspetti codificati dalle variabili latenti in ingresso nei diversi layer di StyleGAN.", "L'articolo presenta un'interpretazione visivamente guidata delle attivazioni dei layer di convoluzione nel generatore di StyleGAN su layout, categoria di scena, attributi di scena e colore."]} +{"source": "Variational autoencoders (VAEs) defined over SMILES string and graph-based representations of molecules promise to improve the optimization of molecular properties, thereby revolutionizing the pharmaceuticals and materials industries.However, these VAEs are hindered by the non-unique nature of SMILES strings and the computational cost of graph convolutions.To efficiently pass messages along all paths through the molecular graph, we encode multiple SMILES strings of a single molecule using a set of stacked recurrent neural networks, harmonizing hidden representations of each atom between SMILES representations, and use attentional pooling to build a final fixed-length latent representation.By then decoding to a disjoint set of SMILES strings of the molecule, our All SMILES VAE learns an almost bijective mapping between molecules and latent representations near the high-probability-mass subspace of the prior.Our SMILES-derived but molecule-based latent representations significantly surpass the state-of-the-art in a variety of fully- and semi-supervised property regression and molecular property optimization tasks.", "target": ["Mettiamo in comune i messaggi tra più stringhe SMILES della stessa molecola per passare le informazioni lungo tutti i percorsi attraverso il grafo molecolare, producendo rappresentazioni latenti che superano significativamente lo stato dell'arte in una varietà di task.", "Il metodo utilizza input multipli di stringhe SMILES, la fusione delle feature in base ai caratteri attraverso queste stringhe e il training della rete attraverso target multipli di output di stringhe SMILES, creando una robusta rappresentazione latente a lunghezza fissa indipendente dalla variazione SMILES.", "Gli autori descrivono un nuovo metodo di autoencoder variazionale per le molecole che codifica le molecole come stringhe per ridurre le operazioni necessarie per condividere le informazioni tra gli atomi nella molecola."]} +{"source": "We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN).Although conditional distributions are multi-modal (i.e., having many modes) in practice, most cGAN approaches tend to learn an overly simplified distribution where an input is always mapped to a single output regardless of variations in latent code.To address such issue, we propose to explicitly regularize the generator to produce diverse outputs depending on latent codes.The proposed regularization is simple, general, and can be easily integrated into most conditional GAN 
objectives.Additionally, explicit regularization on generator allows our method to control a balance between visual quality and diversity.We demonstrate the effectiveness of our method on three conditional generation tasks: image-to-image translation, image inpainting, and future video prediction.We show that simple addition of our regularization to existing models leads to surprisingly diverse generations, substantially outperforming the previous approaches for multi-modal conditional generation specifically designed in each individual task.", "target": ["Proponiamo un approccio semplice e generale che evita un mode collapse in varie GAN condizionali.", "L'articolo propone un termine di regolarizzazione per l'obiettivo GAN condizionale al fine di promuovere una generazione multimodale diversa e prevenire il mode collapse.", "L'articolo propone un metodo per generare diversi output per vari framework GAN condizionali tra cui la traduzione image-to-image, l'image-inpainting e la predizione video, che può essere applicato a vari framework di sintesi condizionale per vari task."]} +{"source": "The transformer is a state-of-the-art neural translation model that uses attention to iteratively refine lexical representations with information drawn from the surrounding context.Lexical features are fed into the first layer and propagated through a deep network of hidden layers.We argue that the need to represent and propagate lexical features in each layer limits the model’s capacity for learning and representing other information relevant to the task.To alleviate this bottleneck, we introduce gated shortcut connections between the embedding layer and each subsequent layer within the encoder and decoder.This enables the model to access relevant lexical content dynamically, without expending limited resources on storing it within intermediate states.We show that the proposed modification yields consistent improvements on standard WMT translation tasks and reduces the amount of lexical information passed along the hidden layers.We furthermore evaluate different ways to integrate lexical connections into the transformer architecture and present ablation experiments exploring the effect of proposed shortcuts on model behavior.", "target": ["Dotare il transformer di shortcut per il layer di embedding libera capacità del modello per imparare nuove informazioni."]} +{"source": "Probability density estimation is a classical and well studied problem, but standard density estimation methods have historically lacked the power to model complex and high-dimensional image distributions. More recent generative models leverage the power of neural networks to implicitly learn and represent probability models over complex images. We describe methods to extract explicit probability density estimates from GANs, and explore the properties of these image density functions. We perform sanity check experiments to provide evidence that these probabilities are reasonable. However, we also show that density functions of natural images are difficult to interpret and thus limited in use. 
We study reasons for this lack of interpretability, and suggest that we can get better interpretability by doing density estimation on latent representations of images.", "target": ["Esaminiamo la relazione tra i valori di densità di probabilità e il contenuto dell'immagine nelle GAN non invertibili.", "Gli autori cercano di stimare la distribuzione di probabilità dell'immagine con l'aiuto di GAN e sviluppano un'approssimazione adeguata alle PDF nello spazio latente."]} +{"source": "Convolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks.The design of the regular convolution is based on the Receptive Field (RF) where the information within a specific region is processed.In the view of the regular convolution's RF, the outputs of neurons in lower layers with smaller RF are bundled to create neurons in higher layers with larger RF. As a result, the neurons in high layers are able to capture the global context even though the neurons in low layers only see the local information.However, in lower layers of the biological brain, the information outside of the RF changes the properties of neurons.In this work, we extend the regular convolution and propose spatially shuffled convolution (ss convolution).In ss convolution, the regular convolution is able to use the information outside of its RF by spatial shuffling which is a simple and lightweight operation.We perform experiments on CIFAR-10 and ImageNet-1k dataset, and show that ss convolution improves the classification performance across various CNNs.", "target": ["Proponiamo una convoluzione spaziale rimescolata che la convoluzione regolare incorpora le informazioni dall'esterno del suo campo recettivo.", "Propone la convoluzione SS che utilizza informazioni al di fuori della sua RF, mostrando risultati migliori quando viene testata su più modelli CNN.", "Gli autori hanno proposto una strategia di shuffling per i layer di convoluzione nelle reti neurali convoluzionali."]} +{"source": "We propose a framework to model the distribution of sequential data coming froma set of entities connected in a graph with a known topology.The method isbased on a mixture of shared hidden Markov models (HMMs), which are trainedin order to exploit the knowledge of the graph structure and in such a way that theobtained mixtures tend to be sparse.Experiments in different application domainsdemonstrate the effectiveness and versatility of the method.", "target": ["Un metodo per modellare la distribuzione generativa di sequenze provenienti da entità connesse a grafi.", "Gli autori propongono un metodo per modellare i dati sequenziali da fonti multiple interconnesse utilizzando una miscela di pool comune di HMM."]} +{"source": "To gain high rewards in muti-agent scenes, it is sometimes necessary to understand other agents and make corresponding optimal decisions.We can solve these tasks by first building models for other agents and then finding the optimal policy with these models.To get an accurate model, many observations are needed and this can be sample-inefficient.What's more, the learned model and policy can overfit to current agents and cannot generalize if the other agents are replaced by new agents.In many practical situations, each agent we face can be considered as a sample from a population with a fixed but unknown distribution.Thus we can treat the task against some specific agents as a task sampled from a task distribution.We apply meta-learning method to build models and learn 
policies.Therefore when new agents come, we can adapt to them efficiently.Experiments on grid games show that our method can quickly get high rewards.", "target": ["Il nostro lavoro applica il meta learning al multi-agent Reinforcement Learning per aiutare il nostro agente ad adattarsi in modo efficiente ai nuovi avversari in arrivo.", "Questo articolo si concentra sull'adattamento veloce al nuovo comportamento degli altri agenti dell'ambiente utilizzando un metodo basato su MAML", "L'articolo presenta un approccio all'apprendimento multi-agente basato sul framework del meta learning agnostico per il task di modellazione dell'avversario per il RL multi-agente."]} +{"source": "We characterize the singular values of the linear transformation associated with a standard 2D multi-channel convolutional layer, enabling their efficient computation. This characterization also leads to an algorithm for projecting a convolutional layer onto an operator-norm ball.We show that this is an effective regularizer; for example, it improves the test error of a deep residual network using batch normalization on CIFAR-10 from 6.2% to 5.3%.", "target": ["Caratterizziamo i valori singolari della trasformata lineare associata a un layer convoluzionario standard 2D multicanale, permettendo il loro calcolo efficiente.", "L'articolo è dedicato al calcolo dei valori singolari dei layer convoluzionali", "Deriva formule esatte per il calcolo dei valori singolari dei layer di convoluzione delle deep neural network e mostra che il calcolo dei valori singolari può essere fatto molto più velocemente del calcolo del SVD completo della matrice di convoluzione facendo appello alle trasformazioni FFT veloci."]} +{"source": "Trading off exploration and exploitation in an unknown environment is key to maximising expected return during learning.A Bayes-optimal policy, which does so optimally, conditions its actions not only on the environment state but on the agent's uncertainty about the environment.Computing a Bayes-optimal policy is however intractable for all but the smallest tasks.In this paper, we introduce variational Bayes-Adaptive Deep RL (variBAD), a way to meta-learn to perform approximate inference in an unknown environment, and incorporate task uncertainty directly during action selection.In a grid-world domain, we illustrate how variBAD performs structured online exploration as a function of task uncertainty.We also evaluate variBAD on MuJoCo domains widely used in meta-RL and show that it achieves higher return during training than existing methods.", "target": ["VariBAD apre una strada all'esplorazione approssimativa Bayes-ottimale trattabile per il deep RL usando idee dal meta learning, del RL bayesiano e dall'inferenza variazionale approssimata.", "Questo articolo presenta un nuovo metodo di deep reinforcement learning che può fare un trade-off in modo efficiente tra exploration e exploitation combinando meta learning, inferenza variazionale e RL bayesiana."]} +{"source": "In a continual learning setting, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories.While deep neural nets have achieved resounding success in the classical setting, they are known to forget about knowledge acquired in prior episodes of learning if the examples encountered in the current episode of learning are drastically different from those encountered in prior episodes.This makes deep neural nets ill-suited to continual learning.In this paper, 
we propose a new model that can both leverage the expressive power of deep neural nets and is resilient to forgetting when new categories are introduced.We demonstrate an improvement in terms of accuracy on original classes compared to a vanilla deep neural net.", "target": ["Mostriamo che il metric learning può aiutare a ridurre il catastrophic forgetting", "Questo articolo applica il metric learning per ridurre il catastrophic forgetting sulle reti neurali migliorando l'espressività del layer finale, portando a migliori risultati nel continual learning."]} +{"source": "Biomedical knowledge bases are crucial in modern data-driven biomedical sciences, but auto-mated biomedical knowledge base construction remains challenging.In this paper, we consider the problem of disease entity normalization, an essential task in constructing a biomedical knowledge base. We present NormCo, a deep coherence model which considers the semantics of an entity mention, as well as the topical coherence of the mentions within a single document.NormCo mod-els entity mentions using a simple semantic model which composes phrase representations from word embeddings, and treats coherence as a disease concept co-mention sequence using an RNN rather than modeling the joint probability of all concepts in a document, which requires NP-hard inference. To overcome the issue of data sparsity, we used distantly supervised data and synthetic data generated from priors derived from the BioASQ dataset. Our experimental results show thatNormCo outperforms state-of-the-art baseline methods on two disease normalization corpora in terms of (1) prediction quality and (2) efficiency, and is at least as performant in terms of accuracy and F1 score on tagged documents.", "target": ["Presentiamo NormCo, un modello di deep coherence che considera la semantica di un'entity mention, così come la topical coherence delle mention all'interno di un singolo documento per eseguire la normalizzazione delle disease entity.", "Utilizza un autoencoder GRU per rappresentare il \"contesto\" (entità correlate di una data malattia nell'arco di una frase), risolvendo il task BioNLP con miglioramenti significativi rispetto ai metodi più noti."]} +{"source": "We explore the role of multiplicative interaction as a unifying framework to describe a range of classical and modern neural network architectural motifs, such as gating, attention layers, hypernetworks, and dynamic convolutions amongst others.Multiplicative interaction layers as primitive operations have a long-established presence in the literature, though this often not emphasized and thus under-appreciated.We begin by showing that such layers strictly enrich the representable function classes of neural networks.We conjecture that multiplicative interactions offer a particularly powerful inductive bias when fusing multiple streams of information or when conditional computation is required.We therefore argue that they should be considered in many situation where multiple compute or information paths need to be combined, in place of the simple and oft-used concatenation operation.Finally, we back up our claims and demonstrate the potential of multiplicative interactions by applying them in large-scale complex RL and sequence modelling tasks, where their use allows us to deliver state-of-the-art results, and thereby provides new evidence in support of multiplicative interactions playing a more prominent role when designing new neural network architectures.", "target": ["Esploriamo il ruolo 
dell'interazione moltiplicativa come framework unificante per descrivere una serie di pattern architetturali di reti neurali classiche e moderne, come il gating, i layer di attention, le iperreti e le convoluzioni dinamiche.", "Presenta l'interazione moltiplicativa come caratterizzazione unificata per rappresentare i componenti di progettazione dell'architettura del modello comunemente usati, mostrando prove empiriche di prestazioni superiori in task come la modellazione di RL e sequenze.", "L'articolo esplora diversi tipi di interazioni moltiplicative e trova modelli MI in grado di raggiungere una performance allo stato dell'arte nel language modelling e nei problemi di reinforcement learning."]} +{"source": "Developing conditional generative models for text-to-video synthesis is an extremely challenging yet an important topic of research in machine learning.In this work, we address this problem by introducing Text-Filter conditioning Generative Adversarial Network (TFGAN), a GAN model with novel conditioning scheme that aids improving the text-video associations.With a combination of this conditioning scheme and a deep GAN architecture, TFGAN generates photo-realistic videos from text on very challenging real-world video datasets.In addition, we construct a benchmark synthetic dataset of moving shapes to systematically evaluate our conditioning scheme.Extensive experiments demonstrate that TFGAN significantly outperforms the existing approaches, and can also generate videos of novel categories not seen during training.", "target": ["Un'efficace framework GAN di text-conditioning per la generazione di video dal testo", "Questo articolo presenta un metodo basato su GAN per la generazione di video condizionati dalla descrizione del testo, con un nuovo metodo di condizionamento che genera filtri di convoluzione dal testo codificato e li usa per una convoluzione nel discriminatore.", "Questo articolo propone modelli GAN condizionali per la sintesi testo-video: sviluppando filtri CNN condizionati dalle feature del testo e costruendo dataset moving-shape con prestazioni migliorate nella generazione di video/immagini."]} +{"source": "Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.However, compressive networks are desired in many real world applications and direct training of small networks may be trapped in local optima.In this paper, instead of pruning or distilling over-parameterized models to compressive ones, we propose a new approach based on \\emph{differential inclusions of inverse scale spaces}, that generates a family of models from simple to complex ones by coupling gradient descent and mirror descent to explore model structural sparsity.It has a simple discretization, called the Split Linearized Bregman Iteration (SplitLBI), whose global convergence analysis in deep learning is established that from any initializations, algorithmic iterations converge to a critical point of empirical risks.Experimental evidence shows that\\ SplitLBI may achieve state-of-the-art performance in large scale training on ImageNet-2012 dataset etc., while with \\emph{early stopping} it unveils effective subnet architecture with comparable test accuracies to dense models after retraining instead of pruning well-trained ones.", "target": ["SplitLBI è applicato al deep learning per esplorare la sparsità strutturale del modello, raggiungendo prestazioni allo stato dell'arte in 
ImageNet-2012 e svelando un'efficace architettura di subnet.", "Propone un algoritmo basato sull'ottimizzazione per trovare importanti strutture sparse di reti neurali su larga scala accoppiando l'apprendimento della matrice dei pesi e i vincoli di sparsità, offrendo una convergenza garantita su problemi di ottimizzazione non convessi."]} +{"source": "In this paper, we study the learned iterative shrinkage thresholding algorithm (LISTA) for solving sparse coding problems. Following assumptions made by prior works, we first discover that the code components in its estimations may be lower than expected, i.e., require gains, and to address this problem, a gated mechanism amenable to theoretical analysis is then introduced.Specific design of the gates is inspired by convergence analyses of the mechanism and hence its effectiveness can be formally guaranteed.In addition to the gain gates, we further introduce overshoot gates for compensating insufficient step size in LISTA.Extensive empirical results confirm our theoretical findings and verify the effectiveness of our method.", "target": ["Proponiamo meccanismi gated per migliorare l'ISTA appreso per la codifica sparsa, con garanzie teoriche sulla superiorità del metodo.", "Propone estensioni di LISTA che affrontano la sottostima introducendo \"gain gates\" e includendo il momento con \"overshoot gates\", mostrando tassi di convergenza migliorati.", "Questo articolo si concentra sulla soluzione di problemi di codifica sparsa usando reti di tipo LISTA, proponendo una \"gain gating function\" per mitigare la debolezza dell'ipotesi \"nessun falso positivo\"."]} +{"source": "The learning of hierarchical representations for image classification has experienced an impressive series of successes due in part to the availability of large-scale labeled data for training.On the other hand, the trained classifiers have traditionally been evaluated on a handful of test images, which are deemed to be extremely sparsely distributed in the space of all natural images.It is thus questionable whether recent performance improvements on the excessively re-used test sets generalize to real-world natural images with much richer content variations.In addition, studies on adversarial learning show that it is effortless to construct adversarial examples that fool nearly all image classifiers, adding more complications to relative performance comparison of existing models.This work presents an efficient framework for comparing image classifiers, which we name the MAximum Discrepancy (MAD) competition.Rather than comparing image classifiers on fixed test sets, we adaptively sample a test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over WordNet hierarchy.Human labeling on the resulting small and model-dependent image sets reveals the relative performance of the competing classifiers and provides useful insights on potential ways to improve them.We report the MAD competition results of eleven ImageNet classifiers while noting that the framework is readily extensible and cost-effective to add future classifiers into the competition.", "target": ["Presentiamo un framework efficiente e adattivo per confrontare i classificatori di immagini al fine di massimizzare le discrepanze tra i classificatori, invece di confrontarli su test set fissi.", "Meccanismo di individuazione degli errori che confronta i classificatori di immagini creando un test set dove sono più in disaccordo, 
misurando il disaccordo attraverso una distanza semantica derivata dall'ontologia WordNet."]} +{"source": "Robustness of neural networks has recently been highlighted by the adversarial examples, i.e., inputs added with well-designed perturbations which are imperceptible to humans but can cause the network to give incorrect outputs.In this paper, we design a new CNN architecture that by itself has good robustness.We introduce a simple but powerful technique, Random Mask, to modify existing CNN structures.We show that CNN with Random Mask achieves state-of-the-art performance against black-box adversarial attacks without applying any adversarial training.We next investigate the adversarial examples which “fool” a CNN with Random Mask.Surprisingly, we find that these adversarial examples often “fool” humans as well.This raises fundamental questions on how to define adversarial examples and robustness properly.", "target": ["Proponiamo una tecnica che modifica le strutture CNN per migliorare la robustezza mantenendo un'alta accuratezza di test, e solleviamo dubbi sull'adeguatezza dell'attuale definizione di adversarial example generando adversarial example in grado di ingannare gli umani.", "Questo articolo propone una tecnica semplice per migliorare la robustezza delle reti neurali contro gli attacchi black-box.", "Gli autori propongono un metodo semplice per aumentare la robustezza delle reti neurali convoluzionali contro gli adversarial example, con risultati sorprendentemente buoni."]} +{"source": "Supervised deep learning methods require cleanly labeled large-scale datasets, but collecting such data is difficult and sometimes impossible.There exist two popular frameworks to alleviate this problem: semi-supervised learning and robust learning to label noise.Although these frameworks relax the restriction of supervised learning, they are studied independently.Hence, the training scheme that is suitable when only small cleanly-labeled data are available remains unknown.In this study, we consider learning from bi-quality data as a generalization of these studies, in which a small portion of data is cleanly labeled, and the rest is corrupt.Under this framework, we compare recent algorithms for semi-supervised and robust learning.The results suggest that semi-supervised learning outperforms robust learning with noisy labels.We also propose a training strategy for mixing mixup techniques to learn from such bi-quality data effectively.", "target": ["Proponiamo di confrontare l'apprendimento semi-supervised e robusto per le label rumorose in un setting condiviso", "Gli autori propongono una strategia basata sul mixup per il training di un modello in un setting formale che include i task di apprendimento semi-supervised e robusto come casi speciali."]} +{"source": "Hierarchical Sparse Coding (HSC) is a powerful model to efficiently represent multi-dimensional, structured data such as images.The simplest solution to solve this computationally hard problem is to decompose it into independent layerwise subproblems.However, neuroscientific evidence would suggest inter-connecting these subproblems as in the Predictive Coding (PC) theory, which adds top-down connections between consecutive layers.In this study, a new model called Sparse Deep Predictive Coding (SDPC) is introduced to assess the impact of this inter-layer feedback connection.In particular, the SDPC is compared with a Hierarchical Lasso (Hi-La) network made out of a sequence of Lasso layers.A 2-layered SDPC and a Hi-La networks are trained 
on 3 different databases and with different sparsity parameters on each layer.First, we show that the overall prediction error generated by SDPC is lower thanks to the feedback mechanism as it transfers prediction error between layers.Second, we demonstrate that the inference stage of the SDPC is faster to converge than for the Hi-La model.Third, we show that the SDPC also accelerates the learning process.Finally, the qualitative analysis of both models dictionaries, supported by their activation probability, show that the SDPC features are more generic and informative.", "target": ["Questo articolo dimostra sperimentalmente l'effetto benefico delle connessioni top-down nell'algoritmo Hierarchical Sparse Coding.", "Questo articolo presenta uno studio che confronta le tecniche di Hierarchical Sparse Coding, mostrando che il termine top-down è vantaggioso nel ridurre l'errore predittivo e può imparare più velocemente."]} +{"source": "Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings.Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply.We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition.Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2.", "target": ["Un approccio a scatola nera per spiegare le previsioni di un modello di image similarity.", "Introduce un metodo per la spiegazione del modello di image similarity che identifica gli attributi che contribuiscono positivamente al punteggio di somiglianza e li accoppia con una mappa generata di saliency.", "L'articolo propone un meccanismo di explanation che accoppia le regioni tipiche della mappa di saliency insieme agli attributi per le deep neural network per similarity matching."]} +{"source": "Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence (seq2seq) models, by applying perturbations to the input of a model leading to large degradation in performance.However, these perturbations are only indicative of a weakness in the model if they do not change the semantics of the input in a way that would change the expected output.Using the example of machine translation (MT), we propose a new evaluation framework for adversarial attacks on seq2seq models taking meaning preservation into account and demonstrate that existing methods may not preserve meaning in general.Based on these findings, we propose new constraints for attacks on word-based MT systems and show, via human and automatic evaluation, that they produce more semantically similar adversarial inputs.Furthermore, we show that performing adversarial training with meaning-preserving attacks is beneficial to the model in terms of adversarial robustness without hurting test performance.", "target": ["Come dovreste valutare gli adversarial attack su 
seq2seq", "Gli autori studiano modi di generare adversarial example, mostrando che l'adversarial training con l'attacco più coerente con i criteri meaning-preserving introdotti risulta in una migliore robustezza a questo tipo di attacco senza degradazione nel setting non-adversarial.", "L'articolo riguarda le adversarial perturbation meaning-preserving che preservano il significato nel contesto dei modelli Seq2Seq"]} +{"source": "We introduce a new normalization technique that exhibits the fast convergence properties of batch normalization using a transformation of layer weights instead of layer outputs.The proposed technique keeps the contribution of positive and negative weights to the layer output in equilibrium.We validate our method on a set of standard benchmarks including CIFAR-10/100, SVHN and ILSVRC 2012 ImageNet.", "target": ["Una tecnica di normalizzazione alternativa alla batch normalization", "Introduce una tecnica di normalizzazione, che normalizza i pesi dei layer convoluzionali.", "Questo manoscritto introduce una nuova trasformazione per i layer, EquiNorm, per migliorare la batch normalization che non modifica gli input ai layer ma piuttosto i pesi dei layer."]} +{"source": "We present a framework for building unsupervised representations of entities and their compositions, where each entity is viewed as a probability distribution rather than a fixed length vector.In particular, this distribution is supported over the contexts which co-occur with the entity and are embedded in a suitable low-dimensional space.This enables us to consider the problem of representation learning with a perspective from Optimal Transport and take advantage of its numerous tools such as Wasserstein distance and Wasserstein barycenters.We elaborate how the method can be applied for obtaining unsupervised representations of text and illustrate the performance quantitatively as well as qualitatively on tasks such as measuring sentence similarity and word entailment, where we empirically observe significant gains (e.g., 4.1% relative improvement over Sent2vec and GenSen).The key benefits of the proposed approach include:(a) capturing uncertainty and polysemy via modeling the entities as distributions,(b) utilizing the underlying geometry of the particular task (with the ground cost),(c) simultaneously providing interpretability with the notion of optimal transport between contexts and(d) easy applicability on top of existing point embedding methods.In essence, the framework can be useful for any unsupervised or supervised problem (on text or other modalities); and only requires a co-occurrence structure inherent to many problems.The code, as well as pre-built histograms, are available under https://github.com/context-mover.", "target": ["Rappresentare ogni entità come una distribuzione di probabilità su contesti codificati in un ground space.", "Propone di costruire embedding di parole da un istogramma su parole di contesto, invece che come point vector, che permette di misurare le distanze tra due parole in termini di trasporto ottimale tra gli istogrammi attraverso un metodo che aumenta la rappresentazione di un'entità da \"punto in uno spazio vettoriale\" standard a un istogramma con bins situati in alcuni punti di quello spazio vettoriale. 
"]} +{"source": "Over the last few years, the phenomenon of adversarial examples --- maliciously constructed inputs that fool trained machine learning models --- has captured the attention of the research community, especially when the adversary is restricted to making small modifications of a correctly handled input.At the same time, less surprisingly, image classifiers lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise.In this work, we show that these are two manifestations of the same underlying phenomenon.We establish this connection in several ways.First, we find that adversarial examples exist at the same distance scales we would expect from a linear model with the same performance on corrupted images.Next, we show that Gaussian data augmentation during training improves robustness to small adversarial perturbations and that adversarial training improves robustness to several types of image corruptions.Finally, we present a model-independent upper bound on the distance from a corrupted image to its nearest error given test performance and show that in practice we already come close to achieving the bound, so that improving robustness further for the corrupted image distribution requires significantly reducing test error.All of this suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions.This yields a computationally tractable evaluation metric for defenses to consider: test error in noisy image distributions.", "target": ["Piccole perturbazioni adversarial dovrebbero essere attese dati i tassi di errore osservati nei modelli al di fuori della distribuzione naturale dei dati.", "Questo articolo propone una visione alternativa per gli adversarial example in spazi ad alta dimensione, considerando il \"tasso di errore\" in una distribuzione gaussiana centrata su ogni test point."]} +{"source": "Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019; Sanh, 2019).However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked.In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work.Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation.The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements.Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data.One surprising observation is that they have a compound effect even when sequentially applied on the same data.To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.", "target": ["Studia come l'apprendimento self-supervised e la knowledge distillation interagiscono nel contesto della costruzione di modelli compatti.", 
"Studia il training di language model pre-trained tramite la distillation e mostra che l'uso di un teacher per distillare un modello compatto di student funziona meglio del pre-training diretto del modello.", "Questa presentazione mostra che il pre-training su masked language modelling di uno student è meglio della distillation, e il meglio è combinare entrambi e distillare da quel modello uno student pretrained."]} +{"source": "In this paper, we investigate lossy compression of deep neural networks (DNNs) by weight quantization and lossless source coding for memory-efficient deployment.Whereas the previous work addressed non-universal scalar quantization and entropy coding of DNN weights, we for the first time introduce universal DNN compression by universal vector quantization and universal source coding.In particular, we examine universal randomized lattice quantization of DNNs, which randomizes DNN weights by uniform random dithering before lattice quantization and can perform near-optimally on any source without relying on knowledge of its probability distribution.Moreover, we present a method of fine-tuning vector quantized DNNs to recover the performance loss after quantization.Our experimental results show that the proposed universal DNN compression scheme compresses the 32-layer ResNet (trained on CIFAR-10) and the AlexNet (trained on ImageNet) with compression ratios of $47.1$ and $42.5$, respectively.", "target": ["Introduciamo lo schema di compressione universale delle deep neural network, che è applicabile universalmente per la compressione di qualsiasi modello e può funzionare in modo quasi ottimale indipendentemente dalla loro distribuzione dei pesi.", "Introduce una pipeline per la compressione di rete che è simile alla deep compression e usa la quantizzazione a reticolo randomizzata invece della classica quantizzazione vettoriale, e usa la codifica universale della sorgente (bzip2) invece della codifica Huffman."]} +{"source": "What would be learned by variational autoencoder(VAE) and what influence the disentanglement of VAE?This paper tries to preliminarily address VAE's intrinsic dimension, real factor, disentanglement and indicator issues theoretically in the idealistic situation and implementation issue practically through noise modeling perspective in the realistic case. On intrinsic dimension issue, due to information conservation, the idealistic VAE learns and only learns intrinsic factor dimension.Besides, suggested by mutual information separation property, the constraint induced by Gaussian prior to the VAE objective encourages the information sparsity in dimension.On disentanglement issue, subsequently, inspired by information conservation theorem the clarification on disentanglement in this paper is made.On real factor issue, due to factor equivalence, the idealistic VAE possibly learns any factor set in the equivalence class. 
On indicator issue, the behavior of current disentanglement metric is discussed, and several performance indicators regarding the disentanglement and generating influence are subsequently raised to evaluate the performance of VAE model and to supervise the used factors.On implementation issue, the experiments under noise modeling and constraints empirically testify the theoretical analysis and also show their own characteristic in pursuing disentanglement.", "target": ["Questo articolo cerca di affrontare preliminarmente il disentanglement nella situazione ideale teorica e in pratica attraverso la prospettiva di modellazione del rumore nel caso realistico.", "Studia l'importanza della modellazione del rumore nei VAE gaussiani e propone di addestrare il rumore usando il metodo Empirical-Bayes.", "Modificare il modo in cui i fattori di rumore sono trattati nello sviluppo dei modelli VAE"]} +{"source": "Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of $L_2$ regularization.Literal weight decay has been shown to outperform $L_2$ regularization for optimizers for which they differ. We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures.We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input-output Jacobian norm, and (3) reducing the effective damping coefficient for second-order optimization. Our results provide insight into how to improve the regularization of neural networks.", "target": ["Studiamo la weight decay regularization per diversi ottimizzatori e identifichiamo tre meccanismi distinti attraverso i quali il weight decay migliora la generalizzazione.", "Discute l'effetto del weight decay sul training dei modelli di deep network con e senza batch normalization e quando si usano metodi di ottimizzazione del primo/secondo ordine e ipotizza che un learning rate più grande abbia un effetto di regolarizzazione."]} +{"source": "In this paper we present the first freely available dataset for the development and evaluation of domain adaptation methods, for the sound event detection task.The dataset contains 40 log mel-band energies extracted from $100$ different synthetic sound event tracks, with additive noise from nine different acoustic scenes (from indoor, outdoor, and vehicle environments), mixed at six different sound-to-noise ratios, SNRs, (from -12 to -27 dB with a step of -3 dB), and totaling to 5400 (9 * 100 * 6) sound files and a total length of 30 564 minutes.We provide the dataset as is, the code to re-create the dataset and remix the sound event tracks and the acoustic scenes with different SNRs, and a baseline method that tests the adaptation performance with the proposed dataset and establishes some first results.", "target": ["Il primo dataset di domain adaptation liberamente disponibile per il rilevamento di eventi sonori."]} +{"source": "This paper aims to address the limitations of mutual information estimators based on variational optimization.By redefining the cost using generalized functions from nonextensive statistical mechanics we raise the upper bound of previous estimators and enable the control of the bias variance trade 
off.Variational based estimators outperform previous methods especially in high dependence high dimensional scenarios found in machine learning setups.Despite their performance, these estimators either exhibit a high variance or are upper bounded by log(batch size).Our approach inspired by nonextensive statistical mechanics uses different generalizations for the logarithm and the exponential in the partition function.This enables the estimator to capture changes in mutual information over a wider range of dimensions and correlations of the input variables whereas previous estimators saturate them.", "target": ["Stimatore di informazioni reciproche basato sulla meccanica statistica non estensiva", "Questo articolo cerca di stabilire nuovi limiti inferiori variazionali per l'informazione reciproca introducendo il parametro q e definendo q-algebra, mostrando che i limiti inferiori hanno una varianza minore e raggiungono valori elevati."]} +{"source": "Generative adversarial networks (GANs) are a widely used framework for learning generative models.Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax problem to global optimality, but in practice, are successfully trained with stochastic gradient descent-ascent.In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution in polynomial time and sample complexity.", "target": ["Mostriamo che lo stochastic gradient descent ascent converge a un ottimo globale per WGAN con una rete di generatori a un layer.", "Tenta di dimostrare che lo Stochastic Gradient Descent-Ascent potrebbe convergere ad una soluzione globale per il problema min-max di WGAN."]} +{"source": "Classifiers such as deep neural networks have been shown to be vulnerable against adversarial perturbations on problems with high-dimensional input space.While adversarial training improves the robustness of classifiers against such adversarial perturbations, it leaves classifiers sensitive to them on a non-negligible fraction of the inputs.We argue that there are two different kinds of adversarial perturbations: shared perturbations which fool a classifier on many inputs and singular perturbations which only fool the classifier on a small fraction of the data.We find that adversarial training increases the robustness of classifiers against shared perturbations.Moreover, it is particularly effective in removing universal perturbations, which can be seen as an extreme form of shared perturbations.Unfortunately, adversarial training does not consistently increase the robustness against singular perturbations on unseen inputs.However, we find that adversarial training decreases robustness of the remaining perturbations against image transformations such as changes to contrast and brightness or Gaussian blurring.It thus makes successful attacks on the classifier in the physical world less likely.Finally, we show that even singular perturbations can be easily detected and must thus exhibit generalizable patterns even though the perturbations are specific for certain inputs.", "target": ["Mostriamo empiricamente che l'adversarial training è efficace per rimuovere le perturbazioni universali, rende gli adversarial example meno robusti alle trasformazioni dell'immagine, e li lascia rilevabili per un approccio di rilevamento.", "Analizza l'adversarial training e il suo effetto sugli adversarial example universali così come sugli adversarial example standard (iterazione di base) e 
come l'adversarial training influisce sul rilevamento.", "Gli autori mostrano che l'adversarial training è efficace nel proteggere contro l'adversarial perturbation \"condivisa\", in particolare contro la perturbazione universale, ma meno efficace per proteggere contro le perturbazioni singolari."]} +{"source": "We address the challenging problem of efficient deep learning model deployment, where the goal is to design neural network architectures that can fit different hardware platform constraints.Most of the traditional approaches either manually design or use Neural Architecture Search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally expensive and unscalable.Our key idea is to decouple model training from architecture search to save the cost.To this end, we propose to train a once-for-all network (OFA) that supports diverse architectural settings (depth, width, kernel size, and resolution).Given a deployment scenario, we can then quickly get a specialized sub-network by selecting from the OFA network without additional training.To prevent interference between many sub-networks during training, we also propose a novel progressive shrinking algorithm, which can train a surprisingly large number of sub-networks ($> 10^{19}$) simultaneously.Extensive experiments on various hardware platforms (CPU, GPU, mCPU, mGPU, FPGA accelerator) show that OFA consistently outperforms SOTA NAS methods (up to 4.0% ImageNet top1 accuracy improvement over MobileNetV3) while reducing orders of magnitude GPU hours and $CO_2$ emission.In particular, OFA achieves a new SOTA 80.0% ImageNet top1 accuracy under the mobile setting ($<$600M FLOPs).Code and pre-trained models are released at https://github.com/mit-han-lab/once-for-all.", "target": ["Introduciamo tecniche per addestrare una singola rete una volta per tutte che si adatta a molte piattaforme hardware.", "Il metodo si traduce in una rete da cui si possono estrarre sottoreti per vari vincoli di risorse (latenza, memoria) che funzionano bene senza bisogno di retraining.", "Questo articolo cerca di affrontare il problema della ricerca delle migliori architetture per scenari distribuiti con vincoli di risorse specializzate con un metodo NAS basato sulla predizione."]} +{"source": "A deep generative model is a powerful method of learning a data distribution, which has achieved tremendous success in numerous scenarios.However, it is nontrivial for a single generative model to faithfully capture the distributions of the complex data such as images with complicate structures.In this paper, we propose a novel approach of cascaded boosting for boosting generative models, where meta-models (i.e., weak learners) are cascaded together to produce a stronger model.Any hidden variable meta-model can be leveraged as long as it can support the likelihood evaluation.We derive a decomposable variational lower bound of the boosted model, which allows each meta-model to be trained separately and greedily.We can further improve the learning power of the generative models by combing our cascaded boosting framework with the multiplicative boosting framework.", "target": ["Proporre un approccio per il boosting dei modelli generativi mediante modelli a cascata di variabili nascoste", "Questo articolo ha proposto un nuovo approccio di boosting in cascata per il boosting dei modelli generativi che permette ad ogni meta-modello di essere addestrato separatamente e in modo greedy."]} +{"source": "Contextualized representation 
models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks.Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline.We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena.We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.", "target": ["Abbiamo analizzato la struttura della frase in ELMo e nei contextual embedding model. Troviamo che i modelli esistenti codificano efficientemente la sintassi e mostrano prove di saper gestire long-range dependency, ma offrono solo piccoli miglioramenti sui task semantici.", "Propone il metodo \"edge probing\" e si concentra sulla relazione tra gli span piuttosto che sulle singole parole, permettendo agli autori di guardare alla costituzione sintattica, alle dipendenze, alle label delle entità e al semantic role labelling.", "Fornisce nuove intuizioni su ciò che viene catturato dai contextualized word embedding compilando una serie di task di \"edge probing\". "]} +{"source": "Deep reinforcement learning has succeeded in sophisticated games such as Atari, Go, etc.Real-world decision making, however, often requires reasoning with partial information extracted from complex visual observations.This paper presents Discriminative Particle Filter Reinforcement Learning (DPFRL), a new reinforcement learning framework for partial and complex observations.DPFRL encodes a differentiable particle filter with learned transition and observation models in a neural network, which allows for reasoning with partial observations over multiple time steps.While a standard particle filter relies on a generative observation model, DPFRL learns a discriminatively parameterized model that is training directly for decision making.We show that the discriminative parameterization results in significantly improved performance, especially for tasks with complex visual observations, because it circumvents the difficulty of modelling observations explicitly.In most cases, DPFRL outperforms state-of-the-art POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark that we introduce.We further show that DPFRL performs well for visual navigation with real-world data.", "target": ["Introduciamo DPFRL, un framework per il reinforcement learning sotto osservazioni parziali e complesse con un fully differentiable discriminative particle filter", "Introduce idee per il training di agenti DLR con variabili di stato latenti, modellate come una distribuzione di belief, in modo che possano gestire ambienti parzialmente osservati.", "Questo articolo introduce un metodo di principio per POMDP RL: Discriminative Particle Filter Reinforcement Learning che permette di ragionare con osservazioni parziali su più time step, raggiungendo lo stato dell'arte sui benchmark."]} +{"source": "Extending models with auxiliary latent variables is a well-known technique to in-crease model expressivity.Bachman & Precup (2015); Naesseth et al. (2018); Cremer et al. 
(2017); Domke & Sheldon (2018) show that Importance Weighted Autoencoders (IWAE) (Burda et al., 2015) can be viewed as extending the variational family with auxiliary latent variables.Similarly, we show that this view encompasses many of the recent developments in variational bounds (Maddisonet al., 2017; Naesseth et al., 2018; Le et al., 2017; Yin & Zhou, 2018; Molchanovet al., 2018; Sobolev & Vetrov, 2018).The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model.We develop a generative model analogous to the IWAE bound and empirically show that it outperforms the recently proposed Learned Accept/Reject Sampling algorithm (Bauer & Mnih, 2018), while being substantially easier to implement.Furthermore, we show that this generative process provides new insights on ranking Noise Contrastive Estimation (Jozefowicz et al.,2016; Ma & Collins, 2018) and Contrastive Predictive Coding (Oord et al., 2018).", "target": ["Gli obiettivi di Monte Carlo sono analizzati utilizzando l'inferenza variazionale delle variabili ausiliarie, ottenendo una nuova analisi di CPC e NCE così come un nuovo modello generativo.", "Propone una visione diversa sul miglioramento dei limiti variazionali con modelli ausiliari di variabili latenti ed esplora l'uso di questi modelli nel modello generativo."]} +{"source": "Stochastic Gradient Descent or SGD is the most popular optimization algorithm for large-scale problems.SGD estimates the gradient by uniform sampling with sample size one.There have been several other works that suggest faster epoch wise convergence by using weighted non-uniform sampling for better gradient estimates.Unfortunately, the per-iteration cost of maintaining this adaptive distribution for gradient estimation is more than calculating the full gradient.As a result, the false impression of faster convergence in iterations leads to slower convergence in time, which we call a chicken-and-egg loop.In this paper, we break this barrier by providing the first demonstration of a sampling scheme, which leads to superior gradient estimation, while keeping the sampling cost per iteration similar to that of the uniform sampling.Such an algorithm is possible due to the sampling view of Locality Sensitive Hashing (LSH), which came to light recently.As a consequence of superior and fast estimation, we reduce the running time of all existing gradient descent algorithms.We demonstrate the benefits of our proposal on both SGD and AdaGrad.", "target": ["Miglioriamo l'esecuzione di tutti gli algoritmi di gradient descent esistenti.", "Gli autori propongono di campionare i gradienti stocastici da una funzione monotona proporzionale alla grandezza del gradiente utilizzando LSH.", "Considera SGD su un obiettivo a forma di una somma su esempi di una loss quadratica."]} +{"source": "In recent years we have made significant progress identifying computational principles that underlie neural function.While not yet complete, we have sufficient evidence that a synthesis of these ideas could result in an understanding of how neural computation emerges from a combination of innate dynamics and plasticity, and which could potentially be used to construct new AI technologies with unique capabilities.I discuss the relevant principles, the advantages they have for computation, and how they can benefit AI.Limitations of current AI are generally recognized, but fewer people are aware that we understand enough about the brain to immediately offer novel 
AI formulations.", "target": ["Le limitazioni dell'attuale IA sono generalmente riconosciute, ma meno persone sono consapevoli che comprendiamo abbastanza del cervello per offrire immediatamente nuove formulazioni di IA."]} +{"source": "Recent work has demonstrated how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments.We propose question-answering as a general paradigm to decode and understand the representations that such agents develop, applying our method to two recent approaches to predictive modeling – action-conditional CPC (Guo et al., 2018) and SimCore (Gregor et al., 2019).After training agents with these predictive objectives in a visually-rich, 3D environment with an assortment of objects, colors, shapes, and spatial configurations, we probe their internal state representations with a host of synthetic (English) questions, without backpropagating gradients from the question-answering decoder into the agent.The performance of different agents when probed in this way reveals that they learn to encode detailed, and seemingly compositional, information about objects, properties and spatial relations from their physical environment.Our approach is intuitive, i.e. humans can easily interpret the responses of the model as opposed to inspecting continuous vectors, and model-agnostic, i.e. applicable to any modeling approach.By revealing the implicit knowledge of objects, quantities, properties and relations acquired by agents as they learn, question-conditional agent probing can stimulate the design and development of stronger predictive learning objectives.", "target": ["Utilizziamo question answering per valutare quanta conoscenza dell'environment può essere appresa dagli agenti attraverso la predizione self-supervised .", "Propone QA come uno strumento per investigare ciò che gli agenti imparano nel mondo, sostenendo che questo è un metodo intuitivo per gli umani che permette una complessità arbitraria.", "Gli autori propongono un framework per valutare le rappresentazioni costruite dai modelli predittivi che contengono informazioni sufficienti per rispondere alle domande sull'ambiente su cui sono addestrati, mostrando che quelli di SimCore contenevano informazioni sufficienti perché la LSTM possa rispondere accuratamente alle domande."]} +{"source": "In most real-world scenarios, training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion.In this paper, we explore a novel yet simple way to alleviate this issue via synthesizing less-frequent classes with adversarial examples of other classes.Surprisingly, we found this counter-intuitive method can effectively learn generalizable features of minority classes by transferring and leveraging the diversity of the majority information.Our experimental results on various types of class-imbalanced datasets in image classification and natural language processing show that the proposed method not only improves the generalization of minority classes significantly compared to other re-sampling or re-weighting methods, but also surpasses other methods of state-of-art level for the class-imbalanced classification.", "target": ["Sviluppiamo un nuovo metodo per la classificazione imbalanced usando adversarial example", "Propone un nuovo obiettivo di ottimizzazione che genera sample sintetici sovracampionando le classi di maggioranza invece di quelle di minoranza, risolvendo il problema 
dell'overfitting alle classi di minoranza.", "Gli autori propongono di affrontare la classificazione imbalanced usando metodi di resampling, mostrando che gli adversarial example nella classe minoritaria aiuterebbero ad addestrare un nuovo modello che generalizza meglio."]} +{"source": "Active matter consists of active agents which transform energy extracted from surroundings into momentum, producing a variety of collective phenomena.A model, synthetic active system composed of microtubule polymers driven by protein motors spontaneously forms a liquid-crystalline nematic phase.Extensile stress created by the protein motors precipitates continuous buckling and folding of the microtubules creating motile topological defects and turbulent fluid flows.Defect motion is determined by the rheological properties of the material; however, these remain largely unquantified.Measuring defects dynamics can yield fundamental insights into active nematics, a class of materials that include bacterial films and animal cells.Current methods for defect detection lack robustness and precision, and require fine-tuning for datasets with different visual quality. In this study, we applied Deep Learning to train a defect detector to automatically analyze microscopy videos of the microtubule active nematic. Experimental results indicate that our method is robust and accurate.It is expected to significantly increase the amount of video data that can be processed.", "target": ["Un'interessante applicazione della CNN negli esperimenti di fisica della materia soft condensed.", "Gli autori dimostrano che un approccio di deep learning offre un miglioramento sia nell'accuratezza dell'identificazione che al tasso di identificazione dei difetti dei cristalli liquidi nematici.", "Applicare un modello neurale ben noto (YOLO) per rilevare le bounding box degli oggetti nelle immagini."]} +{"source": "In this work we study locality and compositionality in the context of learning representations for Zero Shot Learning (ZSL). In order to well-isolate the importance of these properties in learned representations, we impose the additional constraint that, differently from most recent work in ZSL, no pre-training on different datasets (e.g. ImageNet) is performed.The results of our experiment show how locality, in terms of small parts of the input, and compositionality, i.e. 
how well can the learned representations be expressed as a function of a smaller vocabulary, are both deeply related to generalization and motivate the focus on more local-aware models in future research directions for representation learning.", "target": ["Un'analisi degli effetti della compositività e della località sul representation learning per lo zero-shot learning.", "Propone un framework di valutazione per ZSL in cui il modello non può essere pretrained ed invece i parametri del modello sono inizializzati in modo casuale per una migliore comprensione di ciò che accade in ZSL."]} +{"source": "It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples.In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear.Here we show that distributions of logit differences have a universal functional form.This functional form is independent of architecture, dataset, and training protocol; nor does it change during training.This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation.We show that this universality holds for a broad range of datasets (MNIST, CIFAR10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD).Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness.Finally, we study the effect of network architectures on adversarial sensitivity.To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10.Our resulting architecture is more robust to white \\emph{and} black box attacks compared to previous attempts.", "target": ["L'adversarial error ha una forma simile alla power-law per tutti i dataset e i modelli studiati, e l'architettura ha un ruolo importante."]} +{"source": "Reinforcement learning (RL) has led to increasingly complex looking behavior in recent years.However, such complexity can be misleading and hides over-fitting.We find that visual representations may be a useful metric of complexity, and both correlates well objective optimization and causally effects reward optimization.We then propose curious representation learning (CRL) which allows us to use better visual representation learning algorithms to correspondingly increase visual representation in policy through an intrinsic objective on both simulated environments and transfer to real images.Finally, we show better visual representations induced by CRL allows us to obtain better performance on Atari without any reward than other curiosity objectives.", "target": ["Presentiamo una formulazione della curiosity come un problema di visual representation learning e mostriamo che ottiene buone rappresentazioni visive negli agenti.", "Questo articolo formula il training RL basato sulla curiosity come apprendimento di un modello di visual representation, sostenendo che concentrandosi su una migliore LR e massimizzando la loss del modello per le scene nuove si otterrà una migliore performance complessiva."]} +{"source": "This paper introduces the task of semantic instance completion: from an incomplete RGB-D scan of a scene, we aim to detect the individual object instances comprising the 
scene and infer their complete object geometry.This enables a semantically meaningful decomposition of a scanned scene into individual, complete 3D objects, including hidden and unobserved object parts.This will open up new possibilities for interactions with object in a scene, for instance for virtual or robotic agents.To address this task, we propose 3D-SIC, a new data-driven approach that jointly detects object instances and predicts their completed geometry.The core idea of 3D-SIC is a novel end-to-end 3D neural network architecture that leverages joint color and geometry feature learning.The fully-convolutional nature of our 3D network enables efficient inference of semantic instance completion for 3D scans at scale of large indoor environments in a single forward pass.In a series evaluation, we evaluate on both real and synthetic scan benchmark data, where we outperform state-of-the-art approaches by over 15 in mAP@0.5 on ScanNet, and over 18 in mAP@0.5 on SUNCG.", "target": ["Da una scansione RGB-D incompleta di una scena, miriamo a rilevare le istanze individuali degli oggetti che compongono la scena e a dedurre la loro geometria completa.", "Propone una CNN 3D end-to-end che combina feature di colore e feature 3D per prevedere la struttura 3D mancante di una scena da scansioni RGB-D.", "Gli autori propongono una nuova rete di convoluzione 3D end-to-end che predice il completamento dell'istanza semantica 3D come bounding box dell'oggetto, le label di classe e la geometria completa dell'oggetto."]} +{"source": "Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter.Here we tackle the more generic problem of semantic style transfer: given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains.We introduce XGAN (\"Cross-GAN\"), a dual adversarial autoencoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. 
We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the model to preserve semantics in the learned embedding space.We report promising qualitative results for the task of face-to-cartoon translation.The cartoon dataset we collected for this purpose will also be released as a new benchmark for semantic style transfer.", "target": ["XGAN è un modello unsupervised per la traduzione image-to-image a livello di feature applicato a problemi di semantic style transfer come il face-to-cartoon task, per il quale introduciamo un nuovo dataset.", "Questo articolo propone un nuovo modello basato su GAN per la image-to-image translation non accoppiata simile a DTN"]} +{"source": "Training neural networks on large datasets can be accelerated by distributing the workload over a network of machines.As datasets grow ever larger, networks of hundreds or thousands of machines become economically viable.The time cost of communicating gradients limits the effectiveness of using such large machine counts, as may the increased chance of network faults.We explore a particularly simple algorithm for robust, communication-efficient learning---signSGD.Workers transmit only the sign of their gradient vector to a server, and the overall update is decided by a majority vote.This algorithm uses 32x less communication per iteration than full-precision, distributed SGD.Under natural conditions verified by experiment, we prove that signSGD converges in the large and mini-batch settings, establishing convergence for a parameter regime of Adam as a byproduct.Aggregating sign gradients by majority vote means that no individual worker has too much power.We prove that unlike SGD, majority vote is robust when up to 50% of workers behave adversarially.The class of adversaries we consider includes as special cases those that invert or randomise their gradient estimate.On the practical side, we built our distributed training system in Pytorch.Benchmarking against the state of the art collective communications library (NCCL), our framework---with the parameter server housed entirely on one machine---led to a 25% reduction in time for training resnet50 on Imagenet when using 15 AWS p3.2xlarge machines.", "target": ["I lavoratori inviano i segni del gradiente al server, e l'aggiornamento viene deciso secondo la maggioranza. 
Mostriamo che questo algoritmo è convergente, efficiente nella comunicazione e tollerante agli errori, sia in teoria che in pratica.", "Presenta un'implementazione distribuita di signSGD con voto di maggioranza come aggregazione."]} +{"source": "Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells.One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drugs applied at different dosages can be quantified.The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images.An important known issue for such methods is separating relevant biological signal from nuisance variation.For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for cells from a different week, despite having identical drug compounds applied in both cases.In this case, the particular batch a set of experiments were conducted in constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information (e.g. drug effects).We develop a general framework for adjusting the image embeddings in order to `forget' domain-specific information while preserving relevant biological information.To do this, we minimize a loss function based on distances between marginal distributions (such as the Wasserstein distance) of embeddings across domains for each replicated treatment.For the dataset presented, the replicated treatment is the negative control.We find that for our transformed embeddings (1) the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal (2) less domain-specific information is present.", "target": ["Correggiamo la nuisance variation per image embedding in diversi domini, conservando solo le informazioni rilevanti.", "Discute un metodo per regolare gli image embedding al fine di separare la variazione tecnica dal segnale biologico.", "Gli autori presentano un metodo per rimuovere l'informazione specifica del dominio preservando l'informazione biologica rilevante, addestrando una rete che minimizza la distanza di Wasserstein tra le distribuzioni."]} +{"source": "This paper presents a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size.MINE is back-propable and we prove that it is strongly consistent.We illustrate a handful of applications in which MINE is successfully applied to enhance the property of generative models in both unsupervised and supervised settings.We apply our framework to estimate the information bottleneck, and apply it in tasks related to supervised classification problems.Our results demonstrate substantial added flexibility and improvement in these settings.", "target": ["Uno stimatore di informazione mutua scalabile per dimensione e sample size."]} +{"source": "Reinforcement learning methods have recently achieved impressive results on a wide range of control problems.However, especially with complex inputs, they still require an extensive amount of training data in order to converge to a meaningful solution.This limitation largely prohibits their usage for complex input spaces such as video signals, and it is still impossible to use it for a number of complex problems in a 
real-world environment, including many of those for video based control.Supervised learning, on the contrary, is capable of learning on a relatively small number of samples; however, it does not take into account reward-based control policies and is not capable of providing independent control policies. In this article we propose a model-free control method, which uses a combination of reinforcement and supervised learning for autonomous control and paves the way towards policy based control in real world environments.We use SpeedDreams/TORCS video game to demonstrate that our approach requires far fewer samples (hundreds of thousands against millions or tens of millions) compared to the state-of-the-art reinforcement learning techniques on similar data, and at the same time overcomes both supervised and reinforcement learning approaches in terms of quality.Additionally, we demonstrate the applicability of the method to MuJoCo control problems.", "target": ["La nuova combinazione di reinforcement e supervised learning diminuisce drasticamente il numero di sample richiesti per il training su video", "Questo articolo propone di sfruttare i dati controllati annotati per accelerare il reinforcement learning di una policy di controllo"]} +{"source": "A typical experiment to study cognitive function is to train animals to perform tasks, while the researcher records the electrical activity of the animals' neurons.The main obstacle faced, when using this type of electrophysiological experiment to uncover the circuit mechanisms underlying complex behaviors, is our incomplete access to relevant circuits in the brain.One promising approach is to model neural circuits using an artificial neural network (ANN), which can provide complete access to the “neural circuits” responsible for a behavior.More recently, reinforcement learning models have been adopted to understand the functions of cortico-basal ganglia circuits as reward-based learning has been found in the mammalian brain.In this paper, we propose a Biologically-plausible Actor-Critic with Episodic Memory (B-ACEM) framework to model a prefrontal cortex-basal ganglia-hippocampus (PFC-BG) circuit, which is verified to capture the behavioral findings from a well-known perceptual decision-making task, i.e., random dots motion discrimination.This B-ACEM framework links neural computation to behaviors, on which we can explore how episodic memory should be considered to govern future decision.Experiments are conducted using different settings of the episodic memory and results show that all patterns of episodic memories can speed up learning.In particular, salient events are prioritized to propagate reward information and guide decisions.Our B-ACEM framework and the built-on experiments give inspirations to both designs for more standard decision-making models in biological system and a more biologically-plausible ANN.", "target": ["Apprendimento veloce attraverso la memoria episodica verificata da un framework biologicamente plausibile per il prefrontal cortex-basal ganglia-hippocampus (PFC-BG) circuit."]} +{"source": "Understanding the representational power of Deep Neural Networks (DNNs) and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute, has been an important yet challenging question in deep learning and approximation theory.In a seminal paper, Telgarsky highlighted the benefits of depth by presenting a family of functions (based on simple triangular waves) for which DNNs achieve 
zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error.Even though Telgarsky’s work reveals the limitations of shallow neural networks, it doesn’t inform us on why these functions are difficult to represent and in fact he states it as a tantalizing open question to characterize those functions that cannot be well-approximated by smaller depths.In this work, we point to a new connection between DNNs expressivity and Sharkovsky’s Theorem from dynamical systems, that enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points (a fixed point is a point of period 1).Motivated by our observation that the triangle waves used in Telgarsky’s work contain points of period 3 – a period that is special in that it implies chaotic behaviour based on the celebrated result by Li-Yorke – we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth.Technically, the crux of our approach is based on an eigenvalue analysis of the dynamical systems associated with such functions.", "target": ["In questo lavoro indichiamo una nuova connessione tra l'espressività delle DNN e il teorema di Sharkovsky dei sistemi dinamici, che ci permette di caratterizzare il trade-off tra profondità e larghezza delle reti ReLU", "Mostra come la potenza espressiva di NN dipende dalla sua profondità e larghezza, approfondendo la comprensione del beneficio delle deep neural network per rappresentare certe classi di funzioni.", "Gli autori derivano condizioni di trade-off depth-width per quando le reti relu sono in grado di rappresentare funzioni periodiche usando l'analisi dei sistemi dinamici."]} +{"source": "We investigate low-bit quantization to reduce computational cost of deep neural network (DNN) based keyword spotting (KWS).We propose approaches to further reduce quantization bits via integrating quantization into keyword spotting model training, which we refer to as quantization-aware training.Our experimental results on large dataset indicate that quantization-aware training can recover performance models quantized to lower bits representations.By combining quantization-aware training and weight matrix factorization, we are able to significantly reduce model size and computation for small-footprint keyword spotting, while maintaining performance.", "target": ["Studiamo il training informato sulla quantizzazione nei keyword spotter low-bit quantized per ridurre il costo on-device del keyword spotting.", "Questa presentazione propone una combinazione di decomposizione low-rank e un approccio di quantizzazione per comprimere i modelli DNN per keyword spotting."]} +{"source": "Single-cell RNA-sequencing (scRNA-seq) is a powerful tool for analyzing biological systems.However, due to biological and technical noise, quantifying the effects of multiple experimental conditions presents an analytical challenge.To overcome this challenge, we developed MELD: Manifold Enhancement of Latent Dimensions.MELD leverages tools from graph signal processing to learn a latent dimension within the data scoring the prototypicality of each datapoint with respect to experimental or control conditions.We call this dimension the Enhanced Experimental Signal (EES).MELD learns the EES by filtering the noisy categorical experimental label in the graph frequency domain to recover a smooth signal with continuous 
values.This method can be used to identify signature genes that vary between conditions and identify which cell types are most affected by a given perturbation.We demonstrate the advantages of MELD analysis in two biological datasets, including T-cell activation in response to antibody-coated beads and treatment of human pancreatic islet cells with interferon gamma.", "target": ["Un nuovo framework di elaborazione dei graph signal per quantificare gli effetti delle perturbazioni sperimentali nei dati biomedici delle singole cellule.", "Questo articolo introduce diversi metodi per elaborare i risultati sperimentali sulle cellule biologiche e propone un algoritmo MELD che mappa le assegnazioni di gruppo hard in assegnazioni soft, permettendo di raggruppare i gruppi di cellule rilevanti."]} +{"source": "Models of user behavior are critical inputs in many prescriptive settings and can be viewed as decision rules that transform state information available to the user into actions.Gaussian processes (GPs), as well as nonlinear extensions thereof, provide a flexible framework to learn user models in conjunction with approximate Bayesian inference.However, the resulting models may not be interpretable in general.We propose decision-rule GPs (DRGPs) that apply GPs in a transformed space defined by decision rules that have immediate interpretability to practitioners.We illustrate this modeling tool on a real application and show that structural variational inference techniques can be used with DRGPs.We find that DRGPs outperform the direct use of GPs in terms of out-of-sample performance.", "target": ["Proponiamo una classe di modelli utente basati sull'utilizzo di processi gaussiani applicati a uno spazio trasformato definito da regole decisionali"]} +{"source": "While Bayesian optimization (BO) has achieved great success in optimizing expensive-to-evaluate black-box functions, especially tuning hyperparameters of neural networks, methods such as random search (Li et al., 2016) and multi-fidelity BO (e.g. Klein et al. (2017)) that exploit cheap approximations, e.g. training on a smaller training data or with fewer iterations, can outperform standard BO approaches that use only full-fidelity observations.In this paper, we propose a novel Bayesian optimization algorithm, the continuous-fidelity knowledge gradient (cfKG) method, that can be used when fidelity is controlled by one or more continuous settings such as training data size and the number of training iterations.cfKG characterizes the value of the information gained by sampling a point at a given fidelity, choosing to sample at the point and fidelity with the largest value per unit cost.Furthermore, cfKG can be generalized, following Wu et al. (2017), to settings where derivatives are available in the optimization process, e.g. 
large-scale kernel learning, and where more than one point can be evaluated simultaneously.Numerical experiments show that cfKG outperforms state-of-art algorithms when optimizing synthetic functions, tuning convolutional neural networks (CNNs) on CIFAR-10 and SVHN, and in large-scale kernel learning.", "target": ["Proponiamo un algoritmo di ottimizzazione bayesiana Bayes-optimal per la scelta degli iperparametri sfruttando approssimazioni economiche.", "Studia l'ottimizzazione degli iperparametri tramite l'ottimizzazione bayesiana, usando il framework del Knowledge Gradient e permettendo all'ottimizzatore bayesiano di fare tuning della fidelity rispetto al costo."]} +{"source": "Neural networks trained only to optimize for training accuracy can often be fooled by adversarial examples --- slightly perturbed inputs misclassified with high confidence.Verification of networks enables us to gauge their vulnerability to such adversarial examples.We formulate verification of piecewise-linear neural networks as a mixed integer program.On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art.We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available.The computational speedup allows us to verify properties on convolutional and residual networks with over 100,000 ReLUs --- several orders of magnitude more than networks previously verified by any complete verifier.In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded l-∞ norm ε=0.1: for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness to norm-bounded perturbations for the remainder.Across all robust training procedures and network architectures considered, and for both the MNIST and CIFAR-10 datasets, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.", "target": ["Verifichiamo in modo efficiente la robustezza dei modelli neurali deep con oltre 100.000 ReLU, certificando più sample dello stato dell'arte e trovando più adversarial example rispetto ad un forte first-order attack.", "Esegue un attento studio degli approcci di programmazione lineare intera mista per verificare la robustezza delle reti neurali alle adversarial perturbations e propone tre miglioramenti alle formulazioni MILP della verifica delle reti neurali."]} +{"source": "Uncertainty estimation is an essential step in the evaluation of the robustness for deep learning models in computer vision, especially when applied in risk-sensitive areas.However, most state-of-the-art deep learning models either fail to obtain uncertainty estimation or need significant modification (e.g., formulating a proper Bayesian treatment) to obtain it.None of the previous methods are able to take an arbitrary model off the shelf and generate uncertainty estimation without retraining or redesigning it.To address this gap, we perform the first systematic exploration into training-free uncertainty estimation. 
We propose three simple and scalable methods to analyze the variance of output from a trained network under tolerable perturbations: infer-transformation, infer-noise, and infer-dropout.They operate solely during inference, without the need to re-train, re-design, or fine-tune the model, as typically required by other state-of-the-art uncertainty estimation methods.Surprisingly, even without involving such perturbations in training, our methods produce comparable or even better uncertainty estimation when compared to other training-required state-of-the-art methods.Last but not least, we demonstrate that the uncertainty from our proposed methods can be used to improve the neural network training.", "target": ["Un insieme di metodi per ottenere la stima dell'incertezza di qualsiasi modello dato senza ri-progettazione, re-training o fine-tuning.", "Descrive diversi approcci per misurare l'incertezza nelle reti neurali arbitrarie quando c'è un'assenza di distorsione durante il training."]} +{"source": "Capturing spatiotemporal dynamics is an essential topic in video recognition.In this paper, we present learnable higher-order operation as a generic family of building blocks for capturing higher-order correlations from high dimensional input video space.We prove that several successful architectures for visual classification tasks are in the family of higher-order neural networks, theoretical and experimental analysis demonstrates their underlying mechanism is higher-order. On the task of video recognition, even using RGB only without fine-tuning with other video datasets, our higher-order models can achieve results on par with or better than the existing state-of-the-art methods on both Something-Something (V1 and V2) and Charades datasets.", "target": ["Operazione di ordine superiore proposta per context learning", "Propone un nuovo blocco di convoluzione 3D che effettua la convoluzione tra l'input video e il suo contesto, basato sull'ipotesi che il contesto rilevante sia presente intorno all'oggetto dell'immagine."]} +{"source": "Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters.To understand consistency regularization, we conceptually explore how loss geometry interacts with training procedures.The consistency loss dramatically improves generalization performance over supervised-only training; however, we show that SGD struggles to converge on the consistency loss and continues to make large steps that lead to changes in predictions on the test data.Motivated by these observations, we propose to train consistency-based methods with Stochastic Weight Averaging (SWA), a recent approach which averages weights along the trajectory of SGD with a modified learning rate schedule.We also propose fast-SWA, which further accelerates convergence by averaging multiple points within each cycle of a cyclical learning rate schedule.With weight averaging, we achieve the best known semi-supervised results on CIFAR-10 and CIFAR-100, over many different quantities of labeled training data.For example, we achieve 5.0% error on CIFAR-10 with only 4000 labels, compared to the previous best result in the literature of 6.3%.", "target": ["I modelli basati sulla consistency per l'apprendimento semi-supervised non convergono verso un singolo punto, ma continuano ad esplorare un insieme vario di soluzioni plausibili sul perimetro di una regione piatta. 
Il weight averaging aiuta a migliorare le prestazioni di generalizzazione.", "L'articolo propone di applicare lo Stochastic Weight Averaging al contesto dell'apprendimento semi-supervised, sostenendo che i modelli MT/Pi semi-supervisionati sono particolarmente adatti a SWA e propone SWA veloce per accelerare il training."]} +{"source": "In this paper, we find that by designing a novel loss function entitled, ''tracking loss'', Convolutional Neural Network (CNN) based object detectors can be successfully converted to well-performed visual trackers without any extra computational cost.This property is preferable to visual tracking where annotated video sequences for training are always absent, because rich features learned by detectors from still images could be utilized by dynamic trackers.It also avoids extra machinery such as feature engineering and feature aggregation proposed in previous studies.Tracking loss achieves this property by exploiting the internal structure of feature maps within the detection network and treating different feature points discriminatively.Such structure allows us to simultaneously consider discrimination quality and bounding box accuracy which is found to be crucial to the success.We also propose a network compression method to accelerate tracking speed without performance reduction.That also verifies tracking loss will remain highly effective even if the network is drastically compressed.Furthermore, if we employ a carefully designed tracking loss ensemble, the tracker would be much more robust and accurate.Evaluation results show that our trackers (including the ensemble tracker and two baseline trackers), outperform all state-of-the-art methods on VOT 2016 Challenge in terms of Expected Average Overlap (EAO) and robustness.We will make the code publicly available.", "target": ["Abbiamo convertito con successo un popolare rilevatore RPN in un tracker ben performante dal punto di vista della funzione di loss."]} +{"source": "We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code.The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated.In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program.Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs.Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete.Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space.We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs.Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.", "target": ["Un'architettura neurale per 
segnare e classificare i candidati alla riparazione del programma per eseguire la riparazione semantica del programma in modo statico senza accesso agli unit test.", "Presenta un'architettura di rete neurale composta dalle parti share, specialize e compete per riparare il codice in quattro casi."]} +{"source": "Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019).Such a dilemma is shown to be rooted in the inherently higher sample complexity (Schmidt et al., 2018) and/or model capacity (Nakkiran, 2019), for learning a high-accuracy and robust classifier.In view of that, given a classification task, growing the model capacity appears to help draw a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications.Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins?This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a “sweet point\" in co-optimizing model accuracy, robustness, and efficiency.Our proposed solution, dubbed Robust Dynamic Inference Networks (RDI-Nets), allows for each input (either clean or adversarial) to adaptively choose one of the multiple output layers (early branches or the final one) to output its prediction.That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematic investigation.We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over 30% computational savings, compared to the defended original models.", "target": ["È possibile co-progettare l'accuratezza, la robustezza e l'efficienza dei modelli per raggiungere un triplo vantaggio? 
Sì!", "Sfrutta l'input-adaptive multiple early-exits per il campo dell'attacco e della difesa adversarial, riducendo la complessità media dell'inferenza senza entrare in conflitto con l'ipotesi di maggiore capacità."]} +{"source": "Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret.Especially, little is known about how they represent language in their intermediate layers.In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns.In order to quantitatively analyze such intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text.We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.", "target": ["Mostriamo che le singole unità nelle rappresentazioni CNN apprese in task NLP sono selettivamente reattive a specifici concetti del linguaggio naturale.", "Utilizza le unità grammaticali del linguaggio naturale che conservano i significati per dimostrare che le unità delle CNN deep apprese in task NLP potrebbero agire come un rilevatore di concetti in linguaggio naturale."]} +{"source": "We study the problem of building models that disentangle independent factors of variation.Such models encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis.As data we use a weakly labeled training set, where labels indicate what single factor has changed between two data samples, although the relative value of the change is unknown.This labeling is of particular interest as it may be readily available without annotation costs.We introduce an autoencoder model and train it through constraints on image pairs and triplets.We show the role of feature dimensionality and adversarial training theoretically and experimentally.We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used.The numerical value of a factor has different meaning in different reference frames.When the reference depends on other factors, transferring that factor becomes ambiguous.We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but show also cases when the reference ambiguity occurs.", "target": ["Si tratta di un documento prevalentemente teorico che descrive le sfide nel separare i fattori di variazione, utilizzando autoencoder e GAN.", "Questo articolo considera la separazione dei fattori di variazione nelle immagini, mostra che in generale, senza ulteriori presupposti, non si possono distinguere due diversi fattori di variazione, e suggerisce una nuova architettura AE+GAN per cercare di distinguere i fattori di variazione.", "Questo articolo studia le sfide della separazione dei fattori indipendenti di variazione sotto dati debolmente annotati e introduce il termine ambiguità di riferimento per la mappatura dei punti dati."]} +{"source": "In information retrieval, learning to rank constructs a machine-based ranking model which given a query, sorts the search results by their degree of relevance or importance 
to the query.Neural networks have been successfully applied to this problem, and in this paper, we propose an attention-based deep neural network which better incorporates different embeddings of the queries and search results with an attention-based mechanism.This model also applies a decoder mechanism to learn the ranks of the search results in a listwise fashion.The embeddings are trained with convolutional neural networks or the word2vec model.We demonstrate the performance of this model with image retrieval and text querying data sets.", "target": ["learning to rank con diversi embedding e attention", "Propone di usare l'attention per combinare rappresentazioni di input multipli sia per la query che per i risultati della ricerca nel task di learning to rank ."]} +{"source": "Computational neuroscience aims to fit reliable models of in vivo neural activity and interpret them as abstract computations.Recent work has shown that functional diversity of neurons may be limited to that of relatively few cell types; other work has shown that incorporating constraints into artificial neural networks (ANNs) can improve their ability to mimic neural data.Here we develop an algorithm that takes as input recordings of neural activity and returns clusters of neurons by cell type and models of neural activity constrained by these clusters.The resulting models are both more predictive and more interpretable, revealing the contributions of functional cell types to neural computation and ultimately informing the design of future ANNs.", "target": ["Abbiamo sviluppato un algoritmo che prende come input registrazioni di attività neurale e restituisce cluster di neuroni per tipo di cellule e modelli di attività neurale vincolati da questi cluster."]} +{"source": "Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs.In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving.Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm).As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best-suited for such objectives, and validate this claim empirically.We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks---showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.", "target": ["Supervisioniamo le graph neural network per imitare gli output intermediate e step-wise dei classici algoritmi su grafo, recuperando intuizioni molto favorevoli.", "Suggerisce il training di reti neurali per imitare gli algoritmi su grafo imparando primitive e subroutine al posto dell'output finale."]} +{"source": "Prospection is an important part of how humans come up with new task plans, but has not been explored in depth in robotics.Predicting multiple task-level is a challenging problem that involves capturing both task semantics and continuous variability over the state of the world.Ideally, we would combine the ability of machine learning to leverage big data for learning the semantics of a task, 
while using techniques from task planning to reliably generalize to new environment.In this work, we propose a method for learning a model encoding just such a representation for task planning.We learn a neural net that encodes the k most likely outcomes from high level actions from a given world.Our approach creates comprehensible task plans that allow us to predict changes to the environment many time steps into the future.We demonstrate this approach via application to a stacking task in a cluttered environment, where the robot must select between different colored blocks while avoiding obstacles, in order to perform a task.We also show results on a simple navigation task.Our algorithm generates realistic image and pose predictions at multiple points in a given task.", "target": ["Descriviamo un'architettura per generare diverse ipotesi di obiettivi intermedi durante i task di manipolazione robotica.", "Valuta la qualità di un modello predittivo generativo proposto per generare piani di esecuzione dei robot.", "Questo articolo propone un metodo per imparare una funzione di transizione di alto livello utile per la pianificazione dei task."]} +{"source": "Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks.While adaptive gradient methods theory is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear.In this paper, we aim at bridging this gap from both theoretical and empirical perspectives.First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed in~\\citep{daskalakis2017training} for solving a class of non-convex non-concave min-max problem and establish $O(\\epsilon^{-4})$ complexity for finding $\\epsilon$-first-order stationary point, in which the algorithm only requires invoking one stochastic first-order oracle while enjoying state-of-the-art iteration complexity achieved by stochastic extragradient method by~\\citep{iusem2017extragradient}.Then we propose an adaptive variant of OSG named Optimistic Adagrad (OAdagrad) and reveal an \\emph{improved} adaptive complexity $\\widetilde{O}\\left(\\epsilon^{-\\frac{2}{1-\\alpha}}\\right)$~\\footnote{Here $\\widetilde{O}(\\cdot)$ compresses a logarithmic factor of $\\epsilon$.}, where $\\alpha$ characterizes the growth rate of the cumulative stochastic gradient and $0\\leq \\alpha\\leq 1/2$.To the best of our knowledge, this is the first work for establishing adaptive complexity in non-convex non-concave min-max optimization.Empirically, our experiments show that indeed adaptive gradient algorithms outperform their non-adaptive counterparts in GAN training.Moreover, this observation can be explained by the slow growth rate of the cumulative stochastic gradient, as observed empirically.", "target": ["Questo articolo fornisce un'analisi innovativa degli algoritmi adattivi per il gradiente per risolvere problemi di min-max non concavi come GAN, e spiega il motivo per cui i metodi di gradiente adattivi superano le loro controparti non adattive attraverso studi empirici.", "Sviluppa algoritmi per la soluzione di disuguaglianze variazionali in ambiente stocastico, proponendo una variazione del metodo dell'extragradiente."]} +{"source": "We consider the problem of unsupervised learning of a low dimensional, interpretable, latent state of a video containing a moving object.The problem of distilling dynamics from pixels has been extensively 
considered through the lens of graphical/state space models that exploit Markov structure for cheap computation and structured graphical model priors for enforcing interpretability on latent representations.We take a step towards extending these approaches by discarding the Markov structure; instead, repurposing the recently proposed Gaussian Process Prior Variational Autoencoder for learning sophisticated latent trajectories.We describe the model and perform experiments on a synthetic dataset and see that the model reliably reconstructs smooth dynamics exhibiting U-turns and loops.We also observe that this model may be trained without any beta-annealing or freeze-thaw of training parameters.Training is performed purely end-to-end on the unmodified evidence lower bound objective.This is in contrast to previous works, albeit for slightly different use cases, where application specific training tricks are often required.", "target": ["Rendiamo possibile l'apprendimento di traiettorie sofisticate di un oggetto puramente da pixel con un dataset giocattolo di video utilizzando un framework VAE con un prior a processo gaussiano."]} +{"source": "Dreams and our ability to recall them are among the most puzzling questions in sleep research.Specifically, putative differences in brain network dynamics between individuals with high versus low dream recall rates, are still poorly understood.In this study, we addressed this question as a classification problem where we applied deep convolutional networks (CNN) to sleep EEG recordings to predict whether subjects belonged to the high or low dream recall group (HDR and LDR resp.).Our model achieves significant accuracy levels across all the sleep stages, thereby indicating subtle signatures of dream recall in the sleep microstructure.We also visualized the feature space to inspect the subject-specificity of the learned features, thus ensuring that the network captured population level differences.Beyond being the first study to apply deep learning to sleep EEG in order to classify HDR and LDR, guided backpropagation allowed us to visualize the most discriminant features in each sleep stage.The significance of these findings and future directions are discussed.", "target": ["Indaghiamo la base neurale del dream recall usando una rete neurale convoluzionale e tecniche di visualizzazione delle feature, come tSNE e la backpropagation guidata."]} +{"source": "This paper considers multi-agent reinforcement learning (MARL) in networked system control.Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors.We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent.Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL.Based on experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control, an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols in both learning efficiency and control performance.", "target": ["Questo articolo propone una nuova formulazione e un nuovo protocollo di comunicazione per problemi di controllo multi-agente in rete", "Riguarda le N-MARL dove gli agenti aggiornano la loro policy basandosi 
solo sui messaggi dei nodi vicini, mostrando che l'introduzione di un fattore di sconto spaziale stabilizza l'apprendimento."]} +{"source": "Variational Bayesian Inference is a popular methodology for approximating posterior distributions over Bayesian neural network weights.Recent work developing this class of methods has explored ever richer parameterizations of the approximate posterior in the hope of improving performance.In contrast, here we share a curious experimental finding that suggests instead restricting the variational distribution to a more compact parameterization.For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibit strong low-rank structure after convergence.This means that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the models' performance.Furthermore, we find that such factorized parameterizations improve the signal-to-noise ratio of stochastic gradient estimates of the variational lower bound, resulting in faster convergence.", "target": ["Mean field VB usa il doppio dei parametri; vincoliamo i parametri di varianza in mean field VB senza alcuna loss in ELBO, guadagnando velocità e gradienti con varianza più bassa."]} +{"source": "Aspect extraction in online product reviews is a key task in sentiment analysis and opinion mining.Training supervised neural networks for aspect extraction is not possible when ground truth aspect labels are not available, while the unsupervised neural topic models fail to capture the particular aspects of interest.In this work, we propose a weakly supervised approach for training neural networks for aspect extraction in cases where only a small set of seed words, i.e., keywords that describe an aspect, are available.Our main contributions are as follows.First, we show that current weakly supervised networks fail to leverage the predictive power of the available seed words by comparing them to a simple bag-of-words classifier. 
Second, we propose a distillation approach for aspect extraction where the seed words are considered by the bag-of-words classifier (teacher) and distilled to the parameters of a neural network (student).Third, we show that regularization encourages the student to consider non-seed words for classification and, as a result, the student outperforms the teacher, which only considers the seed words.Finally, we empirically show that our proposed distillation approach outperforms (by up to 34.4% in F1 score) previous weakly supervised approaches for aspect extraction in six domains of Amazon product reviews.", "target": ["Sfruttiamo efficacemente alcune keyword come supervisione debole per addestrare le reti neurali per aspect extraction.", "Discute una variante di knowledge distillation che usa un \"teacher\" basato su un classificatore di bag-of-words con seed word e uno \"studente\" che è una rete neurale basata sull'embedding."]} +{"source": "Forming perceptual groups and individuating objects in visual scenes is an essential step towards visual intelligence.This ability is thought to arise in the brain from computations implemented by bottom-up, horizontal, and top-down connections between neurons.However, the relative contributions of these connections to perceptual grouping are poorly understood.We address this question by systematically evaluating neural network architectures featuring combinations of these connections on two synthetic visual tasks, which stress low-level \"Gestalt\" vs. high-level object cues for perceptual grouping.We show that increasing the difficulty of either task strains learning for networks that rely solely on bottom-up processing.Horizontal connections resolve this limitation on tasks with Gestalt cues by supporting incremental spatial propagation of activities, whereas top-down connections rescue learning on tasks with high-level object cues by modifying coarse predictions about the position of the target object.Our findings dissociate the computational roles of bottom-up, horizontal and top-down connectivity, and demonstrate how a model featuring all of these interactions can more flexibly learn to form perceptual groups.", "target": ["Le connessioni di feedback orizzontali e top-down sono responsabili delle strategie complementari di raggruppamento percettivo nei sistemi di visione biologica e ricorrente.", "Utilizzando le reti neurali come modello computazionale del cervello, esamina l'efficienza di diverse strategie per risolvere due visual challenge."]} +{"source": "Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images.However, their application in the audio domain has received limited attention,and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech.To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech.Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes.The discriminators analyse the audio both in terms of general realism, as well as how well the audio corresponds to the utterance that should be pronounced. 
To measure the performance of GAN-TTS, we employ both subjective human evaluation (MOS - Mean Opinion Score), as well as novel quantitative metrics (Fréchet DeepSpeech Distance and Kernel DeepSpeech Distance), which we find to be well correlated with MOS.We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to the state-of-the-art models, and unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator.Listen to GAN-TTS reading this abstract at http://tiny.cc/gantts.", "target": ["Introduciamo GAN-TTS, una Generative Adversarial Network per Text-to-Speech, che raggiunge il Mean Opinion Score (MOS) 4.2.", "Risolve il problema delle GAN nella sintesi delle forme d'onda grezze e comincia a colmare il divario di prestazioni esistente tra i modelli autoregressivi e le GAN per gli audio grezzi."]} +{"source": "This paper proposes a Pruning in Training (PiT) framework of learning to reduce the parameter size of networks.Different from existing works, our PiT framework employs the sparse penalties to train networks and thus help rank the importance of weights and filters.Our PiT algorithms can directly prune the network without any fine-tuning.The pruned networks can still achieve comparable performance to the original networks.In particular, we introduce the (Group) Lasso-type Penalty (L-P /GL-P), and (Group) Split LBI Penalty (S-P / GS-P) to regularize the networks, and a proposed pruning strategy is used to help prune the network.We conduct extensive experiments on MNIST, Cifar-10, and miniImageNet.The results validate the efficacy of our proposed methods.Remarkably, on the MNIST dataset, our PiT framework can save 17.5% parameter size of LeNet-5, which achieves 98.47% recognition accuracy.", "target": ["proponiamo un algoritmo di apprendimento per fare pruning della rete applicando penalty di sparsità della struttura", "Questo articolo introduce un approccio al pruning durante il training di una rete usando lasso e penalty split LBI"]} +{"source": "We first pose the Unsupervised Continual Learning (UCL) problem: learning salient representations from a non-stationary stream of unlabeled data in which the number of object classes varies with time.Given limited labeled data just before inference, those representations can also be associated with specific object types to perform classification.To solve the UCL problem, we propose an architecture that involves a single module, called Self-Taught Associative Memory (STAM), which loosely models the function of a cortical column in the mammalian brain.Hierarchies of STAM modules learn based on a combination of Hebbian learning, online clustering, detection of novel patterns and forgetting outliers, and top-down predictions.We illustrate the operation of STAMs in the context of learning handwritten digits in a continual manner with only 3-12 labeled examples per class.STAMs suggest a promising direction to solve the UCL problem without catastrophic forgetting.", "target": ["Introduciamo l'unsupervised continual learning (UCL) e un'architettura ispirata dalla neurologia che risolve il problema UCL.", "Propone l'uso di gerarchie di moduli STAM per risolvere il problema UCL, fornendo la prova che le rappresentazioni che i moduli imparano sono adatte alla classificazione few-shot."]} +{"source": "Recent advances have made it possible to create deep complex-valued neural networks.Despite this progress, the potential power of fully complex intermediate computations and 
representations has not yet been explored for many challenging learning problems.Building on recent advances, we propose a novel mechanism for extracting signals in the frequency domain.As a case study, we perform audio source separation in the Fourier domain.Our extraction mechanism could be regarded as a local ensembling method that combines a complex-valued convolutional version of Feature-Wise Linear Modulation (FiLM) and a signal averaging operation.We also introduce a new explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram.Using the Wall Street Journal Dataset, we compare our phase-aware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and proposed loss.When operating in the complex-valued frequency domain, our deep complex-valued network substantially outperforms its real-valued counterparts even with half the depth and a third of the parameters.Our proposed mechanism improves significantly deep complex-valued networks' performance and we demonstrate the usefulness of its regularizing effect.", "target": ["Nuovo metodo di estrazione del segnale nel dominio di Fourier", "Contribuisce a una versione con convoluzione a valori complessi della Feature-Wise Linear Modulation che permette l'ottimizzazione dei parametri e progetta una loss che tiene conto di modulo e fase."]} +{"source": "It is challenging to disentangle an object into two orthogonal spaces of content and style since each can influence the visual observation in a different and unpredictable way.It is rare for one to have access to a large number of data to help separate the influences.In this paper, we present a novel framework to learn this disentangled representation in a completely unsupervised manner.We address this problem in a two-branch Autoencoder framework.For the structural content branch, we project the latent factor into a soft structured point tensor and constrain it with losses derived from prior knowledge.This encourages the branch to distill geometry information.Another branch learns the complementary style information.The two branches form an effective framework that can disentangle object's content-style representation without any human annotation.We evaluate our approach on four image datasets, on which we demonstrate the superior disentanglement and visual analogy quality both in synthesized and real-world data.We are able to generate photo-realistic images with 256x256 resolution that are clearly disentangled in content and style.", "target": ["Presentiamo un nuovo framework per imparare la rappresentazione separata del contenuto e dello stile in modo completamente unsupervised.", "Propongono un modello basato sul framework dell'autoencoder per distinguere la rappresentazione di un oggetto, i risultati mostrano che il modello può produrre rappresentazioni che catturano il contenuto e lo stile."]} +{"source": "We develop the Y-learner for estimating heterogeneous treatment effects in experimental and observational studies.The Y-learner is designed to leverage the abilities of neural networks to optimize multiple objectives and continually update, which allows for better pooling of underlying feature information between treatment and control groups.We evaluate the Y-learner on three test problems: (1) A set of six simulated data benchmarks from the literature.(2) A real-world large-scale experiment on voter 
persuasion.(3) A task from the literature that estimates artificially generated treatment effects on MNIST digits.The Y-learner achieves state of the art results on two of the three tasks.On the MNIST task, it gets the second best results.", "target": ["Sviluppiamo una strategia di stima CATE che sfrutta alcune delle interessanti proprietà delle reti neurali.", "Mostra i miglioramenti su X-learner modellando la funzione di risposta al trattamento, la funzione di risposta al controllo e la mappatura dall'effetto di trattamento imputato all'effetto di trattamento medio condizionato tramite reti neurali.", "Gli autori propongono Y-learner per stimare l'effetto di trattamento medio condizionato (CATE), che aggiorna simultaneamente i parametri delle funzioni di risultato e lo stimatore CATE."]} +{"source": "With the rapid proliferation of IoT devices, our cyberspace is nowadays dominated by billions of low-cost computing nodes, which expose an unprecedented heterogeneity to our computing systems.Dynamic analysis, one of the most effective approaches to finding software bugs, has become paralyzed due to the lack of a generic emulator capable of running diverse previously-unseen firmware.In recent years, we have witnessed devastating security breaches targeting IoT devices.These security concerns have significantly hamstrung further evolution of IoT technology.In this work, we present Laelaps, a device emulator specifically designed to run diverse software on low-cost IoT devices.We do not encode into our emulator any specific information about a device.Instead, Laelaps infers the expected behavior of firmware via symbolic-execution-assisted peripheral emulation and generates proper inputs to steer concrete execution on the fly.This unique design feature makes Laelaps the first generic device emulator capable of running diverse firmware with no a priori knowledge about the target device.To demonstrate the capabilities of Laelaps, we deployed two popular dynamic analysis techniques---fuzzing testing and dynamic symbolic execution---on top of our emulator.We successfully identified both self-injected and real-world vulnerabilities.", "target": ["Esecuzione del firmware indipendente dal dispositivo"]} +{"source": "Deep neural models, such as convolutional and recurrent networks, achieve phenomenal results over spatial data such as images and text.However, when considering tabular data, gradient boosting of decision trees (GBDT) remains the method of choice.Aiming to bridge this gap, we propose \\emph{deep neural forests} (DNF)-- a novel architecture that combines elements from decision trees as well as dense residual connections. We present the results of an extensive empirical study in which we examine the performance of GBDTs, DNFs and (deep) fully-connected networks. 
These results indicate that DNFs achieve comparable results to GBDTs on tabular data, and open the door to end-to-end neural modeling of multi-modal data.To this end, we present a successful application of DNFs as part of a hybrid architecture for a multi-modal driving scene understanding classification task.", "target": ["Un'architettura per dati tabulari, che emula i rami degli alberi decisionali e usa una densa connettività residua", "Questo articolo propone la deep neural forest, un algoritmo che si rivolge ai dati tabulari e integra i punti forti del gradient boosting degli alberi decisionali.", "Una nuova architettura di rete neurale che imita il funzionamento delle foreste decisionali per affrontare il problema generale del training di modelli deep per i dati tabulari e mostrare un'efficacia alla pari con GBDT."]} +{"source": "Hyperparameter tuning is one of the most time-consuming workloads in deep learning.State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam, reduce this labor by adaptively tuning an individual learning rate for each variable.Recently researchers have shown renewed interest in simpler methods like momentum SGD as they may yield better results.Motivated by this trend, we ask: can simple adaptive methods, based on SGD perform as well or better?We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam.We then analyze its robustness to learning rate misspecification and objective curvature variation.Based on these insights, we design YellowFin, an automatic tuner for momentum and learning rate in SGD.YellowFin optionally uses a negative-feedback loop to compensate for the momentum dynamics in asynchronous settings on the fly.We empirically show YellowFin can converge in fewer iterations than Adam on ResNets and LSTMs for image recognition, language modeling and constituency parsing, with a speedup of up to $3.28$x in synchronous and up to $2.69$x in asynchronous settings.", "target": ["YellowFin è un ottimizzatore basato su SGD con adattabilità sia al momento che al learning rate.", "Propone un metodo per sintonizzare automaticamente il parametro momento nei metodi SGD con momento, che raggiunge risultati migliori e una velocità di convergenza veloce rispetto all'algoritmo state-of-the-art Adam."]} +{"source": "Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) 
can be subject to attacks using a wide variety of exploits.With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms.Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack.This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.", "target": ["Adversarial attack sullo spazio latente degli autoencoder variazionali per cambiare il significato semantico degli input", "Questo articolo riguarda la sicurezza e l'apprendimento automatico e propone un attacco man-in-the-middle che altera la codifica VAE dei dati di input in modo che l'output decodificato sia classificato erroneamente."]} +{"source": "Graph-based dependency parsing consists of two steps: first, an encoder produces a feature representation for each parsing substructure of the input sentence, which is then used to compute a score for the substructure; and second, a decoder finds the parse tree whose substructures have the largest total score.Over the past few years, powerful neural techniques have been introduced into the encoding step which substantially increases parsing accuracies.However, advanced decoding techniques, in particular high-order decoding, have seen a decline in usage.It is widely believed that contextualized features produced by neural encoders can help capture high-order decoding information and hence diminish the need for a high-order decoder.In this paper, we empirically evaluate the combinations of different neural and non-neural encoders with first- and second-order decoders and provide a comprehensive analysis about the effectiveness of these combinations with varied training data sizes.We find that: first, when there is large training data, a strong neural encoder with first-order decoding is sufficient to achieve high parsing accuracy and only slightly lags behind the combination of neural encoding and second-order decoding; second, with small training data, a non-neural encoder with a second-order decoder outperforms the other combinations in most cases.", "target": ["Uno studio empirico che esamina l'efficacia di diverse combinazioni encoder-decoder per il task di dependency parsing", "Analizza empiricamente vari encoder, decoder e le loro dipendenze per dependency parsing basato su grafi."]} +{"source": "Meta-learning will be crucial to creating lifelong, generalizable AI.In practice, however, it is hard to define the meta-training task distribution that is used to train meta-learners.If made too small, tasks are too similar for a model to meaningfully generalize.If made too large, generalization becomes incredibly difficult.We argue that both problems can be alleviated by introducing a teacher model that controls the sequence of tasks that a meta-learner is trained on.This teacher model is incentivized to start the student meta-learner on simple tasks then adaptively increase task difficulty in response to student progress.While this approach has been previously studied in curriculum generation, our main contribution is in extending it to meta-learning.", "target": ["Teacher che allena i meta-learner come gli umani"]} +{"source": "Using higher order knowledge to reduce training data has become a popular 
research topic.However, the ability for available methods to draw effective decision boundaries is still limited: when the training set is small, neural networks will be biased to certain labels.Based on this observation, we consider constraining output probability distribution as higher order domain knowledge.We design a novel algorithm that jointly optimizes output probability distribution on a clustered embedding space to make neural networks draw effective decision boundaries. While directly applying probability constraint is not effective, users need to provide additional very weak supervisions: mark some batches whose output distribution differs greatly from the target probability distribution.We use experiments to empirically prove that our model can converge to an accuracy higher than other state-of-the-art semi-supervised learning models with fewer high quality labeled training examples.", "target": ["Introduciamo un approccio di embedding space per vincolare la distribuzione di probabilità dell'output delle reti neurali.", "Questo articolo introduce un metodo per eseguire semi-supervised learning con deep neural network, e il modello raggiunge una precisione relativamente alta, data una piccola dimensione di training.", "Questo articolo incorpora la distribuzione delle label nel training del modello quando è disponibile un numero limitato di istanze di training, e propone due tecniche per gestire il problema della distribuzione biased delle label di output."]} +{"source": "We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus.We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. 
We also present an analysis showing that exposing the deep internals of the pretrained network is crucial, allowing downstream models to mix different types of semi-supervision signals.", "target": ["Introduciamo un nuovo tipo di rappresentazione deep contestualizzata delle parole che migliora significativamente lo stato dell'arte per una serie di task NLP impegnativi."]} +{"source": "This work addresses the long-standing problem of robust event localization in the presence of temporally misaligned labels in the training data.We propose a novel versatile loss function that generalizes a number of training regimes from standard fully-supervised cross-entropy to count-based weakly-supervised learning.Unlike classical models which are constrained to strictly fit the annotations during training, our soft localization learning approach relaxes the reliance on the exact position of labels instead.Training with this new loss function exhibits strong robustness to temporal misalignment of labels, thus alleviating the burden of precise annotation of temporal sequences.We demonstrate state-of-the-art performance against standard benchmarks in a number of challenging experiments and further show that robustness to label noise is not achieved at the expense of raw performance.", "target": ["Questo lavoro introduce una nuova funzione di loss per il training robusto di una DNN per localizzazione temporale in presenza di label non allineate.", "Una nuova loss per il training di modelli che prevedono dove si verificano gli eventi in una sequenza di training con label rumorose, confrontando le label con smoothing e la sequenza di predizione."]} +{"source": "The driving force behind deep networks is their ability to compactly represent rich classes of functions.The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to replicate functions of another.To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones.In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways.We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks.By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency.In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not.Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy.This leads us to believe that expressive efficiency may serve a key role in developing new tools for deep network design.", "target": ["Introduciamo la nozione di decomposizioni tensoriali miste e la usiamo per dimostrare che l'interconnessione di reti convoluzionali dilatate aumenta la loro potenza espressiva.", "Questo articolo prova teoricamente che l'interconnessione di reti con diverse dilatazioni può portare all'efficienza espressiva utilizzando la decomposizione tensoriale mista.", "Gli autori studiano le reti convoluzionali dilatate 
e mostrano che l'intreccio di due reti convoluzionali dilatate A e B in varie fasi è più efficiente dal punto di vista espressivo che non intrecciarle.", "Mostra che l'assunzione strutturale della WaveNet di un singolo albero binario perfetto ostacola le sue prestazioni e che le architetture simili alla WaveNet con strutture ad albero miste più complesse funzionano meglio."]} +{"source": "We apply multi-task learning to image classification tasks on MNIST-like datasets.MNIST dataset has been referred to as the {\\em drosophila} of machine learning and has been the testbed of many learning theories.The NotMNIST dataset and the FashionMNIST dataset have been created with the MNIST dataset as reference.In this work, we exploit these MNIST-like datasets for multi-task learning.The datasets are pooled together for learning the parameters of joint classification networks.Then the learned parameters are used as the initial parameters to retrain disjoint classification networks.The baseline recognition models are all-convolution neural networks.Without multi-task learning, the recognition accuracies for MNIST, NotMNIST and FashionMNIST are 99.56\\%, 97.22\\% and 94.32\\% respectively.With multi-task learning to pre-train the networks, the recognition accuracies are respectively 99.70\\%, 97.46\\% and 95.25\\%.The results re-affirm that the multi-task learning framework, even with data with different genres, does lead to significant improvement.", "target": ["il multi-task learning funziona", "Questo articolo presenta una rete neurale multi-task per la classificazione su dataset simili a MNIST"]} +{"source": "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network.To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization.This approach provides us with a broad and unifying view on much prior work on this topic.Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries.These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks.They also suggest robustness against a first-order adversary as a natural security guarantee.We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.", "target": ["Forniamo una rivisitazione basata sull'ottimizzazione della nozione di adversarial example, e sviluppiamo metodi che producono modelli che sono robusti contro una vasta gamma di avversari.", "Studia una formulazione minimax dell'apprendimento delle deep neural network per aumentare la loro robustezza, utilizzando la discesa del gradiente proiettata come principale avversario.", "Questo articolo propone di cercare di rendere le reti neurali resistenti alle loss adversarial attraverso il framework dei problemi del punto di sella."]} +{"source": "In recent years there has been a rapid increase in classification methods on graph structured data.Both in graph kernels and graph neural networks, one of the implicit assumptions of successful state-of-the-art models was that incorporating graph isomorphism features into the architecture leads to better empirical 
performance.However, as we discover in this work, commonly used data sets for graph classification have repeating instances which cause the problem of isomorphism bias, i.e. artificially increasing the accuracy of the models by memorizing target information from the training set.This prevents fair competition of the algorithms and raises a question of the validity of the obtained results.We analyze 54 data sets, previously extensively used for graph-related tasks, on the existence of isomorphism bias, give a set of recommendations to machine learning practitioners to properly set up their models, and open source new data sets for the future experiments.", "target": ["Molti dataset di classificazione dei grafi hanno duplicati, sollevando così domande sulle capacità di generalizzazione e sull'equo confronto dei modelli.", "Gli autori discutono il bias di isomorfismo nei dataset di grafi, l'effetto di overfitting nell'apprendimento delle reti quando le feature di isomorfismo dei grafi sono incorporate nel modello, teoricamente analogo agli effetti di leakage dei dati."]} +{"source": "Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently.However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy.We introduce a notion of conservatively extrapolated value functions, which provably lead to policies with self-correction.We design an algorithm Value Iteration with Negative Sampling (VINS) that practically learns such value functions with conservative extrapolation.We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks.We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior works in sample efficiency.", "target": ["Introduciamo una nozione di value function estrapolate in modo conservativo, che portano in modo dimostrabile a policy che possono auto-correggersi per rimanere vicine agli stati di dimostrazione, e le impariamo con una nuova tecnica di negative sampling.", "Un algoritmo chiamato value iteration con negative sampling per affrontare il problema del covariate shift nell'imitation learning."]} +{"source": "A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition.Learning such a structured world model from raw sensory data remains a challenge.As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs).C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure.We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network.This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process.We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation.Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.", "target": ["I world model strutturati contrastively-trained imparano rappresentazioni di stato 
orientate agli oggetti e un modello relazionale di un ambiente a partire da input di pixel grezzi.", "Gli autori superano il problema dell'utilizzo di loss basate sui pixel nella costruzione e nell'apprendimento di world model strutturati utilizzando uno spazio latente contrastive."]} +{"source": "Neural machine translation (NMT) models learn representations containing substantial linguistic information.However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons.We develop unsupervised methods for discovering important neurons in NMT models.Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision.We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena.Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.", "target": ["Metodi non supervisionati per trovare, analizzare e controllare i neuroni importanti nella NMT", "Questo lavoro propone di trovare neuroni \"significativi\" nei modelli di traduzione automatica neurale classificando in base alla correlazione tra coppie di modelli, diverse epoche o diversi dataset, e propone un meccanismo di controllo dei modelli."]} +{"source": "Computations for the softmax function in neural network models are expensive when the number of output classes is large.This can become a significant issue in both training and inference for such models.In this paper, we present Doubly Sparse Softmax (DS-Softmax), Sparse Mixture of Sparse Experts, to improve the efficiency for softmax inference.During training, our method learns a two-level class hierarchy by dividing the entire output class space into several partially overlapping experts.Each expert is responsible for a learned subset of the output class space and each output class only belongs to a small number of those experts.During inference, our method quickly locates the most probable expert to compute small-scale softmax.Our method is learning-based and requires no knowledge of the output class partition space a priori.We empirically evaluate our method on several real-world tasks and demonstrate that we can achieve significant computation reductions without loss of accuracy.", "target": ["Presentiamo un softmax doppiamente sparso, la miscela sparsa di esperti sparsi, per migliorare l'efficienza dell'inferenza softmax attraverso lo sfruttamento della gerarchia a due livelli di sovrapposizione.", "L'articolo propone la nuova implementazione dell'algoritmo softmax con due livelli gerarchici di sparsità che accelera l'operazione nel language modelling."]} +{"source": "Our work presents empirical evidence that layer rotation, i.e. 
the evolution across training of the cosine distance between each layer's weight vector and its initialization, constitutes an impressively consistent indicator of generalization performance.Compared to previously studied indicators of generalization, we show that layer rotation has the additional benefit of being easily monitored and controlled, as well as having a network-independent optimum: the training procedures during which all layers' weights reach a cosine distance of 1 from their initialization consistently outperform other configurations -by up to 20% test accuracy.Finally, our results also suggest that the study of layer rotation can provide a unified framework to explain the impact of weight decay and adaptive gradient methods on generalization.", "target": ["Questo articolo presenta prove empiriche che supportano la scoperta di un indicatore di generalizzazione: l'evoluzione durante il training della distanza coseno tra il vettore di peso di ogni layer e la sua inizializzazione."]} +{"source": "Models of code can learn distributed representations of a program's syntax and semantics to predict many non-trivial properties of a program.Recent state-of-the-art models leverage highly structured representations of programs, such as trees, graphs and paths therein (e.g. data-flow relations), which are precise and abundantly available for code.This provides a strong inductive bias towards semantically meaningful relations, yielding more generalizable representations than classical sequence-based models.Unfortunately, these models primarily rely on graph-based message passing to represent relations in code, which makes them de facto local due to the high cost of message-passing steps, quite in contrast to modern, global sequence-based models, such as the Transformer.In this work, we bridge this divide between global and structured models by introducing two new hybrid model families that are both global and incorporate structural bias: Graph Sandwiches, which wrap traditional (gated) graph message-passing layers in sequential message-passing layers; and Graph Relational Embedding Attention Transformers (GREAT for short), which bias traditional Transformers with relational information from graph edge types.By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.Starting with a graph-based model that already improves upon the prior state-of-the-art for this task by 20%, we show that our proposed hybrid models improve an additional 10-15%, while training both faster and using fewer parameters.", "target": ["I modelli di codice sorgente che combinano feature globali e strutturali imparano rappresentazioni più potenti dei programmi.", "Un nuovo metodo per modellare il codice sorgente per il task di riparazione dei bug usando un modello a sandwich come [RNN GNN RNN] che migliora significativamente la localizzazione e la precisione della riparazione."]} +{"source": "Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate.While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such 
approximate state-vector increments of Rosenblatt's (1962) continuous-time RNNs.iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD).We show that our method is computationally efficient overcoming overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation.We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks.", "target": ["Le incremental-RNN risolvono il problema dell'exploding/vanishing gradient aggiornando i vettori di stato in base alla differenza tra lo stato precedente e quello previsto da una ODE.", "Gli autori affrontano il problema della propagazione del segnale nelle reti neurali ricorrenti costruendo un sistema di attrazione per la transizione del segnale e controllando se converge ad un equilibrio."]} +{"source": "Recent empirical results on over-parameterized deep networks are marked by a striking absence of the classic U-shaped test error curve: test error keeps decreasing in wider networks.Researchers are actively working on bridging this discrepancy by proposing better complexity measures.Instead, we directly measure prediction bias and variance for four classification and regression tasks on modern deep networks.We find that both bias and variance can decrease as the number of parameters grows.Qualitatively, the phenomenon persists over a number of gradient-based optimizers.To better understand the role of optimization, we decompose the total variance into variance due to training set sampling and variance due to initialization.Variance due to initialization is significant in the under-parameterized regime.In the over-parameterized regime, total variance is much lower and dominated by variance due to sampling.We provide theoretical analysis in a simplified setting that is consistent with our empirical findings.", "target": ["Forniamo prove contro le affermazioni classiche sul bias-variance tradeoff e proponiamo una nuova decomposizione per la varianza."]} +{"source": "Real world images often contain large amounts of private / sensitive information that should be carefully protected without reducing their utilities.In this paper, we propose a privacy-preserving deep learning framework with a learnable obfuscator for the image classification task.Our framework consists of three models: learnable obfuscator, classifier and reconstructor.The learnable obfuscator is used to remove the sensitive information in the images and extract the feature maps from them.The reconstructor plays the role of an attacker, which tries to recover the image from the feature maps extracted by the obfuscator.In order to best protect users’ privacy in images, we design an adversarial training methodology for our framework to optimize the obfuscator.Through extensive evaluations on real world datasets, both the numerical metrics and the visualization results demonstrate that our framework is qualified to protect users’ privacy and achieve a relatively high accuracy on the image classification task.", "target": ["Abbiamo proposto un nuovo framework di classificazione delle immagini di deep learning che può sia classificare accuratamente le immagini che proteggere la privacy degli utenti.", "Questo articolo propone un framework che conserva le informazioni private nell'immagine e non compromette l'usabilità dell'immagine.", "Questo lavoro suggerisce l'uso di adversarial network per offuscare le immagini e 
quindi permettere di raccoglierle senza problemi di privacy per utilizzarle per il training di modelli di machine learning."]} +{"source": "Bitcoin is a virtual coinage system that enables users to trade virtually free of a central trusted authority.All transactions on the Bitcoin blockchain are publicly available for viewing, yet as Bitcoin is built mainly for security its original structure does not allow for direct analysis of address transactions. Existing analysis methods of the Bitcoin blockchain can be complicated, computationally expensive or inaccurate.We propose a computationally efficient model to analyze bitcoin blockchain addresses and allow for their use with existing machine learning algorithms.We compare our approach against Multi Level Sequence Learners (MLSLs), one of the best performing models on bitcoin address data.", "target": ["un modello 2vec per i grafi delle transazioni di criptovaluta", "L'articolo propone di usare un autoencoder, networkX e node2Vec per prevedere se un indirizzo Bitcoin diventerà vuoto dopo un anno, ma i risultati sono peggiori di una baseline esistente."]} +{"source": "Despite remarkable empirical success, the training dynamics of generative adversarial networks (GAN), which involves solving a minimax game using stochastic gradients, is still poorly understood.In this work, we analyze last-iterate convergence of simultaneous gradient descent (simGD) and its variants under the assumption of convex-concavity, guided by a continuous-time analysis with differential equations.First, we show that simGD, as is, converges with stochastic sub-gradients under strict convexity in the primal variable.Second, we generalize optimistic simGD to accommodate an optimism rate separate from the learning rate and show its convergence with full gradients.Finally, we present anchored simGD, a new method, and show convergence with stochastic subgradients.", "target": ["Prova di convergenza del metodo stocastico dei sub-gradienti e variazioni su problemi minimax convessi-concavi", "Un'analisi del simultaneous stochastic subgradient, del simultaneous gradient con optimism e del simultaneous gradient con anchoring nel contesto dei giochi concavi convessi minmax.", "Questo articolo analizza la dinamica dello stochastic gradient descent quando applicata a giochi convessi-concavi, così come la GD con optimism e un nuovo algoritmo GD ancorato che converge sotto ipotesi più deboli di SGD o SGD con optimism."]} +{"source": "Small spacecraft now have precise attitude control systems available commercially, allowing them to slew in 3 degrees of freedom, and capture images within short notice.When combined with appropriate software, this agility can significantly increase response rate, revisit time and coverage.In prior work, we have demonstrated an algorithmic framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile, small spacecraft in a constellation.The proposed schedule optimization would run at the ground station autonomously, and the resultant schedules uplinked to the spacecraft for execution.The algorithm is generalizable over small steerable spacecraft, control capability, sensor specs, imaging requirements, and regions of interest.In this article, we modify the algorithm to run onboard small spacecraft, such that the constellation can make time-sensitive decisions to slew and capture images autonomously, without ground control.We have developed a communication module based on 
Delay/Disruption Tolerant Networking (DTN) for onboard data management and routing among the satellites, which will work in conjunction with the other modules to optimize the schedule of agile communication and steering.We then apply this preliminary framework on representative constellations to simulate targeted measurements of episodic precipitation events and subsequent urban floods.The command and control efficiency of our agile algorithm is compared to non-agile (11.3x improvement) and non-DTN (21% improvement) constellations.", "target": ["Proponiamo un framework algoritmico per programmare costellazioni di piccoli veicoli spaziali con capacità di ri-orientamento 3-DOF, collegati in rete con collegamenti inter-sat.", "Questo articolo propone un modulo di comunicazione per ottimizzare il programma di comunicazione per il problema delle costellazioni di veicoli spaziali, e confronta l'algoritmo in setting distribuiti e centralizzati."]} +{"source": "Importance sampling (IS) is a standard Monte Carlo (MC) tool to compute information about random variables such as moments or quantiles with unknown distributions. IS is asymptotically consistent as the number of MC samples, and hence deltas (particles) that parameterize the density estimate, go to infinity.However, retaining infinitely many particles is intractable.We propose a scheme for only keeping a \\emph{finite representative subset} of particles and their augmented importance weights that is \\emph{nearly consistent}. To do so in {an online manner}, we approximate importance sampling in two ways. First, we replace the deltas by kernels, yielding kernel density estimates (KDEs). Second, we sequentially project KDEs onto nearby lower-dimensional subspaces.We characterize the asymptotic bias of this scheme as determined by a compression parameter and kernel bandwidth, which yields a tunable tradeoff between consistency and memory.In experiments, we observe a favorable tradeoff between memory and accuracy, providing for the first time near-consistent compressions of arbitrary posterior distributions.", "target": ["Abbiamo proposto un nuovo algoritmo di importance sampling compresso e kernelizzato."]} +{"source": "We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator?(2) how to correctly use cross-validation to choose the regularization parameter?and (3) how to accelerate computation without losing too much accuracy?We consider the three problems in a unified large-data linear model.We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise. 
We study the bias of $K$-fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction.We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate.Our results are illustrated by simulations and by analyzing empirical data.", "target": ["Studiamo il framework della ridge regression in un framework asintotico ad alta dimensione, e otteniamo intuizioni sulla cross-validation e sullo sketching.", "Uno studio teorico della ridge regression sfruttando una nuova caratterizzazione asintotica dello stimatore di ridge regression."]} +{"source": "Attention mechanisms have advanced the state of the art in several machine learning tasks.Despite significant empirical gains, there is a lack of theoretical analyses on understanding their effectiveness.In this paper, we address this problem by studying the landscape of population and empirical loss functions of attention-based neural networks.Our results show that, under mild assumptions, every local minimum of a two-layer global attention model has low prediction error, and attention models require lower sample complexity than models not employing attention.We then extend our analyses to the popular self-attention model, proving that they deliver consistent predictions with a more expressive class of functions.Additionally, our theoretical results provide several guidelines for designing attention mechanisms.Our findings are validated with satisfactory experimental results on MNIST and IMDB reviews dataset.", "target": ["Analizziamo il panorama delle loss delle reti neurali con attention e spieghiamo perché l'attention è utile nel training delle reti neurali per ottenere buone prestazioni.", "Questo articolo dimostra dal punto di vista teorico che le reti con attention possono generalizzare meglio delle baseline senza attention per l'attention fissa (monolayer e multilayer) e self-attention nel setting a singolo layer."]} +{"source": "Recent advances in deep learning techniques have shown the usefulness of the deep neural networks in extracting features required to perform the task at hand.However, these features learnt are in particular helpful only for the initial task.This is due to the fact that the features learnt are very task specific and do not capture the most general and task agnostic features of the input.In fact the way humans are seen to learn is by disentangling features which are task agnostic.This suggests learning task agnostic features by disentangling only the most informative features from the input data.Recently Variational Auto-Encoders (VAEs) have been shown to be the de-facto models to capture the latent variables in a generative sense.As these latent features can be represented as continuous and/or discrete variables, this leads us to use a VAE with a mixture of continuous and discrete variables for the latent space.We achieve this by performing our experiments using a modified version of joint-vae to learn the disentangled features.", "target": ["Mixture Model per Neural Disentanglement"]} +{"source": "To improve how neural networks function it is crucial to understand their learning process.The information bottleneck theory of deep learning proposes that neural networks achieve good generalization by compressing their representations to disregard information that is not relevant to the task.However, empirical evidence for this theory is conflicting, as compression was only observed when networks used saturating activation functions.In 
contrast, networks with non-saturating activation functions achieved comparable levels of task performance but did not show compression.In this paper we developed more robust mutual information estimation techniques that adapt to hidden activity of neural networks and produce more sensitive measurements of activations from all functions, especially unbounded functions.Using these adaptive estimation techniques, we explored compression in networks with a range of different activation functions.With two improved methods of estimation, firstly, we show that saturation of the activation function is not required for compression, and the amount of compression varies between different activation functions.We also find that there is a large amount of variation in compression between different network initializations.Secondly, we see that L2 regularization leads to significantly increased compression, while preventing overfitting.Finally, we show that only compression of the last layer is positively correlated with generalization.", "target": ["Abbiamo sviluppato stime robuste dell'informazione mutua per le DNN e le abbiamo usate per osservare la compressione nelle reti con funzioni di attivazione non saturanti", "Questo articolo ha studiato la credenza popolare che le deep neural network operino la compressione dell'informazione per task supervisionati", "Questo articolo propone un metodo per la stima dell'informazione reciproca per reti con funzioni di attivazione non limitate e l'uso della regolarizzazione L2 per indurre una maggiore compressione."]} +{"source": "In this work, we address the problem of musical timbre transfer, where the goal is to manipulate the timbre of a sound sample from one instrument to match another instrument while preserving other musical content, such as pitch, rhythm, and loudness.In principle, one could apply image-based style transfer techniques to a time-frequency representation of an audio signal, but this depends on having a representation that allows independent manipulation of timbre as well as high-quality waveform generation.We introduce TimbreTron, a method for musical timbre transfer which applies “image” domain style transfer to a time-frequency representation of the audio signal, and then produces a high-quality waveform using a conditional WaveNet synthesizer.We show that the Constant Q Transform (CQT) representation is particularly well-suited to convolutional architectures due to its approximate pitch equivariance.Based on human perceptual evaluations, we confirmed that TimbreTron recognizably transferred the timbre while otherwise preserving the musical content, for both monophonic and polyphonic samples.We made an accompanying demo video here: https://www.cs.toronto.edu/~huang/TimbreTron/index.html which we strongly encourage you to watch before reading the paper.", "target": ["Presentiamo il TimbreTron, una pipeline per il transfer del timbro di alta qualità su forme d'onda musicali utilizzando il transfer in stile dominio CQT.", "Un metodo per convertire le registrazioni di uno specifico strumento musicale in un altro applicando CycleGAN, sviluppato per il transfer dello stile delle immagini, per trasferire gli spettrogrammi.", "Gli autori usano tecniche/strumenti multipli per permettere il transfer neurale del timbro (conversione della musica da uno strumento all'altro) senza esempi di allenamento accoppiati.", "Descrive un modello per il transfer del timbro musicale con i risultati che indicano che il sistema proposto è efficace per il transfer 
dell'altezza e del tempo, così come per l'adattamento del timbro."]} +{"source": "Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them.But also generic hardware and software implementations of deep learning run more efficiently for sparse networks.Several methods exist for pruning connections of a neural network after it was trained without connectivity constraints.We present an algorithm, DEEP R, that enables us to train directly a sparsely connected neural network.DEEP R automatically rewires the network during supervised training so that connections are there where they are most needed for the task, while its total number is all the time strictly bounded.We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance.DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior.", "target": ["L'articolo presenta Deep Rewiring, un algoritmo che può essere utilizzato per addestrare deep neural network quando la connettività della rete è fortemente limitata durante il training.", "Un approccio per implementare il deep learning direttamente su grafi sparsamente connessi, permettendo alle reti di essere addestrate in modo efficiente online e per un apprendimento veloce e flessibile.", "Gli autori forniscono un semplice algoritmo in grado di allenarsi con una memoria limitata"]} +{"source": "Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters.These large models are compute- and memory-intensive, which makes it a challenge to deploy them with minimized latency, throughput, and storage requirements.Some model compression methods have been successfully applied on image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs) performing complex tasks.In this paper, we show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods.We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator.We show that this framework has a compelling performance to high degrees of sparsity, generalizes well to new tasks and models, and enables meaningful comparisons between different pruning granularities.", "target": ["I metodi di pruning esistenti falliscono quando applicati alle GAN che affrontano task complessi, quindi presentiamo un metodo semplice e robusto per fare pruning sui generatori che funziona bene per un'ampia varietà di reti e task.", "Gli autori propongono una modifica al metodo classico di distillation per il task di comprimere una rete per affrontare il fallimento delle soluzioni precedenti quando applicate alle generative adversarial network."]} +{"source": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure.The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections.In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient 
Compression (DGC) to greatly reduce the communication bandwidth.To preserve accuracy during compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training.We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus.On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB.Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.", "target": ["troviamo che il 99,9% del gradient exchange in SGD distribuito è ridondante; riduciamo la larghezza di banda di comunicazione di due ordini di grandezza senza perdere precisione. ", "Questo documento propone un ulteriore miglioramento rispetto al gradient dropping per migliorare l'efficienza della comunicazione"]} +{"source": "Image-to-image translation has recently received significant attention due to advances in deep learning.Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way.However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations.To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain.We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain.Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain.To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels.Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process.", "target": ["Proponiamo la rete Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) che condiziona il processo di traduzione su un'immagine esemplare nel dominio di destinazione.", "Discute un fallimento centrale e la necessità di modelli di traduzione I2I.", "L'articolo esplora l'idea che un'immagine ha due componenti e applica un modello di attention in cui le maschere di feature che guidano il processo di traduzione non richiedono label semantiche"]} +{"source": "Deep neural networks can learn meaningful representations of data.However, these representations are hard to interpret.For example, visualizing a latent layer is generally only possible for at most three dimensions.Neural networks are able to learn and benefit from much higher dimensional representations but these are not visually interpretable because nodes have arbitrary ordering within a layer.Here, we utilize the ability of the human observer to identify 
patterns in structured representations to visualize higher dimensions.To do so, we propose a class of regularizations we call \\textit{Graph Spectral Regularizations} that impose graph-structure on latent layers.This is achieved by treating activations as signals on a predefined graph and constraining those activations using graph filters, such as low pass and wavelet-like filters.This framework allows for any kind of graph as well as filter to achieve a wide range of structured regularizations depending on the inference needs of the data.First, we show a synthetic example that the graph-structured layer can reveal topological features of the data.Next, we show that a smoothing regularization can impose semantically consistent ordering of nodes when applied to capsule nets.Further, we show that the graph-structured layer, using wavelet-like spatially localized filters, can form localized receptive fields for improved image and biomedical data interpretation.In other words, the mapping between latent layer, neurons and the output space becomes clear due to the localization of the activations.Finally, we show that when structured as a grid, the representations create coherent images that allow for image-processing techniques such as convolutions.", "target": ["Imporre una struttura a grafo sui layer delle reti neurali per una migliore interpretabilità visiva.", "Un nuovo regolarizzatore per imporre la struttura del grafo sui layer nascosti di una rete neurale per migliorare l'interpretabilità delle rappresentazioni nascoste.", "Evidenzia il contributo del regolarizzatore spettrale del grafo all'interpretabilità delle reti neurali."]} +{"source": "Text generation is ubiquitous in many NLP tasks, from summarization, to dialogue and machine translation.The dominant parametric approach is based on locally normalized models which predict one word at a time.While these work remarkably well, they are plagued by exposure bias due to the greedy nature of the generation process.In this work, we investigate un-normalized energy-based models (EBMs) which operate not at the token but at the sequence level.In order to make training tractable, we first work in the residual of a pretrained locally normalized language model and second we train using noise contrastive estimation.Furthermore, since the EBM works at the sequence level, we can leverage pretrained bi-directional contextual representations, such as BERT and RoBERTa.Our experiments on two large language modeling datasets show that residual EBMs yield lower perplexity compared to locally normalized baselines.Moreover, generation via importance sampling is very efficient and of higher quality than the baseline models according to human evaluation.", "target": ["Mostriamo che i modelli Energy-Based, quando addestrati sul residuo di un language model autoregressive, possono essere utilizzati in modo efficace ed efficiente per generare testo.", "Un modello proposto basato sull'energia residua (EBM) per la generazione di testo che opera a livello di frase, e può quindi sfruttare BERT, e raggiunge una perplexity inferiore ed è preferito dalla valutazione umana."]} +{"source": "We investigate the robustness properties of image recognition models equipped with two features inspired by human vision, an explicit episodic memory and a shape bias, at the ImageNet scale.As reported in previous work, we show that an explicit episodic memory improves the robustness of image recognition models against small-norm adversarial perturbations under some threat models.It 
does not, however, improve the robustness against more natural, and typically larger, perturbations.Learning more robust features during training appears to be necessary for robustness in this second sense.We show that features derived from a model that was encouraged to learn global, shape-based representations (Geirhos et al., 2019) do not only improve the robustness against natural perturbations, but when used in conjunction with an episodic memory, they also provide additional robustness against adversarial perturbations.Finally, we address three important design choices for the episodic memory: memory size, dimensionality of the memories and the retrieval method.We show that to make the episodic memory more compact, it is preferable to reduce the number of memories by clustering them, instead of reducing their dimensionality.", "target": ["studio sistematico di modelli di riconoscimento di immagini su larga scala basati sulla cache, concentrandosi in particolare sulle loro proprietà di robustezza", "Questo articolo ha proposto di usare la cache per migliorare la robustezza contro gli esempi di adversarial image, e ha concluso che l'uso di una grande cache continua non è superiore all'hard attention."]} +{"source": "Group convolutional neural networks (G-CNNs) can be used to improve classical CNNs by equipping them with the geometric structure of groups.Central in the success of G-CNNs is the lifting of feature maps to higher dimensional disentangled representations, in which data characteristics are effectively learned, geometric data-augmentations are made obsolete, and predictable behavior under geometric transformations (equivariance) is guaranteed via group theory.Currently, however, the practical implementations of G-CNNs are limited to either discrete groups (that leave the grid intact) or continuous compact groups such as rotations (that enable the use of Fourier theory).In this paper we lift these limitations and propose a modular framework for the design and implementation of G-CNNs for arbitrary Lie groups.In our approach the differential structure of Lie groups is used to expand convolution kernels in a generic basis of B-splines that is defined on the Lie algebra.This leads to a flexible framework that enables localized, atrous, and deformable convolutions in G-CNNs by means of respectively localized, sparse and non-uniform B-spline expansions.The impact and potential of our approach is studied on two benchmark datasets: cancer detection in histopathology slides (PCam dataset) in which rotation equivariance plays a key role and facial landmark localization (CelebA dataset) in which scale equivariance is important.In both cases, G-CNN architectures outperform their classical 2D counterparts and the added value of atrous and localized group convolutions is studied in detail.", "target": ["L'articolo descrive un framework flessibile per costruire CNN che sono equivarianti a una grande classe di gruppi di trasformazioni.", "Un framework per costruire CNN di gruppo con un gruppo di Lie arbitrario G, che mostra superiorità rispetto a una CNN nella classificazione dei tumori e nella localizzazione dei punti di riferimento."]} +{"source": "Global feature pooling is a modern variant of feature pooling providing better interpretability and regularization.Although alternative pooling methods exist (e.g. 
max, lp norm, stochastic), the averaging operation is still the dominating global pooling scheme in popular models.As fine-grained recognition requires learning subtle, discriminative features, we consider the question: is average pooling the optimal strategy?We first ask: ``is there a difference between features learned by global average and max pooling?'' Visualization and quantitative analysis show that max pooling encourages learning features of different spatial scales.We then ask ``is there a single global feature pooling variant that's most suitable for fine-grained recognition?'' A thorough evaluation of nine representative pooling algorithms finds that: max pooling outperforms average pooling consistently across models, datasets, and image resolutions; it does so by reducing the generalization gap; and generalized pooling's performance increases almost monotonically as it changes from average to max.We finally ask: ``what's the best way to combine two heterogeneous pooling schemes?'' Common strategies struggle because of potential gradient conflict but the ``freeze-and-train'' trick works best.We also find that post-global batch normalization helps with faster convergence and improves model performance consistently.", "target": ["Un benchmark di nove schemi rappresentativi di pooling globale rivela alcuni risultati interessanti.", "Per task di classificazione fine-grained, questo articolo ha convalidato che maxpooling incoraggia mappe di feature più sparse e supera avgpooling."]} +{"source": "We present a technique to improve the generalization of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions.Although recent research has shown benefits of self-supervised learning (SSL) on large unlabeled datasets, its utility on small datasets is unknown.We find that SSL reduces the relative error rate of few-shot meta-learners by 4%-27%, even when the datasets are small and only utilizing images within the datasets.The improvements are greater when the training set is smaller or the task is more challenging.Though the benefits of SSL may increase with larger training sets, we observe that SSL can have a negative impact on performance when there is a domain shift between distribution of images used for meta-learning and SSL.Based on this analysis we present a technique that automatically select images for SSL from a large, generic pool of unlabeled images for a given dataset using a domain classifier that provides further improvements.We present results using several meta-learners and self-supervised tasks across datasets with varying degrees of domain shifts and label sizes to characterize the effectiveness of SSL for few-shot learning.", "target": ["La self-supervision migliora il riconoscimento few-shot su dataset piccoli e impegnativi senza fare affidamento su dati extra; i dati extra aiutano solo quando provengono dallo stesso dominio o da uno simile.", "Uno studio empirico di diversi metodi di apprendimento self-supervised (SSL), mostrando che SSL aiuta di più quando il dataset è più difficile, che il dominio conta per il training, e un metodo per scegliere sample da un dataset non annotato."]} +{"source": "Abstraction of Markov Decision Processes is a useful tool for solving complex problems, as it can ignore unimportant aspects of an environment, simplifying the process of learning an optimal policy.In this paper, we propose a new algorithm for finding abstract MDPs in environments with continuous state spaces.It is 
based on MDP homomorphisms, a structure-preserving mapping between MDPs.We demonstrate our algorithm's ability to learn abstractions from collected experience and show how to reuse the abstractions to guide exploration in new tasks the agent encounters.Our novel task transfer method beats a baseline based on a deep Q-network.", "target": ["Creiamo modelli astratti di ambienti dall'esperienza e li usiamo per imparare più velocemente nuovi task.", "Una metodologia che utilizza l'idea degli omomorfismi MDP per trasformare un MDP complesso con uno spazio di stato continuo in uno più semplice."]} +{"source": "A number of recent methods to understand neural networks have focused on quantifying the role of individual features. One such method, NetDissect identifies interpretable features of a model using the Broden dataset of visual semantic labels (colors, materials, textures, objects and scenes). Given the recent rise of a number of action recognition datasets, we propose extending the Broden dataset to include actions to better analyze learned action models. We describe the annotation process, results from interpreting action recognition models on the extended Broden dataset and examine interpretable feature paths to help us understand the conceptual hierarchy used to classify an action.", "target": ["Espandiamo la Network Dissection per includere l'interpretazione delle azioni ed esaminiamo i percorsi delle feature interpretabili per capire la gerarchia concettuale usata per classificare un'azione."]} +{"source": "Automatic melody generation for pop music has been a long-time aspiration for both AI researchers and musicians.However, learning to generate euphonious melody has turned out to be highly challenging due to a number of factors.Representation of multivariate property of notes has been one of the primary challenges.It is also difficult to remain in the permissible spectrum of musical variety, outside of which would be perceived as a plain random play without auditory pleasantness.Observing the conventional structure of pop music poses further challenges.In this paper, we propose to represent each note and its properties as a unique ‘word,’ thus lessening the prospect of misalignments between the properties, as well as reducing the complexity of learning.We also enforce regularization policies on the range of notes, thus encouraging the generated melody to stay close to what humans would find easy to follow.Furthermore, we generate melody conditioned on song part information, thus replicating the overall structure of a full song.Experimental results demonstrate that our model can generate auditorily pleasant songs that are more indistinguishable from human-written ones than previous models.", "target": ["Proponiamo un nuovo modello per rappresentare le note e le loro proprietà, che può migliorare la generazione automatica della melodia.", "Questo articolo propone un modello generativo di melodia simbolica (MIDI) nella musica popolare occidentale che codifica congiuntamente i simboli delle note con informazioni di tempo e durata per formare \"parole\" musicali.", "L'articolo propone di facilitare la generazione di melodia rappresentando le note come \"parole\", rappresentando tutte le proprietà della nota e permettendo così la generazione di \"frasi\" musicali."]} +{"source": "Depth is a key component of Deep Neural Networks (DNNs), however, designing depth is heuristic and requires many human efforts.We propose AutoGrow to automate depth discovery in DNNs: starting from a shallow seed 
architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, stops growing and thus discovers the depth.We propose robust growing and stopping policies to generalize to different network architectures and datasets.Our experiments show that by applying the same policy to different network architectures, AutoGrow can always discover near-optimal depth on various datasets of MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet.For example, in terms of accuracy-computation trade-off, AutoGrow discovers a better depth combination in ResNets than human experts.Our AutoGrow is efficient.It discovers depth within similar time of training a single DNN.", "target": ["Un metodo che fa crescere automaticamente i layer nelle reti neurali per scoprire la profondità ottimale.", "Un framework per interlacciare il training di una rete meno profonda e l'aggiunta di nuovi layer che fornisce intuizioni sul paradigma delle \"growing network\"."]} +{"source": "Given the importance of remote sensing, surprisingly little attention has been paid to it by the representation learning community.To address it and to speed up innovation in this domain, we provide simplified access to 5 diverse remote sensing datasets in a standardized form.We specifically explore in-domain representation learning and address the question of \"what characteristics should a dataset have to be a good source for remote sensing representation learning\".The established baselines achieve state-of-the-art performance on these datasets.", "target": ["Esplorazione del representation learning in-domain per dataset di remote sensing.", "Questo articolo ha fornito diversi dataset standardizzati per il remote sensing e ha dimostrato che la rappresentazione in-domain potrebbe produrre migliori baseline per il remote sensing rispetto al fine-tuning su ImageNet o all'apprendimento da zero."]} +{"source": "Generative seq2seq dialogue systems are trained to predict the next word in dialogues that have already occurred.They can learn from large unlabeled conversation datasets, build a deep understanding of conversational context, and generate a wide variety of responses.This flexibility comes at the cost of control.Undesirable responses in the training data will be reproduced by the model at inference time, and longer generations often don’t make sense.Instead of generating responses one word at a time, we train a classifier to choose from a predefined list of full responses.The classifier is trained on (conversation context, response class) pairs, where each response class is a noisily labeled group of interchangeable responses.At inference, we generate the exemplar response associated with the predicted response class.Experts can edit and improve these exemplar responses over time without retraining the classifier or invalidating old training data.Human evaluation of 775 unseen doctor/patient conversations shows that this tradeoff improves responses.Only 12% of our discriminative approach’s responses are worse than the doctor’s response in the same conversational context, compared to 18% for the generative model.A discriminative model trained without any manual labeling of response classes achieves equal performance to the generative model.", "target": ["Evitare di generare risposte una parola alla volta utilizzando una supervisione debole per addestrare un classificatore a scegliere una risposta completa.", "Un modo per generare risposte per il dialogo medico usando un classificatore per selezionare da un insieme di 
risposte curate da esperti in base al contesto della conversazione."]} +{"source": "There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs).This equivalence enables, for instance, test set predictions that would have resulted from a fully Bayesian, infinitely wide trained FCN to be computed without ever instantiating the FCN, but by instead evaluating the corresponding GP.In this work, we derive an analogous equivalence for multi-layer convolutional neural networks (CNNs) both with and without pooling layers, and achieve state of the art results on CIFAR10 for GPs without trainable kernels.We also introduce a Monte Carlo method to estimate the GP corresponding to a given neural network architecture, even in cases where the analytic form has too many terms to be computationally feasible. Surprisingly, in the absence of pooling layers, the GPs corresponding to CNNs with and without weight sharing are identical.As a consequence, translation equivariance, beneficial in finite channel CNNs trained with stochastic gradient descent (SGD), is guaranteed to play no role in the Bayesian treatment of the infinite channel limit - a qualitative difference between the two regimes that is not present in the FCN case.We confirm experimentally, that while in some scenarios the performance of SGD-trained finite CNNs approaches that of the corresponding GPs as the channel count increases, with careful tuning SGD-trained CNNs can significantly outperform their corresponding GPs, suggesting advantages from SGD training compared to fully Bayesian parameter estimation.", "target": ["CNN con width finita addestrate con SGD contro CNN completamente bayesiane con width infinita. Chi vince?", "L'articolo stabilisce una connessione tra la rete neurale convoluzionale bayesiana a canale infinito e i processi gaussiani."]} +{"source": "Bayesian inference promises to ground and improve the performance of deep neural networks.It promises to be robust to overfitting, to simplify the training procedure and the space of hyperparameters, and to provide a calibrated measure of uncertainty that can enhance decision making, agent exploration and prediction fairness.Markov Chain Monte Carlo (MCMC) methods enable Bayesian inference by generating samples from the posterior distribution over model parameters.Despite the theoretical advantages of Bayesian inference and the similarity between MCMC and optimization methods, the performance of sampling methods has so far lagged behind optimization methods for large scale deep learning tasks.We aim to fill this gap and introduce ATMC, an adaptive noise MCMC algorithm that estimates and is able to sample from the posterior of a neural network.ATMC dynamically adjusts the amount of momentum and noise applied to each parameter update in order to compensate for the use of stochastic gradients.We use a ResNet architecture without batch normalization to test ATMC on the Cifar10 benchmark and the large scale ImageNet benchmark and show that, despite the absence of batch normalization, ATMC outperforms a strong optimization baseline in terms of both classification accuracy and test log-likelihood.We show that ATMC is intrinsically robust to overfitting on the training data and that ATMC provides a better calibrated measure of uncertainty compared to the optimization baseline.", "target": ["Scaliamo l'inferenza bayesiana alla classificazione ImageNet e raggiungiamo risultati competitivi di accuratezza e calibrazione 
dell'incertezza.", "Un algoritmo MCMC adattivo per la classificazione delle immagini che regola dinamicamente il momentum e il rumore applicato ad ogni aggiornamento dei parametri, ed è robusto all'overfitting e fornisce una misura di incertezza con le previsioni."]} +{"source": "Now GANs can generate more and more realistic face images that can easily fool human beings. In contrast, a common convolutional neural network(CNN), e.g. ResNet-18, can achieve more than 99.9% accuracy in discerning fake/real faces if training and testing faces are from the same source.In this paper, we performed both human studies and CNN experiments, which led us to two important findings.One finding is that the textures of fake faces are substantially different from real ones.CNNs can capture local image texture information for recognizing fake/real face, while such cues are easily overlooked by humans.The other finding is that global image texture information is more robust to image editing and generalizable to fake faces from different GANs and datasets.Based on the above findings, we propose a novel architecture coined as Gram-Net, which incorporates “Gram Block” in multiple semantic levels to extract global image texture representations.Experimental results demonstrate that our Gram-Net performs better than existing approaches for fake face detection. Especially, our Gram-Net is more robust to image editing, e.g. downsampling, JPEG compression, blur, and noise. More importantly, our Gram-Net generalizes significantly better in detecting fake faces from GAN models not seen in the training phase.", "target": ["Uno studio empirico sulle immagini false rivela che la texture è un importante indizio che le immagini false attuali differiscono dalle immagini reali. Il nostro modello migliorato che cattura le statistiche globali delle texture mostra migliori prestazioni di rilevamento delle immagini false cross-GAN.", "L'articolo propone un modo per migliorare le prestazioni del modello per il rilevamento di volti falsi in immagini generate da una GAN per essere più generalizzabile in base alle informazioni sulla texture."]} +{"source": "The Wasserstein probability metric has received much attention from the machine learning community.Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes.The value of being sensitive to this geometry has been demonstrated, among others, in ordinal regression and generative modelling, and most recently in reinforcement learning.In this paper we describe three natural properties of probability divergences that we believe reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients.The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third.We provide empirical evidence suggesting this is a serious issue in practice.Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance.We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences.We give empirical results on a number of domains comparing these three divergences.To illustrate the practical relevance of the Cramér distance we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it has a number of desirable properties over the related 
Wasserstein GAN.", "target": ["La distanza di Wasserstein è difficile da minimizzare con la stochastic gradient descent, mentre la distanza di Cramer può essere ottimizzata facilmente e funziona altrettanto bene.", "Il manoscritto propone di usare la distanza di Cramer come loss quando si ottimizza una funzione obiettivo usando stochastic gradient descent perché ha unbiased sample gradient.", "Il contributo dell'articolo è legato ai criteri di performance, in particolare alla metrica di Wasserstein/Mallows"]} +{"source": "We humans have an innate understanding of the asymmetric progression of time, which we use to efficiently and safely perceive and manipulate our environment.Drawing inspiration from that, we approach the problem of learning an arrow of time in a Markov (Decision) Process.We illustrate how a learned arrow of time can capture salient information about the environment, which in turn can be used to measure reachability, detect side-effects and to obtain an intrinsic reward signal.Finally, we propose a simple yet effective algorithm to parameterize the problem at hand and learn an arrow of time with a function approximator (here, a deep neural network).Our empirical results span a selection of discrete and continuous environments, and demonstrate for a class of stochastic processes that the learned arrow of time agrees reasonably well with a well known notion of an arrow of time due to Jordan, Kinderlehrer and Otto (1998).", "target": ["Impariamo la freccia del tempo per gli MDP e la usiamo per misurare la raggiungibilità, rilevare gli effetti collaterali e ottenere un segnale di ricompensa della curiosity.", "Questo lavoro propone l'h-potenziale come soluzione a un obiettivo che misura l'asimmetria stato-transizione in un MDP."]} +{"source": "We formulate stochastic gradient descent (SGD) as a novel factorised Bayesian filtering problem, in which each parameter is inferred separately, conditioned on the corresponding backpropagated gradient. Inference in this setting naturally gives rise to BRMSprop and BAdam: Bayesian variants of RMSprop and Adam. Remarkably, the Bayesian approach recovers many features of state-of-the-art adaptive SGD methods, including amongst others root-mean-square normalization, Nesterov acceleration and AdamW. As such, the Bayesian approach provides one explanation for the empirical effectiveness of state-of-the-art adaptive SGD algorithms. 
Empirically comparing BRMSprop and BAdam with naive RMSprop and Adam on MNIST, we find that Bayesian methods have the potential to considerably reduce test loss and classification error.", "target": ["Abbiamo formulato SGD come un problema di filtraggio bayesiano, e mostriamo che questo dà origine a RMSprop, Adam, AdamW, NAG e altre feature dei metodi adattativi allo stato dell'arte", "L'articolo analizza la stochastic gradient descent attraverso il filtraggio bayesiano come framework per analizzare i metodi adattivi.", "Gli autori tentano di unificare i metodi di gradiente adattivo esistenti nel framework del filtraggio bayesiano con un prior dinamico"]} +{"source": "Data augmentation (DA) has been widely utilized to improve generalization in training deep neural networks.Recently, human-designed data augmentation has been gradually replaced by automatically learned augmentation policy.Through finding the best policy in well-designed search space of data augmentation, AutoAugment (Cubuk et al., 2019) can significantly improve validation accuracy on image classification tasks.However, this approach is not computationally practical for large-scale problems.In this paper, we develop an adversarial method to arrive at a computationally-affordable solution called Adversarial AutoAugment, which can simultaneously optimize target related object and augmentation policy search loss.The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization.In contrast to prior work, we reuse the computation in target network training for policy evaluation, and dispense with the retraining of the target network.Compared to AutoAugment, this leads to about 12x reduction in computing cost and 11x shortening in time overhead on ImageNet.We show experimental results of our approach on CIFAR-10/CIFAR-100, ImageNet, and demonstrate significant performance improvements over state-of-the-art.On CIFAR-10, we achieve a top-1 test error of 1.36%, which is the currently best performing single model.On ImageNet, we achieve a leading performance of top-1 accuracy 79.40% on ResNet-50 and 80.00% on ResNet-50-D without extra data.", "target": ["Introduciamo l'idea dell'adversarial learning nella data augmentation automatica per migliorare la generalizzazione di una rete target.", "Una tecnica chiamata Adversarial AutoAugment che impara dinamicamente buone policy di data augmentation durante il training usando un approccio adversarial."]} +{"source": "In this study we focus on first-order meta-learning algorithms that aim to learn a parameter initialization of a network which can quickly adapt to new concepts, given a few examples.We investigate two approaches to enhance generalization and speed of learning of such algorithms, particularly expanding on the Reptile (Nichol et al., 2018) algorithm.We introduce a novel regularization technique called meta-step gradient pruning and also investigate the effects of increasing the depth of network architectures in first-order meta-learning.We present an empirical evaluation of both approaches, where we match benchmark few-shot image classification results with 10 times fewer iterations using Mini-ImageNet dataset and with the use of deeper networks, we attain accuracies that surpass the current benchmarks of few-shot image classification using Omniglot dataset.", "target": ["Lo studio introduce due 
approcci per migliorare la generalizzazione del meta learning del primo ordine e presenta una valutazione empirica sulla classificazione few-shot di immagini.", "L'articolo presenta uno studio empirico dell'algoritmo Reptile di meta learning del primo ordine, investigando una tecnica di regolarizzazione proposta e reti più profonde"]} +{"source": "In this paper, we propose the use of in-training matrix factorization to reduce the model size for neural machine translation.Using in-training matrix factorization, parameter matrices may be decomposed into the products of smaller matrices, which can compress large machine translation architectures by vastly reducing the number of learnable parameters.We apply in-training matrix factorization to different layers of standard neural architectures and show that in-training factorization is capable of reducing nearly 50% of learnable parameters without any associated loss in BLEU score.Further, we find that in-training matrix factorization is especially powerful on embedding layers, providing a simple and effective method to curtail the number of parameters with minimal impact on model performance, and, at times, an increase in performance.", "target": ["Questo articolo propone di usare la fattorizzazione della matrice al momento del training per la traduzione automatica neurale, che può ridurre la dimensione del modello e diminuire il tempo di training senza impattare le prestazioni.", "Questo articolo propone di comprimere i modelli usando la fattorizzazione della matrice durante il training per le deep neural network per machine translation."]} +{"source": "Though state-of-the-art sentence representation models can perform tasks requiring significant knowledge of grammar, it is an open question how best to evaluate their grammatical knowledge.We explore five experimental methods inspired by prior work evaluating pretrained sentence representation models.We use a single linguistic phenomenon, negative polarity item (NPI) licensing, as a case study for our experiments.NPIs like 'any' are grammatical only if they appear in a licensing environment like negation ('Sue doesn't have any cats' vs. 
'*Sue has any cats').This phenomenon is challenging because of the variety of NPI licensing environments that exist.We introduce an artificially generated dataset that manipulates key features of NPI licensing for the experiments.We find that BERT has significant knowledge of these features, but its success varies widely across different experimental methods.We conclude that a variety of methods is necessary to reveal all relevant aspects of a model's grammatical knowledge in a given domain.", "target": ["Diversi metodi di analisi di BERT suggeriscono conclusioni diverse (ma compatibili) in un caso di studio su NPI."]} +{"source": "The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes.Such representations are required to be invariant to the sensory inconsistencies caused by dynamically varying lighting, local texture distortion, etc.A key architectural feature combating such environmental irregularities is ‘long-range horizontal connections’ that aid the perception of the global form of objects.In this work, we explore the introduction of such horizontal connections into standard deep convolutional networks; we present V1Net -- a novel convolutional-recurrent unit that models linear and nonlinear horizontal inhibitory and excitatory connections inspired by primate visual cortical connectivity.We introduce the Texturized Challenge -- a new benchmark to evaluate object recognition performance under perceptual noise -- which we use to evaluate V1Net against an array of carefully selected control models with/without recurrent processing.Additionally, we present results from an ablation study of V1Net demonstrating the utility of diverse neurally inspired horizontal connections for state-of-the-art AI systems on the task of object boundary detection from natural images.We also present the emergence of several biologically plausible horizontal connectivity patterns, namely center-on surround-off, association fields and border-ownership connectivity patterns in a V1Net model trained to perform boundary detection on natural images from the Berkeley Segmentation Dataset 500 (BSDS500).Our findings suggest an increased representational similarity between V1Net and biological visual systems, and highlight the importance of neurally inspired recurrent contextual processing principles for learning visual representations that are robust to perceptual noise and furthering the state-of-the-art in computer vision.", "target": ["In questo lavoro, presentiamo V1Net - una nuova rete neurale ricorrente che modella le connessioni orizzontali corticali che danno origine a robuste rappresentazioni visive attraverso il raggruppamento percettivo.", "Gli autori propongono di modificare una variante convoluzionale di LSTM per includere connessioni orizzontali ispirate alle interazioni note nella corteccia visiva."]} +{"source": "Humans understand novel sentences by composing meanings and roles of core language components.In contrast, neural network models for natural language modeling fail when such compositional generalization is required.The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance.Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models.Throughout a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our 
equivariant architecture is able to achieve the type compositional generalization required in human language understanding.", "target": ["Proponiamo un collegamento tra l'equivarianza di permutazione e la generalizzazione compositiva, e forniamo language model equivarianti", "Questo lavoro si concentra sull'apprendimento di rappresentazioni e funzioni localmente equivarianti su parole di input/output ai fini del task SCAN."]} +{"source": "Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks. A key challenge of variational inference is to approximate the posterior over model parameters with a distribution that is simpler and tractable yet sufficiently expressive.In this work, we propose a method for training highly flexible variational distributions by starting with a coarse approximation and iteratively refining it.Each refinement step makes cheap, local adjustments and only requires optimization of simple variational families.We demonstrate theoretically that our method always improves a bound on the approximation (the Evidence Lower BOund) and observe this empirically across a variety of benchmark tasks. In experiments, our method consistently outperforms recent variational inference methods for deep learning in terms of log-likelihood and the ELBO. We see that the gains are further amplified on larger scale models, significantly outperforming standard VI and deep ensembles on residual networks on CIFAR10.", "target": ["L'articolo propone un algoritmo per aumentare la flessibilità del posteriore variazionale nelle reti neurali bayesiane attraverso l'ottimizzazione iterativa.", "Un metodo per addestrare distribuzioni posteriori variazionali flessibili, applicato alle reti neurali bayesiane per eseguire l'inferenza di variazione (VI) sui pesi."]} +{"source": "In this paper, we propose a residual non-local attention network for high-quality image restoration.Without considering the uneven distribution of information in the corrupted images, previous methods are restricted by local convolutional operation and equal treatment of spatial- and channel-wise features.To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts.Specifically, we design trunk branch and (non-)local mask branch in each (non-)local attention block.The trunk branch is used to extract hierarchical features.Local and non-local mask branches aim to adaptively rescale these hierarchical features with mixed attentions.The local mask branch concentrates on more local structures with convolutional operations, while non-local attention considers more about long-range dependencies in the whole feature map.Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhance the representation ability of the network.Our proposed method can be generalized for various image restoration applications, such as image denoising, demosaicing, compression artifacts reduction, and super-resolution.Experiments demonstrate that our method obtains comparable or better results compared with recently leading methods quantitatively and visually.", "target": ["Nuovo framework allo stato dell'arte per il restauro delle immagini", "L'articolo propone un'architettura di rete neurale convoluzionale che include blocchi per meccanismi di 
attention locale e non locale, che sono ritenuti responsabili del raggiungimento di risultati eccellenti in quattro applicazioni di restauro delle immagini.", "Questo articolo propone una rete di attention residua non locale per il restauro delle immagini"]} +{"source": "Most approaches to learning action planning models heavily rely on a significantly large volume of training samples or plan observations.In this paper, we adopt a different approach based on deductive learning from domain-specific knowledge, specifically from logic formulae that specify constraints about the possible states of a given domain.The minimal input observability required by our approach is a single example composed of a full initial state and a partial goal state.We will show that exploiting specific domain knowledge enables to constrain the space of possible action models as well as to complete partial observations, both of which turn out helpful to learn good-quality action models.", "target": ["Approccio ibrido all'acquisizione di modelli che compensa la mancanza di dati disponibili con la conoscenza specifica del dominio fornita da esperti", "Un approccio di acquisizione del dominio che considera l'uso di una rappresentazione diversa per il modello di dominio parziale utilizzando relazioni mutex schematiche al posto delle condizioni pre/post."]} +{"source": "We release the largest public ECG dataset of continuous raw signals for representation learning containing over 11k patients and 2 billion labelled beats.Our goal is to enable semi-supervised ECG models to be made as well as to discover unknown subtypes of arrhythmia and anomalous ECG signal events.To this end, we propose an unsupervised representation learning task, evaluated in a semi-supervised fashion. We provide a set of baselines for different feature extractors that can be built upon. 
Additionally, we perform qualitative evaluations on results from PCA embeddings, where we identify some clustering of known subtypes indicating the potential for representation learning in arrhythmia sub-type discovery.", "target": ["Rilasciamo un dataset costruito a partire dai dati ECG single-lead di 11.000 pazienti a cui è stato prescritto l'uso del dispositivo {DEVICENAME}(TM).", "Questo articolo descrive un dataset ECG su larga scala che gli autori intendono pubblicare e fornisce un'analisi e una visualizzazione non supervisionata del dataset."]} +{"source": "As the basic building block of Convolutional Neural Networks (CNNs), the convolutional layer is designed to extract local patterns and lacks the ability to model global context in its nature.Many efforts have been recently made to complement CNNs with the global modeling ability, especially by a family of works on global feature interaction.In these works, the global context information is incorporated into local features before they are fed into convolutional layers.However, research on neuroscience reveals that, besides influences changing the inputs to our neurons, the neurons' ability of modifying their functions dynamically according to context is essential for perceptual tasks, which has been overlooked in most of CNNs.Motivated by this, we propose one novel Context-Gated Convolution (CGC) to explicitly modify the weights of convolutional layers adaptively under the guidance of global context.As such, being aware of the global context, the modulated convolution kernel of our proposed CGC can better extract representative local patterns and compose discriminative features.Moreover, our proposed CGC is lightweight, amenable to modern CNN architectures, and consistently improves the performance of CNNs according to extensive experiments on image classification, action recognition, and machine translation.", "target": ["Un nuovo Context-Gated Convolution che incorpora informazioni globali sul contesto nelle CNN modulando esplicitamente i kernel di convoluzione, e quindi cattura modelli locali più rappresentativi ed estrae feature discriminanti.", "Questo articolo usa il contesto globale per modulare i pesi dei layer convoluzionali e aiutare le CNN a catturare più feature discriminanti con alte prestazioni e meno parametri rispetto alla modulazione della feature map."]} +{"source": "We analyze the trade-off between quantization noise and clipping distortion in low precision networks.We identify the statistics of various tensors, and derive exact expressions for the mean-square-error degradation due to clipping.By optimizing these expressions, we show marked improvements over standard quantization schemes that normally avoid clipping.For example, just by choosing the accurate clipping values, more than 40\\% accuracy improvement is obtained for the quantization of VGG-16 to 4-bits of precision.Our results have many applications for the quantization of neural networks at both training and inference time.", "target": ["Analizziamo il trade-off tra il rumore di quantizzazione e la distorsione di clipping nelle reti a bassa precisione, e mostriamo miglioramenti marcati rispetto agli schemi di quantizzazione standard che normalmente evitano il clipping", "Deriva una formula per trovare i valori minimi e massimi di clipping per la quantizzazione uniforme che minimizzano l'errore quadratico risultante dalla quantizzazione, per una distribuzione Laplace o Gaussiana sul valore pre-quantizzato."]} +{"source": "Batch Normalization (BN) is 
one of the most widely used techniques in Deep Learning field.But its performance can awfully degrade with insufficient batch size.This weakness limits the usage of BN on many computer vision tasks like detection or segmentation, where batch size is usually small due to the constraint of memory consumption.Therefore many modified normalization techniques have been proposed, which either fail to restore the performance of BN completely, or have to introduce additional nonlinear operations in inference procedure and increase huge consumption.In this paper, we reveal that there are two extra batch statistics involved in backward propagation of BN, on which has never been well discussed before.The extra batch statistics associated with gradients also can severely affect the training of deep neural network.Based on our analysis, we propose a novel normalization method, named Moving Average Batch Normalization (MABN).MABN can completely restore the performance of vanilla BN in small batch cases, without introducing any additional nonlinear operations in inference procedure.We prove the benefits of MABN by both theoretical analysis and experiments.Our experiments demonstrate the effectiveness of MABN in multiple computer vision tasks including ImageNet and COCO.The code has been released in https://github.com/megvii-model/MABN.", "target": ["Proponiamo un nuovo metodo di normalizzazione per gestire i casi aventi piccole batch size.", "Un metodo per affrontare il problema della piccola batch size di BN che applica l'operazione di media mobile senza troppo overhead e riduce il numero di statistiche di BN per una migliore stabilità."]} +{"source": "We present a simple proof for the benefit of depth in multi-layer feedforward network with rectified activation (``\"depth separation\").Specifically we present a sequence of classification problems f_i such that (a) for any fixed depth rectified network we can find an index m such that problems with index > m require exponential network width to fully represent the function f_m; and (b) for any problem f_m in the family, we present a concrete neural network with linear depth and bounded width that fully represents it.While there are several previous works showing similar results, our proof uses substantially simpler tools and techniques, and should be accessible to undergraduate students in computer science and people with similar backgrounds.", "target": ["Dimostrazione di separazione della profondità per ReLU MLP con argomenti geometrici", "Una prova che le reti più profonde hanno bisogno di meno unità di quelle meno profonde per una famiglia di problemi."]} +{"source": "The rich and accessible labeled data fuel the revolutionary success of deep learning.Nonetheless, massive supervision remains a luxury for many real applications, boosting great interest in label-scarce techniques such as few-shot learning (FSL).An intuitively feasible approach to FSL is to conduct data augmentation via synthesizing additional training samples.The key to this approach is how to guarantee both discriminability and diversity of the synthesized samples.In this paper, we propose a novel FSL model, called $\\textrm{D}^2$GAN, which synthesizes Diverse and Discriminative features based on Generative Adversarial Networks (GAN).$\\textrm{D}^2$GAN secures discriminability of the synthesized features by constraining them to have high correlation with real features of the same classes while low correlation with those of different classes. 
Based on the observation that noise vectors that are closer in the latent code space are more likely to be collapsed into the same mode when mapped to feature space, $\\textrm{D}^2$GAN incorporates a novel anti-collapse regularization term, which encourages feature diversity by penalizing the ratio of the logarithmic similarity of two synthesized features and the logarithmic similarity of the latent codes generating them.Experiments on three common benchmark datasets verify the effectiveness of $\\textrm{D}^2$GAN by comparing with the state-of-the-art.", "target": ["Un nuovo algoritmo di apprendimento few-shot basato su GAN sintetizzando feature diverse e discriminanti", "Un metodo di meta learning che impara un modello generativo che può aumentare il support set di un few-shot learner che ottimizza una combinazione di loss."]} +{"source": "The lack of crisp mathematical models that capture the structure of real-world data sets is a major obstacle to the detailed theoretical understanding of deep neural networks.Here we first demonstrate the effect of structured data sets by experimentally comparing the dynamics and the performance of two-layer networks trained on two different data sets: (i) an unstructured synthetic data set containing random i.i.d. inputs, and (ii) a simple canonical data set such as MNIST images.Our analysis reveals two phenomena related to the dynamics of the networks and their ability to generalise that only appear when training on structured data sets.Second, we introduce a generative model for data sets, where high-dimensional inputs lie on a lower-dimensional manifold and have labels that depend only on their position within this manifold.We call it the *hidden manifold model* and we experimentally demonstrate that training networks on data sets drawn from this model reproduces both the phenomena seen during training on MNIST.", "target": ["Dimostriamo come la struttura nei dataset abbia un impatto sulle reti neurali e introduciamo un modello generativo per dataset sintetici che riproduce questo impatto.", "L'articolo studia come diversi setting della struttura dei dati influenzino l'apprendimento delle reti neurali e come imitare il comportamento su dataset reali quando si effettua il learning su uno sintetico."]} +{"source": "In this paper, we study deep diagonal circulant neural networks, that is deep neural networks in which weight matrices are the product of diagonal and circulant ones.Besides making a theoretical analysis of their expressivity, we introduced principled techniques for training these models: we devise an initialization scheme and propose a smart use of non-linearity functions in order to train deep diagonal circulant networks. 
Furthermore, we show that these networks outperform recently introduced deep networks with other types of structured layers.We conduct a thorough experimental study to compare the performance of deep diagonal circulant networks with state of the art models based on structured matrices and with dense models.We show that our models achieve better accuracy than other structured approaches while requiring 2x fewer weights than the next best approach.Finally we train deep diagonal circulant networks to build compact and accurate models on a real world video classification dataset with over 3.8 million training examples.", "target": ["Addestriamo deep neural network basate su matrici diagonali e circolanti, e dimostriamo che questo tipo di reti sono sia compatte che accurate in applicazioni del mondo reale.", "Gli autori forniscono un'analisi teorica della potenza espressiva delle reti neurali diagonali circolanti (DCNN) e propongono uno schema di inizializzazione per le DCNN deep."]} +{"source": "Interpretability has largely focused on local explanations, i.e. explaining why a model made a particular prediction for a sample.These explanations are appealing due to their simplicity and local fidelity.However, they do not provide information about the general behavior of the model.We propose to leverage model distillation to learn global additive explanations that describe the relationship between input features and model predictions.These global explanations take the form of feature shapes, which are more expressive than feature attributions.Through careful experimentation, we show qualitatively and quantitatively that global additive explanations are able to describe model behavior and yield insights about models such as neural nets.A visualization of our approach applied to a neural net as it is trained is available at https://youtu.be/ErQYwNqzEdc", "target": ["Proponiamo di sfruttare la model distillation per imparare spiegazioni additive globali sotto forma di forme di feature (che sono più espressive delle attribuzioni di feature) per modelli come le reti neurali addestrate su dati tabulari.", "Questo articolo incorpora i Generalized Additive Model (GAM) con la model distillation per fornire spiegazioni globali delle reti neurali."]} +{"source": "A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner.These representations are typically used as general purpose features for words across a range of NLP problems.However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem.Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations.In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. 
We train this model on several data sources with multiple training objectives on over 100 million sentences.Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods.We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.", "target": ["Una framework di apprendimento multi-task su larga scala con diversi obiettivi di training per imparare rappresentazioni di frasi di lunghezza fissa", "Questo articolo riguarda l'apprendimento di embeddings di frasi combinando diversi segnali di training: skip-thought, predizione della traduzione, classificazione delle relazioni di implicazione e predizione del parsing dei costituenti."]} +{"source": "In a time where neural networks are increasingly adopted in sensitive applications, algorithmic bias has emerged as an issue with moral implications.While there are myriad ways that a system may be compromised by bias, systematically isolating and evaluating existing systems on such scenarios is non-trivial, i.e., bias may be subtle, natural and inherently difficult to quantify.To this end, this paper proposes the first systematic study of benchmarking state-of-the-art neural models against biased scenarios.More concretely, we postulate that the bias annotator problem can be approximated with neural models, i.e., we propose generative models of latent bias to deliberately and unfairly associate latent features to a specific class.All in all, our framework provides a new way for principled quantification and evaluation of models against biased datasets.Consequently, we find that state-of-the-art NLP models (e.g., BERT, RoBERTa, XLNET) are readily compromised by biased data.", "target": ["Proponiamo un annotatore neurale di bias per confrontare i modelli sulla loro robustezza ai dataset di testo distorti.", "Un metodo per generare dataset biased per NLP, basandosi su un autoencoder condizionato regolarizzato dall'avversario (CARA)."]} +{"source": "We consider the problem of topic modeling in a weakly semi-supervised setting.In this scenario, we assume that the user knows a priori a subset of the topics she wants the model to learn and is able to provide a few exemplar documents for those topics.In addition, while each document may typically consist of multiple topics, we do not assume that the user will identify all its topics exhaustively. Recent state-of-the-art topic models such as NVDM, referred to herein as Neural Topic Models (NTMs), fall under the variational autoencoder framework.We extend NTMs to the weakly semi-supervised setting by using informative priors in the training objective.After analyzing the effect of informative priors, we propose a simple modification of the NVDM model using a logit-normal posterior that we show achieves better alignment to user-desired topics versus other NTM models.", "target": ["Proponiamo di supervisionare i topic model in stile VAE regolando in modo intelligente il prior per ogni documento. 
Troviamo che un posterior logit-normale fornisce le migliori prestazioni.", "Un metodo flessibile di supervisionare debolmente un topic model per ottenere un migliore allineamento con l'intuizione dell'utente."]} +{"source": "Analyzing deep neural networks (DNNs) via information plane (IP) theory has gained tremendous attention recently as a tool to gain insight into, among others, their generalization ability.However, it is by no means obvious how to estimate mutual information (MI) between each hidden layer and the input/desired output, to construct the IP.For instance, hidden layers with many neurons require MI estimators with robustness towards the high dimensionality associated with such layers.MI estimators should also be able to naturally handle convolutional layers, while at the same time being computationally tractable to scale to large networks.None of the existing IP methods to date have been able to study truly deep Convolutional Neural Networks (CNNs), such as the e.g.\\ VGG-16.In this paper, we propose an IP analysis using the new matrix--based R\\'enyi's entropy coupled with tensor kernels over convolutional layers, leveraging the power of kernel methods to represent properties of the probability distribution independently of the dimensionality of the data.The obtained results shed new light on the previous literature concerning small-scale DNNs, however using a completely new approach.Importantly, the new framework enables us to provide the first comprehensive IP analysis of contemporary large-scale DNNs and CNNs, investigating the different training phases and providing new insights into the training dynamics of large-scale neural networks.", "target": ["Prima analisi completa del piano di informazione delle deep neural network su larga scala utilizzando l'entropia basata sulla matrice e i kernel tensori.", "Gli autori propongono uno stimatore basato su tensor-kernel per la stima dell'informazione mutua tra layer ad alta dimensionalità in una rete neurale."]} +{"source": "Developing agents that can learn to follow natural language instructions has been an emerging research direction.While being accessible and flexible, natural language instructions can sometimes be ambiguous even to humans.To address this, we propose to utilize programs, structured in a formal language, as a precise and expressive way to specify tasks.We then devise a modular framework that learns to perform a task specified by a program – as different circumstances give rise to diverse ways to accomplish the task, our framework can perceive which circumstance it is currently under, and instruct a multitask policy accordingly to fulfill each subtask of the overall task.Experimental results on a 2D Minecraft environment not only demonstrate that the proposed framework learns to reliably accomplish program instructions and achieves zero-shot generalization to more complex instructions but also verify the efficiency of the proposed modulation mechanism for learning the multitask policy.We also conduct an analysis comparing various models which learn from programs and natural language instructions in an end-to-end fashion.", "target": ["Proponiamo un framework modulare che può eseguire task specificati da programmi e raggiungere una generalizzazione zero-shot per task più complessi.", "Questo articolo studia il training di agenti RL con istruzioni e decomposizioni di task formalizzati come programmi, proponendo un modello per un agente guidato dal programma che interpreta un programma e propone sotto-obiettivi 
a un modulo di azione."]} +{"source": "We analyze the convergence of (stochastic) gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit (ReLU) activation function.Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input.We show that (stochastic) gradient descent with random initialization can learn the convolutional filter in polynomial time and the convergence rate depends on the smoothness of the input distribution and the closeness of patches.To the best of our knowledge, this is the first recovery guarantee of gradient-based algorithms for convolutional filter on non-Gaussian input distributions.Our theory also justifies the two-stage learning rate strategy in deep neural networks.While our focus is theoretical, we also present experiments that justify our theoretical findings.", "target": ["Dimostriamo che la gradient descent inizializzata in modo casuale (stocastico) impara un filtro convoluzionale in tempo polinomiale.", "Studia il problema dell'apprendimento di un singolo filtro convoluzionale usando SGD e mostra che sotto certe condizioni, SGD impara un singolo filtro convoluzionale.", "Questo articolo estende l'ipotesi di distribuzione gaussiana a un'ipotesi di smoothness angolare più generale, che copre una famiglia più ampia di distribuzioni di input"]} +{"source": "Deep neural networks (DNNs) are widely adopted in real-world cognitive applications because of their high accuracy.The robustness of DNN models, however, has been recently challenged by adversarial attacks where small disturbance on input samples may result in misclassification.State-of-the-art defending algorithms, such as adversarial training or robust optimization, improve DNNs' resilience to adversarial attacks by paying high computational costs.Moreover, these approaches are usually designed to defend one or a few known attacking techniques only.The effectiveness to defend other types of attacking methods, especially those that have not yet been discovered or explored, cannot be guaranteed.This work aims for a general approach of enhancing the robustness of DNN models under adversarial attacks.In particular, we propose Bamboo -- the first data augmentation method designed for improving the general robustness of DNN without any hypothesis on the attacking algorithms.Bamboo augments the training data set with a small amount of data uniformly sampled on a fixed radius ball around each training data and hence, effectively increase the distance between natural data points and decision boundary.Our experiments show that Bamboo substantially improve the general robustness against arbitrary types of attacks and noises, achieving better results comparing to previous adversarial training methods, robust optimization methods and other data augmentation methods with the same amount of data points.", "target": ["Il primo metodo di data augmentation appositamente progettato per migliorare la robustezza generale di DNN senza alcuna ipotesi sugli algoritmi di attacco.", "Propone un metodo di training di data augmentation per guadagnare la robustezza del modello contro le adversarial perturbation, aumentando i sample casuali in modo uniforme da una sfera a raggio fisso centrata sui dati di training."]} +{"source": "The ability to synthesize realistic patterns of neural activity is crucial for studying neural information processing.Here we used the 
Generative Adversarial Networks (GANs) framework to simulate the concerted activity of a population of neurons.We adapted the Wasserstein-GAN variant to facilitate the generation of unconstrained neural population activity patterns while still benefiting from parameter sharing in the temporal domain.We demonstrate that our proposed GAN, which we termed Spike-GAN, generates spike trains that match accurately the first- and second-order statistics of datasets of tens of neurons and also approximates well their higher-order statistics.We applied Spike-GAN to a real dataset recorded from salamander retina and showed that it performs as well as state-of-the-art approaches based on the maximum entropy and the dichotomized Gaussian frameworks.Importantly, Spike-GAN does not require to specify a priori the statistics to be matched by the model, and so constitutes a more flexible method than these alternative approaches.Finally, we show how to exploit a trained Spike-GAN to construct 'importance maps' to detect the most relevant statistical structures present in a spike train. Spike-GAN provides a powerful, easy-to-use technique for generating realistic spiking neural activity and for describing the most relevant features of the large-scale neural population recordings studied in modern systems neuroscience.", "target": ["Utilizzo di Wasserstein-GANs per generare attività neurale realistica e per rilevare le feature più rilevanti presenti nei pattern di popolazione neurale.", "Un metodo per simulare spike train da popolazioni di neuroni che corrispondono a dati empirici utilizzando una GAN semi-convoluzionale.", "L'articolo propone di usare le GAN per sintetizzare modelli realistici di attività neurale"]} +{"source": "Deep latent variable models have become a popular model choice due to the scalable learning algorithms introduced by (Kingma & Welling 2013, Rezende et al. 2014).These approaches maximize a variational lower bound on the intractable log likelihood of the observed data.Burda et al. (2015) introduced a multi-sample variational bound, IWAE, that is at least as tight as the standard variational lower bound and becomes increasingly tight as the number of samples increases.Counterintuitively, the typical inference network gradient estimator for the IWAE bound performs poorly as the number of samples increases (Rainforth et al. 2018, Le et al. 
2018).Roeder et al. (2017) propose an improved gradient estimator; however, they are unable to show it is unbiased.We show that it is in fact biased and that the bias can be estimated efficiently with a second application of the reparameterization trick.The doubly reparameterized gradient (DReG) estimator does not suffer as the number of samples increases, resolving the previously raised issues.The same idea can be used to improve many recently introduced training techniques for latent variable models.In particular, we show that this estimator reduces the variance of the IWAE gradient, the reweighted wake-sleep update (RWS) (Bornschein & Bengio 2014), and the jackknife variational inference (JVI) gradient (Nowozin 2018).Finally, we show that this computationally efficient, drop-in estimator translates to improved performance for all three objectives on several modeling tasks.", "target": ["Gli stimatori di gradiente doppiamente riparametrizzati forniscono una riduzione imparziale della varianza che porta a migliori prestazioni.", "L'autore ha trovato sperimentalmente che lo stimatore del lavoro esistente è biased e propone di ridurre il bias per migliorare lo stimatore del gradiente dell'ELBO."]} +{"source": "Zeroth-order optimization is the process of minimizing an objective $f(x)$, given oracle access to evaluations at adaptively chosen inputs $x$.In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on an underlying gradient estimate and are numerically stable.We analyze our algorithm from a novel geometric perspective and we show that for {\\it any monotone transform} of a smooth and strongly convex objective with latent dimension $k \\ge n$, we present a novel analysis that shows convergence within an $\\epsilon$-ball of the optimum in $O(kQ\\log(n)\\log(R/\\epsilon))$ evaluations, where the input dimension is $n$, $R$ is the diameter of the input space and $Q$ is the condition number.Our rates are the first of their kind to be both 1) poly-logarithmically dependent on dimensionality and 2) invariant under monotone transformations.We further leverage our geometric perspective to show that our analysis is optimal.Both monotone invariance and its ability to utilize a low latent dimensionality are key to the empirical success of our algorithms, as demonstrated on synthetic and MuJoCo benchmarks.", "target": ["Gradientless Descent è un algoritmo senza gradiente provatamente efficiente che è monotono-invariante e veloce per l'ottimizzazione di ordine zero ad alta dimensionalità.", "Questo articolo propone algoritmi GradientLess Descent (GLD) stabili che non si basano sulla stima del gradiente."]} +{"source": "Many processes can be concisely represented as a sequence of events leading from a starting state to an end state.Given raw ingredients, and a finished cake, an experienced chef can surmise the recipe.Building upon this intuition, we propose a new class of visual generative models: goal-conditioned predictors (GCP).Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video.GCP instead treats videos as start-goal transformations, making video generation easier by conditioning on the more informative context provided by the first and final frames. Not only do existing forward prediction approaches synthesize better and longer videos when modified to become goal-conditioned, but GCP models can also utilize structures that are not linear in time, to accomplish hierarchical prediction. 
To this end, we study both auto-regressive GCP models and novel tree-structured GCP models that generate frames recursively, splitting the video iteratively into finer and finer segments delineated by subgoals. In experiments across simulated and real datasets, our GCP methods generate high-quality sequences over long horizons. Tree-structured GCPs are also substantially easier to parallelize than auto-regressive GCPs, making training and inference very efficient, and allowing the model to train on sequences that are thousands of frames in length.Finally, we demonstrate the utility of GCP approaches for imitation learning in the setting without access to expert actions. Videos are on the supplementary website: https://sites.google.com/view/video-gcp", "target": ["Proponiamo una nuova classe di modelli generativi visivi: i predittori condizionati dall'obiettivo. Mostriamo sperimentalmente che il condizionamento sull'obiettivo permette di ridurre l'incertezza e produrre previsioni su orizzonti molto più lunghi.", "Questo articolo riformula il problema della predizione video come interpolazione invece di estrapolazione, condizionando la predizione sul fotogramma iniziale e finale (obiettivo), ottenendo predizioni di qualità superiore."]} +{"source": "Recent advances in computing technology and sensor design have made it easier to collect longitudinal or time series data from patients, resulting in a gigantic amount of available medical data.Most of the medical time series lack annotations or even when the annotations are available they could be subjective and prone to human errors.Earlier works have developed natural language processing techniques to extract concept annotations and/or clinical narratives from doctor notes.However, these approaches are slow and do not use the accompanying medical time series data.To address this issue, we introduce the problem of concept annotation for the medical time series data, i.e., the task of predicting and localizing medical concepts by using the time series data as input.We propose Relational Multi-Instance Learning (RMIL) - a deep Multi Instance Learning framework based on recurrent neural networks, which uses pooling functions and attention mechanisms for the concept annotation tasks.Empirical results on medical datasets show that our proposed models outperform various multi-instance learning models.", "target": ["Proponiamo un framework profondo di Multi Instance Learning basato su reti neurali ricorrenti che utilizza funzioni di pooling e meccanismi di attention per i task di concept annotation.", "L'articolo affronta la classificazione dei dati delle serie temporali mediche e propone di modellare la relazione temporale tra le istanze di ogni serie utilizzando un'architettura di rete neurale ricorrente.", "Propone una nuova formulazione di Multiple Instance Learning (MIL) chiamata Relation MIL (RMIL), e discute una serie di sue varianti con LSTM, Bi-LSTM, S2S, ecc. 
ed esplora l'integrazione di RMIL con vari meccanismi di attention, e dimostra il suo utilizzo nella predizione di concetti medici da dati di serie temporali."]} +{"source": "The embedding layers transforming input words into real vectors are the key components of deep neural networks used in natural language processing.However, when the vocabulary is large, the corresponding weight matrices can be enormous, which precludes their deployment in a limited resource setting.We introduce a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance. We evaluate our method on a wide range of benchmarks in natural language processing and analyze the trade-off between performance and compression ratios for a wide range of architectures, from MLPs to LSTMs and Transformers.", "target": ["I layer di embedding sono fattorizzati con la decomposizione Tensor Train per ridurre il loro impatto sulla memoria.", "Questo articolo propone un modello di decomposizione tensoriale low-rank per parametrizzare la matrice di embedding nel Natural Language Processing (NLP), che comprime la rete e talvolta aumenta la precisione sul test set."]} +{"source": "We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively (as would be expected for standard weight decay) but by an additive constant factor. We propose a simple way to resolve this issue by decoupling weight decay and the optimization steps taken w.r.t. the loss function.We provide empirical evidence that our proposed modification(i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and(ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter).We also demonstrate that longer optimization runs require smaller weight decay values for optimal results and introduce a normalized variant of weight decay to reduce this dependence.Finally, we propose a version of Adam with warm restarts (AdamWR) that has strong anytime performance while achieving state-of-the-art results on CIFAR-10 and ImageNet32x32. 
Our source code will become available after the review process.", "target": ["Weight decay regularization nei metodi a gradiente adattivo come Adam", "Propone l'idea di disaccoppiare il weight decay dal numero di step del processo di ottimizzazione.", "L'articolo presenta un modo alternativo per implementare il weight decay in Adam con risultati empirici ", "Studia i problemi di weight decay nelle varianti SGD e propone il metodo di disaccoppiamento tra il weight decay e l'aggiornamento basato sul gradiente."]} +{"source": "Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner where knowledge gained from previous tasks is retained and used for future learning.It is essential towards the development of intelligent machines that can adapt to their surroundings.In this work we focus on a lifelong learning approach to generative modeling where we continuously incorporate newly observed streaming distributions into our learnt model.We do so through a student-teacher architecture which allows us to learn and preserve all the distributions seen so far without the need to retain the past data nor the past models.Through the introduction of a novel cross-model regularizer, the student model leverages the information learnt by the teacher, which acts as a summary of everything seen till now.The regularizer has the additional benefit of reducing the effect of catastrophic interference that appears when we learn over streaming data.We demonstrate its efficacy on streaming distributions as well as its ability to learn a common latent representation across a complex transfer learning scenario.", "target": ["Lifelong distributional learning attraverso un'architettura teacher-student accoppiata con un posterior regularizer cross model."]} +{"source": "Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling.In this paper, we look at geometric data represented as point clouds.We introduce a deep autoencoder (AE) network with excellent reconstruction quality and generalization ability.The learned representations outperform the state of the art in 3D recognition tasks and enable basic shape editing applications via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation.We also perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian mixture models (GMM).Interestingly, GMMs trained in the latent space of our AEs produce samples of the best fidelity and diversity.To perform our quantitative evaluation of generative models, we propose simple measures of fidelity and diversity based on optimally matching between sets of point clouds.", "target": ["Deep autoencoder per imparare una buona rappresentazione per i dati geometrici delle nuvole di punti 3D; Modelli generativi per le nuvole di punti.", "Approcci per apprendere modelli generativi di tipo GAN usando l'architettura PointNet e il latent-space GAN."]} +{"source": "Despite the remarkable performance of deep neural networks (DNNs) on various tasks, they are susceptible to adversarial perturbations which makes it difficult to deploy them in real-world safety-critical applications.In this paper, we aim to obtain robust networks by sparsifying DNN's latent features sensitive to adversarial perturbation.Specifically, we define vulnerability at the latent feature space and then propose a 
Bayesian framework to prioritize/prune features based on their contribution to both the original and adversarial loss.We also suggest regularizing the features' vulnerability during training to improve robustness further.While such network sparsification has been primarily studied in the literature for computational efficiency and regularization effect of DNNs, we confirm that it is also useful to design a defense mechanism through quantitative evaluation and qualitative analysis.We validate our method, \\emph{Adversarial Neural Pruning (ANP)} on multiple benchmark datasets, which results in an improvement in test accuracy and leads to state-of-the-art robustness.ANP also tackles the practical problem of obtaining sparse and robust networks at the same time, which could be crucial to ensure adversarial robustness on lightweight networks deployed to computation and memory-limited devices.", "target": ["Proponiamo un nuovo metodo per sopprimere la vulnerabilità dello spazio delle feature latenti per ottenere reti robuste e compatte.", "Questo articolo propone un metodo di \"adversarial neural pruning\" per addestrare una maschera di pruning e una nuova loss di soppressione della vulnerabilità per migliorare la precisione e la adversarial robustness."]} +{"source": "In anomaly detection (AD), one seeks to identify whether a test sample is abnormal, given a data set of normal samples. A recent and promising approach to AD relies on deep generative models, such as variational autoencoders (VAEs),for unsupervised learning of the normal data distribution.In semi-supervised AD (SSAD), the data also includes a small sample of labeled anomalies.In this work,we propose two variational methods for training VAEs for SSAD.The intuitive idea in both methods is to train the encoder to ‘separate’ between latent vectors for normal and outlier data.We show that this idea can be derived from principled probabilistic formulations of the problem, and propose simple and effective algorithms. 
Our methods can be applied to various data types, as we demonstrate on SSAD datasets ranging from natural images to astronomy and medicine, and can be combined with any VAE model architecture.When comparing to state-of-the-art SSAD methods that are not specific to particular data types, we obtain marked improvement in outlier detection.", "target": ["Abbiamo proposto due modifiche VAE che tengono conto dei data example negativi, e li abbiamo utilizzati per il rilevamento semi-supervised delle anomalie.", "Gli articoli propongono due metodi di approccio simili a VAE per il rilevamento semi-supervised della novelty, MML-VAE e DP-VAE."]} +{"source": "We introduce dynamic instance hardness (DIH) to facilitate the training of machine learning models.DIH is a property of each training sample and is computed as the running mean of the sample's instantaneous hardness as measured over the training history.We use DIH to evaluate how well a model retains knowledge about each training sample over time.We find that for deep neural nets (DNNs), the DIH of a sample in relatively early training stages reflects its DIH in later stages and as a result, DIH can be effectively used to reduce the set of training samples in future epochs.Specifically, during each epoch, only samples with high DIH are trained (since they are historically hard) while samples with low DIH can be safely ignored.DIH is updated each epoch only for the selected samples, so it does not require additional computation.Hence, using DIH during training leads to an appreciable speedup.Also, since the model is focused on the historically more challenging samples, resultant models are more accurate.The above, when formulated as an algorithm, can be seen as a form of curriculum learning, so we call our framework DIH curriculum learning (or DIHCL).The advantages of DIHCL, compared to other curriculum learning approaches, are: (1) DIHCL does not require additional inference steps over the data not selected by DIHCL in each epoch, (2) the dynamic instance hardness, compared to static instance hardness (e.g., instantaneous loss), is more stable as it integrates information over the entire training history up to the present time.Making certain mathematical assumptions, we formulate the problem of DIHCL as finding a curriculum that maximizes a multi-set function $f(\\cdot)$, and derive an approximation bound for a DIH-produced curriculum relative to the optimal curriculum.Empirically, DIHCL-trained DNNs significantly outperform random mini-batch SGD and other recently developed curriculum learning methods in terms of efficiency, early-stage convergence, and final performance, and this is shown in training several state-of-the-art DNNs on 11 modern datasets.", "target": ["La nuova comprensione delle dinamiche di training e le metriche di hardness della memorizzazione portano ad un efficiente e dimostrabile curriculum learning.", "Questo articolo formula DIH come un problema di curriculum learning che può utilizzare più efficacemente i dati per addestrare le DNN, e deriva la teoria sul bound di approssimazione."]} +{"source": "This paper explores many immediate connections between adaptive control and machine learning, both through common update laws as well as common concepts.Adaptive control as a field has focused on mathematical rigor and guaranteed convergence.The rapid advances in machine learning on the other hand have brought about a plethora of new techniques and problems for learning.This paper elucidates many of the numerous common 
connections between both fields such that results from both may be leveraged together to solve new problems.In particular, a specific problem related to higher order learning is solved through insights obtained from these intersections.", "target": ["Storia degli sviluppi paralleli delle regole di update e dei concetti tra controllo adattivo e ottimizzazione nell'apprendimento automatico."]} +{"source": "Recurrent convolution (RC) shares the same convolutional kernels and unrolls them multiple times, which is originally proposed to model time-space signals.We suggest that RC can be viewed as a model compression strategy for deep convolutional neural networks.RC reduces the redundancy across layers and is complementary to most existing model compression approaches.However, the performance of an RC network can't match the performance of its corresponding standard one, i.e. with the same depth but independent convolutional kernels. This reduces the value of RC for model compression.In this paper, we propose a simple variant which improves RC networks: The batch normalization layers of an RC module are learned independently (not shared) for different unrolling steps.We provide insights on why this works.Experiments on CIFAR show that unrolling a convolutional layer several steps can improve the performance, thus indirectly plays a role in model compression.", "target": ["Convoluzione ricorrente per la compressione del modello e un trucco per addestrarlo, cioè l'apprendimento di layer BN indipendenti sugli step.", "L'autore modifica la rete neurale di convoluzione ricorrente (RCNN) con una batch normalization indipendente, con i risultati sperimentali su RCNN comparabili con l'architettura della rete neurale ResNet quando contiene lo stesso numero di layer."]} +{"source": "The visual world is vast and varied, but its variations divide into structured and unstructured factors.Structured factors, such as scale and orientation, admit clear theories and efficient representation design.Unstructured factors, such as what it is that makes a cat look like a cat, are too complicated to model analytically, and so require free-form representation learning.We compose structured Gaussian filters and free-form filters, optimized end-to-end, to factorize the representation for efficient yet general learning.Our experiments on dynamic structure, in which the structured filters vary with the input, equal the accuracy of dynamic inference with more degrees of freedom while improving efficiency.(Please see https://arxiv.org/abs/1904.11487 for the full edition.)", "target": ["I campi recettivi dinamici con struttura gaussiana spaziale sono accurati ed efficienti.", "Questo articolo propone un operatore di convoluzione strutturato per modellare le deformazioni delle regioni locali di un'immagine, che ha ridotto significativamente il numero di parametri."]} +{"source": "It is widely known that well-designed perturbations can cause state-of-the-art machine learning classifiers to mis-label an image, with sufficiently small perturbations that are imperceptible to the human eyes.However, by detecting the inconsistency between the image and wrong label, the human observer would be alerted of the attack.In this paper, we aim to design attacks that not only make classifiers generate wrong labels, but also make the wrong labels imperceptible to human observers.To achieve this, we propose an algorithm called LabelFool which identifies a target label similar to the ground truth label and finds a perturbation of the image for 
this target label.We first find the target label for an input image by a probability model, then move the input in the feature space towards the target label.Subjective studies on ImageNet show that in the label space, our attack is much less recognizable by human observers, while objective experimental results on ImageNet show that we maintain similar performance in the image space as well as attack rates to state-of-the-art attack algorithms.", "target": ["Un trucco sugli adversarial sample in modo che le label mal classificate siano impercettibili nello spazio delle label agli osservatori umani", "Un metodo per costruire adversarial attack che sono meno rilevabili dall'uomo senza costi nello spazio dell'immagine cambiando la classe di destinazione per essere simile alla classe originale dell'immagine."]} +{"source": "This paper presents noise type/position classification of various impact noises generated in a building which is a serious conflict issue in apartment complexes.For this study, a collection of floor impact noise dataset is recorded with a single microphone.Noise types/positions are selected based on a report by the Floor Management Center under Korea Environmental Corporation.Using a convolutional neural networks based classifier, the impact noise signals converted to log-scaled Mel-spectrograms are classified into noise types or positions.Also, our model is evaluated on a standard environmental sound dataset ESC-50 to show extensibility on environmental sound classification.", "target": ["Questo documento presenta la classificazione del tipo/posizione di vari rumori d'impatto generati in un edificio, che è un serio problema di conflitto nei complessi di appartamenti", "Questo lavoro descrive l'uso delle reti neurali convoluzionali in una nuova area di applicazione relativa alla classificazione del tipo di rumore degli edifici e della posizione del rumore."]} +{"source": "Recordings of neural circuits in the brain reveal extraordinary dynamical richness and high variability.At the same time, dimensionality reduction techniques generally uncover low-dimensional structures underlying these dynamics.What determines the dimensionality of activity in neural circuits?What is the functional role of dimensionality in behavior and task learning?In this work we address these questions using recurrent neural network (RNN) models.We find that, depending on the dynamics of the initial network, RNNs learn to increase and reduce dimensionality in a way that matches task demands.These findings shed light on fundamental dynamical mechanisms by which neural networks solve tasks with robust representations that generalize to new cases.", "target": ["Le reti neurali ricorrenti imparano ad aumentare e ridurre la dimensionalità della loro rappresentazione interna in un modo che corrisponde al task, a seconda della dinamica della rete iniziale."]} +{"source": "Domain adaptation addresses the common problem when the target distribution generating our test data drifts from the source (training) distribution.While absent assumptions, domain adaptation is impossible, strict conditions, e.g. covariate or label shift, enable principled algorithms.Recently-proposed domain-adversarial approaches consist of aligning source and target encodings, often motivating this approach as minimizing two (of three) terms in a theoretical bound on target error.Unfortunately, this minimization can cause arbitrary increases in the third term, e.g. 
they can break down under shifting label distributions.We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms.Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets.", "target": ["Invece dei rigidi allineamenti di distribuzione nei tradizionali obiettivi di deep domain adaptation, che falliscono quando la distribuzione delle label target si sposta, proponiamo di ottimizzare un obiettivo rilassato con nuove analisi, nuovi algoritmi e convalida sperimentale.", "Questo articolo suggerisce metriche rilassate per domain adaptation che danno nuovi limiti teorici sull'errore di destinazione."]} +{"source": "In this paper, we explore \\textit{summary-to-article generation}: the task of generating long articles given a short summary, which provides finer-grained content control for the generated text.To prevent sequence-to-sequence (seq2seq) models from degenerating into language models and better controlling the long text to be generated, we propose a hierarchical generation approach which first generates a sketch of intermediate length based on the summary and then completes the article by enriching the generated sketch.To mitigate the discrepancy between the ``oracle'' sketch used during training and the noisy sketch generated during inference, we propose an end-to-end joint training framework based on multi-agent reinforcement learning.For evaluation, we use text summarization corpora by reversing their inputs and outputs, and introduce a novel evaluation method that employs a summarization system to summarize the generated article and test its match with the original input summary.Experiments show that our proposed hierarchical generation approach can generate a coherent and relevant article based on the given summary, yielding significant improvements upon conventional seq2seq models.", "target": ["esploriamo il task di summary-to-article generation e proponiamo uno schema di generazione gerarchico insieme a un framework di reinforcement learning congiunto end-to-end per addestrare il modello gerarchico.", "Per affrontare il problema della degenerazione in summary-to-article generation, questo articolo propone un approccio di generazione gerarchica che genera prima uno schizzo intermedio dell'articolo e poi l'articolo completo."]} +{"source": "When training a deep neural network for supervised image classification, one can broadly distinguish between two types of latent features of images that will drive the classification of class Y. Following the notation of Gong et al. 
(2016), we can divide features broadly into the classes of(i) “core” or “conditionally invariant” features X^ci whose distribution P(X^ci | Y) does not change substantially across domains and(ii) “style” or “orthogonal” features X^orth whose distribution P(X^orth | Y) can change substantially across domains.These latter orthogonal features would generally include features such as position, rotation, image quality or brightness but also more complex ones like hair color or posture for images of persons.We try to guard against future adversarial domain shifts by ideally just using the “conditionally invariant” features for classification.In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable.We can hence not directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called identifier or ID variable.We might know, for example, that two images show the same person, with ID referring to the identity of the person.In data augmentation, we generate several images from the same original image, with ID referring to the relevant original image.The method requires only a small fraction of images to have an ID variable.We provide a causal framework for the problem by adding the ID variable to the model of Gong et al. (2016).However, we are interested in settings where we cannot observe the domain directly and we treat domain as a latent variable.If two or more samples share the same class and identifier, (Y, ID)=(y,i), then we treat those samples as counterfactuals under different style interventions on the orthogonal or style features.Using this grouping-by-ID approach, we regularize the network to provide near constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian.This is shown to substantially improve performance in settings where domains change in terms of image quality, brightness, color changes, and more complex changes such as changes in movement and posture.We show links to questions of interpretability, fairness and transfer learning.", "target": ["Proponiamo una regolarizzazione controfattuale per difendersi dagli adversarial domain shift che si verificano attraverso spostamenti nella distribuzione delle \"style feature\" latenti delle immagini.", "L'articolo discute i modi per difendersi dagli adversarial domain shift con la regolarizzazione controfattuale imparando un classificatore che è invariante ai cambiamenti superficiali (o feature di \"stile\") nelle immagini.", "Questo documento mira a una classificazione robusta delle immagini contro gli adversarial domain shift e l'obiettivo è raggiunto evitando di usare le feature di stile mutevoli."]} +{"source": "Gradient-based meta-learning algorithms require several steps of gradient descent to adapt to newly incoming tasks.This process becomes more costly as the number of samples increases. Moreover, the gradient updates suffer from several sources of noise leading to a degraded performance. In this work, we propose a meta-learning algorithm equipped with the GradiEnt Component COrrections, aGECCO cell for short, which generates a multiplicative corrective low-rank matrix which (after vectorization) corrects the estimated gradients. GECCO contains a simple decoder-like network with learnable parameters, an attention module and a so-called context input parameter. 
The context parameter of GECCO is updated to generate a low-rank corrective term for the network gradients. As a result, meta-learning requires only a few of gradient updates to absorb new task (often, a single update is sufficient in the few-shot scenario). While previous approaches address this problem by altering the learning rates, factorising network parameters or directly learning feature corrections from features and/or gradients, GECCO is an off-the-shelf generator-like unit that performs element-wise gradient corrections without the need to ‘observe’ the features and/or the gradients directly. We show that our GECCO(i) accelerates learning,(ii) performs robust corrections of the gradients corrupted by a noise, and(iii) leads to notable improvements over existing gradient-based meta-learning algorithms.", "target": ["Proponiamo un meta-learner per adattarsi rapidamente su più task anche con un solo step in un setting few-shot.", "Questo articolo propone un metodo per meta-learning di un modulo di correzione del gradiente in cui il precondizionamento è parametrizzato da una rete neurale, e costruisce un processo di aggiornamento del gradiente in due fasi durante l'adattamento."]} +{"source": "Discriminative question answering models can overfit to superficial biases in datasets, because their loss function saturates when any clue makes the answer likely. We introduce generative models of the joint distribution of questions and answers, which are trained to explain the whole question, not just to answer it.Our question answering (QA) model is implemented by learning a prior over answers, and a conditional language model to generate the question given the answer—allowing scalable and interpretable many-hop reasoning as the question is generated word-by-word. 
Our model achieves competitive performance with specialised discriminative models on the SQUAD and CLEVR benchmarks, indicating that it is a more general architecture for language understanding and reasoning than previous work.The model greatly improves generalisation both from biased training data and to adversarial testing data, achieving a new state-of-the-art on ADVERSARIAL SQUAD.We will release our code.", "target": ["I modelli di question answering che modellano la distribuzione congiunta di domande e risposte possono imparare di più dei modelli discriminativi", "Questo articolo propone un approccio generativo al QA testuale e visuale, dove viene appresa una distribuzione congiunta sullo spazio delle domande e delle risposte dato il contesto, che cattura relazioni più complesse.", "Questo articolo introduce un modello generativo per question answering e propone di modellare p(q,a|c), fattorizzato come p(a|c) * p(q|a,c).", "Gli autori propongono un modello generativo di QA, che ottimizza congiuntamente la distribuzione delle domande e delle risposte date da un documento/contesto."]} +{"source": "In this paper, we turn our attention to the interworking between the activation functions and the batch normalization, which is a virtually mandatory technique to train deep networks currently.We propose the activation function Displaced Rectifier Linear Unit (DReLU) by conjecturing that extending the identity function of ReLU to the third quadrant enhances compatibility with batch normalization.Moreover, we used statistical tests to compare the impact of using distinct activation functions (ReLU, LReLU, PReLU, ELU, and DReLU) on the learning speed and test accuracy performance of standardized VGG and Residual Networks state-of-the-art models.These convolutional neural networks were trained on CIFAR-100 and CIFAR-10, the most commonly used deep learning computer vision datasets.The results showed DReLU speeded up learning in all models and datasets.Besides, statistical significant performance assessments (p<0.05) showed DReLU enhanced the test accuracy presented by ReLU in all scenarios.Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments with one exception, in which case it presented the second best performance.Therefore, this work demonstrates that it is possible to increase performance replacing ReLU by an enhanced activation function.", "target": ["Viene proposta una nuova funzione di attivazione chiamata Displaced Rectifier Linear Unit. 
Ha dimostrato di migliorare le prestazioni di training e inferenza delle reti neurali convoluzionali con batch normalization.", "Il documento confronta e dà indicazioni contro l'uso della batch normalization dopo l'uso di ReLU", "Questo articolo propone una funzione di attivazione, chiamata shifted ReLU, per migliorare le prestazioni delle CNN che usano la normalizzazione batch."]} +{"source": "Encoding the input scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many vision tasks especially when dealing with multiscale input signals.We study, in this paper, a scale-equivariant CNN architecture with joint convolutions across the space and the scaling group, which is shown to be both sufficient and necessary to achieve scale-equivariant representations.To reduce the model complexity and computational burden, we decompose the convolutional filters under two pre-fixed separable bases and truncate the expansion to low-frequency components.A further benefit of the truncated filter expansion is the improved deformation robustness of the equivariant representation.Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.", "target": ["Costruiamo reti neurali convoluzionali scale-equivariant nella forma più generale con efficienza computazionale e dimostrata robustezza alla deformazione.", "Gli autori propongono un'architettura CNN che è teoricamente equivariante a scalature e traslazioni isotrope aggiungendo una dimensione di scala extra ai tensori di attivazione."]} +{"source": "In this paper, we diagnose deep neural networks for 3D point cloud processing to explore the utility of different network architectures.We propose a number of hypotheses on the effects of specific network architectures on the representation capacity of DNNs.In order to prove the hypotheses, we design five metrics to diagnose various types of DNNs from the following perspectives, information discarding, information concentration, rotation robustness, adversarial robustness, and neighborhood inconsistency.We conduct comparative studies based on such metrics to verify the hypotheses, which may shed new lights on the architectural design of neural networks.Experiments demonstrated the effectiveness of our method.The code will be released when this paper is accepted.", "target": ["Abbiamo diagnosticato deep neural network per l'elaborazione di nuvole di punti 3D per esplorare l'utilità di diverse architetture di rete.", "L'articolo studia diverse architetture di reti neurali per l'elaborazione di nuvole di punti 3D e propone metriche per la adversarial robustness, la robustezza rotazionale e la coerenza di vicinato."]} +{"source": "In this work we construct flexible joint distributions from low-dimensional conditional semi-implicit distributions.Explicitly defining the structure of the approximation allows to make the variational lower bound tighter, resulting in more accurate inference.", "target": ["L'utilizzo della struttura delle distribuzioni migliora l'inferenza variazionale semi-implicita"]} +{"source": "Imitation learning from human-expert demonstrations has been shown to be greatly helpful for challenging reinforcement learning problems with sparse environment rewards.However, it is very difficult to achieve similar success 
without relying on expert demonstrations.Recent works on self-imitation learning showed that imitating the agent's own past good experience could indirectly drive exploration in some environments, but these methods often lead to sub-optimal and myopic behavior.To address this issue, we argue that exploration in diverse directions by imitating diverse trajectories, instead of focusing on limited good trajectories, is more desirable for the hard-exploration tasks.We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards.Our method significantly outperforms existing self-imitation learning and count-based exploration methods on various hard-exploration tasks with local optima.In particular, we report a state-of-the-art score of more than 20,000 points on Montezumas Revenge without using expert demonstrations or resetting to arbitrary states.", "target": ["Self-imitation learning di traiettorie diverse con policy condizionata dalla traiettoria", "Questo articolo affronta task di esplorazione difficili applicando la self-imitation a una diversa selezione di traiettorie dall'esperienza passata, per guidare un'esplorazione più efficiente in problemi a ricompensa sparsa, ottenendo risultati SOTA."]} +{"source": "We present a method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost.This is achieved by gating the deep-learning architecture on a fine-grained-level.Individual convolutional maps are turned on/off conditionally on features in the network.To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner.We also introduce a generally applicable tool batch-shaping that matches the marginal aggregate posteriors of features in a neural network to a pre-specified prior distribution.We use this novel technique to force gates to be more conditional on the data.We present results on CIFAR-10 and ImageNet datasets for image classification, and Cityscapes for semantic segmentation.Our results show that our method can slim down large architectures conditionally, such that the average computational cost on the data is on par with a smaller architecture, but with higher accuracy.In particular, on ImageNet, our ResNet50 and ResNet34 gated networks obtain 74.60% and 72.55% top-1 accuracy compared to the 69.76% accuracy of the baseline ResNet18 model, for similar complexity.We also show that the resulting networks automatically learn to use more features for difficult examples and fewer features for simple examples.", "target": ["Un metodo che addestra reti neurali di grande capacità con una precisione significativamente migliorata e un costo computazionale dinamico inferiore", "Un metodo per addestrare una rete con grande capacità, di cui solo parti sono usate al momento dell'inferenza dipendente dall'input, usando una selezione condizionale fine-grained e un nuovo metodo di regolarizzazione, \"batch shaping\"."]} +{"source": "With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data.In order to evaluate and analyse the 
architecture, we introduce a family of simple visual relational reasoning tasks of varying complexity.We show that the proposed architecture, when pre-trained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks when compared to a number of baseline architectures.The workings of a successfully trained model are visualised to shed some light on how the architecture functions.", "target": ["Presentiamo un'architettura differenziabile end-to-end che impara a mappare i pixel ai predicati, e la valutiamo su una serie di semplici task di ragionamento relazionale", "Un'architettura di rete basata sul modulo di self-attention a più head per imparare una nuova forma di rappresentazioni relazionali, che migliora l'efficienza dei dati e la capacità di generalizzazione nel curriculum learning."]} +{"source": "In natural language inference, the semantics of some words do not affect the inference.Such information is considered superficial and brings overfitting.How can we represent and discard such superficial information?In this paper, we use first order logic (FOL) - a classic technique from meaning representation language – to explain what information is superficial for a given sentence pair.Such explanation also suggests two inductive biases according to its properties.We proposed a neural network-based approach that utilizes the two inductive biases.We obtain substantial improvements over extensive experiments.", "target": ["Usiamo le reti neurali per proiettare le informazioni superficiali per natural language inference, definendo e identificando le informazioni superficiali dalla prospettiva della logica del primo ordine.", "Questo articolo cerca di ridurre le informazioni superficiali per natural language inference per prevenire l'overfitting, e introduce una graph neural network per modellare la relazione tra premessa e ipotesi.", "Un approccio per trattare natural language inference usando la logica del primo ordine e per infondere i modelli NLI con informazioni logiche per essere più robusti nell'inferenza."]} +{"source": "We propose an approach to training machine learning models that are fair in the sense that their performance is invariant under certain perturbations to the features.For example, the performance of a resume screening system should be invariant under changes to the name of the applicant.We formalize this intuitive notion of fairness by connecting it to the original notion of individual fairness put forth by Dwork et al and show that the proposed approach achieves this notion of fairness.We also demonstrate the effectiveness of the approach on two machine learning tasks that are susceptible to gender and racial biases.", "target": ["Algoritmo per il training di classificatori individualmente fair utilizzando l'adversarial robustness", "Questo articolo propone una nuova definizione di fairness algoritmica e un algoritmo per trovare in modo dimostrabile un modello ML che soddisfi il vincolo di fairness."]} +{"source": "In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets.This seed dataset of images is then augmented to create a purely synthetic training dataset, which is in turn used to train a deep neural network and test on held-out real world handwritten digits dataset spanning five Indic scripts, Kannada, 
Tamil, Gujarati, Malayalam, and Devanagari.We showcase the efficacy of this approach both qualitatively, by training a Boundary-seeking GAN (BGAN) that generates realistic digit images in the five languages, and also quantitatively by testing a CNN trained on the synthetic data on the real-world datasets.This establishes not only an interesting nexus between the font-datasets-world and transfer learning but also provides a recipe for universal-digit classification in any script.", "target": ["Il seeding e l'augmentation sono tutto ciò che serve per classificare le cifre in qualsiasi lingua?", "Questo articolo presenta nuovi dataset per cinque lingue e propone un nuovo framework (SAT) per la generazione di dataset di immagini di font per la classificazione universale delle cifre."]} +{"source": "An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning.An especially successful algorithm has been Model Agnostic Meta-Learning (MAML), a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks.Despite MAML's popularity, a fundamental open question remains -- is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta initialization already containing high quality features?We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor.This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML where we remove the inner loop for all but the (task-specific) head of the underlying neural network.ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML.We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network (the NIL algorithm).We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly.", "target": ["Il successo di MAML si basa sul riutilizzo delle feature dalla meta-inizializzazione, che produce anche una semplificazione naturale dell'algoritmo, con il ciclo interno rimosso per il corpo della rete, così come altri approfondimenti sulla testa e sul corpo.", "L'articolo trova che il riutilizzo delle feature è il fattore dominante nel successo di MAML, e propone nuovi algoritmi che richiedono molto meno calcolo di MAML."]} +{"source": "Model training remains a dominant financial cost and time investment in machine learning applications.Developing and debugging models often involve iterative training, further exacerbating this issue.With growing interest in increasingly complex models, there is a need for techniques that help to reduce overall training effort.While incremental training can save substantial time and cost by training an existing model on a small subset of data, little work has explored policies for determining when incremental training provides adequate model performance versus full retraining.We provide a method-agnostic algorithm for deciding when to incrementally train versus fully train.We call this setting of non-deterministic full- or incremental training ``Mixed Setting 
Training\".Upon evaluation in slot-filling tasks, we find that this algorithm provides a bounded error, avoids catastrophic forgetting, and results in a significant speedup over a policy of always fully training.", "target": ["Forniamo un algoritmo indipendente dal metodo per decidere quando addestrare in modo incrementale rispetto al training completo e fornisce un significativo aumento di velocità rispetto al training completo ed evita la catastrophic forgetting", "Questo articolo propone un approccio per decidere quando incrementare o riqualificare completamente un modello nel setting dello sviluppo iterativo del modello nei task di slot filling."]} +{"source": "Neural networks have succeeded in many reasoning tasks.Empirically, these tasks require specialized network structures, e.g., Graph Neural Networks (GNNs) perform well on many such tasks, while less structured networks fail.Theoretically, there is limited understanding of why and when a network structure generalizes better than other equally expressive ones.We develop a framework to characterize which reasoning tasks a network can learn well, by studying how well its structure aligns with the algorithmic structure of the relevant reasoning procedure.We formally define algorithmic alignment and derive a sample complexity bound that decreases with better alignment.This framework explains the empirical success of popular reasoning models and suggests their limitations.We unify seemingly different reasoning tasks, such as intuitive physics, visual question answering, and shortest paths, via the lens of a powerful algorithmic paradigm, dynamic programming (DP).We show that GNNs can learn DP and thus solve these tasks.On several reasoning tasks, our theory aligns with empirical results.", "target": ["Sviluppiamo un framework teorico per caratterizzare quali task di ragionamento una rete neurale può imparare bene.", "L'articolo propone una misura di classi di allineamento algoritmico che misura quanto le reti neurali sono \"vicine\" agli algoritmi conosciuti, dimostrando il legame tra diverse classi di algoritmi conosciuti e le architetture delle reti neurali."]} +{"source": "Cell-cell interactions have an integral role in tumorigenesis as they are critical in governing immune responses.As such, investigating specific cell-cell interactions has the potential to not only expand upon the understanding of tumorigenesis, but also guide clinical management of patient responses to cancer immunotherapies.A recent imaging technique for exploring cell-cell interactions, multiplexed ion beam imaging by time-of-flight (MIBI-TOF), allows for cells to be quantified in 36 different protein markers at sub-cellular resolutions in situ as high resolution multiplexed images.To explore the MIBI images, we propose a GAN for multiplexed data with protein specific attention.By conditioning image generation on cell types, sizes, and neighborhoods through semantic segmentation maps, we are able to observe how these factors affect cell-cell interactions simultaneously in different protein channels.Furthermore, we design a set of metrics and offer the first insights towards cell spatial orientations, cell protein expressions, and cell neighborhoods.Our model, cell-cell interaction GAN (CCIGAN), outperforms or matches existing image synthesis methods on all conventional measures and significantly outperforms on biologically motivated metrics.To our knowledge, we are the first to systematically model multiple cellular protein behaviors and interactions under 
simulated conditions through image synthesis.", "target": ["Esploriamo le interazioni cellula-cellula attraverso i contesti dell'ambiente tumorale osservati in immagini altamente multiplexed, attraverso la sintesi delle immagini utilizzando una nuova architettura GAN di attention.", "Un nuovo metodo per modellare i dati generati dal multiplexed ion beam imaging by time-of-flight (MIBI-TOF) imparando la mappatura many-to-many tra i tipi di cellule e i livelli di espressione dei marcatori di proteine."]} +{"source": "Machine learning models for question-answering (QA), where given a question and a passage, the learner must select some span in the passage as an answer, are known to be brittle.By inserting a single nuisance sentence into the passage, an adversary can fool the model into selecting the wrong span.A promising new approach for QA decomposes the task into two stages:(i) select relevant sentences from the passage; and(ii) select a span among those sentences.Intuitively, if the sentence selector excludes the offending sentence, then the downstream span selector will be robust.While recent work has hinted at the potential robustness of two-stage QA, these methods have never, to our knowledge, been explicitly combined with adversarial training.This paper offers a thorough empirical investigation of adversarial robustness, demonstrating that although the two-stage approach lags behind single-stage span selection, adversarial training improves its performance significantly, leading to an improvement of over 22 points in F1 score over the adversarially-trained single-stage model.", "target": ["Un approccio a due stadi che consiste nella selezione della frase seguita dalla selezione dello span può essere reso più robusto agli adversarial attack rispetto a un modello a stadio singolo addestrato sul contesto completo.", "Questo articolo esamina un modello esistente e scopre che un metodo di QA addestrato in due fasi non è più robusto agli adversarial attack rispetto ad altri metodi."]} +{"source": "The aim of this study is to introduce a formal framework for analysis and synthesis of driver assistance systems.It applies formal methods to the verification of a stochastic human driver model built using the cognitive architecture ACT-R, and then bootstraps safety in semi-autonomous vehicles through the design of provably correct Advanced Driver Assistance Systems.The main contributions include the integration of probabilistic ACT-R models in the formal analysis of semi-autonomous systems and an abstraction technique that enables a finite representation of a large dimensional, continuous system in the form of a Markov model.The effectiveness of the method is illustrated in several case studies under various conditions.", "target": ["Verifica di un modello di guidatore umano basato su un'architettura cognitiva e sintesi di un ADAS corretto per costruzione da esso."]} +{"source": "In contrast to the older writing system of the 19th century, modern Hawaiian orthography employs characters for long vowels and glottal stops.These extra characters account for about one-third of the phonemes in Hawaiian, so including them makes a big difference to reading comprehension and pronunciation.However, transliterating between older and newer texts is a laborious task when performed manually.We introduce two related methods to help solve this transliteration problem automatically, given that there were not enough data to train an end-to-end deep learning model.One approach is implemented, end-to-end, using 
finite state transducers (FSTs).The other is a hybrid deep learning approach which approximately composes an FST with a recurrent neural network (RNN).We find that the hybrid approach outperforms the end-to-end FST by partitioning the original problem into one part that can be modelled by hand, using an FST, and into another part, which is easily solved by an RNN trained on the available data.", "target": ["Un nuovo approccio ibrido di deep learning fornisce la migliore soluzione a un problema di dati limitati (che è importante per la conservazione della lingua hawaiana)"]} +{"source": "In many real-world settings, a learning model must perform few-shot classification: learn to classify examples from unseen classes using only a few labeled examples per class.Additionally, to be safely deployed, it should have the ability to detect out-of-distribution inputs: examples that do not belong to any of the classes.While both few-shot classification and out-of-distribution detection are popular topics,their combination has not been studied.In this work, we propose tasks for out-of-distribution detection in the few-shot setting and establish benchmark datasets, based on four popular few-shot classification datasets. Then, we propose two new methods for this task and investigate their performance.In sum, we establish baseline out-of-distribution detection results using standard metrics on new benchmark datasets and show improved results with our proposed methods.", "target": ["Studiamo quantitativamente il rilevamento di out-of-distribution nel setting few-shot, stabiliamo baseline con ProtoNet, MAML, ABML, e li miglioriamo.", "L'articolo propone due nuovi punteggi di confidence che sono più adatti per il rilevamento di out-of-distribution della classificazione few-shot e mostra che un approccio basato sulla metrica della distanza migliora le prestazioni."]} +{"source": "While modern generative models are able to synthesize high-fidelity, visually appealing images, successfully generating examples that are useful for recognition tasks remains an elusive goal.To this end, our key insight is that the examples should be synthesized to recover classifier decision boundaries that would be learned from a large amount of real examples.More concretely, we treat a classifier trained on synthetic examples as ''student'' and a classifier trained on real examples as ''teacher''.By introducing knowledge distillation into a meta-learning framework, we encourage the generative model to produce examples in a way that enables the student classifier to mimic the behavior of the teacher.To mitigate the potential gap between student and teacher classifiers, we further propose to distill the knowledge in a progressive manner, either by gradually strengthening the teacher or weakening the student.We demonstrate the use of our model-agnostic distillation approach to deal with data scarcity, significantly improving few-shot learning performance on miniImageNet and ImageNet1K benchmarks.", "target": ["Questo articolo introduce la progressive knowledge distillation per l'apprendimento di modelli generativi che sono orientati a task di riconoscimento", "Questo articolo dimostra il curriculum learning easy-to-hard per addestrare un modello generativo per migliorare la classificazione few-shot."]} +{"source": "Deep neural networks provide state-of-the-art performance for many applications of interest.Unfortunately they are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to 
the original inputs.Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models.Consequently the adversary can leverage it to attack against the deployed black-box systems. In this work, we demonstrate that the adversarial perturbation can be decomposed into two components: model-specific and data-dependent one, and it is the latter that mainly contributes to the transferability.Motivated by this understanding, we propose to craft adversarial examples by utilizing the noise reduced gradient (NRG) which approximates the data-dependent component.Experiments on various classification models trained on ImageNet demonstrates that the new approach enhances the transferability dramatically.We also find that low-capacity models have more powerful attack capability than high-capacity counterparts, under the condition that they have comparable test performance. These insights give rise to a principled manner to construct adversarial examples with high success rates and could potentially provide us guidance for designing effective defense approaches against black-box attacks.", "target": ["Proponiamo un nuovo metodo per migliorare la trasferibilità degli adversarial sample utilizzando il gradiente noise-reduced.", "Questo articolo postula che una adversarial perturbation consiste in una componente specifica del modello e in una componente specifica dei dati, e che l'amplificazione di quest'ultima è più adatta per gli adversarial attack.", "Questo articolo si concentra sul miglioramento della trasferibilità degli adversarial sample da un modello a un altro modello."]} +{"source": "We present the iterative two-pass decomposition flow to accelerate existing convolutional neural networks (CNNs). 
The proposed rank selection algorithm can effectively determine the proper ranks of the target convolutional layers for the low rank approximation.Our two-pass CP-decomposition helps prevent from the instability problem.The iterative flow makes the decomposition of the deeper networks systematic.The experiment results shows that VGG16 can be accelerated with a 6.2x measured speedup while the accuracy drop remains only 1.2%.", "target": ["Presentiamo il flusso iterativo di decomposizione CP a due step per accelerare efficacemente le reti neurali convoluzionali esistenti (CNN).", "L'articolo propone un nuovo flusso di lavoro per l'accelerazione e la compressione delle CNN e propone anche un modo per determinare il rango target di ogni layer data l'accelerazione globale target.", "Questo articolo affronta il problema dell'apprendimento di un'operazione con filtro tensore a basso rango per i layer di filtraggio nelle deep neural network (DNN)."]} +{"source": "We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bound on the Lipschitz constant of neural networks.The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming.We show how to use the sparse connectivity of a network, to significantly reduce the complexity of computation.This is specially useful for convolutional as well as pruned neural networks.We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the $\\ell_\\infty$-Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.", "target": ["Limiti superiori basati su LP sulla costante di Lipschitz delle reti neurali", "Gli autori studiano il problema della stima della costante di Lipschitz di una deep neural network con funzione di attivazione ELO, formulandolo come un problema di ottimizzazione polinomiale."]} +{"source": "Although few-shot learning research has advanced rapidly with the help of meta-learning, its practical usefulness is still limited because most of the researches assumed that all meta-training and meta-testing examples came from a single domain.We propose a simple but effective way for few-shot classification in which a task distribution spans multiple domains including previously unseen ones during meta-training.The key idea is to build a pool of embedding models which have their own metric spaces and to learn to select the best one for a particular task through multi-domain meta-learning.This simplifies task-specific adaptation over a complex task distribution as a simple selection problem rather than modifying the model with a number of parameters at meta-testing time.Inspired by common multi-task learning techniques, we let all models in the pool share a base network and add a separate modulator to each model to refine the base network in its own way.This architecture allows the pool to maintain representational diversity and each model to have domain-invariant representation as well. 
Experiments show that our selection scheme outperforms other few-shot classification algorithms when target tasks could come from many different domains.They also reveal that aggregating outputs from all constituent models is effective for tasks from unseen domains showing the effectiveness of our framework.", "target": ["Affrontiamo la classificazione multi-dominio few-shot costruendo modelli multipli per rappresentare questa complessa distribuzione di task in modo collettivo e semplificando l'adattamento specifico del task come un problema di selezione da questi modelli pre-trained.", "Questo articolo affronta la classificazione few-shot con molti domini diversi costruendo un pool di modelli di embedding per catturare feature invarianti e specifiche del dominio senza un aumento significativo del numero di parametri."]} +{"source": "Still in 2019, many scanned documents come into businesses in non-digital format.Text to be extracted from real world documents is often nestled inside rich formatting, such as tabular structures or forms with fill-in-the-blank boxes or underlines whose ink often touches or even strikes through the ink of the text itself.Such ink artifacts can severely interfere with the performance of recognition algorithms or other downstream processing tasks.In this work, we propose DeepErase, a neural preprocessor to erase ink artifacts from text images.We devise a method to programmatically augment text images with real artifacts, and use them to train a segmentation network in a weakly supervised manner.In addition to high segmentation accuracy, we show that our cleansed images achieve a significant boost in downstream recognition accuracy by popular OCR software such as Tesseract 4.0.We test DeepErase on out-of-distribution datasets (NIST SDB) of scanned IRS tax return forms and achieve double-digit improvements in recognition accuracy over baseline for both printed and handwritten text.", "target": ["Rimozione su base neurale degli artefatti di inchiostro dei documenti (sottolineature, sbavature, ecc.)
senza dati di training annotati manualmente"]} +{"source": "Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input.Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries.Therefore, they are not suitable for real-world systems where the maximum query number is limited due to cost.We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction.We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.", "target": ["Proponiamo un attacco black-box query-efficiente che utilizza l'ottimizzazione bayesiana in combinazione con la selezione del modello bayesiano per ottimizzare l'adversarial perturbation e il grado ottimale di riduzione della dimensione dello spazio di ricerca.", "Gli autori propongono di utilizzare l'ottimizzazione bayesiana con un surrogato GP per la generazione di immagini adversarial, sfruttando la struttura additiva e utilizzando la selezione bayesiana del modello per determinare una riduzione ottimale della dimensionalità."]} +{"source": "Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information.Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data:1) models must learn the complex intra-modal and cross-modal interactions for prediction and2) models must be robust to unexpected missing or noisy modalities during testing.In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels.We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors.Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction.Modality-specific generative factors are unique for each modality and contain the information required for generating data.Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets.Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance.Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning.", "target": ["Proponiamo un modello per imparare rappresentazioni multimodali fattorizzate che sono discriminative, generative e interpretabili.", "Questo articolo presenta il \"modello di fattorizzazione multimodale\" che fattorizza le rappresentazioni in fattori discriminativi multimodali condivisi e fattori generativi specifici della modalità. 
"]} +{"source": "The successful application of flexible, general learning algorithms to real-world robotics applications is often limited by their poor data-efficiency.To address the challenge, domains with more than one dominant task of interest encourage the sharing of information across tasks to limit required experiment time.To this end, we investigate compositional inductive biases in the form of hierarchical policies as a mechanism for knowledge transfer across tasks in reinforcement learning (RL).We demonstrate that this type of hierarchy enables positive transfer while mitigating negative interference.Furthermore, we demonstrate the benefits of additional incentives to efficiently decompose task solutions.Our experiments show that these incentives are naturally given in multitask learning and can be easily introduced for single objectives.We design an RL algorithm that enables stable and fast learning of structured policies and the effective reuse of both behavior components and transition data across tasks in an off-policy setting.Finally, we evaluate our algorithm in simulated environments as well as physical robot experiments and demonstrate substantial improvements in data data-efficiency over competitive baselines.", "target": ["Sviluppiamo un algoritmo gerarchico e actor-critic per il transfer compositivo attraverso la condivisione di componenti di policy e dimostriamo la specializzazione dei componenti e i relativi benefici diretti nei domini multitask, così come il suo adattamento per task singoli.", "Una combinazione di diverse tecniche di apprendimento per l'acquisizione della struttura e l'apprendimento con dati asimmetrici, utilizzati per addestrare una policy HRL.", "Gli autori introducono un framework di policy gerarchica per l'uso nel reinforcement learning sia a task singolo che multitask, e valutano l'utilità della struttura su task robotici complessi."]} +{"source": "In this paper, we study the representational power of deep neural networks (DNN) that belong to the family of piecewise-linear (PWL) functions, based on PWL activation units such as rectifier or maxout.We investigate the complexity of such networks by studying the number of linear regions of the PWL function.Typically, a PWL function from a DNN can be seen as a large family of linear functions acting on millions of such regions.We directly build upon the work of Mont´ufar et al. (2014), Mont´ufar (2017), and Raghu et al. 
(2017) by refining the upper and lower bounds on the number of linear regions for rectified and maxout networks.In addition to achieving tighter bounds, we also develop a novel method to perform exact numeration or counting of the number of linear regions with a mixed-integer linear formulation that maps the input space to output.We use this new capability to visualize how the number of linear regions change while training DNNs.", "target": ["Contiamo empiricamente il numero di regioni lineari delle reti con ReLU e raffiniamo i limiti superiori e inferiori.", "Questo articolo presenta dei limiti migliorati per il conteggio del numero di regioni lineari nelle reti ReLU."]} +{"source": "Convolutional neural networks memorize part of their training data, which is why strategies such as data augmentation and drop-out are employed to mitigate over-fitting.This paper considers the related question of “membership inference”, where the goal is to determine if an image was used during training.We consider membership tests over either ensembles of samples or individual samples.First, we show how to detect if a dataset was used to train a model, and in particular whether some validation images were used at train time.Then, we introduce a new approach to infer membership when a few of the top layers are not available or have been fine-tuned, and show that lower layers still carry information about the training samples.To support our findings, we conduct large-scale experiments on Imagenet and subsets of YFCC-100M with modern architectures such as VGG and Resnet.", "target": ["Analizziamo le proprietà di memorizzazione tramite un convnet del set di allenamento e proponiamo diversi casi d'uso in cui possiamo estrarre alcune informazioni sul training set.", "Mostra le proprietà di generalizzazione/memorizzazione delle Convnet grandi e deep e cerca di sviluppare procedure relative all'identificazione se un input di una ConvNet addestrata è stato effettivamente utilizzato per addestrare la rete."]} +{"source": "While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent works have shown that they suffer from lack of diversity or mode collapse.The theoretical work of Arora et al.
(2017a) suggests a dilemma about GANs’ statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse.By contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL-divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators).For various generator classes such as mixture of Gaussians, exponential families, and invertible and injective neural networks generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators can provably approximate the Wasserstein distance and/or KL-divergence.This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes.Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence or the Wasserstein distance, indicating that the lack of diversity in GANs may be caused by the sub-optimality in optimization instead of statistical inefficiency.", "target": ["Le GAN possono in linea di principio imparare distribuzioni in modo efficiente dal punto di vista del sample, se la classe discriminante è compatta e ha un forte potere di distinzione rispetto alla particolare classe generatrice.", "Propone la nozione di approssimabilità ristretta e fornisce un limite di complessità del sample, polinomiale nella dimensione, che è utile nello studio della mancanza di diversità nelle GAN.", "Discute sul fatto che la metrica di probabilità integrale può essere una buona approssimazione della distanza di Wasserstein sotto alcune ipotesi lievi."]} +{"source": "Understanding the optimization trajectory is critical to understand training of deep neural networks.We show how the hyperparameters of stochastic gradient descent influence the covariance of the gradients (K) and the Hessian of the training loss (H) along this trajectory.Based on a theoretical model, we predict that using a high learning rate or a small batch size in the early phase of training leads SGD to regions of the parameter space with (1) reduced spectral norm of K, and (2) improved conditioning of K and H. 
We show that the point on the trajectory after which these effects hold, which we refer to as the break-even point, is reached early during training.We demonstrate these effects empirically for a range of deep neural networks applied to multiple different tasks.Finally, we apply our analysis to networks with batch normalization (BN) layers and find that it is necessary to use a high learning rate to achieve loss smoothing effects attributed previously to BN alone.", "target": ["Nella fase iniziale del training delle deep neural network esiste un \"break-even point\" che determina le proprietà dell'intera traiettoria di ottimizzazione.", "Questo lavoro analizza l'ottimizzazione delle deep neural network considerando come gli iperparametri batch size e step size modifichino le traiettorie di apprendimento."]} +{"source": "Graph Convolution Network (GCN) has been recognized as one of the most effective graph models for semi-supervised learning, but it extracts merely the first-order or few-order neighborhood information through information propagation, which suffers performance drop-off for deeper structure.Existing approaches that deal with the higher-order neighbors tend to take advantage of adjacency matrix power.In this paper, we assume a seemly trivial condition that the higher-order neighborhood information may be similar to that of the first-order neighbors.Accordingly, we present an unsupervised approach to describe such similarities and learn the weight matrices of higher-order neighbors automatically through Lasso that minimizes the feature loss between the first-order and higher-order neighbors, based on which we formulate the new convolutional filter for GCN to learn the better node representations.Our model, called higher-order weighted GCN (HWGCN), has achieved the state-of-the-art results on a number of node classification tasks over Cora, Citeseer and Pubmed datasets.", "target": ["Proponiamo HWGCN per mescolare a diversi ordini le informazioni di neighborhood pertinenti per imparare meglio le rappresentazioni dei nodi.", "Gli autori propongono una variante di GCN, HWGCN, per considerare la convoluzione oltre i vicini ad uno step, che è paragonabile ai metodi allo stato dell'arte."]} +{"source": "The performance of deep neural networks is often attributed to their automated, task-related feature construction.It remains an open question, though, why this leads to solutions with good generalization, even in cases where the number of parameters is larger than the number of samples.Back in the 90s, Hochreiter and Schmidhuber observed that flatness of the loss surface around a local minimum correlates with low generalization error.For several flatness measures, this correlation has been empirically validated.However, it has recently been shown that existing measures of flatness cannot theoretically be related to generalization: if a network uses ReLU activations, the network function can be reparameterized without changing its output in such a way that flatness is changed almost arbitrarily.This paper proposes a natural modification of existing flatness measures that results in invariance to reparameterization.The proposed measures imply a robustness of the network to changes in the input and the hidden layers.Connecting this feature robustness to generalization leads to a generalized definition of the representativeness of data.With this, the generalization error of a model trained on representative data can be bounded by its feature robustness which depends on our novel flatness 
measure.", "target": ["Introduciamo una nuova misura di planarità ai minimi locali della superficie di loss delle deep neural network che è invariante rispetto alle riparametrizzazioni a livello di layer e colleghiamo la planarità alla robustezza delle feature e alla generalizzazione.", "Gli autori propongono una nozione di robustezza delle feature che è invariante rispetto al ridimensionamento dei pesi e discutono la relazione di questa nozione con la generalizzazione.", "Questo articolo definisce una nozione di feature-robustness e la combina con la epsilon representativeness di una funzione per descrivere una connessione tra flatness dei minimi e generalizzazione nelle deep neural network."]} +{"source": "Bayesian methods have been successfully applied to sparsify weights of neural networks and to remove structure units from the networks, e.g.neurons.We apply and further develop this approach for gated recurrent architectures.Specifically, in addition to sparsification of individual weights and neurons, we propose to sparsify preactivations of gates and information flow in LSTM.It makes some gates and information flow components constant, speeds up forward pass and improves compression.Moreover, the resulting structure of gate sparsity is interpretable and depends on the task.", "target": ["Proponiamo di sparsificare le preattivazioni dei gate e il flusso di informazioni in LSTM per renderle costanti e aumentare il livello di sparsità dei neuroni", "Questo articolo ha proposto un metodo di sparsificazione per le reti neurali ricorrenti eliminando i neuroni con preattivazioni zero per ottenere reti compatte."]} +{"source": "Improving the accuracy of numerical methods remains a central challenge in many disciplines and is especially important for nonlinear simulation problems.A representative example of such problems is fluid flow, which has been thoroughly studied to arrive at efficient simulations of complex flow phenomena.This paper presents a data-driven approach that learns to improve the accuracy of numerical solvers.The proposed method utilizes an advanced numerical scheme with a fine simulation resolution to acquire reference data.We, then, employ a neural network that infers a correction to move a coarse thus quickly obtainable result closer to the reference data.We provide insights into the targeted learning problem with different learning approaches: fully supervised learning methods with a naive and an optimized data acquisition as well as an unsupervised learning method with a differentiable Navier-Stokes solver.While our approach is very general and applicable to arbitrary partial differential equation models, we specifically highlight gains in accuracy for fluid flow simulations.", "target": ["Introduciamo un approccio di rete neurale per assistere i risolutori di equazioni differenziali parziali.", "Gli autori mirano a migliorare la precisione dei risolutori numerici addestrando una rete neurale su dati di riferimento simulati che correggono il risolutore numerico."]} +{"source": "A patient’s health information is generally fragmented across silos.Though it is technically feasible to unite data for analysis in a manner that underpins a rapid learning healthcare system, privacy concerns and regulatory barriers limit data centralization.Machine learning can be conducted in a federated manner on patient datasets with the same set of variables, but separated across sites of care.But federated learning cannot handle the situation where different data types for a givenpatient 
are separated vertically across different organizations.We call methods that enable machine learning model training on data separated by two or more degrees “confederated machine learning.”We built and evaluated a confederated machine learning model to stratify the risk of accidental falls among the elderly.", "target": ["un metodo di confederated learning che allena il modello da dati medici separati orizzontalmente e verticalmente", "Un metodo di confederated learning che impara attraverso le divisioni nei dati medici separati sia orizzontalmente che verticalmente."]} +{"source": "Existing neural networks are vulnerable to \"adversarial examples\"---created by adding maliciously designed small perturbations in inputs to induce a misclassification by the networks.The most investigated defense strategy is adversarial training which augments training data with adversarial examples.However, applying single-step adversaries in adversarial training does not support the robustness of the networks, instead, they will even make the networks to be overfitted.In contrast to the single-step, multi-step training results in the state-of-the-art performance on MNIST and CIFAR10, yet it needs a massive amount of time.Therefore, we propose a method, Stochastic Quantized Activation (SQA) that solves overfitting problems in single-step adversarial training and quickly achieves the robustness comparable to the multi-step.SQA attenuates the adversarial effects by providing random selectivity to activation functions and allows the network to learn robustness with only single-step training.Throughout the experiment, our method demonstrates the state-of-the-art robustness against one of the strongest white-box attacks as PGD training, but with much less computational cost.Finally, we visualize the learning process of the network with SQA to handle strong adversaries, which is different from existing methods.", "target": ["Questo articolo propone l'attivazione quantizzata stocastica che risolve i problemi di overfitting nell'adversarial training FGSM e raggiunge rapidamente la robustezza paragonabile al training multi-step.", "L'articolo propone un modello per migliorare l'adversarial training introducendo perturbazioni casuali nelle attivazioni di uno degli hidden layer"]} +{"source": "Neural activity is highly variable in response to repeated stimuli.We used an open dataset, the Allen Brain Observatory, to quantify the distribution of responses to repeated natural movie presentations.A large fraction of responses are best fit by log-normal distributions or Gaussian mixtures with two components.These distributions are similar to those from units in deep neural networks with dropout.Using a separate set of electrophysiological recordings, we constructed a population coupling model as a control for state-dependent activity fluctuations and found that the model residuals also show non-Gaussian distributions.We then analyzed responses across trials from multiple sections of different movie clips and observed that the noise in cortex aligns better with in-clip versus out-of-clip stimulus variations.We argue that noise is useful for generalization when it moves along representations of different exemplars in-class, similar to the structure of cortical noise.", "target": ["Studiamo la struttura del rumore nel cervello e scopriamo che può aiutare la generalizzazione spostando le rappresentazioni lungo le variazioni degli stimoli in-class."]} +{"source": "Unsupervised domain adaptation has received significant attention in
recent years.Most of existing works tackle the closed-set scenario, assuming that the source and target domains share the exactly same categories.In practice, nevertheless, a target domain often contains samples of classes unseen in source domain (i.e., unknown class).The extension of domain adaptation from closed-set to such open-set situation is not trivial since the target samples in unknown class are not expected to align with the source.In this paper, we address this problem by augmenting the state-of-the-art domain adaptation technique, Self-Ensembling, with category-agnostic clusters in target domain.Specifically, we present Self-Ensembling with Category-agnostic Clusters (SE-CC) --- a novel architecture that steers domain adaptation with the additional guidance of category-agnostic clusters that are specific to target domain.These clustering information provides domain-specific visual cues, facilitating the generalization of Self-Ensembling for both closed-set and open-set scenarios.Technically, clustering is firstly performed over all the unlabeled target samples to obtain the category-agnostic clusters, which reveal the underlying data space structure peculiar to target domain.A clustering branch is capitalized on to ensure that the learnt representation preserves such underlying structure by matching the estimated assignment distribution over clusters to the inherent cluster distribution for each target sample.Furthermore, SE-CC enhances the learnt representation with mutual information maximization.Extensive experiments are conducted on Office and VisDA datasets for both open-set and closed-set domain adaptation, and superior results are reported when comparing to the state-of-the-art approaches.", "target": ["Presentiamo un nuovo design, cioè Self-Ensembling con Category-agnostic Clusters, per domain adaptation closed-set e open-set.", "Un nuovo approccio a domain adaptation open set, dove le categorie del dominio di origine sono contenute nelle categorie del dominio target al fine di filtrare le categorie outlier e consentire l'adattamento all'interno delle classi condivise."]} +{"source": "We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization.Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics.As such, they can be a powerful tool for unsupervised representation learning from video or graph-structured data.We cast training Spectral Inference Networks as a bilevel optimization problem, which allows for online learning of multiple eigenfunctions.We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets.Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators and can discover interpretable representations from video in a fully unsupervised manner.", "target": ["Mostriamo come imparare le decomposizioni spettrali degli operatori lineari con il deep learning, e lo usiamo per l'apprendimento unsupervised senza un modello generativo.", "Gli autori propongono di utilizzare un framework di deep learning per risolvere il calcolo degli autovettori più grandi.", "Questo articolo presenta una framework per imparare le autofunzioni attraverso un processo stocastico e propone di affrontare la sfida del calcolo delle autofunzioni in un contesto su larga scala 
approssimando e poi usando un processo di ottimizzazione stocastica a due fasi. "]} +{"source": "The Tensor-Train factorization (TTF) is an efficient way to compress large weight matrices of fully-connected layers and recurrent layers in recurrent neural networks (RNNs).However, high Tensor-Train ranks for all the core tensors of parameters need to be element-wise fixed, which results in an unnecessary redundancy of model parameters.This work applies Riemannian stochastic gradient descent (RSGD) to train core tensors of parameters in the Riemannian Manifold before finding vectors of lower Tensor-Train ranks for parameters.The paper first presents the RSGD algorithm with a convergence analysis and then tests it on more advanced Tensor-Train RNNs such as bi-directional GRU/LSTM and Encoder-Decoder RNNs with a Tensor-Train attention model.The experiments on digit recognition and machine translation tasks suggest the effectiveness of the RSGD algorithm for Tensor-Train RNNs.", "target": ["Applicazione dell'algoritmo Riemannian SGD (RSGD) per il training di Tensor-Train RNNs per ridurre ulteriormente i parametri del modello.", "L'articolo propone di utilizzare l'algoritmo dello stochastic gradient Riemanniano per l'apprendimento di tensori a basso rango nelle deep neural network.", "Propone un algoritmo per l'ottimizzazione delle reti neurali parametrizzate dalla decomposizione Tensor Train basata sull'ottimizzazione Riemanniana e sull'adattamento del rango, e progetta un'architettura TT LSTM bidirezionale."]} +{"source": "In this paper, we consider the problem of learning control policies that optimize a reward function while satisfying constraints due to considerations of safety, fairness, or other costs.We propose a new algorithm - Projection Based Constrained Policy Optimization (PCPO), an iterative method for optimizing policies in a two-step process - the first step performs an unconstrained update while the second step reconciles the constraint violation by projecting the policy back onto the constraint set.We theoretically analyze PCPO and provide a lower bound on reward improvement, as well as an upper bound on constraint violation for each policy update.We further characterize the convergence of PCPO with projection based on two different metrics - L2 norm and Kullback-Leibler divergence.Our empirical results over several control tasks demonstrate that our algorithm achieves superior performance, averaging more than 3.5 times less constraint violation and around 15% higher reward compared to state-of-the-art methods.", "target": ["Proponiamo un nuovo algoritmo che impara policy che soddisfano i vincoli, e forniamo un'analisi teorica e una dimostrazione empirica nel contesto del reinforcement learning con vincoli.", "Questo articolo introduce un algoritmo di ottimizzazione delle policy vincolate che utilizza un processo di ottimizzazione a due fasi, dove le policy che non soddisfano il vincolo possono essere riproiettate nell'insieme dei vincoli."]} +{"source": "Deep networks face challenges of ensuring their robustness against inputs that cannot be effectively represented by information learned from training data.We attribute this vulnerability to the limitations inherent to activation-based representation.To complement the learned information from activation-based representation, we propose utilizing a gradient-based representation that explicitly focuses on missing information.In addition, we propose a directional constraint on the gradients as an objective during training to improve the
characterization of missing information.To validate the effectiveness of the proposed approach, we compare the anomaly detection performance of gradient-based and activation-based representations.We show that the gradient-based representation outperforms the activation-based representation by 0.093 in CIFAR-10 and 0.361 in CURE-TSR datasets in terms of AUROC averaged over all classes.Also, we propose an anomaly detection algorithm that uses the gradient-based representation, denoted as GradCon, and validate its performance on three benchmarking datasets.The proposed method outperforms the majority of the state-of-the-art algorithms in CIFAR-10, MNIST, and fMNIST datasets with an average AUROC of 0.664, 0.973, and 0.934, respectively.", "target": ["Proponiamo una rappresentazione basata sul gradiente per caratterizzare le informazioni che le deep neural network non hanno imparato.", "Gli autori presentano la creazione di rappresentazioni basate sui gradienti rispetto ai pesi per integrare le informazioni mancanti dal dataset di training per le deep neural network."]} +{"source": "Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan setting, machine condition, patients’ characteristics, surrounding environment, etc.However, existing deep learning based artifact reduction methods are restricted by their training set with specific predetermined artifact type and pattern.As such, they have limited clinical adoption.In this paper, we introduce a “Zero-Shot” medical image Artifact Reduction (ZSAR) framework, which leverages the power of deep learning but without using general pre-trained networks or any clean image reference.Specifically, we utilize the low internal visual entropy of an image and train a light-weight image-specific artifact reduction network to reduce artifacts in an image at test-time.We use Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) as vehicles to show that ZSAR can reduce artifacts better than state-of-the-art both qualitatively and quantitatively, while using shorter execution time.To the best of our knowledge, this is the first deep learning framework that reduces artifacts in medical images without using a priori training set.", "target": ["Introduciamo un framework zero-shot per la riduzione degli artefatti nelle immagini mediche, che sfrutta la potenza del deep learning, ma senza utilizzare reti generali pre-trained o qualsiasi riferimento ad un'immagine pulita. 
"]} +{"source": "Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks.For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image.In this work we adapt the information bottleneck concept for attribution.By adding noise to intermediate feature maps we restrict the flow of information and can quantify (in bits) how much information image regions provide.We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings.The method’s information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision.", "target": ["Applichiamo il concetto di informational bottleneck all'attribution.", "L'articolo propone un nuovo metodo basato sulla perturbazione per il calcolo delle mappe di attribution/saliency per classificatori di immagini basati su deep neural network, iniettando rumore artificiale in un layer iniziale della rete."]} +{"source": "Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling.Sparsity is a technique to reduce compute and memory requirements of deep learning models.Sparse RNNs are easier to deploy on devices and high-end server processors.Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms.In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization with pruning to create blocks of weights with zeros.Using these techniques, we can create block-sparse RNNs with sparsity ranging from 80% to 90% with a small loss in accuracy.This technique allows us to reduce the model size by roughly 10x.Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count.Our technique works with a variety of block sizes up to 32x32.Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.", "target": ["Mostriamo che le RNN possono essere pruned per indurre la sparsità dei blocchi che migliora la velocità per le operazioni sparse sull'hardware esistente.", "Gli autori propongono un approccio di pruning di sparsità a blocchi per comprimere le RNN, usando il gruppo LASSO per promuovere la sparsità e per potare, ma con una schedule molto specializzata per quanto riguarda il peso di pruning e il pruning stesso."]} +{"source": "Value iteration networks are an approximation of the value iteration (VI) algorithm implemented with convolutional neural networks to make VI fully differentiable.In this work, we study these networks in the context of robot motion planning, with a focus on applications to planetary rovers.The key challenging task in learning-based motion planning is to learn a transformation from terrain observations to a suitable navigation reward function.In order to deal with complex terrain observations and policy learning, we propose a value iteration recurrence, referred to 
as the soft value iteration network (SVIN).SVIN is designed to produce more effective training gradients through the value iteration network.It relies on a soft policy model, where the policy is represented with a probability distribution over all possible actions, rather than a deterministic policy that returns only the best action.We demonstrate the effectiveness of the proposed method in robot motion planning scenarios.In particular, we study the application of SVIN to very challenging problems in planetary rover navigation and present early training results on data gathered by the Curiosity rover that is currently operating on Mars.", "target": ["Proponiamo un miglioramento per le value iteration network, con applicazioni alla pianificazione del percorso del rover planetario.", "Questo articolo apprende una funzione di reward basata sulle traiettorie degli esperti utilizzando un modulo di iterazione del valore per rendere il planning step differenziabile"]} +{"source": "Transformer networks have led to important progress in language modeling and machine translation.These models include two consecutive modules, a feed-forward layer and a self-attention layer.The latter allows the network to capture long term dependencies and is often regarded as the key ingredient in the success of Transformers.Building upon this intuition, we propose a new model that solely consists of attention layers.More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role as the feed-forward layer.Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer.Our evaluation shows the benefits brought by our model on standard character and word level language modeling benchmarks.", "target": ["Un nuovo layer di attention che combina la self-attention e i sub-layer feed-forward delle reti Transformer.", "Questo articolo propone una modifica al modello Transformer incorporando l'attention sui vettori di memoria \"persistenti\" nel layer di self-attention, ottenendo prestazioni alla pari con i modelli esistenti e utilizzando meno parametri."]} +{"source": "This work views neural networks as data generating systems and applies anomalous pattern detection techniques on that data in order to detect when a network is processing a group of anomalous inputs. Detecting anomalies is a critical component for multiple machine learning problems including detecting the presence of adversarial noise added to inputs.More broadly, this work is a step towards giving neural networks the ability to detect groups of out-of-distribution samples. This work introduces \"Subset Scanning\" methods from the anomalous pattern detection domain to the task of detecting anomalous inputs to neural networks. Subset Scanning allows us to answer the question: \"Which subset of inputs have larger-than-expected activations at which subset of nodes?\" Framing the adversarial detection problem this way allows us to identify systematic patterns in the activation space that span multiple adversarially noised images. Such images are \"weird together\". Leveraging this common anomalous pattern, we show increased detection power as the proportion of noised images increases in a test set.
Detection power and accuracy results are provided for targeted adversarial noise added to CIFAR-10 images on a 20-layer ResNet using the Basic Iterative Method attack.", "target": ["Troviamo efficacemente un sottoinsieme di immagini che hanno attivazioni più alte del previsto per alcuni sottoinsiemi di nodi. Queste immagini appaiono più anomale e più facili da rilevare se viste come un gruppo.", "L'articolo ha proposto uno schema per rilevare la presenza di input anomali basato su un approccio di \"subset scanning\" per rilevare attivazioni anomale nella rete di deep learning."]} +{"source": "Stability is a fundamental property of dynamical systems, yet to this date it has had little bearing on the practice of recurrent neural networks.In this work, we conduct a thorough investigation of stable recurrent models.Theoretically, we prove stable recurrent neural networks are well approximated by feed-forward networks for the purpose of both inference and training by gradient descent.Empirically, we demonstrate stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks.Taken together, these findings shed light on the effective power of recurrent networks and suggest much of sequence learning happens, or can be made to happen, in the stable regime.Moreover, our results help to explain why in many cases practitioners succeed in replacing recurrent models by feed-forward models.", "target": ["I modelli ricorrenti stabili possono essere approssimati da reti feed-forward e performano come i modelli instabili su task di riferimento.", "Studia la stabilità delle RNN e l'indagine della normalizzazione spettrale alle previsioni sequenziali."]} +{"source": "Weight-sharing plays a significant role in the success of many deep neural networks, by increasing memory efficiency and incorporating useful inductive priors about the problem into the network.But understanding how weight-sharing can be used effectively in general is a topic that has not been studied extensively.Chen et al. 
(2015) proposed HashedNets, which augments a multi-layer perceptron with a hash table, as a method for neural network compression.We generalize this method into a framework (ArbNets) that allows for efficient arbitrary weight-sharing, and use it to study the role of weight-sharing in neural networks.We show that common neural networks can be expressed as ArbNets with different hash functions.We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network.", "target": ["Si studia il ruolo del weight sharing nelle reti neurali usando le funzioni hash, scoprendo che una funzione hash equilibrata e deterministica aiuta le prestazioni della rete.", "Si propone ArbNet per studiare weight sharing in modo più sistematico definendo la funzione di weight sharing come una funzione hash."]} +{"source": "We introduce Neural Markov Logic Networks (NMLNs), a statistical relational learning system that borrows ideas from Markov logic.Like Markov Logic Networks (MLNs), NMLNs are an exponential-family model for modelling distributions over possible worlds, but unlike MLNs, they do not rely on explicitly specified first-order logic rules.Instead, NMLNs learn an implicit representation of such rules as a neural network that acts as a potential function on fragments of the relational structure.Interestingly, any MLN can be represented as an NMLN.Similarly to recently proposed Neural theorem provers (NTPs) (Rocktaschel at al. 2017), NMLNs can exploit embeddings of constants but, unlike NTPs, NMLNs work well also in their absence.This is extremely important for predicting in settings other than the transductive one.We showcase the potential of NMLNs on knowledge-base completion tasks and on generation of molecular (graph) data.", "target": ["Introduciamo un sistema di apprendimento statistico relazionale che prende in prestito idee dalla logica di Markov ma impara una rappresentazione implicita delle regole come una rete neurale.", "L'articolo fornisce un'estensione alle reti logiche di Markov rimuovendo la loro dipendenza da regole logiche di primo ordine predefinite per modellare più domini nei task di completamento delle knowledge base."]} +{"source": "Using variational Bayes neural networks, we develop an algorithm capable of accumulating knowledge into a prior from multiple different tasks.This results in a rich prior capable of few-shot learning on new tasks.The posterior can go beyond the mean field approximation and yields good uncertainty on the performed experiments.Analysis on toy tasks show that it can learn from significantly different tasks while finding similarities among them.Experiments on Mini-Imagenet reach state of the art with 74.5% accuracy on 5 shot learning.Finally, we provide two new benchmarks, each showing a failure mode of existing meta learning algorithms such as MAML and prototypical Networks.", "target": ["Un metodo scalabile per l'apprendimento di un prior espressivo sulle reti neurali in task multipli.", "L'articolo presenta un metodo per addestrare un modello probabilistico per il Multitask Transfer Learning introducendo una variabile latente per task per catturare la comunanza nelle istanze del task.", "Il lavoro propone un approccio variazionale al meta learning che impiega variabili latenti corrispondenti a dataset specifici del task.", "Mira ad apprendere un prior sulle reti neurali per task multipli."]} +{"source": 
"Sequential data often originates from diverse environments.Across them exist both shared regularities and environment specifics.To learn robust cross-environment descriptions of sequences we introduce disentangled state space models (DSSM).In the latent space of DSSM environment-invariant state dynamics is explicitly disentangled from environment-specific information governing that dynamics.We empirically show that such separation enables robust prediction, sequence manipulation and environment characterization.We also propose an unsupervised VAE-based training procedure to learn DSSM as Bayesian filters.In our experiments, we demonstrate state-of-the-art performance in controlled generation and prediction of bouncing ball video sequences across varying gravitational influences.", "target": ["MODELLI DI SPAZIO DI STATO DISENTANGLED ", "L'articolo presenta un modello generativo dello spazio di stato che utilizza una variabile latente globale E per catturare le informazioni specifiche dell'ambiente."]} +{"source": "In this work, we approach one-shot and few-shot learning problems as methods for finding good prototypes for each class, where these prototypes are generalizable to new data samples and classes.We propose a metric learner that learns a Bregman divergence by learning its underlying convex function.Bregman divergences are a good candidate for this framework given they are the only class of divergences with the property that the best representative of a set of points is given by its mean.We propose a flexible extension to prototypical networks to enable joint learning of the embedding and the divergence, while preserving computational efficiency.Our preliminary results are comparable with the prior work on the Omniglot and Mini-ImageNet datasets, two standard benchmarks for one-shot and few-shot learning.We argue that our model can be used for other tasks that involve metric learning or tasks that require approximate convexity such as structured prediction and data completion.", "target": ["Apprendimento della divergenza di Bregman per few-shot learning."]} +{"source": "Motivated by the flexibility of biological neural networks whose connectivity structure changes significantly during their lifetime,we introduce the Unrestricted Recursive Network (URN) and demonstrate that it can exhibit similar flexibility during training via gradient descent.We show empirically that many of the different neural network structures commonly used in practice today (including fully connected, locally connected and residual networks of differ-ent depths and widths) can emerge dynamically from the same URN.These different structures can be derived using gradient descent on a single general loss function where the structure of the data and the relative strengths of various regulator terms determine the structure of the emergent network.We show that this loss function and the regulators arise naturally when considering the symmetries of the network as well as the geometric properties of the input data.", "target": ["Introduciamo un framework di rete che può modificare la sua struttura durante il training e mostriamo che può convergere a vari archetipi di rete ML come MLP e LCN."]} +{"source": "We present CROSSGRAD , a method to use multi-domain training data to learn a classifier that generalizes to new domains.CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain.Most existing domain adaptation methods attempt to erase domain signals using 
techniques like domain adversarial training.In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains.We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances.CROSSGRAD jointly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives.This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions.Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and (2) data augmentation is a more stable and accurate method than domain adversarial training.", "target": ["Data augmentation guidata dal dominio fornisce un metodo robusto e stabile di generalizzazione del dominio", "Questo articolo propone un approccio di generalizzazione del dominio attraverso data augmentation dipendente dal dominio", "Gli autori introducono il metodo CrossGrad, che allena sia un task di classificazione delle label che un task di classificazione del dominio."]} +{"source": "We present sketch-rnn, a recurrent neural network able to construct stroke-based drawings of common objects.The model is trained on a dataset of human-drawn images representing many different classes.We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format.", "target": ["Studiamo un'alternativa ai tradizionali approcci di modellazione delle immagini tramite pixel, e proponiamo un modello generativo per le immagini vettoriali.", "Questo articolo introduce un'architettura di rete neurale per la generazione di disegni ispirata all'autoencoder variazionale."]} +{"source": "Wilson et al. (2017) showed that, when the stepsize schedule is properly designed, stochastic gradient generalizes better than ADAM (Kingma & Ba, 2014).In light of recent work on hypergradient methods (Baydin et al., 2018), we revisit these claims to see if such methods close the gap between the most popular optimizers.As a byproduct, we analyze the true benefit of these hypergradient methods compared to more classical schedules, such as the fixed decay of Wilson et al. (2017).In particular, we observe they are of marginal help since their performance varies significantly when tuning their hyperparameters.Finally, as robustness is a critical quality of an optimizer, we provide a sensitivity analysis of these gradient based optimizers to assess how challenging their tuning is.", "target": ["Forniamo uno studio che cerca di vedere come il recente adattamento del learning rate online estende la conclusione fatta da Wilson et al. 
2018 sui metodi con gradiente adattivo, insieme al confronto e all'analisi della sensibilità.", "Riporta i risultati dei test di diversi metodi relativi alla regolazione dello step size, tra cui SGD standard, SGD con momento Nesterov e ADAM e confronta questi metodi con ipergradiente e senza."]} +{"source": "Despite an ever growing literature on reinforcement learning algorithms and applications, much less is known about their statistical inference.In this paper, we investigate the large-sample behaviors of the Q-value estimates with closed-form characterizations of the asymptotic variances.This allows us to efficiently construct confidence regions for Q-value and optimal value functions, and to develop policies to minimize their estimation errors.This also leads to a policy exploration strategy that relies on estimating the relative discrepancies among the Q estimates.Numerical experiments show superior performances of our exploration strategy than other benchmark approaches.", "target": ["Indaghiamo il comportamento su grandi sample delle stime dei valori Q e abbiamo proposto una strategia di esplorazione efficiente che si basa sulla stima delle discrepanze relative tra le stime dei valori Q.", "Questo articolo presenta un algoritmo di pura esplorazione per il reinforcement learning basato su un'analisi asintotica dei valori Q e la loro convergenza alla distribuzione limite centrale, superando gli algoritmi di esplorazione di riferimento."]} +{"source": "We perform completely unsupervised one-sided image to image translation between a source domain $X$ and a target domain $Y$ such that we preserve relevant underlying shared semantics (e.g., class, size, shape, etc). In particular, we are interested in a more difficult case than those typically addressed in the literature, where the source and target are ``far\" enough that reconstruction-style or pixel-wise approaches fail.We argue that transferring (i.e., \\emph{translating}) said relevant information should involve both discarding source domain-specific information while incorporate target domain-specific information, the latter of which we model with a noisy prior distribution. In order to avoid the degenerate case where the generated samples are only explained by the prior distribution, we propose to minimize an estimate of the mutual information between the generated sample and the sample from the prior distribution.We discover that the architectural choices are an important factor to consider in order to preserve the shared semantic between $X$ and $Y$. We show state of the art results on the MNIST to SVHN task for unsupervised image to image translation.", "target": ["Addestriamo una rete di traduzione da immagine a immagine che prende come input l'immagine sorgente e un sample da una distribuzione prior per generare un sample dalla distribuzione target", "Questo articolo formalizza il problema della traduzione non supervisionata e propone un framework GAN aumentato che utilizza l'informazione mutua per evitare il caso degenere", "Questo articolo formula il problema della traduzione non supervisionata di immagini one-to-many e affronta il problema minimizzando l'informazione mututa."]} +{"source": "Identifying salient points in images is a crucial component for visual odometry, Structure-from-Motion or SLAM algorithms.Recently, several learned keypoint methods have demonstrated compelling performance on challenging benchmarks. 
However, generating consistent and accurate training data for interest-point detection in natural images still remains challenging, especially for human annotators.We introduce IO-Net (i.e. InlierOutlierNet), a novel proxy task for the self-supervision of keypoint detection, description and matching.By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that are able to simultaneously self-supervise keypoint description and improve keypoint matching.Second, we introduce KeyPointNet, a keypoint-network architecture that is especially amenable to robust keypoint detection and description.We design the network to allow local keypoint aggregation to avoid artifacts due to spatial discretizations commonly used for this task, and we improve fine-grained keypoint descriptor performance by taking advantage of efficient sub-pixel convolutions to upsample the descriptor feature-maps to a higher operating resolution.Through extensive experiments and ablative analysis, we show that the proposed self-supervised keypoint learning method greatly improves the quality of feature matching and homography estimation on challenging benchmarks over the state-of-the-art.", "target": ["Si impara ad estrarre key point distinguibili da una proxy task, con rifiuto di outlier.", "Questo articolo è dedicato al self-supervised learning di feature locali utilizzando Neural Guided RANSAC come un ulteriore fornitore di loss ausiliario per migliorare l'interpolazione dei descrittori."]} +{"source": "We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks, which are tasks where multiple agents must work together to achieve a goal they could not individually.Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.Thus, we propose to incentivize agents to take (joint) actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent.We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy.While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken.We validate our approach in robotic bimanual manipulation tasks with sparse rewards; we find that our approach yields more efficient learning than both1) training with only the sparse reward and2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior.Videos are available on the project webpage: https://sites.google.com/view/iclr2020-synergistic.", "target": ["Proponiamo una formulazione di motivazione intrinseca che è adatta come bias di esplorazione in task sinergici a più agenti con reward sparse, incoraggiando gli agenti a influenzare l'ambiente in modi che non sarebbero stati possibili se avessero agito individualmente.", "L'articolo si concentra sull'uso della motivazione intrinseca per migliorare il processo di esplorazione degli agenti di reinforcement learning in task che richiedono la presenza di più agenti."]} +{"source": "A general graph-structured neural network architecture operates on graphs through two core components: (1) complex enough message functions; (2) a fixed information 
aggregation process.In this paper, we present the Policy Message Passing algorithm, which takes a probabilistic perspective and reformulates the whole information aggregation as stochastic sequential processes.The algorithm works on a much larger search space, utilizes reasoning history to perform inference, and is robust to noisy edges.We apply our algorithm to multiple complex graph reasoning and prediction tasks and show that our algorithm consistently outperforms state-of-the-art graph-structured models by a significant margin.", "target": ["Un algoritmo di inferenza probabilistica guidato da una rete neurale per modelli strutturati come un grafo", "Questo articolo introduce il policy message passing, una graph neural network con un meccanismo di inferenza che assegna messaggi agli archi in modo ricorrente, mostrando prestazioni competitive su task di ragionamento visivo."]} +{"source": "Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts.Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without finding the right balance between tasks.We present a novel gradient normalization (GradNorm) technique which automatically balances the multitask loss function by directly tuning the gradients to equalize task training rates.We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting over single networks, static baselines, and other adaptive multitask loss balancing techniques.GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter $\\alpha$.Thus, what was once a tedious search process which incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks.Ultimately, we hope to demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.", "target": ["Mostriamo come è possibile aumentare le prestazioni in una rete multitask facendo tuning di una funzione di loss adattiva multitask che viene appresa attraverso il bilanciamento diretto dei gradienti della rete.", "Questo lavoro propone uno schema di update dinamico dei pesi che aggiorna i pesi per le diverse loss dei task durante il training, facendo uso dei rapporti di loss dei diversi task."]} +{"source": "Image segmentation aims at grouping pixels that belong to the same object or region.At the heart of image segmentation lies the problem of determining whether a pixel is inside or outside a region, which we denote as the \"insideness\" problem.Many Deep Neural Networks (DNNs) variants excel in segmentation benchmarks, but regarding insideness, they have not been well visualized or understood: What representations do DNNs use to address the long-range relationships of insideness?How do architectural choices affect the learning of these representations?In this paper, we take the reductionist approach by analyzing DNNs solving the insideness problem in isolation, i.e. 
determining the inside of closed (Jordan) curves.We demonstrate analytically that state-of-the-art feed-forward and recurrent architectures can implement solutions of the insideness problem for any given curve.Yet, only recurrent networks could learn these general solutions when the training enforced a specific \"routine\" capable of breaking down the long-range relationships.Our results highlights the need for new training strategies that decompose the learning into appropriate stages, and that lead to the general class of solutions necessary for DNNs to understand insideness.", "target": ["Le DNN per la segmentazione delle immagini possono implementare soluzioni per il problema dell'insidie, ma solo alcune reti ricorrenti potrebbero impararle con un tipo specifico di supervisione.", "Questo articolo introduce l'insideness per studiare la semantic segmentation nell'era del deep learning, e i risultati possono aiutare i modelli a generalizzare meglio."]} +{"source": "We address the challenging problem of deep representation learning--the efficient adaption of a pre-trained deep network to different tasks.Specifically, we propose to explore gradient-based features.These features are gradients of the model parameters with respect to a task-specific loss given an input sample.Our key innovation is the design of a linear model that incorporates both gradient features and the activation of the network.We show that our model provides a local linear approximation to a underlying deep model, and discuss important theoretical insight.Moreover, we present an efficient algorithm for the training and inference of our model without computing the actual gradients.Our method is evaluated across a number of representation learning tasks on several datasets and using different network architectures.We demonstrate strong results in all settings.And our results are well-aligned with our theoretical insight.", "target": ["Dato un modello pre-trained, abbiamo esplorato i gradienti per sample dei parametri del modello rispetto a una loss specifica del task, e costruito un modello lineare che combina i gradienti dei parametri del modello e l'attivazione del modello.", "Questo articolo propone di utilizzare i gradienti di layer specifici di reti convoluzionali come feature in un modello linearizzato per il transfer learning e fast adaptation."]} +{"source": "Recovering 3D geometry shape, albedo and lighting from a single image has wide applications in many areas, which is also a typical ill-posed problem.In order to eliminate the ambiguity, face prior knowledge like linear 3D morphable models (3DMM) learned from limited scan data are often adopted to the reconstruction process.However, methods based on linear parametric models cannot generalize well for facial images in the wild with various ages, ethnicity, expressions, poses, and lightings.Recent methods aim to learn a nonlinear parametric model using convolutional neural networks (CNN) to regress the face shape and texture directly.However, the models were only trained on a dataset that is generated from a linear 3DMM.Moreover, the identity and expression representations are entangled in these models, which hurdles many facial editing applications.In this paper, we train our model with adversarial loss in a semi-supervised manner on hybrid batches of unlabeled and labeled face images to exploit the value of large amounts of unlabeled face images from unconstrained photo collections.A novel center loss is introduced to make sure that different facial images 
from the same person have the same identity shape and albedo.Besides, our proposed model disentangles identity, expression, pose, and lighting representations, which improves the overall reconstruction performance and facilitates facial editing applications, e.g., expression transfer.Comprehensive experiments demonstrate that our model produces high-quality reconstruction compared to state-of-the-art methods and is robust to various expression, pose, and lighting conditions.", "target": ["Addestriamo il nostro modello di ricostruzione facciale con loss adversarial in modo semi-supervised su batch ibride composte da immagini facciali non annotate e annotate per sfruttare il valore di grandi quantità di immagini facciali non annotate da collezioni fotografiche non vincolate.", "Questo articolo propone un processo di training semi-supervised e adversarial per ottenere rappresentazioni disentangled non lineari da un'immagine del viso con funzioni di loss, raggiungendo prestazioni allo stato dell'arte nella ricostruzione del viso."]} +{"source": "Human conversations naturally evolve around related entities and connected concepts, while may also shift from topic to topic.This paper presents ConceptFlow, which leverages commonsense knowledge graphs to explicitly model such conversation flows for better conversation response generation.ConceptFlow grounds the conversation inputs to the latent concept space and represents the potential conversation flow as a concept flow along the commonsense relations.The concept is guided by a graph attention mechanism that models the possibility of the conversation evolving towards different concepts.The conversation response is then decoded using the encodings of both utterance texts and concept flows, integrating the learned conversation structure in the concept space.Our experiments on Reddit conversations demonstrate the advantage of ConceptFlow over previous commonsense aware dialog models and fine-tuned GPT-2 models, while using much fewer parameters but with explicit modeling of conversation structures.", "target": ["Questo articolo presenta ConceptFlow che modella esplicitamente il flusso di conversazione nel commonsense knowledge graph per una migliore generazione di conversazioni.", "L'articolo propone un sistema per generare una risposta single-turn ad un enunciato in un ambiente di dialogo open-domain usando la diffiusione nei vicini dei grounded concept."]} +{"source": "Biological neural networks face homeostatic and resource constraints that restrict the allowed configurations of connection weights.If a constraint is tight it defines a very small solution space, and the size of these constraint spaces determines their potential overlap with the solutions for computational tasks.We study the geometry of the solution spaces for constraints on neurons' total synaptic weight and on individual synaptic weights, characterizing the connection degrees (numbers of partners) that maximize the size of these solution spaces.We then hypothesize that the size of constraints' solution spaces could serve as a cost function governing neural circuit development.We develop analytical approximations and bounds for the model evidence of the maximum entropy degree distributions under these cost functions.We test these on a published electron microscopic connectome of an associative learning center in the fly brain, finding evidence for a developmental progression in circuit structure.", "target": ["Esaminiamo l'ipotesi che l'entropia degli spazi di soluzione per i 
vincoli sui pesi sinaptici (la \"flessibilità\" del vincolo) potrebbe servire come funzione di costo per lo sviluppo dei circuiti neurali."]} +{"source": "In this preliminary work, we study the generalization properties of infinite ensembles of infinitely-wide neural networks. Amazingly, this model family admits tractable calculations for many information-theoretic quantities. We report analytical and empirical investigations in the search for signals that correlate with generalization.", "target": ["Gli insiemi infiniti di reti neurali infinitamente larghe sono una famiglia di modelli interessante dal punto di vista della teoria dell'informazione."]} +{"source": "Learning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks.There are two main paradigms for learning such representations: (1) alignment, which maps different independently trained monolingual representations into a shared space, and (2) joint training, which directly learns unified multilingual representations using monolingual and cross-lingual objectives jointly.In this paper, we first conduct direct comparisons of representations learned using both of these methods across diverse cross-lingual tasks.Our empirical results reveal a set of pros and cons for both methods, and show that the relative performance of alignment versus joint training is task-dependent.Stemming from this analysis, we propose a simple and novel framework that combines these two previously mutually-exclusive approaches.Extensive experiments on various tasks demonstrate that our proposed framework alleviates limitations of both approaches, and outperforms existing methods on the MUSE bilingual lexicon induction (BLI) benchmark.We further show that our proposed framework can generalize to contextualized representations and achieves state-of-the-art results on the CoNLL cross-lingual NER benchmark.", "target": ["Conduciamo uno studio comparativo del cross-lingual alignment rispetto al joint training e uniamo questi due paradigmi precedentemente mantenuti separati in un nuovo framework.", "Questo articolo confronta gli approcci al bilingual lexicon induction e mostra quale metodo si comporta meglio nel lessico, nell'induzione e nei task di NER e MT."]} +{"source": "Large number of weights in deep neural networks make the models difficult to be deployed in low memory environments such as, mobile phones, IOT edge devices as well as \"inferencing as a service\" environments on the cloud. Prior work has considered reduction in the size of the models, through compression techniques like weight pruning, filter pruning, etc. 
or through low-rank decomposition of the convolution layers.In this paper, we demonstrate the use of multiple techniques to achieve not only higher model compression but also reduce the compute resources required during inferencing.We do filter pruning followed by low-rank decomposition using Tucker decomposition for model compression.We show that our approach achieves upto 57\\% higher model compression when compared to either Tucker Decomposition or Filter pruning alone at similar accuracy for GoogleNet.Also, it reduces the Flops by upto 48\\% thereby making the inferencing faster.", "target": ["Combinazione di tecniche di compressione del modello ortogonale per ottenere una riduzione significativa della dimensione del modello e del numero di flop richiesti durante l'inferenza.", "Questo articolo propone di combinare la decomposizione di Tucker con il pruning dei filtri."]} +{"source": "We review the limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries, and introduce JAUNE: a set of criteria for what a good metric should behave like and propose concrete ways to use recent Transformers-based Language Models to assess reference summaries against hypothesis summaries.", "target": ["Introduce JAUNE: una metodologia per sostituire il punteggio BLEU e ROUGE con valutatori multidimensionali e basati su modelli per la valutazione dei riassunti", "Questo articolo propone una nuova metrica JAUNE per la valutazione di machine translation e dei sistemi di summarization, mostrando che il loro modello corrisponde meglio alla ground truth rispetto a BLEU."]} +{"source": "This paper presents a new Graph Neural Network (GNN) type using feature-wise linear modulation (FiLM).Many standard GNN variants propagate information along the edges of a graph by computing ``messages'' based only on the representation of the source of each edge.In GNN-FiLM, the representation of the target node of an edge is additionally used to compute a transformation that can be applied to all incoming messages, allowing feature-wise modulation of the passed information.Results of experiments comparing different GNN architectures on three tasks from the literature are presented, based on re-implementations of baseline methods.Hyperparameters for all methods were found using extensive search, yielding somewhat surprising results: differences between baseline models are smaller than reported in the literature.Nonetheless, GNN-FiLM outperforms baseline methods on a regression task on molecular graphs and performs competitively on other tasks.", "target": ["nuovo formalismo GNN + esperimenti estesi; dimostrando che le differenze tra GGNN/GCN/GAT sono minori di quanto si era pensato", "L'articolo propone una nuova architettura Graph Neural Network che utilizza la Feature-wise Linear Modulation per condizionare il message passing tra i nodi di origine e di destinazione in base alla rappresentazione dei nodi di destinazione."]} +{"source": "To deal simultaneously with both, the attributed network embedding and clustering, we propose a new model.It exploits both content and structure information, capitalising on their simultaneous use.The proposed model relies on the approximation of the relaxed continuous embedding solution by the true discrete clustering one.Thereby, we show that incorporating an embedding representation provides simpler and more interpretable solutions.Experiment results demonstrate that the proposed algorithm performs better, in terms of clustering 
and embedding, than the state-of-art algorithms, including deep learning methods devoted to similar tasks for attributed network datasets with different proprieties.", "target": ["Questo articolo propone un nuovo framework di decomposizione della matrice per l'embedding simultaneo dei dati di rete e il clustering.", "Questo articolo propone un algoritmo per eseguire insieme l'embedding della rete di attributi e il clustering."]} +{"source": "We propose a learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis.The goal of our method is to generate photo-realistic re-renderings of reconstructed objects for virtual and augmented reality applications (e.g., virtual showrooms, virtual tours and sightseeing, the digital inspection of historical artifacts).A core component of our work is the handling of view-dependent effects.Specifically, we directly train an object-specific deep neural network to synthesize the view-dependent appearance of an object.As input data we are using an RGB video of the object.This video is used to reconstruct a proxy geometry of the object via multi-view stereo.Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.This warping assumes diffuse surfaces, in case of view-dependent effects, such as specular highlights, it leads to artifacts.To this end, we propose EffectsNet, a deep neural network that predicts view-dependent effects.Based on these estimations, we are able to convert observed images to diffuse images.These diffuse images can be projected into other views.In the target view, our pipeline reinserts the new view-dependent effects.To composite multiple reprojected images to a final output, we learn a composition network that outputs photo-realistic results.Using this image-guided approach, the network does not have to allocate capacity on ``remembering'' object appearance, instead it learns how to combine the appearance of captured images.We demonstrate the effectiveness of our approach both qualitatively and quantitatively on synthetic as well as on real data.", "target": ["Proponiamo una tecnica di rendering appresa guidata dalle immagini che combina i vantaggi del rendering basato sulle immagini e della sintesi delle immagini basata su GAN, considerando gli effetti dipendenti dalla vista.", "Questa presentazione propone un metodo per gestire gli effetti dipendenti dalla vista nel rendering neurale, che migliora la robustezza dei metodi di rendering neurale esistenti."]} +{"source": "We evaluate the distribution learning capabilities of generative adversarial networks by testing them on synthetic datasets.The datasets include common distributions of points in $R^n$ space and images containing polygons of various shapes and sizes.We find that by and large GANs fail to faithfully recreate point datasets which contain discontinous support or sharp bends with noise.Additionally, on image datasets, we find that GANs do not seem to learn to count the number of objects of the same kind in an image.We also highlight the apparent tension between generalization and learning in GANs.", "target": ["Le GAN sono valutate su dataset sintetici"]} +{"source": "This paper proposes a new approach for step size adaptation in gradient methods.The proposed method called step size optimization (SSO) formulates the step size adaptation as an optimization problem which minimizes the loss function with respect to the step size for the 
given model parameters and gradients.Then, the step size is optimized based on alternating direction method of multipliers (ADMM).SSO does not require the second-order information or any probabilistic models for adapting the step size, so it is efficient and easy to implement.Furthermore, we also introduce stochastic SSO for stochastic learning environments.In the experiments, we integrated SSO to vanilla SGD and Adam, and they outperformed state-of-the-art adaptive gradient methods including RMSProp, Adam, L4-Adam, and AdaBound on extensive benchmark datasets.", "target": ["Proponiamo un metodo di adattamento della step size efficiente ed efficace per i metodi basati sul gradiente.", "Un nuovo adattamento della step size nei metodi basati sul gradiente del primo ordine che stabilisce un nuovo problema di ottimizzazione con l'espansione del primo ordine della funzione di loss e la regolarizzazione, dove la step size è trattata come variabile."]} +{"source": "Despite the fact that generative models are extremely successful in practice, the theory underlying this phenomenon is only starting to catch up with practice.In this work we address the question of the universality of generative models: is it true that neural networks can approximate any data manifold arbitrarily well?We provide a positive answer to this question and show that under mild assumptions on the activation function one can always find a feedforward neural network that maps the latent space onto a set located within the specified Hausdorff distance from the desired data manifold.We also prove similar theorems for the case of multiclass generative models and cycle generative models, trained to map samples from one manifold to another and vice versa.", "target": ["Abbiamo dimostrato che un'ampia classe di manifold può essere generata da reti ReLU e sigmoidi con precisione arbitraria.", "Questo articolo fornisce alcune garanzie di base su quando i manifold possono essere scritti come l'immagine di una funzione approssimata da una rete neurale, e unisce insieme teoremi dalla geometria dei manifold e risultati standard di approssimazione universale.", "Questo articolo mostra teoricamente che i modelli generativi basati sulle reti neurali possono approssimare i data manifold, e dimostra che, sotto blande ipotesi, le reti neurali possono mappare uno spazio latente su un insieme vicino al data manifold entro una piccola distanza di Hausdorff."]} +{"source": "Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL.However, the theoretical understanding of such methods has been rather limited.This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees.We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward.The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model.The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification.Instantiating our framework with simplification gives a variant of model-based RL algorithms Stochastic Lower Bounds Optimization (SLBO).Experiments demonstrate that SLBO achieves the state-of-the-art performance when only 1M or fewer samples are 
permitted on a range of continuous control benchmark tasks.", "target": ["Progettiamo algoritmi di reinforcement learning basati su modelli con garanzie teoriche e raggiungiamo risultati allo stato dell'arte sui task di benchmark Mujuco quando sono ammessi un milione o meno di sample.", "L'articolo ha proposto un framework per progettare algoritmi RL basati su modelli e OFU che raggiungono prestazioni SOTA sui task MuJoCo."]} +{"source": "We study the use of knowledge distillation to compress the U-net architecture.We show that, while standard distillation is not sufficient to reliably train a compressed U-net, introducing other regularization methods, such as batch normalization and class re-weighting, in knowledge distillation significantly improves the training process.This allows us to compress a U-net by over 1000x, i.e., to 0.1% of its original number of parameters, at a negligible decrease in performance.", "target": ["Presentiamo ulteriori tecniche per utilizzare la knowledge distillation per comprimere U-net di oltre 1000x.", "Gli autori hanno introdotto una strategia di distillazione modificata per comprimere un'architettura U-net di oltre 1000x mantenendo una precisione vicina alla U-net originale."]} +{"source": "Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks.This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks that both limit the scalability of the task sequence orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM) changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network.This requires to calculate the Hessian around a mode, which makes learning tractable.In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task.Our experiments show that on a variety of well-known task sequences we either significantly outperform or are en par with previous work.", "target": ["Questo articolo fornisce un approccio per affrontare il catastrophic forgetting attraverso stime di curvatura senza Hessiana", "L'articolo propone un metodo approssimato di Laplace per il training delle reti neurali nel setting di continual learning con una bassa complessità spaziale."]} +{"source": "There has been recent interest in improving performance of simple models for multiple reasons such as interpretability, robust learning from small data, deployment in memory constrained settings as well as environmental considerations.In this paper, we propose a novel method SRatio that can utilize information from high performing complex models (viz. 
deep neural networks, boosted trees, random forests) to reweight a training dataset for a potentially low performing simple model such as a decision tree or a shallow network enhancing its performance.Our method also leverages the per sample hardness estimate of the simple model which is not the case with the prior works which primarily consider the complex model's confidences/predictions and is thus conceptually novel.Moreover, we generalize and formalize the concept of attaching probes to intermediate layers of a neural network, which was one of the main ideas in previous work \\citep{profweight}, to other commonly used classifiers and incorporate this into our method.The benefit of these contributions is witnessed in the experiments where on 6 UCI datasets and CIFAR-10 we outperform competitors in a majority (16 out of 27) of the cases and tie for best performance in the remaining cases.In fact, in a couple of cases, we even approach the complex model's performance.We also conduct further experiments to validate assertions and intuitively understand why our method works.Theoretically, we motivate our approach by showing that the weighted loss minimized by simple models using our weighting upper bounds the loss of the complex model.", "target": ["Metodo per migliorare le prestazioni dei modelli semplici dato un modello complesso e accurato.", "L'articolo propone un metodo per migliorare le predizioni di un modello a bassa capacità che mostra vantaggi rispetto agli approcci esistenti."]} +{"source": "We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels.Our method produces a sequence of feature maps, iteratively refining the SVM margin.We provide rigorous guarantees for optimality and generalization, interpreting our algorithm as online equilibrium-finding dynamics in a certain two-player min-max game.Evaluations on synthetic and real-world datasets demonstrate scalability and consistent improvements over related random features-based methods.", "target": ["Un algoritmo semplice e pratico per l'apprendimento di un kernel dai training data che massimizza i margini ed è invariante alle traslazioni o è sfericamente simmetrico, utilizzando strumenti dell'analisi di Fourier e di regret minimization.", "L'articolo propone di imparare un kernal custom invariante alla traslazione o alla rotazione nella rappresentazione di Fourier per massimizzare il margine di SVM.", "Gli autori propongono un interessante algoritmo per apprendere insieme la l1-SVM e il kernel rappresentato da Fourier", "Gli autori considerano l'apprendimento diretto di rappresentazioni di Fourier di kernel invarianti rispetto a shift o traslazioni per applicazioni di machine learning con l'allineamento del kernel ai dati come funzione obiettivo da ottimizzare."]} +{"source": "We elaborate on using importance sampling for causal reasoning, in particular for counterfactual inference.We show how this can be implemented natively in probabilistic programming.By considering the structure of the counterfactual query, one can significantly optimise the inference process.We also consider design choices to enable further optimisations.We introduce MultiVerse, a probabilistic programming prototype engine for approximate causal reasoning.We provide experimental results and compare with Pyro, an existing probabilistic programming framework with some of causal reasoning tools.", "target": ["Programmazione probabilistica che supporta 
nativamente l'inferenza causale e controfattuale"]} +{"source": "We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space.A discrete time mean field game (MFG) is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions.We achieve a synthesis of MFG and Markov decision processes (MDP) by showing that a special MFG is reducible to an MDP.This enables us to broaden the scope of mean field game theory and infer MFG models of large real-world systems via deep inverse reinforcement learning.Our method learns both the reward function and forward dynamics of an MFG from real data, and we report the first empirical test of a mean field game model of a real-world social media population.", "target": ["Inferenza di un mean field game (MFG) del comportamento di grandi popolazioni attraverso una sintesi di MFG e processi decisionali di Markov.", "Gli autori trattano l'inferenza nei modelli di comportamento collettivo usando il reinforcement learning inverso per imparare le funzioni di reward degli agenti nel modello."]} +{"source": "We study the problem of training sequential generative models for capturing coordinated multi-agent trajectory behavior, such as offensive basketball gameplay. When modeling such settings, it is often beneficial to design hierarchical models that can capture long-term coordination using intermediate variables. Furthermore, these intermediate variables should capture interesting high-level behavioral semantics in an interpretable and manipulable way.We present a hierarchical framework that can effectively learn such sequential generative models. 
Our approach is inspired by recent work on leveraging programmatically produced weak labels, which we extend to the spatiotemporal regime.In addition to synthetic settings, we show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods.We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts.", "target": ["Mescoliamo deep generative model con una supervisione debole programmatica per generare traiettorie coordinate multi-agent di qualità significativamente più alta rispetto alle baseline precedenti.", "Propone modelli generativi sequenziali multi-agente.", "L'articolo propone il training di modelli generativi che producono traiettorie multi-agente usando funzioni euristiche che etichettano le variabili che altrimenti sarebbero latenti nei dati di training"]} +{"source": "Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations.In this work, we present a new method that saves computational budget by terminating poor configurations early on in the training.In contrast to existing methods, we consider this task as a ranking and transfer learning problem.We qualitatively show that by optimizing a pairwise ranking loss and leveraging learning curves from other data sets, our model is able to effectively rank learning curves without having to observe many or very long learning curves.We further demonstrate that our method can be used to accelerate a neural architecture search by a factor of up to 100 without a significant performance degradation of the discovered architecture.In further experiments we analyze the quality of ranking, the influence of different model components as well as the predictive behavior of the model.", "target": ["Imparare a classificare le learning curve al fine di fermare prima i training job non promettenti. 
La novità è l'uso della loss di pairwise ranking per modellare direttamente la probabilità di migliorare e di trasferire l'apprendimento attraverso i dataset per ridurre i dati di training richiesti.", "L'articolo propone un metodo per classificare le curve di apprendimento delle reti neurali che può modellare le curve di apprendimento su diversi dataset, ottenendo una maggiore velocità nei task di classificazione delle immagini."]} +{"source": "Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster.Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions.This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills.We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence.Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience.While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution - that of using experience replay buffers for all past events - with a mixture of on- and off-policy learning, leveraging behavioral cloning.We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities.When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one.", "target": ["Mostriamo che, nei setting di continual learning, il catastrophic forgetting può essere evitato applicando la RL off-policy a una miscela di esperienze nuove e di replay, con una loss di behavioural cloning.", "Propone una particolare variante di experience replay con behavioural cloning come metodo per continual learning."]} +{"source": "We present a method which learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents.Our method is based on a graph-structured variational recurrent neural network, which is trained end-to-end to infer the current state of the (partially observed) world, as well as to forecast future states.We show that our method outperforms various baselines on two sports datasets, one based on real basketball trajectories, and one generated by a soccer game engine.", "target": ["Presentiamo un metodo che impara a integrare informazioni temporali e informazioni visive ambigue nel contesto di agenti che interagiscono.", "Gli autori propongono Graph VRNN che modella l'interazione di più agenti distribuendo una VRNN per ogni agente", "Questo articolo presenta un'architettura basata su una graph neural network che è addestrata per localizzare e modellare le interazioni degli agenti in un ambiente direttamente dai pixel e mostra il vantaggio di questo modello per effettuare task tracking e prevedere le posizioni degli agenti."]} +{"source": "In this paper we study the problem 
of learning the weights of a deep convolutional neural network.We consider a network where convolutions are carried out over non-overlapping patches with a single kernel in each layer.We develop an algorithm for simultaneously learning all the kernels from the training data.Our approach dubbed Deep Tensor Decomposition (DeepTD) is based on a rank-1 tensor decomposition.We theoretically investigate DeepTD under a realizable model for the training data where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted convolutional kernels.We show that DeepTD is data-efficient and provably works as soon as the sample size exceeds the total number of convolutional weights in the network.Our numerical experiments demonstrate the effectiveness of DeepTD and verify our theoretical findings.", "target": ["Consideriamo un modello semplificato di deep convolutional neural network. Mostriamo che tutti i layer di questa rete possono essere appresi approssimativamente con una corretta applicazione della decomposizione tensoriale.", "Fornisce garanzie teoriche per l'apprendimento di deep convolutional neural network usando la decomposizione tensoriale rank-one.", "Questo articolo propone un metodo di apprendimento per un caso ristretto di deep convolutional neural netowrk, dove i layer sono limitati al caso non sovrapposto e hanno solo un canale di uscita per layer", "Analizza il problema dell'apprendimento di una classe molto speciale di CNN: ogni layer consiste di un singolo filtro, applicato a patch non sovrapposte dell'input."]} +{"source": "Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy.However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively.Based on these results, we articulate the \"lottery ticket hypothesis:\" dense, randomly-initialized, feed-forward networks contain subnetworks (\"winning tickets\") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations.The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations.We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10.Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.", "target": ["Le reti neurali feedforward che possono avere pesi sottoposti a pruning dopo il training potrebbero avere gli stessi pesi pruned prima del training", "Mostra che esistono sottoreti sparse che possono essere addestrate da zero con buone prestazioni di generalizzazione e propone una NNs non pruned e inizializzata in modo casuale che contiene sottoreti che possono essere addestrate da zero con un'accuratezza di generalizzazione simile.", "L'articolo esamina l'ipotesi che le reti neurali inizializzate 
in modo casuale contengano sottoreti che convergono altrettanto velocemente o più velocemente e possono raggiungere la stessa o una migliore accuratezza di classificazione"]} +{"source": "We investigate the difficulties of training sparse neural networks and make new observations about optimization dynamics and the energy landscape within the sparse regime.Recent work of \\citep{Gale2019, Liu2018} has shown that sparse ResNet-50 architectures trained on ImageNet-2012 dataset converge to solutions that are significantly worse than those found by pruning.We show that, despite the failure of optimizers, there is a linear path with a monotonically decreasing objective from the initialization to the ``good'' solution.Additionally, our attempts to find a decreasing objective path from ``bad'' solutions to the ``good'' ones in the sparse subspace fail.However, if we allow the path to traverse the dense subspace, then we consistently find a path between two solutions.These findings suggest traversing extra dimensions may be needed to escape stationary points found in the sparse subspace.", "target": ["In questo articolo evidenziamo la difficoltà di addestrare reti neurali sparse facendo esperimenti di interpolazione nell'energy landscape."]} +{"source": "Neural network training depends on the structure of the underlying loss landscape, i.e. local minima, saddle points, flat plateaus, and loss barriers.In relation to the structure of the landscape, we study the permutation symmetry of neurons in each layer of a deep neural network, which gives rise not only to multiple equivalent global minima of the loss function but also to critical points in between partner minima.In a network of $d-1$ hidden layers with $n_k$ neurons in layers $k = 1, \\ldots, d$, we construct continuous paths between equivalent global minima that lead through a `permutation point' where the input and output weight vectors of two neurons in the same hidden layer $k$ collide and interchange.We show that such permutation points are critical points which lie inside high-dimensional subspaces of equal loss, contributing to the global flatness of the landscape.We also find that a permutation point for the exchange of neurons $i$ and $j$ transits into a flat high-dimensional plateau that enables all $n_k!$permutations of neurons in a given layer $k$ at the same loss value. Moreover, we introduce higher-order permutation points by exploiting the hierarchical structure in the loss landscapes of neural networks, and find that the number of $K$-th order permutation points is much larger than the (already huge) number of equivalent global minima -- at least by a polynomial factor of order $K$. In twotasks, we demonstrate numerically with our path finding method that continuous paths between partner minima exist: first, in a toy network with a single hidden layer on a function approximation task and, second, in a multilayer network on the MNIST task. 
Our geometric approach yields a lower bound on the number of critical points generated by weight-space symmetries and provides a simple intuitive link between previous theoretical results and numerical observations.", "target": ["La simmetria dello spazio dei pesi nei landscape delle reti neurali dà luogo ad un gran numero di punti di sella e sottospazi piatti ad alta dimensionalità.", "L'articolo ha presentato un metodo low-loss per studiare la funzione di loss rispetto ai parametri in una rete neurale dal punto di vista della simmetria nel weight space."]} +{"source": "The training of stochastic neural network models with binary ($\\pm1$) weights and activations via continuous surrogate networks is investigated.We derive, using mean field theory, a set of scalar equations describing how input signals propagate through surrogate networks.The equations reveal that depending on the choice of surrogate model, the networks may or may not exhibit an order to chaos transition, and the presence of depth scales that limit the maximum trainable depth.Specifically, in solving the equations for edge of chaos conditions, we show that surrogates derived using the Gaussian local reparameterisation trick have no critical initialisation, whereas a deterministic surrogates based on analytic Gaussian integration do.The theory is applied to a range of binary neuron and weight design choices, such as different neuron noise models, allowing the categorisation of algorithms in terms of their behaviour at initialisation.Moreover, we predict theoretically and confirm numerically, that common weight initialization schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance.This study shows that, contrary to common intuition, the means of the stochastic binary weights should be initialised close to $\\pm 1$ for deeper networks to be trainable.", "target": ["teoria della propagazione del segnale applicata a surrogati continui di reti binarie; inizializzazione contro intuitiva; reparameterisation trick non utile", "Gli autori studiano le dinamiche di allenamento delle reti neurali binarie quando si usano surrogati continui, studiano quali proprietà dovrebbero avere le reti all'inizializzazione per effettuare il training al meglio e forniscono consigli concreti sui pesi stocastici all'inizializzazione.", "Un'esplorazione approfondita delle reti binarie stocastiche, dei surrogati continui e delle loro dinamiche di training, con approfondimenti su come inizializzare i pesi per ottenere le migliori prestazioni."]} +{"source": "Semantic dependency parsing, which aims to find rich bi-lexical relationships, allows words to have multiple dependency heads, resulting in graph-structured representations.We propose an approach to semi-supervised learning of semantic dependency parsers based on the CRF autoencoder framework.Our encoder is a discriminative neural semantic dependency parser that predicts the latent parse graph of the input sentence.Our decoder is a generative neural model that reconstructs the input sentence conditioned on the latent parse graph.Our model is arc-factored and therefore parsing and learning are both tractable.Experiments show our model achieves significant and consistent improvement over the supervised baseline.", "target": ["Proponiamo un approccio all'apprendimento semi-supervised di semantic dependency parser basato sul framework CRF autoencoder.", "Questo articolo si concentra sul parsing semi-supervised 
delle dipendenze semantiche usando il CRF-autoencoder per addestrare il modello in uno stile semi-supervised, indicando l'efficacia su task di dati annotati con poche risorse."]} +{"source": "For sequence models with large word-level vocabularies, a majority of network parameters lie in the input and output layers.In this work, we describe a new method, DeFINE, for learning deep word-level representations efficiently.Our architecture uses a hierarchical structure with novel skip-connections which allows for the use of low dimensional input and output layers, reducing total parameters and training time while delivering similar or better performance versus existing methods.DeFINE can be incorporated easily in new or existing sequence models.Compared to state-of-the-art methods including adaptive input representations, this technique results in a 6% to 20% drop in perplexity.On WikiText-103, DeFINE reduces total parameters of Transformer-XL by half with minimal impact on performance.On the Penn Treebank, DeFINE improves AWD-LSTM by 4 points with a 17% reduction in parameters, achieving comparable performance to state-of-the-art methods with fewer parameters.For machine translation, DeFINE improves a Transformer model by 2% while simultaneously reducing total parameters by 26%", "target": ["DeFINE utilizza una rete profonda, gerarchica e sparsa con nuove skip connection per imparare in modo efficiente i word embedding.", "Questo articolo descrive un nuovo metodo per l'apprendimento di rappresentazioni deep a livello di parola in modo efficiente utilizzando una struttura gerarchica con skip-connection per l'uso di layer di input e output a bassa dimensione."]} +{"source": "In this paper, we present a reproduction of the paper of Bertinetto et al. [2019] \"Meta-learning with differentiable closed-form solvers\" as part of the ICLR 2019 Reproducibility Challenge.In successfully reproducing the most crucial part of the paper, we reach a performance that is comparable with or superior to the original paper on two benchmarks for several settings.We evaluate new baseline results, using a new dataset presented in the paper.Yet, we also provide multiple remarks and recommendations about reproducibility and comparability. 
After we brought our reproducibility work to the authors’ attention, they have updated the original paper on which this work is based and released code as well.Our contributions mainly consist in reproducing the most important results of their original paper, in giving insight in the reproducibility and in providing a first open-source implementation.", "target": ["Riproduciamo con successo e diamo osservazioni sul confronto con le baseline di un approccio di meta learning per la classificazione few-shot che funziona tramite la backpropagation attraverso la soluzione di un risolutore in forma chiusa."]} +{"source": "Network pruning has emerged as a powerful technique for reducing the size of deep neural networks.Pruning uncovers high-performance subnetworks by taking a trained dense network and gradually removing unimportant connections.Recently, alternative techniques have emerged for training sparse networks directly without having to train a large dense model beforehand, thereby achieving small memory footprints during both training and inference.These techniques are based on dynamic reallocation of non-zero parameters during training.Thus, they are in effect executing a training-time search for the optimal subnetwork.We investigate a most recent one of these techniques and conduct additional experiments to elucidate its behavior in training sparse deep convolutional networks.Dynamic parameter reallocation converges early during training to a highly trainable subnetwork.We show that neither the structure, nor the initialization of the discovered high-performance subnetwork is sufficient to explain its good performance.Rather, it is the dynamics of parameter reallocation that are responsible for successful learning.Dynamic parameter reallocation thus improves the trainability of deep convolutional networks, playing a similar role as overparameterization, without incurring the memory and computational cost of the latter.", "target": ["La riallocazione dinamica dei parametri permette il successo del training diretto di reti sparse compatte, e gioca un ruolo indispensabile anche quando conosciamo la rete sparsa ottimale a-priori"]} +{"source": "In this paper we present a thrust in three directions of visual development using supervised and semi-supervised techniques.The first is an implementation of semi-supervised object detection and recognition using the principles of Soft Attention and Generative Adversarial Networks (GANs).The second and the third are supervised networks that learn basic concepts of spatial locality and quantity respectively using Convolutional Neural Networks (CNNs).The three thrusts together are based on the approach of Experiential Robot Learning, introduced in previous publication.While the results are unripe for implementation, we believe they constitute a stepping stone towards autonomous development of robotic visual modules.", "target": ["3 spinte che servono come trampolini di lancio per l'experiential learning dei robot del modulo di visione", "Indaga sulle prestazioni dei classificatori di immagini e dei rilevatori di oggetti esistenti."]} +{"source": "Characterization of the representations learned in intermediate layers of deep networks can provide valuable insight into the nature of a task and can guide the development of well-tailored learning strategies.Here we study convolutional neural network-based acoustic models in the context of automatic speech recognition.Adapting a method proposed by Yosinski et al. 
[2014], we measure the transferability of each layer between German and English to assess their language-specificity.We observe three distinct regions of transferability: (1) the first two layers are entirely transferable between languages, (2) layers 2–8 are also highly transferable but we find evidence of some language specificity, (3) the subsequent fully connected layers are more language specific but can be successfully finetuned to the target language.To further probe the effect of weight freezing, we performed follow-up experiments using freeze-training [Raghu et al., 2017].Our results are consistent with the observation that CNNs converge 'bottom up' during training and demonstrate the benefit of freeze training, especially for transfer learning.", "target": ["Tutti i nostri modelli acustici basati su CNN, tranne per i primi due layer, hanno dimostrato un certo grado di specificità per la lingua, ma il freeze training ha permesso un transfer di successo tra le lingue.", "L'articolo misura la trasferibilità delle feature per ogni layer nei modelli acustici basati su CNN attraverso le lingue, concludendo che gli AM addestrati con la tecnica \"freeze training\" hanno superato gli altri modelli di transfer learning."]} +{"source": "Policy gradient methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence.We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region).This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation.We show that in the small steps limit with respect to the Wasserstein distance $W_2$, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result.This means that policies undergo diffusion and advection, concentrating near actions with high reward.This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.", "target": ["Si collegano i gradienti di policy entropica della regione di Wasserstein-trust e l'equazione del calore.", "L'articolo esplora le connessioni tra reinforcement learning e la teoria del trasporto ottimale quadratico", "Gli autori hanno studiato il gradiente della policy con il cambiamento delle policy limitato da una regione di fiducia della distanza di Wasserstein nel setting del multi-armed bandit, mostrando che nel limite degli small step, la dinamica della policy è governata dall'equazione del calore (equazione di Fokker-Planck)."]} +{"source": "The softmax function is widely used to train deep neural networks for multi-class classification.Despite its outstanding performance in classification tasks, the features derived from the supervision of softmax are usually sub-optimal in some scenarios where Euclidean distances apply in feature spaces.To address this issue, we propose a new loss, dubbed the isotropic loss, in the sense that the overall distribution of data points is regularized to approach the isotropic normal one.Combined with the vanilla softmax, we formalize a novel criterion called the isotropic softmax, or isomax for short, for supervised learning of deep neural networks.By virtue of the isomax, the intra-class features are penalized by the isotropic loss while inter-class distances are well kept by the original softmax loss.Moreover, the isomax loss does not require any additional modifications to the network, 
mini-batches or the training process.Extensive experiments on classification and clustering are performed to demonstrate the superiority and robustness of the isomax loss.", "target": ["La capacità discriminativa di softmax per l'apprendimento di vettori di feature di oggetti è efficacemente migliorata dalla normalizzazione isotropica sulla distribuzione globale dei punti dati."]} +{"source": "A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient.Recently, Jin et al. (2018) proposed a Q-learning algorithm with UCB exploration policy, and proved it has a nearly optimal regret bound for finite-horizon episodic MDP.In this paper, we adapt Q-learning with UCB-exploration bonus to infinite-horizon MDP with discounted rewards \\emph{without} accessing a generative model.We show that the \\textit{sample complexity of exploration} of our algorithm is bounded by $\\tilde{O}({\\frac{SA}{\\epsilon^2(1-\\gamma)^7}})$.This improves the previously best known result of $\\tilde{O}({\\frac{SA}{\\epsilon^4(1-\\gamma)^8}})$ in this setting, achieved by delayed Q-learning (Strehl et al., 2006), and matches the lower bound in terms of $\\epsilon$ as well as $S$ and $A$ up to logarithmic factors.", "target": ["Adattiamo Q-learning con bonus UCB-exploration a MDP infinite-horizon con reward scontate senza accedere a un modello generativo, e migliora il miglior risultato precedentemente conosciuto.", "Questo articolo ha considerato un algoritmo di Q learning con una policy di esplorazione UCB per MDP infinite-horizon."]} +{"source": "Backpropagation is driving today's artificial neural networks (ANNs).However, despite extensive research, it remains unclear if the brain implements this algorithm.Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative: neurons can randomly introduce change, and use unspecific feedback signals to observe their effect on the cost and thus approximate their gradient.However, the convergence rate of such learning scales poorly with the number of involved neurons.Here we propose a hybrid learning approach.Each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide.We provide proof that our approach converges to the true gradient for certain classes of networks.In both feedforward and convolutional networks, we empirically show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning.Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules.", "target": ["Le perturbazioni possono essere usate per addestrare i pesi di feedback per imparare in reti neurali fully connected e convoluzionali", "Questo articolo propone un metodo che affronta il problema del \"trasporto dei pesi\" stimando i pesi per il backward pass usando uno stimatore basato sul rumore "]} +{"source": "This paper proposes and demonstrates a surprising pattern in the training of neural networks: there is a one-to-one relation between the values of any pair of losses (such as cross entropy, mean squared error, 0/1 error etc.) 
evaluated for a model arising at (any point of) a training run.This pattern is universal in the sense that this one-to-one relationship is identical across architectures (such as VGG, Resnet, Densenet etc.), algorithms (SGD and SGD with momentum) and training loss functions (cross entropy and mean squared error).", "target": ["Identifichiamo alcuni modelli universali nel comportamento di diverse loss surrogate (CE, MSE, loss 0-1) durante il training delle reti neurali e presentiamo prove empiriche di supporto."]} \ No newline at end of file