categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2407.10921
null
null
http://arxiv.org/pdf/2407.10921v1
2024-07-15T17:22:16Z
2024-07-15T17:22:16Z
A Dual-Attention Aware Deep Convolutional Neural Network for Early Alzheimer's Detection
Alzheimer's disease (AD) represents the primary form of neurodegeneration, impacting millions of individuals each year and causing progressive cognitive decline. Accurately diagnosing and classifying AD using neuroimaging data presents ongoing challenges in medicine, necessitating advanced interventions that will enhance treatment measures. In this research, we introduce a dual-attention-enhanced deep learning (DL) framework for classifying AD from neuroimaging data. Combined spatial and self-attention mechanisms play a vital role in emphasizing neurofibrillary tangles and amyloid plaques in the MRI images, which are difficult to discern with regular imaging techniques. Results demonstrate that our model yielded remarkable performance in comparison to existing state-of-the-art (SOTA) convolutional neural networks (CNNs), with an accuracy of 99.1%. Moreover, it recorded strong metrics, with an F1-Score of 99.31%, a precision of 99.24%, and a recall of 99.5%. These results highlight the promise of cutting-edge DL methods in medical diagnostics, contributing to highly reliable and more efficient healthcare solutions.
[ "['Pandiyaraju V' 'Shravan Venkatraman' 'Abeshek A' 'Aravintakshan S A'\n 'Pavan Kumar S' 'Kannan A']" ]
null
null
2407.10930
null
null
http://arxiv.org/pdf/2407.10930v1
2024-07-15T17:30:31Z
2024-07-15T17:30:31Z
Fine-Tuning and Prompt Optimization: Two Great Steps that Work Better Together
Natural Language Processing (NLP) systems are increasingly taking the form of multi-stage pipelines involving multiple distinct language models (LMs) and prompting strategies. Here we address the question of how to fine-tune such systems to improve their performance. We cast this as a problem of optimizing the underlying LM weights and the prompting strategies together, and consider a challenging but highly realistic scenario in which we have no gold labels for any intermediate stages in the pipeline. To address this challenge, we evaluate approximate optimization strategies in which we bootstrap training labels for all pipeline stages and use these to optimize the pipeline's prompts and fine-tune its weights alternatingly. In experiments with multi-hop QA, mathematical reasoning, and feature-based classification, we find that simple approaches for optimizing the prompts and weights together outperform directly optimizing weights alone and prompts alone by up to 65% and 5%, respectively, on average across LMs and tasks. We will release our new optimizers in DSPy at http://dspy.ai
[ "['Dilara Soylu' 'Christopher Potts' 'Omar Khattab']" ]
null
null
2407.10949
null
null
http://arxiv.org/pdf/2407.10949v1
2024-07-15T17:45:53Z
2024-07-15T17:45:53Z
Representing Rule-based Chatbots with Transformers
Transformer-based chatbots can conduct fluent, natural-sounding conversations, but we have limited understanding of the mechanisms underlying their behavior. Prior work has taken a bottom-up approach to understanding Transformers by constructing Transformers for various synthetic and formal language tasks, such as regular expressions and Dyck languages. However, it is not obvious how to extend this approach to understand more naturalistic conversational agents. In this work, we take a step in this direction by constructing a Transformer that implements the ELIZA program, a classic, rule-based chatbot. ELIZA illustrates some of the distinctive challenges of the conversational setting, including both local pattern matching and long-term dialog state tracking. We build on constructions from prior work -- in particular, for simulating finite-state automata -- showing how simpler constructions can be composed and extended to give rise to more sophisticated behavior. Next, we train Transformers on a dataset of synthetically generated ELIZA conversations and investigate the mechanisms the models learn. Our analysis illustrates the kinds of mechanisms these models tend to prefer -- for example, models favor an induction head mechanism over a more precise, position-based copying mechanism, and they use intermediate generations to simulate recurrent data structures, like ELIZA's memory mechanisms. Overall, by drawing an explicit connection between neural chatbots and interpretable, symbolic mechanisms, our results offer a new setting for mechanistic analysis of conversational agents.
[ "['Dan Friedman' 'Abhishek Panigrahi' 'Danqi Chen']" ]
null
null
2407.10954
null
null
http://arxiv.org/abs/2407.10954v1
2024-07-15T17:52:22Z
2024-07-15T17:52:22Z
A Unified Differentiable Boolean Operator with Fuzzy Logic
This paper presents a unified differentiable boolean operator for implicit solid shape modeling using Constructive Solid Geometry (CSG). Traditional CSG relies on min and max operators to perform boolean operations on implicit shapes. Because these boolean operators are discontinuous and discrete in the choice of operations, optimization over the CSG representation is challenging. Drawing inspiration from fuzzy logic, we present a unified boolean operator that outputs a continuous function and is differentiable with respect to operator types. This enables optimization of both the primitives and the boolean operations employed in CSG with continuous optimization techniques, such as gradient descent. We further demonstrate that such a continuous boolean operator allows modeling of both sharp mechanical objects and smooth organic shapes with the same framework. Our proposed boolean operator opens up new possibilities for future research toward fully continuous CSG optimization.
[ "['Hsueh-Ti Derek Liu' 'Maneesh Agrawala' 'Cem Yuksel' 'Tim Omernick'\n 'Vinith Misra' 'Stefano Corazza' 'Morgan McGuire' 'Victor Zordan']" ]
null
null
2407.10955
null
null
http://arxiv.org/pdf/2407.10955v1
2024-07-15T17:54:03Z
2024-07-15T17:54:03Z
Enhancing Stochastic Optimization for Statistical Efficiency Using ROOT-SGD with Diminishing Stepsize
In this paper, we revisit ROOT-SGD, an innovative method for stochastic optimization that bridges the gap between stochastic optimization and statistical efficiency. The proposed method enhances the performance and reliability of ROOT-SGD by integrating a carefully designed diminishing stepsize strategy. This approach addresses key challenges in optimization, providing robust theoretical guarantees and practical benefits. Our analysis demonstrates that ROOT-SGD with a diminishing stepsize achieves optimal convergence rates while maintaining computational efficiency. By dynamically adjusting the learning rate, ROOT-SGD ensures improved stability and precision throughout the optimization process. The findings of this study offer valuable insights for developing advanced optimization algorithms that are both efficient and statistically robust.
[ "['Tong Zhang' 'Chris Junchi Li']" ]
null
null
2407.10960
null
null
http://arxiv.org/pdf/2407.10960v1
2024-07-15T17:55:42Z
2024-07-15T17:55:42Z
Fast Matrix Multiplications for Lookup Table-Quantized LLMs
The deployment of large language models (LLMs) is often constrained by memory bandwidth, where the primary bottleneck is the cost of transferring model parameters from the GPU's global memory to its registers. When coupled with custom kernels that fuse the dequantization and matmul operations, weight-only quantization can thus enable faster inference by reducing the amount of memory movement. However, developing high-performance kernels for weight-quantized LLMs presents substantial challenges, especially when the weights are compressed to non-evenly-divisible bit widths (e.g., 3 bits) with non-uniform, lookup table (LUT) quantization. This paper describes FLUTE, a flexible lookup table engine for LUT-quantized LLMs, which uses offline restructuring of the quantized weight matrix to minimize bit manipulations associated with unpacking, and vectorization and duplication of the lookup table to mitigate shared memory bandwidth constraints. At batch sizes < 32 and quantization group size of 128 (typical in LLM inference), the FLUTE kernel can be 2-4x faster than existing GEMM kernels. As an application of FLUTE, we explore a simple extension to lookup table-based NormalFloat quantization and apply it to quantize LLaMA3 to various configurations, obtaining competitive quantization performance against strong baselines while obtaining an end-to-end throughput increase of 1.5 to 2 times.
[ "['Han Guo' 'William Brandon' 'Radostin Cholakov' 'Jonathan Ragan-Kelley'\n 'Eric P. Xing' 'Yoon Kim']" ]
null
null
2407.10964
null
null
http://arxiv.org/pdf/2407.10964v1
2024-07-15T17:58:42Z
2024-07-15T17:58:42Z
No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations
This paper introduces FUNGI, Features from UNsupervised GradIents, a method to enhance the features of vision encoders by leveraging self-supervised gradients. Our method is simple: given any pretrained model, we first compute gradients from various self-supervised objectives for each input. These are projected to a lower dimension and then concatenated with the model's embedding. The resulting features are evaluated on k-nearest neighbor classification over 11 datasets from vision, 5 from natural language processing, and 2 from audio. Across backbones spanning various sizes and pretraining strategies, FUNGI features provide consistent performance improvements over the embeddings. We also show that using FUNGI features can benefit linear classification and image retrieval, and that they significantly improve the retrieval-based in-context scene understanding abilities of pretrained models, for example improving upon DINO by +17% for semantic segmentation - without any training.
[ "['Walter Simoncini' 'Spyros Gidaris' 'Andrei Bursuc' 'Yuki M. Asano']" ]
null
null
2407.10967
null
null
http://arxiv.org/pdf/2407.10967v1
2024-07-15T17:59:23Z
2024-07-15T17:59:23Z
BECAUSE: Bilinear Causal Representation for Generalizable Offline Model-based Reinforcement Learning
Offline model-based reinforcement learning (MBRL) enhances data efficiency by utilizing pre-collected datasets to learn models and policies, especially in scenarios where exploration is costly or infeasible. Nevertheless, its performance often suffers from the objective mismatch between model and policy learning, resulting in inferior performance despite accurate model predictions. This paper first identifies that the primary source of this mismatch is the underlying confounders present in offline data for MBRL. Subsequently, we introduce BilinEar CAUSal rEpresentation (BECAUSE), an algorithm that captures causal representations for both states and actions to reduce the influence of the distribution shift, thus mitigating the objective mismatch problem. Comprehensive evaluations on 18 tasks that vary in data quality and environment context demonstrate the superior performance of BECAUSE over existing offline RL algorithms. We show the generalizability and robustness of BECAUSE under fewer samples or larger numbers of confounders. Additionally, we offer a theoretical analysis of BECAUSE to prove its error bound and sample efficiency when integrating causal representation into offline MBRL.
[ "['Haohong Lin' 'Wenhao Ding' 'Jian Chen' 'Laixi Shi' 'Jiacheng Zhu'\n 'Bo Li' 'Ding Zhao']" ]
null
null
2407.10969
null
null
http://arxiv.org/pdf/2407.10969v1
2024-07-15T17:59:29Z
2024-07-15T17:59:29Z
Q-Sparse: All Large Language Models can be Fully Sparsely-Activated
We introduce Q-Sparse, a simple yet effective approach to training sparsely-activated large language models (LLMs). Q-Sparse enables full sparsity of activations in LLMs, which can bring significant efficiency gains in inference. This is achieved by applying top-K sparsification to the activations and the straight-through estimator to the training. The key results from this work are: (1) Q-Sparse can achieve results comparable to those of baseline LLMs while being much more efficient at inference time; (2) we present an inference-optimal scaling law for sparsely-activated LLMs; (3) Q-Sparse is effective in different settings, including training-from-scratch, continued training of off-the-shelf LLMs, and finetuning; (4) Q-Sparse works for both full-precision and 1-bit LLMs (e.g., BitNet b1.58). In particular, the synergy of BitNet b1.58 and Q-Sparse (which can be equipped with MoE) provides the cornerstone and a clear path to revolutionize the efficiency, including cost and energy consumption, of future LLMs.
[ "['Hongyu Wang' 'Shuming Ma' 'Ruiping Wang' 'Furu Wei']" ]
null
null
2407.10971
null
null
http://arxiv.org/pdf/2407.10971v1
2024-07-15T17:59:52Z
2024-07-15T17:59:52Z
Walking the Values in Bayesian Inverse Reinforcement Learning
The goal of Bayesian inverse reinforcement learning (IRL) is to recover a posterior distribution over reward functions using a set of demonstrations from an expert optimizing for a reward unknown to the learner. The resulting posterior over rewards can then be used to synthesize an apprentice policy that performs well on the same or a similar task. A key challenge in Bayesian IRL is bridging the computational gap between the hypothesis space of possible rewards and the likelihood, often defined in terms of Q values: vanilla Bayesian IRL needs to solve the costly forward planning problem - going from rewards to the Q values - at every step of the algorithm, which may need to be done thousands of times. We propose to solve this with a simple change: instead of focusing primarily on sampling in the space of rewards, we can focus primarily on working in the space of Q-values, since the computation required to go from Q-values to rewards is radically cheaper. Furthermore, this reversal of the computation makes it easy to compute the gradient, allowing efficient sampling using Hamiltonian Monte Carlo. We propose ValueWalk - a new Markov chain Monte Carlo method based on this insight - and illustrate its advantages on several tasks.
[ "['Ondrej Bajgar' 'Alessandro Abate' 'Konstantinos Gatsis'\n 'Michael A. Osborne']" ]
null
null
2407.10972
null
null
http://arxiv.org/pdf/2407.10972v1
2024-07-15T17:59:55Z
2024-07-15T17:59:55Z
VGBench: Evaluating Large Language Models on Vector Graphics Understanding and Generation
In the realm of vision models, the primary mode of representation is using pixels to rasterize the visual world. Yet this is not always the best or only way to represent visual content, especially for designers and artists who depict the world using geometry primitives such as polygons. Vector graphics (VG), on the other hand, offer a textual representation of visual content, which can be more concise and powerful for content like cartoons or sketches. Recent studies have shown promising results on processing vector graphics with capable Large Language Models (LLMs). However, such works focus solely on qualitative results, understanding, or a specific type of vector graphics. We propose VGBench, a comprehensive benchmark for LLMs on handling vector graphics through diverse aspects, including (a) both visual understanding and generation, (b) evaluation of various vector graphics formats, (c) diverse question types, (d) a wide range of prompting techniques, and (e) multiple LLMs. Evaluating on our collected 4279 understanding and 5845 generation samples, we find that LLMs show strong capability on both aspects while exhibiting less desirable performance on low-level formats (SVG). Both data and evaluation pipeline will be open-sourced at https://vgbench.github.io.
[ "['Bocheng Zou' 'Mu Cai' 'Jianrui Zhang' 'Yong Jae Lee']" ]
null
null
9703183
null
null
http://arxiv.org/abs/cond-mat/9703183v1
1997-03-20T15:54:36Z
1997-03-20T15:54:36Z
Finite size scaling of the Bayesian perceptron
We study numerically the properties of the Bayesian perceptron through a gradient descent on the optimal cost function. The theoretical distribution of stabilities is deduced. It predicts that the optimal generalizer lies close to the boundary of the space of (error-free) solutions. The numerical simulations are in good agreement with the theoretical distribution. The extrapolation of the generalization error to infinite input space size agrees with the theoretical results. Finite size corrections are negative and exhibit two different scaling regimes, depending on the training set size. The variance of the generalization error vanishes for $N \rightarrow \infty$, confirming the property of self-averaging.
[ "['A. Buhot' 'J. -M. Torres Moreno' 'M. B. Gordon']" ]
null
null
9712002
null
null
http://arxiv.org/pdf/cmp-lg/9712002v2
1997-12-11T13:46:34Z
1997-12-09T15:42:46Z
Machine Learning of User Profiles: Representational Issues
As more information becomes available electronically, tools for finding information of interest to users become increasingly important. The goal of the research described here is to build a system for generating comprehensible user profiles that accurately capture user interest with minimum user interaction. The research described here focuses on the importance of a suitable generalization hierarchy and representation for learning profiles which are predictively accurate and comprehensible. In our experiments we evaluated both traditional features based on weighted term vectors as well as subject features corresponding to categories which could be drawn from a thesaurus. Our experiments, conducted in the context of a content-based profiling system for on-line newspapers on the World Wide Web (the IDD News Browser), demonstrate the importance of a generalization hierarchy and the promise of combining natural language processing techniques with machine learning (ML) to address an information retrieval (IR) problem.
[ "['Eric Bloedorn' 'Inderjeet Mani' 'T. Richard MacMillan']" ]
null
null
9809110
null
null
http://arxiv.org/pdf/cs/9809110v1
1998-09-27T18:42:51Z
1998-09-27T18:42:51Z
Similarity-Based Models of Word Cooccurrence Probabilities
In many applications of natural language processing (NLP) it is necessary to determine the likelihood of a given word combination. For example, a speech recognizer may need to determine which of the two word combinations ``eat a peach'' and ``eat a beach'' is more likely. Statistical NLP methods determine the likelihood of a word combination from its frequency in a training corpus. However, the nature of language is such that many word combinations are infrequent and do not occur in any given corpus. In this work we propose a method for estimating the probability of such previously unseen word combinations using available information on ``most similar'' words. We describe probabilistic word association models based on distributional word similarity, and apply them to two tasks, language modeling and pseudo-word disambiguation. In the language modeling task, a similarity-based model is used to improve probability estimates for unseen bigrams in a back-off language model. The similarity-based method yields a 20% perplexity improvement in the prediction of unseen bigrams and statistically significant reductions in speech-recognition error. We also compare four similarity-based estimation methods against back-off and maximum-likelihood estimation methods on a pseudo-word sense disambiguation task in which we controlled for both unigram and bigram frequency to avoid giving too much weight to easy-to-disambiguate high-frequency configurations. The similarity-based methods perform up to 40% better on this particular task.
[ "['Ido Dagan' 'Lillian Lee' 'Fernando C. N. Pereira']" ]
null
null
9809111
null
null
http://arxiv.org/pdf/cs/9809111v1
1998-09-28T03:48:22Z
1998-09-28T03:48:22Z
Evolution of Neural Networks to Play the Game of Dots-and-Boxes
Dots-and-Boxes is a child's game which remains analytically unsolved. We implement and evolve artificial neural networks to play this game, evaluating them against simple heuristic players. Our networks do not evaluate or predict the final outcome of the game, but rather recommend moves at each stage. Superior generalisation of play by co-evolved populations is found, and a comparison made with networks trained by back-propagation using simple heuristics as an oracle.
[ "['Lex Weaver' 'Terry Bossomaier']" ]
null
null
9809122
null
null
http://arxiv.org/pdf/cs/9809122v1
1998-09-30T03:44:08Z
1998-09-30T03:44:08Z
Practical algorithms for on-line sampling
One of the core applications of machine learning to knowledge discovery consists of building a function (a hypothesis) from a given amount of data (for instance a decision tree or a neural network) such that we can use it afterwards to predict new instances of the data. In this paper, we focus on a particular situation where we assume that the hypothesis we want to use for prediction is very simple, and thus, the hypotheses class is of feasible size. We study the problem of how to determine which of the hypotheses in the class is almost the best one. We present two on-line sampling algorithms for selecting hypotheses, give theoretical bounds on the number of necessary examples, and analyze them experimentally. We compare them with the simple batch sampling approach commonly used and show that in most situations our algorithms use far fewer examples.
[ "['Carlos Domingo' 'Ricard Gavalda' 'Osamu Watanabe']" ]
null
null
9811003
null
null
http://arxiv.org/pdf/cs/9811003v1
1998-10-31T19:33:50Z
1998-10-31T19:33:50Z
A Winnow-Based Approach to Context-Sensitive Spelling Correction
A large class of machine-learning problems in natural language require the characterization of linguistic context. Two characteristic properties of such problems are that their feature space is of very high dimensionality, and their target concepts refer to only a small subset of the features in the space. Under such conditions, multiplicative weight-update algorithms such as Winnow have been shown to have exceptionally good theoretical properties. We present an algorithm combining variants of Winnow and weighted-majority voting, and apply it to a problem in the aforementioned class: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting "to" for "too", "casual" for "causal", etc. We evaluate our algorithm, WinSpell, by comparing it against BaySpell, a statistics-based method representing the state of the art for this task. We find: (1) When run with a full (unpruned) set of features, WinSpell achieves accuracies significantly higher than BaySpell was able to achieve in either the pruned or unpruned condition; (2) When compared with other systems in the literature, WinSpell exhibits the highest performance; (3) The primary reason that WinSpell outperforms BaySpell is that WinSpell learns a better linear separator; (4) When run on a test set drawn from a different corpus than the training set was drawn from, WinSpell is better able than BaySpell to adapt, using a strategy we will present that combines supervised learning on the training set with unsupervised learning on the (noisy) test set.
[ "['Andrew R. Golding' 'Dan Roth']" ]
null
null
9811006
null
null
http://arxiv.org/pdf/cs/9811006v1
1998-11-02T18:57:23Z
1998-11-02T18:57:23Z
Machine Learning of Generic and User-Focused Summarization
A key problem in text summarization is finding a salience function which determines what information in the source should be included in the summary. This paper describes the use of machine learning on a training corpus of documents and their abstracts to discover salience functions which describe what combination of features is optimal for a given summarization task. The method addresses both "generic" and user-focused summaries.
[ "['Inderjeet Mani' 'Eric Bloedorn']" ]
null
null
9811010
null
null
http://arxiv.org/pdf/cs/9811010v1
1998-11-03T21:14:32Z
1998-11-03T21:14:32Z
Learning to Resolve Natural Language Ambiguities: A Unified Approach
We analyze a few of the commonly used statistics-based and machine learning algorithms for natural language disambiguation tasks and observe that they can be re-cast as learning linear separators in the feature space. Each of the methods makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data-driven approach which merely searches for a good linear separator in the feature space, without further assumptions on the domain or a specific problem. We present such an approach - a sparse network of linear separators, utilizing the Winnow learning algorithm - and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains having a very large number of attributes. In particular, we present an extensive experimental comparison of our approach with other methods on several well-studied lexical disambiguation tasks such as context-sensitive spelling correction, prepositional phrase attachment and part of speech tagging. In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.
[ "['Dan Roth']" ]
null
null
9812021
null
null
http://arxiv.org/pdf/cs/9812021v1
1998-12-22T16:33:19Z
1998-12-22T16:33:19Z
Forgetting Exceptions is Harmful in Language Learning
We show that in language learning, contrary to received wisdom, keeping exceptional training instances in memory can be beneficial for generalization accuracy. We investigate this phenomenon empirically on a selection of benchmark natural language processing tasks: grapheme-to-phoneme conversion, part-of-speech tagging, prepositional-phrase attachment, and base noun phrase chunking. In a first series of experiments we combine memory-based learning with training set editing techniques, in which instances are edited based on their typicality and class prediction strength. Results show that editing exceptional instances (with low typicality or low class prediction strength) tends to harm generalization accuracy. In a second series of experiments we compare memory-based learning and decision-tree learning methods on the same selection of tasks, and find that decision-tree learning often performs worse than memory-based learning. Moreover, the decrease in performance can be linked to the degree of abstraction from exceptions (i.e., pruning or eagerness). We provide explanations for both results in terms of the properties of the natural language processing tasks and the learning algorithms.
[ "['Walter Daelemans' 'Antal van den Bosch' 'Jakub Zavrel']" ]
null
null
9901001
null
null
http://arxiv.org/pdf/cs/9901001v1
1999-01-05T00:56:54Z
1999-01-05T00:56:54Z
TDLeaf(lambda): Combining Temporal Difference Learning with Game-Tree Search
In this paper we present TDLeaf(lambda), a variation on the TD(lambda) algorithm that enables it to be used in conjunction with minimax search. We present some experiments in both chess and backgammon which demonstrate its utility and provide comparisons with TD(lambda) and another less radical variant, TD-directed(lambda). In particular, our chess program, ``KnightCap,'' used TDLeaf(lambda) to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games. We discuss some of the reasons for this success and the relationship between our results and Tesauro's results in backgammon.
[ "['Jonathan Baxter' 'Andrew Tridgell' 'Lex Weaver']" ]
null
null
9901002
null
null
http://arxiv.org/pdf/cs/9901002v1
1999-01-10T03:21:23Z
1999-01-10T03:21:23Z
KnightCap: A chess program that learns by combining TD(lambda) with game-tree search
In this paper we present TDLeaf(lambda), a variation on the TD(lambda) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program ``KnightCap'' used TDLeaf(lambda) to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line play, rather than self-play.
[ "['Jonathan Baxter' 'Andrew Tridgell' 'Lex Weaver']" ]
null
null
9901014
null
null
http://arxiv.org/pdf/cs/9901014v1
1999-01-27T17:48:14Z
1999-01-27T17:48:14Z
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random, relative to every contemplated hypothesis and also these hypotheses are random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to the finite sets then application of the ideal principle turns into Kolmogorov's minimal sufficient statistic. In general we show that data compression is almost always the best strategy, both in hypothesis identification and prediction.
[ "['Paul Vitanyi' 'Ming Li']" ]
null
null
9902006
null
null
http://arxiv.org/pdf/cs/9902006v1
1999-02-02T16:17:16Z
1999-02-02T16:17:16Z
A Discipline of Evolutionary Programming
Genetic fitness optimization using small populations or small population updates across generations generally suffers from randomly diverging evolutions. We propose a notion of highly probable fitness optimization through feasible evolutionary computing runs on small size populations. Based on rapidly mixing Markov chains, the approach pertains to most types of evolutionary genetic algorithms, genetic programming and the like. We establish that for systems having associated rapidly mixing Markov chains and appropriate stationary distributions the new method finds optimal programs (individuals) with probability almost 1. To make the method useful would require a structured design methodology where the development of the program and the guarantee of the rapidly mixing property go hand in hand. We analyze a simple example to show that the method is implementable. More significant examples require theoretical advances, for example with respect to the Metropolis filter.
[ "['Paul Vitanyi']" ]
null
null
9902026
null
null
http://arxiv.org/pdf/cs/9902026v1
1999-02-15T01:52:45Z
1999-02-15T01:52:45Z
Probabilistic Inductive Inference:a Survey
Inductive inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic inductive inference. We mainly focus on finite inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.
[ "['Andris Ambainis']" ]
null
null
9905004
null
null
http://arxiv.org/pdf/cs/9905004v1
1999-05-10T20:52:23Z
1999-05-10T20:52:23Z
Using Collective Intelligence to Route Internet Traffic
A COllective INtelligence (COIN) is a set of interacting reinforcement learning (RL) algorithms designed in an automated fashion so that their collective behavior optimizes a global utility function. We summarize the theory of COINs, then present experiments using that theory to design COINs to control internet traffic routing. These experiments indicate that COINs outperform all previously investigated RL-based, shortest path routing algorithms.
[ "['David H. Wolpert' 'Kagan Tumer' 'Jeremy Frank']" ]
null
null
9905005
null
null
http://arxiv.org/pdf/cs/9905005v1
1999-05-10T22:20:40Z
1999-05-10T22:20:40Z
General Principles of Learning-Based Multi-Agent Systems
We consider the problem of how to design large decentralized multi-agent systems (MAS's) in an automated fashion, with little or no hand-tuning. Our approach has each agent run a reinforcement learning algorithm. This converts the problem into one of how to automatically set/update the reward functions for each of the agents so that the global goal is achieved. In particular we do not want the agents to ``work at cross-purposes'' as far as the global goal is concerned. We use the term artificial COllective INtelligence (COIN) to refer to systems that embody solutions to this problem. In this paper we present a summary of a mathematical framework for COINs. We then investigate the real-world applicability of the core concepts of that framework via two computer experiments: we show that our COINs perform near optimally in a difficult variant of Arthur's bar problem (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance for our COINs in the leader-follower problem.
[ "['David H. Wolpert' 'Kevin R. Wheeler' 'Kagan Tumer']" ]
null
null
9905007
null
null
http://arxiv.org/pdf/cs/9905007v1
1999-05-12T14:25:40Z
1999-05-12T14:25:40Z
An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery
This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word-order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that this algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances. Keywords: Bayesian grammar induction, probability models, minimum description length (MDL), unsupervised learning, cognitive modeling, language acquisition, segmentation
[ "['Michael R. Brent']" ]
null
null
9905008
null
null
http://arxiv.org/pdf/cs/9905008v1
1999-05-19T14:52:33Z
1999-05-19T14:52:33Z
Inducing a Semantically Annotated Lexicon via EM-Based Clustering
We present a technique for automatic induction of slot annotations for subcategorization frames, based on induction of hidden classes in the EM framework of statistical estimation. The models are empirically evaluated by a general decision test. Induction of slot labeling for subcategorization frames is accomplished by a further application of EM, and applied experimentally on frame observations derived from parsing large corpora. We outline an interpretation of the learned representations as theoretical-linguistic decompositional lexical entries.
[ "['Mats Rooth' 'Stefan Riezler' 'Detlef Prescher' 'Glenn Carroll'\n 'Franz Beil']" ]
null
null
9905009
null
null
http://arxiv.org/pdf/cs/9905009v1
1999-05-19T14:47:21Z
1999-05-19T14:47:21Z
Inside-Outside Estimation of a Lexicalized PCFG for German
The paper describes an extensive experiment in inside-outside estimation of a lexicalized probabilistic context free grammar for German verb-final clauses. Grammar and formalism features which make the experiment feasible are described. Successive models are evaluated on precision and recall of phrase markup.
[ "['Franz Beil' 'Glenn Carroll' 'Detlef Prescher' 'Stefan Riezler'\n 'Mats Rooth']" ]
null
null
9905010
null
null
http://arxiv.org/pdf/cs/9905010v1
1999-05-19T16:03:05Z
1999-05-19T16:03:05Z
Statistical Inference and Probabilistic Modelling for Constraint-Based NLP
We present a probabilistic model for constraint-based grammars and a method for estimating the parameters of such models from incomplete, i.e., unparsed data. Whereas methods exist to estimate the parameters of probabilistic context-free grammars from incomplete data (Baum 1970), so far for probabilistic grammars involving context-dependencies only parameter estimation techniques from complete, i.e., fully parsed data have been presented (Abney 1997). However, complete-data estimation requires labor-intensive, error-prone, and grammar-specific hand-annotating of large language corpora. We present a log-linear probability model for constraint logic programming, and a general algorithm to estimate the parameters of such models from incomplete data by extending the estimation algorithm of Della-Pietra, Della-Pietra, and Lafferty (1997) to incomplete data settings.
[ "['Stefan Riezler']" ]
null
null
9905011
null
null
http://arxiv.org/pdf/cs/9905011v1
1999-05-20T18:28:15Z
1999-05-20T18:28:15Z
Ensembles of Radial Basis Function Networks for Spectroscopic Detection of Cervical Pre-Cancer
The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, non-invasively and quantitatively probes the biochemical and morphological changes that occur in pre-cancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337, 380 and 460 nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from pre-cancerous tissue samples. The use of connectionist methods such as multi layered perceptrons, radial basis function networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated, and near real-time implementation of pre-cancer detection in the hands of non-experts. The results are more reliable, direct and accurate than those achieved by either human experts or multivariate statistical algorithms.
[ "['Kagan Tumer' 'Nirmala Ramanujam' 'Joydeep Ghosh'\n 'Rebecca Richards-Kortum']" ]
null
null
9905012
null
null
http://arxiv.org/pdf/cs/9905012v1
1999-05-20T20:15:13Z
1999-05-20T20:15:13Z
Linear and Order Statistics Combiners for Pattern Classification
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first order approximation, the error rate obtained over and above the Bayes error rate, is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the "added" error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the ith order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
[ "['Kagan Tumer' 'Joydeep Ghosh']" ]
null
null
9905013
null
null
http://arxiv.org/pdf/cs/9905013v1
1999-05-20T20:37:02Z
1999-05-20T20:37:02Z
Robust Combining of Disparate Classifiers through Order Statistics
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In the typical setting investigated till now, each classifier is trained on data taken or resampled from a common data set, or (almost) randomly selected subsets thereof, and thus experiences similar quality of training data. However, in certain situations where data is acquired and analyzed on-line at several geographically distributed locations, the quality of data may vary substantially, leading to large discrepancies in performance of individual classifiers. In this article we introduce and investigate a family of classifiers based on order statistics, for robust handling of such cases. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when such combiners are used. We show analytically that the selection of the median, the maximum and in general, the $i^{th}$ order statistic improves classification performance. Furthermore, we introduce the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that they are quite beneficial in presence of outliers or uneven classifier performance. Experimental results on several public domain data sets corroborate these findings.
[ "['Kagan Tumer' 'Joydeep Ghosh']" ]
null
null
9905014
null
null
http://arxiv.org/pdf/cs/9905014v1
1999-05-21T14:26:07Z
1999-05-21T14:26:07Z
Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition
This paper presents the MAXQ approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.
[ "['Thomas G. Dietterich']" ]
null
null
9905015
null
null
http://arxiv.org/pdf/cs/9905015v1
1999-05-21T14:49:39Z
1999-05-21T14:49:39Z
State Abstraction in MAXQ Hierarchical Reinforcement Learning
Many researchers have explored methods for hierarchical reinforcement learning (RL) with temporal abstractions, in which abstract actions are defined that can perform many primitive actions before terminating. However, little is known about learning with state abstractions, in which aspects of the state space are ignored. In previous work, we developed the MAXQ method for hierarchical RL. In this paper, we define five conditions under which state abstraction can be combined with the MAXQ value function decomposition. We prove that the MAXQ-Q learning algorithm converges under these conditions and show experimentally that state abstraction is important for the successful application of MAXQ-Q learning.
[ "['Thomas G. Dietterich']" ]
null
null
9906004
null
null
http://arxiv.org/pdf/cs/9906004v1
1999-06-02T13:41:51Z
1999-06-02T13:41:51Z
Cascaded Grammatical Relation Assignment
In this paper we discuss cascaded Memory-Based grammatical relations assignment. In the first stages of the cascade, we find chunks of several types (NP,VP,ADJP,ADVP,PP) and label them with their adverbial function (e.g. local, temporal). In the last stage, we assign grammatical relations to pairs of chunks. We studied the effect of adding several levels to this cascaded classifier and we found that even the less performing chunkers enhanced the performance of the relation finder.
[ "['Sabine Buchholz' 'Jorn Veenstra' 'Walter Daelemans']" ]
null
null
9906005
null
null
http://arxiv.org/pdf/cs/9906005v1
1999-06-02T13:48:48Z
1999-06-02T13:48:48Z
Memory-Based Shallow Parsing
We present a memory-based learning (MBL) approach to shallow parsing in which POS tagging, chunking, and identification of syntactic relations are formulated as memory-based modules. The experiments reported in this paper show competitive results; the F-values for the Wall Street Journal (WSJ) treebank are 93.8% for NP chunking, 94.7% for VP chunking, 77.1% for subject detection and 79.0% for object detection.
[ "['Walter Daelemans' 'Sabine Buchholz' 'Jorn Veenstra']" ]
null
null
9906016
null
null
http://arxiv.org/pdf/cs/9906016v1
1999-06-18T03:25:03Z
1999-06-18T03:25:03Z
Automatically Selecting Useful Phrases for Dialogue Act Tagging
We present an empirical investigation of various ways to automatically identify phrases in a tagged corpus that are useful for dialogue act tagging. We found that a new method (which measures a phrase's deviation from an optimally-predictive phrase), enhanced with a lexical filtering mechanism, produces significantly better cues than manually-selected cue phrases, the exhaustive set of phrases in a training corpus, and phrases chosen by traditional metrics, like mutual information and information gain.
[ "['Ken Samuel' 'Sandra Carberry' 'K. Vijay-Shanker']" ]
null
null
9907004
null
null
http://arxiv.org/pdf/cs/9907004v2
1999-10-14T00:31:39Z
1999-07-06T01:44:00Z
MAP Lexicon is useful for segmentation and word discovery in child-directed speech
Because of rather fundamental changes to the underlying model proposed in the paper, it has been withdrawn from the archive.
[ "['Anand Venkataraman']" ]
null
null
9908013
null
null
http://arxiv.org/abs/cs/9908013v1
1999-08-17T21:32:41Z
1999-08-17T21:32:41Z
Collective Intelligence for Control of Distributed Dynamical Systems
We consider the El Farol bar problem, also known as the minority game (W. B. Arthur, ``The American Economic Review'', 84(2): 406--411 (1994), D. Challet and Y.C. Zhang, ``Physica A'', 256:514 (1998)). We view it as an instance of the general problem of how to configure the nodal elements of a distributed dynamical system so that they do not ``work at cross purposes'', in that their collective dynamics avoids frustration and thereby achieves a provided global goal. We summarize a mathematical theory for such configuration applicable when (as in the bar problem) the global goal can be expressed as minimizing a global energy function and the nodes can be expressed as minimizers of local free energy functions. We show that a system designed with that theory performs nearly optimally for the bar problem.
[ "['David H. Wolpert' 'Kevin R. Wheeler' 'Kagan Tumer']" ]
null
null
9908014
null
null
http://arxiv.org/pdf/cs/9908014v1
1999-08-17T22:49:19Z
1999-08-17T22:49:19Z
An Introduction to Collective Intelligence
This paper surveys the emerging science of how to design a ``COllective INtelligence'' (COIN). A COIN is a large multi-agent system where: (i) There is little to no centralized communication or control; and (ii) There is a provided world utility function that rates the possible histories of the full system. In particular, we are interested in COINs in which each agent runs a reinforcement learning (RL) algorithm. Rather than use a conventional modeling approach (e.g., model the system dynamics, and hand-tune agents to cooperate), we aim to solve the COIN design problem implicitly, via the ``adaptive'' character of the RL algorithms of each of the agents. This approach introduces an entirely new, profound design problem: Assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons, Braess's paradox, or the liquidity trap? Although still very young, research specifically concentrating on the COIN design problem has already resulted in successes in artificial domains, in particular in packet-routing, the leader-follower problem, and in variants of Arthur's El Farol bar problem. It is expected that as it matures and draws upon other disciplines related to COINs, this research will greatly expand the range of tasks addressable by human engineers. Moreover, in addition to drawing on them, such a fully developed science of COIN design may provide much insight into other already established scientific fields, such as economics, game theory, and population biology.
[ "['David H. Wolpert' 'Kagan Tumer']" ]
null
null
9910011
null
null
http://arxiv.org/pdf/cs/9910011v1
1999-10-13T03:25:33Z
1999-10-13T03:25:33Z
A statistical model for word discovery in child directed speech
A statistical model for segmentation and word discovery in child directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
[ "['Anand Venkataraman']" ]
null
null
9912008
null
null
http://arxiv.org/pdf/cs/9912008v2
2001-01-26T18:45:28Z
1999-12-13T08:33:43Z
New Error Bounds for Solomonoff Prediction
Solomonoff sequence prediction is a scheme to predict digits of binary strings without knowing the underlying probability distribution. We call a prediction scheme informed when it knows the true probability distribution of the sequence. Several new relations between universal Solomonoff sequence prediction and informed prediction and general probabilistic prediction schemes will be proved. Among others, they show that the number of errors in Solomonoff prediction is finite for computable distributions, if finite in the informed case. Deterministic variants will also be studied. The most interesting result is that the deterministic variant of Solomonoff prediction is optimal compared to any other probabilistic or deterministic prediction scheme apart from additive square root corrections only. This makes it well suited even for difficult prediction problems, where it does not suffice when the number of errors is minimal to within some factor greater than one. Solomonoff's original bound and the ones presented here complement each other in a useful way.
[ "['Marcus Hutter']" ]
null
null
9912016
null
null
http://arxiv.org/pdf/cs/9912016v1
1999-12-23T01:07:33Z
1999-12-23T01:07:33Z
HMM Specialization with Selective Lexicalization
We present a technique which complements Hidden Markov Models by incorporating some lexicalized states representing syntactically uncommon words. Our approach examines the distribution of transitions, selects the uncommon words, and makes lexicalized states for the words. We performed a part-of-speech tagging experiment on the Brown corpus to evaluate the resultant language model and discovered that this technique improved the tagging accuracy by 0.21% at the 95% level of confidence.
[ "['Jin-Dong Kim' 'Sang-Zoo Lee' 'Hae-Chang Rim']" ]
null
null
null
2023
cvpr
null
null
null
Train-Once-for-All Personalization
We study the problem of how to train a "personalization-friendly" model such that given only the task descriptions, the model can be adapted to different end-users' needs, e.g., for accurately classifying different subsets of objects. One baseline approach is to train a "generic" model for classifying a wide range of objects, followed by class selection. In our experiments, we however found it suboptimal, perhaps because the model's weights are kept frozen without being personalized. To address this drawback, we propose Train-once-for-All PERsonalization (TAPER), a framework that is trained just once and can later customize a model for different end-users given their task descriptions. TAPER learns a set of "basis" models and a mixer predictor, such that given the task description, the weights (not the predictions!) of the basis models can be on the fly combined into a single "personalized" model. Via extensive experiments on multiple recognition tasks, we show that TAPER consistently outperforms the baseline methods in achieving a higher personalized accuracy. Moreover, we show that TAPER can synthesize a much smaller model to achieve comparable performance to a huge generic model, making it "deployment-friendly" to resource-limited end devices. Interestingly, even without end-users' task descriptions, TAPER can still be specialized to the deployed context based on its past predictions, making it even more "personalization-friendly".
[ "Hong-You Chen, Yandong Li, Yin Cui, Mingda Zhang, Wei-Lun Chao, Li Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 11818-11827" ]