Columns: Titles (string), Abstracts (string), Years (int64), Categories (string)
Mathematical Model for Transformation of Sentences from Active Voice to Passive Voice
Formal work in linguistics has both produced and used important mathematical tools. Motivated by a survey of models for context and word meaning, syntactic categories, phrase structure rules and trees, the present paper presents a mathematical model for the transformation of sentences from active voice to passive voice, the passive being the form of a transitive verb whose grammatical subject serves as the patient, receiving the action of the verb. For this purpose we have parsed all sentences of a corpus and generated Boolean groups for each of them. It has been observed that when the constituents of the sentences are taken as subgroups, the sequences of phrases form permutation groups. Application of the isomorphism property yields a permutation mapping between the important subgroups, resulting in a model for the transformation of sentences from active voice to passive voice. A computer program has been written to enable software developers to build grammar software for sentence transformations.
2009
Computation and Language
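As a minimal illustration of the constituent permutation described in the entry above, the sketch below reorders a toy (subject, verb, object) parse into passive order. This is a hedged sketch only: the function name, the flat three-constituent parse, and the auxiliary handling are hypothetical simplifications, not the paper's group-theoretic model.

# Hedged sketch: permute (subject, verb, object) constituents of a parsed
# active clause into the passive order described in the entry above.
# The parse and the auxiliary/participle handling are toy placeholders.

def to_passive(subject, verb_past_participle, obj, auxiliary="was"):
    """Reorder active-voice constituents into a passive-voice clause.

    Active  : subject + verb + object          e.g. "the cat chased the mouse"
    Passive : object + aux + participle + "by" + subject
    """
    return f"{obj} {auxiliary} {verb_past_participle} by {subject}"

if __name__ == "__main__":
    # "the cat chased the mouse" -> "the mouse was chased by the cat"
    print(to_passive("the cat", "chased", "the mouse"))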
Language Diversity across the Consonant Inventories: A Study in the Framework of Complex Networks
In this paper, we attempt to explain the emergence of the linguistic diversity that exists across the consonant inventories of some of the major language families of the world through a complex network based growth model. There is only a single parameter for this model, meant to introduce a small amount of randomness into the otherwise preferential attachment based growth process. Experiments with this model parameter indicate that the choice of consonants among languages within a family is far more preferential than it is across families. The implications of this result are twofold -- (a) there is an innate preference of speakers towards acquiring certain linguistic structures over others, and (b) shared ancestry drives a stronger preferential connection between languages within a family than across families. Furthermore, our observations indicate that this parameter might bear a correlation with the period of existence of the language families under investigation.
2009
Computation and Language
Acquisition of morphological families and derivational series from a machine readable dictionary
The paper presents a linguistic and computational model aiming at making the morphological structure of the lexicon emerge from the formal and semantic regularities of the words it contains. The model is word-based. The proposed morphological structure consists of (1) binary relations that connect each headword with words that are morphologically related, and especially with the members of its morphological family and its derivational series, and of (2) the analogies that hold between the words. The model has been tested on the lexicon of French using the TLFi machine readable dictionary.
2009
Computation and Language
An Object-Oriented and Fast Lexicon for Semantic Generation
This paper is about the technical design of a large computational lexicon, its storage, and its access from a Prolog environment. Traditionally, efficient access and storage of data structures is implemented by a relational database management system. In Delilah, a lexicon-based NLP system, efficient access to the lexicon by the semantic generator is vital. We show that our highly detailed HPSG-style lexical specifications do not fit well in the Relational Model, and that they cannot be efficiently retrieved. We argue that they fit more naturally in the Object-Oriented Model. Although storage of objects is redundant, we claim that efficient access is still possible by applying indexing, and compression techniques from the Relational Model to the Object-Oriented Model. We demonstrate that it is possible to implement object-oriented storage and fast access in ISO Prolog.
2009
Computation and Language
Normalized Web Distance and Word Similarity
There is a great deal of work in cognitive psychology, linguistics, and computer science, about using word (or phrase) frequencies in context in text corpora to develop measures for word similarity or word association, going back to at least the 1960s. The goal of this chapter is to introduce the normalized web distance (NWD) method to determine similarity between words and phrases. It is a general way to tap the amorphous low-grade knowledge available for free on the Internet, typed in by local users aiming at personal gratification of diverse objectives, and yet globally achieving what is effectively the largest semantic electronic database in the world. Moreover, this database is available for all by using any search engine that can return aggregate page-count estimates for a large range of search-queries. In the paper introducing the NWD it was called `normalized Google distance (NGD),' but since Google doesn't allow computer searches anymore, we opt for the more neutral and descriptive NWD.
2009
Computation and Language
Encoding models for scholarly literature
We examine the issue of digital formats for document encoding, archiving and publishing, through the specific example of "born-digital" scholarly journal articles. We will begin by looking at the traditional workflow of journal editing and publication, and how these practices have made the transition into the online domain. We will examine the range of different file formats in which electronic articles are currently stored and published. We will argue strongly that, despite the prevalence of binary and proprietary formats such as PDF and MS Word, XML is a far superior encoding choice for journal articles. Next, we look at the range of XML document structures (DTDs, Schemas) which are in common use for encoding journal articles, and consider some of their strengths and weaknesses. We will suggest that, despite the existence of specialized schemas intended specifically for journal articles (such as NLM), and more broadly-used publication-oriented schemas such as DocBook, there are strong arguments in favour of developing a subset or customization of the Text Encoding Initiative (TEI) schema for the purpose of journal-article encoding; TEI is already in use in a number of journal publication projects, and the scale and precision of the TEI tagset makes it particularly appropriate for encoding scholarly articles. We will outline the document structure of a TEI-encoded journal article, and look in detail at suggested markup patterns for specific features of journal articles.
2010
Computation and Language
Size dependent word frequencies and translational invariance of books
It is shown that a real novel shares many characteristic features with a null model in which the words are randomly distributed throughout the text. Such a common feature is a certain translational invariance of the text. Another is that the functional form of the word-frequency distribution of a novel depends on the length of the text in the same way as the null model. This means that an approximate power-law tail ascribed to the data will have an exponent which changes with the size of the text-section which is analyzed. A further consequence is that a novel cannot be described by text-evolution models like the Simon model. The size-transformation of a novel is found to be well described by a specific Random Book Transformation. This size transformation in addition enables a more precise determination of the functional form of the word-frequency distribution. The implications of the results are discussed.
2010
Computation and Language
Properties of quasi-alphabetic tree bimorphisms
We study the class of quasi-alphabetic relations, i.e., tree transformations defined by tree bimorphisms with two quasi-alphabetic tree homomorphisms and a regular tree language. We present a canonical representation of these relations; as an immediate consequence, we get the closure under union. Also, we show that they are not closed under intersection and complement, and do not preserve most common operations on trees (branches, subtrees, v-product, v-quotient, f-top-catenation). Moreover, we prove that the translations defined by quasi-alphabetic tree bimorphisms are exactly products of context-free string languages. We conclude by presenting the connections between quasi-alphabetic relations, alphabetic relations and classes of tree transformations defined by several types of top-down tree transducers. Furthermore, we get that quasi-alphabetic relations preserve the recognizable and algebraic tree languages.
2010
Computation and Language
Without a 'doubt'? Unsupervised discovery of downward-entailing operators
An important part of textual inference is making deductions involving monotonicity, that is, determining whether a given assertion entails restrictions or relaxations of that assertion. For instance, the statement 'We know the epidemic spread quickly' does not entail 'We know the epidemic spread quickly via fleas', but 'We doubt the epidemic spread quickly' entails 'We doubt the epidemic spread quickly via fleas'. Here, we present the first algorithm for the challenging lexical-semantics problem of learning linguistic constructions that, like 'doubt', are downward entailing (DE). Our algorithm is unsupervised, resource-lean, and effective, accurately recovering many DE operators that are missing from the hand-constructed lists that textual-inference systems currently use.
2009
Computation and Language
How opinions are received by online communities: A case study on Amazon.com helpfulness votes
There are many on-line settings in which users publicly express opinions. A number of these offer mechanisms for other users to evaluate these opinions; a canonical example is Amazon.com, where reviews come with annotations like "26 of 32 people found the following review helpful." Opinion evaluation appears in many off-line settings as well, including market research and political campaigns. Reasoning about the evaluation of an opinion is fundamentally different from reasoning about the opinion itself: rather than asking, "What did Y think of X?", we are asking, "What did Z think of Y's opinion of X?" Here we develop a framework for analyzing and modeling opinion evaluation, using a large-scale collection of Amazon book reviews as a dataset. We find that the perceived helpfulness of a review depends not just on its content but also, in subtle ways, on how the expressed evaluation relates to other evaluations of the same product. As part of our approach, we develop novel methods that take advantage of the phenomenon of review "plagiarism" to control for the effects of text in opinion evaluation, and we provide a simple and natural mathematical model consistent with our findings. Our analysis also allows us to distinguish among the predictions of competing theories from sociology and social psychology, and to discover unexpected differences in the collective opinion-evaluation behavior of user populations from different countries.
2009
Computation and Language
Non-Parametric Bayesian Areal Linguistics
We describe a statistical model over linguistic areas and phylogeny. Our model recovers known areas and identifies a plausible hierarchy of areal features. The use of areas improves genetic reconstruction of languages both qualitatively and quantitatively according to a variety of metrics. We model linguistic areas by a Pitman-Yor process and linguistic phylogeny by Kingman's coalescent.
2009
Computation and Language
A Bayesian Model for Discovering Typological Implications
A standard form of analysis for linguistic typology is the universal implication. These implications state facts about the range of extant languages, such as ``if objects come after verbs, then adjectives come after nouns.'' Such implications are typically discovered by painstaking hand analysis over a small sample of languages. We propose a computational model for assisting at this process. Our model is able to discover both well-known implications as well as some novel implications that deserve further study. Moreover, through a careful application of hierarchical analysis, we are able to cope with the well-known sampling problem: languages are not independent.
2007
Computation and Language
Induction of Word and Phrase Alignments for Automatic Document Summarization
Current research in automatic single document summarization is dominated by two effective, yet naive approaches: summarization by sentence extraction, and headline generation via bag-of-words models. While successful in some tasks, neither of these models is able to adequately capture the large set of linguistic devices utilized by humans when they produce summaries. One possible explanation for the widespread use of these models is that good techniques have been developed to extract appropriate training data for them from existing document/abstract and document/headline corpora. We believe that future progress in automatic summarization will be driven both by the development of more sophisticated, linguistically informed models, as well as a more effective leveraging of document/abstract corpora. In order to open the doors to simultaneously achieving both of these goals, we have developed techniques for automatically producing word-to-word and phrase-to-phrase alignments between documents and their human-written abstracts. These alignments make explicit the correspondences that exist in such document/abstract pairs, and create a potentially rich data source from which complex summarization algorithms may learn. This paper describes experiments we have carried out to analyze the ability of humans to perform such alignments, and based on these analyses, we describe experiments for creating them automatically. Our model for the alignment task is based on an extension of the standard hidden Markov model, and learns to create alignments in a completely unsupervised fashion. We describe our model in detail and present experimental results that show that our model is able to learn to reliably identify word- and phrase-level alignments in a corpus of <document,abstract> pairs.
2005
Computation and Language
A Noisy-Channel Model for Document Compression
We present a document compression system that uses a hierarchical noisy-channel model of text production. Our compression system first automatically derives the syntactic structure of each sentence and the overall discourse structure of the text given as input. The system then uses a statistical hierarchical model of text production in order to drop non-important syntactic and discourse constituents so as to generate coherent, grammatical document compressions of arbitrary length. The system outperforms both a baseline and a sentence-based compression system that operates by simplifying sequentially all sentences in a text. Our results support the claim that discourse knowledge plays an important role in document summarization.
2002
Computation and Language
A Large-Scale Exploration of Effective Global Features for a Joint Entity Detection and Tracking Model
Entity detection and tracking (EDT) is the task of identifying textual mentions of real-world entities in documents, extending the named entity detection and coreference resolution task by considering mentions other than names (pronouns, definite descriptions, etc.). Like NE tagging and coreference resolution, most solutions to the EDT task separate out the mention detection aspect from the coreference aspect. By doing so, these solutions are limited to using only local features for learning. In contrast, by modeling both aspects of the EDT task simultaneously, we are able to learn using highly complex, non-local features. We develop a new joint EDT model and explore the utility of many features, demonstrating their effectiveness on this task.
2005
Computation and Language
Bayesian Query-Focused Summarization
We present BayeSum (for ``Bayesian summarization''), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.
2006
Computation and Language
Pattern Based Term Extraction Using ACABIT System
In this paper, we propose a pattern-based term extraction approach for Japanese, applying the ACABIT system originally developed for French. The proposed approach evaluates termhood using morphological patterns of basic terms and term variants. After extracting term candidates, the ACABIT system filters out non-terms from the candidates based on log-likelihood. This approach is suitable for Japanese term extraction because most Japanese terms are compound nouns or simple phrasal patterns.
2003
Computation and Language
Un système modulaire d'acquisition automatique de traductions à partir du Web
We present a method of automatic translation (French/English) of Complex Lexical Units (CLU), aiming at extracting a bilingual lexicon. Our modular system is based on linguistic properties (compositionality, polysemy, etc.). Different aspects of the multilingual Web are used to validate candidate translations and collect new terms. We first build a French corpus of Web pages to collect CLU. Three adapted processing stages are applied for each linguistic property: compositional and non-polysemous translations, compositional polysemous translations, and non-compositional translations. Our evaluation on a sample of CLU shows that our Web-based technique can reach a very high precision.
2009
Computation and Language
Multiple Retrieval Models and Regression Models for Prior Art Search
This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized for the IP track of CLEF 2009. Our approach presents three main characteristics: 1. The usage of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German) producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. As we exploit specific metadata of the patent documents and the citation relations only at the creation of initial working sets and during the final post ranking step, our architecture remains generic and easy to extend.
2009
Computation and Language
An OLAC Extension for Dravidian Languages
OLAC was founded in 2000 for creating online databases of language resources. This paper intends to review the bottom-up distributed character of the project and proposes an extension of the architecture for Dravidian languages. An ontological structure is considered for effective natural language processing (NLP), and its advantages over statistical methods are reviewed.
2009
Computation and Language
Empowering OLAC Extension using Anusaaraka and Effective text processing using Double Byte coding
The paper reviews the hurdles while trying to implement the OLAC extension for Dravidian / Indian languages. The paper further explores the possibilities which could minimise or solve these problems. In this context, the Chinese system of text processing and the anusaaraka system are scrutinised.
2009
Computation and Language
Implementation of Rule Based Algorithm for Sandhi-Vicheda Of Compound Hindi Words
Sandhi means joining two or more words to coin a new word. Sandhi literally means `putting together' or combining (of sounds); it denotes all combinatory sound-changes effected (spontaneously) for ease of pronunciation. Sandhi-vicheda describes [5] the process by which one letter (whether single or conjoined) is broken to form two words: part of the broken letter remains as the last letter of the first word and part of it forms the first letter of the next word. Sandhi-vicheda is an easy and interesting device that can add an entirely new dimension to the traditional approach to Hindi teaching. In this paper, using a rule-based algorithm, we report an accuracy of 60-80% depending upon the number of rules implemented.
2009
Computation and Language
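To make the rule-based splitting idea in the entry above concrete, here is a hedged sketch of a sandhi-vicheda step. The rule table, the romanized spellings, and the function name are purely illustrative assumptions, not the paper's actual rule set.

# Hedged sketch of a rule-based sandhi-vicheda (split) step, assuming rules of the
# form "combined sound -> (ending of first word, beginning of second word)".
# The rule table and romanization below are illustrative only.

RULES = {
    "A": ("a", "a"),    # long vowel produced by joining two short vowels (illustrative)
    "o": ("aH", "a"),   # a visarga-style combination (illustrative)
}

def sandhi_vicheda(word):
    """Return candidate (first, second) splits of a compound word."""
    candidates = []
    for i, ch in enumerate(word):
        if ch in RULES and 0 < i < len(word) - 1:
            left_tail, right_head = RULES[ch]
            candidates.append((word[:i] + left_tail, right_head + word[i + 1:]))
    return candidates

print(sandhi_vicheda("ramAlaya"))  # splits at the long "A" into ("rama", "alaya")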
Reference Resolution within the Framework of Cognitive Grammar
Following the principles of Cognitive Grammar, we concentrate on a model for reference resolution that attempts to overcome the difficulties of previous approaches, based on the fundamental assumption that all reference (independent of the type of the referring expression) is accomplished via access to and restructuring of domains of reference rather than by direct linkage to the entities themselves. The model accounts for entities not explicitly mentioned but understood in a discourse, and enables exploitation of discursive and perceptual context to limit the set of potential referents for a given referring expression. As the most important feature, we note that a single mechanism is required to handle what are typically treated as diverse phenomena. Our approach, then, provides a fresh perspective on the relations between Cognitive Grammar and the problem of reference.
2001
Computation and Language
Marking-up multiple views of a Text: Discourse and Reference
We describe an encoding scheme for discourse structure and reference, based on the TEI Guidelines and the recommendations of the Corpus Encoding Specification (CES). A central feature of the scheme is a CES-based data architecture enabling the encoding of and access to multiple views of a marked-up document. We describe a tool architecture that supports the encoding scheme, and then show how we have used the encoding scheme and the tools to perform a discourse analytic task in support of a model of global discourse cohesion called Veins Theory (Cristea & Ide, 1998).
1998
Computation and Language
A Common XML-based Framework for Syntactic Annotations
It is widely recognized that the proliferation of annotation schemes runs counter to the need to re-use language resources, and that standards for linguistic annotation are becoming increasingly mandatory. To answer this need, we have developed a framework comprised of an abstract model for a variety of different annotation types (e.g., morpho-syntactic tagging, syntactic annotation, co-reference annotation, etc.), which can be instantiated in different ways depending on the annotator's approach and goals. In this paper we provide an overview of the framework, demonstrate its applicability to syntactic annotation, and show how it can contribute to comparative evaluation of parser output and diverse syntactic annotation schemes.
2001
Computation and Language
Standards for Language Resources
This paper presents an abstract data model for linguistic annotations and its implementation using XML, RDF and related standards, and outlines the work of a newly formed committee of the International Standards Organization (ISO), ISO/TC 37/SC 4 Language Resource Management, which will use this work as its starting point. The primary motive for presenting the latter is to solicit the participation of members of the research community to contribute to the work of the committee.
2002
Computation and Language
Language Models for Handwritten Short Message Services
Handwriting is an alternative method for composing Short Message Service (SMS) texts. However, the texts produced feature a whole new language, including for instance abbreviations and other consonantal writing which sprang up for time saving and fashion. We have collected and processed a significant number of such handwritten SMS, and used various strategies to tackle this challenging area of handwriting recognition. We propose to study more specifically three different phenomena: consonant skeleton, rebus, and phonetic writing. For each of them, we compare the rough results produced by a standard recognition system with those obtained when using a specific language model.
2007
Computation and Language
Vers la reconnaissance de mini-messages manuscrits
Handwriting is an alternative method for composing Short Message Service (SMS) texts. However, the texts produced feature a whole new language, including for instance abbreviations and other consonantal writing which sprang up for time saving and fashion. We have collected and processed a significant number of such handwritten SMS, and used various strategies to tackle this challenging area of handwriting recognition. We propose to study more specifically three different phenomena: consonant skeleton, rebus, and phonetic writing. For each of them, we compare the rough results produced by a standard recognition system with those obtained when using a specific language model to take care of them.
2007
Computation and Language
Analyse en dépendances à l'aide des grammaires d'interaction
This article proposes a method to extract dependency structures from phrase-structure level parsing with Interaction Grammars. Interaction Grammars are a formalism which expresses interactions among words using a polarity system. Syntactical composition is led by the saturation of polarities. Interactions take place between constituents, but as grammars are lexicalized, these interactions can be translated at the level of words. Dependency relations are extracted from the parsing process: every dependency is the consequence of a polarity saturation. The dependency relations we obtain can be seen as a refinement of the usual dependency tree. Generally speaking, this work sheds new light on links between phrase structure and dependency parsing.
2009
Computation and Language
Grouping Synonyms by Definitions
We present a method for grouping the synonyms of a lemma according to its dictionary senses. The senses are defined by a large machine readable dictionary for French, the TLFi (Trésor de la langue française informatisé), and the synonyms are given by 5 synonym dictionaries (also for French). To evaluate the proposed method, we manually constructed a gold standard where, for each (word, definition) pair and given the set of synonyms defined for that word by the 5 synonym dictionaries, 4 lexicographers specified the set of synonyms they judge adequate. While inter-annotator agreement on that task ranges from 67% to at best 88% depending on the annotator pair and on the synonym dictionary being considered, the automatic procedure we propose scores a precision of 67% and a recall of 71%. The proposed method is compared with related work, namely word sense disambiguation, synonym lexicon acquisition and WordNet construction.
2009
Computation and Language
Mathematics, Recursion, and Universals in Human Languages
There are many scientific problems generated by the multiple and conflicting alternative definitions of linguistic recursion and human recursive processing that exist in the literature. The purpose of this article is to make available to the linguistic community the standard mathematical definition of recursion and to apply it to discuss linguistic recursion. As a byproduct, we obtain an insight into certain "soft universals" of human languages, which are related to cognitive constructs necessary to implement mathematical reasoning, i.e. mathematical model theory.
2009
Computation and Language
Towards Multimodal Content Representation
Multimodal interfaces, combining the use of speech, graphics, gestures, and facial expressions in input and output, promise to provide new possibilities to deal with information in more effective and efficient ways, supporting for instance: - the understanding of possibly imprecise, partial or ambiguous multimodal input; - the generation of coordinated, cohesive, and coherent multimodal presentations; - the management of multimodal interaction (e.g., task completion, adapting the interface, error prevention) by representing and exploiting models of the user, the domain, the task, the interactive context, and the media (e.g. text, audio, video). The present document is intended to support the discussion on multimodal content representation, its possible objectives and basic constraints, and how the definition of a generic representation framework for multimodal content representation may be approached. It takes into account the results of the Dagstuhl workshop, in particular those of the informal working group on multimodal meaning representation that was active during the workshop (see http://www.dfki.de/~wahlster/Dagstuhl_Multi_Modality, Working Group 4).
2002
Computation and Language
A Note On Higher Order Grammar
Both syntax-phonology and syntax-semantics interfaces in Higher Order Grammar (HOG) are expressed as axiomatic theories in higher-order logic (HOL), i.e. a language is defined entirely in terms of provability in the single logical system. An important implication of this elegant architecture is that the meaning of a valid expression turns out to be represented not by a single, nor even by a few "discrete" terms (in case of ambiguity), but by a "continuous" set of logically equivalent terms. The note is devoted to precise formulation and proof of this observation.
2009
Computation and Language
Ludics and its Applications to natural Language Semantics
Proofs, in Ludics, have an interpretation provided by their counter-proofs, that is, the objects they interact with. We follow the same idea by proposing that sentence meanings are given by the counter-meanings they are opposed to in a dialectical interaction. The conception is at the intersection of a proof-theoretic and a game-theoretic account of semantics, but it enlarges them by allowing us to deal with possibly infinite processes.
2009
Computation and Language
Evaluation of Hindi to Punjabi Machine Translation System
Machine Translation in India is relatively young. The earliest efforts date from the late 80s and early 90s. The success of every system is judged from its experimental evaluation results. A number of machine translation systems have been started for development, but to the best of the authors' knowledge, no high quality system has been completed which can be used in real applications. Recently, Punjabi University, Patiala, India has developed a Punjabi to Hindi machine translation system with a high accuracy of about 92%. Both systems, i.e. the system under question and the developed system, are between the same closely related languages. Thus, this paper presents the evaluation results of a Hindi to Punjabi machine translation system. It makes sense to use the same evaluation criteria as those of the Punjabi to Hindi machine translation system. After evaluation, the accuracy of the system is found to be about 95%.
2009
Computation and Language
The Uned systems at Senseval-2
We have participated in the SENSEVAL-2 English tasks (all words and lexical sample) with an unsupervised system based on mutual information measured over a large corpus (277 million words) and some additional heuristics. A supervised extension of the system was also presented to the lexical sample task. Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first sense heuristic for all words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains a strong challenge.
2002
Computation and Language
Word Sense Disambiguation Based on Mutual Information and Syntactic Patterns
This paper describes a hybrid system for WSD, presented to the English all-words and lexical-sample tasks, that relies on two different unsupervised approaches. The first one selects the senses according to mutual information proximity between a context word and a variant of the sense. The second heuristic analyzes the examples of use in the glosses of the senses so that simple syntactic patterns are inferred. These patterns are matched against the disambiguation contexts. We show that the first heuristic obtains a precision and recall of .58 and .35 respectively in the all-words task while the second obtains .80 and .25. The high precision obtained recommends deeper research into these techniques. Results for the lexical sample task are also provided.
2004
Computation and Language
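A minimal sketch of the first heuristic in the entry above: scoring each candidate sense by pointwise mutual information between context words and words associated with the sense. The count dictionaries, sense names, and thresholds are made-up placeholders for large-corpus statistics, not the paper's resources.

# Hedged sketch: pick the sense whose associated "variant" words have the highest
# total PMI with the words observed in the disambiguation context.

import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information from raw co-occurrence counts."""
    if count_xy == 0:
        return 0.0
    return math.log((count_xy * total) / (count_x * count_y))

def best_sense(context_words, sense_variants, counts, pair_counts, total):
    """sense_variants maps each sense to a list of words associated with it."""
    def score(sense):
        return sum(
            pmi(pair_counts.get((c, v), 0), counts.get(c, 1), counts.get(v, 1), total)
            for c in context_words
            for v in sense_variants[sense]
        )
    return max(sense_variants, key=score)

# Tiny illustration with made-up counts.
counts = {"bank": 1000, "river": 500, "money": 800, "water": 700, "loan": 300}
pairs = {("river", "water"): 120, ("river", "money"): 2}
senses = {"bank#water": ["water"], "bank#finance": ["money", "loan"]}
print(best_sense(["river"], senses, counts, pairs, total=1_000_000))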
Word Sense Disambiguation Using English-Spanish Aligned Phrases over Comparable Corpora
In this paper we describe a WSD experiment based on bilingual English-Spanish comparable corpora in which individual noun phrases have been identified and aligned with their respective counterparts in the other language. The evaluation of the experiment has been carried out against SemCor. We show that, with the alignment algorithm employed, potential precision is high (74.3%); however, the coverage of the method is low (2.7%), due to alignments being far less frequent than we expected. Contrary to our intuition, precision does not rise consistently with the number of alignments. The coverage is low due to several factors: there are important domain differences, and English and Spanish are too closely related languages for this approach to discriminate efficiently between senses, rendering it unsuitable for WSD, although the method may prove more productive in machine translation.
2009
Computation and Language
A New Computational Schema for Euphonic Conjunctions in Sanskrit Processing
Automated language processing is central to the drive to enable facilitated referencing of increasingly available Sanskrit E texts. The first step towards processing Sanskrit text involves the handling of Sanskrit compound words that are an integral part of Sanskrit texts. This firstly necessitates the processing of euphonic conjunctions or sandhis, which are points in words or between words, at which adjacent letters coalesce and transform. The ancient Sanskrit grammarian Panini's codification of the Sanskrit grammar is the accepted authority in the subject. His famed sutras or aphorisms, numbering approximately four thousand, tersely, precisely and comprehensively codify the rules of the grammar, including all the rules pertaining to sandhis. This work presents a fresh new approach to processing sandhis in terms of a computational schema. This new computational model is based on Panini's complex codification of the rules of grammar. The model has simple beginnings and is yet powerful, comprehensive and computationally lean.
2009
Computation and Language
ANN-based Innovative Segmentation Method for Handwritten text in Assamese
Artificial Neural Networks (ANNs) have been widely used for the recognition of optically scanned characters, partially emulating human thinking in the domain of Artificial Intelligence. But prior to recognition, it is necessary to segment the text into sentences, words and characters. Segmentation of words into individual letters has been one of the major problems in handwriting recognition. Despite several successful works all over the world, development of such tools for specific languages is still an ongoing process, especially in the Indian context. This work explores the application of ANNs as an aid to segmentation of handwritten characters in Assamese, an important language in the North Eastern part of India. The work explores the performance difference obtained in applying an ANN-based dynamic segmentation algorithm compared to projection-based static segmentation. The algorithm involves first training an ANN with individual handwritten characters recorded from different individuals. Handwritten sentences are separated out from the text using a static segmentation method. From the segmented line, individual characters are separated out by first over-segmenting the entire line. Each of the segments thus obtained is next fed to the trained ANN. At the point at which the ANN recognizes a segment, or a combination of several segments, as similar to a handwritten character, a segmentation boundary for the character is assumed to exist and segmentation is performed. The segmented character is then compared to the best available match and the segmentation boundary confirmed.
2009
Computation and Language
Co-word Analysis using the Chinese Character Set
Until recently, Chinese texts could not be studied using co-word analysis because the words are not separated by spaces in Chinese (and Japanese). A word can be composed of one or more characters. The online availability of programs that separate Chinese texts makes it possible to analyze them using semantic maps. Chinese characters contain not only information, but also meaning. This may enhance the readability of semantic maps. In this study, we analyze 58 words which occur ten or more times in the 1652 journal titles of the China Scientific and Technical Papers and Citations Database. The word occurrence matrix is visualized and factor-analyzed.
2008
Computation and Language
A Discourse-based Approach in Text-based Machine Translation
This paper presents a theoretical research based approach to ellipsis resolution in machine translation. The formula of discourse is applied in order to resolve ellipses. The validity of the discourse formula is analyzed by applying it to the real world text, i.e., newspaper fragments. The source text is converted into mono-sentential discourses where complex discourses require further dissection either directly into primitive discourses or first into compound discourses and later into primitive ones. The procedure of dissection needs further improvement, i.e., discovering as many primitive discourse forms as possible. An attempt has been made to investigate new primitive discourses or patterns from the given text.
2010
Computation and Language
Resolution of Unidentified Words in Machine Translation
This paper presents a mechanism for resolving unidentified lexical units in Text-based Machine Translation (TBMT). In a Machine Translation (MT) system it is unlikely that the lexicon will be complete, and hence there is an intense need for a new mechanism to handle the problem of unidentified words. These unknown words could be abbreviations, names, acronyms and newly introduced terms. We have proposed an algorithm for the resolution of unidentified words. This algorithm takes the discourse unit (primitive discourse) as the unit of analysis and provides real-time updates to the lexicon. We have manually applied the algorithm to newspaper fragments. Along with anaphora and cataphora resolution, many unknown words, especially names and abbreviations, were added to the lexicon.
2010
Computation and Language
Standards for Language Resources
The goal of this paper is two-fold: to present an abstract data model for linguistic annotations and its implementation using XML, RDF and related standards; and to outline the work of a newly formed committee of the International Standards Organization (ISO), ISO/TC 37/SC 4 Language Resource Management, which will use this work as its starting point.
2001
Computation and Language
Active Learning for Mention Detection: A Comparison of Sentence Selection Strategies
We propose and compare various sentence selection strategies for active learning for the task of detecting mentions of entities. The best strategy employs the sum of confidences of two statistical classifiers trained on different views of the data. Our experimental results show that, compared to the random selection strategy, this strategy reduces the amount of required labeled training data by over 50% while achieving the same performance. The effect is even more significant when only named mentions are considered: the system achieves the same performance by using only 42% of the training data required by the random selection strategy.
2009
Computation and Language
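A minimal sketch of the best-performing selection strategy described in the entry above: rank unlabeled sentences by the sum of confidences from two classifiers trained on different views, and send the least confident ones to annotation. The classifier objects, their confidence method, and the batch size are assumed placeholders, not the paper's actual components.

# Hedged sketch of active-learning sentence selection by summed classifier confidence.

def select_for_annotation(sentences, clf_view1, clf_view2, batch_size=100):
    """Return the batch of sentences the two classifiers are jointly least sure about."""
    def combined_confidence(sentence):
        # Each classifier is assumed to expose a confidence(sentence) score in [0, 1].
        return clf_view1.confidence(sentence) + clf_view2.confidence(sentence)

    # Lowest combined confidence first: these are the most informative to label next.
    return sorted(sentences, key=combined_confidence)[:batch_size]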
A New Look at the Classical Entropy of Written English
A simple method for finding the entropy and redundancy of a reasonably long sample of English text by direct computer processing and from first principles according to Shannon theory is presented. As an example, results on the entropy of the English language have been obtained based on a total of 20.3 million characters of written English, considering symbols from one to five hundred characters in length. Besides a more realistic value of the entropy of English, a new perspective on some classic entropy-related concepts is presented. This method can also be extended to other Latin languages. Some implications for practical applications such as plagiarism-detection software, and the minimum number of words that should be used in social Internet network messaging, are discussed.
2009
Computation and Language
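The entry above estimates entropy directly from symbol-block frequencies in the Shannon sense. As a hedged sketch of that kind of first-principles computation (the toy text, function name, and block lengths are illustrative assumptions; a real run would use the full 20.3-million-character corpus):

# Hedged sketch: per-character block entropy H_n / n estimated from n-gram frequencies.

import math
from collections import Counter

def block_entropy_per_char(text, n):
    """Shannon block entropy H_n of n-character blocks, divided by n (bits per character)."""
    blocks = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    h_n = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h_n / n

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog " * 200
    for n in (1, 2, 3):
        print(n, round(block_entropy_per_char(sample, n), 3))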
Automated languages phylogeny from Levenshtein distance
Languages evolve over time in a process in which reproduction, mutation and extinction are all possible, similar to what happens to living organisms. Using this similarity it is possible, in principle, to build family trees which show the degree of relatedness between languages. The method used by modern glottochronology, developed by Swadesh in the 1950s, measures distances from the percentage of words with a common historical origin. The weak point of this method is that subjective judgment plays a relevant role. Recently we proposed an automated method that avoids the subjectivity, whose results can be replicated by studies that use the same database and that doesn't require a specific linguistic knowledge. Moreover, the method allows a quick comparison of a large number of languages. We applied our method to the Indo-European and Austronesian families, considering in both cases, fifty different languages. The resulting trees are similar to those of previous studies, but with some important differences in the position of few languages and subgroups. We believe that these differences carry new information on the structure of the tree and on the phylogenetic relationships within families.
2012
Computation and Language
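The automated method in the entry above replaces cognate judgments with an edit-distance measure between word lists. A hedged sketch of that measure follows: normalized Levenshtein distance per meaning, averaged over a Swadesh-style list. The two-item word lists are illustrative only; the real method averages over full meaning lists for each language pair.

# Hedged sketch: average normalized Levenshtein distance between two languages' word lists.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def language_distance(word_list_a, word_list_b):
    """Average Levenshtein distance normalized by the longer word, over aligned meanings."""
    pairs = zip(word_list_a, word_list_b)
    return sum(levenshtein(a, b) / max(len(a), len(b)) for a, b in pairs) / len(word_list_a)

print(language_distance(["madre", "notte"], ["mother", "night"]))  # Italian vs English, toy list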
Automated words stability and languages phylogeny
The idea of measuring distance between languages seems to have its roots in the work of the French explorer Dumont D'Urville (D'Urville 1832). He collected comparative word lists of various languages during his voyages aboard the Astrolabe from 1826 to 1829 and, in his work about the geographical division of the Pacific, he proposed a method to measure the degree of relation among languages. The method used by modern glottochronology, developed by Morris Swadesh in the 1950s (Swadesh 1952), measures distances from the percentage of shared cognates, which are words with a common historical origin. Recently, we proposed a new automated method which uses normalized Levenshtein distance among words with the same meaning and averages on the words contained in a list. Another classical problem in glottochronology is the study of the stability of words corresponding to different meanings. Words, in fact, evolve because of lexical changes, borrowings and replacement at a rate which is not the same for all of them. The speed of lexical evolution is different for different meanings and it is probably related to the frequency of use of the associated words (Pagel et al. 2007). This problem is tackled here by an automated methodology only based on normalized Levenshtein distance.
2009
Computation and Language
Measuring the Meaning of Words in Contexts: An automated analysis of controversies about Monarch butterflies, Frankenfoods, and stem cells
Co-words have been considered as carriers of meaning across different domains in studies of science, technology, and society. Words and co-words, however, obtain meaning in sentences, and sentences obtain meaning in their contexts of use. At the science/society interface, words can be expected to have different meanings: the codes of communication that provide meaning to words differ on the varying sides of the interface. Furthermore, meanings and interfaces may change over time. Given this structuring of meaning across interfaces and over time, we distinguish between metaphors and diaphors as reflexive mechanisms that facilitate the translation between contexts. Our empirical focus is on three recent scientific controversies: Monarch butterflies, Frankenfoods, and stem-cell therapies. This study explores new avenues that relate the study of co-word analysis in context with the sociological quest for the analysis and processing of meaning.
2006
Computation and Language
Standardization of the formal representation of lexical information for NLP
A survey of dictionary models and formats is presented, together with an overview of corresponding recent standardisation activities.
2010
Computation and Language
Acquisition d'informations lexicales à partir de corpus (Cédric Messiant et Thierry Poibeau)
This paper is about automatic acquisition of lexical information from corpora, especially subcategorization acquisition.
2009
Computation and Language
Hierarchies in Dictionary Definition Space
A dictionary defines words in terms of other words. Definitions can tell you the meanings of words you don't know, but only if you know the meanings of the defining words. How many words do you need to know (and which ones) in order to be able to learn all the rest from definitions? We reduced dictionaries to their "grounding kernels" (GKs), about 10% of the dictionary, from which all the other words could be defined. The GK words turned out to have psycholinguistic correlates: they were learned at an earlier age and were more concrete than the rest of the dictionary. But one can compress still more: the GK turns out to have internal structure, with a strongly connected "kernel core" (KC) and a surrounding layer, from which a hierarchy of definitional distances can be derived, all the way out to the periphery of the full dictionary. These definitional distances, too, are correlated with psycholinguistic variables (age of acquisition, concreteness, imageability, oral and written frequency) and hence perhaps with the "mental lexicon" in each of our heads.
2009
Computation and Language
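The dictionary-graph analysis in the entry above lends itself to a small sketch: build a directed graph from headwords to the words used in their definitions, approximate the strongly connected "kernel core" by the largest strongly connected component, and measure each word's definitional distance from it. This is a hedged approximation under assumptions of ours (the toy five-word dictionary, the use of networkx, and SCC as a stand-in for the paper's kernel-core construction), not the authors' exact procedure.

# Hedged sketch: kernel core and definitional distances on a toy definition graph.

import networkx as nx

toy_dictionary = {
    "good": ["not", "bad"],
    "bad": ["not", "good"],
    "not": ["bad"],
    "excellent": ["very", "good"],
    "very": ["good"],
}

# Edge u -> v means "u is defined using v".
G = nx.DiGraph()
for head, definition in toy_dictionary.items():
    for w in definition:
        G.add_edge(head, w)

kernel_core = max(nx.strongly_connected_components(G), key=len)

# Definitional distance: shortest path from a word down into the core,
# following "is defined using" edges.
dist = {}
for word in G.nodes:
    lengths = nx.shortest_path_length(G, source=word)
    dist[word] = min(lengths.get(k, float("inf")) for k in kernel_core)

print(sorted(kernel_core), dist)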
Lexical evolution rates by automated stability measure
Phylogenetic trees can be reconstructed from the matrix which contains the distances between all pairs of languages in a family. Recently, we proposed a new method which uses normalized Levenshtein distances among words with the same meaning and averages on all the items of a given list. Decisions about the number of items in the input lists for language comparison have been debated since the beginning of glottochronology. The point is that words associated with some of the meanings have a rapid lexical evolution. Therefore, a large vocabulary comparison is only apparently more accurate than a smaller one, since many of the words do not carry any useful information. In principle, one should find the optimal length of the input lists by studying the stability of the different items. In this paper we tackle the problem with an automated methodology only based on our normalized Levenshtein distance. With this approach, the program of an automated reconstruction of language relationships is completed.
2015
Computation and Language
Measures of lexical distance between languages
The idea of measuring distance between languages seems to have its roots in the work of the French explorer Dumont D'Urville \cite{Urv}. He collected comparative word lists of various languages during his voyages aboard the Astrolabe from 1826 to 1829 and, in his work about the geographical division of the Pacific, he proposed a method to measure the degree of relation among languages. The method used by modern glottochronology, developed by Morris Swadesh in the 1950s, measures distances from the percentage of shared cognates, which are words with a common historical origin. Recently, we proposed a new automated method which uses normalized Levenshtein distance among words with the same meaning and averages on the words contained in a list. Recently another group of scholars \cite{Bak, Hol} proposed a refinement of our definition, including a second normalization. In this paper we compare the information content of our definition with the refined version in order to decide which of the two can be applied with greater success to resolve relationships among languages.
2015
Computation and Language
Parsing of part-of-speech tagged Assamese Texts
A natural language (or ordinary language) is a language that is spoken, written, or signed by humans for general-purpose communication, as distinguished from formal languages (such as computer-programming languages or the "languages" used in the study of formal logic). The computational activities required for enabling a computer to carry out information processing using natural language are called natural language processing. We have taken the Assamese language to check the grammar of input sentences. Our aim is to produce a technique to check the grammatical structures of sentences in Assamese text. We have made grammar rules by analyzing the structures of Assamese sentences. Our parsing program finds the grammatical errors, if any, in an Assamese sentence. If there is no error, the program will generate the parse tree for the Assamese sentence.
2009
Computation and Language
Representing human and machine dictionaries in Markup languages
In this chapter we present the main issues in representing machine readable dictionaries in XML, and in particular according to the Text Encoding Initiative (TEI) guidelines.
2010
Computation and Language
A Survey of Paraphrasing and Textual Entailment Methods
Paraphrasing methods recognize, generate, or extract phrases, sentences, or longer natural language expressions that convey almost the same information. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions, such that a human who reads (and trusts) the first element of a pair would most likely infer that the other element is also true. Paraphrasing can be seen as bidirectional textual entailment and methods from the two areas are often similar. Both kinds of methods are useful, at least in principle, in a wide range of natural language processing applications, including question answering, summarization, text generation, and machine translation. We summarize key ideas from the two areas by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources.
2010
Computation and Language
Speech Recognition Oriented Vowel Classification Using Temporal Radial Basis Functions
The recent resurgence of interest in spatio-temporal neural networks as a speech recognition tool motivates the present investigation. In this paper an approach was developed based on temporal radial basis functions (TRBF), which offer several advantages: few parameters, fast convergence and time invariance. This application aims to identify vowels taken from natural speech samples from the TIMIT corpus of American speech. We report a recognition accuracy of 98.06 percent in training and 90.13 percent in test on a subset of 6 vowel phonemes, with the possibility of expanding the vowel set in future.
2009
Computation and Language
Syllable Analysis to Build a Dictation System in Telugu language
In recent decades, speech-interactive systems have gained increasing importance. To develop a dictation system like Dragon for Indian languages, it is most important to adapt the system to a speaker with minimum training. In this paper we focus on the importance of creating a speech database at the level of syllable units and identifying the minimum text to be considered while training any speech recognition system. There are systems developed for continuous speech recognition in English and in a few Indian languages like Hindi and Tamil. This paper gives the statistical details of syllables in Telugu and their use in minimizing the search space during recognition of speech. The minimum words that cover the maximum number of syllables are identified. This word list can be used to prepare a small text for collecting speech samples while training the dictation system. The results are plotted for the frequency of syllables and the number of syllables in each word. This approach is applied to the CIIL Mysore text corpus of 3 million words.
2009
Computation and Language
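Finding "minimum words that cover the maximum number of syllables", as in the entry above, can be treated as a greedy set-cover problem. A hedged sketch follows; the naive two-character syllabification stand-in and the toy word list are assumptions for illustration, since a real system would use Telugu orthographic syllable rules over the full corpus.

# Hedged sketch: greedy selection of words so each new word covers the most
# not-yet-covered syllables.

def greedy_syllable_cover(words, syllabify):
    """Pick words greedily until every syllable occurring in `words` is covered."""
    remaining = set().union(*(set(syllabify(w)) for w in words))
    chosen = []
    while remaining:
        best = max(words, key=lambda w: len(set(syllabify(w)) & remaining))
        gained = set(syllabify(best)) & remaining
        if not gained:
            break
        chosen.append(best)
        remaining -= gained
    return chosen

# Toy usage with a naive two-character "syllabification" stand-in.
toy_syllabify = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]
print(greedy_syllable_cover(["rama", "rani", "mani", "nira"], toy_syllabify))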
Speech Recognition by Machine, A Review
This paper presents a brief survey on Automatic Speech Recognition and discusses the major themes and advances made in the past 60 years of research, so as to provide a technological perspective and an appreciation of the fundamental progress that has been accomplished in this important area of speech communication. After years of research and development, the accuracy of automatic speech recognition remains one of the important research challenges (e.g., variations of the context, speakers, and environment). The design of a speech recognition system requires careful attention to the following issues: definition of various types of speech classes, speech representation, feature extraction techniques, speech classifiers, databases and performance evaluation. The problems existing in ASR and the various techniques to solve these problems developed by various research workers are presented in chronological order. Hence the authors hope that this work will be a contribution to the area of speech recognition. The objective of this review paper is to summarize and compare some of the well known methods used in various stages of speech recognition systems and to identify research topics and applications which are at the forefront of this exciting and challenging field.
2009
Computation and Language
Sentence Simplification Aids Protein-Protein Interaction Extraction
Accurate systems for extracting Protein-Protein Interactions (PPIs) automatically from biomedical articles can help accelerate biomedical research. Biomedical Informatics researchers are collaborating to provide metaservices and advance the state of the art in PPI extraction. One problem often neglected by current Natural Language Processing systems is the characteristic complexity of the sentences in biomedical literature. In this paper, we report on the impact that automatic simplification of sentences has on the performance of a state-of-the-art PPI extraction system, showing a substantial improvement in recall (8%) when the sentence simplification method is applied, without significant impact to precision.
2009
Computation and Language
Towards Effective Sentence Simplification for Automatic Processing of Biomedical Text
The complexity of sentences characteristic to biomedical articles poses a challenge to natural language parsers, which are typically trained on large-scale corpora of non-technical text. We propose a text simplification process, bioSimplify, that seeks to reduce the complexity of sentences in biomedical abstracts in order to improve the performance of syntactic parsers on the processed sentences. Syntactic parsing is typically one of the first steps in a text mining pipeline. Thus, any improvement in performance would have a ripple effect over all processing steps. We evaluated our method using a corpus of biomedical sentences annotated with syntactic links. Our empirical results show an improvement of 2.90% for the Charniak-McClosky parser and of 4.23% for the Link Grammar parser when processing simplified sentences rather than the original sentences in the corpus.
2009
Computation and Language
Étude et traitement automatique de l'anglais du XVIIe siècle : outils morphosyntaxiques et dictionnaires
In this article, after describing the constitution of the corpus, we record the main linguistic differences or singularities of 17th century English, analyse them morphologically and syntactically, and propose equivalent forms in contemporary English. We show how 17th century texts may be transcribed into modern English, combining the use of electronic dictionaries with rules of transcription implemented as transducers.
2,010
Computation and Language
"Mind your p's and q's": or the peregrinations of an apostrophe in 17th Century English
While the use of the apostrophe in contemporary English often marks the Saxon genitive, it may also indicate the omission of one or more letters. Some writers (wrongly?) use it to mark the plural in symbols or abbreviations, visualised thanks to the isolation of the morpheme "s". This punctuation mark was imported from the Continent in the 16th century. During the 19th century its use was standardised. However, the rules of its usage still seem problematic to many, including literate speakers of English. "All too often, the apostrophe is misplaced", or "errant apostrophes are springing up everywhere" is a complaint that Internet users frequently come across when visiting grammar websites. Many of them detail its various uses and misuses, and attempt to correct the most common mistakes about it, especially its misuse in the plural, called greengrocers' apostrophes and humorously misspelled "greengrocers apostrophe's". While studying English travel accounts published in the seventeenth century, we noticed that the different uses of this symbol may accompany various models of metaplasms. We were able to highlight the linguistic variations of some lexemes, and trace the origin of modern grammar rules governing its usage.
2,010
Computation and Language
Recognition and translation Arabic-French of Named Entities: case of the Sport places
The recognition of Arabic Named Entities (NEs) is a problem in different domains of Natural Language Processing (NLP), such as machine translation. Indeed, NE translation allows access to multilingual information. This translation does not always lead to the expected result, especially when the NE contains a person name. For this reason, and in order to improve translation, we can transliterate some parts of the NE. In this context, we propose a method that integrates translation and transliteration together. We use the linguistic NooJ platform, which is based on local grammars and transducers. In this paper, we focus on the sport domain. We first suggest a refinement of the typological model presented at the MUC Conferences, then we describe the integration of an Arabic transliteration module into the translation system. Finally, we detail our method and give the results of the evaluation.
2,010
Computation and Language
Morphological study of Albanian words, and processing with NooJ
We are developing electronic dictionaries and transducers for the automatic processing of the Albanian language. We analyze words inside a linear segment of text, and we also study the relationship between units of sense and units of form. The composition of words takes different forms in Albanian: we have found that morphemes are frequently concatenated, simply juxtaposed, or contracted. The inflectional grammars of NooJ allow us to construct dictionaries of inflected forms (declensions or conjugations). The diversity of word structures requires tools to identify words created by simple concatenation, or to treat contractions. The morphological tools of NooJ allow us to create grammatical tools to represent and treat these phenomena, but certain problems exceed morphological analysis and must be represented by syntactic grammars.
2,010
Computation and Language
Approximations to the MMI criterion and their effect on lattice-based MMI
Maximum mutual information (MMI) is a model selection criterion used for hidden Markov model (HMM) parameter estimation that was developed more than twenty years ago as a discriminative alternative to the maximum likelihood criterion for HMM-based speech recognition. It has been shown in the speech recognition literature that parameter estimation using the current MMI paradigm, lattice-based MMI, consistently outperforms maximum likelihood estimation, but this is at the expense of undesirable convergence properties. In particular, recognition performance is sensitive to the number of times that the iterative MMI estimation algorithm, extended Baum-Welch, is performed. In fact, too many iterations of extended Baum-Welch will lead to degraded performance, despite the fact that the MMI criterion improves at each iteration. This phenomenon is at variance with the analogous behavior of maximum likelihood estimation -- at least for the HMMs used in speech recognition -- and it has previously been attributed to `over fitting'. In this paper, we present an analysis of lattice-based MMI that demonstrates, first of all, that the asymptotic behavior of lattice-based MMI is much worse than was previously understood, i.e. it does not appear to converge at all, and, second of all, that this is not due to `over fitting'. Instead, we demonstrate that the `over fitting' phenomenon is the result of standard methodology that exacerbates the poor behavior of two key approximations in the lattice-based MMI machinery. We also demonstrate that if we modify the standard methodology to improve the validity of these approximations, then the convergence properties of lattice-based MMI become benign without sacrificing improvements to recognition accuracy.
2,010
Computation and Language
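For orientation, the MMI criterion discussed in this abstract is usually stated, in its generic textbook form (not quoted from the paper itself), as an objective over the R training utterances:

F_{\mathrm{MMI}}(\theta) \;=\; \sum_{r=1}^{R} \log \frac{p_\theta(O_r \mid M_{w_r})\, P(w_r)}{\sum_{w} p_\theta(O_r \mid M_w)\, P(w)}

where O_r is the acoustic observation sequence of utterance r, w_r its reference transcription, M_w the HMM composed for word sequence w, and P(w) the language-model probability; lattice-based MMI approximates the denominator sum with the word sequences present in a recognition lattice.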
On Event Structure in the Torn Dress
Using Pustejovsky's "The Syntax of Event Structure" and Fong's "On Mending a Torn Dress" we give a glimpse of a Pustejovsky-like analysis to some example sentences in Fong. We attempt to give a framework for semantics to the noun phrases and adverbs as appropriate as well as the lexical entries for all words in the examples and critique both papers in light of our findings and difficulties.
2,010
Computation and Language
Towards a Heuristic Categorization of Prepositional Phrases in English with WordNet
This document discusses an approach, and its rudimentary realization, towards the automatic classification of PPs, a topic that has not received as much attention in NLP as NPs and VPs. The approach is a rule-based heuristic outlined at several levels of our research. Seven semantic categories of PPs are considered in this document, which we are able to classify from an annotated corpus.
2,010
Computation and Language
Thai Rhetorical Structure Analysis
Rhetorical structure analysis (RSA) explores discourse relations among elementary discourse units (EDUs) in a text. It is very useful in many text processing tasks employing relationships among EDUs, such as text understanding, summarization, and question answering. The Thai language, with its distinctive linguistic characteristics, requires a unique technique. This article proposes an approach for Thai rhetorical structure analysis. First, EDUs are segmented by two hidden Markov models derived from syntactic rules. A rhetorical structure tree is constructed using a clustering technique whose similarity measure is derived from Thai semantic rules. Then, a decision tree whose features are derived from the semantic rules is used to determine discourse relations.
2,010
Computation and Language
Co-channel Interference Cancellation for Space-Time Coded OFDM Systems Using Adaptive Beamforming and Null Deepening
Combined with space-time coding, the orthogonal frequency division multiplexing (OFDM) system exploits space diversity. It is a potential scheme to offer spectral efficiency and robust high data rate transmission over frequency-selective fading channels. However, space-time coding impairs the system's ability to suppress interference, as the signals transmitted from the two transmit antennas are superposed and interfere at the receiver antennas. In this paper, we develop an adaptive beamformer based on the least mean squared error algorithm and null deepening to combat co-channel interference (CCI) in the space-time coded OFDM (STC-OFDM) system. To illustrate the performance of the presented approach, it is compared to the null steering beamformer, which requires prior knowledge of the directions of arrival (DOAs). The structure of the space-time decoders is preserved although beamformers are used before decoding. By incorporating the proposed beamformer as a CCI canceller in STC-OFDM systems, a performance improvement is achieved, as shown in the simulation results.
2,010
Computation and Language
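As a point of reference for the adaptive beamformer described above, the generic least-mean-square update of the antenna weight vector (a standard textbook form; the exact variant and the null-deepening step used in the paper may differ) is

y_n = \mathbf{w}_n^{H}\mathbf{x}_n, \qquad e_n = d_n - y_n, \qquad \mathbf{w}_{n+1} = \mathbf{w}_n + \mu\, e_n^{*}\, \mathbf{x}_n,

where \mathbf{x}_n is the received snapshot across antennas, d_n a reference (e.g. pilot) signal, and \mu the step size.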
Syntactic Topic Models
The syntactic topic model (STM) is a Bayesian nonparametric model of language that discovers latent distributions of words (topics) that are both semantically and syntactically coherent. The STM models dependency parsed corpora where sentences are grouped into documents. It assumes that each word is drawn from a latent topic chosen by combining document-level features and the local syntactic context. Each document has a distribution over latent topics, as in topic models, which provides the semantic consistency. Each element in the dependency parse tree also has a distribution over the topics of its children, as in latent-state syntax models, which provides the syntactic consistency. These distributions are convolved so that the topic of each word is likely under both its document and syntactic context. We derive a fast posterior inference algorithm based on variational methods. We report qualitative and quantitative studies on both synthetic data and hand-parsed documents. We show that the STM is a more predictive model of language than current models based only on syntax or only on topics.
2,010
Computation and Language
SLAM : Solutions lexicales automatique pour m\'etaphores
This article presents SLAM, an automatic solver for lexical metaphors like "d\'eshabiller* une pomme" (to undress* an apple). SLAM calculates a conventional solution for these productions. To do so, SLAM has to intersect the paradigmatic axis of the metaphorical verb "d\'eshabiller*", where "peler" ("to peel") comes closest, with a syntagmatic axis that comes from a corpus where "peler une pomme" (to peel an apple) is semantically and syntactically regular. We test this model on DicoSyn, a "small world" network of synonyms, to compute the paradigmatic axis, and on Frantext.20, a French corpus, to compute the syntagmatic axis. Further, we evaluate the model on a sample of an experimental corpus from the Flexsem database.
2,009
Computation and Language
Why has (reasonably accurate) Automatic Speech Recognition been so hard to achieve?
Hidden Markov models (HMMs) have been successfully applied to automatic speech recognition for more than 35 years in spite of the fact that a key HMM assumption -- the statistical independence of frames -- is obviously violated by speech data. In fact, this data/model mismatch has inspired many attempts to modify or replace HMMs with alternative models that are better able to take into account the statistical dependence of frames. However it is fair to say that in 2010 the HMM is the consensus model of choice for speech recognition and that HMMs are at the heart of both commercially available products and contemporary research systems. In this paper we present a preliminary exploration aimed at understanding how speech data depart from HMMs and what effect this departure has on the accuracy of HMM-based speech recognition. Our analysis uses standard diagnostic tools from the field of statistics -- hypothesis testing, simulation and resampling -- which are rarely used in the field of speech recognition. Our main result, obtained by novel manipulations of real and resampled data, demonstrates that real data have statistical dependency and that this dependency is responsible for significant numbers of recognition errors. We also demonstrate, using simulation and resampling, that if we `remove' the statistical dependency from data, then the resulting recognition error rates become negligible. Taken together, these results suggest that a better understanding of the structure of the statistical dependency in speech data is a crucial first step towards improving HMM-based speech recognition.
2,010
Computation and Language
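A toy illustration of the frame dependence at issue (a minimal sketch, not the authors' diagnostic protocol; the synthetic "frames" and the lag-1 statistic are illustrative assumptions): consecutive feature frames can be compared against frames whose ordering has been destroyed by resampling.

import numpy as np

def lag1_correlation(frames):
    # frames: (T, D) array; mean Pearson correlation between frame t and frame t+1, per dimension
    x, y = frames[:-1], frames[1:]
    xc, yc = x - x.mean(axis=0), y - y.mean(axis=0)
    num = (xc * yc).sum(axis=0)
    den = np.sqrt((xc ** 2).sum(axis=0) * (yc ** 2).sum(axis=0))
    return float(np.mean(num / den))

rng = np.random.default_rng(0)
frames = np.cumsum(rng.normal(size=(500, 13)), axis=0)  # toy "feature frames" with strong dependence
shuffled = rng.permutation(frames)                       # resampling the frame order destroys it

print(lag1_correlation(frames))    # close to 1: successive frames are far from independent
print(lag1_correlation(shuffled))  # near 0: the independence assumed by the HMM frame model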
Change of word types to word tokens ratio in the course of translation (based on Russian translations of K. Vonnegut novels)
The article provides a lexical statistical analysis of two of K. Vonnegut's novels and their Russian translations. It is found that the rate at which the word types to word tokens ratio changes differs between the source and target texts. The author hypothesizes that these changes are typical of English-Russian translations and, moreover, that they represent an example of Baker's translation feature of levelling out.
2,010
Computation and Language
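A minimal sketch of the statistic tracked in this study (illustrative only; the tokenizer and the sampling step are assumptions, not the author's procedure):

import re

def ttr(tokens):
    # type-token ratio: number of distinct word types over number of word tokens
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def ttr_curve(text, step=1000):
    # TTR over growing prefixes of the text, to observe how quickly new types appear
    tokens = re.findall(r"\w+", text.lower())   # crude tokenizer (assumption)
    seen, curve = set(), []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        if i % step == 0:
            curve.append((i, len(seen) / i))
    return curve

# e.g. compare ttr_curve(source_text) with ttr_curve(translated_text)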
Linguistic Geometries for Unsupervised Dimensionality Reduction
Text documents are complex high dimensional objects. To effectively visualize such data it is important to reduce its dimensionality and visualize the low dimensional embedding as a 2-D or 3-D scatter plot. In this paper we explore dimensionality reduction methods that draw upon domain knowledge in order to achieve a better low dimensional embedding and visualization of documents. We consider the use of geometries specified manually by an expert, geometries derived automatically from corpus statistics, and geometries computed from linguistic resources.
2,010
Computation and Language
From Frequency to Meaning: Vector Space Models of Semantics
Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
2,010
Computation and Language
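A minimal sketch of the term-document flavour of VSM surveyed above (illustrative only; the toy documents and the use of scikit-learn are assumptions, not part of the survey):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "stock markets fell sharply today",
]

vectorizer = TfidfVectorizer()          # builds a term-document matrix with tf-idf weights
X = vectorizer.fit_transform(docs)      # shape: (n_documents, n_terms), sparse

print(cosine_similarity(X))             # pairwise document similarities in the vector space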
Automatic derivation of domain terms and concept location based on the analysis of the identifiers
Developers express the meaning of domain ideas in specifically selected identifiers and comments that form the implemented code. Software maintenance requires knowledge and understanding of the encoded ideas. This paper presents a way to create a domain vocabulary automatically. Knowledge of the domain vocabulary supports the comprehension of a specific domain for later code maintenance or evolution. We present experiments conducted in two selected domains: application servers and web frameworks. Knowledge of domain terms enables easy localization of the chunks of code that belong to a certain term. We consider these chunks of code as "concepts" and their placement in the code as "concept location". Application developers may also benefit from the obtained domain terms; these terms are parts of speech that characterize a certain concept. Concepts are encoded in "classes" (in the OO paradigm), and the obtained vocabulary of terms supports the selection and comprehension of a class's appropriate identifiers. We measured the following software products with our tool: JBoss, JOnAS, GlassFish, Tapestry, Google Web Toolkit and Echo2.
2,010
Computation and Language
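A rough sketch of the kind of identifier analysis described (illustrative only; the splitting rules and the frequency threshold are assumptions, not the tool's actual implementation):

import re
from collections import Counter

def split_identifier(name):
    # split snake_case, then camelCase / PascalCase, into lower-case terms
    terms = []
    for part in name.split("_"):
        terms += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", part)
    return [t.lower() for t in terms if t]

def domain_vocabulary(identifiers, min_count=3):
    # count terms across all identifiers; frequent terms approximate domain terms
    counts = Counter(t for ident in identifiers for t in split_identifier(ident))
    return {term: n for term, n in counts.items() if n >= min_count}

print(split_identifier("HttpServletRequestWrapper"))
# ['http', 'servlet', 'request', 'wrapper']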
A Computational Algorithm based on Empirical Analysis, that Composes Sanskrit Poetry
Poetry-writing in Sanskrit is riddled with problems for even those who know the language well. This is so because the rules that govern Sanskrit prosody are numerous and stringent. We propose a computational algorithm that converts prose given as E-text into poetry in accordance with the metrical rules of Sanskrit prosody, simultaneously taking care to ensure that sandhi, or euphonic conjunction, which is compulsory in verse, is handled. The algorithm is considerably sped up by a novel method of reducing the target search database. The algorithm further gives suggestions to the poet in case the input prose he/she has given is impossible to fit into any allowed metrical format. There is also an interactive component of the algorithm by which the algorithm interacts with the poet to resolve ambiguities. In addition, this unique work, which provides a solution to a problem that has never been addressed before, provides a simple yet effective speech recognition interface that would help the visually impaired dictate words in E-text, which is in turn versified by our Poetry Composer Engine.
2,010
Computation and Language
Les Entit\'es Nomm\'ees : usage et degr\'es de pr\'ecision et de d\'esambigu\"isation
The recognition and classification of Named Entities (NER) are regarded as an important component for many Natural Language Processing (NLP) applications. The classification is usually made by taking into account the immediate context in which the NE appears. In some cases, this immediate context does not allow getting the right classification. We show in this paper that the use of an extended syntactic context and large-scale resources could be very useful in the NER task.
2,007
Computation and Language
Mathematical Foundations for a Compositional Distributional Model of Meaning
We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, for which we rely on the algebra of Pregroups, introduced by Lambek. This mathematical framework enables us to compute the meaning of a well-typed sentence from the meanings of its constituents. Concretely, the type reductions of Pregroups are `lifted' to morphisms in a category, a procedure that transforms meanings of constituents into a meaning of the (well-typed) whole. Importantly, meanings of whole sentences live in a single space, independent of the grammatical structure of the sentence. Hence the inner-product can be used to compare meanings of arbitrary sentences, as it is for comparing the meanings of words in the distributional model. The mathematical structure we employ admits a purely diagrammatic calculus which exposes how the information flows between the words in a sentence in order to make up the meaning of the whole sentence. A variation of our `categorical model' which involves constraining the scalars of the vector spaces to the semiring of Booleans results in a Montague-style Boolean-valued semantics.
2,010
Computation and Language
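A standard pregroup reduction of the kind the paper lifts to vector spaces (the example is generic, not taken from the paper): with noun type n, sentence type s, and a transitive verb typed n^{r} s n^{l}, a sentence such as "John likes Mary" reduces as

n \cdot (n^{r}\, s\, n^{l}) \cdot n \;\le\; (n\, n^{r})\, s\, (n^{l}\, n) \;\le\; s,

using the contractions n\, n^{r} \le 1 and n^{l}\, n \le 1.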
La repr\'esentation formelle des concepts spatiaux dans la langue
In this chapter, we assume that systematically studying the semantics of spatial markers in language provides a means to reveal fundamental properties and concepts characterizing conceptual representations of space. We propose a formal system accounting for the properties highlighted by the linguistic analysis, and we use these tools for representing the semantic content of several spatial relations of French. The first part presents a semantic analysis of the expression of space in French aiming at describing the constraints that formal representations have to take into account. In the second part, after presenting the structure of our formal system, we set out its components. A commonsense geometry is sketched out and several functional and pragmatic spatial concepts are formalized. We take special care to show that these concepts are well suited to representing the semantic content of several French prepositions ('sur' (on), 'dans' (in), 'devant' (in front of), 'au-dessus' (above)), and to illustrate the inferential adequacy of these representations.
1,997
Computation and Language
Les entit\'es spatiales dans la langue : \'etude descriptive, formelle et exp\'erimentale de la cat\'egorisation
While previous linguistic and psycholinguistic research on space has mainly analyzed spatial relations, the studies reported in this paper focus on how language distinguishes among spatial entities. Descriptive and experimental studies first propose a classification of entities, which accounts for both static and dynamic space, has some cross-linguistic validity, and underlies adults' cognitive processing. Formal and computational analyses then introduce theoretical elements aiming at modelling these categories, while fulfilling various properties of formal ontologies (generality, parsimony, coherence...). This formal framework accounts, in particular, for functional dependences among entities underlying some part-whole descriptions. Finally, developmental research shows that language-specific properties have a clear impact on how children talk about space. The results suggest some cross-linguistic variability in children's spatial representations from an early age onwards, bringing into question models in which general cognitive capacities are the only determinants of spatial cognition during the course of development.
2,005
Computation and Language
Learning Recursive Segments for Discourse Parsing
Automatically detecting discourse segments is an important preliminary step towards full discourse parsing. Previous research on discourse segmentation has relied on the assumption that elementary discourse units (EDUs) in a document always form a linear sequence (i.e., they can never be nested). Unfortunately, this assumption turns out to be too strong, for some theories of discourse, like SDRT, allow for nested discourse units. In this paper, we present a simple approach to discourse segmentation that is able to produce nested EDUs. Our approach builds on standard multi-class classification techniques combined with a simple repairing heuristic that enforces global coherence. Our system was developed and evaluated on the first round of annotations provided by the French Annodis project (an ongoing effort to create a discourse bank for French). Cross-validated on only 47 documents (1,445 EDUs), our system achieves encouraging performance results with an F-score of 73% for finding EDUs.
2,010
Computation and Language
Statistical Physics for Natural Language Processing
This paper has been withdrawn by the author.
2,015
Computation and Language
Displacement Calculus
The Lambek calculus provides a foundation for categorial grammar in the form of a logic of concatenation. But natural language is characterized by dependencies which may also be discontinuous. In this paper we introduce the displacement calculus, a generalization of Lambek calculus, which preserves its good proof-theoretic properties while embracing discontinuity and subsuming it. We illustrate linguistic applications and prove Cut-elimination, the subformula property, and decidability.
2,010
Computation and Language
Punctuation effects in English and Esperanto texts
A statistical physics study of punctuation effects on sentence lengths is presented for written texts: {\it Alice in wonderland} and {\it Through a looking glass}. The translation of the first text into Esperanto is also considered as a test for the role of punctuation in defining a style, and for contrasting natural and artificial, but written, languages. Several log-log plots of the sentence length-rank relationship are presented for the major punctuation marks. Different power laws are observed with characteristic exponents. The exponent can take a value much less than unity ($ca.$ 0.50 or 0.30) depending on how a sentence is defined. The texts are also mapped into time series based on the word frequencies. The quantitative differences between the original and translated texts are very minute at the exponent level. It is argued that sentences seem to be more reliable than word distributions in discussing an author's style.
2,010
Computation and Language
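A minimal sketch of the measurement described (illustrative only; the tokenization, the choice of delimiters, the least-squares fit, and the file name are assumptions, not the authors' procedure):

import re
import numpy as np

def sentence_lengths(text, delimiters=r"[.!?]"):
    # split on a chosen set of punctuation marks and count words per "sentence";
    # changing the delimiter set changes how a sentence is defined, hence the exponent
    return [len(s.split()) for s in re.split(delimiters, text) if s.strip()]

def rank_length_exponent(lengths):
    # fit length ~ rank^(-alpha) by least squares in log-log coordinates
    ranked = np.sort(np.asarray(lengths, dtype=float))[::-1]
    ranks = np.arange(1, len(ranked) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(ranked), 1)
    return -slope

text = open("alice.txt", encoding="utf-8").read()  # hypothetical local copy of the text
print(rank_length_exponent(sentence_lengths(text)))
print(rank_length_exponent(sentence_lengths(text, delimiters=r"[.!?;:,]")))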
Morphonette: a morphological network of French
This paper describes in detail the first version of Morphonette, a new French morphological resource and a new, radically lexeme-based method of morphological analysis. This research is grounded in a paradigmatic conception of derivational morphology where the morphological structure is a structure of the entire lexicon and not one of the individual words it contains. The discovery of this structure relies on a measure of morphological similarity between words, on formal analogy and on the properties of two morphological paradigms:
2,010
Computation and Language
The Lambek-Grishin calculus is NP-complete
The Lambek-Grishin calculus LG is the symmetric extension of the non-associative Lambek calculus NL. In this paper we prove that the derivability problem for LG is NP-complete.
2,010
Computation and Language
Network analysis of a corpus of undeciphered Indus civilization inscriptions indicates syntactic organization
Archaeological excavations in the sites of the Indus Valley civilization (2500-1900 BCE) in Pakistan and northwestern India have unearthed a large number of artifacts with inscriptions made up of hundreds of distinct signs. To date there is no generally accepted decipherment of these sign sequences and there have been suggestions that the signs could be non-linguistic. Here we apply complex network analysis techniques to a database of available Indus inscriptions, with the aim of detecting patterns indicative of syntactic organization. Our results show the presence of patterns, e.g., recursive structures in the segmentation trees of the sequences, that suggest the existence of a grammar underlying these inscriptions.
2,011
Computation and Language
Using Soft Constraints To Learn Semantic Models Of Descriptions Of Shapes
The contribution of this paper is to provide a semantic model (using soft constraints) of the words used by web users to describe objects in a language game: a game in which one user describes a selected object among those composing the scene, and another user has to guess which object has been described. The given description needs to be unambiguous and accurate enough to allow other users to guess the described shape correctly. To build these semantic models, the descriptions need to be analyzed to extract the syntax and word classes used. We have modeled the meaning of these descriptions using soft constraints as a way of grounding the meaning. The descriptions generated by the system took into account the context of the object to avoid ambiguous descriptions, and allowed users to guess the described object correctly 72% of the time.
2,010
Computation and Language
Quantitative parametrization of texts written by Ivan Franko: An attempt of the project
In this article, the project of a quantitative parametrization of all texts by Ivan Franko is presented. It can be carried out only by using modern computer techniques, after frequency dictionaries for all of Franko's works are compiled. The paper describes the application spheres, methodology, stages, principles and peculiarities in the compilation of a frequency dictionary of the second half of the 19th century - the beginning of the 20th century. The relation between the Ivan Franko frequency dictionary, an explanatory dictionary of the writer's language and a text corpus is discussed.
2,010
Computation and Language
A generic tool to generate a lexicon for NLP from Lexicon-Grammar tables
Lexicon-Grammar tables constitute a large-coverage syntactic lexicon but they cannot be directly used in Natural Language Processing (NLP) applications because they sometimes rely on implicit information. In this paper, we introduce LGExtract, a generic tool for generating a syntactic lexicon for NLP from the Lexicon-Grammar tables. It is based on a global table that contains undefined information and on a unique extraction script including all operations to be performed for all tables. We also present an experiment that has been conducted to generate a new lexicon of French verbs and predicative nouns.
2,008
Computation and Language
Ivan Franko's novel Dlja domashnjoho ohnyshcha (For the Hearth) in the light of the frequency dictionary
In the article, the methodology and the principles of the compilation of the Frequency dictionary for Ivan Franko's novel Dlja domashnjoho ohnyshcha (For the Hearth) are described. The following statistical parameters of the novel vocabulary are obtained: variety, exclusiveness, concentration indexes, correlation between word rank and text coverage, etc. The main quantitative characteristics of Franko's novels Perekhresni stezhky (The Cross-Paths) and Dlja domashnjoho ohnyshcha are compared on the basis of their frequency dictionaries.
2,010
Computation and Language
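A minimal sketch of the kind of vocabulary statistics named above (illustrative only; the exact definitions of variety, exclusiveness and concentration used in the Franko project are not given here, so the formulas below are common textbook proxies):

from collections import Counter

def vocabulary_statistics(tokens, top=10):
    # common quantitative proxies; the project may define these indexes differently
    freq = Counter(tokens)
    N = len(tokens)                                   # text length in tokens
    V = len(freq)                                     # vocabulary size in word types
    hapax = sum(1 for c in freq.values() if c == 1)   # words occurring exactly once
    return {
        "variety": V / N,                             # vocabulary richness
        "exclusiveness": hapax / V,                   # share of once-used words
        "concentration": sum(c for _, c in freq.most_common(top)) / N,  # coverage by top words
    }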
Segmentation and Nodal Points in Narrative: Study of Multiple Variations of a Ballad
The Lady Maisry ballads afford us a framework within which to segment a storyline into its major components. Segments and as a consequence nodal points are discussed for nine different variants of the Lady Maisry story of a (young) woman being burnt to death by her family, on account of her becoming pregnant by a foreign personage. We motivate the importance of nodal points in textual and literary analysis. We show too how the openings of the nine variants can be analyzed comparatively, and also the conclusions of the ballads.
2,010
Computation and Language
Offline Arabic Handwriting Recognition Using Artificial Neural Network
The ambition of a character recognition system is to transform a text document typed on paper into a digital format that can be manipulated by word processor software. Unlike other languages, Arabic has unique features; seven or eight other languages, such as Urdu, Jawi and Persian, use its writing system. Arabic has twenty-eight letters, each of which can be linked in three different ways or separated, depending on the case. The difficulty of Arabic handwriting recognition is that the accuracy of character recognition affects the accuracy of word recognition; in addition, each character has two or three forms. The suggested solution, using an artificial neural network, can solve the problem and overcome the difficulty of Arabic handwriting recognition.
2,010
Computation and Language
Fuzzy Modeling and Natural Language Processing for Panini's Sanskrit Grammar
Indian languages have a long history among the world's natural languages. Panini was the first to define a grammar for the Sanskrit language, with about 4000 rules, in the fifth century. These rules contain uncertain information, and it is not possible to process the Sanskrit language by computer with uncertain information. In this paper, fuzzy logic and fuzzy reasoning are proposed to eliminate the uncertain information when reasoning with Sanskrit grammar. Sanskrit language processing is also discussed in this paper.
2,010
Computation and Language
The probabilistic analysis of language acquisition: Theoretical, computational, and experimental analysis
There is much debate over the degree to which language learning is governed by innate language-specific biases, or acquired through cognition-general principles. Here we examine the probabilistic language acquisition hypothesis on three levels: We outline a novel theoretical result showing that it is possible to learn the exact generative model underlying a wide class of languages, purely from observing samples of the language. We then describe a recently proposed practical framework, which quantifies natural language learnability, allowing specific learnability predictions to be made for the first time. In previous work, this framework was used to make learnability predictions for a wide variety of linguistic constructions, for which learnability has been much debated. Here, we present a new experiment which tests these learnability predictions. We find that our experimental results support the possibility that these linguistic constructions are acquired probabilistically from cognition-general principles.
2,010
Computation and Language
Complete Complementary Results Report of the MARF's NLP Approach to the DEFT 2010 Competition
This companion paper complements the main DEFT'10 article describing the MARF approach (arXiv:0905.1235) to the DEFT'10 NLP challenge (described at http://www.groupes.polymtl.ca/taln2010/deft.php in French). This paper aims to present the complete result sets of all the conducted experiments and their settings, in tables highlighting the approach and the best results, but also showing the worse and the worst results and their subsequent analysis. This particular work focuses on the application of MARF's classical and NLP pipelines to identification tasks within various francophone corpora: identifying the decade when an article was published for the first track (Piste 1), and the place of origin of a publication, such as the journal and location (France vs. Quebec), for the second track (Piste 2). This is the sixth iteration of the release of the results.
2,014
Computation and Language
Testing SDRT's Right Frontier
The Right Frontier Constraint (RFC), as a constraint on the attachment of new constituents to an existing discourse structure, has important implications for the interpretation of anaphoric elements in discourse and for Machine Learning (ML) approaches to learning discourse structures. In this paper we provide strong empirical support for SDRT's version of RFC. The analysis of about 100 doubly annotated documents by five different naive annotators shows that SDRT's RFC is respected about 95% of the time. The qualitative analysis of presumed violations that we have performed shows that they are either click-errors or structural misconceptions.
2,010
Computation and Language