{ "paper_id": "W98-0203", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:03:02.004462Z" }, "title": "Coreference as the Foundations for Link Analysis over Free Text Databases", "authors": [ { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Amit", "middle": [], "last": "Bagga Box", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Coreference annotated data has the potential to substantially increase the domain over which link analysis can be applied. We have developed coreference technologies which relate individuals and events within and across text documents. This in turn leverages the first step in mapping the information in those texts into a more database like format suitable for visualization with link driven software.", "pdf_parse": { "paper_id": "W98-0203", "_pdf_hash": "", "abstract": [ { "text": "Coreference annotated data has the potential to substantially increase the domain over which link analysis can be applied. We have developed coreference technologies which relate individuals and events within and across text documents. This in turn leverages the first step in mapping the information in those texts into a more database like format suitable for visualization with link driven software.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Coreference is in some sense nature's own hyperlink. For example, the phrase 'Alan Turing', 'the father of modern computer science', or 'he' can refer to the same individual in the world. The communicative function of coreference is the ability to link information about entities across many sentences and documents. In data base terms, individual sentences provide entry records which are organized around entities, and the method of indicating which entity the record is about is coreference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Link analysis is well suited to visualizing large structured databases where generalizations emerge from macro observations of relatedness. Unfortunately, free text is not sufficiently organized for similar fidelity observations. Coreference in its simplest form has the potential to organize free text sufficiently to greatly expand the domain over which link analysis can be fruitfully applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Below we will illustrate the kinds of coreference that we currently annotate in the CAMP software system and give an idea of our system performance. Then we will illustrate what kinds of observations could be pulled via visualization from coreference annotated document collections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Processing Software", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CAMP Natural Language", "sec_num": "2" }, { "text": "The foundation of our system is the CAMP NLP system. This system provides an integrated environment in which one can access many levels of linguistic information as well as world knowledge. Its main components include: named entity recognition, tokenization, sentence detection, part-of-speech tagging, morphological analysis, parsing, argument detection, and coreference resolution as described below. 
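As a rough illustration of how such a staged design can be organized, the sketch below composes hypothetical stage functions in execution order; the stage names and the shared document record are assumptions for illustration, not the actual CAMP interfaces.

# A rough sketch of composing NLP pipeline stages in execution order.
# The stage names and the shared record layout are illustrative assumptions,
# not CAMP's API.
def run_pipeline(raw_text, stages):
    doc = {'text': raw_text, 'annotations': {}}
    for name, stage in stages:
        # Each stage reads the record (text plus earlier annotations) and
        # contributes its own layer, e.g. tokens, sentences, parses, chains.
        doc['annotations'][name] = stage(doc)
    return doc

# Example wiring with trivial placeholder stages:
stages = [
    ('tokens', lambda d: d['text'].split()),
    ('sentences', lambda d: d['text'].split('. ')),
]
print(run_pipeline('John Smith resigned. He left yesterday.', stages)['annotations'])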
Many of the techniques used for these tasks perform at or near the state of the art and are described in more depth in (Wacholder 97) , (Collins 96) , (Baldwin 95) , (Reynar 97) , (Baldwin 97) , (Bagga, 98b) .", "cite_spans": [ { "start": 522, "end": 536, "text": "(Wacholder 97)", "ref_id": null }, { "start": 539, "end": 551, "text": "(Collins 96)", "ref_id": null }, { "start": 554, "end": 566, "text": "(Baldwin 95)", "ref_id": null }, { "start": 569, "end": 580, "text": "(Reynar 97)", "ref_id": null }, { "start": 583, "end": 595, "text": "(Baldwin 97)", "ref_id": null }, { "start": 598, "end": 605, "text": "(Bagga,", "ref_id": null }, { "start": 606, "end": 610, "text": "98b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "CAMP Natural Language", "sec_num": "2" }, { "text": "We have been developing the within document coreference component of CAMP since 1995, when the system was developed to participate in the Sixth Message Understanding Conference (MUC-6) coreference task. Below we will illustrate the classes of coreference that the system annotates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Within Document Coreference", "sec_num": "3" }, { "text": "Coreference breaks down into several readily identified areas based on the form of the phrase being resolved and the method of calculating coreference. We will proceed in the approximate order of the system's execution of components. A more detailed analysis of the classes of coreference can be found in (Bagga, 98a) .", "cite_spans": [ { "start": 307, "end": 314, "text": "(Bagga,", "ref_id": null }, { "start": 315, "end": 319, "text": "98a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Within Document Coreference", "sec_num": "3" }, { "text": "There are several readily identified syntactic constructions that reliably indicate coreference. First are appositive relations, such as the one that holds between 'John Smith' and 'chairman of General Electric' in:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "John Smith, chairman of General Electric, resigned yesterday.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "Identifying this class of coreference requires some syntactic knowledge of the text and property analysis of the individual phrases to avoid finding coreference in examples like: John Smith, 47, resigned yesterday.
Smith, Jones, Woodhouse and Fife announced a new partner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "To avoid these sorts of errors we have a mutual exclusion test that applies to such positings of coreference to prevent nonsensical annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "Another class of highly syntactic coreference exists in the form of predicate nominal constructions, as between 'John' and 'the finest juggler in the world' in:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "John is the finest juggler in the world.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "Like the appositive case, mutual exclusion tests are required to prevent incorrect resolutions as in:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "John is tall. They are blue. These classes of highly syntactic coreference can play a very important role in bridging phrases that we would normally be unable to relate. For example, it is unlikely that our software would be able to relate the same noun phrases in a text like The finest juggler in the world visited Philadelphia this week.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "John Smith pleased crowds every night in the Annenberg theater. This is because we do not have sufficiently sophisticated knowledge sources to determine that jugglers are very likely to be in the business of pleasing crowds. But the recognition of the predicate nominal will allow us to connect a chain of 'John Smith', 'Mr. Smith', 'he' with a chain of 'the finest juggler in the world', 'the juggler' and 'a juggling expert'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Highly Syntactic Coreference", "sec_num": "3.1" }, { "text": "Names of people, places, products and companies are referred to in many different variations. In journalistic prose there will be a full name of an entity, and throughout the rest of the article there will be elided references to the same entity. Some name variations are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Noun Coreference", "sec_num": "3.2" }, { "text": "\u2022 Mr. James Dabah <-James <-Jim <-Dabah \u2022 Minnesota Mining and Manufacturing <-3M Corp. <-3M \u2022 Washington D.C. <-WASHINGTON <-Washington <-D.C. <-Wash. \u2022 New York <-New York City <-NYC <-N.Y.C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Noun Coreference", "sec_num": "3.2" }, { "text": "This class of coreference forms a solid foundation over which we resolve the remaining coreference in the document. One reason for this is that we learn important properties about the phrases by virtue of the coreference resolution. For example, we may not know whether 'Dabah' is a person name, male name, female name, company or place, but upon resolution with 'Mr.
James Dabah' we then know that it refers to a male person.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Noun Coreference", "sec_num": "3.2" }, { "text": "We resolve such coreferences with partial string matching subroutines coupled with lists of honorifics, corporate designators and acronyms. A substantial problem in resolving these names is avoiding overgeneration like relating 'Washington' the place with the name 'Consuela Washington'. We control the string matching with a range of salience functions and restrictions of the kinds of partial string matches we are willing to tolerate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Noun Coreference", "sec_num": "3.2" }, { "text": "A very challenging area of coreference annotation involves coreference between common nouns like 'a shady stock deal' and 'the deal'. Fundamentally the problem is that very conservative approaches to exact and partial string matches overgenerate badly. Some examples of actual chains are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Noun Coreference", "sec_num": "3.3" }, { "text": "\u2022 his dad's trophies <-those trophies \u2022 those words <-the last words \u2022 the risk <-the potential risk", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Noun Coreference", "sec_num": "3.3" }, { "text": "\u2022 its accident investigation <-the investigation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Noun Coreference", "sec_num": "3.3" }, { "text": "We have adopted a range of matching heuristics and salience strategies to try and recognize a small, but accurate, subset of these coreferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Noun Coreference", "sec_num": "3.3" }, { "text": "The pronominal resolution component of the system is perhaps the most advanced of all the components. It features a sophisticated salience model designed to produce high accuracy coreference in highly ambiguous texts. It is capable of noticing ambiguity in text, and will fail to resolve pronouns in such circumstances. For example the system will not resolve 'he' in the following example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun Coreference", "sec_num": "3.4" }, { "text": "Earl and Ted were working together when suddenly he fell into the threshing machine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun Coreference", "sec_num": "3.4" }, { "text": "We resolve pronouns like 'they', 'it', 'he', 'hers', 'themselves' to proper nouns, common nouns and other pronouns. Depending on the genre of data being processed, this component can resolve 60-90% of the pronouns in a text with very high accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun Coreference", "sec_num": "3.4" }, { "text": "3.5 The Overall Nexus of Coreference in a Document Once all the coreference in a document has been computed, we have a good approximation of which sentences are strongly related to other sentences in the document by counting the number of coreference links between the sentences. We know which entities are mentioned most often, and what other entities are involved in the same sentences or paragraphs. 
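A minimal sketch of these two counts follows, assuming the coreference output has already been flattened into chains of (sentence index, phrase) mentions; that layout is assumed here for illustration and is not CAMP's internal representation.

from collections import Counter
from itertools import combinations

# Hypothetical input: each chain lists the mentions of one entity as
# (sentence_index, phrase) pairs, e.g. from a within-document coreference pass.
chains = [
    [(0, 'John Smith'), (1, 'he'), (3, 'Mr. Smith')],
    [(1, 'the finest juggler in the world'), (3, 'the juggler')],
]

# Rank entities by how often they are mentioned in the document.
mention_counts = {chain[0][1]: len(chain) for chain in chains}

# Count coreference links between pairs of sentences: every pair of mentions
# of the same entity in two different sentences contributes one link.
sentence_links = Counter()
for chain in chains:
    for (s1, _), (s2, _) in combinations(chain, 2):
        if s1 != s2:
            sentence_links[tuple(sorted((s1, s2)))] += 1

print(mention_counts)
print(sentence_links.most_common())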
This sort of information has been used to generate very effective summaries of documents and as a foundation for a simple visualization interface to texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun Coreference", "sec_num": "3.4" }, { "text": "Cross Document Coreference", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "Cross-document coreference occurs when the same person, place, event, or concept is discussed in more than one text source. Figure 1 shows the architecture of the cross-document module of CAMP. This module takes as input the coreference chains produced by CAMP's within document coreference module. Details about each of the main steps of the cross-document coreference algorithm are given below. \u2022 First, for each article, the within document coreference module of CAMP is run on that article. It produces coreference chains for all the entities mentioned in the article. For example, consider the two extracts in Figures 2 and 4. The coreference chains output by CAMP for the two extracts are shown in Figures 3 and 5. \u2022 Next, for the coreference chain of interest within each article (for example, the coreference chain that contains \"John Perry\"), the SentenceExtractor module extracts all the sentences that contain the noun phrases which form the coreference chain. In other words, the SentenceExtractor module produces a \"summary\" of the article with respect to the entity of interest. These summaries are a special case of the query-sensitive techniques being developed at Penn using CAMP. Therefore, for doc.36 ( Figure 2 ), since at least one of the three noun phrases (\"John Perry,\" \"he,\" and \"Perry\") in the coreference chain of interest appears in each of the three sentences in the extract, the summary produced by SentenceExtractor is the extract itself. On the other hand, the summary produced by SentenceExtractor for the coreference chain of interest in doc.38 is only the first sentence of the extract because the only element of the coreference chain appears in this sentence.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 1", "ref_id": null }, { "start": 697, "end": 705, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "\u2022 Finally, for each article, the VSM-Disambiguate module uses the summary extracted by the SentenceExtractor and computes its similarity with the summaries extracted from each of the other articles. The VSM-Disambiguate module uses a standard vector space model (used widely in information retrieval) (Salton, 89) to compute the similarities between the summaries. Summaries having similarity above a certain threshold are considered to be regarding the same entity. [Figure 4, Extract from doc.38: Oliver \"Biff\" Kelly of Weymouth succeeds John Perry as president of the Massachusetts Golf Association. \"We will have continued growth in the future,\" said Kelly, who will serve for two years. \"There's been a lot of changes and there will be continued changes as we head into the year 2000.\"] [Figure 7: Precision, Recall, and F-Measure Using Our Algorithm for the John Smith Data Set.]", "cite_spans": [ { "start": 684, "end": 692, "text": "(Salton,", "ref_id": null }, { "start": 693, "end": 696, "text": "89)", "ref_id": null } ], "ref_spans": [ { "start": 433, "end": 441, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "We tested our cross-document system on two highly ambiguous test sets. The first set contained 197 articles from the 1996 and 1997 editions of the New York Times, while the second set contained 219 articles from the 1997 edition of the New York Times. The sole criterion for including an article in the two sets was the presence of a string matching the \"/John.*?Smith/\" and the \"/resign/\" regular expressions, respectively.
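Before turning to the results, here is a minimal sketch of the vector-space disambiguation step described above, assuming each article has already been reduced to an entity-focused summary string; the whitespace tokenization, tf-idf weighting, and pairwise thresholding are simplifications for illustration rather than the exact model used in CAMP.

import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse term-weight vectors.
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def tf_idf_vectors(summaries):
    # Plain tf-idf over whitespace tokens; a real system would tokenize properly.
    docs = [Counter(s.lower().split()) for s in summaries]
    df = Counter(t for d in docs for t in d)
    n = len(docs)
    return [{t: tf * math.log(n / df[t]) for t, tf in d.items()} for d in docs]

def link_same_entity(summaries, threshold=0.15):
    # Pair up articles whose entity-focused summaries are similar enough to be
    # considered mentions of the same underlying entity.
    vecs = tf_idf_vectors(summaries)
    return [(i, j) for i in range(len(vecs)) for j in range(i + 1, len(vecs))
            if cosine(vecs[i], vecs[j]) > threshold]

# e.g. link_same_entity(summaries_by_article) returns index pairs judged to
# describe the same entity; merging the pairs transitively yields candidate
# cross-document coreference chains.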
The goal for the first set was to identify cross-document coreference chains about the same John Smith, and the goal for the second set was to identify cross-document coreference chains about the same \"resign\" event. The answer keys were manually created, but the scoring was completely automated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4.1" }, { "text": "There were 35 different John Smiths in the first set. Of these, 24 were involved in chains of size 1. The other 173 articles were regarding the 11 remaining John Smiths. Descriptions of a few of the John Smiths are: Chairman and CEO of General Motors, assistant track coach at UCLA, the legendary explorer and main character in Disney's Pocahontas, and the former president of the Labor Party of Britain. In the second set, there were 97 different \"resign\" events. Of these, 60 were involved in chains of size 1. The articles were regarding resignations of several different people including Ted Hobart of ABC Corp., Dick Morris, Speaker Jim Wright, and the possible resignation of Newt Gingrich.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "4.1" }, { "text": "In order to score the cross-document coreference chains output by the system, we had to map the cross-document coreference scoring problem to a within-document coreference scoring problem. This was done by creating a meta document consisting of the file names of each of the documents that the system was run on. Assuming that each of the documents in the two data sets was about a single John Smith, or about a single \"resign\" event, the cross-document coreference chains produced by the system could now be evaluated by scoring the corresponding within-document coreference chains in the meta document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring and Results", "sec_num": "4.2" }, { "text": "Precision and recall are the measures used to evaluate the chains output by the system. For an entity i, we define the precision and recall with respect to that entity in Figure 6 .", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 180, "text": "Figure 6", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Scoring and Results", "sec_num": "4.2" }, { "text": "The final precision and recall numbers are computed by the following two formulae:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring and Results", "sec_num": "4.2" }, { "text": "\\mathrm{Final\\ Precision} = \\sum_{i=1}^{N} w_i \\times \\mathrm{Precision}_i \\qquad \\mathrm{Final\\ Recall} = \\sum_{i=1}^{N} w_i \\times \\mathrm{Recall}_i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring and Results", "sec_num": "4.2" }, { "text": "where N is the number of entities in the document, and w_i is the weight assigned to entity i in the document. For the results discussed in this paper, equal weights were assigned to each entity in the meta document. In other words, w_i = 1/N for all i. Full details about the scoring algorithm can be found in (Bagga, 98) . Figure 7 shows the Precision, Recall, and the F-Measure (the average of precision and recall with equal weights for both) statistics for the John Smith data set. The best precision and recall achieved by the system on this data set was 93% and 77% respectively (when the threshold for the vector space model was set to 0.15). [Figure 8: Precision, Recall, and F-Measure Using Our Algorithm for the \"resign\" Data Set.] Similarly, Figure 8 shows the same three statistics for the \"resign\" data set. A small sketch of how these per-entity scores are combined into the final numbers is given below.
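The sketch is an illustrative reconstruction of the scoring just described, not the authors' released scorer; it assumes each chain is given simply as a set of document identifiers in the meta document and that every document appears in exactly one output chain and one truth chain.

# Per-entity precision/recall over a meta document of file names, combined
# with equal weights w_i = 1/N, as described above.
def score(output_chains, truth_chains):
    out_of = {d: chain for chain in output_chains for d in chain}
    true_of = {d: chain for chain in truth_chains for d in chain}
    docs = list(true_of)
    n = len(docs)
    precision = recall = 0.0
    for d in docs:
        # Correct elements: members of d's output chain that are also in d's truth chain.
        correct = len(out_of[d] & true_of[d])
        precision += (correct / len(out_of[d])) / n
        recall += (correct / len(true_of[d])) / n
    return precision, recall

# Example with two truth entities spread over four documents:
truth = [{'doc1', 'doc2'}, {'doc3', 'doc4'}]
output = [{'doc1', 'doc2', 'doc3'}, {'doc4'}]
p, r = score(output, truth)
print('%.2f %.2f' % (p, r))   # prints 0.67 0.75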
The best precision and recall achieved by the system on this data set was 94% and 81% respectively. This occurs when the threshold for the vector space model was set to 0.2. The results show that the system was very successful in resolving cross-document coreference.", "cite_spans": [ { "start": 308, "end": 315, "text": "(Bagga,", "ref_id": null }, { "start": 316, "end": 319, "text": "98)", "ref_id": null } ], "ref_spans": [ { "start": 322, "end": 330, "text": "Figure 7", "ref_id": null }, { "start": 526, "end": 534, "text": "Figure 8", "ref_id": null }, { "start": 749, "end": 757, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Scoring and Results", "sec_num": "4.2" }, { "text": "Crucial to the entire process of visualizing large document collections is relating the same individual or event across multiple documents. This single aspect of our system establishes its viability for large collection analysis. It allows the drops of information held in each document to be merged into a larger pool that is well organized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Possible Generalizations About Large Data Collections Derived From Coreference Annotations", "sec_num": "5" }, { "text": "Two display techniques immediately suggest themselves for accessing the coreference annotations in a document collection, the first is to take the identified entities as atomic and link them to other entities which co-occur in the same document. This might reveal a relation between individuals and events, or individuals and other individuals. For example, such a linking might indicate that no newspaper article ever mentioned both Clark Kent and Superman in the same article, but that most all other famous individuals tended to overlap in some article or another. On the positive case, individuals, over time, may tend to congregate in media stories or events may tend to be more tightly linked than otherwise expected. The second technique would be to take as atomic the documents and relate via links other documents that contain mention of the same entity. With a temporal dimension, the role of individuals and events could be assessed as time moved forward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Primary Display of Information", "sec_num": "5.1" }, { "text": "The fact that two entities coexisted in the same sentence in a document is noteworthy for correlational analysis. Links could be restricted to those between entities that co-existed in the same sentence or paragraph. Additional filterings are possible with constraints on the sorts of verbs that exist in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finer Grained Analysis of the Documents", "sec_num": "5.2" }, { "text": "A more sophisticated version of the above is to access the argument structure of the document. CAMP software provides a limited predicate argument structure that allows subjects/verbs/objects to be identified. This ability moves our annotation closer to the fixed record data structure of a traditional data base. One could select an event and its object, for instance 'X sold arms to Iraq' and see what the fillers for X were in a link analysis. 
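As an illustration of this kind of query (the triple format and field names below are assumed for the sketch, not CAMP's actual output), collecting the fillers for X together with the documents that support each link might look like:

# Hypothetical subject/verb/object triples, e.g. as produced by a shallow
# predicate-argument analysis; the format is assumed for illustration.
triples = [
    {'doc': 'doc.12', 'subj': 'Acme Corp.', 'verb': 'sold', 'obj': 'arms to Iraq'},
    {'doc': 'doc.19', 'subj': 'a broker', 'verb': 'sold', 'obj': 'arms to Iraq'},
    {'doc': 'doc.23', 'subj': 'Acme Corp.', 'verb': 'acquired', 'obj': 'a rival'},
]

# Select an event and its object, then collect the fillers for X with the
# documents that support each link, ready to be drawn as edges in a link display.
def fillers(triples, verb, obj):
    links = {}
    for t in triples:
        if t['verb'] == verb and t['obj'] == obj:
            links.setdefault(t['subj'], []).append(t['doc'])
    return links

print(fillers(triples, 'sold', 'arms to Iraq'))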
There are limitations to predicate argument structure matching; for instance, getting the correct pattern for all the 'selling of arms' variations is quite difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finer Grained Analysis of the Documents", "sec_num": "5.2" }, { "text": "In any case, there appear to be a myriad of applications for link analysis in the domain of large text data bases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finer Grained Analysis of the Documents", "sec_num": "5.2" }, { "text": "The goal of this paper has been to articulate a novel input class for link based visualization techniques: coreference. We feel that there is tremendous potential for collaboration between researchers in visualization and in coreference annotation given the new space of information provided by coreference analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" } ], "back_matter": [ { "text": "The second author was supported in part by a Fellowship from IBM Corporation, and in part by the University of Pennsylvania. Part of this work was done when the second author was visiting the Institute for Research in Cognitive Science at the University of Pennsylvania.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithms for Scoring Coreference Chains", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Linguistic Coreference Workshop at The First International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bagga, Amit, and Breck Baldwin. Algorithms for Scoring Coreference Chains. Proceedings of the Linguistic Coreference Workshop at The First International Conference on Language Resources and Evaluation, May 1998.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Evaluation of Coreferences and Coreference Resolution Systems", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the First Language Resource and Evaluation Conference", "volume": "", "issue": "", "pages": "563--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bagga, Amit. Evaluation of Coreferences and Coreference Resolution Systems. Proceedings of the First Language Resource and Evaluation Conference, pp. 563-566, May 1998.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Entity-Based Cross-Document Coreferencing Using the Vector Space Model", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "To appear at the 17th International Conference on Computational Linguistics and the 36th Annual Meeting of the Association for Computational Linguistics (COLING-ACL'98)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bagga, Amit, and Breck Baldwin.
Entity-Based Cross-Document Coreferencing Using the Vec- tor Space Model. To appear at the 17th Inter- national Conference on Computational Linguis- tics and the 36th Annual Meeting of the Asso- ciation for Computational Linguistics (COLING- ACL'98), August 1998.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "EAGLE: An Extensible Architecture for General Linguistic Engineering", "authors": [ { "first": "B", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "C", "middle": [], "last": "Doran", "suffix": "" }, { "first": "J", "middle": [], "last": "Reynar", "suffix": "" }, { "first": "M", "middle": [], "last": "Niv", "suffix": "" }, { "first": "M", "middle": [], "last": "Wasson", "suffix": "" } ], "year": 1997, "venue": "Proceedings RIA O, Computer-Assisted Information Searching on Internet", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baldwin, B., C. Doran, J. Reynar, M. Niv, and M. Wasson. EAGLE: An Extensible Architecture for General Linguistic Engineering. Proceedings RIA O, Computer-Assisted Information Searching on Internet, Montreal, Canada, 1997.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A New Statistical Parser Based on Bigram Lexical Dependencies", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34 th Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, Michael. A New Statistical Parser Based on Bigram Lexical Dependencies. Proceedings of the 34 th Meeting of the Association for Computational Linguistics, 1996.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Entropy Model for Part-Of-Speech Tagging", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" }, { "first": "", "middle": [], "last": "Maximum", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "133--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, Adwait. A Maximum Entropy Model for Part-Of-Speech Tagging. Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pp. 133-142, May 1996.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Disambiguation of Proper Names in Text", "authors": [ { "first": "Nina", "middle": [], "last": "Wacholder", "suffix": "" }, { "first": "Yael", "middle": [], "last": "Ravin", "suffix": "" }, { "first": "Misook", "middle": [], "last": "Choi", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "202--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wacholder, Nina, Yael Ravin, and Misook Choi. Dis- ambiguation of Proper Names in Text. Proceedings of the Fifth Conference on Applied Natural Lan- guage Processing, pp. 
202-208, 1997.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Maximum Entropy Approach to Identifying Sentence Boundaries", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Reynar", "suffix": "" }, { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "16--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reynar, Jeffrey, and Adwait Ratnaparkhi. A Maximum Entropy Approach to Identifying Sentence Boundaries. Proceedings of the Fifth Conference on Applied Natural Language Processing, pp. 16-19, 1997.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer", "authors": [ { "first": "Gerard", "middle": [], "last": "Salton", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salton, Gerard. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Reading, MA: Addison-Wesley, 1989.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 3: Coreference Chains for doc.36" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 4: Extract from doc.38" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 6: Definitions for Precision and Recall for an Entity i. \\mathrm{Precision}_i = \\frac{\\text{number of correct elements in the output chain containing entity } i}{\\text{number of elements in the output chain containing entity } i} \\qquad \\mathrm{Recall}_i = \\frac{\\text{number of correct elements in the output chain containing entity } i}{\\text{number of elements in the truth chain containing entity } i}" } } } }