{ "paper_id": "W11-0128", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:38:05.424168Z" }, "title": "Ontology-based Distinction between Polysemy and Homonymy", "authors": [ { "first": "Jason", "middle": [], "last": "Utt", "suffix": "", "affiliation": { "laboratory": "", "institution": "Sprachverarbeitung Universit\u00e4t Stuttgart", "location": {} }, "email": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e4t Heidelberg", "location": {} }, "email": "pado@cl.uni-heidelberg.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We consider the problem of distinguishing polysemous from homonymous nouns. This distinction is often taken for granted, but is seldom operationalized in the shape of an empirical model. We present a first step towards such a model, based on WordNet augmented with ontological classes provided by CoreLex. This model provides a polysemy index for each noun which (a), accurately distinguishes between polysemy and homonymy; (b), supports the analysis that polysemy can be grounded in the frequency of the meaning shifts shown by nouns; and (c), improves a regression model that predicts when the \"one-sense-per-discourse\" hypothesis fails.", "pdf_parse": { "paper_id": "W11-0128", "_pdf_hash": "", "abstract": [ { "text": "We consider the problem of distinguishing polysemous from homonymous nouns. This distinction is often taken for granted, but is seldom operationalized in the shape of an empirical model. We present a first step towards such a model, based on WordNet augmented with ontological classes provided by CoreLex. This model provides a polysemy index for each noun which (a), accurately distinguishes between polysemy and homonymy; (b), supports the analysis that polysemy can be grounded in the frequency of the meaning shifts shown by nouns; and (c), improves a regression model that predicts when the \"one-sense-per-discourse\" hypothesis fails.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Linguistic studies of word meaning generally divide ambiguity into homonymy and polysemy. Homonymous words exhibit idiosyncratic variation, with essentially unrelated senses, e.g. bank as FINANCIAL INSTITUTION versus as NATURAL OBJECT. In polysemy, meanwhile, sense variation is systematic, i.e., appears for whole sets of words. E.g., lamb, chicken and salmon have ANIMAL and FOOD senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is exactly this systematicity that represents a challenge for lexical semantics. While homonymy is assumed to be encoded in the lexicon for each lemma, there is a substantial body of work on dealing with general polysemy patterns (cf. Nunberg and Zaenen (1992) ; Copestake and Briscoe (1995) ; Pustejovsky (1995) ; Nunberg (1995) ). This work is predominantly theoretical in nature. 
Examples of questions addressed are the conditions under which polysemy arises, the representation of polysemy in the semantic lexicon, disambiguation mechanisms in the syntax-semantics interface, and subcategories of polysemy.", "cite_spans": [ { "start": 238, "end": 263, "text": "Nunberg and Zaenen (1992)", "ref_id": "BIBREF10" }, { "start": 266, "end": 294, "text": "Copestake and Briscoe (1995)", "ref_id": "BIBREF2" }, { "start": 297, "end": 315, "text": "Pustejovsky (1995)", "ref_id": "BIBREF11" }, { "start": 318, "end": 332, "text": "Nunberg (1995)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The distinction between polysemy and homonymy also has important potential ramifications for computational linguistics, in particular for Word Sense Disambiguation (WSD). Notably, Ide and Wilks (2006) argue that WSD should focus on modeling homonymous sense distinctions, which are easy to make and provide most benefit. Another case in point is the one-sense-per-discourse hypothesis (Gale et al., 1992) , which claims that within a discourse, instances of a word will strongly tend towards realizing the same sense. This hypothesis seems to apply primarily to homonyms, as pointed out by Krovetz (1998) .", "cite_spans": [ { "start": 180, "end": 200, "text": "Ide and Wilks (2006)", "ref_id": "BIBREF5" }, { "start": 385, "end": 404, "text": "(Gale et al., 1992)", "ref_id": "BIBREF4" }, { "start": 590, "end": 604, "text": "Krovetz (1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unfortunately, the distinction between polysemy and homonymy is still very much an unsolved question. The discussion in the theoretical literature focuses mostly on clear-cut examples and avoids the broader issue. Work on WSD, and in computational linguistics more generally, almost exclusively builds on the WordNet (Fellbaum, 1998) word sense inventory, which lists an unstructured set of senses for each word and does not indicate in which way these senses are semantically related. Diachronic linguistics proposes etymological criteria; however, these are neither undisputed nor easy to operationalize. Consequently, there are currently no broad-coverage lexicons that indicate the polysemy status of words, nor even, to our knowledge, precise, automatizable criteria.", "cite_spans": [ { "start": 317, "end": 333, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal in this paper is to take a first step towards an automatic polysemy classification. Our approach is based on the aforementioned intuition that meaning variation is systematic in polysemy, but not in homonymy. This approach is described in Section 2. We assess systematicity by mapping WordNet senses onto basic types, a set of 39 ontological categories defined by the CoreLex resource (Buitelaar, 1998) , and looking at the prevalence of pairs of basic types (such as {FINANCIAL INSTITUTION, NATURAL OBJECT} above) across the lexicon. We evaluate this model on two tasks. In Section 3, we apply the measure to the classification of a set of typical polysemy and homonymy lemmas, mostly drawn from the literature. In Section 4, we apply it to the one-sense-per-discourse hypothesis and show that polysemous words tend to violate this hypothesis more than homonyms. 
Section 5 concludes.", "cite_spans": [ { "start": 394, "end": 411, "text": "(Buitelaar, 1998)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to take the first steps towards an empirical model of polysemy, that is, a computational model which makes predictions for -in principle -arbitrary words on the basis of their semantic behavior.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Polysemy", "sec_num": "2" }, { "text": "The basis of our approach mirrors the focus of much linguistic work on polysemy, namely the fact that polysemy is systematic: There is a whole set of words which show the same variation between two (or more) ontological categories, cf. the \"universal grinder\" (Copestake and Briscoe, 1995) . There are different ways of grounding this notion of systematicity empirically. An obvious choice would be to use a corpus. However, this would introduce a number of problems. First, while corpora provide frequency information, the role of frequency with respect to systematicity is unclear: should acceptable but rare senses play a role, or not? We side with the theoretical literature in assuming that they do. Another problem with corpora is the actual observation of sense variation. Few sense-tagged corpora exist, and those that do are typically small. Interpreting context variation in untagged corpora, on the other hand, corresponds to unsupervised WSD, a serious research problem in itself -see, e.g., Navigli (2009) .", "cite_spans": [ { "start": 260, "end": 289, "text": "(Copestake and Briscoe, 1995)", "ref_id": "BIBREF2" }, { "start": 1004, "end": 1018, "text": "Navigli (2009)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling Polysemy", "sec_num": "2" }, { "text": "We therefore decided to adopt a knowledge-based approach that uses the structure of the WordNet ontology to calculate how systematically the senses of a word vary. The resulting model sets all senses of a word on equal footing. It is thus vulnerable to shortcomings in the architecture of WordNet, but this danger is alleviated in practice by our use of a \"coarsened\" version of WordNet (see below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling Polysemy", "sec_num": "2" }, { "text": "WordNet provides only a flat list of senses for each word. This list does not indicate the nature of the sense variation among the senses. However, building on the generative lexicon theory by Pustejovsky (1995) , Buitelaar (1998) has developed the \"CoreLex\" resource. It defines a set of 39 so-called basic types which correspond to coarse-grained ontological categories. Each basic type is linked to one or more WordNet anchor nodes, which define a complete mapping between WordNet synsets and basic types by dominance. 1 Table 1 shows the set of basic types and their main anchors; Table 2 shows example lemmas for some basic types.", "cite_spans": [ { "start": 193, "end": 211, "text": "Pustejovsky (1995)", "ref_id": "BIBREF11" }, { "start": 214, "end": 230, "text": "Buitelaar (1998)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 524, "end": 531, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 585, "end": 592, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "WordNet, CoreLex and Basic Types", "sec_num": "2.1" }, { "text": "Ambiguous lemmas are often associated with two or more basic types. 
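To make the anchor-node mapping concrete, the following R sketch assigns a synset to the basic type of its most specific dominating anchor node (cf. footnote 1). The hypernym path and the three-anchor subset are toy stand-ins for the full WordNet/CoreLex data, not the actual resource contents.

```r
# Toy anchor set: basic type -> WordNet anchor synset (subset; hypothetical IDs)
anchors <- c(art = "artifact.n.01",
             qui = "indefinite_quantity.n.01",
             grs = "social_group.n.01")

# Toy hypernym path for one synset, most specific first
path_bottle <- c("bottle.n.01", "vessel.n.03", "container.n.01",
                 "instrumentality.n.03", "artifact.n.01",
                 "whole.n.02", "object.n.01", "entity.n.01")

# The basic type is the anchor that appears earliest on the path,
# i.e. the most specific dominating anchor node
basic_type <- function(path, anchors) {
  pos <- match(anchors, path)   # position of each anchor on the path, NA if absent
  if (all(is.na(pos))) return(NA_character_)
  names(anchors)[which.min(pos)]
}

basic_type(path_bottle, anchors)   # "art"
```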
CoreLex therefore further assigns each lemma to what Buitelaar calls a polysemy class, the set of all basic types its synsets belong to; a class with multiple representatives is considered systematic. These classes subsume both idiosyncratic and systematic patterns, and thus, despite their name, provide no clue about the nature of the ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet, CoreLex and Basic Types", "sec_num": "2.1" }, { "text": "CoreLex makes it possible to represent the meaning of a lemma not through a set of synsets, but instead in terms of a set of basic types. This constitutes an important step forward. Our working hypothesis is that these basic types approximate the ontological categories that are used in the literature on polysemy to define polysemy patterns. That is, we can define a meaning shift to mean that a lemma possesses one sense in one basic type, while another sense belongs to another basic type. Naturally, this correspondence is not perfect: systematic polysemy did not play a role in the design of the WordNet ontology. Nevertheless, there is a fairly good approximation that allows us to recover many prominent polysemy patterns. Table 3 shows three polysemy patterns characterized in terms of basic types. The first class was already mentioned before. The second class contains a subset of \"transparent nouns\" which can denote a container or a quantity. The last class contains words which describe a place or a group of people. ", "cite_spans": [], "ref_spans": [ { "start": 730, "end": 737, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "WordNet, CoreLex and Basic Types", "sec_num": "2.1" }, { "text": "Given the intuitions developed in the previous section, we define a basic ambiguity as a pair of basic types, both of which are associated with a given lemma. The variation spectrum of a word is then the set of all its basic ambiguities. For example, bottle would have the variation spectrum {{art qui} } (cf . Table 3) ; the word course with the three basic types act, art, grs would have the variation spectrum {{act art}; {act grs}; {art grs} }. There are 39 basic types and thus 39 \u2022 38/2 = 741 possible basic ambiguities. In practice, only 663 basic ambiguities are attested in WordNet. We can quantify each basic ambiguity by the number of words that exhibit it. For the moment, we simply interpret frequency as systematicity. 2 Thus, we interpret the high-frequency (systematic) basic ambiguities as polysemous, and low-frequency (idiosyncratic) basic ambiguities as homonymous. Table 4 shows the most frequent basic ambiguities, all of which apply to several hundred lemmas and can safely be interpreted as polysemous. At the other end, 56 of the 663 basic ambiguities are singletons, i.e. are attested by only a single lemma.", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 320, "text": ". Table 3)", "ref_id": "TABREF3" }, { "start": 887, "end": 894, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Polysemy as Systematicity", "sec_num": "2.2" }, { "text": "In a second step, we extend this classification from basic ambiguities to lemmas. The intuition is again fairly straightforward: A word whose basic ambiguities are systematic will be perceived as polysemous, and as homonymous otherwise. 
This is clearly an oversimplification, both practically, since we depend on WordNet/CoreLex having made the correct design decisions in defining the ontology and the basic types; as well as conceptually, since not all polysemy patterns will presumably show the same degree of systematicity. Nevertheless, we believe that basic types provide an informative level of abstraction, and that our model is in principle even able to account for conventionalized metaphor, to the extent that the corresponding senses are encoded in WordNet. The exact manner in which the systematicity of the individual basic ambiguities of one lemma are combined is not a priori clear. We have chosen the following method. Let P be a basic ambiguity, P(w) the variation spectrum of a lemma w, and freq(P ) the number of WordNet lemmas with basic ambiguity P . We define the set of polysemous basic ambiguities P N as the N -most frequent bins of basic ambiguities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemy as Systematicity", "sec_num": "2.2" }, { "text": "P N = {[P 1 ], ..., [P N ]}, where [P i ] = {P j | freq(P i ) = freq(P j )} and freq(P k ) > freq(P l ) for k < l.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemy as Systematicity", "sec_num": "2.2" }, { "text": "We call non-polysemous basic ambiguities idiosyncratic. The polysemy index of a lemma w, \u03c0 N (w), is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemy as Systematicity", "sec_num": "2.2" }, { "text": "\u03c0 N (w) = | P N \u2229 P(w)| | P(w)| (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemy as Systematicity", "sec_num": "2.2" }, { "text": "\u03c0 N simply measures the ratio of w's basic ambiguities which are polysemous, i.e., high-frequency basic ambiguities. \u03c0 N ranges between 0 and 1, and can be interpreted analogously to the intuition that we have developed on the level of basic ambiguities: high values of \u03c0 (close to 1) mean that the majority of a lemma's basic ambiguities are polysemous, and therefore the lemma is perceived as polysemous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemy as Systematicity", "sec_num": "2.2" }, { "text": "In contrast, low values of \u03c0 (close to 0) mean that the lemma's basic ambiguities are predominantly idiosyncratic, and thus the lemma counts as homonymous. Again, note that we consider basic ambiguities at the type level, and that corpus frequency does not enter into the model. This model of polysemy relies crucially on the distinction between systematic and idiosyncratic basic ambiguities, and therefore in turn on the parameter N . N corresponds to the sharp cutoff that our model assumes. At the N -th most frequent basic ambiguity, polysemy turns into homonymy. Since frequency is our only criterion, we have to lump together all basic ambiguities with the same frequency into 135 bins. If we set N = 0, none of the bins count as polysemous, so \u03c0 0 (w) = 0 for all w -all lemmas are homonymous. In the other extreme, we can set N to 135, the total number of frequency bins, which makes all basic ambiguities polysemous, and thus all lemmas: \u03c0 135 (w) = 1 for all w. The optimization of N will be discussed in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemy as Systematicity", "sec_num": "2.2" }, { "text": "We assign each lemma a polysemy index between 0 and 1. 
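To illustrate how these definitions interact, the following R sketch computes freq(P), P_N and pi_N over a toy three-lemma lexicon that stands in for the full WordNet/CoreLex mapping:

```r
# Toy lemma -> basic types lexicon (stand-in for WordNet/CoreLex)
lexicon <- list(bottle = c("art", "qui"),
                jug    = c("art", "qui"),
                course = c("act", "art", "grs"))

# Variation spectrum P(w): all unordered pairs of a lemma's basic types
spectrum <- function(types) {
  if (length(types) < 2) return(character(0))
  apply(combn(sort(types), 2), 2, paste, collapse = " ")
}

# freq(P): number of lemmas exhibiting each basic ambiguity P
freq <- table(unlist(lapply(lexicon, spectrum)))

# P_N: all basic ambiguities in the N most frequent frequency bins
P_N <- function(N) {
  bins <- sort(unique(as.vector(freq)), decreasing = TRUE)
  names(freq)[freq %in% bins[seq_len(min(N, length(bins)))]]
}

# pi_N(w): share of w's basic ambiguities that count as polysemous
pi_N <- function(w, N) {
  P_w <- spectrum(lexicon[[w]])
  length(intersect(P_w, P_N(N))) / length(P_w)
}

pi_N("bottle", N = 1)   # 1: its only ambiguity {art qui} lies in the top bin
pi_N("course", N = 1)   # 0: none of its three ambiguities do
```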
We thus abandon the dichotomy that is usually made in the literature between two distinct categories of polysemy and homonymy. Instead, we consider polysemy and homonymy the two end points on a gradient, where words in the middle show elements of both. This type of behavior can be seen even for prototypical examples of either category, such as the homonym bank, which shows a variation between SOCIAL GROUP and ARTIFACT:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gradience between Homonymy and Polysemy", "sec_num": "2.3" }, { "text": "(1) a. The bill would force banks [...] to report such property. (grs) b. The coin bank was empty. (art)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gradience between Homonymy and Polysemy", "sec_num": "2.3" }, { "text": "Note that this is the same basic ambiguity that is often cited as a typical example of polysemous sense variation, for example for words like newspaper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gradience between Homonymy and Polysemy", "sec_num": "2.3" }, { "text": "On the other hand, many lemmas which are presumably polysemous show rather unsystematic basic ambiguities. fod, a popular example of a regular and productive sense extension. Yet each of the nouns exhibits additional basic types. The noun chicken also has the highly idiosyncratic meaning of a person who lacks confidence. A lamb can mean a gullible person, salmon is the name of a color and a river, and a duck a score in the game of cricket. There is thus an obvious unsystematic variety in the words' sense variations -a single word can show both homonymic as well as polysemous sense alternation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gradience between Homonymy and Polysemy", "sec_num": "2.3" }, { "text": "To identify an optimal cutoff value N for our polysemy index, we use a simple supervised approach: we optimize the quality with which our polysemy index models a small, manually created dataset. More specifically, we created a two-class, 48-word dataset with 24 homonymous nouns (class hom) and 24 polysemous nouns (class poly) drawn from the literature. The dataset is shown in Table 6 . We now rank these items according to \u03c0 N for different values of N and observe the ability of \u03c0 N to distinguish the two classes. We measure this ability with the Mann-Whitney U test, a nonparametric counterpart of the t-test. 3 In our case, the U statistic is defined as", "cite_spans": [ { "start": 616, "end": 617, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 379, "end": 386, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "U (N ) = m i=1 n j=1 1(\u03c0 N (hom i ) < \u03c0 N (poly i ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "where 1 is the function function that returns the truth value of its argument (1 for \"true\"). Informally, U (N ) counts the number of correctly ranked pairs of a homonymous and a polysemous noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "The maximum for U is the number of item pairs from the classes (24 \u2022 24 = 576). A score of U = 576 would mean that every \u03c0 N -value of a homonym is smaller than every polysemous value. U = 0 means that there are no homonyms with smaller \u03c0-scores. 
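Computationally, U(N) is a count over all pairs of one hom and one poly item; base R's wilcox.test implements the same statistic (up to tie handling). A minimal sketch over hypothetical score vectors:

```r
# Hypothetical pi_N scores for the two classes
pi_hom  <- c(0.2, 0.4, 0.5)
pi_poly <- c(0.6, 0.9, 1.0)

# U: number of correctly ranked (hom, poly) pairs
U <- sum(outer(pi_hom, pi_poly, "<"))
U   # 9 of 3 * 3 = 9 pairs, i.e. perfect separation on this toy data

# The same count via the built-in Mann-Whitney / Wilcoxon rank-sum test
wilcox.test(pi_poly, pi_hom, alternative = "greater")$statistic   # W = 9
```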
So U can be directly interpreted as the quality of separation between the two classes. The null hypothesis of this test is that the ranking is essentially random, i.e., half the rankings are correct 4 . We can reject the null hypothesis if U is significantly larger. Figure 1(a) shows the U -statistic for all values of N (between 0 and 135). The left end shows the quality of separation (i.e. U ) for few basic ambiguities (i.e. small N ) which is very small. As soon as we start considering the most frequent basic ambiguities as systematic and thus as evidence for polysemy, hom and poly become much more distinct. We see a clear global maximum of U for N = 81 (U = 436.5). This U value is highly significant at p < 0.005, which means that even on our fairly small dataset, we can reject the null hypothesis that the ranking is random. \u03c0 81 indeed separates the classes with high confidence: 436.5 of 576 or roughly 75% of all pairwise rankings in the dataset are correct. For N > 81, performance degrades again: apparently these settings include too many basic ambiguities in the \"systematic\" category, and homonymous words start to be misclassified as polysemous.", "cite_spans": [], "ref_spans": [ { "start": 514, "end": 525, "text": "Figure 1(a)", "ref_id": null } ], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "The separation between the two classes is visualized in the box-and-whiskers plot in Figure 1(b) . We find that more than 75% of the polysemous words have \u03c0 81 > .6. The median value for poly is 1, thus for more than half of the class \u03c0 81 = 1, which can be seen in Figure 2(b) as well. This is a very positive result, since our hope is that highly polysemous words get high scores. Figure 2(a) shows that homonyms are concentrated in the mid-range while exhibiting a small number of \u03c0 81 -values at both extremes.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 96, "text": "Figure 1(b)", "ref_id": null }, { "start": 266, "end": 277, "text": "Figure 2(b)", "ref_id": "FIGREF2" }, { "start": 383, "end": 394, "text": "Figure 2(a)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "We take the fact that there is indeed an N which clearly maximizes U as a very positive result that validates our choice of introducing a sharp cutoff between polysemous and idiosyncratic basic ambiguities. These 81 frequency bins contain roughly 20% of the most frequent basic ambiguities. This corresponds to the assumption that basic ambiguities are polysemous if they occur with a minimum of about 50 lemmas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "If we look more closely at those polysemous words that obtain low scores (school, glass and cup), we observe that they also show idiosyncratic variation as discussed in Section 2.3. In the case of school, we have the senses schooltime of type tme and group of fish of type grb which one would not expect to alternate regularly with grs and art, the rest of its variation spectrum. The word glass has the unusual type agt due to its use as a slang term for crystal methamphetamine. Finally, cup is unique in that means both an indefinite quantity as well as the definite measurement equal to half a pint. 
Only 10 other words have this variation in WordNet, including such words as million and billion, which are often used to describe an indefinite but large number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "On the other hand, those homonyms that have a high score (e.g. tie, staff and china) have somewhat unexpected regularities due to obscure senses. Both tie and staff are terms used in musical notation. This leads to basic ambiguities with the com type, something that is very common. Finally, the obviously unrelated senses for china, China and porcelain, are less idiosyncratic when abstracted to their types, log and art, respectively. There are 117 words that can mean a location as well as an artifact, (e.g. fireguard, bath, resort, front, . . . ) which are clearly polysemous in that the location is where the artifact is located.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "In conclusion, those examples which are most grossly miscategorized by \u03c0 81 contain unexpected sense variations, a number of which have been ignored in previous studies. The second evaluation that we propose for our polysemy index concerns a broader question on word sense, namely the so-called one-sense-per-discourse (1spd) hypothesis. This hypothesis was introduced by Gale et al. (1992) and claims that \" [...] if a word such as sentence appears two or more times in a well-written discourse, it is extremely likely that they will all share the same sense\". The authors verified their hypothesis on a small experiment with encouraging results (only 4% of discourses broke the hypothesis). Indeed, if this hypothesis were unreservedly true, then it would represent a very strong global constraint that could serve to improve word sense disambiguation -and in fact, a follow-up paper by Yarowsky (1995) exploited the hypothesis for this benefit. Unfortunately, it seems that 1spd does not apply universally. At the time (1992), WordNet had not yet emerged as a widely used sense inventory, and the sense labels used by Gale et al. were fairly coarse-grained ones, motivated by translation pairs (e.g., English duty translated as French droit (tax) vs. devoir (obligation)), which correspond mostly to homonymous sense distinctions. 5 Current WSD, in contrast, uses the much more fine-grained WordNet sense inventory which conflates homonymous and polysemous sense distinctions. Now, 1spd seems intuitively plausible for homonyms, where the senses describe different entities that are unlikely to occur in the same discourse (or if they do, different words will be used). However, the situation is different for polysemous words: In a discourse about a party, bottle might felicitously occur both as an object and a measure word. A study by Krovetz (1998) confirmed this intuition on two sense-tagged corpora, where he found 33% of discourses to break 1spd. He suggests that knowledge about polysemy classes can be useful as global biases for WSD.", "cite_spans": [ { "start": 372, "end": 390, "text": "Gale et al. 
(1992)", "ref_id": "BIBREF4" }, { "start": 409, "end": 414, "text": "[...]", "ref_id": null }, { "start": 889, "end": 904, "text": "Yarowsky (1995)", "ref_id": "BIBREF13" }, { "start": 1842, "end": 1856, "text": "Krovetz (1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "In this section, we analyze the sense-tagged SemCor corpus in terms of the basic type-based framework of polysemy that we have developed in Section 2 both qualitatively and quantitatively to demonstrate that basic types, and our polysemy index \u03c0, help us better understand the 1spd hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Polysemy Model", "sec_num": "3" }, { "text": "The first step in our analysis looks specifically at the basic types and basic ambiguities we observe in discourses that break 1spd. Our study reanalyses SemCor, a subset of the Brown corpus annotated exhaustively with WordNet senses (Fellbaum, 1998) . SemCor contains a total of 186 discourses, paragraphs of between 645 and 1023 words. These 186 discourses, in combination with 1088 nouns, give rise to 7520 lemma-discourse pairs, that is, cases where a sense-tagged lemma occurs more than once within a discourse. 6 These 7520 lemma-discourse pairs form the basis of our analysis. We started by looking at the relative frequency of 1spd. We found that the hypothesis holds for 69% of the lemma-discourse pairs, but not for the remaining 31%. This is a good match with Krovetz' findings, and indicates that there are many discourses where there lemmas are used in different senses.", "cite_spans": [ { "start": 234, "end": 250, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis by Basic Types and One-Basic-Type-Per-Discourse", "sec_num": "4.1" }, { "text": "In accordance with our approach to modeling meaning variation at the level of basic types, we implemented a \"coarsened\" version of 1spd, namely one-basic-type-per-discourse (1btpd). This hypothesis is parallel to the original, claiming that it is extremely likely that all words in a discourse share the same basic type. As we have argued before, the basic-type level is a fairly good approximation to the most important ontological categories, while smoothing over some of the most fine-grained (and most troublesome) sense distinctions in WordNet. In this vein, 1btpd should get rid of \"spurious\" ambiguity, but preserve meaningful ambiguity, be it homonymous or polysemous. In fact, the basic type with most of these \"within-basic-type\" ambiguities is PSYCHOLOGICAL FEATURE, which contains many subtle distinctions such as the following senses of perception:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis by Basic Types and One-Basic-Type-Per-Discourse", "sec_num": "4.1" }, { "text": "a. a way of conceiving something b. the process of perceiving c. knowledge gained by perceiving d. becoming aware of something via the senses Such distinctions are collapsed in 1btpd. In consequence, we expect a noticeable, but limited, reduction in Table 7 : Most frequent basic ambiguities that break the 1btpd hypothesis in SemCor the cases that break the hypothesis. Indeed, 1btpd holds for 76% of all lemma-discourse pairs, i.e., for 7% more than 1spd. 
For the remainder of this analysis, we will test the 1btpd hypothesis instead of 1spd.", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 257, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analysis by Basic Types and One-Basic-Type-Per-Discourse", "sec_num": "4.1" }, { "text": "The basic type level also provides a good basis to analyze the lemma-discourse pairs where the hypothesis breaks down. Table 7 shows the basic ambiguities that break the hypothesis in SemCor most often. The WordNet frequencies are high throughout, which means that these basic ambiguities are polysemous according to our framework. It is noticeable that the two basic types PSYCHOLOGICAL FEATURE and ACTION participate in almost all of these basic ambiguities. This observation can be explained straightforwardly through polysemous sense extension as sketched above: Actions are associated, among other things, with attributes, states, and communications, and discussion of an action in a discourse can fairly effortlessly switch to these other basic types. A very similar situation applies to psychological features, which are also associated with many of the other categories. In sum, we find that the data bears out our hypothesis: almost all of the most frequent cases of several-basic-types-per-discourse clearly correspond to basic ambiguities that we have classified as polysemous rather than homonymous.", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 126, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analysis by Basic Types and One-Basic-Type-Per-Discourse", "sec_num": "4.1" }, { "text": "This section complements the qualitative analysis of the previous section with a quantitative analysis which predicts specifically for which lemma-discourse pairs 1btpd breaks down. To do so, we fit a logit mixed effects model (Breslow and Clayton, 1993) to the SemCor data. Logit mixed effects models can be seen as a generalization of logistic regression models. They explain a binary response variable y in terms of a set of fixed effects x, but also include a set of random effects x \u2032 . Fixed effects correspond to \"ordinary\" predictors as in traditional logistic regression, while random effects account for correlations in the data introduced by groups (such as items or subjects) without ascribing these random effects the same causal power as fixed effects -see, e.g., Jaeger (2008) for details.", "cite_spans": [ { "start": 227, "end": 254, "text": "(Breslow and Clayton, 1993)", "ref_id": "BIBREF0" }, { "start": 778, "end": 791, "text": "Jaeger (2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis by Regression Modeling", "sec_num": "4.2" }, { "text": "The contribution of each factor is modelled by a coefficient \u03b2, and their sum is interpreted as the logit-transformed probability of a positive outcome for the response variable:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis by Regression Modeling", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y = 1) = 1 1 + e \u2212z with z = \u03b2 i x i + \u03b2 \u2032 j x \u2032 j", "eq_num": "(2)" } ], "section": "Analysis by Regression Modeling", "sec_num": "4.2" }, { "text": "Model estimation is usually performed using numeric approximations. 
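Concretely, such a model can be specified with the lme4 package that we use below. This is only a sketch of the specification: pairs_df and its column names are hypothetical placeholders for a table with one row per lemma-discourse pair.

```r
library(lme4)

# pairs_df (hypothetical): btpd_holds (0/1), n_basic_types, log_doc_length,
# pi81, plus discourse and lemma identifiers
m <- glmer(btpd_holds ~ n_basic_types + log_doc_length + pi81 +
             (1 | discourse) + (1 | lemma),
           data = pairs_df, family = binomial)

summary(m)      # fixed-effect coefficients, each tested against zero
exp(fixef(m))   # log odds -> odds ratios, e.g. exp(-0.50) is roughly 0.61
```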
The coefficients \u03b2 \u2032 of the random effects are drawn from a multivariate normal distribution, centered around 0, which ensures that the majority of random effects are ascribed very small coefficients. From a linguistic perspective, a desirable property of regression models is that they describe the importance of the different effects. First of all, each coefficient can be tested for significant difference to zero, which indicates whether the corresponding effect contributes significantly to modeling the data. Furthermore, the absolute value of each \u03b2 i can be interpreted as the log odds -that is, as the (logarithmized) change in the probability of the response variable being positive depending on x i being positive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis by Regression Modeling", "sec_num": "4.2" }, { "text": "In our experiment, each datapoint corresponds to one of the 7520 lemma-discourse pair from SemCor (cf. Section 4.1). The response variable is binary: whether 1btpd holds for the lemma-discourse pair or not. We include in the model five predictors which we expect to affect the response variable: three fixed effects and two random ones. The first fixed effect is the ambiguity of the lemma as measured by the Table 8 : Logit mixed effects model for the response variable \"one-basic-type-per-discourse (1btpd) holds\" (SemCor; random effects: discourse and lemma; significances: -: p > 0.05; ***: p < 0.001) number of its basic types, i.e. the size of its variation spectrum. We expect that the more ambiguous a noun, the smaller the chance for 1btpd. We expect the same effect for the (logarithmized) length of the discourse in words: longer discourses run a higher risk for violating the hypothesis. Our third fixed effect is the polysemy index \u03c0 81 , for which we also expect a negative effect. The two random effects are the identity of the discourse and the noun. Both of these can influence the outcome, but should not be used as full explanatory variables. We build the model in the R statistical environment, using the lme4 7 package. The main results are shown in Table 8 . We find that the number of basic types has a highly significant negative effect on the 1btpd hypothesis (p < 0.001) . Each additional basic type lowers the odds for the hypothesis by a factor of e \u22120.50 \u2248 0.61. The confidence interval is small; the effect is very consistent. This was to be expectedit would have been highly suspicious if we had not found this basic frequency effect. Our expectations are not met for the discourse length predictor, though. We expected a negative coefficient, but find a positive one. The size of the confidence interval shows the effect to be insignificant. Thus, we have to assume that there is no significant relationship between the length of the discourse and the 1btpd hypothesis. Note that this outcome might result from the limited variation of discourse lengths in SemCor: recall that no discourse contains less than 645 or more than 1023 words.", "cite_spans": [], "ref_spans": [ { "start": 409, "end": 416, "text": "Table 8", "ref_id": null }, { "start": 1271, "end": 1278, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Analysis by Regression Modeling", "sec_num": "4.2" }, { "text": "However, we find a second highly significant negative effect (p < 0.001) in our polysemy index \u03c0 81 . 
With a coefficient of -0.91, this means that a word with a polysemy index of 1 is only 40% as likely to preserve 1btpd than a word with a polysemy index of 0. The confidence interval is larger than for the number of basic types, but still fairly small. To bolster this finding, we estimated a second mixed effects model which was identical to the first one but did not contain \u03c0 81 as predictor. We tested the difference between the models with a likelihood ratio test and found that the model that includes \u03c0 81 is highly preferred (p < 0.0001; D = \u22122\u2206LL = 40; df = 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis by Regression Modeling", "sec_num": "4.2" }, { "text": "These findings establish that our polysemy index \u03c0 can indeed serve a purpose beyond the direct modeling of polysemy vs. homonymy, namely to explain the distribution of word senses in discourse better than obvious predictors like the overall ambiguity of the word and the length of the discourse can. This further validates the polysemy index as a contribution to the study of the behavior of word senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis by Regression Modeling", "sec_num": "4.2" }, { "text": "In this paper, we have approached the problem of distinguishing empirically two different kinds of word sense ambiguity, namely homonymy and polysemy. To avoid sparse data problems inherent in corpus work on sense distributions, our framework is based on WordNet, augmented with the ontological categories provided by the CoreLex lexicon. We first classify the basic ambiguities (i.e., the pairs of ontological categories) shown by a lemma as either polysemous or homonymous, and then assign the ratio of polysemous basic ambiguities to each word as its polysemy index.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We have evaluated this framework on two tasks. The first was distinguishing polysemous from homonymous lemmas on the basis of their polysemy index, where it gets 76% of all pairwise rankings correct. We also used this task to identify an optimal value for the threshold between polysemous and homonymous basic ambiguities. We located it at around 20% of all basic ambiguities (113 of 663 in the top 81 frequency bins), which apparently corresponds to human intuitions. The second task was an analysis of the one-sense-per-discourse heuristic, which showed that this hypothesis breaks down frequently in the face of polysemy, and that the polysemy index can be used within a regression model to predict the instances within a discourse where this happens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "It may seem strange that our continuous index assumes a gradient between homonymy and polysemy. Our analyses indicate that on the level of actual examples, the two classes are indeed not separated by a clear boundary: many words contain basic ambiguities of either type. Nevertheless, even in the linguistic literature, words are often considered as either polysemous or homonymous. Our interpretation of this contradiction is that some basic types (or some basic ambiguities) are more prominent than others. The present study has ignored this level, modeling the polysemy index simply on the ratio of polysemous patterns without any weighting. In future work, we will investigate human judgments of polysemy vs. 
homonymy more closely, and assess other correlates of these judgments (e.g., corpus counts).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "A second area of future work is more practical. The logistic regression incorporating our polysemous index predicts, for each lemma-discourse pair, the probability that the one-sense-per-discourse hypothesis is violated. We will use this information as a global prior on an \"all-words\" WSD task, where all occurrences of a word in a discourse need to be disambiguated. Finally, Stokoe (2005) demonstrates the chances for improvement in information retrieval systems if we can reliably distinguish between homonymous and polysemous senses of a word.", "cite_spans": [ { "start": 378, "end": 391, "text": "Stokoe (2005)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Note that not all of CoreLex anchor nodes are disjoint; therefore a given WordNet synset may be dominated by two CoreLex anchor nodes. We assign each synset to the basic type corresponding to the most specific dominating anchor node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that this is strictly a type-based notion of frequency: corpus (token) frequencies do not enter into our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The advantage of U over t is that t assumes comparable variance in the two samples, which we cannot guarantee.4 Provided that, like in this case, the classes are of equal size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that Gale et al. use the term \"polysemy\" synonymously with \"ambiguous\".6 We exclude cases where a lemma occurs once in a discourse, since 1spd holds trivially.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://cran.r-project.org/web/packages/lme4/index.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Approximate inference in generalized linear mixed models", "authors": [ { "first": "N", "middle": [], "last": "Breslow", "suffix": "" }, { "first": "D", "middle": [], "last": "Clayton", "suffix": "" } ], "year": 1993, "venue": "Journal of the American Statistical Society", "volume": "88", "issue": "421", "pages": "9--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Breslow, N. and D. Clayton (1993). Approximate inference in generalized linear mixed models. Journal of the American Statistical Society 88(421), 9-25.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "CoreLex: An ontology of systematic polysemous classes", "authors": [ { "first": "P", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 1998, "venue": "Proceedings of FOIS", "volume": "", "issue": "", "pages": "221--235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Buitelaar, P. (1998). CoreLex: An ontology of systematic polysemous classes. In Proceedings of FOIS, Amsterdam, Netherlands, pp. 
221-235.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semi-productive polysemy and sense extension", "authors": [ { "first": "A", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "T", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 1995, "venue": "Journal of Semantics", "volume": "12", "issue": "", "pages": "15--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Copestake, A. and T. Briscoe (1995). Semi-productive polysemy and sense extension. Journal of Semantics 12, 15-67.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. (1998). WordNet: An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "One sense per discourse", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1992, "venue": "Proceedings of HLT", "volume": "", "issue": "", "pages": "233--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, W. A., K. W. Church, and D. Yarowsky (1992). One sense per discourse. In Proceedings of HLT, Harriman, NY, pp. 233-237.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Making sense about sense", "authors": [ { "first": "N", "middle": [], "last": "Ide", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 2006, "venue": "Word Sense Disambiguation: Algorithms and Applications", "volume": "", "issue": "", "pages": "47--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ide, N. and Y. Wilks (2006). Making sense about sense. In E. Agirre and P. Edmonds (Eds.), Word Sense Disambiguation: Algorithms and Applications, pp. 47-74. Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Categorical data analysis: Away from ANOVAs and toward Logit Mixed Models", "authors": [ { "first": "T", "middle": [], "last": "Jaeger", "suffix": "" } ], "year": 2008, "venue": "Journal of Memory and Language", "volume": "59", "issue": "4", "pages": "434--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaeger, T. (2008). Categorical data analysis: Away from ANOVAs and toward Logit Mixed Models. Journal of Memory and Language 59(4), 434-446.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "More than one sense per discourse", "authors": [ { "first": "R", "middle": [], "last": "Krovetz", "suffix": "" } ], "year": 1998, "venue": "Proceedings of SENSEVAL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krovetz, R. (1998). More than one sense per discourse. In Proceedings of SENSEVAL, Herstmonceux Castle, England.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Word Sense Disambiguation: a survey", "authors": [ { "first": "R", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2009, "venue": "ACM Computing Surveys", "volume": "41", "issue": "2", "pages": "1--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Navigli, R. (2009). Word Sense Disambiguation: a survey. 
ACM Computing Surveys 41(2), 1-69.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Transfers of meaning", "authors": [ { "first": "G", "middle": [], "last": "Nunberg", "suffix": "" } ], "year": 1995, "venue": "Journal of Semantics", "volume": "12", "issue": "2", "pages": "109--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nunberg, G. (1995). Transfers of meaning. Journal of Semantics 12(2), 109-132.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Systematic polysemy in lexicology and lexicography", "authors": [ { "first": "G", "middle": [], "last": "Nunberg", "suffix": "" }, { "first": "A", "middle": [], "last": "Zaenen", "suffix": "" } ], "year": 1992, "venue": "Proceedings of Euralex II", "volume": "", "issue": "", "pages": "387--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nunberg, G. and A. Zaenen (1992). Systematic polysemy in lexicology and lexicography. In Proceedings of Euralex II, Tampere, Finland, pp. 387-395.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Generative Lexicon", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, J. (1995). The Generative Lexicon. Cambridge MA: MIT Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Differentiating homonymy and polysemy in information retrieval", "authors": [ { "first": "C", "middle": [], "last": "Stokoe", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in NLP", "volume": "", "issue": "", "pages": "403--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stokoe, C. (2005). Differentiating homonymy and polysemy in information retrieval. In Proceedings of the conference on Human Language Technology and Empirical Methods in NLP, Morristown, NJ, pp. 403-410.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky, D. (1995). Unsupervised word sense disambiguation rivaling supervised methods. In Proceed- ings of ACL, Cambridge, MA, pp. 189-196.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "of \u03c081 values by class Figure 1: Separation of the hom and poly classes in our dataset", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Words and their \u03c0 81 -scores 4 The One-Sense-Per-Discourse Hypothesis", "type_str": "figure", "uris": null }, "TABREF1": { "html": null, "text": "", "type_str": "table", "content": "
: The 39 CoreLex basic types (BTs) and their WordNet anchor nodes
Basic type | WordNet anchor | Examples
agt | AGENT | driver, menace, power, proxy, . . .
grs | SOCIAL GROUP | city, government, people, state, . . .
pho | PHENOMENON | life, pressure, trade, work, . . .
pos | POSSESSION | figure, land, money, right, . . .
qui | INDEFINITE QUANTITY | bit, glass, lot, step, . . .
rel | RELATION | function, part, position, series, . . .
", "num": null }, "TABREF2": { "html": null, "text": "Basic types with example words", "type_str": "table", "content": "
Pattern (Basic types) | Examples
ANIMAL, FOOD | fowl, hare, lobster, octopus, snail, . . .
ARTIFACT, INDEFINITE QUANTITY | bottle, jug, keg, spoon, tub, . . .
ARTIFACT, SOCIAL GROUP | academy, embassy, headquarters, . . .
", "num": null }, "TABREF3": { "html": null, "text": "Examples of polysemous meaning variation patterns", "type_str": "table", "content": "", "num": null }, "TABREF4": { "html": null, "text": "Basic ambiguityExamples {act com} construction, consultation, draft, estimation, refusal, . . . {act art} press, review, staging, tackle, . . . {com hum} egyptian, esquimau, kazakh, mojave, thai, . . . {act sta} domination, excitement, failure, marriage, matrimony, . . . {art hum} dip, driver, mouth, pawn, watch, wing, . . .", "type_str": "table", "content": "
", "num": null }, "TABREF5": { "html": null, "text": "Top five basic ambiguities with example lemmas", "type_str": "table", "content": "
Noun | Basic types | Noun | Basic types
chicken | anm fod evt hum | lamb | anm fod hum
salmon | anm fod atr nat | duck | anm fod art qud
", "num": null }, "TABREF6": { "html": null, "text": "", "type_str": "table", "content": "", "num": null }, "TABREF7": { "html": null, "text": "shows four lemmas which are instances of the meaning variation between ANIMAL Homonymous nouns ball,bank, board, chapter, china, degree, fall, fame, plane, plant, pole, post, present, rest, score, sentence, spring, staff, stage, table, term, tie, tip, tongue Polysemous nouns bottle, chicken, church, classification, construction, cup, development, fish, glass, improvement, increase, instruction, judgment, lamb, management, newspaper, painting, paper, picture, pool, school, state, story, university", "type_str": "table", "content": "
", "num": null }, "TABREF8": { "html": null, "text": "Experimental items for the two classes hom and poly (anm) and FOOD", "type_str": "table", "content": "
", "num": null }, "TABREF9": { "html": null, "text": "Basic ambiguity most common breaking words freq(P breaks 1btpd) freq(P ) N {com psy} evidence, sense, literature, meaning, style, . . .", "type_str": "table", "content": "
Basic ambiguity | most common breaking words | freq(P breaks 1btpd) | freq(P) | N
{com psy} | evidence, sense, literature, meaning, style, . . . | 89 | 365 | 13
{act psy} | study, education, pattern, attention, process, . . . | 88 | 588 | 7
{psy sta} | need, feeling, difficulty, hope, fact, . . . | 79 | 338 | 14
{act atr} | role, look, influence, assistance, interest, . . . | 79 | 491 | 9
{act art} | church, way, case, thing, design, . . . | 67 | 753 | 2
{act sta} | operation, interest, trouble, employment, absence, . . . | 60 | 615 | 4
{act com} | thing, art, production, music, literature, . . . | 59 | 755 | 1
{atr sta} | life, level, desire, area, unity, . . . | 58 | 594 | 6
", "num": null } } } }