{ "paper_id": "W11-0149", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:39:05.891821Z" }, "title": "Semantic Relatedness from Automatically Generated Semantic Networks", "authors": [ { "first": "Pia-Ramona", "middle": [], "last": "Wojtinnek", "suffix": "", "affiliation": { "laboratory": "", "institution": "Oxford University Computing Laboratory", "location": {} }, "email": "pia-ramona.wojtinnek@comlab.ox.ac.uk" }, { "first": "Stephen", "middle": [], "last": "Pulman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Oxford University Computing Laboratory", "location": {} }, "email": "stephen.pulman@comlab.ox.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce a novel approach to measuring semantic relatedness of terms based on an automatically generated, large-scale semantic network. We present promising first results that indicate potential competitiveness with approaches based on manually created resources.", "pdf_parse": { "paper_id": "W11-0149", "_pdf_hash": "", "abstract": [ { "text": "We introduce a novel approach to measuring semantic relatedness of terms based on an automatically generated, large-scale semantic network. We present promising first results that indicate potential competitiveness with approaches based on manually created resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The quantification of semantic similarity and relatedness of terms is an important problem of lexical semantics. Its applications include word sense disambiguation, text summarization and information retrieval (Budanitsky and Hirst, 2006) . Most approaches to measuring semantic relatedness fall into one of two categories. 
They either look at distributional properties based on corpora (Finkelstein et al., 2002; Agirre et al., 2009) or make use of pre-existing knowledge resources such as WordNet or Roget's Thesaurus (Hughes and Ramage, 2007; Jarmasz, 2003) . The latter approaches achieve good results, but they are inherently restricted in coverage and domain adaptation due to their reliance on costly manual acquisition of the resource. In addition, those methods that are based on hierarchical, taxonomically structured resources are generally better suited for measuring semantic similarity than relatedness (Budanitsky and Hirst, 2006) . In this paper, we introduce a novel technique that measures semantic relatedness based on an automatically generated semantic network. Terms are compared by the similarity of their contexts in the semantic network. We present promising initial results of this work in progress, which indicate the potential to compete with resource-based approaches while performing well on both semantic similarity and relatedness.", "cite_spans": [ { "start": 210, "end": 238, "text": "(Budanitsky and Hirst, 2006)", "ref_id": "BIBREF2" }, { "start": 387, "end": 413, "text": "(Finkelstein et al., 2002;", "ref_id": "BIBREF4" }, { "start": 414, "end": 434, "text": "Agirre et al., 2009)", "ref_id": "BIBREF0" }, { "start": 520, "end": 545, "text": "(Hughes and Ramage, 2007;", "ref_id": "BIBREF8" }, { "start": 546, "end": 560, "text": "Jarmasz, 2003)", "ref_id": "BIBREF9" }, { "start": 917, "end": 945, "text": "(Budanitsky and Hirst, 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our approach to measuring semantic relatedness, we first automatically build a large semantic network from text and then measure the similarity of two terms by the similarity of the local networks around their corresponding nodes. 
The semantic network serves as a structured representation of the concepts, relations and attributes occurring in the text. It is built by translating every sentence in the text into a network fragment based on semantic analysis and then merging these networks into a large network by mapping all occurrences of the same term into one node. Figure 1 (a) contains a sample text snippet and the network derived from it. In this way, concepts are connected across sentences and documents, resulting in a high-level view of the information contained.", "cite_spans": [], "ref_spans": [ { "start": 575, "end": 583, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Similarity and Relatedness from semantic networks", "sec_num": "2" }, { "text": "Our underlying assumption for measuring semantic relatedness is that semantically related nodes are connected to a similar set of nodes. In other words, we consider the context of a node in the network as a representation of its meaning. In contrast to standard approaches which look only at a type of context directly found in the text, e.g. words that occur within a certain window from the target word, our network-based context takes into account indirect connections between concepts. For example, in the text underlying the network in Fig. 2 , dissertation and module rarely co-occurred in a sentence, but the network shows a strong connection over student as well as over credit and work.", "cite_spans": [], "ref_spans": [ { "start": 541, "end": 547, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Similarity and Relatedness from semantic networks", "sec_num": "2" }, { "text": "We build the network incrementally by parsing every sentence, translating it into a small network fragment and then mapping that fragment onto the main network generated from all previous sentences. 
Our translation of sentences from text to network is based on the one used in the ASKNet system (Harrington and Clark, 2007) . It makes use of two NLP tools, the Clark and Curran parser (Clark and Curran, 2004) and the semantic analysis tool Boxer (Bos et al., 2004) , both of which are part of the C&C Toolkit 1 . The parser is based on Combinatory Categorial Grammar (CCG) and has been trained on 40,000 manually annotated sentences of the WSJ. It is both robust and efficient. Boxer is designed to convert the CCG parsed text into a logical representation based on Discourse Representation Theory (DRT). This intermediate logical-form representation abstracts away from syntactic details to core semantic information. For example, the syntactic forms progress of student and student's progress have the same Boxer representation, as do the student who attends the lecture and the student attending the lecture. In addition, Boxer provides some elementary co-reference resolution.", "cite_spans": [ { "start": 295, "end": 323, "text": "(Harrington and Clark, 2007)", "ref_id": "BIBREF7" }, { "start": 385, "end": 409, "text": "(Clark and Curran, 2004)", "ref_id": "BIBREF3" }, { "start": 447, "end": 465, "text": "(Bos et al., 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "The Network Structure", "sec_num": "2.1" }, { "text": "The translation from the Boxer output into a network is straightforward and an example is given in Figure 1 (b). The network structure distinguishes between object nodes (rectangular), relational nodes (diamonds) and attributes (rounded rectangles) and different types of links such as subject or object links.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Network Structure", "sec_num": "2.1" }, { "text": "Students select modules from the published list and write a dissertation. 
Modules usually provide 15 credits each, but 30 credits are awarded for the dissertation. The student must discuss the topic of the final dissertation with their appointed tutor. The large unified network is then built by merging every occurrence of a concept (e.g. object node) into one node, thus accumulating the information on this concept. In the second example ( Figure ??) , the lecture node would be merged with occurrences of lecture in other sentences. Figure 2 gives a subset of a network generated from a few paragraphs taken from Oxford Student Handbooks. Multiple occurrences of the same relation between two object nodes are drawn as overlapping.", "cite_spans": [], "ref_spans": [ { "start": 443, "end": 453, "text": "Figure ??)", "ref_id": null }, { "start": 537, "end": 545, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Network Structure", "sec_num": "2.1" }, { "text": "We measure the semantic relatedness of two concepts by measuring the similarity of the surroundings of their corresponding nodes in the network. Semantically related terms are then expected to be connected to a similar set of nodes. We retrieve the network context of a specific node and determine the level of significance of each node in the context using spreading activation 2 . The target node is given an initial activation of a_x = 10 * numberOfLinks(x) and is fired so that the activation spreads over its out- and ingoing links to the surrounding nodes. They in turn fire if their received activation level exceeds a certain threshold. The activation attenuates by a constant factor in every step and a stable state is reached when no node in the network can fire anymore. 
In this way, the context nodes receive different levels of activation reflecting their significance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Vector Space Model", "sec_num": "2.2" }, { "text": "We derive a vector representation v(x) of the network context of x including only object nodes and their activation levels. The entries are", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Vector Space Model", "sec_num": "2.2" }, { "text": "v_i(x) = act_{x, a_x}(n_i), for n_i \u2208 {n \u2208 nodes | type(n) = object node}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Vector Space Model", "sec_num": "2.2" }, { "text": "The semantic relatedness of two target words is then measured by the cosine similarity of their context vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Vector Space Model", "sec_num": "2.2" }, { "text": "sim_rel(x, y) = cos(v(x), v(y)) = (v(x) \u2022 v(y)) / (||v(x)|| ||v(y)||)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Vector Space Model", "sec_num": "2.2" }, { "text": "As spreading activation takes several factors into account, such as number of paths, length of paths, level of density and number of connections, this method leverages the full interconnected structure of the network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Vector Space Model", "sec_num": "2.2" }, { "text": "We evaluate our approach on the WordSimilarity-353 (Finkelstein et al., 2002) test collection, which is a commonly used gold standard for the semantic relatedness task. It provides average human judgment scores of the degree of relatedness for 353 word pairs. The collection contains classically similar word Approach Spearman (Strube and Ponzetto, 2006) Wikipedia 0.19-0.48 (Jarmasz, 2003) Roget's 0.55 (Hughes and Ramage, 2007) WordNet 0.55 (Agirre et al., 2009) WordNet 0.56 (Finkelstein et al., 2002) Web corpus, LSA 0.56 (Harrington, 2010) Sem. 
Network 0.62 (Agirre et al., 2009) WordNet+gloss 0.66 (Agirre et al., 2009) Web corpus 0.66 (Gabrilovich and Markovitch, 2007) pairs such as street-avenue and topically related pairs such as hotel-reservation. However, no distinction was made while judging, and the instruction was to rate the general degree of semantic relatedness.", "cite_spans": [ { "start": 51, "end": 77, "text": "(Finkelstein et al., 2002)", "ref_id": "BIBREF4" }, { "start": 328, "end": 355, "text": "(Strube and Ponzetto, 2006)", "ref_id": "BIBREF10" }, { "start": 376, "end": 391, "text": "(Jarmasz, 2003)", "ref_id": "BIBREF9" }, { "start": 405, "end": 430, "text": "(Hughes and Ramage, 2007)", "ref_id": "BIBREF8" }, { "start": 444, "end": 465, "text": "(Agirre et al., 2009)", "ref_id": "BIBREF0" }, { "start": 479, "end": 505, "text": "(Finkelstein et al., 2002)", "ref_id": "BIBREF4" }, { "start": 527, "end": 545, "text": "(Harrington, 2010)", "ref_id": "BIBREF6" }, { "start": 564, "end": 585, "text": "(Agirre et al., 2009)", "ref_id": "BIBREF0" }, { "start": 605, "end": 626, "text": "(Agirre et al., 2009)", "ref_id": "BIBREF0" }, { "start": 643, "end": 677, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "As a corpus we chose the British National Corpus (BNC) 3 . It is one of the largest standardized English corpora and contains approximately 5.9 million sentences. Choosing this text collection enables us to build a general-purpose network that is not specifically created for the considered word pairs and ensures a realistic overall connectedness of the network as well as broad coverage. In this paper we created a network from 2 million sentences of the BNC. It contains 27.5 million nodes, out of which 635,000 are object nodes and the rest are relation and attribute nodes. 
The building time including parsing was approximately 4 days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "Following the common practice in related work, we compared our scores to the human judgements using the Spearman rank-order correlation coefficient. The results can be found in Table 1 (a) with a comparison to previous results on the WordSimilarity-353 collection.", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "Our first result over all word pairs is relatively low compared to the currently best-performing systems. However, we noticed that many poorly rated word pairs contained at least one word with low frequency. Excluding these considerably improved the result to 0.50. On this reduced set of word pairs our scores are in the region of approaches which make use of the Wikipedia category network, the WordNet taxonomic relations or Roget's thesaurus. This is a promising result as it indicates that our approach based on automatically generated networks has the potential to compete with those using manually created resources if we increase the corpus size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "While our results are not competitive with the best corpus-based methods, we can note that our current corpus is an order of magnitude smaller -2 million sentences versus 1 million full Wikipedia articles (Gabrilovich and Markovitch, 2007) or 215MB versus 1.6 Terabyte (Agirre et al., 2009) . 
The extent to which corpus size influences our results is subject to further research.", "cite_spans": [ { "start": 205, "end": 239, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF5" }, { "start": 269, "end": 290, "text": "(Agirre et al., 2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "We also evaluated our scores separately on the semantically similar versus the semantically related subsets of WordSim-353 following Agirre et al. (2009) (Table 1(b) ). Taking the same low-frequency cut as above, we can see that our approach performs equally well on both sets. This is remarkable as different methods tend to be more appropriate to calculate either one or the other (Agirre et al., 2009) . In particular, WordNet-based measures are well known to be better suited to measure similarity than relatedness due to its hierarchical, taxonomic structure (Budanitsky and Hirst, 2006) . The fact that our system achieves equal results on both subsets indicates that it matches human judgement of semantic relatedness beyond specific types of relations. This could be due to the associative structure of the network.", "cite_spans": [ { "start": 133, "end": 153, "text": "Agirre et al. (2009)", "ref_id": "BIBREF0" }, { "start": 383, "end": 404, "text": "(Agirre et al., 2009)", "ref_id": "BIBREF0" }, { "start": 564, "end": 592, "text": "(Budanitsky and Hirst, 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 154, "end": 165, "text": "(Table 1(b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "Our approach is closely related to Harrington (2010) as our networks are built in a similar fashion and we also use spreading activation to measure semantic relatedness. In their approach, semantic relatedness of two terms a and b is measured by the activation b receives when a is fired. 
The core difference between this measure and ours is that theirs is path-based while ours is context-based. In addition, the corpus used was retrieved specifically for the word pairs in question while ours is a general-purpose corpus.", "cite_spans": [ { "start": 35, "end": 52, "text": "Harrington (2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In addition, our approach is related to work that uses personalized PageRank or Random Walks on WordNet (Agirre et al., 2009; Hughes and Ramage, 2007) . Similar to the spreading activation method presented here, personalized PageRank and Random Walks are used to provide a distribution over the nodes surrounding the target word reflecting their relevance to its meaning. In contrast to the approaches based on resources, our network is automatically built and therefore does not rely on costly, manual creation. In addition, compared to WordNet-based measures, our method is potentially not biased towards similarity when measuring relatedness.", "cite_spans": [ { "start": 104, "end": 125, "text": "(Agirre et al., 2009;", "ref_id": "BIBREF0" }, { "start": 126, "end": 150, "text": "Hughes and Ramage, 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "We presented a novel approach to measuring semantic relatedness which first builds a large-scale semantic network and then determines the relatedness of nodes by the similarity of their surrounding local network. Our preliminary results of this ongoing work are promising and are in the region of several WordNet and Wikipedia link-structure approaches. As future work, there are several avenues of improvement we are going to investigate. Firstly, the results in Section 3 show the crucial influence of corpus size and occurrence frequency on the performance of our system. We will be experimenting with larger general networks (e.g. 
the whole BNC) as well as integration of retrieved documents for the low-frequency terms. Secondly, the parameters and specific settings for the spreading activation algorithm need to be tuned. For example, the amount of initial activation of the target node determines the size of the context considered. Thirdly, we will investigate different vector representation variants. In particular, we can achieve a more fine-grained representation by also considering relation nodes in addition to object nodes. We believe that with these improvements our automatic semantic network approach will be able to compete with techniques based on manually created resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Outlook", "sec_num": "5" }, { "text": "http://svn.ask.it.usyd.edu.au/trac/candc", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The spreading activation algorithm is based on Harrington (2010)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.natcorp.ox.ac.uk/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A study on similarity and relatedness using distributional and wordnet-based approaches", "authors": [ { "first": "E", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "E", "middle": [], "last": "Alfonseca", "suffix": "" }, { "first": "K", "middle": [], "last": "Hall", "suffix": "" }, { "first": "J", "middle": [], "last": "Kravalova", "suffix": "" }, { "first": "M", "middle": [], "last": "Pa\u015fca", "suffix": "" }, { "first": "A", "middle": [], "last": "Soroa", "suffix": "" } ], "year": 2009, "venue": "NAACL '09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agirre, E., E. Alfonseca, K. Hall, J. Kravalova, M. Pa\u015fca, and A. Soroa (2009). 
A study on similarity and relatedness using distributional and wordnet-based approaches. In NAACL '09.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Wide-coverage semantic representations from a ccg parser", "authors": [ { "first": "J", "middle": [], "last": "Bos", "suffix": "" }, { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "M", "middle": [], "last": "Steedman", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "J", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2004, "venue": "COLING'04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bos, J., S. Clark, M. Steedman, J. R. Curran, and J. Hockenmaier (2004). Wide-coverage semantic representations from a ccg parser. In COLING'04.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluating wordnet-based measures of lexical semantic relatedness", "authors": [ { "first": "A", "middle": [], "last": "Budanitsky", "suffix": "" }, { "first": "G", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "1", "pages": "13--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Budanitsky, A. and G. Hirst (2006). Evaluating wordnet-based measures of lexical semantic relatedness. Computational Linguistics 32(1), 13-47.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Parsing the wsj using ccg and log-linear models", "authors": [ { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2004, "venue": "ACL'04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, S. and J. R. Curran (2004). Parsing the wsj using ccg and log-linear models. 
In ACL'04.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Placing search in context: the concept revisited", "authors": [ { "first": "L", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "E", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matias", "suffix": "" }, { "first": "E", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Z", "middle": [], "last": "Solan", "suffix": "" }, { "first": "G", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "E", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2002, "venue": "ACM Trans. Inf. Syst", "volume": "20", "issue": "1", "pages": "116--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finkelstein, L., E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin (2002). Placing search in context: the concept revisited. ACM Trans. Inf. Syst. 20(1), 116-131.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Computing semantic relatedness using wikipedia-based explicit semantic analysis", "authors": [ { "first": "E", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "S", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabrilovich, E. and S. Markovitch (2007). Computing semantic relatedness using wikipedia-based explicit semantic analysis. In IJCAI'07.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A semantic network approach to measuring semantic relatedness", "authors": [ { "first": "B", "middle": [], "last": "Harrington", "suffix": "" } ], "year": 2010, "venue": "COLING'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harrington, B. (2010). A semantic network approach to measuring semantic relatedness. 
In COLING'10.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Asknet: automated semantic knowledge network", "authors": [ { "first": "B", "middle": [], "last": "Harrington", "suffix": "" }, { "first": "S", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2007, "venue": "AAAI'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harrington, B. and S. Clark (2007). Asknet: automated semantic knowledge network. In AAAI'07.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lexical semantic relatedness with random graph walks", "authors": [ { "first": "T", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "D", "middle": [], "last": "Ramage", "suffix": "" } ], "year": 2007, "venue": "EMNLP-CoNLL'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hughes, T. and D. Ramage (2007). Lexical semantic relatedness with random graph walks. In EMNLP-CoNLL'07.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Roget's thesaurus as a lexical resource for natural language processing", "authors": [ { "first": "M", "middle": [], "last": "Jarmasz", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jarmasz, M. (2003). Roget's thesaurus as a lexical resource for natural language processing. Master's thesis, University of Ottawa.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Wikirelate! computing semantic relatedness using wikipedia", "authors": [ { "first": "M", "middle": [], "last": "Strube", "suffix": "" }, { "first": "S", "middle": [ "P" ], "last": "Ponzetto", "suffix": "" } ], "year": 2006, "venue": "AAAI'06", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strube, M. and S. P. Ponzetto (2006). Wikirelate! computing semantic relatedness using wikipedia. 
In AAAI'06.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "(a) Sample text snippet and according network representation. (b) Example of translation from text to network over Boxer semantic analysis", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Subgraph displaying selected concepts and relations from sample network.", "type_str": "figure", "num": null }, "TABREF1": { "type_str": "table", "html": null, "num": null, "text": "", "content": "
: (a) Spearman ranking correlation coefficient results for our approach and comparison with |
previous approaches. (b) Separate results for similarity and relatedness subset. |