{
"paper_id": "W11-0149",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:39:05.891821Z"
},
"title": "Semantic Relatedness from Automatically Generated Semantic Networks",
"authors": [
{
"first": "Pia-Ramona",
"middle": [],
"last": "Wojtinnek",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oxford University Computing Laboratory",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oxford University Computing Laboratory",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce a novel approach to measuring semantic relatedness of terms based on an automatically generated, large-scale semantic network. We present promising first results that indicate potential competitiveness with approaches based on manually created resources.",
"pdf_parse": {
"paper_id": "W11-0149",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce a novel approach to measuring semantic relatedness of terms based on an automatically generated, large-scale semantic network. We present promising first results that indicate potential competitiveness with approaches based on manually created resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The quantification of semantic similarity and relatedness of terms is an important problem of lexical semantics. Its applications include word sense disambiguation, text summarization and information retrieval (Budanitsky and Hirst, 2006) . Most approaches to measuring semantic relatedness fall into one of two categories. They either look at distributional properties based on corpora (Finkelstein et al., 2002; Agirre et al., 2009) or make use of pre-existing knowledge resources such as WordNet or Roget's Thesaurus (Hughes and Ramage, 2007; Jarmasz, 2003) . The latter approaches achieve good results, but they are inherently restricted in coverage and domain adaptation due to their reliance on costly manual acquisition of the resource. In addition, those methods that are based on hierarchical, taxonomically structured resources are generally better suited for measuring semantic similarity than relatedness (Budanitsky and Hirst, 2006) . In this paper, we introduce a novel technique that measures semantic relatedness based on an automatically generated semantic network. Terms are compared by the similarity of their contexts in the semantic network. We present our promising initial results of this work in progress, which indicate the potential to compete with resource-based approaches while performing well on both semantic similarity and relatedness.",
"cite_spans": [
{
"start": 210,
"end": 238,
"text": "(Budanitsky and Hirst, 2006)",
"ref_id": "BIBREF2"
},
{
"start": 387,
"end": 413,
"text": "(Finkelstein et al., 2002;",
"ref_id": "BIBREF4"
},
{
"start": 414,
"end": 434,
"text": "Agirre et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 520,
"end": 545,
"text": "(Hughes and Ramage, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 546,
"end": 560,
"text": "Jarmasz, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 917,
"end": 945,
"text": "(Budanitsky and Hirst, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our approach to measuring semantic relatedness, we first automatically build a large semantic network from text and then measure the similarity of two terms by the similarity of the local networks around their corresponding nodes. The semantic network serves as a structured representation of the occurring concepts, relations and attributes in the text. It is built by translating every sentence in the text into a network fragment based on semantic analysis and then merging these networks into a large network by mapping all occurrences of the same term into one node. Figure 1 (a) contains a sample text snippet and the network derived from it. In this way, concepts are connected across sentences and documents, resulting in a high-level view of the information contained.",
"cite_spans": [],
"ref_spans": [
{
"start": 575,
"end": 583,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Similarity and Relatedness from semantic networks",
"sec_num": "2"
},
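The merging step described above can be sketched in Python. The triple representation, node names, and the symmetric adjacency list are illustrative assumptions for this sketch, not the actual ASKNet data structures or Boxer output.

```python
from collections import defaultdict

def merge_fragments(fragments):
    """Merge per-sentence network fragments into one large network.

    Each fragment is a list of (head, relation, dependent) triples;
    every occurrence of the same term maps onto a single node, so
    concepts become connected across sentences and documents.
    """
    network = defaultdict(list)  # node -> [(relation, neighbour), ...]
    for triples in fragments:
        for head, rel, dep in triples:
            network[head].append((rel, dep))
            network[dep].append((rel, head))  # keep edges traversable both ways
    return network

# Two sentence fragments sharing the node "module" (toy triples,
# not actual Boxer output):
fragments = [
    [("student", "select", "module"), ("student", "write", "dissertation")],
    [("module", "provide", "credit")],
]
net = merge_fragments(fragments)
```

Because both fragments mention "module", the merged graph links "student" to "credit" through it, giving the cross-sentence connections the text describes.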
{
"text": "Our underlying assumption for measuring semantic relatedness is that semantically related nodes are connected to a similar set of nodes. In other words, we consider the context of a node in the network as a representation of its meaning. In contrast to standard approaches which look only at a type of context directly found in the text, e.g. words that occur within a certain window from the target word, our network-based context takes into account indirect connections between concepts. For example, in the text underlying the network in Fig. 2 , dissertation and module rarely co-occurred in a sentence, but the network shows a strong connection over student as well as over credit and work.",
"cite_spans": [],
"ref_spans": [
{
"start": 541,
"end": 547,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Similarity and Relatedness from semantic networks",
"sec_num": "2"
},
{
"text": "We build the network incrementally by parsing every sentence, translating it into a small network fragment and then mapping that fragment onto the main network generated from all previous sentences. Our translation of sentences from text to network is based on the one used in the ASKNet system (Harrington and Clark, 2007) . It makes use of two NLP tools, the Clark and Curran parser (Clark and Curran, 2004) and the semantic analysis tool Boxer (Bos et al., 2004) , both of which are part of the C&C Toolkit 1 . The parser is based on Combinatory Categorial Grammar (CCG) and has been trained on 40,000 manually annotated sentences of the WSJ. It is both robust and efficient. Boxer is designed to convert the CCG parsed text into a logical representation based on Discourse Representation Theory (DRT). This intermediate logical form representation presents an abstraction from syntactic details to semantic core information. For example, the syntactic forms progress of student and student's progress have the same Boxer representation, as do the student who attends the lecture and the student attending the lecture. In addition, Boxer provides some elementary co-reference resolution.",
"cite_spans": [
{
"start": 295,
"end": 323,
"text": "(Harrington and Clark, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 385,
"end": 409,
"text": "(Clark and Curran, 2004)",
"ref_id": "BIBREF3"
},
{
"start": 447,
"end": 465,
"text": "(Bos et al., 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Network Structure",
"sec_num": "2.1"
},
{
"text": "The translation from the Boxer output into a network is straightforward and an example is given in Figure 1 (b). The network structure distinguishes between object nodes (rectangular), relational nodes (diamonds) and attributes (rounded rectangles) and different types of links such as subject or object links.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Network Structure",
"sec_num": "2.1"
},
{
"text": "Students select modules from the published list and write a dissertation. Modules usually provide 15 credits each, but 30 credits are awarded for the dissertation. The student must discuss the topic of the final dissertation with their appointed tutor. The large unified network is then built by merging every occurrence of a concept (e.g. object node) into one node, thus accumulating the information on this concept. In the second example ( Figure ??) , the lecture node would be merged with occurrences of lecture in other sentences. Figure 2 gives a subset of a network generated from a few paragraphs taken from Oxford Student Handbooks. Multiple occurrences of the same relation between two object nodes are drawn as overlapping.",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 453,
"text": "Figure ??)",
"ref_id": null
},
{
"start": 537,
"end": 545,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Network Structure",
"sec_num": "2.1"
},
{
"text": "We measure the semantic relatedness of two concepts by measuring the similarity of the surroundings of their corresponding nodes in the network. Semantically related terms are then expected to be connected to a similar set of nodes. We retrieve the network context of a specific node and determine the level of significance of each node in the context using spreading activation 2 . The target node is given an initial activation of a_x = 10 * numberOfLinks(x) and is fired so that the activation spreads over its out- and ingoing links to the surrounding nodes. They in turn fire if their received activation level exceeds a certain threshold. The activation attenuates by a constant factor in every step and a stable state is reached when no node in the network can fire anymore. In this way, the context nodes receive different levels of activation reflecting their significance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Vector Space Model",
"sec_num": "2.2"
},
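A minimal sketch of this spreading-activation step, assuming a plain adjacency-list graph. The initial activation 10 * numberOfLinks(x) comes from the text; the concrete threshold, decay factor, and the choice to let each node fire at most once are illustrative assumptions, since the paper only states that activation attenuates by a constant factor and that nodes fire when their received activation exceeds a threshold.

```python
def spread_activation(network, target, threshold=1.0, decay=0.5):
    """Spread activation from `target` over an adjacency-list graph.

    The target node receives an initial activation of
    10 * numberOfLinks(target) and fires; a receiving node fires in
    turn (here: at most once) when its accumulated activation exceeds
    `threshold`. The spread attenuates by the constant `decay` at
    every step, so a stable state is always reached.
    """
    activation = {target: 10.0 * len(network[target])}
    fired = set()
    frontier = [target]
    while frontier:
        node = frontier.pop()
        if node in fired:
            continue
        fired.add(node)
        neighbours = network[node]
        if not neighbours:
            continue
        share = activation[node] * decay / len(neighbours)
        for n in neighbours:
            activation[n] = activation.get(n, 0.0) + share
            if n not in fired and activation[n] > threshold:
                frontier.append(n)
    return activation

# Toy network around "student" (illustrative, not real BNC output):
network = {
    "student": ["module", "dissertation", "lecture"],
    "module": ["student", "credit"],
    "dissertation": ["student"],
    "lecture": ["student"],
    "credit": ["module"],
}
act = spread_activation(network, "student")
```

Note that "credit" receives activation even though it is two links away from the target: indirect connections contribute to the network context, which is exactly the property the approach exploits.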
{
"text": "We derive a vector representation v(x) of the network context of x including only object nodes and their activation levels. The entries are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Vector Space Model",
"sec_num": "2.2"
},
{
"text": "v_i(x) = \\mathrm{act}_{x,a_x}(n_i), \\quad n_i \\in \\{ n \\in \\mathrm{nodes} \\mid \\mathrm{type}(n) = \\text{object node} \\}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Vector Space Model",
"sec_num": "2.2"
},
{
"text": "The semantic relatedness of two target words is then measured by the cosine similarity of their context vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Vector Space Model",
"sec_num": "2.2"
},
{
"text": "\\mathrm{sim\\_rel}(x, y) = \\cos(v(x), v(y)) = \\frac{v(x) \\cdot v(y)}{\\|v(x)\\| \\, \\|v(y)\\|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Vector Space Model",
"sec_num": "2.2"
},
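Given the activation levels around two target nodes, the relatedness score is the cosine of the corresponding sparse context vectors. A sketch, where the two example activation dictionaries are hypothetical values, not output of the actual system:

```python
import math

def cosine_relatedness(act_x, act_y):
    """Cosine similarity of two sparse context vectors, each a dict
    mapping object nodes to the activation they received when the
    corresponding target node was fired."""
    dot = sum(a * act_y.get(node, 0.0) for node, a in act_x.items())
    norm_x = math.sqrt(sum(a * a for a in act_x.values()))
    norm_y = math.sqrt(sum(a * a for a in act_y.values()))
    if norm_x == 0.0 or norm_y == 0.0:
        return 0.0
    return dot / (norm_x * norm_y)

# Hypothetical activation levels around "dissertation" and "module":
v_dissertation = {"student": 4.0, "topic": 2.0, "credit": 1.0}
v_module = {"student": 3.0, "credit": 2.0, "list": 1.0}
score = cosine_relatedness(v_dissertation, v_module)  # roughly 0.82
```

Only the overlap of the two contexts (here "student" and "credit") contributes to the dot product, so two terms score highly when they are connected to a similar set of nodes, as the text requires.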
{
"text": "As spreading activation takes several factors into account, such as number of paths, length of paths, level of density and number of connections, this method leverages the full interconnected structure of the network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Vector Space Model",
"sec_num": "2.2"
},
{
"text": "We evaluate our approach on the WordSimilarity-353 (Finkelstein et al., 2002) test collection, which is a commonly used gold standard for the semantic relatedness task. It provides average human judgment scores of the degree of relatedness for 353 word pairs. The collection contains classically similar word pairs such as street - avenue and topically related pairs such as hotel - reservation. However, no distinction was made during judging; the instruction was to rate the general degree of semantic relatedness.",
"cite_spans": [
{
"start": 51,
"end": 77,
"text": "(Finkelstein et al., 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "As a corpus we chose the British National Corpus (BNC) 3 . It is one of the largest standardized English corpora and contains approximately 5.9 million sentences. Choosing this text collection enables us to build a general purpose network that is not specifically created for the considered word pairs and ensures a realistic overall connectedness of the network as well as a broad coverage. In this paper we created a network from 2 million sentences of the BNC. It contains 27.5 million nodes, out of which 635,000 are object nodes and the rest are relation and attribute nodes. The building time including parsing was approximately 4 days.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "Following the common practice in related work, we compared our scores to the human judgements using the Spearman rank-order correlation coefficient. The results can be found in Table 1 (a) with a comparison to previous results on the WordSimilarity-353 collection.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
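The rank correlation used for this comparison can be computed with `scipy.stats.spearmanr`; for untied score lists it reduces to the classic rank-difference formula, sketched here:

```python
def spearman(xs, ys):
    """Spearman rank correlation for untied score lists:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))
```

Because only the ranks matter, the system's raw cosine scores need not be on the same scale as the human judgments, which is why Spearman is the standard choice on WordSimilarity-353.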
{
"text": "Our first result over all word pairs is relatively low compared to the currently best performing systems. However, we noticed that many poorly rated word pairs contained at least one word with low frequency. Excluding these considerably improved the result to 0.50. On this reduced set of word pairs our scores are in the region of approaches which make use of the Wikipedia category network, the Word-Net taxonomic relations or Roget's thesaurus. This is a promising result as it indicates that our approach based on automatically generated networks has the potential of competing with those using manually created resources if we increase the corpus size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "While our results are not competitive with the best corpus based methods, we can note that our current corpus is an order of magnitude smaller -2 million sentences versus 1 million full Wikipedia articles (Gabrilovich and Markovitch, 2007) or 215MB versus 1.6 Terabyte (Agirre et al., 2009) . The extent to which corpus size influences our results is subject to further research.",
"cite_spans": [
{
"start": 205,
"end": 239,
"text": "(Gabrilovich and Markovitch, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 269,
"end": 290,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "We also evaluated our scores separately on the semantically similar versus the semantically related subsets of WordSim-353 following Agirre et al. (2009) (Table 1(b) ). Taking the same low-frequency cut as above, we can see that our approach performs equally well on both sets. This is remarkable as different methods tend to be more appropriate to calculate either one or the other (Agirre et al., 2009) . In particular, WordNet based measures are well known to be better suited to measure similarity than relatedness due to its hierarchical, taxonomic structure (Budanitsky and Hirst, 2006) . The fact that our system achieves equal results on the subset indicates that it matches human judgement of semantic relatedness beyond specific types of relations. This could be due to the associative structure of the network.",
"cite_spans": [
{
"start": 133,
"end": 153,
"text": "Agirre et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 383,
"end": 404,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 564,
"end": 592,
"text": "(Budanitsky and Hirst, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 154,
"end": 165,
"text": "(Table 1(b)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "Our approach is closely related to Harrington (2010) as our networks are built in a similar fashion and we also use spreading activation to measure semantic relatedness. In their approach, semantic relatedness of two terms a and b is measured by the activation b receives when a is fired. The core difference of this measurement to ours is that it is path-based while ours is context based. In addition, the corpus used was retrieved specifically for the word pairs in question while ours is a general-purpose corpus.",
"cite_spans": [
{
"start": 35,
"end": 52,
"text": "Harrington (2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In addition, our approach is related to work that uses personalized PageRank or Random Walks on WordNet (Agirre et al., 2009; Hughes and Ramage, 2007) . Similar to the spreading activation method presented here, personalized PageRank and Random Walks are used to derive a distribution over the nodes surrounding the target word that reflects their relevance to its meaning. In contrast to the approaches based on resources, our network is automatically built and therefore does not rely on costly, manual creation. In addition, compared to WordNet based measures, our method is potentially not biased towards similarity at the expense of relatedness.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Agirre et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 126,
"end": 150,
"text": "Hughes and Ramage, 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We presented a novel approach to measuring semantic relatedness which first builds a large-scale semantic network and then determines the relatedness of nodes by the similarity of their surrounding local network. Our preliminary results of this ongoing work are promising and are in the region of several WordNet and Wikipedia link structure approaches. As future work, there are several avenues of improvement that we plan to investigate. Firstly, the results in Section 3 show the crucial influence of corpus size and occurrence frequency on the performance of our system. We will be experimenting with larger general networks (e.g. the whole BNC) as well as with the integration of retrieved documents for low-frequency terms. Secondly, the parameters and specific settings for the spreading activation algorithm need to be tuned. For example, the amount of initial activation of the target node determines the size of the context considered. Thirdly, we will investigate different vector representation variants. In particular, we can achieve a more fine-grained representation by also considering relation nodes in addition to object nodes. We believe that with these improvements our automatic semantic network approach will be able to compete with techniques based on manually created resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "5"
},
{
"text": "http://svn.ask.it.usyd.edu.au/trac/candc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The spreading activation algorithm is based on Harrington (2010)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.natcorp.ox.ac.uk/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and wordnet-based approaches",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL '09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agirre, E., E. Alfonseca, K. Hall, J. Kravalova, M. Pa\u015fca, and A. Soroa (2009). A study on similarity and relatedness using distributional and wordnet-based approaches. In NAACL '09.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Wide-coverage semantic representations from a ccg parser",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING'04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bos, J., S. Clark, M. Steedman, J. R. Curran, and J. Hockenmaier (2004). Wide-coverage semantic representations from a ccg parser. In COLING'04.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Evaluating wordnet-based measures of lexical semantic relatedness",
"authors": [
{
"first": "A",
"middle": [],
"last": "Budanitsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "1",
"pages": "13--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Budanitsky, A. and G. Hirst (2006). Evaluating wordnet-based measures of lexical semantic relatedness. Computational Linguistics 32(1), 13-47.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parsing the wsj using ccg and log-linear models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL'04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, S. and J. R. Curran (2004). Parsing the wsj using ccg and log-linear models. In ACL'04.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Placing search in context: the concept revisited",
"authors": [
{
"first": "L",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Trans. Inf. Syst",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finkelstein, L., E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin (2002). Placing search in context: the concept revisited. ACM Trans. Inf. Syst. 20(1), 116-131.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Computing semantic relatedness using wikipedia-based explicit semantic analysis",
"authors": [
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabrilovich, E. and S. Markovitch (2007). Computing semantic relatedness using wikipedia-based explicit semantic analysis. In IJCAI'07.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A semantic network approach to measuring semantic relatedness",
"authors": [
{
"first": "B",
"middle": [],
"last": "Harrington",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING'10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harrington, B. (2010). A semantic network approach to measuring semantic relatedness. In COLING'10.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Asknet: automated semantic knowledge network",
"authors": [
{
"first": "B",
"middle": [],
"last": "Harrington",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2007,
"venue": "AAAI'07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harrington, B. and S. Clark (2007). Asknet: automated semantic knowledge network. In AAAI'07.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lexical semantic relatedness with random graph walks",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ramage",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL'07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hughes, T. and D. Ramage (2007). Lexical semantic relatedness with random graph walks. In EMNLP-CoNLL'07.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Roget's thesaurus as a lexical resource for natural language processsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Jarmasz",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jarmasz, M. (2003). Roget's thesaurus as a lexical resource for natural language processsing. Master's thesis, University of Ottawa.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Wikirelate! computing semantic relatedness using wikipedia",
"authors": [
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "S",
"middle": [
"P"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2006,
"venue": "AAAI'06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strube, M. and S. P. Ponzetto (2006). Wikirelate! computing semantic relatedness using wikipedia. In AAAI'06.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "(a) Sample text snippet and according network representation. (b) Example of translation from text to network over Boxer semantic analysis",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Subgraph displaying selected concepts and relations from sample network.",
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>Approach</td><td>Spearman</td></tr><tr><td>(Strube and Ponzetto, 2006) Wikipedia</td><td>0.19-0.48</td></tr><tr><td>(Jarmasz, 2003) Roget's</td><td>0.55</td></tr><tr><td>(Hughes and Ramage, 2007) WordNet</td><td>0.55</td></tr><tr><td>(Agirre et al., 2009) WordNet</td><td>0.56</td></tr><tr><td>(Finkelstein et al., 2002) Web corpus, LSA</td><td>0.56</td></tr><tr><td>(Harrington, 2010) Sem. Network</td><td>0.62</td></tr><tr><td>(Agirre et al., 2009) WordNet+gloss</td><td>0.66</td></tr><tr><td>(Agirre et al., 2009) Web corpus</td><td>0.66</td></tr><tr><td>(Gabrilovich and Markovitch, 2007)</td><td></td></tr><tr><td>(a) Spearman ranking correlation coefficient results for our approach and comparison with previous approaches. (b) Separate results for similarity and relatedness subset.</td></tr></table>"
}
}
}
}