{
"paper_id": "W10-0504",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:06:30.568997Z"
},
"title": "An Information-Retrieval Approach to Language Modeling: Applications to Social Data",
"authors": [
{
"first": "Juan",
"middle": [
"M"
],
"last": "Huerta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM T. J.Watson Research Center",
"location": {
"addrLine": "1101 Kitchawan Road Yorktown Heights",
"postCode": "10598",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we propose the IR-LM (Information Retrieval Language Model) which is an approach to carrying out language modeling based on large volumes of constantly changing data as is the case of social media data. Our approach addresses specific characteristics of social data: large volume of constantly generated content as well as the need to frequently integrating and removing data from the model.",
"pdf_parse": {
"paper_id": "W10-0504",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we propose the IR-LM (Information Retrieval Language Model) which is an approach to carrying out language modeling based on large volumes of constantly changing data as is the case of social media data. Our approach addresses specific characteristics of social data: large volume of constantly generated content as well as the need to frequently integrating and removing data from the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We describe the Information Retrieval Language Model (IR-LM) which is a novel approach to language modeling motivated by domains with constantly changing large volumes of linguistic data. Our approach is based on information retrieval methods and constitutes a departure from the traditional statistical n-gram language modeling (SLM) approach. We believe the IR-LM is more adequate than SLM when: (a) language models need to be updated constantly, (b) very large volumes of data are constantly being generated and (c) it is possible and likely that the sentence we are trying to score has been observed in the data (albeit with small possible variations). These three characteristics are inherent of social domains such as blogging and micro-blogging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Statistical language models are widely used in main computational linguistics tasks to compute the probability of a string of words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": ") ... ( 1 i w w p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": "To facilitate its computation, this probability is expressed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": ") ... | ( ... ) | ( ) ( ) ... ( 1 1 1 2 1 1 \uf02d \uf0b4 \uf0b4 \uf0b4 \uf03d i i i w w w P w w P w P w w p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": "Assuming that only the most immediate word history affects the probability of any given word, and focusing on a trigram language model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": ") | ( ) ... | ( 1 2 1 1 \uf02d \uf02d \uf02d \uf0bb i i i i i w w w P w w w P This leads to: \uf0d5 \uf03d \uf02d \uf02d \uf0bb i k k k k i w w w p w w P .. 1 2 1 1 ) | ( ) ... (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
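{
"text": "As a concrete illustration of the factorization above, the following sketch computes an unsmoothed maximum-likelihood trigram log-probability. The toy corpus and all names are illustrative additions by the editor, not part of the paper.",
"code_sketch": [
"import math",
"from collections import defaultdict",
"",
"# Toy corpus (illustrative only); <s> padding supplies the trigram history.",
"corpus = [['<s>', '<s>', 'the', 'cat', 'sat', '</s>'],",
"          ['<s>', '<s>', 'the', 'dog', 'sat', '</s>']]",
"",
"tri, bi = defaultdict(int), defaultdict(int)",
"for sent in corpus:",
"    for k in range(2, len(sent)):",
"        tri[(sent[k-2], sent[k-1], sent[k])] += 1  # count of w_{k-2} w_{k-1} w_k",
"        bi[(sent[k-2], sent[k-1])] += 1            # count of the history",
"",
"def trigram_logprob(sentence):",
"    # log P(w_1 ... w_i) ~ sum_k log p(w_k | w_{k-2} w_{k-1})  (unsmoothed MLE)",
"    return sum(math.log(tri[(sentence[k-2], sentence[k-1], sentence[k])]",
"                        / bi[(sentence[k-2], sentence[k-1])])",
"               for k in range(2, len(sentence)))",
"",
"print(trigram_logprob(['<s>', '<s>', 'the', 'cat', 'sat', '</s>']))  # log(0.5)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},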
{
"text": "Language models are typically applied in ASR, MT and other tasks in which multiple hypotheses need to be rescored according to their likelihood (i.e., ranked). In a smoothed backoff SLM (e.g., Goodman (2001) ), all the n-grams up to order n are computed and smoothed and backoff probabilities are calculated. If new data is introduced or removed from the corpus, the whole model, the counts and weights would need to be recalculated. Levenberg and Osborne (2009) proposed an approach for incorporating new data as it is seen in the stream. Language models have been used to support IR as a method to extend queries (Lavrenko et al. 2001) ; in this paper we focus on using IR to carry out language modeling.",
"cite_spans": [
{
"start": 193,
"end": 207,
"text": "Goodman (2001)",
"ref_id": "BIBREF1"
},
{
"start": 434,
"end": 462,
"text": "Levenberg and Osborne (2009)",
"ref_id": "BIBREF4"
},
{
"start": 615,
"end": 637,
"text": "(Lavrenko et al. 2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
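{
"text": "To make concrete why adding or removing corpus data forces a global re-estimation, the following simplified Katz-style backoff lookup (an editor's sketch, not Goodman's exact formulation) shows that every probability and backoff weight is derived from corpus-wide counts, so any change to the corpus invalidates all of them.",
"code_sketch": [
"def backoff_prob(w, h, tri_probs, bi_probs, backoff_wt):",
"    # Simplified backoff: use the smoothed trigram estimate when the",
"    # trigram (h[0], h[1], w) was observed; otherwise back off to the",
"    # bigram estimate, scaled by the backoff weight of the history h.",
"    # All three tables come from corpus-wide counting and smoothing.",
"    if (h[0], h[1], w) in tri_probs:",
"        return tri_probs[(h[0], h[1], w)]",
"    return backoff_wt.get(h, 1.0) * bi_probs.get((h[1], w), 1e-9)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},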
{
"text": "The IR-LM approach consists of two steps: the first is the identification of a set of matches from a corpus given a query sentence, and second is the estimation of a likelihood-like value for the query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},
{
"text": "In the first step, given a corpus C and a query sentence S, we identify the k-closest matching sentences in the corpus through an information retrieval approach. We propose the use of a modified String Edit Distance as score in the IR process. To efficiently carry out the search of the closest sentences in the corpus we propose the use of an inverted index with word position information and a stack based search approach described in Huerta (2010) . A modification of the SED allows queries to match portions of long sentences (considering local insertion deletions and substitutions) without penalizing for missing the non-local portion of the matching sentence.",
"cite_spans": [
{
"start": 437,
"end": 450,
"text": "Huerta (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},
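{
"text": "One standard way to realize such a modification is an approximate-substring edit distance in which the unmatched prefix and suffix of the corpus sentence are free. The sketch below is a generic dynamic-programming formulation added by the editor; it is not necessarily the exact variant of Huerta (2010).",
"code_sketch": [
"def partial_match_distance(query, sentence):",
"    # Edit distance of `query` against the best-matching portion of",
"    # `sentence`: row 0 is all zeros (the match may start anywhere) and",
"    # we take the minimum over the last row (it may end anywhere), so",
"    # the non-local portion of a long sentence is not penalized.",
"    m = len(sentence)",
"    prev = [0] * (m + 1)",
"    for i in range(1, len(query) + 1):",
"        cur = [i] + [0] * m  # skipping query words still costs",
"        for j in range(1, m + 1):",
"            sub = prev[j-1] + (query[i-1] != sentence[j-1])",
"            cur[j] = min(sub, prev[j] + 1, cur[j-1] + 1)",
"        prev = cur",
"    return min(prev)  # local insertions, deletions, substitutions only"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},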
{
"text": "In the second step, in general, we would like to compute a likelihood-like value of S through a function of the distances (or alternatively, similarity scores) of the query S to the top khypotheses. However, for now we will focus on the more particular problem of ranking multiple sentences in order of matching scores, which, while not directly producing likelihood estimates it will allow us to implement n-best rescoring. Specifically, our ranking is based on the level of matching between each sentence to be ranked and its best matching hypothesis in the corpus. In this case, integrating and removing data from the model simply involve adding to or pruning the index which generally are simpler than n-gram reestimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},
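{
"text": "A minimal sketch of the positional inverted index this implies (class and method names are the editor's, not the paper's): integrating or removing a sentence reduces to adding or deleting postings, with no global re-estimation step.",
"code_sketch": [
"from collections import defaultdict",
"",
"class PositionalIndex:",
"    # word -> {sentence_id: [positions]}, plus the stored sentences.",
"    def __init__(self):",
"        self.postings = defaultdict(dict)",
"        self.sentences = {}",
"",
"    def add(self, sid, words):  # integrate newly generated content",
"        self.sentences[sid] = words",
"        for pos, w in enumerate(words):",
"            self.postings[w].setdefault(sid, []).append(pos)",
"",
"    def remove(self, sid):  # prune stale content",
"        for w in set(self.sentences.pop(sid)):",
"            self.postings[w].pop(sid, None)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},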
{
"text": "There is an important fundamental difference between the classic n-gram SLM approach and our approach. The n-gram approach says that a sentence S 1 is more likely than another sentence S 2 given a model if the n-grams that constitute S 1 have been observed more times than the n-grams of S 2 . Our approach, on the other hand, says that a sentence S 1 is more likely than S 2 if the closest match to S1 in C resembles S1 better than the closes match of S 2 resembles S 2 regardless of how many times these sentences have been observed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},
{
"text": "We carried out experiments using the blog corpus provided by Spinn3r (Burton et al (2009) ). It consists of 44 million blog posts that originated during August and September 2008 from which we selected, cleaned, normalized and segmented 2 million English language blogs. We reserved the segments originating from blogs dated September 30 for testing.",
"cite_spans": [
{
"start": 69,
"end": 89,
"text": "(Burton et al (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We took 1000 segments from the test subset and for each of these segments we built a 16hypothesis cohort (by creating 16 overlapping subsegments of the constant length from the segment).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
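{
"text": "A sketch of the cohort construction as described (the window length and stride below are illustrative assumptions; the paper does not specify them):",
"code_sketch": [
"def build_cohort(words, n_hyps=16, length=10):",
"    # Slide a fixed-length window over the segment to produce up to",
"    # n_hyps overlapping subsegments of constant length.",
"    starts = list(range(0, max(1, len(words) - length + 1)))",
"    step = max(1, len(starts) // n_hyps)",
"    return [words[s:s + length] for s in starts[::step]][:n_hyps]"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},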
{
"text": "We built a 5-gram SLM using a 20k word dictionary and Knesser-Ney smoothing using the SRILM toolkit (Stolcke (2002) ). We then ranked each of the 1000 test cohorts using each of the model's n-gram levels (unigram, bigram, etc.) . Our goal is to determine to what extent our approach correlates with an n-gram SLM-based rescoring.",
"cite_spans": [
{
"start": 100,
"end": 115,
"text": "(Stolcke (2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "For testing purposes we re-ranked each of the test cohorts using the IR-LM approach. We then compared the rankings produced by n-grams and by IR-LM for every n-gram order and several IR configurations. For this, we computed the Spearman rank correlation coefficient (SRCC). SRCC averages for each configuration are shown in table 1. Row 1 shows the SRCC for the best overall IR configuration and row 2 shows the SRCC for the IR configuration producing the best results for each particular n-gram model. We can see that albeit simple, IR-LM can produce results consistent with a language model based on fundamentally different assumptions. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
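{
"text": "For reference, the SRCC between two rankings of the same cohort (assuming no tied ranks) can be computed as follows; the variable names are the editor's.",
"code_sketch": [
"def spearman_rcc(rank_a, rank_b):",
"    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the",
"    # difference between the two ranks assigned to hypothesis i.",
"    n = len(rank_a)",
"    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))",
"    return 1.0 - 6.0 * d2 / (n * (n * n - 1))",
"",
"# e.g. agreement between a 5-gram ranking and an IR-LM ranking of a",
"# 16-hypothesis cohort: rho = spearman_rcc(ngram_ranks, irlm_ranks)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},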
{
"text": "The IR-LM can be beneficial when the language model needs to be updated with added and removed data. This is particularly important in social data where new content is constantly generated. Our approach also introduces a different interpretation of the concept of likelihood of a sentence: instead of assuming the frequentist assumption underlying n-gram models, it is based on sentence feasibility based on the closest segment similarity. Future work will look into: integrating information from the top k-matches, likelihood regression, as well as leveraging other approaches to information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The ICWSM 2009 Spinn3r Dataset. Proc. ICWSM",
"authors": [
{
"first": "K",
"middle": [],
"last": "Burton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Java",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Soboroff",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burton K., Java A., and Soboroff I. (2009) The ICWSM 2009 Spinn3r Dataset. Proc. ICWSM 2009",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Bit of Progress in Language Modeling",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2001,
"venue": "MS Res. Tech. Rpt. MSR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goodman J. (2001) A Bit of Progress in Language Modeling, MS Res. Tech. Rpt. MSR-TR-2001-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Stack Decoder Approach to Approximate String Matching",
"authors": [
{
"first": "J",
"middle": [],
"last": "Huerta",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of SIGIR 2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huerta J. (2010) A Stack Decoder Approach to Approximate String Matching, Proc. of SIGIR 2010",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Relevance based language models",
"authors": [
{
"first": "V",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lavrenko V. and Croft W. B. (2001) Relevance based language models. Proc. of SIGIR 2001",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Stream-based Randomised Lang. Models for SMT",
"authors": [
{
"first": "A",
"middle": [],
"last": "Levenberg",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levenberg A. and Osborne M. (2009), Stream-based Randomised Lang. Models for SMT, EMNLP 2009",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": ")s SRILM --An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke A. (2002)s SRILM --An Extensible Language Modeling Toolkit. Proc. ICSLP 2002",
"links": null
}
},
"ref_entries": {}
}
} |