{
"paper_id": "W10-0504",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:06:30.568997Z"
},
"title": "An Information-Retrieval Approach to Language Modeling: Applications to Social Data",
"authors": [
{
"first": "Juan",
"middle": [
"M"
],
"last": "Huerta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM T. J.Watson Research Center",
"location": {
"addrLine": "1101 Kitchawan Road Yorktown Heights",
"postCode": "10598",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we propose the IR-LM (Information Retrieval Language Model) which is an approach to carrying out language modeling based on large volumes of constantly changing data as is the case of social media data. Our approach addresses specific characteristics of social data: large volume of constantly generated content as well as the need to frequently integrating and removing data from the model.",
"pdf_parse": {
"paper_id": "W10-0504",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we propose the IR-LM (Information Retrieval Language Model) which is an approach to carrying out language modeling based on large volumes of constantly changing data as is the case of social media data. Our approach addresses specific characteristics of social data: large volume of constantly generated content as well as the need to frequently integrating and removing data from the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We describe the Information Retrieval Language Model (IR-LM) which is a novel approach to language modeling motivated by domains with constantly changing large volumes of linguistic data. Our approach is based on information retrieval methods and constitutes a departure from the traditional statistical n-gram language modeling (SLM) approach. We believe the IR-LM is more adequate than SLM when: (a) language models need to be updated constantly, (b) very large volumes of data are constantly being generated and (c) it is possible and likely that the sentence we are trying to score has been observed in the data (albeit with small possible variations). These three characteristics are inherent of social domains such as blogging and micro-blogging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Statistical language models are widely used in main computational linguistics tasks to compute the probability of a string of words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": ") ... ( 1 i w w p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": "To facilitate its computation, this probability is expressed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": ") ... | ( ... ) | ( ) ( ) ... ( 1 1 1 2 1 1 \uf02d \uf0b4 \uf0b4 \uf0b4 \uf03d i i i w w w P w w P w P w w p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": "Assuming that only the most immediate word history affects the probability of any given word, and focusing on a trigram language model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": ") | ( ) ... | ( 1 2 1 1 \uf02d \uf02d \uf02d \uf0bb i i i i i w w w P w w w P This leads to: \uf0d5 \uf03d \uf02d \uf02d \uf0bb i k k k k i w w w p w w P .. 1 2 1 1 ) | ( ) ... (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
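{
"text": "As an illustration of the factorization above, the following minimal Python sketch (not part of the original paper) scores a sentence with a trigram model estimated from raw counts; the add-one fallback is an assumption standing in for the smoothed backoff estimates that a real SLM toolkit would produce.\n\nfrom collections import defaultdict\nimport math\n\ndef train_trigram_counts(sentences):\n    tri, bi = defaultdict(int), defaultdict(int)\n    for words in sentences:\n        padded = ['<s>', '<s>'] + words + ['</s>']\n        for k in range(2, len(padded)):\n            tri[tuple(padded[k-2:k+1])] += 1\n            bi[tuple(padded[k-2:k])] += 1\n    return tri, bi\n\ndef trigram_logprob(words, tri, bi, vocab_size=20000):\n    # log p(w_1..w_i) ~ sum_k log p(w_k | w_{k-2} w_{k-1})\n    padded = ['<s>', '<s>'] + words + ['</s>']\n    logp = 0.0\n    for k in range(2, len(padded)):\n        h, w = tuple(padded[k-2:k]), padded[k]\n        # add-one fallback stands in for Kneser-Ney / backoff smoothing\n        logp += math.log((tri[h + (w,)] + 1) / (bi[h] + vocab_size))\n    return logp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},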
{
"text": "Language models are typically applied in ASR, MT and other tasks in which multiple hypotheses need to be rescored according to their likelihood (i.e., ranked). In a smoothed backoff SLM (e.g., Goodman (2001) ), all the n-grams up to order n are computed and smoothed and backoff probabilities are calculated. If new data is introduced or removed from the corpus, the whole model, the counts and weights would need to be recalculated. Levenberg and Osborne (2009) proposed an approach for incorporating new data as it is seen in the stream. Language models have been used to support IR as a method to extend queries (Lavrenko et al. 2001) ; in this paper we focus on using IR to carry out language modeling.",
"cite_spans": [
{
"start": 193,
"end": 207,
"text": "Goodman (2001)",
"ref_id": "BIBREF1"
},
{
"start": 434,
"end": 462,
"text": "Levenberg and Osborne (2009)",
"ref_id": "BIBREF4"
},
{
"start": 615,
"end": 637,
"text": "(Lavrenko et al. 2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram SLM and IR-LM",
"sec_num": "2"
},
{
"text": "The IR-LM approach consists of two steps: the first is the identification of a set of matches from a corpus given a query sentence, and second is the estimation of a likelihood-like value for the query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},
{
"text": "In the first step, given a corpus C and a query sentence S, we identify the k-closest matching sentences in the corpus through an information retrieval approach. We propose the use of a modified String Edit Distance as score in the IR process. To efficiently carry out the search of the closest sentences in the corpus we propose the use of an inverted index with word position information and a stack based search approach described in Huerta (2010) . A modification of the SED allows queries to match portions of long sentences (considering local insertion deletions and substitutions) without penalizing for missing the non-local portion of the matching sentence.",
"cite_spans": [
{
"start": 437,
"end": 450,
"text": "Huerta (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},
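{
"text": "A simplified sketch of this first step follows (our illustration, not the paper's implementation): a positional inverted index over corpus sentences, with a plain word-level edit distance as the retrieval score; the stack-based search of Huerta (2010) and the modification of the SED for partial matches are omitted.\n\nfrom collections import defaultdict\n\ndef build_index(corpus):\n    index = defaultdict(list)  # word -> list of (sentence_id, position)\n    for sid, words in enumerate(corpus):\n        for pos, w in enumerate(words):\n            index[w].append((sid, pos))\n    return index\n\ndef edit_distance(a, b):\n    # word-level Levenshtein distance (insertions, deletions, substitutions)\n    d = list(range(len(b) + 1))\n    for i, wa in enumerate(a, 1):\n        prev, d[0] = d[0], i\n        for j, wb in enumerate(b, 1):\n            prev, d[j] = d[j], min(d[j] + 1, d[j-1] + 1, prev + (wa != wb))\n    return d[-1]\n\ndef k_closest(query, corpus, index, k=5):\n    # candidate sentences share at least one word with the query\n    candidates = {sid for w in query for sid, _ in index.get(w, [])}\n    scored = sorted((edit_distance(query, corpus[sid]), sid) for sid in candidates)\n    return scored[:k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},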
{
"text": "In the second step, in general, we would like to compute a likelihood-like value of S through a function of the distances (or alternatively, similarity scores) of the query S to the top khypotheses. However, for now we will focus on the more particular problem of ranking multiple sentences in order of matching scores, which, while not directly producing likelihood estimates it will allow us to implement n-best rescoring. Specifically, our ranking is based on the level of matching between each sentence to be ranked and its best matching hypothesis in the corpus. In this case, integrating and removing data from the model simply involve adding to or pruning the index which generally are simpler than n-gram reestimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},
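{
"text": "Continuing the sketch above (reusing k_closest from the previous block), the following hypothetical code ranks an n-best list by the distance of each hypothesis to its single best match in the corpus, and updates the model by adding sentences to, or pruning them from, the index rather than re-estimating n-grams.\n\ndef rank_nbest(hypotheses, corpus, index):\n    # lower distance to the closest corpus sentence = ranked as more likely\n    def best_match_distance(hyp):\n        hits = k_closest(hyp, corpus, index, k=1)\n        return hits[0][0] if hits else float('inf')\n    return sorted(hypotheses, key=best_match_distance)\n\ndef add_sentence(corpus, index, words):\n    sid = len(corpus)\n    corpus.append(words)\n    for pos, w in enumerate(words):\n        index[w].append((sid, pos))\n\ndef remove_sentence(corpus, index, sid):\n    for pos, w in enumerate(corpus[sid]):\n        index[w].remove((sid, pos))\n    corpus[sid] = []  # keep sentence ids stable; the slot is now empty",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},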
{
"text": "There is an important fundamental difference between the classic n-gram SLM approach and our approach. The n-gram approach says that a sentence S 1 is more likely than another sentence S 2 given a model if the n-grams that constitute S 1 have been observed more times than the n-grams of S 2 . Our approach, on the other hand, says that a sentence S 1 is more likely than S 2 if the closest match to S1 in C resembles S1 better than the closes match of S 2 resembles S 2 regardless of how many times these sentences have been observed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The IR Language Model",
"sec_num": "2.1"
},
{
"text": "We carried out experiments using the blog corpus provided by Spinn3r (Burton et al (2009) ). It consists of 44 million blog posts that originated during August and September 2008 from which we selected, cleaned, normalized and segmented 2 million English language blogs. We reserved the segments originating from blogs dated September 30 for testing.",
"cite_spans": [
{
"start": 69,
"end": 89,
"text": "(Burton et al (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We took 1000 segments from the test subset and for each of these segments we built a 16hypothesis cohort (by creating 16 overlapping subsegments of the constant length from the segment).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
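{
"text": "A hypothetical illustration of the cohort construction follows; the subsegment length and step size below are placeholders, since the exact values are not specified here.\n\ndef build_cohort(words, n_hyps=16, length=10):\n    # n_hyps overlapping subsegments of constant length from one test segment\n    max_start = max(1, len(words) - length)\n    step = max(1, max_start // n_hyps)\n    starts = [min(i * step, max_start) for i in range(n_hyps)]\n    return [words[s:s + length] for s in starts]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},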
{
"text": "We built a 5-gram SLM using a 20k word dictionary and Knesser-Ney smoothing using the SRILM toolkit (Stolcke (2002) ). We then ranked each of the 1000 test cohorts using each of the model's n-gram levels (unigram, bigram, etc.) . Our goal is to determine to what extent our approach correlates with an n-gram SLM-based rescoring.",
"cite_spans": [
{
"start": 100,
"end": 115,
"text": "(Stolcke (2002)",
"ref_id": "BIBREF5"
},
{
"start": 204,
"end": 227,
"text": "(unigram, bigram, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
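{
"text": "One plausible way to build such a model with the SRILM toolkit, called from Python (file names and the 20k vocabulary file are placeholders; the exact options used in the paper are not specified):\n\nimport subprocess\n\nsubprocess.run([\n    'ngram-count',\n    '-order', '5',\n    '-kndiscount', '-interpolate',   # modified Kneser-Ney smoothing\n    '-vocab', 'vocab20k.txt',        # hypothetical 20k-word vocabulary file\n    '-text', 'blog_train.txt',       # hypothetical training segments\n    '-lm', 'blog5gram.lm',\n], check=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},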
{
"text": "For testing purposes we re-ranked each of the test cohorts using the IR-LM approach. We then compared the rankings produced by n-grams and by IR-LM for every n-gram order and several IR configurations. For this, we computed the Spearman rank correlation coefficient (SRCC). SRCC averages for each configuration are shown in table 1. Row 1 shows the SRCC for the best overall IR configuration and row 2 shows the SRCC for the IR configuration producing the best results for each particular n-gram model. We can see that albeit simple, IR-LM can produce results consistent with a language model based on fundamentally different assumptions. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
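{
"text": "A sketch of the comparison step (our illustration; scipy is an assumed dependency): compute the Spearman rank correlation between the n-gram scores and the IR-LM scores of one cohort, then average over cohorts.\n\nfrom scipy.stats import spearmanr\n\ndef cohort_srcc(ngram_logprobs, ir_distances):\n    # higher log-probability and lower distance both mean a better hypothesis,\n    # so negate the distances before correlating the two rankings\n    rho, _ = spearmanr(ngram_logprobs, [-d for d in ir_distances])\n    return rho\n\ndef average_srcc(cohorts):\n    # cohorts: iterable of (ngram_logprobs, ir_distances) pairs, one per cohort\n    values = [cohort_srcc(lp, d) for lp, d in cohorts]\n    return sum(values) / len(values)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},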
{
"text": "The IR-LM can be beneficial when the language model needs to be updated with added and removed data. This is particularly important in social data where new content is constantly generated. Our approach also introduces a different interpretation of the concept of likelihood of a sentence: instead of assuming the frequentist assumption underlying n-gram models, it is based on sentence feasibility based on the closest segment similarity. Future work will look into: integrating information from the top k-matches, likelihood regression, as well as leveraging other approaches to information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The ICWSM 2009 Spinn3r Dataset. Proc. ICWSM",
"authors": [
{
"first": "K",
"middle": [],
"last": "Burton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Java",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Soboroff",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burton K., Java A., and Soboroff I. (2009) The ICWSM 2009 Spinn3r Dataset. Proc. ICWSM 2009",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Bit of Progress in Language Modeling",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2001,
"venue": "MS Res. Tech. Rpt. MSR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goodman J. (2001) A Bit of Progress in Language Modeling, MS Res. Tech. Rpt. MSR-TR-2001-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Stack Decoder Approach to Approximate String Matching",
"authors": [
{
"first": "J",
"middle": [],
"last": "Huerta",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of SIGIR 2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huerta J. (2010) A Stack Decoder Approach to Approximate String Matching, Proc. of SIGIR 2010",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Relevance based language models",
"authors": [
{
"first": "V",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lavrenko V. and Croft W. B. (2001) Relevance based language models. Proc. of SIGIR 2001",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Stream-based Randomised Lang. Models for SMT",
"authors": [
{
"first": "A",
"middle": [],
"last": "Levenberg",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levenberg A. and Osborne M. (2009), Stream-based Randomised Lang. Models for SMT, EMNLP 2009",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": ")s SRILM --An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke A. (2002)s SRILM --An Extensible Language Modeling Toolkit. Proc. ICSLP 2002",
"links": null
}
},
"ref_entries": {}
}
}