{
"paper_id": "W10-0214",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:06:14.815906Z"
},
"title": "Recognizing Stances in Ideological On-Line Debates",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh Pittsburgh",
"location": {
"postCode": "15260",
"region": "PA"
}
},
"email": "[email protected]"
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh Pittsburgh",
"location": {
"postCode": "15260",
"region": "PA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This work explores the utility of sentiment and arguing opinions for classifying stances in ideological debates. In order to capture arguing opinions in ideological stance taking, we construct an arguing lexicon automatically from a manually annotated corpus. We build supervised systems employing sentiment and arguing opinions and their targets as features. Our systems perform substantially better than a distribution-based baseline. Additionally, by employing both types of opinion features, we are able to perform better than a unigrambased system.",
"pdf_parse": {
"paper_id": "W10-0214",
"_pdf_hash": "",
"abstract": [
{
"text": "This work explores the utility of sentiment and arguing opinions for classifying stances in ideological debates. In order to capture arguing opinions in ideological stance taking, we construct an arguing lexicon automatically from a manually annotated corpus. We build supervised systems employing sentiment and arguing opinions and their targets as features. Our systems perform substantially better than a distribution-based baseline. Additionally, by employing both types of opinion features, we are able to perform better than a unigrambased system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this work, we explore if and how ideological stances can be recognized using opinion analysis. Following (Somasundaran and Wiebe, 2009) , stance, as used in this work, refers to an overall position held by a person toward an object, idea or proposition. For example, in a debate \"Do you believe in the existence of God?,\" a person may take a for-existence of God stance or an against existence of God stance. Similarly, being pro-choice, believing in creationism, and supporting universal healthcare are all examples of ideological stances.",
"cite_spans": [
{
"start": 108,
"end": 138,
"text": "(Somasundaran and Wiebe, 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Online web forums discussing ideological and political hot-topics are popular. 1 In this work, we are interested in dual-sided debates (there are two possible polarizing sides that the participants can take). For example, in a healthcare debate, participants can take a for-healthcare stance or an against-healthcare stance. Participants generally pick a side (the websites provide a way for users to tag their stance) and post an argument/justification supporting their stance.",
"cite_spans": [
{
"start": 79,
"end": 80,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Personal opinions are clearly important in ideological stance taking, and debate posts provide outlets for expressing them. For instance, let us consider the following snippet from a universal healthcare debate. Here the writer is expressing a negative sentiment 2 regarding the government (the opinion spans are highlighted in bold and their targets, what the opinions are about, are highlighted in italics).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Government is a disease pretending to be its own cure. [side: against healthcare]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The writer's negative sentiment is directed toward the government, the initiator of universal healthcare. This negative opinion reveals his against-healthcare stance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We observed that arguing, a less well explored type of subjectivity, is prominently manifested in ideological debates. As used in this work, arguing is a type of linguistic subjectivity, where a person is arguing for or against something or expressing a belief about what is true, should be true or should be done in his or her view of the world Wilson, 2007; Somasundaran et al., 2008) .",
"cite_spans": [
{
"start": 346,
"end": 359,
"text": "Wilson, 2007;",
"ref_id": "BIBREF23"
},
{
"start": 360,
"end": 386,
"text": "Somasundaran et al., 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For instance, let us consider the following snippet from a post supporting an against-existence of God stance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Obviously that hasn't happened, and to be completely objective (as all scientists should be) we must lean on the side of greatest evidence which at the present time is for evolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[side: against the existence of God]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In supporting their side, people not only express their sentiments, but they also argue about what is true (e.g., this is prominent in the existence of God debate) and about what should or should not be done (e.g., this is prominent in the healthcare debate).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we investigate whether sentiment and arguing expressions of opinion are useful for ideological stance classification. For this, we explore ways to capture relevant opinion information as machine learning features into a supervised stance classifier. While there is a large body of resources for sentiment analysis (e.g., the sentiment lexicon from ), arguing analysis does not seem to have a well established lexical resource. In order to remedy this, using a simple automatic approach and a manually annotated corpus, 3 we construct an arguing lexicon. We create features called opinion-target pairs, which encode not just the opinion information, but also what the opinion is about, its target. Systems employing sentiment-based and arguing-based features alone, or both in combination, are analyzed. We also take a qualitative look at features used by the learners to get insights about the information captured by them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform experiments on four different ideological domains. Our results show that systems using both sentiment and arguing features can perform substantially better than a distribution-based baseline and marginally better than a unigram-based system. Our qualitative analysis suggests that opinion features capture more insightful information than using words alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: We first describe our ideological debate data in Section 2. We explain the construction of our arguing lexicon in Section 3 and our different systems in Section 3 MPQA corpus available at http://www.cs.pitt.edu/mpqa. 4. Experiments, results and analyses are presented in Section 5. Related work is in Section 6 and conclusions are in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Political and ideological debates on hot issues are popular on the web. In this work, we analyze the following domains: Existence of God, Healthcare, Gun Rights, Gay Rights, Abortion and Creationism. Of these, we use the first two for development and the remaining four for experiments and analyses. Each domain is a political/ideological issue and has two polarizing stances: for and against. Table 2 lists the domains, examples of debate topics within each domain, the specific sides for each debate topic, and the domain-level stances that correspond to these sides. For example, consider the Existence of God domain in Table 2 . The two stances in this domain are for-existence of God and against-existence of God. \"Do you believe in God\", a specific debate topic within this domain, has two sides: \"Yes!!\" and \"No!!\". The former corresponds to the for-existence of God stance and the latter maps to the against-existence of God stance. The situation is different for the debate \"God Does Not Exist\". Here, side \"against\" corresponds to the forexistence of God stance, and side \"for\" corresponds to the against-existence of God stance.",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 401,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 623,
"end": 630,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Ideological Debates",
"sec_num": "2"
},
{
"text": "In general, we see in Table 2 that, while specific debate topics may vary, in each case the two sides for the topic correspond to the domain-level stances. We download several debates for each domain and manually map debate-level stances to the stances for the domain. Table 2 also reports the number of debates, and the total number of posts for each domain. For instance, we collect 16 different debates in the healthcare domain which gives us a total of 336 posts. All debate posts have user-reported debate-level stance tags.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Ideological Debates",
"sec_num": "2"
},
{
"text": "Preliminary inspection of development data gave us insights which shaped our approach. We discuss some of our observations in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Observations",
"sec_num": "2.1"
},
{
"text": "We found that arguing opinions are prominent when people defend their ideological stances. We saw an instance of this in Example 2, where the participant argues against the existence of God. He argues for what (he believes) is right (should be), and is imperative (we must). He employs \"Obviously\" to draw emphasis and then uses a superlative construct (greatest) to argue for evolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arguing Opinion",
"sec_num": null
},
{
"text": "Example 3 below illustrates arguing in a healthcare debate. The spans most certainly believe and has or must do reveal arguing (ESSENTIAL, IM-PORTANT are sentiments).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arguing Opinion",
"sec_num": null
},
{
"text": "(3) ... I most certainly believe that there are some ESSENTIAL, IMPORTANT things that the government has or must do [side: for healthcare]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arguing Opinion",
"sec_num": null
},
{
"text": "Observe that the text spans revealing arguing can be a single word or multiple words. This is different from sentiment expressions that are more often single words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arguing Opinion",
"sec_num": null
},
{
"text": "As mentioned previously, a target is what an opinion is about. Targets are vital for determining stances. Opinions by themselves may not be as informative as the combination of opinions and targets. For instance, in Example 1 the writer supports an against-healthcare stance using a negative sentiment. There is a negative sentiment in the example below (Example 4) too. However, in this case the writer supports a for-healthcare stance. It is by understanding what the opinion is about, that we can recognize the stance. 4Oh, the answer is GREEDY insurance companies that buy your Rep & Senator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Targets",
"sec_num": null
},
{
"text": "[side: for healthcare]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Targets",
"sec_num": null
},
{
"text": "We also observed that targets, or in general items that participants from either side choose to speak about, by themselves may not be as informative as opinions in conjunction with the targets. For instance, Examples 1 and 3 both speak about the government but belong to opposing sides. Understanding that the former example is negative toward the government and the latter has a positive arguing about the government helps us to understand the corresponding stances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Targets",
"sec_num": null
},
{
"text": "Examples 1, 3 and 4 also illustrate that there are a variety of ways in which people support their stances. The writers express opinions about government, the initiator of healthcare and insurance companies, and the parties hurt by government run healthcare. Participants group government and healthcare as essentially the same concept, while they consider healthcare and insurance companies as alternative concepts. By expressing opinions regarding a variety of items that are same or alternative to main topic (healthcare, in these examples), they are, in effect, revealing their stance (Somasundaran et al., 2008) .",
"cite_spans": [
{
"start": 589,
"end": 616,
"text": "(Somasundaran et al., 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Targets",
"sec_num": null
},
{
"text": "Arguing is a relatively less explored category in subjectivity. Due to this, there are no available lexicons with arguing terms (clues). However, the MPQA corpus (Version 2) is annotated with arguing subjectivity Wilson, 2007) . There are two arguing categories: positive arguing and negative arguing. We use this corpus to generate a ngram (up to trigram) arguing lexicon.",
"cite_spans": [
{
"start": 213,
"end": 226,
"text": "Wilson, 2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "The examples below illustrate MPQA arguing annotations. Examples 5 and 7 illustrate positive argu-ing annotations and Example 6 illustrates negative arguing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "(5) Iran insists its nuclear program is purely for peaceful purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "(6) Officials in Panama denied that Mr. Chavez or any of his family members had asked for asylum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "(7) Putin remarked that the events in Chechnia \"could be interpreted only in the context of the struggle against international terrorism.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "Inspection of these text spans reveal that arguing annotations can be considered to be comprised of two pieces of information. The first piece of information is what we call the arguing trigger expression. The trigger is an indicator that an arguing is taking place, and is the primary component that anchors the arguing annotation. The second component is the expression that reveals more about the argument, and can be considered to be secondary for the purposes of detecting arguing. In Example 5, \"insists\", by itself, conveys enough information to indicate that the speaker is arguing. It is quite likely that a sentence of the form \"X insists Y\" is going to be an arguing sentence. Thus, \"insists\" is an arguing trigger. Similarly, in Example 6, we see two arguing triggers: \"denied\" and \"denied that\". Each of these can independently act as arguing triggers (For example, in the constructs \"X denied that Y\" and \"X denied Y\"). Finally, in Example 7, the arguing annotation has the following independent trigger expressions \"could be * only\", \"could be\" and \"could\". The wild card in the first trigger expression indicates that there could be zero or more words in its place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "Note that MPQA annotations do not provide this primary/secondary distinction. We make this distinction to create general arguing clues such as \"insist\". Table 3 lists examples of arguing annotations from the MPQA corpus and what we consider as their arguing trigger expressions.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "Notice that trigger words are generally at the beginning of the annotations. Most of these are unigrams, bigrams or trigrams (though it is possible for these to be longer, as seen in Example 7). Thus, we can create a lexicon of arguing trigger expressions by extracting the starting n-grams from the MPQA annotations. The process of creating the lexicon is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "1. Generate a candidate Set from the annotations in the corpus. Three candidates are extracted from the stemmed version of each annotation: the first word, the bigram starting at the first word, and the trigram starting at the first word. For example, if the annotation is \"can only rise to meet it by making some radical changes\", the following candidates are extracted from it: \"can\", \"can only\" and \"can only rise\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "2. Remove the candidates that are present in the sentiment lexicon from ) (as these are already accounted for in previous research). For example, \"actually\", which is a trigger word in Table 3 , is a neutral subjectivity clue in the lexicon.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "3. For each candidate in the candidate Set, find the likelihood that it is a reliable indicator of positive or negative arguing in the MPQA corpus. These are likelihoods of the form: P (positive arguing|candidate) = #candidate is in a positive arguing span #candidate is in the corpus and P (negative arguing|candidate) = #candidate is in a negative arguing span #candidate is in the corpus 4. Make a lexicon entry for each candidate consisting of the stemmed text and the two probabilities described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "This process results in an arguing lexicon with 3762 entries, where 3094 entries have P (positive arguing|candidate) > 0; and 668 entries have P (negative arguing|candidate) > 0. Table 3 lists select interesting expressions from the arguing lexicon.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 186,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
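{
"text": "The construction procedure above can be summarized in a short Python sketch. This is a minimal illustration, not the authors' code: the annotation reader, the stemmer, and the sentiment-lexicon word set are assumed, and the input names (annotations, corpus_ngrams) are hypothetical.

from collections import Counter

def build_arguing_lexicon(annotations, corpus_ngrams, sentiment_lexicon):
    # annotations: list of (stemmed_words, polarity) pairs, with polarity in
    # {'positive', 'negative'}; corpus_ngrams: Counter of n-gram counts over
    # the whole (stemmed) corpus; sentiment_lexicon: set of lexicon terms.
    pos_counts, neg_counts = Counter(), Counter()
    for words, polarity in annotations:
        # Step 1: candidates are the starting unigram, bigram and trigram.
        candidates = {' '.join(words[:n]) for n in (1, 2, 3) if len(words) >= n}
        # Step 2: drop candidates already present in the sentiment lexicon.
        candidates -= sentiment_lexicon
        counts = pos_counts if polarity == 'positive' else neg_counts
        for c in candidates:
            counts[c] += 1
    lexicon = {}
    for c in set(pos_counts) | set(neg_counts):
        total = corpus_ngrams[c]
        if total:
            # Steps 3-4: store P(positive arguing | c) and P(negative arguing | c).
            lexicon[c] = (pos_counts[c] / total, neg_counts[c] / total)
    return lexicon",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},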
{
"text": "Entries indicative of Positive Arguing be important to, would be better, would need to, be just the, be the true, my opinion, the contrast, show the, prove to be, only if, on the verge, ought to, be most, youve get to, render, manifestation, ironically, once and for, no surprise, overwhelming evidence, its clear, its clear that, it be evident, it be extremely, it be quite, it would therefore Entries indicative of Negative Arguing be not simply, simply a, but have not, can not imagine, we dont need, we can not do, threat against, ought not, nor will, never again, far from be, would never, not completely, nothing will, inaccurate and, inaccurate and, find no, no time, deny that ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing an Arguing Lexicon",
"sec_num": "3"
},
{
"text": "We construct opinion target pair features, which are units that capture the combined information about opinions and targets. These are encoded as binary features into a standard machine learning algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Stance Classification",
"sec_num": "4"
},
{
"text": "We create arguing features primarily from our arguing lexicon. We construct additional arguing features using modal verbs and syntactic rules. The latter are motivated by the fact that modal verbs such as \"must\", \"should\" and \"ought\" are clear cases of arguing, and are often involved in simple syntactic patterns with clear targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arguing-based Features",
"sec_num": "4.1"
},
{
"text": "The process for creating features for a post using the arguing lexicon is simple. For each sentence in the post, we first determine if it contains a positive or negative arguing expression by looking for trigram, bigram and unigram matches (in that order) with the arguing lexicon. We prevent the same text span from matching twice -once a trigram match is found, a substring bigram (or unigram) match with the same text span is avoided. If there are multiple arguing expression matches found within a sentence, we determine the most prominent arguing polarity by adding up the positive arguing probabilities and negative arguing probabilities (provided in the lexicon) of all the individual expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arguing-lexicon Features",
"sec_num": "4.1.1"
},
{
"text": "Once the prominent arguing polarity is determined for a sentence, the prefix ap (arguing positive) or an (arguing negative) is attached to all the content words in that sentence to construct opinion-target features. In essence, all content words (nouns, verbs, adjectives and adverbs) in the sentence are assumed to be the target. Arguing features are denoted as aptarget (positive arguing toward target) and an-target (negative arguing toward target).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arguing-lexicon Features",
"sec_num": "4.1.1"
},
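{
"text": "A minimal sketch of this matching-and-prefixing step, assuming stemmed sentence tokens and a POS-based content-word list are already available; both helpers and the tie-breaking rule for equal polarity sums are our assumptions, not details given in the paper.

def arguing_features(tokens, content_words, lexicon):
    # tokens: stemmed tokens of one sentence; content_words: its nouns,
    # verbs, adjectives and adverbs; lexicon: term -> (p_pos, p_neg).
    matched, p_pos, p_neg = set(), 0.0, 0.0
    for n in (3, 2, 1):  # trigram, bigram, then unigram matches
        for i in range(len(tokens) - n + 1):
            span, term = set(range(i, i + n)), ' '.join(tokens[i:i + n])
            # Skip spans already covered by a longer match.
            if term in lexicon and not (span & matched):
                matched |= span
                p_pos += lexicon[term][0]
                p_neg += lexicon[term][1]
    if not matched:
        return set()
    prefix = 'ap' if p_pos >= p_neg else 'an'  # most prominent polarity
    return {prefix + '-' + w for w in content_words}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arguing-lexicon Features",
"sec_num": "4.1.1"
},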
{
"text": "Modals words such as \"must\" and \"should\" are usually good indicators of arguing. This is a small closed set. Also, the target (what the arguing is about) is syntactically associated with the modal word, which means it can be relatively accurately extracted by using a small set of syntactic rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modal Verb Features for Arguing",
"sec_num": "4.1.2"
},
{
"text": "For every modal detected, three features are created by combining the modal word with its subject and object. Note that all the different modals are replaced by \"should\" while creating features. This helps to create more general features. For example, given a sentence \"They must be available to all people\", the method creates three features \"they should\", \"should available\" and \"they should available\". These patterns are created independently of the arguing lexicon matches, and added to the feature set for the post.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modal Verb Features for Arguing",
"sec_num": "4.1.2"
},
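{
"text": "The sketch below illustrates the modal features; the extraction of (modal, subject, object) triples by syntactic rules over a parse is assumed, and the triple format is our simplification.

MODALS = {'must', 'should', 'ought', 'shall'}  # assumed closed set

def modal_features(modal_triples):
    # modal_triples: (modal, subject, object) triples found in a post.
    feats = set()
    for modal, subj, obj in modal_triples:
        if modal not in MODALS:
            continue
        # All modals are replaced by 'should' to create more general features;
        # 'They must be available to all people' yields 'they should',
        # 'should available' and 'they should available'.
        feats |= {subj + ' should', 'should ' + obj, subj + ' should ' + obj}
    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modal Verb Features for Arguing",
"sec_num": "4.1.2"
},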
{
"text": "Sentiment-based features are created independent of arguing features. In order to detect sentiment opinions, we use a sentiment lexicon . In addition to positive ( + ) and negative ( \u2212 ) words, this lexicon also contains subjective words that are themselves neutral ( = ) with respect to polarity. Examples of neutral entries are \"absolutely\", \"amplify\", \"believe\", and \"think\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment-based Features",
"sec_num": "4.2"
},
{
"text": "We find the sentiment polarity of the entire sentence and assign this polarity to each content word in the sentence (denoted, for example, as target + ). In order to detect the sentence polarity, we use the Vote and Flip algorithm from Choi and Cardie (2009) . This algorithm essentially counts the number of positive, negative and neutral lexicon hits in a given expression and accounts for negator words. The algorithm is used as is, except for the default polarity assignment (as we do not know the most prominent polarity in the corpus). Note that the Vote and Flip algorithm has been developed for expressions but we employ it on sentences. Once the polarity of a sentence is determined, we create sentiment features for the sentence. This is done for all sentences in the post.",
"cite_spans": [
{
"start": 236,
"end": 258,
"text": "Choi and Cardie (2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment-based Features",
"sec_num": "4.2"
},
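{
"text": "A simplified sketch of the voting-and-flipping idea and the resulting features; this is our loose reading, not a faithful reimplementation of Choi and Cardie (2009), and the negator word list is assumed.

from collections import Counter

def sentence_polarity(words, lexicon, negators):
    # lexicon: word -> '+', '-' or '='; the majority vote is flipped
    # when a negator word is present.
    votes = Counter(lexicon[w] for w in words if w in lexicon)
    if votes['+'] > votes['-']:
        polarity = '+'
    elif votes['-'] > votes['+']:
        polarity = '-'
    else:
        polarity = '='
    if polarity != '=' and any(w in negators for w in words):
        polarity = '-' if polarity == '+' else '+'
    return polarity

def sentiment_features(words, content_words, lexicon, negators):
    # Each content word becomes an opinion-target feature, e.g. 'target+'.
    pol = sentence_polarity(words, lexicon, negators)
    return {w + pol for w in content_words}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment-based Features",
"sec_num": "4.2"
},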
{
"text": "Experiments are carried out on debate posts from the following four domains: Gun Rights, Gay Rights, Abortion, and Creationism. For each domain, a corpus with equal class distribution is created as follows: we merge all debates and sample instances (posts) from the majority class to obtain equal numbers of instances for each stance. This gives us a total of 2232 posts in the corpus: 306 posts for the Gun Rights domain, 846 posts for the Gay Rights domain, 550 posts for the Abortion domain and 530 posts for the Creationism domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Our first baseline is a distribution-based baseline, which has an accuracy of 50%. We also construct Unigram, a system based on unigram content information, but no explicit opinion information. Unigrams are reliable for stance classification in political domains (as seen in Kim and Hovy, 2007) ). Intuitively, evoking a particular topic can be indicative of a stance. For example, a participant who chooses to speak about \"child\" and \"life\" in an abortion debate is more likely from an against-abortion side, while someone speaking about \"woman\", \"rape\" and \"choice\" is more likely from a for-abortion stance.",
"cite_spans": [
{
"start": 275,
"end": 294,
"text": "Kim and Hovy, 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We construct three systems that use opinion information: The Sentiment system that uses only the sentiment features described in Section 4.2, the Arguing system that uses only arguing features constructed in Section 4.1, and the Arg+Sent system that uses both sentiment and arguing features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "All systems are implemented using a standard implementation of SVM in the Weka toolkit (Hall et al., 2009) . We measure performance using the accu-racy metric. Table 4 shows the accuracy averaged over 10 fold cross-validation experiments for each domain. The first row (Overall) reports the accuracy calculated over all 2232 posts in the data.",
"cite_spans": [
{
"start": 87,
"end": 106,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
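{
"text": "The paper uses Weka's SVM; as a rough analogue, the sketch below shows the same setup (binary opinion-target features, 10-fold cross-validation, accuracy) in scikit-learn. This is an illustration under our assumptions, not the authors' configuration.

from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def evaluate(post_features, stances):
    # post_features: one {feature_name: 1} dict per post (binary features);
    # stances: the stance label for each post.
    X = DictVectorizer().fit_transform(post_features)
    scores = cross_val_score(LinearSVC(), X, stances, cv=10, scoring='accuracy')
    return scores.mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},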
{
"text": "Overall, we notice that all the supervised systems perform better than the distribution-based baseline. Observe that Unigram has a better performance than Sentiment. The good performance of Unigram indicates that what participants choose to speak about is a good indicator of ideological stance taking. This result confirms previous researchers' intuition that, in general, political orientation is a function of \"authors' attitudes over multiple issues rather than positive or negative sentiment with respect to a single issue\" (Pang and Lee, 2008) . Nevertheless, the Arg+Sent system that uses both arguing and sentiment features outperforms Unigram.",
"cite_spans": [
{
"start": 529,
"end": 549,
"text": "(Pang and Lee, 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "We performed McNemar's test to measure the difference in system behaviors. The test was performed on all pairs of supervised systems using all 2232 posts. The results show that there is a significant difference between the classification behavior of Unigram and Arg+Sent systems (p < 0.05). The difference between classifications of Unigram and Arguing approaches significance (p < 0.1). There is no significant difference in the behaviors of all other system pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
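{
"text": "For reference, McNemar's test on paired system predictions can be computed as below; a sketch assuming statsmodels, with the prediction lists as hypothetical inputs.

from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_pvalue(y_true, pred_a, pred_b):
    # Build the 2x2 table of where the two systems are right/wrong on the
    # same posts; the off-diagonal disagreement cells drive the test.
    a_ok = [p == t for p, t in zip(pred_a, y_true)]
    b_ok = [p == t for p, t in zip(pred_b, y_true)]
    table = [[sum(a and b for a, b in zip(a_ok, b_ok)),
              sum(a and not b for a, b in zip(a_ok, b_ok))],
             [sum(not a and b for a, b in zip(a_ok, b_ok)),
              sum(not a and not b for a, b in zip(a_ok, b_ok))]]
    return mcnemar(table, exact=False, correction=True).pvalue",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},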
{
"text": "Moving on to detailed performance in each domain, we see that Unigram outperforms Sentiment for all domains. Arguing and Arg+Sent outperform Unigram for three domains (Guns, Gay Rights and Abortion), while the situation is reversed for one domain (Creationism). We carried out separate t-tests for each domain, using the results from each test fold as a data point. Our results indicate that the performance of Sentiment is significantly different from all other systems for all domains. However there is no significant difference between the performance of the remaining systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "On manual inspection of the top features used by the classifiers for discriminating the stances, we found that there is an overlap between the content words used by Unigram, Arg+Sent and Arguing. For example, in the Gay Rights domain, \"understand\" and \"equal\" are amongst the top features in Unigram, while \"ap-understand\" (positive arguing for \"understand\") and \"ap-equal\" are top features for Arg+Sent. However, we believe that Arg+Sent makes finer and more insightful distinctions based on polarity of opinions toward the same set of words. Table 5 lists some interesting features in the Gay Rights domain for Unigram and Arg+Sent. Depending on whether positive or negative attribute weights were assigned by the SVM learner, the features are either indicative of for-gay rights or against-gay rights. Even though the features for Unigram are intuitive, it is not evident if a word is evoked as, for example, a pitch, concern, or denial. Also, we do not see a clear separation of the terms (for e.g., \"bible\" is an indicator for against-gay rights while \"christianity\" is an indicator for for-gay rights)",
"cite_spans": [],
"ref_spans": [
{
"start": 544,
"end": 551,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "The arguing features from Arg+Sent seem to be relatively more informative -positive arguing about \"christianity\", \"corinthians\", \"mormonism\" and \"bible\" are all indicative of against-gay rights stance. These are indeed beliefs and concerns that shape an against-gay rights stance. On the other hand, negative arguings with these same words denote a for-gay rights stance. Presumably, these occur in refutations of the concerns influencing the opposite side. Likewise, the appeal for equal rights for gays is captured positive arguing about \"liberty\", \"independence\", \"pursuit\" and \"suffrage\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "Interestingly, we found that our features also capture the ideas of opinion variety and same and alternative targets as defined in previous research (Somasundaran et al., 2008) -in Table 5 , items that are similar (e.g., \"christianity\" and \"corinthians\") have similar opinions toward them for a given stance (for e.g., ap-christianity and ap-corinthians belong to against-gay rights stance while an-christianity and an-corinthians belong to for-gay rights stance). Additionally, items that are alternatives (e.g. \"gay\" and \"heterosexuality\") have opposite polarities associated with them for a given stance, that is, positive arguing for \"heterosexuality\" and negative arguing for \"gay\" reveal the the same stance.",
"cite_spans": [
{
"start": 149,
"end": 176,
"text": "(Somasundaran et al., 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "In general, unigram features associate the choice of topics with the stances, while the arguing features can capture the concerns, defenses, appeals or denials that signify each side (though we do not explicitly encode these fine-grained distinctions in this work). Interestingly, we found that sentiment features in Arg+Sent are not as informative as the arguing features discussed above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "Generally, research in identifying political viewpoints has employed information from words in the document (Malouf and Mullen, 2008; Mullen and Malouf, 2006; Grefenstette et al., 2004; Laver et al., 2003; Martin and Vanberg, 2008; Lin, 2006) . Specifically, Lin et al. observe that people from opposing perspectives seem to use words in differing frequencies. On similar lines, Kim and Hovy (2007) use unigrams, bigrams and trigrams for election prediction from forum posts. In contrast, our work specifically employs sentiment-based and arguing-based features to perform stance classification in political debates. Our experiments are focused on determining how different opinion expressions reinforce an overall political stance. Our results indicate that while unigram information is reliable, further improvements can be achieved in certain domains using our opinion-based approach. Our work is also complementary to that by Greene and Resnik (2009) , which focuses on syntactic packaging for recognizing perspectives.",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "(Malouf and Mullen, 2008;",
"ref_id": "BIBREF12"
},
{
"start": 134,
"end": 158,
"text": "Mullen and Malouf, 2006;",
"ref_id": "BIBREF14"
},
{
"start": 159,
"end": 185,
"text": "Grefenstette et al., 2004;",
"ref_id": "BIBREF6"
},
{
"start": 186,
"end": 205,
"text": "Laver et al., 2003;",
"ref_id": "BIBREF9"
},
{
"start": 206,
"end": 231,
"text": "Martin and Vanberg, 2008;",
"ref_id": "BIBREF13"
},
{
"start": 232,
"end": 242,
"text": "Lin, 2006)",
"ref_id": "BIBREF11"
},
{
"start": 379,
"end": 398,
"text": "Kim and Hovy (2007)",
"ref_id": "BIBREF8"
},
{
"start": 930,
"end": 954,
"text": "Greene and Resnik (2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Unigram Features constitution, fundamental, rights, suffrage, pursuit, discrimination, government, happiness, shame, wed, gay, heterosexuality, chromosome, evolution, genetic, christianity, mormonism, corinthians, procreate, adopt pervert, hormone, liberty, fidelity, naval, retarded, orientation, private, partner, kingdom, bible, sin, bigot Arguing Features from Arg+Sent ap-constitution, ap-fundamental, ap-rights, ap-hormone, ap-liberty, ap-independence, ap-suffrage, ap-pursuit, apdiscrimination, an-government, ap-fidelity, ap-happiness, an-pervert, an-naval, an-retarded, an-orientation, an-shame, ap-private, ap-wed, ap-gay, an-heterosexuality, ap-partner, ap-chromosome, ap-evolution, ap-genetic, an-kingdom, anchristianity, an-mormonism, an-corinthians, an-bible, an-sin, an-bigot, an-procreate, ap-adopt, an-constitution, an-fundamental, an-rights, an-hormone, an-liberty, an-independence, an-suffrage, an-pursuit, andiscrimination, ap-government, an-fidelity, an-happiness, ap-pervert, ap-naval, ap-retarded, ap-orientation, ap-shame, an-private, an-wed, an-gay, ap-heterosexuality, an-partner, an-chromosome, an-evolution, an-genetic, ap-kingdom, apchristianity, ap-mormonism, ap-corinthians, ap-bible, ap-sin, ap-bigot, ap-procreate, an-adopt Discourse-level participant relation, that is, whether participants agree/disagree has been found useful for determining political side-taking (Thomas et al., 2006; Bansal et al., 2008; Agrawal et al., 2003; Malouf and Mullen, 2008) . Agreement/disagreement relations are not the main focus of our work. Other work in the area of polarizing political discourse analyze co-citations (Efron, 2004) and linking patterns (Adamic and Glance, 2005) . In contrast, our focus is on document content and opinion expressions. Somasundaran et al. (2007b) have noted the usefulness of the arguing category for opinion QA. Our tasks are different; they use arguing to retrieve relevant answers, but not distinguish stances. Our work is also different from related work in the domain of product debates (Somasundaran and Wiebe, 2009) in terms of the methodology. Wilson (2007) manually adds positive/negative arguing information to entries in a sentiment lexicon from and uses these as arguing features. Our arguing trigger expressions are separate from the sentiment lexicon entries and are derived from a corpus. Our n-gram trigger expressions are also different from manually created regular expression-based arguing lexicon for speech data (Somasundaran et al., 2007a) .",
"cite_spans": [
{
"start": 1400,
"end": 1421,
"text": "(Thomas et al., 2006;",
"ref_id": "BIBREF20"
},
{
"start": 1422,
"end": 1442,
"text": "Bansal et al., 2008;",
"ref_id": "BIBREF2"
},
{
"start": 1443,
"end": 1464,
"text": "Agrawal et al., 2003;",
"ref_id": "BIBREF1"
},
{
"start": 1465,
"end": 1489,
"text": "Malouf and Mullen, 2008)",
"ref_id": "BIBREF12"
},
{
"start": 1639,
"end": 1652,
"text": "(Efron, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 1674,
"end": 1699,
"text": "(Adamic and Glance, 2005)",
"ref_id": null
},
{
"start": 1773,
"end": 1800,
"text": "Somasundaran et al. (2007b)",
"ref_id": "BIBREF18"
},
{
"start": 2046,
"end": 2076,
"text": "(Somasundaran and Wiebe, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 2106,
"end": 2119,
"text": "Wilson (2007)",
"ref_id": "BIBREF23"
},
{
"start": 2487,
"end": 2515,
"text": "(Somasundaran et al., 2007a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Against Gay Rights",
"sec_num": null
},
{
"text": "In this paper, we explore recognizing stances in ideological on-line debates. We created an arguing lex-icon from the MPQA annotations in order to recognize arguing, a prominent type of linguistic subjectivity in ideological stance taking. We observed that opinions or targets in isolation are not as informative as their combination. Thus, we constructed opinion target pair features to capture this information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We performed supervised learning experiments on four different domains. Our results show that both unigram-based and opinion-based systems perform better than baseline methods. We found that, even though our sentiment-based system is able to perform better than the distribution-based baseline, it does not perform at par with the unigram system. However, overall, our arguing-based system does as well as the unigram-based system, and our system that uses both arguing and sentiment features obtains further improvement. Our feature analysis suggests that arguing features are more insightful than unigram features, as they make finer distinctions that reveal the underlying ideologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "As used in this work, sentiment is a type of linguistic subjectivity, specifically positive and negative expressions of emotions, judgments, and evaluationsWilson, 2007;Somasundaran et al., 2008).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The political blogosphere and the 2004 u.s. election: Divided they blog",
"authors": [
{
"first": "Lada",
"middle": [
"A."
],
"last": "Adamic",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Glance",
"suffix": ""
}
],
"year": 2005,
"venue": "LinkKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lada A. Adamic and Natalie Glance. 2005. The political blogosphere and the 2004 u.s. election: Divided they blog. In LinkKDD.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Mining newsgroups using networks arising from social behavior",
"authors": [
{
"first": "Rakesh",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Sridhar",
"middle": [],
"last": "Rajagopalan",
"suffix": ""
},
{
"first": "Ramakrishnan",
"middle": [],
"last": "Srikant",
"suffix": ""
},
{
"first": "Yirong",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2003,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rakesh Agrawal, Sridhar Rajagopalan, Ramakrishnan Srikant, and Yirong Xu. 2003. Mining newsgroups using networks arising from social behavior. In WWW.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The power of negative thinking: Exploiting label disagreement in the min-cut classification framework",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING-2008)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Claire Cardie, and Lillian Lee. 2008. The power of negative thinking: Exploiting label dis- agreement in the min-cut classification framework. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING-2008).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adapting a polarity lexicon using integer linear programming for domainspecific sentiment classification",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "590--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domain- specific sentiment classification. In Proceedings of the 2009 Conference on Empirical Methods in Natu- ral Language Processing, pages 590-598, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cultural orientation: Classifying subjective documents by cocitation analysis",
"authors": [
{
"first": "Miles",
"middle": [],
"last": "Efron",
"suffix": ""
}
],
"year": 2004,
"venue": "AAAI Fall Symposium on Style and Meaning in Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miles Efron. 2004. Cultural orientation: Classifying subjective documents by cocitation analysis. In AAAI Fall Symposium on Style and Meaning in Language, Art, and Music.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "More than words: Syntactic packaging and implicit sentiment",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Greene",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "503--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 503-511, Boulder, Colorado, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Coupling niche browsers and affect analysis for an opinion mining application",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "James",
"middle": [
"G"
],
"last": "Shanahan",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceeding of RIAO-04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Grefenstette, Yan Qu, James G. Shanahan, and David A. Evans. 2004. Coupling niche browsers and affect analysis for an opinion mining application. In Proceeding of RIAO-04, Avignon, FR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The weka data mining software: An update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "SIGKDD Explorations",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: An update. In SIGKDD Explorations, Volume 11, Issue 1.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Crystal: Analyzing predictive opinions on the web",
"authors": [
{
"first": "Soo-Min",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "1056--1064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo-Min Kim and Eduard Hovy. 2007. Crystal: Ana- lyzing predictive opinions on the web. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1056-1064.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Extracting policy positions from political texts using words as data",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Laver",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Benoit",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Garry",
"suffix": ""
}
],
"year": 2003,
"venue": "American Political Science Review",
"volume": "97",
"issue": "2",
"pages": "311--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Laver, Kenneth Benoit, and John Garry. 2003. Extracting policy positions from political texts using words as data. American Political Science Review, 97(2):311-331.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Which side are you on? Identifying perspectives at the document and sentence levels",
"authors": [
{
"first": "Wei-Hao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Hauptmann",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL-2006)",
"volume": "",
"issue": "",
"pages": "109--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexander Hauptmann. 2006. Which side are you on? Identifying perspectives at the document and sen- tence levels. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL- 2006), pages 109-116, New York, New York.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identifying perspectives at the document and sentence levels using statistical models",
"authors": [
{
"first": "Wei-Hao",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Doctoral Consortium",
"volume": "",
"issue": "",
"pages": "227--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Hao Lin. 2006. Identifying perspectives at the doc- ument and sentence levels using statistical models. In Proceedings of the Human Language Technology Con- ference of the NAACL, Companion Volume: Doctoral Consortium, pages 227-230, New York City, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Taking sides: Graph-based user classification for informal online political discourse",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Malouf",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Mullen",
"suffix": ""
}
],
"year": 2008,
"venue": "Internet Research",
"volume": "18",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Malouf and Tony Mullen. 2008. Taking sides: Graph-based user classification for informal online po- litical discourse. Internet Research, 18(2).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A robust transformation procedure for interpreting political text",
"authors": [
{
"first": "Lanny",
"middle": [
"W"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Vanberg",
"suffix": ""
}
],
"year": 2008,
"venue": "Political Analysis",
"volume": "16",
"issue": "1",
"pages": "93--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lanny W. Martin and Georg Vanberg. 2008. A ro- bust transformation procedure for interpreting political text. Political Analysis, 16(1):93-100.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A preliminary investigation into sentiment analysis of informal political discourse",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Mullen",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2006,
"venue": "AAAI 2006 Spring Symposium on Computational Approaches to Analysing Weblogs",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Mullen and Robert Malouf. 2006. A preliminary investigation into sentiment analysis of informal po- litical discourse. In AAAI 2006 Spring Symposium on Computational Approaches to Analysing Weblogs (AAAI-CAAW 2006).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Opinion mining and sentiment analysis",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Foundations and Trends in Information Retrieval",
"volume": "2",
"issue": "1-2",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Infor- mation Retrieval, Vol. 2(1-2):pp. 1-135.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recognizing stances in online debates",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "226--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran and Janyce Wiebe. 2009. Rec- ognizing stances in online debates. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 226-234, Suntec, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Detecting arguing and sentiment in meetings",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2007,
"venue": "SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran, Josef Ruppenhofer, and Janyce Wiebe. 2007a. Detecting arguing and sentiment in meetings. In SIGdial Workshop on Discourse and Di- alogue, Antwerp, Belgium, September.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Qa with attitude: Exploiting opinion type analysis for improving question answering in on-line discussions and the news",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2007,
"venue": "In International Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran, Theresa Wilson, Janyce Wiebe, and Veselin Stoyanov. 2007b. Qa with attitude: Ex- ploiting opinion type analysis for improving question answering in on-line discussions and the news. In In- ternational Conference on Weblogs and Social Media, Boulder, CO.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Discourse level opinion interpretation",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "801--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran, Janyce Wiebe, and Josef Rup- penhofer. 2008. Discourse level opinion interpreta- tion. In Proceedings of the 22nd International Con- ference on Computational Linguistics (Coling 2008), pages 801-808, Manchester, UK, August.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Get out the vote: Determining support or opposition from congressional floor-debate transcripts",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "327--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from con- gressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 327-335, Sydney, Aus- tralia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Annotating attributions and private states",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL Workshop on Frontiers in Corpus Annotation II: Pie in the Sky",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson and Janyce Wiebe. 2005. Annotating attributions and private states. In Proceedings of ACL Workshop on Frontiers in Corpus Annotation II: Pie in the Sky.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Recognizing contextual polarity in phrase-level sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In hltemnlp2005, pages 347-354, Vancouver, Canada.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Fine-grained Subjectivity and Sentiment Analysis: Recognizing the Intensity, Polarity, and Attitudes of private states",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson. 2007. Fine-grained Subjectivity and Sentiment Analysis: Recognizing the Intensity, Polar- ity, and Attitudes of private states. Ph.D. thesis, Intel- ligent Systems Program, University of Pittsburgh.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Examples of debate topics and their stances"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>Negative Arguing Annotations</td><td>Trigger Expr.</td></tr><tr><td>certainly not a foregone conclusion</td><td>certainly not</td></tr><tr><td>has never been any clearer</td><td>has never</td></tr><tr><td>not too cool for kids</td><td>not too</td></tr><tr><td>rather than issuing a letter of ...</td><td>rather than</td></tr><tr><td>there is no explanation for</td><td>there is no</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Positive arguing annotationsTrigger Expr. actually reflects Israel's determination ... actually am convinced that improving ... am convinced bear witness that Mohamed is his ... bear witness can only rise to meet it by making ... can only has always seen usama bin ladin's ...has always"
},
"TABREF3": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Arguing annotations from the MPQA corpus and their corresponding trigger expressions"
},
"TABREF4": {
"num": null,
"content": "<table><tr><td/><td>:</td><td colspan=\"2\">Examples</td><td>of</td><td>positive</td><td>argu-</td></tr><tr><td>ing</td><td colspan=\"2\">(P (positive</td><td colspan=\"3\">arguing|candidate)</td><td>&gt;</td></tr><tr><td colspan=\"6\">P (negative arguing|candidate)) and negative</td></tr><tr><td colspan=\"6\">arguing (P (negative arguing|candidate)</td><td>&gt;</td></tr><tr><td colspan=\"6\">P (positive arguing|candidate))from the arguing</td></tr><tr><td>lexicon</td><td/><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF6": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Accuracy of the different systems"
},
"TABREF7": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Examples of features associated with the stances in Gay Rights domain"
}
}
}
}