{ "paper_id": "W10-0501", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:03:52.537347Z" }, "title": "The \"Nays\" Have It: Exploring Effects of Sentiment in Collaborative Knowledge Sharing", "authors": [ { "first": "Ablimit", "middle": [], "last": "Aji", "suffix": "", "affiliation": { "laboratory": "", "institution": "Emory University", "location": {} }, "email": "aaji@mathcs.emory.edu" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "", "affiliation": { "laboratory": "", "institution": "Emory University", "location": {} }, "email": "eugene@mathcs.emory.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we study what effects sentiment have on the temporal dynamics of user interaction and content generation in a knowledge sharing setting. We try to identify how sentiment influences interaction dynamics in terms of answer arrival, user ratings arrival, community agreement and content popularity. Our study suggests that \"Negativity Bias\" triggers more community attention and consequently more content contribution. Our findings provide insight into how users interact in online knowledge sharing communities, and helpful for improving existing systems.", "pdf_parse": { "paper_id": "W10-0501", "_pdf_hash": "", "abstract": [ { "text": "In this paper we study what effects sentiment have on the temporal dynamics of user interaction and content generation in a knowledge sharing setting. We try to identify how sentiment influences interaction dynamics in terms of answer arrival, user ratings arrival, community agreement and content popularity. Our study suggests that \"Negativity Bias\" triggers more community attention and consequently more content contribution. Our findings provide insight into how users interact in online knowledge sharing communities, and helpful for improving existing systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recently, Collaborative Knowledge Sharing sites( or CQA sites), such as Naver and Yahoo! Answers have exploded in popularity. Already, for many information needs, these sites are becoming valuable alternatives to search engines. Previous studies identified visibility as an important factor for content popularity and developed models in static settings. However, when users post social media content, they might either explicitly or implicitly express their personal attitudes or sentiment. The following example illustrates a question with negative sentiment. Q :Obama keeps saying we need to sacrifice. What sacrifices has he and the gov made collectively and individually? A 1 : Our hard earned tax dollars. 17 \u2191, 2 \u2193 A 2 : None and they never will. 18 \u2191, 2 \u2193 Psychological studies (Smith et al., 2008) suggest that our brain has \"Negativity Bias\" -that is, people automatically devote more attention to negative information than to positive information. Thus, our attitudes may be more heavily influenced by negative opinions. Our hypothesis is that this kind of human cognitive bias would have measurable effects on how users respond to information need in CQA communities. 
Our goal in this paper is to understand how question sentiment influences the dynamics of user interaction in CQA - that is, to understand how users respond to questions of different sentiment, and how question sentiment affects community agreement on the best answer and question popularity.", "cite_spans": [ { "start": 786, "end": 806, "text": "(Smith et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While (Aji et al., 2010) suggests that question category has a marked influence on interaction dynamics, we focus mainly on sentiment in this exploratory study, because sentiment is a high-level but prominent facet of every piece of content. We examine how sentiment may affect the following dimensions:", "cite_spans": [ { "start": 6, "end": 24, "text": "(Aji et al., 2010)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment Influence", "sec_num": "2" }, { "text": "\u2022 Answer Arrival: Measured as the number of answers arriving per minute. \u2022 Vote Arrival: Measured as the number of votes arriving per answer. \u2022 Community Agreement: Mean Reciprocal Rank (MRR), computed by ranking the answers in order of decreasing \"Thumbs up\" ratings and identifying the rank of the actual \"best\" answer, as selected by the asker (a short computational sketch follows this list).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Influence", "sec_num": "2" }, { "text": "MRR = \\frac{1}{|Q|} \\sum_{i=1}^{|Q|} \\frac{1}{rank_i} \\qquad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Influence", "sec_num": "2" }, { "text": "where rank_i is the rank of the best answer among the answers submitted for question i, and |Q| is the number of questions. \u2022 Answer Length, Question Length: We examine whether questions with different sentiment exhibit variations in question and answer length. \u2022 Interest \"Stars\": How many users marked the question as interesting.
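To make the MRR computation in Eq. (1) concrete, the following is a minimal Python sketch (our illustration, not part of the original system; the 'ratings' and 'best' field names are hypothetical) that ranks each question's answers by thumbs-up count and averages the reciprocal rank of the asker-selected best answer:

```python
def mean_reciprocal_rank(questions):
    # Each question is a dict with two hypothetical fields:
    #   'ratings' - thumbs-up count for each answer, in posting order
    #   'best'    - index of the asker-selected best answer
    total = 0.0
    for q in questions:
        # Rank answers by decreasing thumbs-up rating (stable for ties).
        order = sorted(range(len(q['ratings'])),
                       key=lambda i: q['ratings'][i],
                       reverse=True)
        # 1-based rank of the asker-selected best answer.
        rank = order.index(q['best']) + 1
        total += 1.0 / rank
    return total / len(questions)  # (1/|Q|) * sum of reciprocal ranks
```

For instance, in the example question from the Introduction, A2 (18 thumbs up) outranks A1 (17); if the asker had selected A1 as best, that question would contribute 1/2 to the sum.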
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment Influence", "sec_num": "2" }, { "text": "For our study we tracked a total of approximately 10,000 questions, sampled from 20 Yahoo! Answers categories. Specifically, each new question in our tracking list was crawled every five minutes until it was closed. As a result, we obtained approximately 22 million question-answer-feedback snapshots in total. Since labeling all the questions would be expensive, we randomly selected 2,000 questions from this dataset for human labeling, using the Amazon Mechanical Turk service (http://www.mturk.com). Five workers labeled each question as positive, negative, or neutral; the ratings were filtered by majority opinion (at least 3 out of 5 labels). Overall statistics of this dataset are reported in Table 1. The overall inter-rater agreement was 65%.
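As an illustration of the majority filter just described, the following sketch (Python; the label values shown are hypothetical) keeps a question's label only when at least 3 of its 5 workers agree:

```python
from collections import Counter

def majority_label(labels, min_votes=3):
    # labels: the five sentiment judgments for one question, e.g.
    # ['negative', 'negative', 'neutral', 'negative', 'positive']
    label, count = Counter(labels).most_common(1)[0]
    # Keep the label only if a clear majority exists.
    return label if count >= min_votes else None
```

Under this scheme, a question whose five judgments produce no 3-of-5 majority receives no label and is filtered out.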
", "cite_spans": [], "ref_spans": [ { "start": 445, "end": 452, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Dataset Description", "sec_num": "3" }, { "text": "Figure 1 reports answer arrival dynamics for questions with varying sentiment. Answers to negative questions arrive substantially faster than answers to positive or neutral questions, whereas the difference between positive and neutral questions is minor. This strongly confirms the \"Negativity Bias\" effect. Given that questions stay on the category front page for roughly the same amount of time, during which their visibility attracts potential answers, negative sentiment questions on average received more answers than the other two types (4.3 vs. 3.3 and 3.5). It seems that the sentiment expressed in a question contributes more to answer arrival than visibility does. Figure 2 reports rating arrival dynamics. Interestingly, positive ratings arrive much faster to negative questions, whereas positive and negative ratings arrive at roughly the same rate for positive and neutral questions. While this might be partially because negative sentiment questions are more \"attention grabbing\" than other types of questions, we conjecture that this effect is caused by the selection bias of the raters participating in negative question threads, who tend to support answers that strongly agree (or strongly disagree) with the question asker. Surprisingly, community agreement (MRR) on the best answer is lower for negative sentiment questions. On average, negative sentiment questions were also marked as interesting more often than positive or neutral questions. Although this may sound counterintuitive, it is not surprising if we recall how the \"Negativity Bias\" influences user behavior and may increase implicit \"visibility\". All of the above differences are statistically significant (t-test, p = 0.05). In summary, our preliminary exploration indicates that sentiment may have a powerful effect on content contribution dynamics in collaborative question answering, and that it is a promising direction for further study of knowledge sharing communities.
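As a reproducibility aid, the kind of significance check reported above can be sketched as follows (Python with SciPy; the paper does not specify which t-test variant was used, so Welch's unequal-variance version is an assumption here):

```python
from scipy import stats

def significantly_different(sample_a, sample_b, alpha=0.05):
    # E.g., per-question answer counts for negative vs. neutral questions.
    # Welch's two-sample t-test (does not assume equal variances).
    t_stat, p_value = stats.ttest_ind(sample_a, sample_b, equal_var=False)
    return p_value < alpha
```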
", "cite_spans": [], "ref_spans": [ { "start": 680, "end": 688, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" } ], "back_matter": [ { "text": "We thank HP Social Computing Labs for support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Deconstructing Interaction Dynamics in Knowledge Sharing Communities", "authors": [ { "first": "Ablimit", "middle": [], "last": "Aji", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" } ], "year": 2010, "venue": "International Conference on Social Computing, Behavioral Modeling, & Prediction", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ablimit Aji and Eugene Agichtein. 2010. Deconstructing Interaction Dynamics in Knowledge Sharing Communities. International Conference on Social Computing, Behavioral Modeling, & Prediction.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Predicting the popularity of online content", "authors": [ { "first": "Gabor", "middle": [], "last": "Szabo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Huberman", "suffix": "" } ], "year": 2008, "venue": "HP Labs Technical Report", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabor Szabo and Bernardo Huberman. 2008. Predicting the popularity of online content. HP Labs Technical Report.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Social Information Processing in Social News Aggregation", "authors": [ { "first": "Kristina", "middle": [], "last": "Lerman", "suffix": "" } ], "year": 2007, "venue": "IEEE Internet Computing: Special Issue on Social Search", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Lerman. 2007. Social Information Processing in Social News Aggregation. IEEE Internet Computing: Special Issue on Social Search.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Affective Context Moderates the Attention Bias Toward Negative Information", "authors": [ { "first": "Kyle", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Jeff", "middle": [ "T" ], "last": "Larsen", "suffix": "" }, { "first": "Tanya", "middle": [ "L" ], "last": "Chartrand", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Cacioppo", "suffix": "" } ], "year": 2006, "venue": "Journal of Personality and Social Psychology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyle Smith, Jeff T. Larsen, Tanya L. Chartrand, and John T. Cacioppo. 2006. Affective Context Moderates the Attention Bias Toward Negative Information. Journal of Personality and Social Psychology.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Cumulative answer arrival", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Cumulative user ratings arrival", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "num": null, "content": "", "type_str": "table", "text": "Statistics of the temporal dataset.", "html": null }, "TABREF3": { "num": null, "content": "", "type_str": "table", "text": "Agreement, question length, answer length, and star count averaged over question type.", "html": null } } } }