{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:58:15.899607Z"
},
"title": "Bi-Directional Recurrent Neural Ordinary Differential Equations for Social Media Text Classification",
"authors": [
{
"first": "Maunika",
"middle": [],
"last": "Tamire",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Hyderabad",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Anumasa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Hyderabad",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "P",
"middle": [
"K"
],
"last": "Srijith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Hyderabad",
"location": {
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Classification of posts in social media such as Twitter is difficult due to the noisy and short nature of texts. Sequence classification models based on recurrent neural networks (RNN) are popular for classifying posts that are sequential in nature. RNNs assume the hidden representation dynamics to evolve in a discrete manner and do not consider the exact time of the posting. In this work, we propose to use recurrent neural ordinary differential equations (RN-ODE) for social media post classification which consider the time of posting and allow the computation of hidden representation to evolve in a time-sensitive continuous manner. In addition, we propose a novel model, Bi-directional RNODE (Bi-RNODE), which can consider the information flow in both the forward and backward directions of posting times to predict the post label. Our experiments demonstrate that RNODE and Bi-RNODE are effective for the problem of stance classification of rumours in social media.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Classification of posts in social media such as Twitter is difficult due to the noisy and short nature of texts. Sequence classification models based on recurrent neural networks (RNN) are popular for classifying posts that are sequential in nature. RNNs assume the hidden representation dynamics to evolve in a discrete manner and do not consider the exact time of the posting. In this work, we propose to use recurrent neural ordinary differential equations (RN-ODE) for social media post classification which consider the time of posting and allow the computation of hidden representation to evolve in a time-sensitive continuous manner. In addition, we propose a novel model, Bi-directional RNODE (Bi-RNODE), which can consider the information flow in both the forward and backward directions of posting times to predict the post label. Our experiments demonstrate that RNODE and Bi-RNODE are effective for the problem of stance classification of rumours in social media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information disseminated in social media such as Twitter can be useful for addressing several realworld problems like rumour detection, disaster management, and opinion mining. Most of these problems involve classifying social media posts into different categories based on their textual content. For example, classifying the veracity of tweets as False, True, or unverified allows one to debunk the rumours evolving in social media (Zubiaga et al., 2018a) . However, social media text is extremely noisy with informal grammar, typographical errors, and irregular vocabulary. In addition, the character limit (240 characters) imposed by social media such as Twitter make it even harder to perform text classification.",
"cite_spans": [
{
"start": 433,
"end": 456,
"text": "(Zubiaga et al., 2018a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Social media text classification, such as rumour stance classification 1 (Qazvinian et al., 1 Rumour stance classification helps to identify the veracity 2011; Zubiaga et al., 2016; Lukasik et al., 2019) can be addressed effectively using sequence labelling models such as long short term memory (LSTM) networks (Zubiaga et al., 2016; Augenstein et al., 2016; Kochkina et al., 2017; Zubiaga et al., 2018b,a; Dey et al., 2018; Liu et al., 2019; Tian et al., 2020) . Though they consider the sequential nature of tweets, they ignore the temporal aspects associated with the tweets. The time gap between tweets varies a lot and LSTMs ignore this irregularity in tweet occurrences. They are discrete state space models where hidden representation changes from one tweet to another without considering the time difference between the tweets. Considering the exact times at which tweets occur can play an important role in determining the label. If the time gap between tweets is large, then the corresponding labels may not influence each other but can have a very high influence if they are closer.",
"cite_spans": [
{
"start": 92,
"end": 93,
"text": "1",
"ref_id": null
},
{
"start": 160,
"end": 181,
"text": "Zubiaga et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 182,
"end": 203,
"text": "Lukasik et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 312,
"end": 334,
"text": "(Zubiaga et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 335,
"end": 359,
"text": "Augenstein et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 360,
"end": 382,
"text": "Kochkina et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 383,
"end": 407,
"text": "Zubiaga et al., 2018b,a;",
"ref_id": null
},
{
"start": 408,
"end": 425,
"text": "Dey et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 426,
"end": 443,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 444,
"end": 462,
"text": "Tian et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose to use recurrent neural ordinary differential equations (RNODE) (Rubanova et al., 2019) and developed a novel approach bidirectional RNODE (Bi-RNODE), which can naturally consider the temporal information to perform time sensitive classification of social media posts. NODE (Chen et al., 2018 ) is a continuous depth deep learning model that performs transformation of feature vectors in a continuous manner using ordinary differential equation solvers. NODEs bring parameter efficiency and address model selection in deep learning to a great extent. RNODE generalizes RNN by extending NODE for time-series data by considering temporal information associated with the sequential data. Hidden representations are changed continuously by considering the temporal information.",
"cite_spans": [
{
"start": 75,
"end": 98,
"text": "(Rubanova et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 280,
"end": 303,
"text": "NODE (Chen et al., 2018",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose to use RNODE for the task of sequence labeling of posts, which considers arrival times of the posts for updating hidden representaof a rumour post by classifying the reply tweets into different stance classes such as Support, Deny, Question, Comment tions and for classifying the post. In addition, we propose a novel model, Bi-RNODE, which considers not only information from the past but also from the future in predicting the label of the post. Here, continuously evolving hidden representations in the forward and backward directions in time are combined and used to predict the post label. We show the effectiveness of the proposed models on the rumour stance classification problem in Twitter using the RumourEval-2019 (Derczynski et al., 2019) dataset. We found RNODE and Bi-RNODE can improve the social media text classification by effectively making use of the temporal information and is better than LSTMs and gated recurrent units (GRU) with temporal features.",
"cite_spans": [
{
"start": 736,
"end": 761,
"text": "(Derczynski et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We consider the problem of classifying social media posts into different classes. Let D be a collection of N posts, D = {p_i}_{i=1}^{N}. Each post p_i is assumed to be a tuple containing information such as textual and contextual features x_i, the time of the post t_i, and the label associated with the post y_i; thus p_i = (x_i, t_i, y_i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Our aim is to develop a sequence classification model which considers the temporal information t i along with x i for classifying a social media post. In particular, we consider the rumour stance classification problem in Twitter where one classifies tweets into Support, Query, Deny, and Comment class, thus y i \u2208 Y={Support, Query, Deny, Comment}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "NODE were introduced as a continuous depth alternative to Residual Networks (ResNets) (He et al., 2016) . ResNets uses skip connections to avoid vanishing gradient problems when networks grow deeper. Residual block output is computed as",
"cite_spans": [
{
"start": 86,
"end": 103,
"text": "(He et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations",
"sec_num": "2.1"
},
{
"text": "h t+1 = h t + f (h t , \u03b8 t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations",
"sec_num": "2.1"
},
{
"text": "where f () is a neural network (NN) parameterized by \u03b8 t and h t representing the hidden representation at depth t. This update is similar to a step in Euler numerical technique used for solving ordinary differential equations (ODE) dh(t) dt = f (h(t), t, \u03b8). The sequence of residual block operations in ResNets can be seen as a solution to this ODE. Consequently, NODEs can be interpreted as a continuous equivalent of ResNets modeling the evolution of hidden representationsh(t) over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations",
"sec_num": "2.1"
},
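To make the ResNet-Euler correspondence concrete, here is a minimal sketch in PyTorch (an illustrative assumption; the paper does not specify an implementation). Each Euler step h <- h + dt * f(h) mirrors the residual update above:

```python
import torch
import torch.nn as nn

# f plays the role of both the residual block and the ODE dynamics dh/dt = f(h).
f = nn.Sequential(nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 64))

def euler_solve(h0, t0, t1, n_steps=10):
    """Fixed-step Euler integration of dh/dt = f(h); each step is the
    ResNet-style update h <- h + dt * f(h)."""
    h, dt = h0, (t1 - t0) / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h)
    return h

h0 = torch.randn(1, 64)           # initial hidden representation h(0)
hT = euler_solve(h0, 0.0, 1.0)    # h(T) after the continuous-depth transformation
```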
{
"text": "For solving ODE, one can use fixed stepsize numerical techniques such as Euler, Runge-Kutta or adaptive step-size methods like Do-pri5(Dormand and Prince, 1980) . Solving an ODE requires one to specify an initial value h(0) (input x or its transformation) and can compute the value at t using an ODE solver ODESolverCompute(f \u03b8 , h(0), 0, t). An ODE is solved until some end-time T to obtain the final hidden representation h(T ) which is used to predict class labels\u0177. For classification problems, cross-entropy loss is used and parameters are learnt through adjoint sensitivity method (Zhuang et al., 2020; Chen et al., 2018) which provides efficient back-propagation and gradient computations.",
"cite_spans": [
{
"start": 127,
"end": 160,
"text": "Do-pri5(Dormand and Prince, 1980)",
"ref_id": null
},
{
"start": 587,
"end": 608,
"text": "(Zhuang et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 609,
"end": 627,
"text": "Chen et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ordinary Differential Equations",
"sec_num": "2.1"
},
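A sketch of this computation with an adaptive solver and adjoint-based gradients, assuming the third-party torchdiffeq package (its use here is our assumption, not the paper's stated implementation):

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint  # adjoint sensitivity method

class ODEFunc(nn.Module):
    """The dynamics f_theta defining dh(t)/dt = f(h(t), t, theta)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, t, h):
        return self.net(h)

func = ODEFunc()
h0 = torch.randn(1, 64)                        # initial value h(0)
t = torch.tensor([0.0, 1.0])                   # integrate from 0 to end time T
hT = odeint(func, h0, t, method="dopri5")[-1]  # h(T), used to predict the labels
```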
{
"text": "LSTMs are popular for sequence classification but only considers the sequential nature of the data and ignore the temporal features associated with the data in its standard setting. As the posts occur in irregular intervals of time, the nature of a new post will be influenced by the recent posts, influence will be inversely proportional to the time gap. In these situations, it will be beneficial to use a model where the number of transformations depend on the time gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Recurrent NODE",
"sec_num": "3"
},
{
"text": "We propose to use RNODE which considers the arrival time and accordingly the hidden representations are transformed across time. In RNODE, the transformation of a hidden representation h(t i\u22121 ) at time t i\u22121 to h(t i ) at time t i is governed by an ODE parameterized by a NN f (). Unlike standard LSTMs where h(t i ) is obtained from h(t i\u22121 ) as a single NN transformation, RNODE first obtains a hidden representation h \u2032 (t i ) as a solution to an ODE at time t i with initial value h(t i\u22121 ). The number of update steps in the numerical technique used to solve this ODE depends on the time gap t i \u2212t i\u22121 between the consecutive posts. The hidden representation h \u2032 (t i ) and input post x i at time t i are passed through neural network transformation (RNNCell()) to obtain final hidden representation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Recurrent NODE",
"sec_num": "3"
},
{
"text": "h(t i ), i.e., h(t i ) = RNNCell(h \u2032 (t i ), x i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Recurrent NODE",
"sec_num": "3"
},
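A minimal sketch of one RNODE pass over a sequence of posts, again assuming torchdiffeq; the choice of a GRUCell as the RNNCell() and the zero initial state are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint

class RNODE(nn.Module):
    def __init__(self, in_dim, hid_dim=64, n_classes=4):
        super().__init__()
        self.ode_func = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.Tanh(),
                                      nn.Linear(hid_dim, hid_dim))
        self.rnn_cell = nn.GRUCell(in_dim, hid_dim)      # plays the role of RNNCell()
        self.classifier = nn.Linear(hid_dim, n_classes)  # NN() mapping h(t_i) to labels

    def forward(self, xs, ts):
        """xs: (seq_len, in_dim) post features; ts: (seq_len,) increasing posting times."""
        h = torch.zeros(1, self.rnn_cell.hidden_size)
        t_prev = ts[0] - 1e-3  # small offset before the first post (assumption)
        logits = []
        for x, t in zip(xs, ts):
            # Evolve h continuously from t_prev to t; the solver's effort grows
            # with the time gap t - t_prev between consecutive posts.
            h = odeint(lambda s, y: self.ode_func(y), h, torch.stack([t_prev, t]))[-1]
            h = self.rnn_cell(x.unsqueeze(0), h)  # h(t_i) = RNNCell(h'(t_i), x_i)
            logits.append(self.classifier(h))
            t_prev = t
        return torch.cat(logits)  # one stance score vector per post
```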
{
"text": "The process is repeated for every element (x i , t i ) in the sequence. The hidden representations associated with the elements in the sequence are then passed to a neural network (NN()) to obtain the post labels. Using standard cross-entropy loss, the parameters of the models are learnt through backpropagation. Figure 1 provides the detailed architecture of the Bi-directional RNNs (Schuster and Paliwal, 1997 ) such as Bi-LSTMS (Graves et al., 2013) were proven to be successful in many sequence labeling tasks in natural language processing such as POS tagging (Huang et al., 2015) . They use the information from the past and future to predict the label while standard LSTMs consider only information from the past. We propose a Bi-RNODE model, which uses the sequence of input observations from past and from the future to predict the post label at any time t. It assumes the hidden representation dynamics are influenced not only by the past posts but also by the futures posts. Unlike Bi-LSTMs, Bi-RNODE considers the exact time of the posts and their inter-arrival times in determining the transformations in the hidden representations. Bi-RNODE consists of two RNODE blocks, one performing transformations in the forward direction (in the order of posting times) and the other in the backward direction. The hidden representations H and H b computed by forward and backward RNODE respectively are aggregated either by concatenation or by averaging appropriately to obtain a final hidden representation and is passed through a NN to obtain the post labels. Bi-RNODE is useful when a sequence of posts with their time of occurrence needs to be classified together. Figure 2 provides an overview of Bi-RNODE model for post classification. For Bi-RNODE, an extra neural network f \u03b8 \u2032 () is required to compute hidden representations h b (t \u2032 i ) in the backward direction. Training in Bi-RNODE is done in a similar manner to RNODE, with cross-entropy loss and back-propagation to estimate parameters.",
"cite_spans": [
{
"start": 385,
"end": 412,
"text": "(Schuster and Paliwal, 1997",
"ref_id": "BIBREF14"
},
{
"start": 432,
"end": 453,
"text": "(Graves et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 566,
"end": 586,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 314,
"end": 322,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1674,
"end": 1682,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Bi-Directional Recurrent NODE",
"sec_num": "3"
},
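The aggregation step can be sketched as follows; H_f and H_b stand for the hidden states produced by the forward and backward RNODE blocks (hypothetical names, with H_b assumed re-aligned to posting order):

```python
import torch
import torch.nn as nn

hid_dim, n_classes, seq_len = 64, 4, 5
classifier = nn.Linear(2 * hid_dim, n_classes)  # sized for concatenation

# Hidden states from the forward and backward RNODE blocks; row i of each
# corresponds to post i after re-aligning the backward pass to posting order.
H_f = torch.randn(seq_len, hid_dim)
H_b = torch.randn(seq_len, hid_dim)

H = torch.cat([H_f, H_b], dim=-1)  # concatenation; averaging would be 0.5 * (H_f + H_b)
logits = classifier(H)             # one Support/Query/Deny/Comment score vector per post
```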
{
"text": "To demonstrate the effectiveness of the proposed approaches, we consider the stance classification problem in Twitter and RumourEval-2019 (Derczynski et al., 2019 data set. This Twitter data set consists of rumours associated with eight events. Each event has a collection of tweets labelled with one of the four labels -Support, Query, Deny and Comment. We picked four major events Charliehebdo, Ferguson, Ottawashooting and Sydneysiege (each with approximately 1000 tweets per event) from RumourEval-2019 to perform experiments.",
"cite_spans": [
{
"start": 122,
"end": 137,
"text": "RumourEval-2019",
"ref_id": null
},
{
"start": 138,
"end": 162,
"text": "(Derczynski et al., 2019",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Features : For dataset preparation, each data point x i associated with a Tweet includes text embedding, retweet count, favourites count, punctuation features, negative and positive word count, presence of hashtags, user mentions, URLs etc. obtained from the tweet. The text embedding of the tweet is obtained by concatenating the word embeddings 2 . Each tweet timestamp is converted to epoch time and Min-Max normalization is applied over the time stamps associated with each event to keep the duration of the event in the interval [0, 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
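A small sketch of the timestamp preprocessing described above (the example timestamps are illustrative):

```python
import numpy as np

def normalize_times(epoch_times):
    """Min-Max normalize one event's epoch timestamps into [0, 1]."""
    t = np.asarray(epoch_times, dtype=float)
    return (t - t.min()) / (t.max() - t.min())

# Three tweets from one event, as seconds since the epoch:
print(normalize_times([1420974000, 1420974600, 1420980000]))  # [0.  0.1 1. ]
```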
{
"text": "We conducted experiments to predict the stance of social media posts propagating in seen events and unseen events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "-Seen Event Here we train, validate and test on tweets of the same event. Each event data is split 60:20:20 ratio in sequence of time. This setup helps in predicting the stance of unseen tweets of the same event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
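The chronological 60:20:20 split can be sketched as follows (a hypothetical helper; posts and times are assumed to be parallel lists):

```python
def chronological_split(posts, times):
    """Order posts by time, then split 60:20:20 into train/val/test."""
    ordered = [p for _, p in sorted(zip(times, posts), key=lambda tp: tp[0])]
    n = len(ordered)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return (ordered[:n_train],                  # earliest 60% for training
            ordered[n_train:n_train + n_val],   # next 20% for validation
            ordered[n_train + n_val:])          # latest 20% for testing
```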
{
"text": "-Unseen Event: This setup helps in evaluating performance on an unseen event and training on a larger dataset. Here, training and validation data are formed using data from 3 events and testing is done on the 4 th event. Last 20% of the training data (after ordering based on time) are set aside for validation. During training, mini-batches are formed only from the tweets belonging to the same event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "Baselines: We compared results of our proposed RNODE and Bi-RNODE models with RNN based baselines such LSTM (Kochkina et al., 2017) , Bi-LSTM (Augenstein et al., 2016) , GRU (Cho et al., 2014) , Bi-GRU, and Majority (labelling with most frequent class) baseline models. We also use a variant of LSTM baseline considering temporal information (Zubiaga et al., 2018b) , LSTM-timeGap where the time gap of consecutive data points is included as part of the input data.",
"cite_spans": [
{
"start": 108,
"end": 131,
"text": "(Kochkina et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 142,
"end": 167,
"text": "(Augenstein et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 174,
"end": 192,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 342,
"end": 365,
"text": "(Zubiaga et al., 2018b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "Evaluation Metrics: We consider the standard evaluation metrics such as precision, recall, F1 and in addition the AUC score to account for the data imbalance. We consider a weighted average of the Table 1 : Performance of all the models on RumourEval-2019 (Derczynski et al., 2019) dataset. First and second rows of each model represents seen event and unseen event experiment results respectively. evaluation metrics to compare the performance of models. Hyperparameters: All the models are trained for 50 epochs with 0.01 learning rate, Adam optimizer, dropout(0.2) regularizer, batchsize of 50, hidden representation size of 64 and cross entropy as the loss function. Different hyperparameters like neural network layers (1, 2), numerical methods (Euler, RK4, Dopri5 for RNODE and Bi-RNODE) and aggregation strategy (concatenation or averaging for Bi-LSTM Bi-GRU and Bi-RNODE) are used for all the models and the best configuration is selected from the validation data for different experimental setups and train/test data splits.",
"cite_spans": [
{
"start": 256,
"end": 281,
"text": "(Derczynski et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
{
"text": "The results of seen event and unseen event experiment setup can be found in Table 1 , where the first and second rows for each model provides results on seen event and unseen event respectively. We can observe from Table 1 that for both seen event and unseen event experiment setup, RNODE and Bi-RNODE models performed better than the baseline models in general for all the 3 events 3 . In particular for the seen event setup, Bi-RNODE gives the best result outperforming RNODE and other models for most of the data sets and measures. Under seen event experiment on Syndneysiege event, we plot the ROC curve for all the models in Figure 3 . We can observe that AUC for Figures 3(a) and 3(e) corresponding to RNODE and Bi-RNODE respectively are higher than LSTM, GRU, Bi-LSTM , and Bi-GRU.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 1",
"ref_id": null
},
{
"start": 215,
"end": 222,
"text": "Table 1",
"ref_id": null
},
{
"start": 630,
"end": 638,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.2"
},
{
"text": "We proposed RNODE, Bi-RNODE models for sequence classification of social media posts. These models consider temporal information of the posts and hidden representation are evolved as solution to ODE. Through experiments, we show these models perform better than LSTMs on rumour stance classification problem in Twitter",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Rumour stance classification helps to identify the veracity of a rumour post by classifying the reply tweets into different stance classes such as Support, Deny, Question, and Comment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using pre-trained word2vec vectors which are trained on Google News dataset: https://code.google.com/p/word2vec, each word is represented as an embedding of size 15.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to space constraint,Table 1presents results for 3 events, Syndneysiege results inFigure 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Stance detection with bidirectional conditional encoding",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "876--885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle Augenstein, Tim Rockt\u00e4schel, Andreas Vla- chos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876-885.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural ordinary differential equations",
"authors": [
{
"first": "T",
"middle": [
"Q"
],
"last": "Ricky",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Rubanova",
"suffix": ""
},
{
"first": "David",
"middle": [
"K"
],
"last": "Bettencourt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Duvenaud",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "6571--6583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 2018. Neural ordinary differen- tial equations. In Advances in neural information processing systems, pages 6571-6583.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Topical stance detection for twitter: A twophase lstm model using attention",
"authors": [
{
"first": "Kuntal",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "Ritvik",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Saroj",
"middle": [],
"last": "Kaushik",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "529--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuntal Dey, Ritvik Shrivastava, and Saroj Kaushik. 2018. Topical stance detection for twitter: A two- phase lstm model using attention. In Advances in Information Retrieval, pages 529-536.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A family of embedded runge-kutta formulae",
"authors": [
{
"first": "R",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dormand",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Prince",
"suffix": ""
}
],
"year": 1980,
"venue": "Journal of computational and applied mathematics",
"volume": "6",
"issue": "1",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John R Dormand and Peter J Prince. 1980. A family of embedded runge-kutta formulae. Journal of compu- tational and applied mathematics, 6(1):19-26.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Speech recognition with deep recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Abdel-Rahman",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "6645--6649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recur- rent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Process- ing, pages 6645-6649.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bidirectional LSTM-CRF models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, W. Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. ArXiv abs/1508.01991.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Turing at SemEval-2017 task 8: Sequential approach to rumour stance classification with branch-LSTM",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Kochkina",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "475--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Kochkina, Maria Liakata, and Isabelle Augen- stein. 2017. Turing at SemEval-2017 task 8: Sequen- tial approach to rumour stance classification with branch-LSTM. In Proceedings of the 11th Interna- tional Workshop on Semantic Evaluation (SemEval- 2017), pages 475-480.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Towards early identification of online rumors based on long short-term memory networks",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2019,
"venue": "Inf. Process. Manag",
"volume": "56",
"issue": "",
"pages": "1457--1467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Liu, X. Jin, and H. Shen. 2019. Towards early iden- tification of online rumors based on long short-term memory networks. Inf. Process. Manag., 56:1457- 1467.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Gaussian processes for rumour stance classification in social media",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Lukasik",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Trans. Inf. Syst",
"volume": "37",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michal Lukasik, Kalina Bontcheva, Trevor Cohn, Arkaitz Zubiaga, Maria Liakata, and Rob Procter. 2019. Gaussian processes for rumour stance classifi- cation in social media. ACM Trans. Inf. Syst., 37(2).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Rumor has it: Identifying misinformation in microblogs",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Vahed Qazvinian",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Rosengren",
"suffix": ""
},
{
"first": "Qiaozhu",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1589--1599",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vahed Qazvinian, Emily Rosengren, Dragomir Radev, and Qiaozhu Mei. 2011. Rumor has it: Identifying misinformation in microblogs. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1589-1599.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Latent odes for irregularly-sampled time series",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Rubanova",
"suffix": ""
},
{
"first": "T",
"middle": [
"Q"
],
"last": "Ricky",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Duvenaud",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 33rd International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5320--5330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Rubanova, Ricky TQ Chen, and David Duvenaud. 2019. Latent odes for irregularly-sampled time series. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 5320-5330.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kuldip",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Early detection of rumours on twitter via stance transfer learning",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Xiuzhen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "575--588",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Tian, Xiuzhen Zhang, Yan Wang, and Huan Liu. 2020. Early detection of rumours on twitter via stance transfer learning. In European Conference on Information Retrieval, pages 575-588. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adaptive checkpoint adjoint method for gradient estimation in neural ode",
"authors": [
{
"first": "Juntang",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Nicha",
"middle": [],
"last": "Dvornek",
"suffix": ""
},
{
"first": "Xiaoxiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sekhar",
"middle": [],
"last": "Tatikonda",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "11639--11649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juntang Zhuang, Nicha Dvornek, Xiaoxiao Li, Sekhar Tatikonda, Xenophon Papademetris, and James Dun- can. 2020. Adaptive checkpoint adjoint method for gradient estimation in neural ode. In Inter- national Conference on Machine Learning, pages 11639-11649. PMLR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Detection and resolution of rumours in social media: A survey",
"authors": [
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "51",
"issue": "2",
"pages": "1--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018a. Detection and res- olution of rumours in social media: A survey. ACM Computing Surveys (CSUR), 51(2):1-36.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Stance classification in rumours as a sequential task exploiting the tree structure of social media conversations",
"authors": [
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Kochkina",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Lukasik",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2438--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, and Michal Lukasik. 2016. Stance classi- fication in rumours as a sequential task exploiting the tree structure of social media conversations. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2438-2448.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Discourseaware rumour stance classification in social media using sequential classifiers",
"authors": [
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Kochkina",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Procter",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Lukasik",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Information Processing & Management",
"volume": "54",
"issue": "2",
"pages": "273--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein. 2018b. Discourse- aware rumour stance classification in social media using sequential classifiers. Information Processing & Management, 54(2):273-290.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Architecture details of RNODE",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Bi-RNODE Architecture RNODE model.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>Model</td><td colspan=\"2\">Charliehebdo</td><td/><td colspan=\"2\">Ferguson</td><td/><td colspan=\"2\">Ottawashooting</td></tr><tr><td/><td>AUC F1</td><td colspan=\"3\">Recall Preci-AUC F1</td><td colspan=\"3\">Recall Preci-AUC F1</td><td>Recall Preci-</td></tr><tr><td/><td/><td/><td>sion</td><td/><td/><td>sion</td><td/><td>sion</td></tr><tr><td>RNODE</td><td colspan=\"2\">0.665 0.653 0.674</td><td>0.658</td><td>0.600 0.591</td><td>0.659</td><td>0.598</td><td colspan=\"2\">0.638 0.654 0.692</td><td>0.670</td></tr><tr><td/><td colspan=\"2\">0.638 0.672 0.700</td><td>0.721</td><td>0.618 0.632</td><td>0.677</td><td>0.640</td><td colspan=\"2\">0.659 0.651 0.703</td><td>0.642</td></tr><tr><td colspan=\"3\">Bi-RNODE 0.696 0.659 0.693</td><td>0.629</td><td>0.595 0.599</td><td>0.673</td><td>0.641</td><td colspan=\"2\">0.669 0.667 0.692</td><td>0.658</td></tr><tr><td/><td colspan=\"2\">0.651 0.697 0.737</td><td>0.690</td><td>0.615 0.643</td><td>0.695</td><td>0.635</td><td colspan=\"2\">0.652 0.624 0.662</td><td>0.618</td></tr><tr><td>Bi-LSTM</td><td colspan=\"2\">0.628 0.625 0.679</td><td>0.609</td><td>0.563 0.599</td><td>0.650</td><td>0.614</td><td colspan=\"2\">0.622 0.627 0.654</td><td>0.622</td></tr><tr><td/><td colspan=\"2\">0.662 0.690 0.717</td><td>0.671</td><td>0.603 0.623</td><td>0.667</td><td>0.600</td><td colspan=\"2\">0.650 0.637 0.686</td><td>0.622</td></tr><tr><td>Bi-GRU</td><td colspan=\"2\">0.654 0.643 0.660</td><td>0.641</td><td>0.588 0.571</td><td>0.631</td><td>0.625</td><td colspan=\"2\">0.640 0.651 0.686</td><td>0.644</td></tr><tr><td/><td colspan=\"2\">0.656 0.690 0.724</td><td>0.682</td><td>0.613 0.634</td><td>0.678</td><td>0.611</td><td colspan=\"2\">0.648 0.636 0.683</td><td>0.610</td></tr><tr><td>LSTM</td><td colspan=\"2\">0.625 0.600 0.637</td><td>0.637</td><td>0.567 0.602</td><td>0.650</td><td>0.611</td><td colspan=\"2\">0.605 0.609 0.635</td><td>0.603</td></tr><tr><td/><td colspan=\"2\">0.645 0.690 0.728</td><td>0.686</td><td>0.602 0.611</td><td>0.631</td><td>0.603</td><td colspan=\"2\">0.630 0.626 0.680</td><td>0.627</td></tr><tr><td>GRU</td><td colspan=\"2\">0.616 0.610 0.647</td><td>0.623</td><td>0.578 0.588</td><td>0.664</td><td>0.631</td><td colspan=\"2\">0.591 0.539 0.513</td><td>0.574</td></tr><tr><td/><td colspan=\"2\">0.682 0.695 0.713</td><td>0.686</td><td>0.614 0.640</td><td>0.687</td><td>0.623</td><td colspan=\"2\">0.638 0.632 0.683</td><td>0.618</td></tr><tr><td>LSTM-</td><td colspan=\"2\">0.638 0.631 0.679</td><td>0.605</td><td>0.565 0.581</td><td>0.627</td><td>0.590</td><td colspan=\"2\">0.625 0.640 0.679</td><td>0.650</td></tr><tr><td>timeGap</td><td colspan=\"2\">0.652 0.695 0.732</td><td>0.696</td><td>0.604 0.625</td><td>0.673</td><td>0.633</td><td colspan=\"2\">0.638 0.638 0.683</td><td>0.651</td></tr><tr><td>Majority</td><td colspan=\"2\">0.500 0.456 0.605</td><td>0.366</td><td>0.500 0.518</td><td>0.654</td><td>0.428</td><td colspan=\"2\">0.500 0.485 0.628</td><td>0.395</td></tr><tr><td/><td colspan=\"2\">0.500 0.542 0.673</td><td>0.453</td><td>0.500 0.528</td><td>0.662</td><td>0.439</td><td colspan=\"2\">0.500 0.467 0.614</td><td>0.377</td></tr></table>",
"html": null,
"text": "Figure 3: ROC curves of different models trained on sydneysiege event for seen event experimental setup. Bi-RNODE exhibits better AUC and class separability overall classes.",
"type_str": "table",
"num": null
}
}
}
}