{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:37.027248Z" }, "title": "Language that Captivates the Audience: Predicting Affective Ratings of TED Talks in a Multi-Label Classification Task", "authors": [ { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": {} }, "email": "elma.kerz@ifaar.rwth-aachen.de" }, { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": {} }, "email": "yu.qiao@rwth-aachen.de" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": {} }, "email": "d.wiechmann@uva.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The aim of the paper is twofold: (1) to automatically predict the ratings assigned by viewers to 14 categories available for TED talks in a multi-label classification task and (2) to determine what types of features drive classification accuracy for each of the categories. The focus is on features of language usage from five groups pertaining to syntactic complexity, lexical richness, register-based n-gram measures, information-theoretic measures and LIWCstyle measures. We show that a Recurrent Neural Network classifier trained exclusively on within-text distributions of such features can reach relatively high levels of overall accuracy (69%) across the 14 categories. 
We find that features from two groups are strong predictors of the affective ratings across all categories and that there are distinct patterns of language usage for each rating category.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The aim of the paper is twofold: (1) to automatically predict the ratings assigned by viewers to 14 categories available for TED talks in a multi-label classification task and (2) to determine what types of features drive classification accuracy for each of the categories. The focus is on features of language usage from five groups pertaining to syntactic complexity, lexical richness, register-based n-gram measures, information-theoretic measures and LIWC-style measures. We show that a Recurrent Neural Network classifier trained exclusively on within-text distributions of such features can reach relatively high levels of overall accuracy (69%) across the 14 categories. We find that features from two groups are strong predictors of the affective ratings across all categories and that there are distinct patterns of language usage for each rating category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The ability to communicate competently and effectively is key to personal contentment, academic achievement and professional career success. The ability to communicate competently has even been found to enhance social relationships (Burleson, 2007; Morreale and Pearson, 2008) . In educational and vocational contexts, competent speakers experience more success in sharing their knowledge, ideas and views (De Vries et al., 2010) . One of the essential communication skills is that of giving an informative and impactful public speech. 
The development and mastery of public speaking is recognized as a core communicative competence (Schreiber et al., 2012) and has been incorporated into the educational curricula and standards for both first and second/foreign language (Common Core State Standards Initiative, 2010) . Across various assessment grids and evaluation forms, a speech is considered competent and effective when its communicative intention (such as informing, persuading or entertaining an audience, and most speeches aim at one or more of these) is fulfilled, when it is appropriate to the specific communicative context, and when it matches the expectations of the audience. Given its central role, it is hardly surprising that research on public speaking has given rise to a vast field, scattered across disciplines using a range of methodological approaches (Backlund and Morreale, 2015) . This research has been directed towards understanding the role of both verbal and non-verbal components in the assessment of the multidimensional construct of public speaking competence (Morreale, 2007) . In more recent years, there has been a growing interest in automatic assessment of speaking skills. 
Most studies in this area have primarily focused on the role of auditory and acoustic measures of prosody (such as loudness, voice quality and pitch) and non-verbal cues (such as the use of gestures, eye-contact and posture) in predicting human ratings (see Section 2 on related work for more details).", "cite_spans": [ { "start": 226, "end": 242, "text": "(Burleson, 2007;", "ref_id": "BIBREF5" }, { "start": 243, "end": 270, "text": "Morreale and Pearson, 2008)", "ref_id": "BIBREF35" }, { "start": 400, "end": 423, "text": "(De Vries et al., 2010)", "ref_id": "BIBREF15" }, { "start": 626, "end": 650, "text": "(Schreiber et al., 2012)", "ref_id": "BIBREF46" }, { "start": 766, "end": 812, "text": "(Common Core State Standards Initiative, 2010)", "ref_id": null }, { "start": 1400, "end": 1429, "text": "(Backlund and Morreale, 2015)", "ref_id": "BIBREF0" }, { "start": 1618, "end": 1634, "text": "(Morreale, 2007)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The present paper contributes to and expands this emerging line of research by modeling public speaking skills in a multi-label classification task on the basis of five groups of features of language usage. More specifically, the aim of the paper is twofold: (1) to automatically predict the affective ratings of public speeches assigned by online viewers across fourteen rating categories and (2) to determine what types of features drive classification accuracy in predicting each of the categories. In pursuit of these aims, we use a large open repository of public speaking, TED Talks 1 . TED (Technology, Entertainment and Design) Talks are designed to not exceed the length of 18 minutes and provide succinct and enlightening insights on various topics or ideas that are \"worth spreading.\" Topics presented in these talks range from global warming to what keeps us happy and healthy as we go through life. 
The most popular TED Talks, such as Bren\u00e9 Brown's \"The Power of Vulnerability\", have garnered almost 48 million views. TED presenters are often selected not only on the basis of their expertise on a given topic but also for their ability to communicate effectively and succinctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The dataset used in the paper included 2392 public speaking videos aligned with 5.89 million human ratings. Each TED speech is rated in terms of 14 categories: (1) beautiful, (2) confusing, (3) courageous, (4) fascinating, (5) funny, (6) informative, (7) ingenious, (8) inspiring, (9) jaw-dropping, (10) long-winded, (11) obnoxious, (12) OK, (13) persuasive and (14) unconvincing. The speech transcripts were automatically analyzed using CoCoGen, a computational tool that implements a sliding-window technique to compute a series of measurements ('contours') for a given language feature that captures the distribution of that feature within a text. A Recurrent Neural Network (RNN) classifier, which exploits the sequential information in those contours, was trained on these distributions to predict the 14 rating categories investigated in the paper. The remainder of the paper is organized as follows: Section 2 presents a concise review of related work. Section 3 describes the TED Talk corpus we used along with the viewer ratings. Section 4 presents the tool used for automated text analysis and the five groups of features used in the paper. Section 5 gives a description of the classification model architecture. Section 6 presents the main results and discusses them. 
Finally, Section 7 summarizes the main findings reported in the paper and proposes future research directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 https://www.ted.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A combined use of automated text analysis of authentic language use in large corpora and machine learning techniques has received increasing interest in recent years. Such an approach has been successfully applied in various classification tasks, including detection of personality, gender, and age in the language of social media (Schwartz et al., 2013) , Alzheimer's dementia detection in spontaneous speech (Luz et al., 2020) , author identification and/or verification (Khanh and Vorobeva, 2020) , fake news detection (P\u00e9rez-Rosas et al., 2017) , etc. Most closely related to this paper is research focused on predicting human behavioral responses and human subjective/affective ratings/judgements through automated analysis of speaking samples. Several studies have used verbal and non-verbal cues in predicting audience laughter in humorous speeches (Chen and Lee, 2017) , human ratings of politicians' speaking performance along multiple dimensions, such as expressiveness, monotonicity and persuasiveness (Scherer et al., 2012) , or performance assessments of students' oral presentations (Luzardo et al., 2014) . For example, Luzardo et al. (2014) trained a binary classifier to predict the quality of a presentation; its accuracy is 65% and 69% for features extracted from the slides and the audio track, respectively. In (Pfister and Robinson, 2011) , real-time recognition of affective states and its application to the assessment of public speaking skills is proposed using acoustic features. The skills are predicted with 61-81% accuracy. 
In another interesting application, (Weninger et al., 2012) analysed 143 online speeches hosted on YouTube to classify speakers as achievers, charismatic speakers, or team players with 72.5% accuracy on unseen data. The work most closely related to ours is Weninger et al. (2013) , which predicted the affective ratings of online TED talks using lexical features; online viewers assigned 3 out of 14 predefined rating categories corresponding to the affective states the talks evoked in them. Their models reached average recall rates of 74.9% for positive categories (jaw-dropping, funny, courageous, fascinating, inspiring, ingenious, beautiful, informative, persuasive) and 60.3% for neutral or negative ones (OK, confusing, unconvincing, long-winded, obnoxious) . In summary, as reviewed above, the available literature on automatic assessment and evaluation of speaking competencies based on human judgments or affective ratings has primarily drawn on audio features (prosody of the speech) and/or visual cues.", "cite_spans": [ { "start": 331, "end": 354, "text": "(Schwartz et al., 2013)", "ref_id": "BIBREF47" }, { "start": 410, "end": 428, "text": "(Luz et al., 2020)", "ref_id": "BIBREF30" }, { "start": 473, "end": 499, "text": "(Khanh and Vorobeva, 2020)", "ref_id": "BIBREF23" }, { "start": 522, "end": 548, "text": "(P\u00e9rez-Rosas et al., 2017)", "ref_id": "BIBREF39" }, { "start": 856, "end": 876, "text": "(Chen and Lee, 2017)", "ref_id": "BIBREF8" }, { "start": 1013, "end": 1035, "text": "(Scherer et al., 2012)", "ref_id": "BIBREF45" }, { "start": 1105, "end": 1127, "text": "(Luzardo et al., 2014)", "ref_id": "BIBREF31" }, { "start": 1143, "end": 1164, "text": "Luzardo et al. (2014)", "ref_id": "BIBREF31" }, { "start": 1370, "end": 1398, "text": "(Pfister and Robinson, 2011)", "ref_id": "BIBREF40" }, { "start": 1631, "end": 1654, "text": "(Weninger et al., 2012)", "ref_id": "BIBREF51" }, { "start": 1850, "end": 1872, "text": "Weninger et al. (2013)", "ref_id": "BIBREF52" }, { "start": 2324, "end": 2377, "text": "(OK, confusing, unconvincing, long-winded, obnoxious)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "We analysed the TED Talk data gathered from the ted.com website 2 . We crawled the site and obtained every TED Talk transcript and its metadata from 2006 through 2017, which yielded a total of 2668 talks. Viewers on the Internet can vote for three impression-related labels out of the 14 types of labels: beautiful, confusing, courageous, fascinating, funny, informative, ingenious, inspiring, jaw-dropping, long-winded, obnoxious, OK, persuasive, and unconvincing. The labels are not mutually exclusive and users can select up to three labels for each talk. If only a single label is chosen, it is counted three times. All talks that featured more than one speaker as well as talks that centered around music performances were removed. This resulted in a dataset of 2392 TED talks with a total of 4,139 million views and 5.89 million ratings (see Table 1 ). All ratings were normalized per million views to account for differences in the amount of time that talks have been online. All ratings were binarized by their medians, such that each category has a value of 1 when the rating of a text in this category was above or equal to the median, and 0 otherwise.", "cite_spans": [], "ref_spans": [ { "start": 871, "end": 878, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "The speech transcripts were automatically analyzed using CoCoGen, a computational tool that implements a sliding-window technique to calculate within-text distributions of scores for a given language feature (for current applications of the tool in the context of text classification, see ). 
In this paper, we employ a total of 119 features derived from multi-disciplinary integrated approaches to language (Christiansen and Chater, 2017) that fall into five categories: (1) measures of syntactic complexity, (2) measures of lexical richness, (3) register-based n-gram frequency measures, (4) information-theoretic measures, and (5) LIWC-style (Linguistic Inquiry and Word Count) measures. The first four categories of features are derived from the literature on language development showing that, in the course of their lifespan, humans learn to produce and understand complex syntactic structures, more sophisticated and diverse vocabulary and informationally denser language (Berman, 2007; Hartshorne and Germine, 2015; Ehret and Szmrecsanyi, 2019; Lu, 2010 , 2012; Brysbaert et al., 2019) . The inclusion of features in the fourth category is motivated by recent research on language adaptation (Chang et al., 2012) and research that looks at language from the perspective of complex adaptive systems (Beckner et al., 2009; Christiansen and Chater, 2016a) indicating that, based on accumulated language knowledge emerging from lifelong exposure to various types of language inputs, humans learn to adapt their language to meet the functional requirements of different communicative contexts. The fifth group of features is based on insights from many years of research conducted by Pennebaker and colleagues (Pennebaker et al., 2003; Tausczik and Pennebaker, 2010) , showing that the words people use in their everyday life provide important psychological cues to their thought processes, emotional states, intentions, and motivations.", "cite_spans": [ { "start": 984, "end": 998, "text": "(Berman, 2007;", "ref_id": "BIBREF2" }, { "start": 999, "end": 1028, "text": "Hartshorne and Germine, 2015;", "ref_id": "BIBREF19" }, { "start": 1029, "end": 1057, "text": "Ehret and Szmrecsanyi, 2019;", "ref_id": "BIBREF17" }, { "start": 1058, "end": 1066, "text": "Lu, 2010", "ref_id": "BIBREF28" }, { "start": 1067, "end": 1077, "text": "Lu, , 2012", "ref_id": "BIBREF29" }, { "start": 1078, "end": 1101, "text": "Brysbaert et al., 2019)", "ref_id": "BIBREF4" }, { "start": 1208, "end": 1228, "text": "(Chang et al., 2012)", "ref_id": "BIBREF7" }, { "start": 1314, "end": 1336, "text": "(Beckner et al., 2009;", "ref_id": "BIBREF1" }, { "start": 1337, "end": 1368, "text": "Christiansen and Chater, 2016a)", "ref_id": "BIBREF11" }, { "start": 1721, "end": 1746, "text": "(Pennebaker et al., 2003;", "ref_id": "BIBREF38" }, { "start": 1747, "end": 1777, "text": "Tausczik and Pennebaker, 2010)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Automated Text Analysis", "sec_num": "4" }, { "text": "In contrast to the standard approach implemented in other software for automated text analysis that relies on aggregate scores representing the average value of a feature in a text, the sliding-window approach employed in CoCoGen tracks the distribution of the feature scores within a text. A sliding window can be conceived of as a window of size ws, which is defined by the number of sentences it contains. The window is moved across a text sentence-by-sentence, computing one value per window for a given indicator. 
The series of measurements generated by CoCoGen captures the progression of language performance within a text for a given indicator and is referred to here as a 'complexity contour'. In general, for a text comprising n sentences, there are w = n − ws + 1 windows. 3 CoCoGen uses the Stanford CoreNLP suite (Manning et al., 2014) for performing tokenization, sentence splitting, part-of-speech tagging, lemmatization and syntactic parsing (Probabilistic Context Free Grammar Parser (Klein and Manning, 2003) ). Table 2 provides a concise overview of the features used in this study. The first group consists of 18 features pertaining to syntactic complexity. These features are implemented based on descriptions in Lu (2010) and using the Tregex tree pattern matching tool (Levy and Andrew, 2006) with syntactic parse trees for extracting specific patterns. The second group subsumes 12 features pertaining to lexical richness: five measures of lexical variation, one measure of lexical density, and seven measures of lexical sophistication. The operationalizations of these measures follow those described in Lu (2012) and Str\u00f6bel (2014) . The third group includes 25 n-gram frequency features that are derived from the five register sub-components of the Contemporary Corpus of American English (COCA, (Davies, 2008) ): spoken, magazine, fiction, news and academic language 4 . Our n-gram frequency measures differ from those used in the earlier studies reviewed in Section 2. 
Instead of using only bigrams and trigrams, we extend these to longer word combinations (four- and five-grams) and use a more nuanced definition to operationalize the usage of such combinations, given in Equation 1:", "cite_spans": [ { "start": 782, "end": 783, "text": "3", "ref_id": null }, { "start": 824, "end": 846, "text": "(Manning et al., 2014)", "ref_id": "BIBREF32" }, { "start": 999, "end": 1024, "text": "(Klein and Manning, 2003)", "ref_id": "BIBREF24" }, { "start": 1290, "end": 1313, "text": "(Levy and Andrew, 2006)", "ref_id": "BIBREF25" }, { "start": 1623, "end": 1632, "text": "Lu (2012)", "ref_id": "BIBREF29" }, { "start": 1637, "end": 1651, "text": "Str\u00f6bel (2014)", "ref_id": "BIBREF48" }, { "start": 1817, "end": 1831, "text": "(Davies, 2008)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 1028, "end": 1035, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Automated Text Analysis", "sec_num": "4" }, { "text": "Norm_{n,s,r} = (|C_{n,s,r}| · log[∏_{c ∈ C_{n,s,r}} freq_{n,r}(c)]) / |U_{n,s}| (1) Let A_{n,s} be the list of n-grams (n ∈ [1, 5])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automated Text Analysis", "sec_num": "4" }, { "text": "appearing within a sentence s, B_{n,r} the list of n-grams appearing in the n-gram frequency list of register r (r ∈ {acad, fic, mag, news, spok}) and C_{n,s,r} = A_{n,s} ∩ B_{n,r} the list of n-grams appearing both in s and in the n-gram frequency list of register r. U_{n,s} is defined as the list of unique n-grams in s, and freq_{n,r}(a) the frequency of n-gram a according to the n-gram frequency list of register r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automated Text Analysis", "sec_num": "4" }, { "text": "A total of 25 measures results from the combination of (a) a 'reference list' containing the top 100,000 most frequent n-grams and their frequencies from one of five register subcomponents of the COCA corpus and (b) the size of the n-gram (n ∈ [1, 5]). 
The fourth group includes three information-theoretic measures that are based on Kolmogorov complexity. These measures use the Deflate algorithm (Deutsch, 1996) to compress a text and obtain complexity scores by relating the size of the compressed file to the size of the original file (see (Str\u00f6bel, 2014) for the operationalization and implementation of these measures).", "cite_spans": [ { "start": 398, "end": 413, "text": "(Deutsch, 1996)", "ref_id": "BIBREF16" }, { "start": 544, "end": 559, "text": "(Str\u00f6bel, 2014)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Automated Text Analysis", "sec_num": "4" }, { "text": "We used a Recurrent Neural Network (RNN) classifier, specifically a bidirectional dynamic RNN model with Long Short-term Memory (LSTM) cells. A dynamic RNN was chosen as it can handle sequences of variable length 5 . As shown in Figure 1 , the input of the contour-based model is a sequence X = (x_1, x_2, . . . , x_l, x_{l+1}, . . . , x_n), where x_i, the output of CoCoGen for the i-th window of a document, is a 119-dimensional vector, l is the length of the sequence, n ∈ Z is a number greater than or equal to the length of the longest sequence in the dataset, and x_{l+1}, . . . , x_n are padded 0-vectors. The input of the contour-based model is fed into an RNN which consists of 5 bidirectional LSTM layers with 400 hidden units in each cell. To predict the class of a sequence, we concatenate the hidden variables of the last LSTM layer.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 237, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Classification Models", "sec_num": "5" }, { "text": "[Figure 1: Architecture of the contour-based model. The input sequence x_1, . . . , x_n is processed by five stacked BiLSTM layers (hidden states h_11, . . . , h_5n in both directions), followed by Dense Layer 1, Batch Norm & PReLU & Dropout, Dense Layer 2 and a Sigmoid output y.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BiLSTM", "sec_num": null }, { "text": "The concatenation [→h_5l | ←h_51]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BiLSTM", "sec_num": null }, { "text": "is then transformed through a feed-forward neural network. The feed-forward neural network consists of two fully connected (dense) layers, whose output dimensions are 400 and 14. Between the first and second fully connected layers, a Batch Normalization layer, a Parametric Rectified Linear Unit (PReLU) layer and a dropout layer were added. Before the final output, a sigmoid layer was applied. The dataset was split into training and test sets with a ratio of 80:20, and 5-fold cross-validation was applied. As the loss function for training, binary cross-entropy was used:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BiLSTM", "sec_num": null }, { "text": "L(Ŷ, c) = -(1/C) Σ_{i=1}^{C} (y_i log(ŷ_i) + (1 - y_i) log(1 - ŷ_i))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BiLSTM", "sec_num": null }, { "text": "in which c = (y_1, y_2, . . . , y_C), C = 14 is the number of responses and Ŷ = (ŷ_1, ŷ_2, . . . , ŷ_C) is the output vector of the sigmoid layer rounded to the closest integer. For optimization, we used Adamax with a learning rate η = 0.0001. The dropout rate of the RNN layers and the dropout layer is set to 0.5. The minibatch size is 32, which was shown to be a reasonable value for modern GPUs (Masters and Luschi, 2018) . All models were implemented using PyTorch (Pytorch, 2019 ). Our baseline model has the same network architecture as the model described above, but instead of being trained on complexity contours, it was trained on sentence embeddings extracted from the Universal Sentence Encoder (USE) (Cer et al., 2018) . 
USE takes a sentence as input and outputs a 512-dimensional sentence representation. The pretrained USE model we used was obtained from the Tensorflow (TF) Hub website and, according to TF Hub, the model was trained with a deep averaging network (DAN) encoder (Iyyer et al., 2015) .", "cite_spans": [ { "start": 391, "end": 417, "text": "(Masters and Luschi, 2018)", "ref_id": "BIBREF33" }, { "start": 462, "end": 476, "text": "(Pytorch, 2019", "ref_id": "BIBREF41" }, { "start": 707, "end": 725, "text": "(Cer et al., 2018)", "ref_id": "BIBREF6" }, { "start": 993, "end": 1013, "text": "(Iyyer et al., 2015)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "BiLSTM", "sec_num": null }, { "text": "We report the results of multi-label classification with five-fold cross-validation. The performance metrics of the RNN classification model (global accuracy, precision, recall and macro F1 scores) are presented in Table 3 . The model achieved an overall accuracy of 69% averaged across the 14 rating categories. The highest accuracy was reached for the persuasive category (77%) and lowest for the long-winded category (62%). The results of RNN models trained on each feature set introduced in Section 4 are presented in Table 4 . These results revealed that classification accuracy was mainly driven by LIWC-style and n-gram-based features: The LIWC set was most predictive for 8/14 rating categories (persuasive, courageous, fascinating, inspiring, funny, ingenious, jaw-dropping and confusing), whereas the n-gram set ranked first in 3/14 rating categories (beautiful, unconvincing, obnoxious) . In two rating categories (informative, OK) these two feature sets were equally predictive. Averaged across rating categories, the LIWC-based model achieved slightly better accuracy (+1%) at the cost of using 61 features (36 more) compared to the n-gram-based model trained on 25 features. RNN models trained on the lexical and syntactic features both reached 61% classification accuracy. The RNN trained on syntactic features reached the highest accuracy for the long-winded rating category (61%). The classifier based on the three information-theoretic features achieved 56% accuracy. The finding that n-gram measures figure prominently in the classification is consistent with a growing body of studies indicating that n-gram measures are good predictors of human judgments/ratings of writing and speaking skills in both first and second language (Christiansen and Arnon, 2017; Garner et al., 2020; Saito, 2020) . More generally, the findings also support recent theoretical proposals emphasizing the importance of knowledge of such 'chunks' for human language processing to ameliorate the effects of the 'real-time' constraints on language processing imposed by the limitations of the human sensory system and human memory in combination with the continual deluge of language input (cf. Christiansen and Chater, 2016, 2017, for the 'Now-or-Never bottleneck') .", "cite_spans": [ { "start": 1772, "end": 1802, "text": "(Christiansen and Arnon, 2017;", "ref_id": "BIBREF10" }, { "start": 1803, "end": 1823, "text": "Garner et al., 2020;", "ref_id": "BIBREF18" }, { "start": 1824, "end": 1836, "text": "Saito, 2020)", "ref_id": "BIBREF44" } ], "ref_spans": [ { "start": 215, "end": 222, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 528, "end": 535, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "The result that LIWC-style features emerged as strong predictors is not surprising since previous research employing these measures has provided valuable insight into psychological processes and behavioral outcomes (Tausczik and Pennebaker, 2010; Pennebaker et al., 2015b) . 
A closer examination of how individual features within each feature group distinguished between higher-rated and lower-rated TED talks in a given rating category revealed some interesting patterns. For reasons of space, we focus on the results for the seven most frequently selected categories (persuasive, courageous, fascinating, beautiful, inspiring, informative and funny) for which classification accuracy was consistently greater than 70%. For each feature, we determined the difference between the group means (M_higher-rated - M_lower-rated) for a given rating category. For the persuasive rating category, the top LIWC-style features of highly rated talks concern words related to core drives and needs (power) and personal concerns (risk, money and work), while words related to perceptual processes (see, hear, feel) are associated with talks with lower ratings on the persuasive dimension. Highly persuasive talks are further characterized by a stronger reliance on higher frequency n-grams from more formal language registers: the top-five n-gram features associated with highly persuasive talks concern frequent 3-, 4- and 5-grams from the academic register and 3- and 4-grams from the news register. At the same time, frequent n-grams from the fiction register are indicative of talks rated lower on the persuasive dimension. Regarding lexical richness, we found that highly rated talks in the persuasive rating category exhibit high lexical diversity (CTTR, RTTR, NDW) in combination with high word prevalence and low lexical sophistication (NAWL, NGSL, ANC, BNC), indicating that persuasive talks use words that are widely known rather than those that are advanced and infrequent. At the syntactic level, persuasive talks show tendencies towards higher degrees of subordination (e.g. dependent clauses per T-unit) and phrasal complexity (complex nominals per T-unit) and lower degrees of coordination (coordinate phrases per clause). 
Figure 2 in the appendix shows the frequency with which features of particular types appeared in the top-five most associated (left) or top-five most dissociated (right) features. 6 The figure discloses distinct patterns of language usage for particular rating categories: TED talks with high ratings in the categories beautiful, inspiring, courageous and funny are characterised by a strong reliance on frequent n-grams from the fiction register, words from the LIWC types pronoun and social, and relatively higher scores on indicators of lexical sophistication. At the same time, these talks exhibit relatively low proportions of frequent academic n-grams, low lexical diversity and shorter length of production units. Highly rated talks on the fascinating and ingenious dimensions group together and exhibit higher proportions of n-grams from the spoken register and higher proportions of words from the LIWC function type (notably the personal pronoun I), and are characterized by higher lexical sophistication. Finally, persuasive, ingenious, fascinating and inspiring talks display higher word prevalence scores, which estimate the proportion of the population that knows a given word. 
This result indicates that the inclusion of this newly introduced crowd-sourced language metric (Johns et al., 2020) is a valuable addition to the existing features employed in research on automated assessment of spontaneous speech and calls for the development of additional crowd-sourced language features.", "cite_spans": [ { "start": 215, "end": 246, "text": "(Tausczik and Pennebaker, 2010;", "ref_id": "BIBREF50" }, { "start": 247, "end": 272, "text": "Pennebaker et al., 2015b)", "ref_id": "BIBREF37" }, { "start": 2441, "end": 2442, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 2261, "end": 2267, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "The ability to communicate competently and effectively yields innumerable benefits across a range of social areas, including the enjoyment of congenial personal relationships, educational success, career advancement and, more generally, successful participation in the complex communicative environments of the 21st century. Public speaking is the epitome of spoken communication and, at the same time, the most feared form of communication. The paper contributes to the growing body of research that relies on automatic assessment of speaking skills and machine learning to better understand what makes a public speech effective. We performed a multi-label classification task to automatically predict human affective ratings associated with 14 categories on a large dataset of TED Talk speech transcripts. We demonstrated that a Recurrent Neural Network classifier trained exclusively on within-text distributions of 119 language features can reach relatively high levels of accuracy (> 70%) on eight out of fourteen rating categories. 
We further showed that all rating categories are best predicted by (1) LIWC-style features, which count words in psychologically meaningful categories, and (2) n-gram frequency measures, which reflect the use of register- and genre-specific multiword sequences. Closer analysis of the distributions of particular language features revealed distinct patterns of language usage for particular rating categories. More generally, the present paper responds to recent calls in the machine learning community to use not only black-box models but also explainable (white-box) models, since in any given application domain there is a need for models that are both accurate and understandable (Rudin, 2019; Loyola-Gonzalez, 2019). In this paper, we have demonstrated in the domain of public speaking that models trained on human-interpretable features in combination with deep learning classifiers can compete with black-box models based on word embeddings. Figure 2: Heatmap of features associated (left; blue indicates higher counts) or dissociated (right; red indicates higher counts) with high ratings on each of the top-seven best predicted rating categories. 
Dendrograms represent Euclidean distances among rating categories and features, respectively.", "cite_spans": [ { "start": 1742, "end": 1755, "text": "(Rudin, 2019;", "ref_id": "BIBREF43" }, { "start": 1756, "end": 1778, "text": "Loyola-Gonzalez, 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 2008, "end": 2016, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://www.ted.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Given the constraint that there has to be at least one window, a text has to comprise at least as many sentences as the window size ws is wide.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Corpus of Contemporary American English is the largest genre-balanced corpus of American English, which at the time the measures were derived comprised 560 million words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The lengths of the feature vector sequences depend on the number of sentences in the texts in our corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In cases where a top-five list involved a change in sign, only features with positive values (for the associated feature list) or negative values (for the dissociated feature list) were included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Communication competence: Historical synopsis, definitions, applications, and looking to the future", "authors": [ { "first": "M", "middle": [], "last": "Philip", "suffix": "" }, { "first": "", "middle": [], "last": "Backlund", "suffix": "" }, { "first": "P", "middle": [], "last": "Sherwyn", "suffix": "" }, { "first": "", "middle": [], "last": "Morreale", "suffix": "" } ], 
"year": 2015, "venue": "Communication competence", "volume": "22", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip M Backlund and Sherwyn P Morreale. 2015. Communication competence: Historical synopsis, definitions, applications, and looking to the future. Communication competence, 22:11.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Language is a complex adaptive system: Position paper", "authors": [ { "first": "Clay", "middle": [], "last": "Beckner", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Joan", "middle": [], "last": "Bybee", "suffix": "" }, { "first": "H", "middle": [], "last": "Morten", "suffix": "" }, { "first": "William", "middle": [], "last": "Christiansen", "suffix": "" }, { "first": "", "middle": [], "last": "Croft", "suffix": "" }, { "first": "C", "middle": [], "last": "Nick", "suffix": "" }, { "first": "John", "middle": [], "last": "Ellis", "suffix": "" }, { "first": "Jinyun", "middle": [], "last": "Holland", "suffix": "" }, { "first": "Diane", "middle": [], "last": "Ke", "suffix": "" }, { "first": "", "middle": [], "last": "Larsen-Freeman", "suffix": "" } ], "year": 2009, "venue": "Language learning", "volume": "59", "issue": "", "pages": "1--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clay Beckner, Richard Blythe, Joan Bybee, Morten H Christiansen, William Croft, Nick C Ellis, John Holland, Jinyun Ke, Diane Larsen-Freeman, et al. 2009. Language is a complex adaptive system: Po- sition paper. 
Language learning, 59:1-26.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Developing linguistic knowledge and language use across adolescence", "authors": [ { "first": "A", "middle": [], "last": "Ruth", "suffix": "" }, { "first": "", "middle": [], "last": "Berman", "suffix": "" } ], "year": 2007, "venue": "Blackwell handbooks of developmental psychology", "volume": "", "issue": "", "pages": "347--367", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruth A Berman. 2007. Developing linguistic knowl- edge and language use across adolescence. In M. Shatz E. Hoff, editor, Blackwell handbooks of developmental psychology, page 347-367. Black- well Publishing.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The new general service list: Celebrating 60 years of vocabulary learning. The Language Teacher", "authors": [ { "first": "Charles", "middle": [], "last": "Browne", "suffix": "" } ], "year": 2013, "venue": "", "volume": "37", "issue": "", "pages": "13--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Browne et al. 2013. The new general service list: Celebrating 60 years of vocabulary learning. The Language Teacher, 37(4):13-16.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word prevalence norms for 62,000 english lemmas. Behavior research methods", "authors": [ { "first": "Marc", "middle": [], "last": "Brysbaert", "suffix": "" }, { "first": "Pawe\u0142", "middle": [], "last": "Mandera", "suffix": "" }, { "first": "F", "middle": [], "last": "Samantha", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Mc-Cormick", "suffix": "" }, { "first": "", "middle": [], "last": "Keuleers", "suffix": "" } ], "year": 2019, "venue": "", "volume": "51", "issue": "", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Brysbaert, Pawe\u0142 Mandera, Samantha F Mc- Cormick, and Emmanuel Keuleers. 2019. Word prevalence norms for 62,000 english lemmas. 
Be- havior research methods, 51(2):467-479.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Explaining communication: Contemporary theories and exemplars", "authors": [ { "first": "", "middle": [], "last": "Brant R Burleson", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "105--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brant R Burleson. 2007. Constructivism: A gen- eral theory of communication skill. In W. Samter B. B. Whaley, editor, Explaining communica- tion: Contemporary theories and exemplars, page 105-128. Lawrence Erlbaum Associates Publish- ers.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Universal sentence encoder", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St John", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Guajardo-Cespedes", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.11175" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. 
arXiv preprint arXiv:1803.11175.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Language adaptation and learning: Getting explicit about implicit learning", "authors": [ { "first": "Franklin", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Janciauskas", "suffix": "" }, { "first": "Hartmut", "middle": [], "last": "Fitz", "suffix": "" } ], "year": 2012, "venue": "Language and Linguistics Compass", "volume": "6", "issue": "5", "pages": "259--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franklin Chang, Marius Janciauskas, and Hartmut Fitz. 2012. Language adaptation and learning: Getting explicit about implicit learning. Language and Linguistics Compass, 6(5):259-278.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Predicting audience's laughter using convolutional neural network", "authors": [ { "first": "Lei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chong", "middle": [ "Min" ], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.02584" ] }, "num": null, "urls": [], "raw_text": "Lei Chen and Chong MIn Lee. 2017. Predicting audi- ence's laughter using convolutional neural network. arXiv preprint arXiv:1702.02584.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Common core state standards for english language arts and literacy in history/social studies, science, and technical subjects", "authors": [], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Council of Chief State School Officers & National Governors Association. 2010. 
Common core state standards for english language arts and literacy in history/social studies, science, and technical sub- jects.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "More than words: The role of multiword sequences in language learning and use", "authors": [ { "first": "H", "middle": [], "last": "Morten", "suffix": "" }, { "first": "Inbal", "middle": [], "last": "Christiansen", "suffix": "" }, { "first": "", "middle": [], "last": "Arnon", "suffix": "" } ], "year": 2017, "venue": "Topics in Cognitive Science", "volume": "9", "issue": "3", "pages": "542--551", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morten H Christiansen and Inbal Arnon. 2017. More than words: The role of multiword sequences in language learning and use. Topics in Cognitive Sci- ence, 9(3):542-551.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Creating language: Integrating evolution, acquisition, and processing", "authors": [ { "first": "H", "middle": [], "last": "Morten", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Christiansen", "suffix": "" }, { "first": "", "middle": [], "last": "Chater", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morten H Christiansen and Nick Chater. 2016a. Cre- ating language: Integrating evolution, acquisition, and processing. MIT Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The now-or-never bottleneck: A fundamental constraint on language", "authors": [ { "first": "H", "middle": [], "last": "Morten", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Christiansen", "suffix": "" }, { "first": "", "middle": [], "last": "Chater", "suffix": "" } ], "year": 2016, "venue": "Behavioral and Brain Sciences", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morten H Christiansen and Nick Chater. 2016b. 
The now-or-never bottleneck: A fundamental con- straint on language. Behavioral and Brain Sci- ences, 39.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Towards an integrated science of language", "authors": [ { "first": "H", "middle": [], "last": "Morten", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Christiansen", "suffix": "" }, { "first": "", "middle": [], "last": "Chater", "suffix": "" } ], "year": 2017, "venue": "Nature Human Behaviour", "volume": "", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morten H. Christiansen and Nick Chater. 2017. To- wards an integrated science of language. Nature Human Behaviour, 1(8).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Corpus of Contemporary American English (COCA): 560 million words", "authors": [ { "first": "Mark", "middle": [], "last": "Davies", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Davies. 2008. The Corpus of Contemporary American English (COCA): 560 million words, 1990-present.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Leadership= communication? the relations of leaders' communication styles with leadership styles, knowledge sharing and leadership outcomes", "authors": [ { "first": "", "middle": [], "last": "Reinout E De", "suffix": "" }, { "first": "Angelique", "middle": [], "last": "Vries", "suffix": "" }, { "first": "Wyneke", "middle": [], "last": "Bakker-Pieper", "suffix": "" }, { "first": "", "middle": [], "last": "Oostenveld", "suffix": "" } ], "year": 2010, "venue": "Journal of business and psychology", "volume": "25", "issue": "3", "pages": "367--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinout E De Vries, Angelique Bakker-Pieper, and Wyneke Oostenveld. 2010. Leadership= commu- nication? 
the relations of leaders' communication styles with leadership styles, knowledge sharing and leadership outcomes. Journal of business and psychology, 25(3):367-380.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deflate compressed data format specification version 1.3. IETF RFC", "authors": [ { "first": "Peter", "middle": [], "last": "Deutsch", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Deutsch. 1996. Deflate compressed data format specification version 1.3. IETF RFC 1951.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Compressing learner language: An information-theoretic measure of complexity in sla production data", "authors": [ { "first": "Katharina", "middle": [], "last": "Ehret", "suffix": "" }, { "first": "Benedikt", "middle": [], "last": "Szmrecsanyi", "suffix": "" } ], "year": 2019, "venue": "Second Language Research", "volume": "35", "issue": "1", "pages": "23--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katharina Ehret and Benedikt Szmrecsanyi. 2019. Compressing learner language: An information- theoretic measure of complexity in sla production data. Second Language Research, 35(1):23-45.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Beginning and intermediate l2 writer's use of n-grams: an association measures study", "authors": [ { "first": "James", "middle": [], "last": "Garner", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Crossley", "suffix": "" }, { "first": "Kristopher", "middle": [], "last": "Kyle", "suffix": "" } ], "year": 2020, "venue": "International Review of Applied Linguistics in Language Teaching", "volume": "58", "issue": "", "pages": "51--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Garner, Scott Crossley, and Kristopher Kyle. 2020. Beginning and intermediate l2 writer's use of n-grams: an association measures study. 
In- ternational Review of Applied Linguistics in Lan- guage Teaching, 58(1):51-74.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "When does cognitive functioning peak? the asynchronous rise and fall of different cognitive abilities across the life span", "authors": [ { "first": "K", "middle": [], "last": "Joshua", "suffix": "" }, { "first": "Laura", "middle": [ "T" ], "last": "Hartshorne", "suffix": "" }, { "first": "", "middle": [], "last": "Germine", "suffix": "" } ], "year": 2015, "venue": "Psychological science", "volume": "26", "issue": "4", "pages": "433--443", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshua K Hartshorne and Laura T Germine. 2015. When does cognitive functioning peak? the asyn- chronous rise and fall of different cognitive abil- ities across the life span. Psychological science, 26(4):433-443.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deep unordered composition rivals syntactic methods for text classification", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Manjunatha", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing", "volume": "1", "issue": "", "pages": "1681--1691", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered com- position rivals syntactic methods for text classifi- cation. 
In Proceedings of the 53rd annual meet- ing of the association for computational linguistics and the 7th international joint conference on natu- ral language processing (volume 1: Long papers), pages 1681-1691.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Estimating the prevalence and diversity of words in written language", "authors": [ { "first": "T", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Melody", "middle": [], "last": "Johns", "suffix": "" }, { "first": "Michael N", "middle": [], "last": "Dye", "suffix": "" }, { "first": "", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2020, "venue": "Quarterly Journal of Experimental Psychology", "volume": "73", "issue": "6", "pages": "841--855", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brendan T Johns, Melody Dye, and Michael N Jones. 2020. Estimating the prevalence and diversity of words in written language. Quarterly Journal of Experimental Psychology, 73(6):841-855.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Becoming linguistically mature: Modeling english and german children's writing development across school grades", "authors": [ { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "65--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elma Kerz, Yu Qiao, Daniel Wiechmann, and Mar- cus Str\u00f6bel. 2020. Becoming linguistically mature: Modeling english and german children's writing development across school grades. 
In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 65-74.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A preliminary performance comparison of machine learning algorithms for web author identification of vietnamese online messages", "authors": [ { "first": "Bui", "middle": [], "last": "Khanh", "suffix": "" }, { "first": "Alisa", "middle": [], "last": "Vorobeva", "suffix": "" } ], "year": 2020, "venue": "2020 26th Conference of Open Innovations Association (FRUCT)", "volume": "", "issue": "", "pages": "166--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bui Khanh and Alisa Vorobeva. 2020. A preliminary performance comparison of machine learning algo- rithms for web author identification of vietnamese online messages. In 2020 26th Conference of Open Innovations Association (FRUCT), pages 166-173. IEEE.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D Manning. 2003. Accu- rate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics-Volume 1, pages 423-430. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Tregex and tsurgeon: tools for querying and manipulating tree data structures", "authors": [ { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" } ], "year": 2006, "venue": "LREC", "volume": "", "issue": "", "pages": "2231--2234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Levy and Galen Andrew. 2006. Tregex and tsurgeon: tools for querying and manipulating tree data structures. In LREC, pages 2231-2234. Cite- seer.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Black-box vs", "authors": [ { "first": "Octavio", "middle": [], "last": "Loyola-Gonzalez", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Octavio Loyola-Gonzalez. 2019. Black-box vs.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Understanding their advantages and weaknesses from a practical point of view", "authors": [], "year": null, "venue": "IEEE Access", "volume": "7", "issue": "", "pages": "154096--154113", "other_ids": {}, "num": null, "urls": [], "raw_text": "white-box: Understanding their advantages and weaknesses from a practical point of view. IEEE Access, 7:154096-154113.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Automatic analysis of syntactic complexity in second language writing", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2010, "venue": "International Journal of Corpus Linguistics", "volume": "15", "issue": "4", "pages": "474--496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofei Lu. 2010. Automatic analysis of syntactic complexity in second language writing. 
Interna- tional Journal of Corpus Linguistics, 15(4):474- 496.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The relationship of lexical richness to the quality of ESL learners' oral narratives. The Modern Language", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2012, "venue": "Journal", "volume": "96", "issue": "2", "pages": "190--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofei Lu. 2012. The relationship of lexical richness to the quality of ESL learners' oral narratives. The Modern Language Journal, 96(2):190-208.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Alzheimer's dementia recognition through spontaneous speech: The adress challenge", "authors": [ { "first": "Saturnino", "middle": [], "last": "Luz", "suffix": "" }, { "first": "Fasih", "middle": [], "last": "Haider", "suffix": "" }, { "first": "Sofia", "middle": [], "last": "De La Fuente", "suffix": "" }, { "first": "Davida", "middle": [], "last": "Fromm", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Macwhinney", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.06833" ] }, "num": null, "urls": [], "raw_text": "Saturnino Luz, Fasih Haider, Sofia de la Fuente, Davida Fromm, and Brian MacWhinney. 2020. Alzheimer's dementia recognition through spon- taneous speech: The adress challenge. 
arXiv preprint arXiv:2004.06833.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Estimation of presentations skills based on slides and audio features", "authors": [ { "first": "Gonzalo", "middle": [], "last": "Luzardo", "suffix": "" }, { "first": "Bruno", "middle": [], "last": "Guam\u00e1n", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Chiluiza", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Castells", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Ochoa", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 acm workshop on multimodal learning analytics workshop and grand challenge", "volume": "", "issue": "", "pages": "37--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gonzalo Luzardo, Bruno Guam\u00e1n, Katherine Chiluiza, Jaime Castells, and Xavier Ochoa. 2014. Estimation of presentations skills based on slides and audio features. In Proceedings of the 2014 acm workshop on multimodal learning analytics workshop and grand challenge, pages 37-44.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural lan- guage processing toolkit. 
In Proceedings of 52nd annual meeting of the association for computa- tional linguistics: system demonstrations, pages 55-60.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Revisiting small batch training for deep neural networks", "authors": [ { "first": "Dominic", "middle": [], "last": "Masters", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Luschi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07612" ] }, "num": null, "urls": [], "raw_text": "Dominic Masters and Carlo Luschi. 2018. Revisit- ing small batch training for deep neural networks. arXiv preprint arXiv:1804.07612.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The competent speaker speech evaluation form", "authors": [ { "first": "P", "middle": [], "last": "Sherwyn", "suffix": "" }, { "first": "", "middle": [], "last": "Morreale", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sherwyn P Morreale. 2007. The competent speaker speech evaluation form. National Communication Association.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Why communication education is important: The centrality of the discipline in the 21st century", "authors": [ { "first": "P", "middle": [], "last": "Sherwyn", "suffix": "" }, { "first": "Judy", "middle": [ "C" ], "last": "Morreale", "suffix": "" }, { "first": "", "middle": [], "last": "Pearson", "suffix": "" } ], "year": 2008, "venue": "Communication Education", "volume": "57", "issue": "2", "pages": "224--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sherwyn P Morreale and Judy C Pearson. 2008. Why communication education is important: The cen- trality of the discipline in the 21st century. Com- munication Education, 57(2):224-240.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Linguistic inquiry and word count: LIWC 2015. 
Operator's manual. Pennebaker Conglomerates", "authors": [ { "first": "R", "middle": [ "J" ], "last": "James W Pennebaker", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Booth", "suffix": "" }, { "first": "M", "middle": [ "E" ], "last": "Boyd", "suffix": "" }, { "first": "", "middle": [], "last": "Francis", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James W Pennebaker, R.J. Booth, R.L. Boyd, and M.E Francis. 2015a. Linguistic inquiry and word count: LIWC 2015. Operator's manual. Pen- nebaker Conglomerates.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The development and psychometric properties of liwc2015", "authors": [ { "first": "W", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Pennebaker", "suffix": "" }, { "first": "L", "middle": [], "last": "Ryan", "suffix": "" }, { "first": "Kayla", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Jordan", "suffix": "" }, { "first": "", "middle": [], "last": "Blackburn", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015b. The development and psychometric properties of liwc2015. Technical re- port.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Psychological aspects of natural language use: Our words, our selves. 
Annual review of psychology", "authors": [ { "first": "Matthias", "middle": [ "R" ], "last": "James W Pennebaker", "suffix": "" }, { "first": "Kate", "middle": [ "G" ], "last": "Mehl", "suffix": "" }, { "first": "", "middle": [], "last": "Niederhoffer", "suffix": "" } ], "year": 2003, "venue": "", "volume": "54", "issue": "", "pages": "547--577", "other_ids": {}, "num": null, "urls": [], "raw_text": "James W Pennebaker, Matthias R Mehl, and Kate G Niederhoffer. 2003. Psychological aspects of nat- ural language use: Our words, our selves. Annual review of psychology, 54(1):547-577.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Automatic detection of fake news", "authors": [ { "first": "Ver\u00f3nica", "middle": [], "last": "P\u00e9rez-Rosas", "suffix": "" }, { "first": "Bennett", "middle": [], "last": "Kleinberg", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Lefevre", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.07104" ] }, "num": null, "urls": [], "raw_text": "Ver\u00f3nica P\u00e9rez-Rosas, Bennett Kleinberg, Alexan- dra Lefevre, and Rada Mihalcea. 2017. Auto- matic detection of fake news. arXiv preprint arXiv:1708.07104.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Real-time recognition of affective states from nonverbal features of speech and its application for public speaking skill analysis", "authors": [ { "first": "Tomas", "middle": [], "last": "Pfister", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Robinson", "suffix": "" } ], "year": 2011, "venue": "IEEE Transactions on Affective Computing", "volume": "2", "issue": "2", "pages": "66--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Pfister and Peter Robinson. 2011. 
Real-time recognition of affective states from nonverbal fea- tures of speech and its application for public speak- ing skill analysis. IEEE Transactions on Affective Computing, 2(2):66-78.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Pytorch: Tensors and dynamic neural networks in python with strong gpu acceleration", "authors": [ { "first": "", "middle": [], "last": "Pytorch", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pytorch. 2019. Pytorch: Tensors and dynamic neural networks in python with strong gpu acceleration. https://github.com/pytorch/pytorch.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A language-based approach to fake news detection through interpretable features and brnn", "authors": [ { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" }, { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 3rd International Workshop on Rumours and Deception in Social Media (RDSM)", "volume": "", "issue": "", "pages": "14--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Qiao, Daniel Wiechmann, and Elma Kerz. 2020. A language-based approach to fake news detec- tion through interpretable features and brnn. In Proceedings of the 3rd International Workshop on Rumours and Deception in Social Media (RDSM), pages 14-31.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead", "authors": [ { "first": "Cynthia", "middle": [], "last": "Rudin", "suffix": "" } ], "year": 2019, "venue": "Nature Machine Intelligence", "volume": "1", "issue": "5", "pages": "206--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cynthia Rudin. 2019. 
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Multi- or single-word units? the role of collocation use in comprehensible and contextually appropriate second language speech", "authors": [ { "first": "Kazuya", "middle": [], "last": "Saito", "suffix": "" } ], "year": 2020, "venue": "Language Learning", "volume": "70", "issue": "2", "pages": "548--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kazuya Saito. 2020. Multi- or single-word units? the role of collocation use in comprehensible and contextually appropriate second language speech. Language Learning, 70(2):548-588.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "An audiovisual political speech analysis incorporating eyetracking and perception data", "authors": [ { "first": "Stefan", "middle": [], "last": "Scherer", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Layher", "suffix": "" }, { "first": "John", "middle": [], "last": "Kane", "suffix": "" }, { "first": "Heiko", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "1114--1120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Scherer, Georg Layher, John Kane, Heiko Neumann, and Nick Campbell. 2012. An audiovisual political speech analysis incorporating eyetracking and perception data. 
In LREC, pages 1114-1120.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "The development and test of the public speaking competence rubric", "authors": [ { "first": "Lisa", "middle": [ "M" ], "last": "Schreiber", "suffix": "" }, { "first": "Gregory", "middle": [ "D" ], "last": "Paul", "suffix": "" }, { "first": "Lisa", "middle": [ "R" ], "last": "Shibley", "suffix": "" } ], "year": 2012, "venue": "Communication Education", "volume": "61", "issue": "3", "pages": "205--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa M Schreiber, Gregory D Paul, and Lisa R Shibley. 2012. The development and test of the public speaking competence rubric. Communication Education, 61(3):205-233.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Personality, gender, and age in the language of social media: The open-vocabulary approach", "authors": [ { "first": "H", "middle": [ "Andrew" ], "last": "Schwartz", "suffix": "" }, { "first": "Johannes", "middle": [ "C" ], "last": "Eichstaedt", "suffix": "" }, { "first": "Margaret", "middle": [ "L" ], "last": "Kern", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Dziurzynski", "suffix": "" }, { "first": "Stephanie", "middle": [ "M" ], "last": "Ramones", "suffix": "" }, { "first": "Megha", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Achal", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Kosinski", "suffix": "" }, { "first": "David", "middle": [], "last": "Stillwell", "suffix": "" }, { "first": "Martin", "middle": [ "E", "P" ], "last": "Seligman", "suffix": "" } ], "year": 2013, "venue": "PloS one", "volume": "8", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, 
Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin EP Seligman, et al. 2013. Personality, gender, and age in the language of social media: The open-vocabulary approach. PloS one, 8(9):e73791.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Tracking complexity of l2 academic texts: A sliding-window approach", "authors": [ { "first": "Marcus", "middle": [], "last": "Str\u00f6bel", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus Str\u00f6bel. 2014. Tracking complexity of l2 academic texts: A sliding-window approach. Master's thesis, RWTH Aachen University.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "The relationship between first and second language writing: Investigating the effects of first language complexity on second language complexity in advanced stages of learning", "authors": [ { "first": "Marcus", "middle": [], "last": "Str\u00f6bel", "suffix": "" }, { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" } ], "year": 2020, "venue": "Language Learning", "volume": "70", "issue": "3", "pages": "732--767", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus Str\u00f6bel, Elma Kerz, and Daniel Wiechmann. 2020. The relationship between first and second language writing: Investigating the effects of first language complexity on second language complexity in advanced stages of learning. 
Language Learning, 70(3):732-767.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "The psychological meaning of words: Liwc and computerized text analysis methods", "authors": [ { "first": "Yla", "middle": [ "R" ], "last": "Tausczik", "suffix": "" }, { "first": "James", "middle": [ "W" ], "last": "Pennebaker", "suffix": "" } ], "year": 2010, "venue": "Journal of language and social psychology", "volume": "29", "issue": "1", "pages": "24--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1):24-54.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "The voice of leadership: Models and performances of automatic analysis in online speeches", "authors": [ { "first": "Felix", "middle": [], "last": "Weninger", "suffix": "" }, { "first": "Jarek", "middle": [], "last": "Krajewski", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Batliner", "suffix": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Schuller", "suffix": "" } ], "year": 2012, "venue": "IEEE Transactions on Affective Computing", "volume": "3", "issue": "4", "pages": "496--508", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Weninger, Jarek Krajewski, Anton Batliner, and Bj\u00f6rn Schuller. 2012. The voice of leadership: Models and performances of automatic analysis in online speeches. 
IEEE Transactions on Affective Computing, 3(4):496-508.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Words that fascinate the listener: Predicting affective ratings of on-line lectures", "authors": [ { "first": "Felix", "middle": [], "last": "Weninger", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Staudt", "suffix": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Schuller", "suffix": "" } ], "year": 2013, "venue": "International Journal of Distance Education Technologies (IJDET)", "volume": "11", "issue": "2", "pages": "110--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Weninger, Pascal Staudt, and Bj\u00f6rn Schuller. 2013. Words that fascinate the listener: Predicting affective ratings of on-line lectures. International Journal of Distance Education Technologies (IJDET), 11(2):110-123.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Roll-out of the RNN model based on complexity contours. The hidden variable of the last LSTM cell in layer 5, h_5l, i.e. the hidden variable of the 5th RNN layer right after the feeding of x_l, is concatenated with the hidden variable of the last LSTM cell in the backward direction, h_51. The result vector of the concatenation ..." }, "TABREF0": { "type_str": "table", "html": null, "num": null, "content": "
Meta	Mean	SD	Median	Min	Max
Duration (min)	13.62	5.29	14.14	2.25	44.63
Word Count	2044.56	885.54	2040	15	5800
Views	1730676.52	2540236.17	1139779.5	155895	47227110
Ratings	N ratings	Mean	SD	Median	Min	Max
Inspiring	1277876	534.23	1307.69	235	5	24924
Informative	862945	360.76	550.77	221.5	0	9787
Fascinating	758344	317.03	636.79	161.5	5	14447
Persuasive	544215	227.51	476.24	101	0	10704
Beautiful	444193	185.7	481.46	62	0	9437
Courageous	401316	167.77	437.11	53	0	8668
Funny	368139	153.9	603.88	20	0	19645
Ingenious	360870	150.87	285.87	68	0	6073
Jaw-dropping	344204	143.9	560.84	40	0	14728
OK	193184	80.76	90.16	55	0	1341
Unconvincing	127299	53.22	93.11	27	0	2194
Longwinded	78158	32.67	42.26	19	0	447
Obnoxious	60440	25.27	53.55	12	0	1361
Confusing	49815	20.83	31.88	12	0	531
", "text": "Descriptive statistics for TED talk dataset (N=2392 talks); total of 5.89 million ratings from a total of 4162 million views. Descriptive statistics of rating scores are normalized per million views." }, "TABREF1": { "type_str": "table", "html": null, "num": null, "content": "
Feature groups	Number of features	Sub-groups	Example/Description
Syntactic complexity	18	Length of production unit	e.g. Mean length of clause
		Subordination	e.g. Clauses per sentence
		Coordination	e.g. Coordinate phrases per clause
		Particular structures	e.g. Complex nominals per clause
Lexical richness	12	Lexical density	The number of lexical words divided by total number of words
		Lexical diversity	e.g. Type-token ratio
		Lexical sophistication	e.g. Words from the New General Service List, see Browne et al. (2013)
Register-based	25	Spoken (n ∈ [1, 5])	Frequencies of uni-, bi-, tri-, four-, five-grams
		Fiction (n ∈ [1, 5])	from the five sub-components (genres)
		Magazine (n ∈ [1, 5])	of the COCA, see Davies (2008)
		News (n ∈ [1, 5])
		Academic (n ∈ [1, 5])
Information theory	3	Kolmogorov Deflate	Measures use the Deflate algorithm
		Kolmogorov Deflate Syntactic	and relate size of compressed file
		Kolmogorov Deflate Morphological	to size of original file
LIWC-style	61	Linguistic dimensions	For a comprehensive description of LIWC features, see Pennebaker et al. (2015a)
		Psychological processes
		Relativity
		Personal concerns
", "text": "Overview of the 119 features investigated in the paper" }, "TABREF2": { "type_str": "table", "html": null, "num": null, "content": "
	Baseline: RNN USE	RNN model based on complexity contours
	Acc	SD	Acc	SD	Precision	SD	Recall	SD	F1	SD
Overall	0.73	0.01	0.69	0.01	0.69	0.07	0.67	0.09	0.68	0.08
Persuasive	0.80	0.02	0.77	0.03	0.77	0.07	0.80	0.07	0.78	0.07
Courageous	0.81	0.03	0.76	0.03	0.80	0.05	0.69	0.09	0.74	0.07
Fascinating	0.82	0.03	0.75	0.03	0.75	0.06	0.78	0.06	0.76	0.06
Beautiful	0.79	0.02	0.74	0.02	0.77	0.06	0.66	0.05	0.71	0.05
Inspiring	0.79	0.02	0.73	0.02	0.76	0.02	0.65	0.18	0.70	0.03
Informative	0.78	0.02	0.73	0.04	0.71	0.13	0.82	0.11	0.76	0.12
Funny	0.76	0.02	0.71	0.02	0.71	0.04	0.70	0.08	0.70	0.05
Ingenious	0.77	0.02	0.70	0.02	0.68	0.04	0.75	0.05	0.71	0.05
Jaw-dropping	0.67	0.03	0.66	0.02	0.66	0.11	0.63	0.05	0.65	0.07
Unconvincing	0.67	0.01	0.65	0.03	0.66	0.04	0.58	0.18	0.62	0.07
OK	0.65	0.01	0.64	0.02	0.63	0.12	0.63	0.13	0.63	0.12
Confusing	0.67	0.03	0.63	0.03	0.63	0.04	0.56	0.23	0.59	0.06
Obnoxious	0.63	0.01	0.63	0.02	0.61	0.09	0.54	0.21	0.57	0.12
Longwinded	0.65	0.02	0.62	0.03	0.59	0.13	0.55	0.17	0.57	0.15
", "text": "Performance statistics of the RNN classifiers aggregated over five crossvalidation runs" }, "TABREF3": { "type_str": "table", "html": null, "num": null, "content": "
	All Features	LIWC	N-gram	Lexical	Syntactic	Inf. Theo.
	(N=119)	(N=61)	(N=25)	(N=13)	(N=18)	(N=3)
Rating category	M Acc	SD	M Acc	SD	M Acc	SD	M Acc	SD	M Acc	SD	M Acc	SD
Overall	0.69	0.01	0.67	0.01	0.66	0.01	0.61	0.00	0.61	0.01	0.56	0.01
Persuasive	0.77	0.03	0.75	0.03	0.74	0.02	0.65	0.04	0.64	0.04	0.58	0.05
Courageous	0.76	0.03	0.76	0.02	0.74	0.02	0.65	0.01	0.62	0.01	0.53	0.04
Fascinating	0.75	0.03	0.75	0.03	0.72	0.03	0.67	0.02	0.63	0.03	0.55	0.03
Beautiful	0.74	0.02	0.71	0.01	0.73	0.01	0.63	0.02	0.63	0.03	0.56	0.04
Inspiring	0.73	0.02	0.72	0.02	0.69	0.02	0.65	0.01	0.65	0.01	0.54	0.02
Informative	0.73	0.04	0.71	0.03	0.71	0.03	0.64	0.03	0.67	0.04	0.61	0.05
Funny	0.71	0.02	0.69	0.03	0.68	0.02	0.60	0.03	0.64	0.02	0.60	0.02
Ingenious	0.70	0.02	0.69	0.02	0.66	0.01	0.64	0.02	0.60	0.02	0.53	0.02
Jaw-dropping	0.66	0.02	0.62	0.03	0.57	0.02	0.59	0.02	0.60	0.03	0.57	0.05
Unconvincing	0.65	0.03	0.60	0.01	0.61	0.03	0.57	0.02	0.56	0.04	0.53	0.05
OK	0.64	0.02	0.59	0.01	0.59	0.02	0.59	0.02	0.57	0.06	0.58	0.04
Confusing	0.63	0.03	0.60	0.02	0.58	0.02	0.56	0.02	0.57	0.02	0.51	0.02
Obnoxious	0.63	0.02	0.58	0.02	0.61	0.01	0.56	0.01	0.54	0.03	0.54	0.03
Longwinded	0.62	0.03	0.59	0.02	0.59	0.03	0.60	0.03	0.61	0.04	0.60	0.10
", "text": "Mean classification accuracy after five-fold cross validation with standard deviations across rating categories and feature-sets.Acc SD M Acc SD M Acc SD M Acc SD M Acc SD M Acc SD" } } } }