{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:21.769765Z" }, "title": "Pushing on Personality Detection from Verbal Behavior: A Transformer Meets Text Contours of Psycholinguistic Features", "authors": [ { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH-Aachen University", "location": {} }, "email": "elma.kerz@ifaar.rwth-aachen.de" }, { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "", "affiliation": {}, "email": "yu.qiao@rwth-aachen.de" }, { "first": "Sourabh", "middle": [], "last": "Zanwar", "suffix": "", "affiliation": {}, "email": "sourabh.zanwar@rwth-aachen.de" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Amsterdam", "location": {} }, "email": "d.wiechmann@uva.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Research at the intersection of personality psychology, computer science, and linguistics has recently focused increasingly on modeling and predicting personality from language use. We report two major improvements in predicting personality traits from text data: (1) to our knowledge, the most comprehensive set of theory-based psycholinguistic features and (2) hybrid models that integrate a pre-trained Transformer Language Model BERT and Bidirectional Long Short-Term Memory (BLSTM) networks trained on within-text distributions ('text contours') of psycholinguistic features. We experiment with BLSTM models (with and without Attention) and with two techniques for applying pre-trained language representations from the transformer model-'featurebased' and 'fine-tuning'. We evaluate the performance of the models we built on two benchmark datasets that target the two dominant theoretical models of personality: the Big Five Essay dataset (Pennebaker and King, 1999) and the MBTI Kaggle dataset (Li et al., 2018). Our results are encouraging as our models outperform existing work on the same datasets. More specifically, our models achieve improvement in classification accuracy by 2.9% on the Essay dataset and 8.28% on the Kaggle MBTI dataset. In addition, we perform ablation experiments to quantify the impact of different categories of psycholinguistic features in the respective personality prediction models.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Research at the intersection of personality psychology, computer science, and linguistics has recently focused increasingly on modeling and predicting personality from language use. We report two major improvements in predicting personality traits from text data: (1) to our knowledge, the most comprehensive set of theory-based psycholinguistic features and (2) hybrid models that integrate a pre-trained Transformer Language Model BERT and Bidirectional Long Short-Term Memory (BLSTM) networks trained on within-text distributions ('text contours') of psycholinguistic features. We experiment with BLSTM models (with and without Attention) and with two techniques for applying pre-trained language representations from the transformer model-'featurebased' and 'fine-tuning'. We evaluate the performance of the models we built on two benchmark datasets that target the two dominant theoretical models of personality: the Big Five Essay dataset (Pennebaker and King, 1999) and the MBTI Kaggle dataset (Li et al., 2018). 
Our results are encouraging as our models outperform existing work on the same datasets. More specifically, our models improve classification accuracy by 2.9% on the Essay dataset and by 8.28% on the Kaggle MBTI dataset. In addition, we perform ablation experiments to quantify the impact of different categories of psycholinguistic features in the respective personality prediction models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Personality is broadly defined as the combination of a person's behavior, emotions, motivation, and characteristic patterns of thought (Corr and Matthews, 2020). Our personality has a major impact on our lives, influencing our life choices, well-being, health, and preferences and desires (Ozer and Benet-Martinez, 2006). Specifically, personality has been repeatedly linked to individual (e.g., happiness, physical and mental health), interpersonal (e.g., quality of relationships with peers, family, and romantic partners), and social-institutional outcomes (e.g., career choice, satisfaction and achievement, social engagement, political ideology) (Soto, 2019).", "cite_spans": [ { "start": 146, "end": 161, "text": "Matthews, 2020)", "ref_id": "BIBREF9" }, { "start": 291, "end": 322, "text": "(Ozer and Benet-Martinez, 2006)", "ref_id": "BIBREF37" }, { "start": 653, "end": 665, "text": "(Soto, 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While there are several models of human personality, the predominant and widely accepted model is the Big Five or Five Factor Model (McCrae and John, 1992; McCrae, 2009). In this model, personality traits are divided into five factors: (1) Extraversion (assertive, energetic, outgoing, etc.), (2) Agreeableness (appreciative, generous, compassionate, etc.), (3) Conscientiousness (efficient, organized, responsible, etc.), (4) Neuroticism (anxious, self-pitying, worried, etc.), and (5) Openness (curious, empathetic, imaginative, etc.). These five personality traits are commonly assessed by questionnaires in which a person reflects on his or her typical patterns of thinking and behavior, such as the NEO Five Factor Inventory (Costa and McCrae, 1992) and the Big-Five Inventory (John et al., 1991) (see Matthews et al., 2009, for a comprehensive overview). The Myers-Briggs Type Indicator (MBTI) is another widely administered questionnaire, in particular in applied settings (Meyers et al., 1990). 
In contrast to the Big Five personality taxonomy, which conceptualizes human personality as latent trait scores, the MBTI model describes personality in terms of 16 types that result from combinations of binary categories along four dimensions: (a) Extraversion/Introversion (E/I) -preference for how people direct and receive their energy, based on the external or internal world, (b) Sensing/Intuition (S/N) -preference for how people take in information, through the five senses or through interpretation and meanings, (c) Thinking/Feeling (T/F) -preference for how people make decisions, relying either on logic or on people and particular circumstances, and (d) Judgment/Perception (J/P) -preference for how people deal with the world, by ordering it or remaining open to new information.", "cite_spans": [ { "start": 132, "end": 155, "text": "(McCrae and John, 1992;", "ref_id": "BIBREF29" }, { "start": 156, "end": 169, "text": "McCrae, 2009)", "ref_id": "BIBREF28" }, { "start": 497, "end": 537, "text": "(curious, empathetic, imaginative, etc.)", "ref_id": null }, { "start": 732, "end": 757, "text": "(Costa and McCrae, 1992)", "ref_id": null }, { "start": 787, "end": 806, "text": "(John et al., 1991)", "ref_id": "BIBREF17" }, { "start": 814, "end": 835, "text": "Matthews et al., 2009", "ref_id": "BIBREF27" }, { "start": 988, "end": 1009, "text": "(Meyers et al., 1990)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given the central importance of personality in capturing essential aspects of human life, increasing attention is being paid to the development of models that can leverage behavioral data to automatically predict personality. Data obtained from verbal behavior is one of the key types of such data. Even in the early years of psychology, a person's use of language was seen as a distillation of his or her underlying drives, emotions, and thought patterns (see Tausczik and Pennebaker, 2010; Boyd and Pennebaker, 2017, for historical overviews). Early approaches to automatic personality prediction (APP) -also referred to as automatic personality detection or recognition -from textual data relied on machine learning models based on psycholinguistic features, whereas more recent approaches to APP typically draw on deep learning techniques that use pre-trained word embeddings (see Vinciarelli and Mohammadi, 2014, for an overview of the former, and Mehta et al., 2020b, for an overview of deep learning-based APP).", "cite_spans": [ { "start": 450, "end": 480, "text": "Tausczik and Pennebaker, 2010;", "ref_id": null }, { "start": 481, "end": 506, "text": "Boyd and Pennebaker, 2017", "ref_id": "BIBREF3" }, { "start": 950, "end": 969, "text": "Mehta et al., 2020b", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we contribute to this dynamic area of APP research by presenting two important improvements in predicting personality traits from textual data: (1) to our knowledge, the most comprehensive set of psycholinguistic features and (2) hybrid models that integrate a pre-trained Transformer Language Model BERT and Bidirectional Long Short-Term Memory (BLSTM) networks trained on within-text distributions ('text contours') of psycholinguistic features. 
Since our goal is to demonstrate the utility of our modeling approach, we conduct our experiments on two widely used benchmark datasets: the Big Five Essay dataset (Pennebaker and King, 1999) and the MBTI Kaggle dataset (Li et al., 2018) , which align with the dominant personality models described above. The remainder of this paper is organized as follows: In Section 2, we briefly review recent related work on these two benchmark datasets. Then, in Section 3, we present the two benchmark datasets and the extraction of psycholinguistic features using automated text analysis based on a sliding window approach. In Section 4, we describe our modeling approach, and in Section 5, we present and discuss the results. Finally, we conclude with possible directions for future work in Section 6.", "cite_spans": [ { "start": 641, "end": 668, "text": "(Pennebaker and King, 1999)", "ref_id": "BIBREF41" }, { "start": 697, "end": 714, "text": "(Li et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Majumder et al. (2017) used a convolutional neural network (CNN) feature extractor in which sentences were fed to convolution filters to obtain n-gram feature vectors. Each individual text of the Big Five Essay dataset was represented by aggregating the vectors of its sentences, and the resulting vectors were concatenated with psycholinguistic (Mairesse) features (Mairesse et al., 2007) . For classification, they fed the resulting document vector to a fully connected neural network with one hidden layer. Using this method, they achieved an average classification accuracy of 58% for the Big Five personality traits on the Essays dataset. Kazameini et al. (2020) were the first to use a Transformer-based language model to extract contextualized word embeddings. Specifically, they built a Bagged-SVM classifier fed with contextualized embeddings extracted from BERT, a pre-trained language model based on Bidirectional Encoder Representations from Transformers (Devlin et al., 2018) . Their model outperformed the CNN-based model proposed by Majumder et al. (2017) by 1.04%. Amirhosseini and Kazemian (2020) used a Gradient Boosting Model (GBM) based on Term Frequency-Inverse Document Frequency (TF-IDF) features to predict personality dimensions in the Kaggle MBTI dataset. Their modeling approach achieved an average classification accuracy across all dimensions of 76.1%. Using both the Big Five Essay dataset and the Myers-Briggs Type Indicator Kaggle dataset, Mehta et al. (2020a) proposed the integration of deep learning models and psycholinguistic features with language model embeddings for APP. They extracted a total of 123 psycholinguistic features, including the Mairesse feature set (Mairesse et al., 2007) , SenticNet (Cambria et al., 2010) , the NRC-Emotion Lexicon (Mohammad and Turney, 2013) , and the NRC-VAD Lexicon (Mohammad, 2018). Language model features were extracted using BERT. Their experiments compared the performance of BERT-base and BERT-large in synergy with SVM or Multi-layer Perceptron (MLP) classifiers. BERT-base + MLP yielded an average score of 60.6 on the Essay dataset, while BERT-large + MLP yielded an average score of 77.1 on the Kaggle dataset. The approach taken in Mehta et al. (2020a) outperformed the previously best-performing model by Amirhosseini and Kazemian (2020) by 1%. Zooming in on classification accuracy for specific personality traits, the models in Mehta et al. 
(2020a) achieved the highest performance on two of the Big Five personality traits in the Essays dataset (Openness, accuracy = 64.6%, and Conscientiousness, accuracy = 59.2%) and on three of the four MBTI dimensions in the Kaggle MBTI dataset (Intuitive/Sensing (N/S), accuracy = 86.6%, Thinking/Feeling (T/F), accuracy = 76.1%, and Perception/Judging (P/J), accuracy = 67.2%). The highest performance on the Introversion/Extraversion (I/E) MBTI dimension (79%) was obtained by the 'GBM + TFIDF' model reported in Amirhosseini and Kazemian (2020). The highest performance on the three remaining Big Five dimensions was achieved recently by Ramezani et al. (2021), who used an ensemble modeling approach (stacking) to combine linguistic and ontology-based features with deep learning-based methods based on a hierarchical attention network as a meta-model. Although this approach did not achieve superior overall performance on the Essay dataset -mainly due to relatively poor performance on the Openness trait (accuracy = 56.3%) -it demonstrated the utility of model stacking as an effective way to boost the prediction of personality traits. For a performance overview of the models reviewed here for the different datasets and personality dimensions, see Table 1 in Section 4.", "cite_spans": [ { "start": 343, "end": 353, "text": "(Mairesse)", "ref_id": null }, { "start": 363, "end": 386, "text": "(Mairesse et al., 2007)", "ref_id": "BIBREF24" }, { "start": 653, "end": 676, "text": "Kazameini et al. (2020)", "ref_id": null }, { "start": 958, "end": 979, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF13" }, { "start": 1043, "end": 1065, "text": "Majumder et al. (2017)", "ref_id": "BIBREF25" }, { "start": 1474, "end": 1494, "text": "Mehta et al. (2020a)", "ref_id": null }, { "start": 1707, "end": 1730, "text": "(Mairesse et al., 2007)", "ref_id": "BIBREF24" }, { "start": 1743, "end": 1765, "text": "(Cambria et al., 2010)", "ref_id": "BIBREF8" }, { "start": 1788, "end": 1815, "text": "(Mohammad and Turney, 2013)", "ref_id": "BIBREF36" }, { "start": 2214, "end": 2234, "text": "Mehta et al. (2020a)", "ref_id": null }, { "start": 2410, "end": 2430, "text": "Mehta et al. (2020a)", "ref_id": null } ], "ref_spans": [ { "start": 3676, "end": 3683, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "We conducted our experiments with two widely used personality benchmark datasets: (1) the Essays Dataset (Pennebaker and King, 1999) and (2) the Kaggle MBTI Dataset (Li et al., 2018) . (1) Essays: This stream-of-consciousness dataset consists of 2468 essays written by students and annotated with binary labels for the Big Five personality traits, which were obtained through a standardized self-report questionnaire. The average text length is 672 words and the total size of the dataset is approximately 1.6 million words. One of the reasons why Essays is an established benchmark dataset is the relatively large amount of continuous language use and the fact that the personality traits were obtained using a validated instrument. (2) Kaggle MBTI: This dataset was collected through the PersonalityCafe forum 1 and thus provides a diverse sample of people interacting in an informal online social environment. It consists of samples of social media interactions from 8675 users, all of whom indicated their MBTI type. The average text length is 1,288 words. 
The total size of the entire dataset is approximately 11.2 million words.", "cite_spans": [ { "start": 105, "end": 132, "text": "(Pennebaker and King, 1999)", "ref_id": "BIBREF41" }, { "start": 158, "end": 175, "text": "(Li et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "The texts from both datasets (the Big Five Essay dataset and the MBTI Kaggle dataset) were automatically analyzed using an automated text analysis (ATA) system that employs a sliding window technique to compute sentence-level measurements. These measurements capture the within-text distributions of scores for a given psycholinguistic feature, referred to here as 'text contours' (for recent applications of the ATA system in the context of text classification, see Kerz et al., 2020; Qiao et al., 2021a,b). We extracted a set of 437 theory-based psycholinguistic features that can be binned into four groups: (1) features of morpho-syntactic complexity (N=19), (2) features of lexical richness, diversity and sophistication (N=77), (3) readability features (N=14), and (4) lexicon features designed to detect sentiment, emotion and/or affect (N=326). Tokenization, sentence splitting, part-of-speech tagging, lemmatization and syntactic PCFG parsing were performed using Stanford CoreNLP (Manning et al., 2014). The group of morpho-syntactic complexity features includes (i) surface features related to the length of production units, such as the average length of clauses and sentences, (ii) features of the type and frequency of embeddings, such as the number of dependent clauses per T-unit or verb phrases per sentence, and (iii) the frequency of particular structure types, such as the number of complex nominals per clause. This group also includes (iv) information-theoretic features of morphological and syntactic complexity based on the Deflate algorithm (Deutsch, 1996). The group of lexical richness, diversity and sophistication features includes six different subtypes: (i) lexical density features, such as the ratio of the number of lexical (as opposed to grammatical) words to the total number of words in a text, (ii) lexical variation, i.e. the range of vocabulary as manifested in language use, captured by the text-size corrected type-token ratio, (iii) lexical sophistication, i.e. the proportion of relatively unusual or advanced words in a text, such as the number of words from the New General Service List (Browne et al., 2013), (iv) psycholinguistic norms of words, such as the average age of acquisition of a word (Kuperman et al., 2012), and two recently introduced types of features: (v) word prevalence features that capture the number of people who know the word (Brysbaert et al., 2019; Johns et al., 2020) and (vi) register-based n-gram frequency features that take into account both the frequency rank and the number of word n-grams (n \u2208 [1, 5]). The latter were derived from the five register subcomponents of the Corpus of Contemporary American English (COCA, 560 million words, Davies, 2008): spoken, magazine, fiction, news and academic language (see Kerz et al., 2020, for details). The group of readability features combines a word familiarity variable, defined by a prespecified vocabulary resource to estimate semantic difficulty, with a syntactic variable, such as average sentence length. Examples of these measures include the Fry index (Fry, 1968) or the SMOG index (McLaughlin, 1969). 
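To make the sliding-window 'text contour' measurement described above concrete, the following is a minimal sketch, not the ATA system itself: the two toy feature functions stand in for the 437 features, and sentence splitting is reduced to a naive period split for brevity.

```python
# Minimal sketch of sentence-level 'text contour' extraction (toy stand-in
# for the ATA system; real features would be computed from CoreNLP parses).
import numpy as np

def mean_word_length(sentence):          # toy stand-in for a lexical feature
    words = sentence.split()
    return np.mean([len(w) for w in words]) if words else 0.0

def lexical_density(sentence):           # toy stand-in: crude content-word ratio
    function_words = {"the", "a", "an", "of", "to", "and", "in", "is", "it"}
    words = [w.lower().strip(".,") for w in sentence.split()]
    return sum(w not in function_words for w in words) / max(len(words), 1)

FEATURES = [mean_word_length, lexical_density]

def text_contour(text):
    """Return an (n_sentences x n_features) matrix of sentence-level scores."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return np.array([[f(s) for f in FEATURES] for s in sentences])

contour = text_contour("I went out with friends. We talked for hours. It was "
                       "a genuinely memorable evening with wonderful people.")
# z-standardize each feature column, as in Figure 1, to make contours comparable
z = (contour - contour.mean(0)) / (contour.std(0) + 1e-8)
print(z)  # one row per sentence: the within-text 'contour'
```

Each row of the resulting matrix is one step of the contour that the contour-based classifiers of Section 4 consume.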
The group of lexicon-based sentiment/emotion/affect features (SentiEmo) was derived from a total of ten lexicons that have been successfully used in personality detection, emotion recognition and sentiment analysis research, including the NRC-VAD Lexicon (Mohammad, 2018) , SenticNet (Cambria et al., 2010) , and the Sentiment140 lexicon. The feature value for each subcategory in a given lexicon is the mean value of all rated/scored words in a given sentence. The informational gain of 'text contours' compared to text averages is illustrated in Figure 1 , which shows the distribution of z-standardized values of three selected features for a randomly selected text from the Essay dataset. The red line represents the average feature value of the text. As can be seen from the graphs, all feature values fluctuate within the text, with high values for one feature often offset by lower values for another. The contour-based classifiers, discussed in more detail in Section 4, can take advantage of this high-resolution assessment of psycholinguistic features.", "cite_spans": [ { "start": 467, "end": 486, "text": "(Kerz et al., 2020;", "ref_id": "BIBREF20" }, { "start": 487, "end": 508, "text": "Qiao et al., 2021a,b)", "ref_id": null }, { "start": 1562, "end": 1577, "text": "(Deutsch, 1996)", "ref_id": "BIBREF12" }, { "start": 2126, "end": 2147, "text": "(Browne et al., 2013)", "ref_id": "BIBREF6" }, { "start": 2239, "end": 2262, "text": "(Kuperman et al., 2012)", "ref_id": "BIBREF21" }, { "start": 2391, "end": 2415, "text": "(Brysbaert et al., 2019;", "ref_id": "BIBREF7" }, { "start": 2416, "end": 2435, "text": "Johns et al., 2020)", "ref_id": "BIBREF18" }, { "start": 2705, "end": 2718, "text": "Davies, 2008)", "ref_id": "BIBREF11" }, { "start": 2780, "end": 2797, "text": "Kerz et al., 2020", "ref_id": "BIBREF20" }, { "start": 3087, "end": 3098, "text": "(Fry, 1968)", "ref_id": "BIBREF14" }, { "start": 3111, "end": 3129, "text": "(McLaughlin, 1969)", "ref_id": "BIBREF30" }, { "start": 3359, "end": 3375, "text": "(Mohammad, 2018)", "ref_id": "BIBREF34" }, { "start": 3392, "end": 3414, "text": "(Cambria et al., 2010)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 3662, "end": 3670, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Measurement of text contours of psycholinguistic features", "sec_num": "3.2" }, 
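As a companion to the lexicon-based SentiEmo scoring just described (the feature value for a lexicon subcategory is the mean rating over the rated words of a sentence), here is a minimal sketch; the tiny `VALENCE` lexicon is a hypothetical stand-in for resources such as SenticNet or the Sentiment140 lexicon.

```python
# Minimal sketch of lexicon-based sentence scoring (toy valence lexicon).
from statistics import mean

VALENCE = {"happy": 0.8, "friend": 0.6, "sad": -0.7, "worried": -0.6}  # toy lexicon

def lexicon_score(sentence, lexicon):
    """Mean rating over the rated words of a sentence; 0.0 if no word is rated."""
    rated = [lexicon[w] for w in sentence.lower().split() if w in lexicon]
    return mean(rated) if rated else 0.0

sentences = ["I feel happy around my friend", "I am worried and sad"]
print([lexicon_score(s, VALENCE) for s in sentences])   # [0.7, -0.65]
```

Applying such a scorer per sentence rather than per text is exactly what turns a lexicon feature into a contour.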
{ "text": "Our models are constructed from three components: (a) a 'contour encoder' that converts a sequence of psycholinguistic features into a hidden representation vector, (b) a pre-trained transformer-based language model, BERT, that converts a sequence of tokens into a hidden representation vector, and (c) a classifier that outputs the probability of a personality trait given the hidden representation of the sample. We conduct experiments with three types of personality prediction models: (1) contour encoder + classifier, (2) hybrid models that combine the contour encoder with a transformer-based language model + classifier, and (3) a stacking model that combines ten repetitions of the best performing model. As for the contour encoder, we experiment with BLSTM and BLSTM with attention models. Attention-based models have been successfully used in a variety of tasks, including machine translation (Bahdanau et al., 2014) , speech recognition (Huang and Narayanan, 2016) and relation classification (Zhou et al., 2016) . In the context of personality classification, the attention mechanism learns a scoring function that assigns a weight to each sentence, allowing the model to pay more attention to the sentences in a text that are most influential for a personality trait. As for the hybrid models, we experiment with two strategies for applying the pre-trained language model -'feature-based' and 'fine-tuning': In the feature-based approach, we freeze the model weights during training and use the pre-trained contextualized word embeddings from BERT. In the fine-tuning approach, we unfreeze all 12 layers and fine-tune towards the personality detection task (see Devlin et al., 2018) . All models are implemented using PyTorch (Paszke et al., 2019) . Unless specifically stated otherwise, we use binary cross-entropy as the loss function, AdamW as the optimizer, a fixed learning rate of $8 \times 10^{-4}$, dropout = 0.1, and an $\ell_2$ penalty of $1 \times 10^{-4}$ as regularization. The optimal network structures and hyperparameter values were found by grid search. The performance of the models is evaluated by 10-fold cross-validation (ten repetitions) to counter variability due to the initialization of the weights. We report the results of the best performing models in comparison to the performance of the APP systems presented in Section 2 in Table 1.", "cite_spans": [ { "start": 904, "end": 927, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF2" }, { "start": 949, "end": 976, "text": "(Huang and Narayanan, 2016)", "ref_id": "BIBREF16" }, { "start": 1005, "end": 1024, "text": "(Zhou et al., 2016)", "ref_id": null }, { "start": 1661, "end": 1681, "text": "Devlin et al., 2018)", "ref_id": "BIBREF13" }, { "start": 1725, "end": 1746, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling approach", "sec_num": "4" }, { "text": "Contour Encoder: The contour encoder, $\mathrm{Encoder}_{\mathrm{PSYLING}}(X)$, transforms a sequence of psycholinguistic feature vectors $X = (x_1, x_2, \ldots, x_n)$ into a hidden psycholinguistic representation vector $P_{\mathrm{PSYLING}}$ of a given text. Here, $x_i$ is a 436-dimensional vector for the $i$th sentence obtained from the ATA system described in Section 3.2. In this paper, two contour encoder architectures are applied: BLSTM and BLSTM with attention (ATTN).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components", "sec_num": "4.1" }, { "text": "The BLSTM contour encoder is an $L$-layer BLSTM with $d_h$ hidden states. The hidden representation from this model is a $d_o = 2d_h$-dimensional vector, namely the concatenation of the last hidden states of the last layer in the forward ($\overrightarrow{h}_n$) and backward ($\overleftarrow{h}_1$) direction. Specifically, $X \rightarrow \mathrm{Encoder}_{\mathrm{BLSTM}}(X) = P$: $$[\overrightarrow{H}, \overleftarrow{H}] = \mathrm{BLSTM}(X), \qquad P = [\overrightarrow{h}_n^T \mid \overleftarrow{h}_1^T]^T$$ where $[\cdot \mid \cdot]$ is the concatenation operator and $\overrightarrow{H} = (\overrightarrow{h}_1, \ldots, \overrightarrow{h}_n)$ and $\overleftarrow{H} = (\overleftarrow{h}_1, \ldots, \overleftarrow{h}_n)$ are the last-layer hidden states of the BLSTM in the forward and backward direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components", "sec_num": "4.1" }, { "text": "The ATTN contour encoder is constructed as follows: given an input sequence $X$, a sequence of weights is computed with the help of a BLSTM model. The hidden representation of a given text is then obtained as the weighted sum of (a) the concatenated last-layer hidden vectors of the BLSTM in the forward and backward direction or (b) the feature vectors in $X$. We also experimented with (c) computing a weight for each individual dimension of $x_i$ and taking the weighted sum of $X$ with these weights. Our experiments show that approach (c) works best for both datasets, so in this paper we define $X \rightarrow \mathrm{Encoder}_{\mathrm{ATTN}}(X) = P$: $$H = \mathrm{BLSTM}(X), \quad M = \mathrm{Tanh}(W_{att} H + b_{att}), \quad \alpha = \mathrm{Softmax}(M), \quad V = \sum_{i=1}^{n} \alpha_i \odot x_i, \quad P = \mathrm{Tanh}(W_{pool} V + b_{pool})$$ where $W_{att} \in \mathbb{R}^{436 \times d_o}$ and $b_{att} \in \mathbb{R}^{436}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components", "sec_num": "4.1" }, { "text": "BERT Language Model: We use a pre-trained BERT transformer model, 'bert-base-uncased', from Huggingface's transformers library (Wolf et al., 2019) . The model consists of 12 transformer layers with a hidden size of 768 and 12 attention heads. Texts are tokenized using BERT's WordPiece tokenizer. As input to the BERT language model we use the initial $m \le 512$ tokens $T = (t_1, t_2, \ldots, t_m)$ of a given text, i.e., up to 510 word tokens plus the [cls] token at the beginning and the [sep] token at the end. Assuming the output of the $l$th layer of BERT is $H^{(l)} = (h^{(l)}_1, h^{(l)}_2, \ldots, h^{(l)}_m)$, a hidden vector $V$ is computed as either (a) the output for the [cls] token, i.e., $V = h^{(l)}_1$, or (b) the average of the outputs for the $m-2$ word tokens. Experiments with both approaches for $l \in [1, 12]$ revealed that the latter approach consistently works better than the former and that $l = 11$ works best for the Essays dataset, whereas $l = 12$ works best for the MBTI dataset. So we define $T \rightarrow \mathrm{Encoder}_{\mathrm{BERT}}(T) = P$: $$H^{(l)} = \mathrm{BERT}(T), \qquad V = \frac{1}{m-2} \sum_{i=2}^{m-1} h^{(l)}_i, \qquad P = \mathrm{Tanh}(W_{pool} V + b_{pool})$$", "cite_spans": [ { "start": 133, "end": 152, "text": "(Wolf et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Components", "sec_num": "4.1" }, { "text": "Classifier: We use a multi-layer feed-forward neural network as our classifier component. The input dimension of the classifier corresponds to the output dimension of the underlying encoder. We use PReLU as the activation function. Batch normalization is applied between the layers of the classifier. All hidden layers share the same hidden size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components", "sec_num": "4.1" }, 
{ "text": "We first construct models based solely on psycholinguistic features. These models (1) serve as interpretable baselines for the hybrid prediction models and (2) allow us to determine the importance of individual feature groups in predicting personality traits. To fully utilize the information provided by the contour-based measurement of text features, these models rely on a BLSTM or BLSTM-with-attention architecture, i.e., at the position of $\mathrm{Encoder}_{\mathrm{PSYLING}}$, either $\mathrm{Encoder}_{\mathrm{BLSTM}}$ or $\mathrm{Encoder}_{\mathrm{ATTN}}$ is applied: $$P = \mathrm{Encoder}_{\mathrm{PSYLING}}(X), \qquad y = \mathrm{Classifier}(P)$$ $\mathrm{Encoder}_{\mathrm{BLSTM}}$ has 3 layers with 256 hidden states. We applied a learning rate of 0.001 during training of this model. The BLSTM in $\mathrm{Encoder}_{\mathrm{ATTN}}$ has 3 layers with 512 hidden states. The classifier has 3 layers with a hidden size of 512.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "Our hybrid architecture combines text contours of psycholinguistic features with Transformer-based language models using a late-fusion method, concatenating the hidden representations from the psycholinguistic contour encoder and BERT, specifically: $$P_{\mathrm{PSYLING}} = \mathrm{Encoder}_{\mathrm{PSYLING}}(X), \quad P_{\mathrm{BERT}} = \mathrm{Encoder}_{\mathrm{BERT}}(T), \quad P = [P_{\mathrm{PSYLING}}^T \mid P_{\mathrm{BERT}}^T]^T, \quad y = \mathrm{Classifier}(P)$$ At the position of $\mathrm{Encoder}_{\mathrm{PSYLING}}$, either $\mathrm{Encoder}_{\mathrm{BLSTM}}$ can be used, which has 3 layers with 256 hidden states, or $\mathrm{Encoder}_{\mathrm{ATTN}}$, whose BLSTM also has 3 layers with 256 hidden states and dropout = 0.2. During training, the BERT parameters use a fixed learning rate of $2 \times 10^{-5}$, while a learning rate of $8 \times 10^{-5}$ is applied to all other parameters. The classifier has 3 layers with a hidden size of 512.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, 
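A minimal PyTorch sketch of the late-fusion hybrid model described above: it concatenates $P_{\mathrm{PSYLING}}$ and $P_{\mathrm{BERT}}$ and feeds the result to the classifier. The BERT pooling follows Section 4.1 (mean over the layer-$l$ outputs of the word tokens); padding handling is omitted for brevity, and all names are our own illustrative choices.

```python
# Minimal sketch of the late-fusion hybrid model (contour encoder + BERT).
import torch
import torch.nn as nn
from transformers import AutoModel

class HybridModel(nn.Module):
    def __init__(self, contour_encoder, d_psy=512, d_bert=768, layer=12):
        super().__init__()
        self.contour_encoder = contour_encoder
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.layer = layer                          # l = 11 or 12, per Section 4.1
        self.pool = nn.Linear(d_bert, d_bert)       # W_pool, b_pool of the BERT branch
        self.classifier = nn.Sequential(            # 3-layer feed-forward classifier
            nn.Linear(d_psy + d_bert, 512), nn.PReLU(), nn.BatchNorm1d(512),
            nn.Linear(512, 512), nn.PReLU(), nn.BatchNorm1d(512),
            nn.Linear(512, 1))                      # one logit per binary trait

    def forward(self, contours, input_ids, attention_mask):
        p_psy = self.contour_encoder(contours)                  # (batch, d_psy)
        out = self.bert(input_ids, attention_mask=attention_mask,
                        output_hidden_states=True)
        h = out.hidden_states[self.layer]                       # (batch, m, 768)
        v = h[:, 1:-1].mean(dim=1)         # drop [CLS]/[SEP]; ignores padding here
        p_bert = torch.tanh(self.pool(v))
        p = torch.cat([p_psy, p_bert], dim=-1)                  # late fusion
        return self.classifier(p)                               # logit for BCE loss
```

In the feature-based setting the BERT parameters would be frozen; in the fine-tuning setting they would be trained with the smaller learning rate given above.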
{ "text": "The final model used in our experiments employed a stacking approach to ensemble our best performing models (Wolpert, 1992) , which has been shown to effectively increase the accuracy of the ensembled individual models. Specifically, we employed model stacking to combine ten BERT+ATTN-PSYLING (FT) model instances for each dataset.", "cite_spans": [ { "start": 108, "end": 123, "text": "(Wolpert, 1992)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "The training procedure consists of two stages: In stage one, we take each model's predictions on the dev fold of a k-fold CV, with the model trained on the corresponding train folds. These predictions are then concatenated and constitute one of the 10 dimensions of the input data for the subsequent stage (stage 2); we did the same for all 10 model instances. The final predictions of the model are derived from a logistic regression meta-model trained on the concatenated prediction vectors from stage 1 (10-fold CV).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "To assess the relative importance of the feature groups, we employed Submodular Pick LIME (SP-LIME; Ribeiro et al. (2016)). SP-LIME is a method to construct a global explanation of a model by aggregating the weights of linear models that locally approximate the original model. To this end, we first constructed local explanations using LIME. Analogous to super-pixels for images, we categorized our features into four groups -lexical richness, morpho-syntactic complexity, readability, and sentiment/emotion (see Section 3.2). We used binary vectors $z \in \{0, 1\}^d$ to denote the absence and presence of feature groups in the perturbed data samples, where $d$ is the number of feature groups. Here, 'absent' means that all values of the features in the feature group are set to 0, and 'present' means that their values are retained. For simplicity, a linear regression model was chosen as the local explanatory model. An exponential kernel function with Hamming distance and kernel width $\sigma = 0.75\sqrt{d}$ was used to assign a weight to each perturbed data sample. After constructing a local explanation for each data sample in the original dataset, we obtain the matrix $W \in \mathbb{R}^{n \times d}$, where $n$ is the number of data samples in the original dataset and $W_{ij}$ is the $j$th coefficient of the linear regression model fitted to explain data sample $x_i$. The global SP-LIME importance score for feature group $j$ can then be derived as: $$I_j = \sum_{i=1}^{n} |W_{ij}|$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature importance", "sec_num": "4.3" }, { "text": "An overview of the results of our models in comparison to those reported in the previous studies reviewed above is presented in Table 1. As Table 1 shows, we achieve state-of-the-art (SOTA) results on both benchmark personality datasets: On the Big Five Essay dataset, our best-performing model achieves a classification accuracy of 63.5%, which corresponds to an increase of 2.9% over the previous SOTA. On the MBTI Kaggle dataset, our best model improves the classification accuracy of the SOTA by 8.28%. On both datasets, the highest classification accuracy was achieved by the ensemble model, which combined ten iterations of a hybrid model integrating a fine-tuned BERT model with an attention-based BLSTM model trained on text contours (see BERT+PSYLING Ensemble in Table 1 ). Our models achieve the highest performance on four of the Big Five dimensions -all except Extraversion -and on all four MBTI dimensions, with the largest increase in performance for the Big Five on the Openness dimension (+7.35%) and for the MBTI on the T/F dimension (+9.6%). Comparing the accuracy for each personality trait from Table 1 for the hybrid models trained with the "feature-based" strategy (denoted by "FB") with the corresponding value for the models trained with the "fine-tuning" strategy (denoted by "FT"), we find that the accuracy for all traits improved when the pre-trained model was fine-tuned on the dataset.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 147, "text": "Table 1. As Table 1", "ref_id": "TABREF2" }, 
As Table 1", "ref_id": "TABREF2" }, { "start": 767, "end": 775, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1098, "end": 1105, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "Comparing the accuracy for each personality trait for the models trained with an attention mechanism (denoted by 'ATTN') to the corresponding value for the models trained without this mechanism (denoted by 'BLSTM'), we find that accuracy on all dimensions except the MBTI N/S improved when an attention mechanism was used. Our results also show that approaches grounded in interpretable features can achieve competitive performance with Transformer-based approaches: Our best-performing model trained solely on psycholinguistic features, the attention-based BLSTM model (ATT-PSYLING), achieved an average classification accuracy of 60.04%, approaching the previous SOTA model, BERT-base + MLP Mehta et al. (2020a) , by only 0.54%. This is a promising finding given the need for more interpretable personality prediction models that can provide valuable insights into key psycholinguistic features to drive personality prediction and advance personality psychology research. See e.g. Rudin (2019) for more general calls for using white-box models to solve practical problems, particularly in the context of critical industries such as healthcare, criminal justice, and news. This is due to the fact that human experts in a given application domain require both accurate and understandable models (Loyola-Gonzalez, 2019) .", "cite_spans": [ { "start": 693, "end": 713, "text": "Mehta et al. (2020a)", "ref_id": null }, { "start": 1295, "end": 1318, "text": "(Loyola-Gonzalez, 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "In what follows, we present the results of the ablation experiments. Feature group importance was quantified using SP-LIME on the best performing model trained only on text contours of psycholinguistic features, the ATTN-PSYLING model. The results of the feature ablation experiment are presented in Table 2 . The table shows that the prediction of personality traits was influenced by all four feature groups (all I > 4.21). Overall, personality traits were best predicted by the sentiment/emotion/affect (SentiEmo) feature group. The lexical richness, diversity and sophistication group consistently ranked second on all traits except the P/J MBTI dimension. This result indicates that in addition to words associated with affective-emotional categories, personality traits are also related to more general aspects of vocabulary. Morphosyntactic complexity and readability play a minor role but still achieve high I-scores compared to the highest scoring group in predicting Extraversion, Neuroticism, and Agreeableness (ratio: I(group j ) / I(SentEmo) > 0.45). Finally, zooming in on the specific interactions between psycholinguistic cues and personality traits, we calculated the difference between the average feature scores of text samples with different labels for each personality trait. Visualizations of the most important psycholinguistic features that influence the prediction of personality traits are shown in Figures 4 and in the Appendix. 
Some interesting patterns emerged: For example, texts produced by extroverts tend to (a) have less complex morphosyntax than those by introverts (as indicated by the lower scores of the information-theoretic complexity measures), (b) contain a greater proportion of positive words, and (c) have a higher proportion of frequently used n-grams from the spoken language, news, and magazine registers. The language use of individuals scoring high on Neuroticism showed (a) a higher proportion of self-referencing words, (b) higher proportions of words related to sadness, anxiety and disappointment, but also (c) a higher proportion of longer n-grams from the fiction register. Highly conscientious individuals showed (a) a higher proportion of words with high prevalence, i.e. words that are known by a larger percentage of the population, (b) more words associated with affiliation (ally, friend) and (c) a higher proportion of frequently used n-grams from the academic register. These results replicate and extend previous findings reported in the literature (for overviews see, e.g., Mairesse et al., 2007; Park et al., 2015; Boyd and Schwartz, 2021) .", "cite_spans": [ { "start": 2541, "end": 2563, "text": "Mairesse et al., 2007;", "ref_id": "BIBREF24" }, { "start": 2564, "end": 2582, "text": "Park et al., 2015;", "ref_id": "BIBREF38" }, { "start": 2583, "end": 2607, "text": "Boyd and Schwartz, 2021)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 300, "end": 307, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "Due to the central importance of personality in capturing essential aspects of human life, increasing attention is being paid to modeling and predicting personality traits. In this work, we made contributions that advance the state of the art in the automatic prediction of personality traits from verbal behavior. We demonstrated that models trained with a comprehensive set of theory-based psycholinguistic features can compete with a Transformer-based model when their within-text distribution is taken into account. Moreover, we showed that hybrid models incorporating such features can improve the performance of pre-trained Transformer language models, even when the latter are based on a larger model (BERT-large). We also showed that different techniques for applying pre-trained language representations from the Transformer model have an impact on model performance. Our ablation experiments have yielded interesting insights into the interplay between theory-based psycholinguistic features and personality traits. Here, we decided to focus on the two most widely used benchmark datasets. In our future work, we intend to conduct experiments with more recent, larger personality datasets such as PANDORA (Gjurkovic et al., 2020) . Since this dataset also includes metadata (gender, age, and location/region), it would be interesting to see how these variables contribute to modeling and predicting personality traits from language use. ", "cite_spans": [ { "start": 1210, "end": 1234, "text": "(Gjurkovic et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://www.personalitycafe.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Plotted scores represent the difference between the z-standardized mean scores of high- and low-scoring individuals on a given personality trait. 
Positive scores are characteristic of the high-scoring individuals on a given trait (e.g. individuals with high extraversion scores).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Machine learning approach to personality type prediction based on the myers-briggs type indicator\u00ae", "authors": [ { "first": "Amirhosseini", "middle": [], "last": "Mohammad Hossein", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Kazemian", "suffix": "" } ], "year": 2020, "venue": "Multimodal Technologies and Interaction", "volume": "4", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Hossein Amirhosseini and Hassan Kazemian. 2020. Machine learning approach to per- sonality type prediction based on the myers-briggs type indicator\u00ae. Multimodal Technologies and Interaction, 4(1):9.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Depechemood++: a bilingual emotion lexicon built through simple yet powerful techniques", "authors": [ { "first": "Oscar", "middle": [], "last": "Araque", "suffix": "" }, { "first": "Lorenzo", "middle": [], "last": "Gatti", "suffix": "" }, { "first": "Jacopo", "middle": [], "last": "Staiano", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Guerini", "suffix": "" } ], "year": 2019, "venue": "IEEE transactions on affective computing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar Araque, Lorenzo Gatti, Jacopo Staiano, and Marco Guerini. 2019. Depechemood++: a bilingual emotion lexicon built through simple yet powerful techniques. IEEE transactions on affective comput- ing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language-based personality: a new approach to personality in a digital world", "authors": [ { "first": "L", "middle": [], "last": "Ryan", "suffix": "" }, { "first": "James", "middle": [ "W" ], "last": "Boyd", "suffix": "" }, { "first": "", "middle": [], "last": "Pennebaker", "suffix": "" } ], "year": 2017, "venue": "Current opinion in behavioral sciences", "volume": "18", "issue": "", "pages": "63--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan L Boyd and James W Pennebaker. 2017. Language-based personality: a new approach to per- sonality in a digital world. 
Current opinion in behav- ioral sciences, 18:63-68.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Natural language analysis and the psychology of verbal behavior: The past, present, and future states of the field", "authors": [ { "first": "L", "middle": [], "last": "Ryan", "suffix": "" }, { "first": "H Andrew", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 2021, "venue": "Journal of Language and Social Psychology", "volume": "40", "issue": "1", "pages": "21--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan L Boyd and H Andrew Schwartz. 2021. Natu- ral language analysis and the psychology of verbal behavior: The past, present, and future states of the field. Journal of Language and Social Psychology, 40(1):21-41.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Affective norms for english words (anew): Instruction manual and affective ratings", "authors": [ { "first": "M", "middle": [], "last": "Margaret", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Bradley", "suffix": "" }, { "first": "", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1999, "venue": "Technical report C-1, the center for research in psychophysiology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret M Bradley and Peter J Lang. 1999. Affective norms for english words (anew): Instruction manual and affective ratings. Technical report, Technical report C-1, the center for research in psychophysiol- ogy . . . .", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The new general service list: Celebrating 60 years of vocabulary learning. The Language Teacher", "authors": [ { "first": "Charles", "middle": [], "last": "Browne", "suffix": "" } ], "year": 2013, "venue": "", "volume": "37", "issue": "", "pages": "13--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Browne et al. 2013. The new general service list: Celebrating 60 years of vocabulary learning. The Language Teacher, 37(4):13-16.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Word prevalence norms for 62,000 english lemmas. Behavior research methods", "authors": [ { "first": "Marc", "middle": [], "last": "Brysbaert", "suffix": "" }, { "first": "Pawe\u0142", "middle": [], "last": "Mandera", "suffix": "" }, { "first": "F", "middle": [], "last": "Samantha", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Mc-Cormick", "suffix": "" }, { "first": "", "middle": [], "last": "Keuleers", "suffix": "" } ], "year": 2019, "venue": "", "volume": "51", "issue": "", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Brysbaert, Pawe\u0142 Mandera, Samantha F Mc- Cormick, and Emmanuel Keuleers. 2019. Word prevalence norms for 62,000 english lemmas. Be- havior research methods, 51(2):467-479.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Senticnet: A publicly available semantic resource for opinion mining", "authors": [ { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Robyn", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Hussain", "suffix": "" } ], "year": 2010, "venue": "2010 AAAI fall symposium series", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik Cambria, Robyn Speer, Catherine Havasi, and Amir Hussain. 2010. 
Senticnet: A publicly avail- able semantic resource for opinion mining. In 2010 AAAI fall symposium series.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Cambridge handbook of personality psychology", "authors": [ { "first": "J", "middle": [], "last": "Philip", "suffix": "" }, { "first": "Gerald", "middle": [], "last": "Corr", "suffix": "" }, { "first": "", "middle": [], "last": "Matthews", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip J Corr and Gerald Matthews. 2020. The Cam- bridge handbook of personality psychology. Cam- bridge University Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Neo personality inventory-revised (NEO PI-R)", "authors": [ { "first": "T", "middle": [], "last": "Paul", "suffix": "" }, { "first": "", "middle": [], "last": "Costa", "suffix": "" }, { "first": "", "middle": [], "last": "Robert R Mccrae", "suffix": "" } ], "year": 1992, "venue": "Psychological Assessment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul T Costa and Robert R McCrae. 1992. Neo person- ality inventory-revised (NEO PI-R). Psychological Assessment Resources Odessa, FL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Corpus of Contemporary American English (COCA): 560 million words", "authors": [ { "first": "Mark", "middle": [], "last": "Davies", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Davies. 2008. The Corpus of Contemporary American English (COCA): 560 million words, 1990- present.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Rfc1951: Deflate compressed data format specification version 1", "authors": [ { "first": "Peter", "middle": [], "last": "Deutsch", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Deutsch. 1996. Rfc1951: Deflate compressed data format specification version 1.3.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A readability formula that saves time", "authors": [ { "first": "Edward", "middle": [], "last": "Fry", "suffix": "" } ], "year": 1968, "venue": "Journal of reading", "volume": "11", "issue": "7", "pages": "513--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Fry. 1968. A readability formula that saves time. 
Journal of reading, 11(7):513-578.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "PANDORA talks: Personality and demographics on reddit", "authors": [ { "first": "Matej", "middle": [], "last": "Gjurkovic", "suffix": "" }, { "first": "Mladen", "middle": [], "last": "Karan", "suffix": "" }, { "first": "Iva", "middle": [], "last": "Vukojevic", "suffix": "" }, { "first": "Mihaela", "middle": [], "last": "Bosnjak", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matej Gjurkovic, Mladen Karan, Iva Vukojevic, Mi- haela Bosnjak, and Jan Snajder. 2020. PANDORA talks: Personality and demographics on reddit. CoRR, abs/2004.04460.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Attention assisted discovery of sub-utterance structure in speech emotion recognition", "authors": [ { "first": "Che-Wei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "S", "middle": [], "last": "Shrikanth", "suffix": "" }, { "first": "", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2016, "venue": "Interspeech", "volume": "", "issue": "", "pages": "1387--1391", "other_ids": {}, "num": null, "urls": [], "raw_text": "Che-Wei Huang and Shrikanth S Narayanan. 2016. At- tention assisted discovery of sub-utterance structure in speech emotion recognition. In Interspeech, pages 1387-1391.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Big five inventory", "authors": [ { "first": "Eileen", "middle": [ "M" ], "last": "Oliver P John", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Donahue", "suffix": "" }, { "first": "", "middle": [], "last": "Kentle", "suffix": "" } ], "year": 1991, "venue": "Journal of Personality and Social Psychology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oliver P John, Eileen M Donahue, and Robert L Kentle. 1991. Big five inventory. Journal of Personality and Social Psychology.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Estimating the prevalence and diversity of words in written language", "authors": [ { "first": "T", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Melody", "middle": [], "last": "Johns", "suffix": "" }, { "first": "Michael N", "middle": [], "last": "Dye", "suffix": "" }, { "first": "", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2020, "venue": "Quarterly Journal of Experimental Psychology", "volume": "73", "issue": "6", "pages": "841--855", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brendan T Johns, Melody Dye, and Michael N Jones. 2020. Estimating the prevalence and diversity of words in written language. Quarterly Journal of Experimental Psychology, 73(6):841-855.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sauleh Eetemadi, and Erik Cambria. 2020. Personality trait detection using bagged svm over bert word embedding ensembles", "authors": [ { "first": "Amirmohammad", "middle": [], "last": "Kazameini", "suffix": "" }, { "first": "Samin", "middle": [], "last": "Fatehi", "suffix": "" }, { "first": "Yash", "middle": [], "last": "Mehta", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.01309" ] }, "num": null, "urls": [], "raw_text": "Amirmohammad Kazameini, Samin Fatehi, Yash Mehta, Sauleh Eetemadi, and Erik Cambria. 2020. Per- sonality trait detection using bagged svm over bert word embedding ensembles. 
arXiv preprint arXiv:2010.01309.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Becoming linguistically mature: Modeling English and German children's writing development across school grades", "authors": [ { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Str\u00f6bel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "65--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elma Kerz, Yu Qiao, Daniel Wiechmann, and Marcus Str\u00f6bel. 2020. Becoming linguistically mature: Modeling English and German children's writing development across school grades. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 65-74.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Age-of-acquisition ratings for 30,000 English words", "authors": [ { "first": "Victor", "middle": [], "last": "Kuperman", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Stadthagen-Gonzalez", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brysbaert", "suffix": "" } ], "year": 2012, "venue": "Behavior Research Methods", "volume": "44", "issue": "4", "pages": "978--990", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Kuperman, Hans Stadthagen-Gonzalez, and Marc Brysbaert. 2012. Age-of-acquisition ratings for 30,000 English words. Behavior Research Methods, 44(4):978-990.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Feature extraction from social media posts for psychometric typing of participants", "authors": [ { "first": "Charles", "middle": [], "last": "Li", "suffix": "" }, { "first": "Monte", "middle": [], "last": "Hancock", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Bowles", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Hancock", "suffix": "" }, { "first": "Lesley", "middle": [], "last": "Perg", "suffix": "" }, { "first": "Payton", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Asher", "middle": [], "last": "Burrell", "suffix": "" }, { "first": "Gianella", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Frankie", "middle": [], "last": "Stiers", "suffix": "" }, { "first": "Shana", "middle": [], "last": "Marshall", "suffix": "" } ], "year": 2018, "venue": "International Conference on Augmented Cognition", "volume": "", "issue": "", "pages": "267--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Li, Monte Hancock, Ben Bowles, Olivia Hancock, Lesley Perg, Payton Brown, Asher Burrell, Gianella Frank, Frankie Stiers, Shana Marshall, et al. 2018. Feature extraction from social media posts for psychometric typing of participants. In International Conference on Augmented Cognition, pages 267-286. Springer.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Black-box vs. white-box: Understanding their advantages and weaknesses from a practical point of view", "authors": [ { "first": "Octavio", "middle": [], "last": "Loyola-Gonzalez", "suffix": "" } ], "year": 2019, "venue": "IEEE Access", "volume": "7", "issue": "", "pages": "154096--154113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Octavio Loyola-Gonzalez. 2019. Black-box vs.
white-box: Understanding their advantages and weaknesses from a practical point of view. IEEE Access, 7:154096-154113.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Using linguistic cues for the automatic recognition of personality in conversation and text", "authors": [ { "first": "Fran\u00e7ois", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "" }, { "first": "Matthias", "middle": [ "R" ], "last": "Mehl", "suffix": "" }, { "first": "Roger", "middle": [ "K" ], "last": "Moore", "suffix": "" } ], "year": 2007, "venue": "Journal of artificial intelligence research", "volume": "30", "issue": "", "pages": "457--500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois Mairesse, Marilyn A Walker, Matthias R Mehl, and Roger K Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of artificial intelligence research, 30:457-500.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deep learning-based document modeling for personality detection from text", "authors": [ { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" } ], "year": 2017, "venue": "IEEE Intelligent Systems", "volume": "32", "issue": "2", "pages": "74--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Navonil Majumder, Soujanya Poria, Alexander Gelbukh, and Erik Cambria. 2017. Deep learning-based document modeling for personality detection from text. IEEE Intelligent Systems, 32(2):74-79.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55-60.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Personality Traits", "authors": [ { "first": "G", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "I", "middle": [], "last": "Deary", "suffix": "" }, { "first": "M", "middle": [], "last": "Whiteman", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Matthews, I. Deary, and M. Whiteman. 2009. Personality Traits. Cambridge University Press.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The five-factor model of personality traits: Consensus and controversy
", "authors": [ { "first": "Robert", "middle": [ "R" ], "last": "McCrae", "suffix": "" } ], "year": 2009, "venue": "The Cambridge handbook of personality psychology", "volume": "", "issue": "", "pages": "148--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert R McCrae. 2009. The five-factor model of personality traits: Consensus and controversy. The Cambridge handbook of personality psychology, pages 148-161.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "An introduction to the five-factor model and its applications", "authors": [ { "first": "Robert", "middle": [ "R" ], "last": "McCrae", "suffix": "" }, { "first": "Oliver", "middle": [ "P" ], "last": "John", "suffix": "" } ], "year": 1992, "venue": "Journal of personality", "volume": "60", "issue": "2", "pages": "175--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert R McCrae and Oliver P John. 1992. An introduction to the five-factor model and its applications. Journal of personality, 60(2):175-215.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Clearing the smog", "authors": [ { "first": "G", "middle": [ "Harry" ], "last": "McLaughlin", "suffix": "" } ], "year": 1969, "venue": "Journal of Reading", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G Harry McLaughlin. 1969. Clearing the smog. Journal of Reading.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Bottom-up and top-down: Predicting personality with psycholinguistic and language model features", "authors": [ { "first": "Yash", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Samin", "middle": [], "last": "Fatehi", "suffix": "" }, { "first": "Amirmohammad", "middle": [], "last": "Kazameini", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Stachl", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Sauleh", "middle": [], "last": "Eetemadi", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE International Conference on Data Mining (ICDM)", "volume": "", "issue": "", "pages": "1184--1189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Mehta, Samin Fatehi, Amirmohammad Kazameini, Clemens Stachl, Erik Cambria, and Sauleh Eetemadi. 2020a. Bottom-up and top-down: Predicting personality with psycholinguistic and language model features. In 2020 IEEE International Conference on Data Mining (ICDM), pages 1184-1189. IEEE.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Recent trends in deep learning based personality detection", "authors": [ { "first": "Yash", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" } ], "year": 2020, "venue": "Artificial Intelligence Review", "volume": "53", "issue": "4", "pages": "2313--2339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Mehta, Navonil Majumder, Alexander Gelbukh, and Erik Cambria. 2020b. Recent trends in deep learning based personality detection.
Artificial Intelligence Review, 53(4):2313-2339.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Introduction to Type: A Description of the Theory and Applications of the Myers-Briggs Type Indicator", "authors": [ { "first": "Isabel", "middle": [ "Briggs" ], "last": "Meyers", "suffix": "" }, { "first": "Mary", "middle": [ "H" ], "last": "McCaulley", "suffix": "" }, { "first": "Allen", "middle": [ "L" ], "last": "Hammer", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isabel Briggs Meyers, Mary H McCaulley, and Allen L Hammer. 1990. Introduction to Type: A Description of the Theory and Applications of the Myers-Briggs Type Indicator. Consulting Psychologists Press.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "174--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad. 2018. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 174-184.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the seventh international workshop on Semantic Evaluation Exercises (SemEval-2013)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets. In Proceedings of the seventh international workshop on Semantic Evaluation Exercises (SemEval-2013), Atlanta, Georgia, USA.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Crowdsourcing a word-emotion association lexicon", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" }, { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2013, "venue": "Computational intelligence", "volume": "29", "issue": "3", "pages": "436--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational intelligence, 29(3):436-465.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Personality and the prediction of consequential outcomes", "authors": [ { "first": "Daniel", "middle": [ "J" ], "last": "Ozer", "suffix": "" }, { "first": "Veronica", "middle": [], "last": "Benet-Martinez", "suffix": "" } ], "year": 2006, "venue": "Annu. Rev. Psychol", "volume": "57", "issue": "", "pages": "401--421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel J Ozer and Veronica Benet-Martinez. 2006.
Personality and the prediction of consequential outcomes. Annu. Rev. Psychol., 57:401-421.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Automatic personality assessment through social media language", "authors": [ { "first": "Gregory", "middle": [], "last": "Park", "suffix": "" }, { "first": "H", "middle": [ "Andrew" ], "last": "Schwartz", "suffix": "" }, { "first": "Johannes", "middle": [ "C" ], "last": "Eichstaedt", "suffix": "" }, { "first": "Margaret", "middle": [ "L" ], "last": "Kern", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Kosinski", "suffix": "" }, { "first": "David", "middle": [ "J" ], "last": "Stillwell", "suffix": "" }, { "first": "Lyle", "middle": [ "H" ], "last": "Ungar", "suffix": "" }, { "first": "Martin", "middle": [ "EP" ], "last": "Seligman", "suffix": "" } ], "year": 2015, "venue": "Journal of personality and social psychology", "volume": "108", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Park, H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Michal Kosinski, David J Stillwell, Lyle H Ungar, and Martin EP Seligman. 2015. Automatic personality assessment through social media language. Journal of personality and social psychology, 108(6):934.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "PyTorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Linguistic inquiry and word count: LIWC 2001", "authors": [ { "first": "James", "middle": [ "W" ], "last": "Pennebaker", "suffix": "" }, { "first": "Martha", "middle": [ "E" ], "last": "Francis", "suffix": "" }, { "first": "Roger", "middle": [ "J" ], "last": "Booth", "suffix": "" } ], "year": 2001, "venue": "Mahway", "volume": "71", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: LIWC 2001.
Mahway: Lawrence Erlbaum Associates, 71(2001):2001.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Linguistic styles: language use as an individual difference", "authors": [ { "first": "James", "middle": [ "W" ], "last": "Pennebaker", "suffix": "" }, { "first": "Laura", "middle": [ "A" ], "last": "King", "suffix": "" } ], "year": 1999, "venue": "Journal of personality and social psychology", "volume": "77", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James W Pennebaker and Laura A King. 1999. Linguistic styles: language use as an individual difference. Journal of personality and social psychology, 77(6):1296.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Alzheimer's disease detection from spontaneous speech through combining linguistic complexity and (dis) fluency features with pretrained language models", "authors": [ { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Xuefeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" }, { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2106.08689" ] }, "num": null, "urls": [], "raw_text": "Yu Qiao, Xuefeng Yin, Daniel Wiechmann, and Elma Kerz. 2021a. Alzheimer's disease detection from spontaneous speech through combining linguistic complexity and (dis) fluency features with pretrained language models. arXiv preprint arXiv:2106.08689.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Prediction of listener perception of argumentative speech in a crowdsourced data using (psycho-) linguistic and fluency features", "authors": [ { "first": "Yu", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Sourabh", "middle": [], "last": "Zanwar", "suffix": "" }, { "first": "Rishab", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Wiechmann", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Elma", "middle": [], "last": "Kerz", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2111.07130" ] }, "num": null, "urls": [], "raw_text": "Yu Qiao, Sourabh Zanwar, Rishab Bhattacharyya, Daniel Wiechmann, Wei Zhou, Elma Kerz, and Ralf Schl\u00fcter. 2021b. Prediction of listener perception of argumentative speech in a crowdsourced data using (psycho-) linguistic and fluency features.
arXiv preprint arXiv:2111.07130.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Automatic personality prediction; an enhanced method using ensemble modeling", "authors": [ { "first": "Mehrdad", "middle": [], "last": "Ranjbar-Khadivi", "suffix": "" }, { "first": "Zoleikha", "middle": [], "last": "Jahanbakhsh-Nagadeh", "suffix": "" }, { "first": "Elnaz", "middle": [], "last": "Zafarani-Moattar", "suffix": "" }, { "first": "Taymaz", "middle": [], "last": "Rahkar-Farshi", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.04571" ] }, "num": null, "urls": [], "raw_text": "Khasmakhi, Mehrdad Ranjbar-Khadivi, Zoleikha Jahanbakhsh-Nagadeh, Elnaz Zafarani-Moattar, and Taymaz Rahkar-Farshi. 2021. Automatic personality prediction; an enhanced method using ensemble modeling. arXiv preprint arXiv:2007.04571.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "(1) The Affective Norms for English Words (ANEW) (Bradley and Lang, 1999), (2) ANEW-Emo lexicons (Stevenson et al., 2007), (3) DepecheMood++ (Araque et al., 2019), (4) The Geneva Affect Label Coder (GALC) (Scherer, 2005), (5) The General Inquirer (Stone et al., 1966), (6) The LIWC dictionary (Pennebaker et al., 2001), (7) The NRC Word-Emotion Association Lexicon (Mohammad and Turney, 2013), (8) The NRC Valence, Arousal, and Dominance (NRC-VAD) lexicon", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Text contours for three selected features of the first 40 sentences of a randomly selected text from the Essays dataset (ID: 2004 499).", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Essays dataset: Upper panel: Top 20 most characteristic features from each feature group by personality trait. Lower panel: Top 2 most characteristic features from each feature group by personality trait. Plotted scores represent the difference between the z-standardized mean scores of high- and low-scoring individuals on a given personality trait. Positive scores are characteristic of the high-scoring individuals on a given trait (e.g. individuals with high extraversion scores).", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "text": "H and d_o are defined as in the BLSTM encoder description. Softmax is defined as:", "type_str": "table", "html": null, "num": null, "content": "
$\alpha_{ij} = \frac{e^{m_{ij}}}{\sum_{k=1}^{n} e^{m_{kj}}}$
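A minimal runnable sketch of this attention step follows, assuming H is the (seq_len x d_o) matrix of BLSTM hidden states, W is a learned projection, and a tanh is applied before the softmax (one common formulation); the names H, W, and attention_pool are illustrative placeholders, not the authors' code:

```python
import torch
import torch.nn.functional as F

def attention_pool(H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Attention-weighted summary of BLSTM hidden states H (seq_len x d_o)."""
    m = torch.tanh(H @ W)              # unnormalized scores m_ij, shape (seq_len, d_a)
    alpha = F.softmax(m, dim=0)        # alpha_ij = exp(m_ij) / sum_k exp(m_kj)
    return alpha.transpose(0, 1) @ H   # weighted combination, shape (d_a, d_o)

# Toy usage: 40 time steps, hidden size 128, a single attention column.
H = torch.randn(40, 128)
W = torch.randn(128, 1)
summary = attention_pool(H, W)         # shape (1, 128)
```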
" }, "TABREF2": { "text": "", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF3": { "text": ".57 readability 9.57 morph.syn 9.17 morph.syn 6.23 morph.syn 8.11 morph.syn 7.08 morph.syn 8.91 readability 7.51 readability 4.21 readability 7.06", "type_str": "table", "html": null, "num": null, "content": "
OCEAN
GroupIGroupIGroupIGroupIGroupI
SentiEmo 18.49 SentiEmo 21.36 SentiEmo 16.39 SentiEmo 9.28 SentiEmo 16.62
lexical12.90lexical14.48lexical10.93lexical7.52lexical10.23
readability 9I/EN/ST/FP/J
GroupIGroupIGroupIGroupI
SentiEmo 33.73 SentiEmo 21.32 SentiEmo 45.06 SentiEmo 24.97
lexical29.94lexical14.25lexical24.64 readability 17.21
morph.syn 20.65 readability 12.55 morph.syn 20.31 morph.syn 16.02
readability 18.33 morph.syn 10.40 readability 18.76lexical14.48
" }, "TABREF4": { "text": "", "type_str": "table", "html": null, "num": null, "content": "
, "TABREF4": { "text": "", "type_str": "table", "html": null, "num": null, "content": "Results of the feature ablation for the Big Five Essays dataset (top) and the Kaggle MBTI dataset (bottom): Feature importance (Model: ATTN-PSYLING) macro-averaged across 100 model instances (10 \u00d7 10-fold CV).
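The 10 × 10-fold protocol named in this caption amounts to 10 repetitions of 10-fold cross-validation, i.e., 100 trained model instances whose scores are then macro-averaged. A hedged sketch, with X, y, and train_and_score as hypothetical placeholders rather than the authors' pipeline:

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

def macro_averaged_score(X, y, train_and_score):
    """Mean score over the 100 models from 10 repetitions of 10-fold CV."""
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = [train_and_score(X[tr], y[tr], X[te], y[te]) for tr, te in cv.split(X, y)]
    return float(np.mean(scores))
```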
" }, "TABREF5": { "text": "Marco TulioRibeiro, Sameer Singh, and Carlos Guestrin. 2016. \" why should i trust you?\" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144. Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th annual meeting of the association for computational linguistics (volume 2: Short papers), pages 207-212.", "type_str": "table", "html": null, "num": null, "content": "
Klaus R Scherer. 2005. What are emotions? And how can they be measured? Social science information, 44(4):695-729.
Christopher J Soto. 2019. How replicable are links between personality traits and consequential life outcomes? The Life Outcomes of Personality Replication Project. Psychological Science, 30(5):711-727.
Ryan A Stevenson, Joseph A Mikels, and Thomas W James. 2007. Characterization of the affective norms for English words by discrete emotional categories. Behavior research methods, 39(4):1020-1024.
Philip J Stone, Dexter C Dunphy, and Marshall S Smith. 1966. The general inquirer: A computer approach to content analysis.
Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of language and social psychology, 29(1):24-54.
Alessandro Vinciarelli and Gelareh Mohammadi. 2014. A survey of personality computing. IEEE Transactions on Affective Computing, 5(3):273-291.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
David H Wolpert. 1992. Stacked generalization. Neural networks, 5(2):241-259.
Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th annual meeting of the association for computational linguistics (volume 2: Short papers), pages 207-212.
" } } } }