{"doc_id": "2001.04063", "revision_depth": 2, "before_revision": "In this paper , we present a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pre-training corpus.", "after_revision": "This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of optimizing one-step-ahead prediction in the traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction that predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large-scale dataset (160GB) , respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pre-training corpus.", "edit_actions": [{"type": "R", "before": "In this paper , we present", "after": "This paper presents", "start_char_pos": 0, "end_char_pos": 26, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "R", "before": "the optimization of one-step ahead prediction in", "after": "optimizing one-step-ahead prediction in the", "start_char_pos": 237, "end_char_pos": 285, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "fluency"]}, {"type": "R", "before": "which", "after": "that", "start_char_pos": 381, "end_char_pos": 386, "major_intent": "fluency", "raw_intents": ["fluency", "coherence", "fluency"]}, {"type": "R", "before": "large scale", "after": "large-scale", "start_char_pos": 690, "end_char_pos": 701, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 718, "end_char_pos": 718, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}], "sents_char_pos": [0, 225, 480, 625, 732, 874], "domain": "arxiv"} |
{"doc_id": "2001.07676", "revision_depth": 1, "before_revision": "Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with \"task descriptions\" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases which help the language model understand the given task. Theses phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, regular supervised training is performed on the resulting training set. On several tasks , we show that PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.", "after_revision": "Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with \"task descriptions\" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, regular supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.", "edit_actions": [{"type": "R", "before": "which help the language model understand the", "after": "to help language models understand a", "start_char_pos": 437, "end_char_pos": 481, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "Theses", "after": "These", "start_char_pos": 494, "end_char_pos": 500, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "On several tasks , we show that", "after": "For several tasks and languages,", "start_char_pos": 664, "end_char_pos": 695, "major_intent": "clarity", "raw_intents": ["clarity", "meaning-changed", "coherence"]}], "sents_char_pos": [0, 176, 493, 582, 663], "domain": "arxiv"} |
{"doc_id": "2004.12765", "revision_depth": 2, "before_revision": "Automatic humor detection has interesting use cases in modern technologies, such as chatbots and virtual assistants. Based on the general linguistic structure of humor, in this paper, we propose a novel approach for detecting humor in short texts by using BERT sentence embedding . Our proposed method uses BERT to generate embeddings for sentences of a given text and uses these embeddings as inputs for parallel lines of hidden layers in a neural network. These lines are finally concatenated to predict the target value. For evaluation purposes, we created a new dataset for humor detection consisting of 200k formal short texts (100k positive and 100k negative). Experimental results show that our proposed method can determine humor in short texts with accuracy and an F1-score of 98.2 percent. Our 8-layer model with 110M parameters outperforms all baseline models with a large margin, showing the importance of utilizing linguistic structure in machine learning models.", "after_revision": "Automatic humor detection has interesting use cases in modern technologies, such as chatbots and virtual assistants. In this paper, we propose a novel approach for detecting humor in short texts based on the general linguistic structure of humor . Our proposed method uses BERT to generate embeddings for sentences of a given text and uses these embeddings as inputs of parallel lines of hidden layers in a neural network. These lines are finally concatenated to predict the target value. For evaluation purposes, we created a new dataset for humor detection consisting of 200k formal short texts (100k positive and 100k negative). Experimental results show that our proposed method can determine humor in short texts with accuracy and an F1-score of 98.2 percent. Our 8-layer model with 110M parameters outperforms the baseline models with a large margin, showing the importance of utilizing linguistic structure of texts in machine learning models.", "edit_actions": [{"type": "R", "before": "Based on the general linguistic structure of humor, in", "after": "In", "start_char_pos": 117, "end_char_pos": 171, "major_intent": "coherence", "raw_intents": ["coherence", "clarity", "coherence"]}, {"type": "R", "before": "by using BERT sentence embedding", "after": "based on the general linguistic structure of humor", "start_char_pos": 247, "end_char_pos": 279, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}, {"type": "R", "before": "for", "after": "of", "start_char_pos": 401, "end_char_pos": 404, "major_intent": "fluency", "raw_intents": ["fluency", "clarity", "fluency"]}, {"type": "R", "before": "all", "after": "the", "start_char_pos": 851, "end_char_pos": 854, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "of texts", "start_char_pos": 949, "end_char_pos": 949, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}], "sents_char_pos": [0, 116, 281, 457, 523, 666, 799], "domain": "arxiv"} |
{"doc_id": "2004.14519", "revision_depth": 1, "before_revision": "The Arabic language is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER), Part-of-Speech tagging (POS), Argument Role Labeling (ARL) and Relation Extraction (RE). A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. Their performance on the IE tasks is less known, in particular, the cross-lingual transfer capability from English to Arabic. In this work, we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learning on the information extraction tasks. Our GigaBERT model can outperform mBERT and XLM-R-base on NER, POS and ARL tasks, with regarding to the per-language and /or zero-transfer performance.We make our pre-trained models publicly available at URL to facilitate the research of this field.", "after_revision": "Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER), Part-of-Speech tagging (POS), Argument Role Labeling (ARL) , and Relation Extraction (RE). A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. Their performance on the IE tasks is less known, in particular, the cross-lingual transfer capability from English to Arabic. In this work, we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learning on various IE tasks. 
Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings.We have made our pre-trained models publicly available at URL ", "edit_actions": [{"type": "R", "before": "The Arabic language", "after": "Arabic", "start_char_pos": 0, "end_char_pos": 19, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 217, "end_char_pos": 217, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "the information extraction", "after": "various IE", "start_char_pos": 787, "end_char_pos": 813, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "model can outperform mBERT and XLM-R-base on NER, POS and ARL tasks, with regarding to the per-language and /or zero-transfer performance.We make", "after": "outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings.", "start_char_pos": 834, "end_char_pos": 979, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "We have made", "start_char_pos": 979, "end_char_pos": 979, "major_intent": "clarity", "raw_intents": ["coherence", "clarity", "clarity"]}, {"type": "D", "before": "to facilitate the research of this field.", "after": null, "start_char_pos": 1029, "end_char_pos": 1070, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 247, 500, 626, 820, 972]} |
{"doc_id": "2004.14623", "revision_depth": 1, "before_revision": "In adversarial (challenge) testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard tasks? In this paper, we argue that an essential factor is the ability to form modular representations . Our central contribution is a definition of what it means for a representation to be modular and an experimental method for assessing the extent to which a system's solution is modular in this general sense . Our work is grounded empirically in a new challenge Natural Language Inference dataset designed to assess systems on their ability to reason about entailment and negation. We find that a BERT model with fine-tuning is strikingly successful at the hard generalization tasks we pose using this dataset, and our active manipulations help us to understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations.", "after_revision": "In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure . Our central contribution is a new experimental method called 'interchange interventions', in which systematic manipulations of model-internal states are related to causal effects on their outputs, thereby allowing us to identify modular structure . Our work is grounded empirically in a new challenge Natural Language Inference dataset designed to assess systems on their ability to reason about entailment and negation. We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset, and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations.", "edit_actions": [{"type": "D", "before": "(challenge)", "after": null, "start_char_pos": 15, "end_char_pos": 26, "major_intent": "coherence", "raw_intents": ["style", "coherence", "coherence"]}, {"type": "R", "before": "tasks? In this paper, we", "after": "behavioral tasks? 
We", "start_char_pos": 205, "end_char_pos": 229, "major_intent": "clarity", "raw_intents": ["others", "clarity", "clarity"]}, {"type": "R", "before": "the ability to form modular representations", "after": "modular internal structure", "start_char_pos": 264, "end_char_pos": 307, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "R", "before": "definition of what it means for a representation to be modular and an experimental method for assessing the extent to which a system's solution is modular in this general sense", "after": "new experimental method called 'interchange interventions', in which systematic manipulations of model-internal states are related to causal effects on their outputs, thereby allowing us to identify modular structure", "start_char_pos": 340, "end_char_pos": 516, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}, {"type": "D", "before": "with fine-tuning", "after": null, "start_char_pos": 717, "end_char_pos": 733, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}, {"type": "R", "before": "hard generalization tasks", "after": "systematic generalization task", "start_char_pos": 766, "end_char_pos": 791, "major_intent": "clarity", "raw_intents": ["clarity", "meaning-changed", "clarity"]}, {"type": "R", "before": "help us to", "after": "of model-internal vectors help us", "start_char_pos": 849, "end_char_pos": 859, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}], "sents_char_pos": [0, 135, 211, 309, 518, 690], "domain": "arxiv"} |
{"doc_id": "2005.00192", "revision_depth": 1, "before_revision": "For the automatic evaluation of Generative Question Answering (genQA ) systems, it is essential to assess the correctness of the generated answers . However, n-gram similarity metrics, which are widely used to compare generated texts and references, are prone to misjudge fact-based assessments . Moreover, there is a lack of benchmark datasets to measure the quality of metrics in terms of the correctness. To study a better metric for genQA, we collect high-quality human judgments of correctness on two standard genQA datasets. Using our human-evaluation datasets, we show that existing metrics based on n-gram similarity do not correlate with human judgments. To alleviate this problem, we propose a new metric for evaluating the correctness of genQA . Specifically, the new metric assigns different weights on each token via keyphrase prediction, thereby judging whether a predicted answer sentence captures the key meaning of the human judge's ground-truth . Our proposed metric shows a significantly higher correlation with human judgment than widely used existing metrics .", "after_revision": "In the automatic evaluation of generative question answering (GenQA ) systems, it is difficult to assess the correctness of generated answers due to the free-form of the answer . Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metric for GenQA, we first create high-quality human judgments of correctness on two standard GenQA datasets. Using our human-evaluation datasets, we show that widely used n-gram similarity metrics do not correlate with human judgments. To alleviate this problem, we propose a new metric for evaluating the correctness of GenQA . Specifically, our new metric assigns different weights to each token via keyphrase prediction, thereby judging whether a generated answer sentence captures the key meaning of the reference answer . Our proposed metric shows a significantly higher correlation with human judgments than existing metrics in various datasets .", "edit_actions": [{"type": "R", "before": "For", "after": "In", "start_char_pos": 0, "end_char_pos": 3, "major_intent": "coherence", "raw_intents": ["coherence", "fluency", "coherence"]}, {"type": "R", "before": "Generative Question Answering (genQA", "after": "generative question answering (GenQA", "start_char_pos": 32, "end_char_pos": 68, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "essential", "after": "difficult", "start_char_pos": 86, "end_char_pos": 95, "major_intent": "clarity", "raw_intents": ["style", "clarity", "clarity"]}, {"type": "R", "before": "the generated answers . 
However, n-gram similarity metrics, which are widely used to compare generated texts and references, are prone to misjudge fact-based assessments", "after": "generated answers due to the free-form of the answer", "start_char_pos": 125, "end_char_pos": 294, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "measure the quality of", "after": "evaluate the suitability of existing", "start_char_pos": 348, "end_char_pos": 370, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "D", "before": "the", "after": null, "start_char_pos": 391, "end_char_pos": 394, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "genQA, we collect", "after": "GenQA, we first create", "start_char_pos": 437, "end_char_pos": 454, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}, {"type": "R", "before": "genQA", "after": "GenQA", "start_char_pos": 515, "end_char_pos": 520, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "clarity"]}, {"type": "R", "before": "existing metrics based on", "after": "widely used", "start_char_pos": 581, "end_char_pos": 606, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "A", "before": null, "after": "metrics", "start_char_pos": 625, "end_char_pos": 625, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "genQA", "after": "GenQA", "start_char_pos": 750, "end_char_pos": 755, "major_intent": "fluency", "raw_intents": ["fluency", "clarity", "fluency"]}, {"type": "R", "before": "the", "after": "our", "start_char_pos": 772, "end_char_pos": 775, "major_intent": "clarity", "raw_intents": ["clarity", "fluency", "clarity"]}, {"type": "R", "before": "on", "after": "to", "start_char_pos": 813, "end_char_pos": 815, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "coherence"]}, {"type": "R", "before": "predicted", "after": "generated", "start_char_pos": 879, "end_char_pos": 888, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "human judge's ground-truth", "after": "reference answer", "start_char_pos": 937, "end_char_pos": 963, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "judgment than widely used existing metrics", "after": "judgments than existing metrics in various datasets", "start_char_pos": 1038, "end_char_pos": 1080, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 148, 296, 407, 530, 664, 965], "domain": "arxiv"} |
{"doc_id": "2005.00619", "revision_depth": 1, "before_revision": "Vision, as a central component of human perception, plays a fundamental role in shaping natural language . To better understand how text models are connected to our visual perceptions , we propose a method for examining the similarities between neural representations extracted from words in text and objects in images . Our approach uses a lightweight probing model that learns to map language representations of concrete words to the visual domain . We find that representations from models trained on purely textual data, such as BERT, can be nontrivially mapped to those of a vision model. Such mappings generalize to object categories that were never seen by the probe during training, unlike mappings learned from permuted or random representations . Moreover, we find that the context surrounding objects in sentences greatly impacts performance. Finally, we show that humans significantly outperform all examined models , suggesting considerable room for improvement in representation learning and grounding .", "after_revision": "While large-scale language models have enjoyed great success recently, much remains to be understood about what is encoded in their representations. In this work , we propose a method for characterizing how language representations of concrete nouns relate to the physical appearance of the objects they refer to . Our approach uses a probing model that examines how useful language representations are in discerning between different visual representations. We show evidence of a surprising common ground with the visual domain , finding representations of many language models to be useful in retrieving semantically aligned image patches. In control experiments where language and visual representations are intentionally mismatched, we observe much weaker results. Furthermore, we examine the impact of textual context in our experiments, finding, for instance, that nouns accompanied by adjectives lead to more accurate retrieval . Finally, we show that the examined models substantially under-perform humans in retrieval. Altogether, our findings shed new empirical insights on language grounding, suggesting that some physical properties are being captured by trained language models, and highlighting large room for future progress .", "edit_actions": [{"type": "R", "before": "Vision, as a central component of human perception, plays a fundamental role in shaping natural language . To better understand how text models are connected to our visual perceptions", "after": "While large-scale language models have enjoyed great success recently, much remains to be understood about what is encoded in their representations. 
In this work", "start_char_pos": 0, "end_char_pos": 183, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}, {"type": "R", "before": "examining the similarities between neural representations extracted from words in text and objects in images", "after": "characterizing how language representations of concrete nouns relate to the physical appearance of the objects they refer to", "start_char_pos": 210, "end_char_pos": 318, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "clarity"]}, {"type": "D", "before": "lightweight", "after": null, "start_char_pos": 341, "end_char_pos": 352, "major_intent": "coherence", "raw_intents": ["fluency", "coherence", "coherence"]}, {"type": "R", "before": "learns to map language representations of concrete words to", "after": "examines how useful language representations are in discerning between different visual representations. We show evidence of a surprising common ground with", "start_char_pos": 372, "end_char_pos": 431, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": ". We find that representations from models trained on purely textual data, such as BERT, can be nontrivially mapped to those of a vision model. Such mappings generalize to object categories that were never seen by the probe during training, unlike mappings learned from permuted or random representations", "after": ", finding representations of many language models to be useful in retrieving semantically aligned image patches. In control experiments where language and visual representations are intentionally mismatched, we observe much weaker results. Furthermore, we examine the impact of textual context in our experiments, finding, for instance, that nouns accompanied by adjectives lead to more accurate retrieval", "start_char_pos": 450, "end_char_pos": 754, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "D", "before": "Moreover, we find that the context surrounding objects in sentences greatly impacts performance.", "after": null, "start_char_pos": 757, "end_char_pos": 853, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "others"]}, {"type": "R", "before": "humans significantly outperform all examined models , suggesting considerable room for improvement in representation learning and grounding", "after": "the examined models substantially under-perform humans in retrieval. Altogether, our findings shed new empirical insights on language grounding, suggesting that some physical properties are being captured by trained language models, and highlighting large room for future progress", "start_char_pos": 876, "end_char_pos": 1015, "major_intent": "clarity", "raw_intents": ["clarity", "meaning-changed", "clarity"]}], "sents_char_pos": [0, 106, 320, 451, 593, 756, 853], "domain": "arxiv"} |
{"doc_id": "2005.00619", "revision_depth": 2, "before_revision": "While large-scale language models have enjoyed great success recently, much remains to be understood about what is encoded in their representations. In this work, we propose a method for characterizing how language representations of concrete nouns relate to the physical appearance of the objects they refer to. Our approach uses a probing model that examines how useful language representations are in discerning between different visual representations. We show evidence of a surprising common ground with the visual domain, finding representations of many language models to be useful in retrieving semantically aligned image patches . In control experiments where language and visual representations are intentionally mismatched, we observe much weaker results. Furthermore, we examine the impact of textual context in our experiments, finding, for instance, that nouns accompanied by adjectives lead to more accurate retrieval. Finally, we show that the examined models substantially under-perform humans in retrieval . Altogether, our findings shed new empirical insights on language grounding , suggesting that some physical properties are being captured by trained language models, and highlighting large room for future progress .", "after_revision": "While large-scale contextual language models have enjoyed great success recently, much remains to be understood about what is encoded in their representations. In this work, we characterize how contextual representations of concrete nouns extracted by trained language models relate to the physical properties of the objects they refer to. Our approach uses a probing model that examines how effective these language representations are in discerning between different visual representations. We show that many recent language models yield representations that are useful in retrieving semantically aligned image patches , and explore the role of context in this process. Much weaker results are found in control experiments, attesting the selectivity of the probe. All examined models greatly under-perform humans in retrieval , highlighting substantial room for future progress . 
Altogether, our findings shed new empirical insights on language grounding and its materialization in contextual language models .", "edit_actions": [{"type": "A", "before": null, "after": "contextual", "start_char_pos": 18, "end_char_pos": 18, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "coherence", "meaning-changed"]}, {"type": "R", "before": "propose a method for characterizing how language", "after": "characterize how contextual", "start_char_pos": 167, "end_char_pos": 215, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "extracted by trained language models", "start_char_pos": 250, "end_char_pos": 250, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "appearance", "after": "properties", "start_char_pos": 274, "end_char_pos": 284, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "useful", "after": "effective these", "start_char_pos": 367, "end_char_pos": 373, "major_intent": "style", "raw_intents": ["style", "style", "style"]}, {"type": "R", "before": "evidence of a surprising common ground with the visual domain, finding representations of many language models to be", "after": "that many recent language models yield representations that are", "start_char_pos": 467, "end_char_pos": 583, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": ". In control experiments where language and visual representations are intentionally mismatched, we observe much weaker results. Furthermore, we examine the impact of textual context in our experiments, finding, for instance, that nouns accompanied by adjectives lead to more accurate retrieval. Finally, we show that the examined models substantially", "after": ", and explore the role of context in this process. Much weaker results are found in control experiments, attesting the selectivity of the probe. All examined models greatly", "start_char_pos": 640, "end_char_pos": 991, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "A", "before": null, "after": ", highlighting substantial room for future progress", "start_char_pos": 1026, "end_char_pos": 1026, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "coherence"]}, {"type": "R", "before": ", suggesting that some physical properties are being captured by trained language models, and highlighting large room for future progress", "after": "and its materialization in contextual language models", "start_char_pos": 1104, "end_char_pos": 1241, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}], "sents_char_pos": [0, 149, 314, 458, 641, 768, 935, 1028], "domain": "arxiv"} |
{"doc_id": "2005.00619", "revision_depth": 3, "before_revision": "While large-scale contextual language models have enjoyed great success recently, much remains to be understood about what is encoded in their representations. In this work, we characterize how contextual representations of concrete nouns extracted by trained language models relate to the physical properties of the objects they refer to. Our approach uses a probing model that examines how effective these language representations are in discerning between different visual representations. We show that many recent language models yield representations that are useful in retrieving semantically aligned image patches , and explore the role of context in this process. Much weaker results are found in control experiments, attesting the selectivity of the probe. All examined models greatly under-perform humans in retrieval, highlighting substantial room for future progress. Altogether, our findings shed new empirical insights on language grounding and its materialization in contextual language models.", "after_revision": "The success of large-scale contextual language models has attracted great interest in probing what is encoded in their representations. In this work, we consider a new question: to what extent contextual representations of concrete nouns are aligned with corresponding visual representations? We design a probing model that evaluates how effective are text-only representations in distinguishing between matching and non-matching visual representations. Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories. Moreover, they are effective in retrieving specific instances of image patches; textual context plays an important role in this process. Visually grounded language models slightly outperform text-only language models in instance retrieval, but greatly under-perform humans . We hope our analyses inspire future research in understanding and improving the visual capabilities of language models.", "edit_actions": [{"type": "R", "before": "While", "after": "The success of", "start_char_pos": 0, "end_char_pos": 5, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": "have enjoyed great success recently, much remains to be understood about", "after": "has attracted great interest in probing", "start_char_pos": 45, "end_char_pos": 117, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "R", "before": "characterize how", "after": "consider a new question: to what extent", "start_char_pos": 177, "end_char_pos": 193, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "R", "before": "extracted by trained language models relate to the physical properties of the objects they refer to. Our approach uses", "after": "are aligned with corresponding visual representations? 
We design", "start_char_pos": 239, "end_char_pos": 357, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}, {"type": "R", "before": "examines how effective these language representations are in discerning between different", "after": "evaluates how effective are text-only representations in distinguishing between matching and non-matching", "start_char_pos": 379, "end_char_pos": 468, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "R", "before": "We show that many recent language models yield representations that are useful in retrieving semantically aligned image patches , and explore the role of context", "after": "Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories. Moreover, they are effective in retrieving specific instances of image patches; textual context plays an important role", "start_char_pos": 493, "end_char_pos": 654, "major_intent": "style", "raw_intents": ["style", "style", "style"]}, {"type": "R", "before": "Much weaker results are found in control experiments, attesting the selectivity of the probe. All examined models", "after": "Visually grounded language models slightly outperform text-only language models in instance retrieval, but", "start_char_pos": 672, "end_char_pos": 785, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "in retrieval, highlighting substantial room for future progress. Altogether, our findings shed new empirical insights on language grounding and its materialization in contextual", "after": ". We hope our analyses inspire future research in understanding and improving the visual capabilities of", "start_char_pos": 815, "end_char_pos": 992, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 159, 339, 492, 671, 765, 879], "domain": "arxiv"} |
{"doc_id": "2005.03954", "revision_depth": 2, "before_revision": "We focus on the study of conversational recommendation in the context of multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account user's interests and feedback. To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), which contains multiple sequential dialogs for every pair of a recommendation seeker (user) and a recommender (bot). In each dialog, the recommender proactively leads a multi-type dialog to approach recommendation targets and then makes multiple recommendations with rich interaction behavior. This dataset allows us to systematically investigate different parts of the overall problem, e.g., how to naturally lead a dialog, how to interact with users for recommendation. Finally we establish baseline results on DuRecDial for future studies. Dataset and codes are publicly available at URL", "after_revision": "We propose a new task of conversational recommendation over multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account user's interests and feedback. To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), which contains multiple sequential dialogs for every pair of a recommendation seeker (user) and a recommender (bot). In each dialog, the recommender proactively leads a multi-type dialog to approach recommendation targets and then makes multiple recommendations with rich interaction behavior. This dataset allows us to systematically investigate different parts of the overall problem, e.g., how to naturally lead a dialog, how to interact with users for recommendation. Finally we establish baseline results on DuRecDial for future studies. Dataset and codes are publicly available at URL", "edit_actions": [{"type": "R", "before": "focus on the study", "after": "propose a new task", "start_char_pos": 3, "end_char_pos": 21, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "in the context of", "after": "over", "start_char_pos": 55, "end_char_pos": 72, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "DuRecDial", "after": "DuRecDial", "start_char_pos": 369, "end_char_pos": 378, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 280, 533, 710, 888, 959], "domain": "arxiv"} |
{"doc_id": "2005.08081", "revision_depth": 1, "before_revision": "In sequence-to-sequence learning, the attention mechanism has been a great success in bridging the information between the encoder and the decoder. However, it is often overlooked that the decoder only has a single view of the source sequences, that is , the representations generated by the last encoder layer , which is supposed to be a global view of source sequences . Such implementation hinders the decoder from concrete, fine-grained , local source information . In this work, we explore to reuse the representations from different encoder layers for layer-wise cross-view decoding , that is, different views of the source sequences are presented to different decoder layers . We investigate multiple , representative strategies for cross-view coding, of which the granularity consistent attention (GCA) strategy proves the most efficient and effective in the experiments on neural machine translation task . Especially, GCA surpasses the previous state-of-the-art architecture on three machine translation datasets.", "after_revision": "In sequence-to-sequence learning, the attention mechanism has been a great success in bridging the information between the encoder and the decoder. However, it is often overlooked that the decoder obtains only a single view of the source sequences, i.e. , the representations generated by the last encoder layer . Although those representations are supposed to be a comprehensive, global view of source sequences , such practice keeps the decoders from concrete, fine-grained source information generated by other encoder layers . In this work, we propose to encourage the decoder to take the full advantage of the multi-level source representations for layer-wise cross-view decoding . Concretely, different views of the source sequences are presented to different decoder layers and multiple strategies are explored to route the source representations. In particular, the granularity consistent attention (GCA) strategy proves the most efficient and effective in the experiments on the neural machine translation task , surpassing the previous state-of-the-art architecture on three benchmark datasets.", "edit_actions": [{"type": "R", "before": "only has", "after": "obtains only", "start_char_pos": 197, "end_char_pos": 205, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "that is", "after": "i.e.", "start_char_pos": 245, "end_char_pos": 252, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "R", "before": ", which is", "after": ". Although those representations are", "start_char_pos": 311, "end_char_pos": 321, "major_intent": "coherence", "raw_intents": ["coherence", "meaning-changed", "coherence"]}, {"type": "A", "before": null, "after": "comprehensive,", "start_char_pos": 339, "end_char_pos": 339, "major_intent": "clarity", "raw_intents": ["coherence", "clarity", "clarity"]}, {"type": "R", "before": ". 
Such implementation hinders the decoder", "after": ", such practice keeps the decoders", "start_char_pos": 372, "end_char_pos": 413, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "fluency"]}, {"type": "R", "before": ", local source information", "after": "source information generated by other encoder layers", "start_char_pos": 442, "end_char_pos": 468, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "explore to reuse the representations from different encoder layers", "after": "propose to encourage the decoder to take the full advantage of the multi-level source representations", "start_char_pos": 488, "end_char_pos": 554, "major_intent": "clarity", "raw_intents": ["clarity", "meaning-changed", "clarity"]}, {"type": "R", "before": ", that is,", "after": ". Concretely,", "start_char_pos": 590, "end_char_pos": 600, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": ". We investigate multiple , representative strategies for cross-view coding, of which", "after": "and multiple strategies are explored to route the source representations. In particular,", "start_char_pos": 683, "end_char_pos": 768, "major_intent": "clarity", "raw_intents": ["clarity", "meaning-changed", "clarity"]}, {"type": "A", "before": null, "after": "the", "start_char_pos": 883, "end_char_pos": 883, "major_intent": "fluency", "raw_intents": ["fluency", "coherence", "fluency"]}, {"type": "R", "before": ". Especially, GCA surpasses", "after": ", surpassing", "start_char_pos": 916, "end_char_pos": 943, "major_intent": "clarity", "raw_intents": ["coherence", "clarity", "clarity"]}, {"type": "R", "before": "machine translation", "after": "benchmark", "start_char_pos": 996, "end_char_pos": 1015, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 147, 373, 470, 684], "domain": "arxiv"} |
{"doc_id": "2005.08081", "revision_depth": 2, "before_revision": "In sequence-to-sequence learning, the attention mechanism has been a great success in bridging the information between the encoderand the decoder. However, it is often overlooked that the decoder obtains only a single view of the source sequences, i.e., the representations generated by the last encoder layer . Although those representations are supposed to be a comprehensive, global view of source sequences, such practice keeps the decoders from concrete, fine-grained source information generated by other encoder layers . In this work, we propose to encourage the decoder to take the full advantage of the multi-level source representations for layer-wise cross-view decoding . Concretely, different views of the source sequences are presented to different decoder layers and multiple strategies are explored to route the source representations. In particular, the granularity consistent attention (GCA) strategy proves the most efficient and effective in the experiments on the neural machine translation task, surpassing the previous state-of-the-art architecture on three benchmark datasets .", "after_revision": "In sequence-to-sequence learning, the decoder relies on the attention mechanism to efficiently extract information from the encoder. While it is common practice to draw information from only the last encoder layer, recent work has proposed to use representations from different encoder layers for diversified levels of information. Nonetheless, the decoder still obtains only a single view of the source sequences, which might lead to insufficient training of the encoder layer stack due to the hierarchy bypassing problem . In this work, we propose layer-wise cross-view decoding , where for each decoder layer, together with the representations from the last encoder layer, which serve as a global view, those from other encoder layers are supplemented for a stereoscopic view of the source sequences. Systematic experiments show that we successfully address the hierarchy bypassing problem and substantially improve the performance of sequence-to-sequence learning with deep representations on diverse tasks .", "edit_actions": [{"type": "R", "before": "attention mechanism has been a great success in bridging the information between the encoderand the decoder. However, it is often overlooked that the decoder", "after": "decoder relies on the attention mechanism to efficiently extract information from the encoder. While it is common practice to draw information from only the last encoder layer, recent work has proposed to use representations from different encoder layers for diversified levels of information. Nonetheless, the decoder still", "start_char_pos": 38, "end_char_pos": 195, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "i.e., the representations generated by the last encoder layer . 
Although those representations are supposed to be a comprehensive, global view of source sequences, such practice keeps the decoders from concrete, fine-grained source information generated by other encoder layers", "after": "which might lead to insufficient training of the encoder layer stack due to the hierarchy bypassing problem", "start_char_pos": 248, "end_char_pos": 525, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "clarity"]}, {"type": "D", "before": "to encourage the decoder to take the full advantage of the multi-level source representations for", "after": null, "start_char_pos": 553, "end_char_pos": 650, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "R", "before": ". Concretely, different views of the source sequences are presented to different decoder layers and multiple strategies are explored to route the source representations. In particular, the granularity consistent attention (GCA) strategy proves the most efficient and effective in the experiments on the neural machine translation task, surpassing the previous state-of-the-art architecture on three benchmark datasets", "after": ", where for each decoder layer, together with the representations from the last encoder layer, which serve as a global view, those from other encoder layers are supplemented for a stereoscopic view of the source sequences. Systematic experiments show that we successfully address the hierarchy bypassing problem and substantially improve the performance of sequence-to-sequence learning with deep representations on diverse tasks", "start_char_pos": 682, "end_char_pos": 1099, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "clarity", "meaning-changed"]}], "sents_char_pos": [0, 146, 311, 527, 851], "domain": "arxiv"} |
{"doc_id": "2005.08081", "revision_depth": 3, "before_revision": "In sequence-to-sequence learning, the decoder relies on the attention mechanism to efficiently extract information from the encoder. While it is common practice to draw information from only the last encoder layer, recent work has proposed to use representations from different encoder layers for diversified levels of information. Nonetheless, the decoder still obtains only a single view of the source sequences, which might lead to insufficient training of the encoder layer stack due to the hierarchy bypassing problem. In this work, we propose layer-wise cross-view decoding, where for each decoder layer, together with the representations from the last encoder layer, which serve as a global view, those from other encoder layers are supplemented for a stereoscopic view of the source sequences. Systematic experiments show that we successfully address the hierarchy bypassing problem and substantially improve the performance of sequence-to-sequence learning with deep representations on diverse tasks .", "after_revision": "In sequence-to-sequence learning, e.g., natural language generation, the decoder relies on the attention mechanism to efficiently extract information from the encoder. While it is common practice to draw information from only the last encoder layer, recent work has proposed to use representations from different encoder layers for diversified levels of information. Nonetheless, the decoder still obtains only a single view of the source sequences, which might lead to insufficient training of the encoder layer stack due to the hierarchy bypassing problem. In this work, we propose layer-wise multi-view decoding, where for each decoder layer, together with the representations from the last encoder layer, which serve as a global view, those from other encoder layers are supplemented for a stereoscopic view of the source sequences. Systematic experiments and analyses show that we successfully address the hierarchy bypassing problem and substantially improve the performance of sequence-to-sequence learning with deep representations on diverse tasks , i.e., machine translation, abstractive summarization and image captioning. In particular, our approach surpasses the previous state-of-the-art models on three benchmark machine translation datasets .", "edit_actions": [{"type": "A", "before": null, "after": "e.g., natural language generation,", "start_char_pos": 34, "end_char_pos": 34, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "cross-view", "after": "multi-view", "start_char_pos": 561, "end_char_pos": 571, "major_intent": "clarity", "raw_intents": ["style", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "and analyses", "start_char_pos": 826, "end_char_pos": 826, "major_intent": "coherence", "raw_intents": ["meaning-changed", "coherence", "coherence"]}, {"type": "A", "before": null, "after": ", i.e., machine translation, abstractive summarization and image captioning. In particular, our approach surpasses the previous state-of-the-art models on three benchmark machine translation datasets", "start_char_pos": 1011, "end_char_pos": 1011, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 133, 332, 524, 802], "domain": "arxiv"} |
{"doc_id": "2005.14108", "revision_depth": 1, "before_revision": "Deep leaning models have been used widely for various purposes in recent years in object recognition, self-driving cars, face recognition, speech recognition, sentiment analysis and many others. However, in recent years it has been shown that these models possess weakness to noises which forces the model to misclassify. This issue has been studied profoundly in image and audio domain. Very little has been studied on this issue with respect to textual data. Even less survey on this topic has been performed to understand different types of attacks and defense techniques. In this manuscript we accumulated and analyzed different attacking techniques , various defense models on how to overcome this issue in order to provide a more comprehensive idea. Later we point out some of the interesting findings of all papers and challenges that need to be overcome in order to move forward in this field.", "after_revision": "Deep learning models have been used widely for various purposes in recent years in object recognition, self-driving cars, face recognition, speech recognition, sentiment analysis , and many others. However, in recent years it has been shown that these models possess weakness to noises which force the model to misclassify. This issue has been studied profoundly in the image and audio domain. Very little has been studied on this issue concerning textual data. Even less survey on this topic has been performed to understand different types of attacks and defense techniques. In this manuscript , we accumulated and analyzed different attacking techniques and various defense models to provide a more comprehensive idea. Later we point out some of the interesting findings of all papers and challenges that need to be overcome to move forward in this field.", "edit_actions": [{"type": "R", "before": "leaning", "after": "learning", "start_char_pos": 5, "end_char_pos": 12, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 178, "end_char_pos": 178, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "forces", "after": "force", "start_char_pos": 290, "end_char_pos": 296, "major_intent": "fluency", "raw_intents": ["clarity", "fluency", "fluency"]}, {"type": "A", "before": null, "after": "the", "start_char_pos": 365, "end_char_pos": 365, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "with respect to", "after": "concerning", "start_char_pos": 433, "end_char_pos": 448, "major_intent": "clarity", "raw_intents": ["coherence", "clarity", "clarity"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 597, "end_char_pos": 597, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": ",", "after": "and", "start_char_pos": 657, "end_char_pos": 658, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "on how to overcome this issue in order to", "after": "to", "start_char_pos": 682, "end_char_pos": 723, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "in order", "after": null, "start_char_pos": 865, "end_char_pos": 873, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}], "sents_char_pos": [0, 195, 322, 389, 462, 577, 758], "domain": "arxiv"} |
{"doc_id": "2006.00632", "revision_depth": 1, "before_revision": "Deep neural networks excel at learning from labeled data and achieve state-of-the-art results on a wide array of Natural Language Processing tasks. In contrast, learning from unlabeled data, especially under domain shift, remains a challenge. Motivated by the latest advances, in this survey we review neural unsupervised domain adaptation techniques which do not require labeled target domain data. This is a more challenging yet a more widely applicable setup. We outline methods, from early approaches in traditional non-neural methods to pre-trained model transfer. We also revisit the notion of domain, and we uncover a bias in the type of Natural Language Processing tasks which received most attention. Lastly, we outline future directions, particularly the broader need for out-of-distribution generalization of future intelligent NLP.", "after_revision": "Deep neural networks excel at learning from labeled data and achieve state-of-the-art resultson a wide array of Natural Language Processing tasks. In contrast, learning from unlabeled data, especially under domain shift, remains a challenge. Motivated by the latest advances, in this survey we review neural unsupervised domain adaptation techniques which do not require labeled target domain data. This is a more challenging yet a more widely applicable setup. We outline methods, from early traditional non-neural methods to pre-trained model transfer. We also revisit the notion of domain, and we uncover a bias in the type of Natural Language Processing tasks which received most attention. Lastly, we outline future directions, particularly the broader need for out-of-distribution generalization of future NLP.", "edit_actions": [{"type": "R", "before": "results on", "after": "resultson", "start_char_pos": 86, "end_char_pos": 96, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "others"]}, {"type": "D", "before": "approaches in", "after": null, "start_char_pos": 494, "end_char_pos": 507, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "intelligent", "after": null, "start_char_pos": 827, "end_char_pos": 838, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}], "sents_char_pos": [0, 147, 242, 399, 462, 569, 709], "domain": "arxiv"} |
{"doc_id": "2008.12988", "revision_depth": 1, "before_revision": "We give a general framework for inference in spanning tree models. We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models. Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms. These algorithms are easy to implement, given the prevalence of automatic differentiation software. We motivate the development of our framework with several cautionary tales of previous re-search , which has developed numerous less-than-optimal algorithms for computing expectations and their gradients. We demonstrate how our framework efficiently computes several quantities with known algorithms, including the expected attachment score, entropy, and generalized expectation criteria. As a bonus, we give algorithms for quantities that are missing in the literature, including the KL divergence. In all cases, our approach matches the efficiency of existing algorithms and, in several cases, reducesthe runtime complexity by a factor (or two) of the sentence length. We validate the implementation of our framework through runtime experiments. We find our algorithms are upto 12 and 26 times faster than previous algorithms for computing the Shannon entropy and the gradient of the generalized expectation objective, respectively.", "after_revision": "We give a general framework for inference in spanning tree models. We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models. Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms. These algorithms are easy to implement, given the prevalence of automatic differentiation software. We motivate the development of our framework with several cautionary tales of previous research , which has developed numerous less-than-optimal algorithms for computing expectations and their gradients. We demonstrate how our framework efficiently computes several quantities with known algorithms, including the expected attachment score, entropy, and generalized expectation criteria. As a bonus, we give algorithms for quantities that are missing in the literature, including the KL divergence. In all cases, our approach matches the efficiency of existing algorithms and, in several cases, reduces the runtime complexity by a factor (or two) of the sentence length. We validate the implementation of our framework through runtime experiments. We find our algorithms are up to 12 and 26 times faster than previous algorithms for computing the Shannon entropy and the gradient of the generalized expectation objective, respectively.", "edit_actions": [{"type": "R", "before": "re-search", "after": "research", "start_char_pos": 553, "end_char_pos": 562, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "reducesthe", "after": "reduces the", "start_char_pos": 1062, "end_char_pos": 1072, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "upto", "after": "up to", "start_char_pos": 1241, "end_char_pos": 1245, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}], "sents_char_pos": [0, 66, 233, 365, 465, 670, 854, 965, 1136, 1213], "domain": "arxiv"} |
{"doc_id": "2008.12988", "revision_depth": 2, "before_revision": "We give a general framework for inference in spanning tree models. We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models. Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms. These algorithms are easy to implement, given the prevalence of automatic differentiation software. We motivate the development of our framework with several cautionary tales of previous research, which has developed numerous less-than-optimal algorithms for computing expectations and their gradients. We demonstrate how our framework efficiently computes several quantities with known algorithms, including the expected attachment score, entropy, and generalized expectation criteria . As a bonus, we give algorithms for quantities that are missing in the literature, including the KL divergence . In all cases, our approach matches the efficiency of existing algorithms and, in several cases, reduces the runtime complexity by a factor (or two) of the sentence length. We validate the implementation of our framework through runtime experiments. We find our algorithms are up to 12 and 26 times faster than previous algorithms for computing the Shannon entropy and the gradient of the generalized expectation objective, respectively .", "after_revision": "We propose a general framework for computing expectations in edge-factored, non-projective spanning-tree models. Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms. We motivate the development of our framework with several cautionary tales of previous research, which has developed numerous inefficient algorithms for computing expectations and their gradients. We demonstrate how our framework efficiently computes several quantities with known algorithms, including the Shannon entropy, the expected attachment score, and the generalized expectation criterion . As a bonus, we give algorithms for quantities that are missing in the literature, including the gradient of entropy, the KL divergence, and the gradient of the KL divergence . In all cases, our approach matches the efficiency of existing algorithms and, in several cases, reduces the runtime complexity by a factor of the sentence length. We validate our framework through rigorous proofs of correctness and efficiency .", "edit_actions": [{"type": "R", "before": "give", "after": "propose", "start_char_pos": 3, "end_char_pos": 7, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "inference in spanning tree models. 
We propose unified algorithms for the important cases of first-order expectations and second-order expectations", "after": "computing expectations", "start_char_pos": 32, "end_char_pos": 178, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "These algorithms are easy to implement, given the prevalence of automatic differentiation software.", "after": null, "start_char_pos": 366, "end_char_pos": 465, "major_intent": "coherence", "raw_intents": ["clarity", "coherence", "coherence"]}, {"type": "R", "before": "cautionary tales", "after": "cautionary tales", "start_char_pos": 524, "end_char_pos": 540, "major_intent": "style", "raw_intents": ["style", "style", "coherence"]}, {"type": "R", "before": "less-than-optimal", "after": "inefficient", "start_char_pos": 592, "end_char_pos": 609, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "Shannon entropy, the", "start_char_pos": 779, "end_char_pos": 779, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "entropy, and generalized expectation criteria", "after": "and the generalized expectation criterion", "start_char_pos": 807, "end_char_pos": 852, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "KL divergence", "after": "gradient of entropy, the KL divergence, and the gradient of the KL divergence", "start_char_pos": 951, "end_char_pos": 964, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "D", "before": "(or two)", "after": null, "start_char_pos": 1106, "end_char_pos": 1114, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "the implementation of", "after": null, "start_char_pos": 1151, "end_char_pos": 1172, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "runtime experiments. We find our algorithms are up to 12 and 26 times faster than previous algorithms for computing the Shannon entropy and the gradient of the generalized expectation objective, respectively", "after": "rigorous proofs of correctness and efficiency", "start_char_pos": 1195, "end_char_pos": 1402, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 66, 233, 365, 465, 668, 854, 966, 1138, 1215], "domain": "arxiv"} |
{"doc_id": "2008.12988", "revision_depth": 3, "before_revision": "We propose a general framework for computing expectations in edge-factored, non-projective spanning-tree models. Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms. We motivate the development of our framework with several emph{cautionary tales} of previous research, which has developed numerous inefficient algorithms for computing expectations and their gradients. We demonstrate how our framework efficiently computes several quantities with known algorithms, including the Shannon entropy, the expected attachment score, and the generalized expectation criterion . As a bonus, we give algorithms for quantities that are missing in the literature, including the gradient of entropy, the KL divergence, and the gradient of the KL divergence . In all cases, our approach matches the efficiency of existing algorithms and, in several cases, reduces the runtime complexity by a factor of the sentence length. We validate our framework through rigorous proofs of correctness and efficiency .", "after_revision": "We give a general framework for inference in spanning tree models. We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models. Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms. These algorithms are easy to implement with or without automatic differentiation software. We motivate the development of our framework with several emph{cautionary tales} of previous research, which has developed numerous inefficient algorithms for computing expectations and their gradients. We demonstrate how our framework efficiently computes several quantities with known algorithms, including the expected attachment score, entropy, and generalized expectation criteria . As a bonus, we give algorithms for quantities that are missing in the literature, including the KL divergence . In all cases, our approach matches the efficiency of existing algorithms and, in several cases, reduces the runtime complexity by a factor of the sentence length. We validate the implementation of our framework through runtime experiments. We find our algorithms are up to 15 and 9 times faster than previous algorithms for computing the Shannon entropy and the gradient of the generalized expectation objective, respectively .", "edit_actions": [{"type": "R", "before": "propose", "after": "give", "start_char_pos": 3, "end_char_pos": 10, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "R", "before": "computing expectations", "after": "inference in spanning tree models. 
We propose unified algorithms for the important cases of first-order expectations and second-order expectations", "start_char_pos": 35, "end_char_pos": 57, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "clarity", "meaning-changed"]}, {"type": "A", "before": null, "after": "These algorithms are easy to implement with or without automatic differentiation software.", "start_char_pos": 245, "end_char_pos": 245, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "D", "before": "Shannon entropy, the", "after": null, "start_char_pos": 559, "end_char_pos": 579, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}, {"type": "R", "before": "and the generalized expectation criterion", "after": "entropy, and generalized expectation criteria", "start_char_pos": 607, "end_char_pos": 648, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}, {"type": "R", "before": "gradient of entropy, the KL divergence, and the gradient of the KL divergence", "after": "KL divergence", "start_char_pos": 747, "end_char_pos": 824, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "the implementation of", "start_char_pos": 1002, "end_char_pos": 1002, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}, {"type": "R", "before": "rigorous proofs of correctness and efficiency", "after": "runtime experiments. We find our algorithms are up to 15 and 9 times faster than previous algorithms for computing the Shannon entropy and the gradient of the generalized expectation objective, respectively", "start_char_pos": 1025, "end_char_pos": 1070, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 112, 244, 448, 650, 826, 989], "domain": "arxiv"} |
{"doc_id": "2009.05169", "revision_depth": 2, "before_revision": "We propose a novel method to sparsify attention in the Transformer model by learning to select the most-informative token representations during the training process, thus focusing on task-specific parts of the input. A reduction of quadratic time and memory complexity to sublinear was achieved due to a robust differentiable top-k operator. For example, our experiments on a challenging summarization task of long documents show that our method is much faster and up to 16 times more memory efficient while significantly outperforming both dense and state-of-the-art sparse transformer models. The method can be effortlessly applied to many models used in NLP and CV, simultaneously with other improvements since representation pooling addresses a different aspect of the attention's complexity problem .", "after_revision": "We propose a novel method to sparsify attention in the Transformer model by learning to select the most-informative token representations during the training process, thus focusing on task-specific parts of the input. A reduction of quadratic time and memory complexity to sublinear was achieved due to a robust trainable top-k operator. For example, our experiments on a challenging summarization task of long documents show that our method is over 3 times faster and up to 16 times more memory efficient while significantly outperforming both dense and state-of-the-art sparse transformer models. The method can be effortlessly applied to many models used in NLP and CV, simultaneously with other improvements .", "edit_actions": [{"type": "R", "before": "differentiable", "after": "trainable", "start_char_pos": 312, "end_char_pos": 326, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "much", "after": "over 3 times", "start_char_pos": 450, "end_char_pos": 454, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "D", "before": "since representation pooling addresses a different aspect of the attention's complexity problem", "after": null, "start_char_pos": 709, "end_char_pos": 804, "major_intent": "clarity", "raw_intents": ["coherence", "clarity", "clarity"]}], "sents_char_pos": [0, 217, 342, 595], "domain": "arxiv"} |
{"doc_id": "2009.06891", "revision_depth": 1, "before_revision": "Inspired by Google's Neural Machine Translation (NMT) \\mbox{%DIFAUXCMD Wu2016Google that models the one-to-one alignment in translation tasks with an optimal uniform attention distribution during the inference, this study proposes an attention-aware inference algorithm for Neural Abstractive Summarization (NAS) to regulate generated summaries to attend to source paragraphs/sentences with the optimal coverage. Unlike NMT, the attention-aware inference of NAS requires the prediction of the optimal attention distribution. Therefore, an attention-prediction model is constructed to learn the dependency between attention weights and sources. To apply the attention-aware inference on multi-document summarization, a Hierarchical Transformer (HT) is developed to accept lengthy inputs at the same time project cross-document information. Experiments on WikiSum \\mbox{%DIFAUXCMD liu2018generating By refining the regular beam search with the attention-aware inference , significant improvements on the quality of summaries could be further observed. Last but not the least, the attention-aware inference could be adopted to single-document summarization with straightforward modifications according to the model architecture .", "after_revision": "Inspired by Google's Neural Machine Translation (NMT) that models the one-to-one alignment in translation tasks with an uniform attention distribution during the inference, this study proposes an attention-aware inference algorithm for Neural Abstractive Summarization (NAS) to regulate generated summaries to attend to source contents with the optimal coverage. Unlike NMT, NAS is not based on one-to-one transformation. Instead, its attention distribution for the input should be irregular and depend on the content layout of the source documents. To address this matter, we construct an attention-prediction model to learn the dependency between the optimal attention distribution and the source. By refining the vanilla beam search with the attention-aware mechanism , significant improvements on the quality of summaries could be observed. Last but not the least, the attention-aware inference has strong universality that can be easily adopted to different hierarchical summarization models to promote the models' performance .", "edit_actions": [{"type": "D", "before": "\\mbox{%DIFAUXCMD Wu2016Google", "after": null, "start_char_pos": 54, "end_char_pos": 83, "major_intent": "style", "raw_intents": ["style", "style", "style"]}, {"type": "D", "before": "optimal", "after": null, "start_char_pos": 150, "end_char_pos": 157, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "paragraphs/sentences", "after": "contents", "start_char_pos": 365, "end_char_pos": 385, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "the attention-aware inference of NAS requires the prediction of the optimal attention distribution. Therefore,", "after": "NAS is not based on one-to-one transformation. Instead, its attention distribution for the input should be irregular and depend on the content layout of the source documents. 
To address this matter, we construct", "start_char_pos": 425, "end_char_pos": 535, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "is constructed", "after": null, "start_char_pos": 566, "end_char_pos": 580, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": "attention weights and sources. To apply the attention-aware inference on multi-document summarization, a Hierarchical Transformer (HT) is developed to accept lengthy inputs at the same time project cross-document information. Experiments on WikiSum \\mbox{%DIFAUXCMD liu2018generating", "after": "the optimal attention distribution and the source.", "start_char_pos": 613, "end_char_pos": 896, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "regular", "after": "vanilla", "start_char_pos": 913, "end_char_pos": 920, "major_intent": "style", "raw_intents": ["style", "style", "style"]}, {"type": "R", "before": "inference", "after": "mechanism", "start_char_pos": 958, "end_char_pos": 967, "major_intent": "style", "raw_intents": ["style", "style", "style"]}, {"type": "D", "before": "further", "after": null, "start_char_pos": 1032, "end_char_pos": 1039, "major_intent": "style", "raw_intents": ["style", "style", "style"]}, {"type": "R", "before": "could be adopted to single-document summarization with straightforward modifications according to the model architecture", "after": "has strong universality that can be easily adopted to different hierarchical summarization models to promote the models' performance", "start_char_pos": 1104, "end_char_pos": 1224, "major_intent": "style", "raw_intents": ["style", "style", "style"]}], "sents_char_pos": [0, 412, 524, 643, 838, 1049], "domain": "arxiv"} |
{"doc_id": "2009.06891", "revision_depth": 2, "before_revision": "Inspired by Google's Neural Machine Translation (NMT) that models the one-to-one alignment in translation tasks with an uniform attention distribution during the inference, this study proposes an attention-aware inference algorithm for Neural Abstractive Summarization (NAS) to regulate generated summaries to attend to source contents with the optimal coverage. Unlike NMT, NAS is not based on one-to-one transformation. Instead, its attention distribution for the input should be irregular and depend on the content layout of the source documents. To address this matter, we construct an attention-prediction model to learn the dependency between the optimal attention distribution and the source . By refining the vanilla beam searchwith the attention-aware mechanism, significant improvements on the quality of summaries could be observed. Last but not the least, the attention-aware inference has strong universality that can be easily adopted to different hierarchical summarization models to promote the models' performance .", "after_revision": "This paper proposes a novel inference algorithm for seq-to-seq models. It corrects beam search step-by-step via the optimal attention distribution to make the generated text attend to source tokens in a controlled way. Experiments show the proposed attention-aware inference produces summaries rather differently from the beam search, and achieves promising improvements of higher scores and greater conciseness. The algorithm is also proven robust as it remains to outperform beam search significantly even with corrupted attention distributions .", "edit_actions": [{"type": "R", "before": "Inspired by Google's Neural Machine Translation (NMT) that models the one-to-one alignment in translation tasks with an uniform attention distribution during the inference, this study proposes an attention-aware", "after": "This paper proposes a novel", "start_char_pos": 0, "end_char_pos": 211, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "Neural Abstractive Summarization (NAS) to regulate generated summaries to attend to source contents with the optimal coverage. Unlike NMT, NAS is not based on one-to-one transformation. Instead, its attention distribution for the input should be irregular and depend on the content layout of the source documents. To address this matter, we construct an attention-prediction model to learn the dependency between the optimal attention distribution and the source . By refining the vanilla beam searchwith the", "after": "seq-to-seq models. It corrects beam search step-by-step via the optimal attention distribution to make the generated text attend to source tokens in a controlled way. Experiments show the proposed", "start_char_pos": 236, "end_char_pos": 744, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "mechanism, significant improvements on the quality of summaries could be observed. Last but not the least, the attention-aware inference has strong universality that can be easily adopted to different hierarchical summarization models to promote the models' performance", "after": "inference produces summaries rather differently from the beam search, and achieves promising improvements of higher scores and greater conciseness. 
The algorithm is also proven robust as it remains to outperform beam search significantly even with corrupted attention distributions", "start_char_pos": 761, "end_char_pos": 1030, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 362, 421, 549, 700, 843], "domain": "arxiv"} |
{"doc_id": "2009.06891", "revision_depth": 3, "before_revision": "This paper proposes a novel inference algorithm for seq-to-seq models. It corrects beam search step-by-step via the optimal attention distribution to make the generated text attend to source tokens in a controlled way. Experiments show the proposed attention-aware inferenceproduces summaries rather differently from the beam search , and achieves promising improvements of higher scores and greater conciseness. The algorithm is also proven robust as it remains to outperform beam search significantly even with corrupted attention distributions .", "after_revision": "This study develops a calibrated beam-based algorithm with global awareness for neural abstractive summarization, aiming to improve the local optimality problem of the original beam search in a rigorous way. Specifically, a novel global protocol is proposed based on the attention distribution to stipulate how a global optimal hypothesis should attend to the source. A global scoring function is then developed to regulate beam search to generate summaries in a more near-global optimal fashion. This novel design enjoys a distinctive property, i.e. the global attention distribution could be predicted before inference, enabling stepwise improvements on the beam search through the global scoring function. Extensive experiments on 9 datasets show that the global-aware inference significantly improves state-of-the-art summarization models even using empirical hyper-parameters. The algorithm is also proven robust as it remains to generate meaningful texts with corrupted attention distributions . The codes and a comprehensive set of examples are available .", "edit_actions": [{"type": "R", "before": "paper proposes a novel inference algorithm for seq-to-seq models. It corrects beam search step-by-step via the optimal", "after": "study develops a calibrated beam-based algorithm with global awareness for neural abstractive summarization, aiming to improve the local optimality problem of the original beam search in a rigorous way. Specifically, a novel global protocol is proposed based on the", "start_char_pos": 5, "end_char_pos": 123, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "make the generated text attend to source tokens in a controlled way. Experiments show the proposed attention-aware inferenceproduces summaries rather differently from", "after": "stipulate how a global optimal hypothesis should attend to the source. A global scoring function is then developed to regulate beam search to generate summaries in a more near-global optimal fashion. This novel design enjoys a distinctive property, i.e. the global attention distribution could be predicted before inference, enabling stepwise improvements on", "start_char_pos": 150, "end_char_pos": 316, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": ", and achieves promising improvements of higher scores and greater conciseness.", "after": "through the global scoring function. 
Extensive experiments on 9 datasets show that the global-aware inference significantly improves state-of-the-art summarization models even using empirical hyper-parameters.", "start_char_pos": 333, "end_char_pos": 412, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "outperform beam search significantly even", "after": "generate meaningful texts", "start_char_pos": 466, "end_char_pos": 507, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "A", "before": null, "after": ". The codes and a comprehensive set of examples are available", "start_char_pos": 547, "end_char_pos": 547, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 70, 218, 412], "domain": "arxiv"} |
{"doc_id": "2009.08553", "revision_depth": 2, "before_revision": "Conventional sparse retrieval methods such as TF-IDF and BM25 are simple and efficient, but solely rely on lexical overlap without semantic matching. Recent dense retrieval methods learn latent representations to tackle the lexical mismatch problem, while being more computationally expensive and insufficient for exact matching as they embed the text sequence into a single vector with limited capacity. In this paper, we present Generation-Augmented Retrieval (GAR) , a query expansion method that augments a query with relevant contexts through text generation . We demonstrate on open-domain question answering that the generated contexts significantly enrich the semantics of the queries and thus GAR with sparse representations (BM25) achieves comparable or better performance than the state-of-the-art dense methods such as DPR \\mbox{%DIFAUXCMD karpukhin2020dense . We show that generating various contexts of a query is beneficial as fusing their results consistently yields better retrieval accuracy. Moreover, as sparse and dense representations are often complementary, GAR can be easily combined with DPR to achieve even better performance. Furthermore, GAR achieves the state-of-the-art performance on the Natural Questions and TriviaQA datasets under the extractive setting when equipped with an extractive reader, and consistently outperforms other retrieval methods when the same generative reader is used.", "after_revision": "We propose Generation-Augmented Retrieval (GAR) for answering open-domain questions, which augments a query through text generation of heuristically discovered relevant contexts without external resources as supervision . We demonstrate that the generated contexts substantially enrich the semantics of the queries and GAR with sparse representations (BM25) achieves comparable or better performance than state-of-the-art dense retrieval methods such as DPR . We show that generating diverse contexts for a query is beneficial as fusing their results consistently yields better retrieval accuracy. Moreover, as sparse and dense representations are often complementary, GAR can be easily combined with DPR to achieve even better performance. GAR achieves state-of-the-art performance on Natural Questions and TriviaQA datasets under the extractive QA setup when equipped with an extractive reader, and consistently outperforms other retrieval methods when the same generative reader is used.", "edit_actions": [{"type": "R", "before": "Conventional sparse retrieval methods such as TF-IDF and BM25 are simple and efficient, but solely rely on lexical overlap without semantic matching. Recent dense retrieval methods learn latent representations to tackle the lexical mismatch problem, while being more computationally expensive and insufficient for exact matching as they embed the text sequence into a single vector with limited capacity. 
In this paper, we present", "after": "We propose", "start_char_pos": 0, "end_char_pos": 430, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}, {"type": "R", "before": ", a query expansion method that", "after": "for answering open-domain questions, which", "start_char_pos": 468, "end_char_pos": 499, "major_intent": "clarity", "raw_intents": ["clarity", "meaning-changed", "clarity"]}, {"type": "D", "before": "with relevant contexts", "after": null, "start_char_pos": 517, "end_char_pos": 539, "major_intent": "coherence", "raw_intents": ["clarity", "coherence", "coherence"]}, {"type": "A", "before": null, "after": "of heuristically discovered relevant contexts without external resources as supervision", "start_char_pos": 564, "end_char_pos": 564, "major_intent": "clarity", "raw_intents": ["clarity", "meaning-changed", "clarity"]}, {"type": "D", "before": "on open-domain question answering", "after": null, "start_char_pos": 582, "end_char_pos": 615, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "clarity"]}, {"type": "R", "before": "significantly", "after": "substantially", "start_char_pos": 644, "end_char_pos": 657, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "D", "before": "thus", "after": null, "start_char_pos": 698, "end_char_pos": 702, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "clarity"]}, {"type": "D", "before": "the", "after": null, "start_char_pos": 789, "end_char_pos": 792, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "clarity"]}, {"type": "A", "before": null, "after": "retrieval", "start_char_pos": 816, "end_char_pos": 816, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}, {"type": "D", "before": "\\mbox{%DIFAUXCMD karpukhin2020dense", "after": null, "start_char_pos": 837, "end_char_pos": 872, "major_intent": "others", "raw_intents": ["others", "others", "others"]}, {"type": "R", "before": "various contexts of", "after": "diverse contexts for", "start_char_pos": 899, "end_char_pos": 918, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "Furthermore, GAR achieves the", "after": "GAR achieves", "start_char_pos": 1155, "end_char_pos": 1184, "major_intent": "coherence", "raw_intents": ["clarity", "coherence", "coherence"]}, {"type": "D", "before": "the", "after": null, "start_char_pos": 1217, "end_char_pos": 1220, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}, {"type": "R", "before": "setting", "after": "QA setup", "start_char_pos": 1282, "end_char_pos": 1289, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}], "sents_char_pos": [0, 149, 404, 566, 874, 1011, 1154]} |
{"doc_id": "2011.10896", "revision_depth": 2, "before_revision": "Hardware-agnostic programming with high performance portability will be the bedrock for realizing the ubiquitous adoption of emerging accelerator technologies in future heterogeneous high-performance computing (HPC) systems, which is the key to achieving the next level of HPC performance on an expanding accelerator landscape. In this paper , we present HALO 1.0, an open-ended extensible multi-agent software framework that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application host codes across heterogeneous accelerator resources . The experiment results of evaluating eight widely used HPC subroutines based on Intel Xeon E5-2620 v4 CPUs, Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows for a unified control flow for the host program to run across all the computing devices with a consistently maximum performance portability score of 1.0 , which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score. of the documentation of their work .", "after_revision": "This paper presents HALO 1.0, an open-ended extensible multi-agent software framework that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles . HALO implements a novel compute-centric message passing interface (C^2MPI) specification for enabling the performance-portable execution of a hardware-agnostic host application across heterogeneous accelerators . The experiment results of evaluating eight widely used HPC subroutines based on Intel Xeon E5-2620 CPUs, Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows for a unified control flow for host programs to run across all the computing devices with a consistently top performance portability score , which is up to five orders of magnitude higher than the OpenCL-based solution .", "edit_actions": [{"type": "R", "before": "Hardware-agnostic programming with high performance portability will be the bedrock for realizing the ubiquitous adoption of emerging accelerator technologies in future heterogeneous high-performance computing (HPC) systems, which is the key to achieving the next level of HPC performance on an expanding accelerator landscape. In this paper , we present", "after": "This paper presents", "start_char_pos": 0, "end_char_pos": 354, "major_intent": "clarity", "raw_intents": ["coherence", "clarity", "clarity"]}, {"type": "R", "before": "and", "after": ". 
HALO implements", "start_char_pos": 517, "end_char_pos": 520, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}, {"type": "R", "before": "portable and performance-optimized execution of", "after": "performance-portable execution of a", "start_char_pos": 611, "end_char_pos": 658, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "application host codes across heterogeneous accelerator resources", "after": "host application across heterogeneous accelerators", "start_char_pos": 677, "end_char_pos": 742, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "v4", "after": null, "start_char_pos": 844, "end_char_pos": 846, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "R", "before": "the host program", "after": "host programs", "start_char_pos": 971, "end_char_pos": 987, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "maximum", "after": "top", "start_char_pos": 1048, "end_char_pos": 1055, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "D", "before": "of 1.0", "after": null, "start_char_pos": 1086, "end_char_pos": 1092, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "R", "before": "2x-861,883x", "after": "up to five orders of magnitude", "start_char_pos": 1104, "end_char_pos": 1115, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "that suffers from an unstably low performance portability score. of the documentation of their work", "after": null, "start_char_pos": 1154, "end_char_pos": 1253, "major_intent": "clarity", "raw_intents": ["coherence", "clarity", "clarity"]}], "sents_char_pos": [0, 327, 744, 852], "domain": "arxiv"} |
{"doc_id": "2012.15859", "revision_depth": 2, "before_revision": "Natural Language Processing (NLP) systems learn harmful societal biases that cause them to widely proliferate inequality as they are deployed in more and more situations. To address and combat this , the NLP community relies on a variety of metrics to identify and quantify bias in black-box modelsand to guide efforts at debiasing . Some of these metrics are intrinsic, and are measured in word embedding spaces, and some are extrinsic, which measure the bias present downstream in the tasks that the word embeddings are plugged into. This research examines whether easy-to-measure intrinsic metrics correlate well to real world extrinsic metrics . We measure both intrinsic and extrinsic bias across hundreds of trained models covering different tasks and experimental conditions and find that there is no reliable correlation between these metrics that holds in all scenarios across tasks and languages. We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community increase effort into making downstream measurement more feasible via creation of additional challenge sets and annotated test data. We additionally release code, a new intrinsic metric, and an annotated test set for gender bias for hatespeech .", "after_revision": "Natural Language Processing (NLP) systems learn harmful societal biases that cause them to amplify inequality as they are deployed in more and more situations. To guide efforts at debiasing these systems , the NLP community relies on a variety of metrics that quantify bias in models . Some of these metrics are intrinsic, measuring bias in word embedding spaces, and some are extrinsic, measuring bias in downstream tasks that the word embeddings enable. Do these intrinsic and extrinsic metrics correlate with each other? We compare intrinsic and extrinsic metrics across hundreds of trained models covering different tasks and experimental conditions . Our results show no reliable correlation between these metrics that holds in all scenarios across tasks and languages. We urge researchers working on debiasing to focus on extrinsic measures of bias, and to make using these measures more feasible via creation of new challenge sets and annotated test data. 
To aid this effort, we release code, a new intrinsic metric, and an annotated test set focused on gender bias in hate speech .", "edit_actions": [{"type": "R", "before": "widely proliferate", "after": "amplify", "start_char_pos": 91, "end_char_pos": 109, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "address and combat this", "after": "guide efforts at debiasing these systems", "start_char_pos": 174, "end_char_pos": 197, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "R", "before": "to identify and", "after": "that", "start_char_pos": 249, "end_char_pos": 264, "major_intent": "fluency", "raw_intents": ["clarity", "fluency", "fluency"]}, {"type": "R", "before": "black-box modelsand to guide efforts at debiasing", "after": "models", "start_char_pos": 282, "end_char_pos": 331, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "and are measured", "after": "measuring bias", "start_char_pos": 371, "end_char_pos": 387, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "which measure the bias present downstream in the", "after": "measuring bias in downstream", "start_char_pos": 438, "end_char_pos": 486, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "are plugged into. This research examines whether easy-to-measure intrinsic metrics correlate well to real world extrinsic metrics . We measure both", "after": "enable. Do these intrinsic and extrinsic metrics correlate with each other? We compare", "start_char_pos": 518, "end_char_pos": 665, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "R", "before": "bias", "after": "metrics", "start_char_pos": 690, "end_char_pos": 694, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "and find that there is", "after": ". Our results show", "start_char_pos": 782, "end_char_pos": 804, "major_intent": "coherence", "raw_intents": ["coherence", "style", "coherence"]}, {"type": "R", "before": "advise that efforts to debias embedding spaces be always also paired with measurement of downstream model", "after": "urge researchers working on debiasing to focus on extrinsic measures of", "start_char_pos": 910, "end_char_pos": 1015, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "suggest that that community increase effort into making downstream measurement", "after": "to make using these measures", "start_char_pos": 1026, "end_char_pos": 1104, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "additional", "after": "new", "start_char_pos": 1135, "end_char_pos": 1145, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "R", "before": "We additionally", "after": "To aid this effort, we", "start_char_pos": 1186, "end_char_pos": 1201, "major_intent": "clarity", "raw_intents": ["style", "clarity", "clarity"]}, {"type": "R", "before": "for gender bias for hatespeech", "after": "focused on gender bias in hate speech", "start_char_pos": 1266, "end_char_pos": 1296, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "fluency"]}], "sents_char_pos": [0, 170, 535, 649, 906, 1185], "domain": "arxiv"} |
{"doc_id": "2103.06874", "revision_depth": 1, "before_revision": "Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences--without explicit tokenization or vocabulary--and a pre-training strategy with soft inductive biases in place of hard token boundaries. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes con-text . CANINE outperforms a comparable mBERT model by >= 1 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28\\% fewer model parameters.", "after_revision": "Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy with soft inductive biases in place of hard token boundaries. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context . CANINE outperforms a comparable mBERT model by >= 1 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28\\% fewer model parameters.", "edit_actions": [{"type": "R", "before": "sequences--without", "after": "sequences, without", "start_char_pos": 512, "end_char_pos": 530, "major_intent": "coherence", "raw_intents": ["coherence", "clarity", "coherence"]}, {"type": "R", "before": "vocabulary--and", "after": "vocabulary, and", "start_char_pos": 556, "end_char_pos": 571, "major_intent": "coherence", "raw_intents": ["coherence", "fluency", "coherence"]}, {"type": "R", "before": "con-text", "after": "context", "start_char_pos": 834, "end_char_pos": 842, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}], "sents_char_pos": [0, 162, 424, 657, 844]} |
{"doc_id": "2103.14972", "revision_depth": 1, "before_revision": "This paper describes a corpus annotation process to support the identification of hate speech and offensive language in social media. The corpus was collected from Instagram pages of political personalities and manually annotated, being composed by 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments ), the level of the offense (highly offensive, moderately offensive and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religion intolerance, partyism, apology to the dictatorship, antisemitism and fat phobia). Each comment was annotated by three different annotators, which achieved high inter-annotator agreement .", "after_revision": "This paper describes a corpus annotation process to support the identification of hate speech and offensive language in social media. In addition, we provide the first robust corpus this kind for the Brazilian Portuguese language. The corpus was collected from Instagram pages of political personalities and manually annotated, being composed by 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive language ), the level of offense (highly offensive, moderately offensive and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religion intolerance, partyism, apology to the dictatorship, antisemitism and fat phobia). Each comment was annotated by three different annotators, which achieved high inter-annotator agreement . The proposed annotation process is also language and domain independent .", "edit_actions": [{"type": "A", "before": null, "after": "In addition, we provide the first robust corpus this kind for the Brazilian Portuguese language.", "start_char_pos": 134, "end_char_pos": 134, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "comments", "after": "language", "start_char_pos": 369, "end_char_pos": 377, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "the", "after": null, "start_char_pos": 394, "end_char_pos": 397, "major_intent": "fluency", "raw_intents": ["fluency", "clarity", "fluency"]}, {"type": "A", "before": null, "after": ". The proposed annotation process is also language and domain independent", "start_char_pos": 790, "end_char_pos": 790, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 133, 312, 685], "domain": "arxiv"} |
{"doc_id": "2103.14972", "revision_depth": 2, "before_revision": "This paper describes a corpus annotation process to support the identification of hate speech and offensive language in social media. In addition, we provide the first robust corpus this kind for the Brazilian Portuguese language. The corpus was collected from Instagram pages of political personalities and manually annotated, being composed by 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive language), the level of offense (highly offensive, moderately offensive and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religion intolerance, partyism, apology to the dictatorship, antisemitism and fat phobia). Each comment was annotated by three different annotators, which achieved high inter-annotator agreement. The proposed annotation process is also language and domain independent .", "after_revision": "This paper describes a corpus annotation process to support the identification of hate speech and offensive language in social media. In addition, we provide the first robust corpus this kind for the Brazilian Portuguese language. The corpus was collected from Instagram pages of political personalities and manually annotated, being composed by 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive language), the level of offense (highly offensive, moderately offensive and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religion intolerance, partyism, apology to the dictatorship, antisemitism and fat phobia). Each comment was annotated by three different annotators, which achieved high inter-annotator agreement. The proposed annotation approach is also language and domain independent , nevertheless, it was currently applied for Brazilian Portuguese .", "edit_actions": [{"type": "R", "before": "process", "after": "approach", "start_char_pos": 906, "end_char_pos": 913, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "A", "before": null, "after": ", nevertheless, it was currently applied for Brazilian Portuguese", "start_char_pos": 954, "end_char_pos": 954, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 133, 230, 408, 776, 881], "domain": "arxiv"} |
{"doc_id": "2103.14972", "revision_depth": 3, "before_revision": "This paperdescribes a corpus annotation process to support the identification of hate speech and offensive language in social media. In addition, we provide the first robust corpus this kind for the Brazilian Portuguese language. The corpus was collected from Instagram pages of political personalities and manually annotated, being composed by 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive language), the level of offense (highly offensive, moderately offensive and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religion intolerance, partyism, apology to the dictatorship, antisemitism and fat phobia). Each comment was annotated by three different annotators, which achieved high inter-annotator agreement. The proposed annotation approach is also language and domain independent, nevertheless, it was currently applied for Brazilian Portuguese .", "after_revision": "The understanding of an offense is subjective and people may have different opinions about the offensiveness of a comment. Also, offenses and hate speech may occur through sarcasm, which hides the real intention of the comment and makes the decision of the annotators more confusing. Therefore, provide a well-structured annotation process is crucial to a better understanding of hate speech and offensive language phenomena, as well as supply better performance for machine learning classifiers. In this paper, we describe a corpus annotation process , which was guided by a linguist, and a hate speech skilled to support the identification of hate speech and offensive language on social media. In addition, we provide the first robust corpus of this kind for the Brazilian Portuguese language. The corpus was collected from Instagram posts of political personalities and manually annotated, being composed by 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive language), the level of offense (highly offensive, moderately offensive , and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology to the dictatorship, antisemitism , and fat phobia). Each comment was annotated by three different annotators, and achieved high inter-annotator agreement. The new proposed annotation approach is also language and domain-independent (although it has been applied for Brazilian Portuguese ) .", "edit_actions": [{"type": "R", "before": "This paperdescribes", "after": "The understanding of an offense is subjective and people may have different opinions about the offensiveness of a comment. Also, offenses and hate speech may occur through sarcasm, which hides the real intention of the comment and makes the decision of the annotators more confusing. Therefore, provide a well-structured annotation process is crucial to a better understanding of hate speech and offensive language phenomena, as well as supply better performance for machine learning classifiers. 
In this paper, we describe", "start_char_pos": 0, "end_char_pos": 19, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "A", "before": null, "after": ", which was guided by a linguist, and a hate speech skilled", "start_char_pos": 48, "end_char_pos": 48, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "in", "after": "on", "start_char_pos": 117, "end_char_pos": 119, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": "of", "start_char_pos": 182, "end_char_pos": 182, "major_intent": "fluency", "raw_intents": ["fluency", "clarity", "fluency"]}, {"type": "R", "before": "pages", "after": "posts", "start_char_pos": 272, "end_char_pos": 277, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 538, "end_char_pos": 538, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "coherence"]}, {"type": "R", "before": "religion", "after": "religious", "start_char_pos": 688, "end_char_pos": 696, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 762, "end_char_pos": 762, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "which", "after": "and", "start_char_pos": 838, "end_char_pos": 843, "major_intent": "coherence", "raw_intents": ["coherence", "fluency", "coherence"]}, {"type": "A", "before": null, "after": "new", "start_char_pos": 889, "end_char_pos": 889, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "domain independent, nevertheless, it was currently", "after": "domain-independent (although it has been", "start_char_pos": 940, "end_char_pos": 990, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": ")", "start_char_pos": 1024, "end_char_pos": 1024, "major_intent": "fluency", "raw_intents": ["fluency", "others", "fluency"]}], "sents_char_pos": [0, 133, 231, 409, 779, 884], "domain": "arxiv"} |
|
{"doc_id": "2104.12265", "revision_depth": 1, "before_revision": "This paper presents a new approach for offensive language and hate speech detection on social media. Our approach incorporates an offensive lexicon composed by implicit and explicit offensive and swearing expressions annotated with binary classes: context-dependent offensive and context-independent offensive. Due to the severity of the hate speech and offensive comments in Brazil and the lack of research in Portuguese, Brazilian Portuguese is the language used to validate our method. However, the proposal may be applied to any other language or domain. Based on the obtained results, the proposed approach showed high performance results overcoming the current baselines for European and Brazilian Portuguese.", "after_revision": "This paper presents a new approach for offensive language and hate speech detection on social media. Our approach incorporate an offensive lexicon composed by implicit and explicit offensive and swearing expressions annotated with binary classes: context-dependent and context-independent offensive. Due to the severity of the hate speech and offensive comments in Brazil , and the lack of research in Portuguese, Brazilian Portuguese is the language used to validate our method. However, the proposal may be applied to any other language or domain. Based on the obtained results, the proposed approach showed high performance results overcoming the current baselines for European and Brazilian Portuguese.", "edit_actions": [{"type": "R", "before": "incorporates", "after": "incorporate", "start_char_pos": 114, "end_char_pos": 126, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "D", "before": "offensive", "after": null, "start_char_pos": 266, "end_char_pos": 275, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 383, "end_char_pos": 383, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}], "sents_char_pos": [0, 100, 310, 489, 559], "domain": "arxiv"} |
|
{"doc_id": "100912", "revision_depth": 1, "before_revision": "Gordon Brown , who made a speech to the UK Labour Party todayThe UK Prime Minister, Gordon Brown, has made a speech to UK Labour Party members as part of his party's spring conference. In the speech he made several commitments, which he said he would \"work tirelessly\" to achieve. On the subjects of employment and welfare, Mr. Brown said that his party \"will offer new entitlements to the carers that serve this country [the UK] so well. He continued by saying that since Labour came to power in 1997, three million new jobs have been created. He also said that Labour will attempt to get 100,00 people back to work. When discussing crime in the speech, Mr. Brown claimed that his party has \"already cut crime by 32\\%.\" He also made promises of community policing across the UK.", "after_revision": "Gordon Brown in 2004.United Kingdom Prime Minister, Gordon Brown, has made a speech to UK Labour Party members as part of the party's spring conference. In the speech he made several commitments, which he said he would \"work tirelessly\" to achieve. On the subjects of employment and welfare, Mr. Brown said that his party \"will offer new entitlements to the careers that serve this country [the UK] so well. He continued by saying that since Labour came to power in 1997, three million new jobs have been created. He also said that Labour will attempt to get 100,00 people back to work. While discussing crime in the speech, Mr. Brown claimed that his party has \"already cut crime by 32\\%.\" He also made promises of community policing across the UK.", "edit_actions": [{"type": "R", "before": ", who made a speech to the UK Labour Party todayThe UK", "after": "in 2004.United Kingdom", "start_char_pos": 13, "end_char_pos": 67, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}, {"type": "R", "before": "his", "after": "the", "start_char_pos": 154, "end_char_pos": 157, "major_intent": "clarity", "raw_intents": ["clarity", "style", "fluency"]}, {"type": "R", "before": "carers", "after": "careers", "start_char_pos": 390, "end_char_pos": 396, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "When", "after": "While", "start_char_pos": 618, "end_char_pos": 622, "major_intent": "fluency", "raw_intents": ["fluency", "coherence", "fluency"]}], "sents_char_pos": [0, 184, 280, 438, 544, 617, 720], "domain": "news"} |
|
{"doc_id": "106096", "revision_depth": 1, "before_revision": "noicon|Launch of the Opening Ceremony. 2008 The 27th Young Designers' Exhibition, famed as the largest show of students' creations, recognized by International Council of Societies of Industrial Design (ICSID) since last year, started at Taipei World Trade Center today and will close at Sunday (May 18) with participations from 87 academical and industrial units in Taiwan and 20 units from United States, United Kingdom, Italy, Netherlands, New Zealand, and Australia to showcase varied achievements from design industry. Besides for the early-announced \"Wow! Taiwan Design Award\", winners from \"2008 Young Designers Competition\" and \"2008 YODEX Interior Design Competition\" will also be announced in this Saturday (May 17).", "after_revision": "noicon|Launch of the Opening Ceremony. 2008 The 27th Young Designers' Exhibition, famed as the largest show from students' creations, recognized by International Council of Societies of Industrial Design (ICSID) since last year, started at Taipei World Trade Center yesterday and will close at Sunday (May 18) with participations from 87 academical units in Taiwan and 20 units from United States, United Kingdom, Italy, Netherlands, New Zealand, and Australia to showcase varied achievements from design industry. Besides of the early-announced \"Wow! Taiwan Design Award\", winners from \"2008 Young Designers ' Competition\" and \"2008 YODEX Interior Design Competition\" will also be announced this Saturday (May 17).", "edit_actions": [{"type": "R", "before": "of", "after": "from", "start_char_pos": 108, "end_char_pos": 110, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "today", "after": "yesterday", "start_char_pos": 264, "end_char_pos": 269, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "D", "before": "and industrial", "after": null, "start_char_pos": 343, "end_char_pos": 357, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": "for", "after": "of", "start_char_pos": 532, "end_char_pos": 535, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "'", "start_char_pos": 619, "end_char_pos": 619, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "in", "after": null, "start_char_pos": 701, "end_char_pos": 703, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 38, 523, 561], "domain": "news"} |
|
{"doc_id": "106096", "revision_depth": 2, "before_revision": "noicon|Launch of the Opening Ceremony. 2008 The 27th Young Designers' Exhibition, famed as the largest show from students' creations, recognized by International Council of Societies of Industrial Design (ICSID) since last year, started at Taipei World Trade Center yesterday and will close at Sunday (May 18) with participations from 87 academical units in Taiwan and 20 units from United States, United Kingdom, Italy, Netherlands, New Zealand, and Australia to showcase varied achievements from designindustry .", "after_revision": "noicon|Launch of the Opening Ceremony. 2008 The 27th Young Designers' Exhibition, opened on May 15 at the Taipei World Trade Center and will close at Sunday May 18. It features participation by 87 academic groups in Taiwan and 20 groups from United States, United Kingdom, Italy, Netherlands, New Zealand, and Australia to showcase various achievements in industrial design. It is recognized by the International Council of Societies of Industrial Design (ICSID) as the largest show of student creations .", "edit_actions": [{"type": "R", "before": "famed as the largest show from students' creations, recognized by International Council of Societies of Industrial Design (ICSID) since last year, started at", "after": "opened on May 15 at the", "start_char_pos": 82, "end_char_pos": 239, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}, {"type": "D", "before": "yesterday", "after": null, "start_char_pos": 266, "end_char_pos": 275, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "R", "before": "(May 18) with participations from", "after": "May 18. It features participation by", "start_char_pos": 301, "end_char_pos": 334, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": "academical units", "after": "academic groups", "start_char_pos": 338, "end_char_pos": 354, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "units", "after": "groups", "start_char_pos": 372, "end_char_pos": 377, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "varied achievements from designindustry", "after": "various achievements in industrial design. It is recognized by the International Council of Societies of Industrial Design (ICSID) as the largest show of student creations", "start_char_pos": 473, "end_char_pos": 512, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 38], "domain": "news"} |
|
{"doc_id": "106096", "revision_depth": 3, "before_revision": "noicon|Launch of the Opening Ceremony. 2008 The 27th Young Designers' Exhibition, opened on May 15 at the Taipei World Trade Center and will close at Sunday May 18. It features participation by 87 academic groups in Taiwan and 20 groups from United States, United Kingdom, Italy, Netherlands, New Zealand, and Australia to showcase various achievements in industrial design. It is recognized by the International Council of Societies of Industrial Design (ICSID) as the largest show of student creations. Not only several design competitions, sponsors like International Forum Design (iF), EPSON, MUJI (in Japanese: \u7121\u5370\u826f\u54c1, Mujirushi Ry\u014dhin), Tsann Kuen Trans-nation Group will also showcase different solutions for design, creative, and cultural industries. In addition , Taiwan Design Center, the show organizer, also designed several site events like \"On-line Graduate Season Show\", \" Carrer Match-up\", \"Creative and Cultural Showcase and Performance\", \"Seminars of YODEX 2008\" to link the actual exhibition with on-line exhibition. Besides of the early-announced \"Wow! Taiwan Design Award\", winners from \"2008 Young Designers' Competition\" and \"2008 YODEX Interior Design Competition\" will also be announced this Saturday(May 17).", "after_revision": "noicon|Launch of the Opening Ceremony. 2008 The 27th Young Designers' Exhibition, opened on May 15 at the Taipei World Trade Center and closes Sunday May 18. It features participation by 87 academic groups in Taiwan and 20 groups from United States, United Kingdom, Italy, Netherlands, New Zealand, and Australia to showcase various achievements in industrial design. It is recognized by the International Council of Societies of Industrial Design (ICSID) as the largest show of student creations. Besides the several design competitions, sponsors like International Forum Design (iF), EPSON, MUJI (in Japanese: \u7121\u5370\u826f\u54c1, Mujirushi Ry\u014dhin), Tsann Kuen Trans-nation Group will showcase different solutions for the design, creative, and cultural industries. The show's organizer , Taiwan Design Center, also designed several on-site events like \"On-line Graduate Season Show\", \" Career Match-up\", \"Creative and Cultural Showcase and Performance\", \"Seminars of YODEX 2008\" to link the actual exhibition with the on-line exhibition. Besides of the previously announced \"Wow! 
Taiwan Design Award\", winners from \"2008 Young Designers' Competition\" and \"2008 YODEX Interior Design Competition\" were announced on Saturday, May 17.", "edit_actions": [{"type": "R", "before": "will close at", "after": "closes", "start_char_pos": 136, "end_char_pos": 149, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "Not only", "after": "Besides the", "start_char_pos": 505, "end_char_pos": 513, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "D", "before": "also", "after": null, "start_char_pos": 676, "end_char_pos": 680, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}, {"type": "A", "before": null, "after": "the", "start_char_pos": 714, "end_char_pos": 714, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "In addition", "after": "The show's organizer", "start_char_pos": 758, "end_char_pos": 769, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "D", "before": "the show organizer,", "after": null, "start_char_pos": 794, "end_char_pos": 813, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}, {"type": "R", "before": "site", "after": "on-site", "start_char_pos": 836, "end_char_pos": 840, "major_intent": "clarity", "raw_intents": ["fluency", "clarity", "clarity"]}, {"type": "R", "before": "Carrer", "after": "Career", "start_char_pos": 887, "end_char_pos": 893, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": "the", "start_char_pos": 1015, "end_char_pos": 1015, "major_intent": "clarity", "raw_intents": ["clarity", "fluency", "clarity"]}, {"type": "R", "before": "early-announced", "after": "previously announced", "start_char_pos": 1051, "end_char_pos": 1066, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "will also be announced this Saturday(May 17).", "after": "were announced on Saturday, May 17.", "start_char_pos": 1189, "end_char_pos": 1234, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "meaning-changed"]}], "sents_char_pos": [0, 38, 164, 374, 504, 757, 1035, 1072], "domain": "news"} |
|
{"doc_id": "15187", "revision_depth": 1, "before_revision": "One hundred students from Serbia who decide to start graduate studies abroad will get 15,000 a year scholarships from the Fund for young talents . Students who can apply need to have at least 8.5 GPA on 5 to 10 scale, and have to be younger then 26. Another condition is that they have to work in Serbia for at least 5 years after finishing their graduate studies. Winners will be decided based on their GPA and prestige of the university they apply to . Also, one thousand university seniors will be awarded 250 a month. The secretary of the Serbian Student Union, Marko Milovanovi, is satisfied with this action, but also notes that it would be better if economic needs of students were taken into account too . The Fund for young talents will cover about 2.5\\% of students in Serbia.", "after_revision": "One hundred students from Serbia who decide to pursue graduate studies abroad will get 15,000 a year scholarships from the Fund for Young Talents . Students who wish to apply must have at least an 8.5 GPA on 5 to 10 scale, and must be younger then 26. Additionally, students must commit to work in Serbia for at least 5 years after finishing their graduate studies. Scholarship will be awarded based on the student's GPA and the prestige of the university to which they apply . Also, one thousand university seniors will be awarded 250 a month. The secretary of the Serbian Student Union, Marko Milovanovi, is satisfied with this action, but also notes that it would be better if economic needs of students were taken into account as well . The Fund for Young Talents will provide funding to about 2.5\\% of students in Serbia.", "edit_actions": [{"type": "R", "before": "start", "after": "pursue", "start_char_pos": 47, "end_char_pos": 52, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "young talents", "after": "Young Talents", "start_char_pos": 131, "end_char_pos": 144, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "can apply need to", "after": "wish to apply must", "start_char_pos": 160, "end_char_pos": 177, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "an", "start_char_pos": 192, "end_char_pos": 192, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "have to", "after": "must", "start_char_pos": 223, "end_char_pos": 230, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "Another condition is that they have", "after": "Additionally, students must commit", "start_char_pos": 251, "end_char_pos": 286, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": "Winners will be decided based on their GPA and", "after": "Scholarship will be awarded based on the student's GPA and the", "start_char_pos": 366, "end_char_pos": 412, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "they apply to", "after": "to which they apply", "start_char_pos": 440, "end_char_pos": 453, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "R", "before": "too", "after": "as well", "start_char_pos": 709, "end_char_pos": 712, "major_intent": "style", "raw_intents": ["style", "style", "style"]}, {"type": "R", "before": "young talents will cover", "after": "Young 
Talents will provide funding to", "start_char_pos": 728, "end_char_pos": 752, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 250, 365, 522, 714], "domain": "news"} |
|
{"doc_id": "1928792", "revision_depth": 2, "before_revision": "Flag of ISIL. A video was released yesterday purporting to show the execution of 21 by supporters of (ISIL) . The video shows the prisoners being beheaded in a location apparently near in . The captives, all wearing orange in the video, were picked up in , a coastal town in Libya, during December and January. The video indicates that the Christians were targeted by ISIL because of their religion. The Coptic Orthodox Church stated they were \"confident\" justice would be carried out . 's President stated: \"Egypt and the whole world are in a fierce battle with extremist groups carrying extremist ideology and sharing the same goals\".", "after_revision": "Flag of ISIL. A video purporting to show the execution of 21 by supporters of (ISIL) has been released yesterday . The video shows them being beheaded in a location apparently near in . The captives, all shown being executed in orange in the video, were picked up in , a coastal town in Libya, during December and January. The video asserts the Christians were targeted by ISIL because of their religion. The Coptic Orthodox Church stated they were \"confident\" justice would be done on those who executed their followers . 's President stated: \"Egypt and the whole world are in a fierce battle with extremist groups carrying extremist ideology and sharing the same goals\".", "edit_actions": [{"type": "D", "before": "was released yesterday", "after": null, "start_char_pos": 22, "end_char_pos": 44, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": "has been released yesterday", "start_char_pos": 108, "end_char_pos": 108, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "the prisoners", "after": "them", "start_char_pos": 127, "end_char_pos": 140, "major_intent": "style", "raw_intents": ["style", "clarity", "style"]}, {"type": "R", "before": "wearing", "after": "shown being executed in", "start_char_pos": 209, "end_char_pos": 216, "major_intent": "style", "raw_intents": ["style", "style", "style"]}, {"type": "R", "before": "indicates that", "after": "asserts", "start_char_pos": 322, "end_char_pos": 336, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "carried out", "after": "done on those who executed their followers", "start_char_pos": 474, "end_char_pos": 485, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}], "sents_char_pos": [0, 13, 110, 190, 311, 400, 487], "domain": "news"} |
|
{"doc_id": "2795820", "revision_depth": 2, "before_revision": "The for by reports disputed by the Russian Consul shot and killed an attacker on Thursday, after two individuals attempted to commit a robbery. As reported, lawyer Marcos Cesar Feres Braga so-named by the newspaper working for the , was driving with his family through the suburb of near the main complex when the incident occurred. According to reports from the Rio newspaper , the 60-year-old man, his wife, and his daughter were stopped in traffic due to the procession in a nearby neighbourhood. Two individuals on motorbikes approached and used a gun to smash the car window. Braga, trained and considered an expert according to reports in , took control of the attacker's firearm and proceeded to shoot dead the alleged attacker, while the second suspect fled the scene. Rio de Janeiro's homicide branch has released a statement in relation to the incident. \"The vice consul got into a physical struggle with the assailant and during the fight the aggressor's weapon fired shots. The assailant died of his wounds on the spot.\" However, Russia's Consul General Vladimir Tokmakov has released a statement claiming no Russian diplomats or employees of the consulate general were involved in the incident. \"Information circulating in the Brazilian press about the alleged shooting [...] of a Brazilian national by a Russian diplomat during an armed robbery does not reflect reality \".", "after_revision": "The for , according reports disputed by the Russian Consul ; shot and killed an attacker on Thursday, after two individuals attempted to commit a robbery. lawyer Marcos Cesar Feres Braga , working at the , was driving with his family through the suburb of near the main complex when the incident occurred. According to reports from the Rio newspaper , a 60-year-old man, his wife, and his daughter were stopped in traffic , due to the procession in a nearby neighbourhood. Two individuals on motorbikes approached and used a gun to smash the car window. Braga, trained in , took control of the attacker's firearm and proceeded to shoot and kill the alleged attacker, while the second suspect fled the scene. Rio de Janeiro's homicide branch has released a statement in relation to the incident. \"The vice consul got into a physical struggle with the assailant , and during the fight the aggressor's weapon fired shots. The assailant died of his wounds on the spot.\" Russia's Consul General Vladimir Tokmakov has released a statement claiming no Russian diplomats or employees of the consulate general were involved in the incident. \"Information circulating in the Brazilian press about the alleged shooting [...] 
of a Brazilian national by a Russian diplomat during an armed robbery does not reflect reality .\"", "edit_actions": [{"type": "R", "before": "by", "after": ", according", "start_char_pos": 8, "end_char_pos": 10, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": ";", "start_char_pos": 50, "end_char_pos": 50, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "D", "before": "As reported,", "after": null, "start_char_pos": 145, "end_char_pos": 157, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": "so-named by the newspaper working for", "after": ", working at", "start_char_pos": 190, "end_char_pos": 227, "major_intent": "others", "raw_intents": ["others", "others", "others"]}, {"type": "R", "before": "the", "after": "a", "start_char_pos": 380, "end_char_pos": 383, "major_intent": "others", "raw_intents": ["others", "others", "others"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 452, "end_char_pos": 452, "major_intent": "others", "raw_intents": ["others", "others", "others"]}, {"type": "D", "before": "and considered an expert according to reports", "after": null, "start_char_pos": 598, "end_char_pos": 643, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "dead", "after": "and kill", "start_char_pos": 711, "end_char_pos": 715, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 931, "end_char_pos": 931, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "D", "before": "However,", "after": null, "start_char_pos": 1036, "end_char_pos": 1044, "major_intent": "coherence", "raw_intents": ["clarity", "coherence", "coherence"]}, {"type": "R", "before": "\".", "after": ".\"", "start_char_pos": 1387, "end_char_pos": 1389, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}], "sents_char_pos": [0, 144, 333, 501, 582, 778, 865, 988, 1035, 1210], "domain": "news"} |
|
{"doc_id": "31572", "revision_depth": 1, "before_revision": "Jos\u00e9 S\u00f3crates, prime minister of Portugal, said he was \"satisfied with the decision of Volkswagen to produce a new model in the factory of Palmela,\" and considered that the decision, \"reflected the confidence [of the investors] in the portuguese economy.\" Volkswagen will reveal the new model to be produced next week. By 2008 the factory at Palmela will be only producing the multi-purpose vehicle Sharan and the Eos models. With the end of the production of the multi-purpose vehicle, the factory needs to garantee new product lines, since the new Eos is insufficient to maintain the current 2,790 workers.", "after_revision": "Jos\u00e9 S\u00f3crates, prime minister of Portugal, said he was \"satisfied with the decision of Volkswagen to produce a new model in the factory of Palmela,\" and considered that the decision, \"reflected the confidence [of the investors] in the Portuguese economy.\" Volkswagen will reveal the new model to be produced next week. By 2008 , the factory at Palmela will only be producing the multi-purpose vehicle Sharan and the Eos models. With the end of the production of the multi-purpose vehicle, the factory needs to guarantee new product lines, since the new Eos is not sufficient to maintain the current 2,790 workers.", "edit_actions": [{"type": "R", "before": "portuguese", "after": "Portuguese", "start_char_pos": 235, "end_char_pos": 245, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 327, "end_char_pos": 327, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "be only", "after": "only be", "start_char_pos": 356, "end_char_pos": 363, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "R", "before": "garantee", "after": "guarantee", "start_char_pos": 509, "end_char_pos": 517, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "insufficient", "after": "not sufficient", "start_char_pos": 558, "end_char_pos": 570, "major_intent": "style", "raw_intents": ["style", "style", "style"]}], "sents_char_pos": [0, 255, 318, 426], "domain": "news"} |
|
{"doc_id": "36498", "revision_depth": 1, "before_revision": "David Parker today resigned from Cabinet; a day after he resigned his position as Attorney-General , after publicity around his filing of an incorrect declaration , with the Companies Office on behalf of Queens Park Mews Limited. The declaration said that ; the three shareholders had unanimously agreed not to appoint an auditor for the company , but according to Investigate Magazine, another shareholder Russell Hyslop, had never been consulted about the matter. Parker was saying early this morning , that he would persist to hold his other portfolios. Prime Minister Helen Clark released the news shortly before 11am (NZST) as she walked into a caucus meeting. She says \"I have this morning accepted Mr Parker's resignation from all his portfolios.\" Knowingly authorising a false statement is an offence under the Companies Act. The maximum penalty punishable under section 373(4) for filing a false return is a fine of $200,000 or 5 years imprisonment.", "after_revision": "David Parker resigned today from Cabinet, a day after he resigned his position as Attorney-General and after publicity around his filing of an incorrect declaration with the Companies Office on behalf of Queens Park Mews Limited. The declaration said that the three shareholders had unanimously agreed not to appoint an auditor for the company ; but according to Investigate Magazine, another shareholder , Russell Hyslop, had never been consulted about the matter. Parker was saying early this morning that he would persist in holding his other portfolios. Prime Minister Helen Clark released the news shortly before 11 a.m. (NZST) as she walked into a caucus meeting. She said, \"I have this morning accepted Mr Parker's resignation from all his portfolios.\" Knowingly authorising a false statement is an offence under the Companies Act. 
The maximum penalty under section 373(4) for filing a false return is a fine of $200,000 or five years of imprisonment.", "edit_actions": [{"type": "R", "before": "today resigned from Cabinet;", "after": "resigned today from Cabinet,", "start_char_pos": 13, "end_char_pos": 41, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": ",", "after": "and", "start_char_pos": 99, "end_char_pos": 100, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "fluency"]}, {"type": "D", "before": ",", "after": null, "start_char_pos": 163, "end_char_pos": 164, "major_intent": "fluency", "raw_intents": ["others", "fluency", "fluency"]}, {"type": "D", "before": ";", "after": null, "start_char_pos": 256, "end_char_pos": 257, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": ",", "after": ";", "start_char_pos": 346, "end_char_pos": 347, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 407, "end_char_pos": 407, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "D", "before": ",", "after": null, "start_char_pos": 504, "end_char_pos": 505, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "to hold", "after": "in holding", "start_char_pos": 528, "end_char_pos": 535, "major_intent": "clarity", "raw_intents": ["clarity", "fluency", "clarity"]}, {"type": "R", "before": "11am", "after": "11 a.m.", "start_char_pos": 618, "end_char_pos": 622, "major_intent": "fluency", "raw_intents": ["fluency", "meaning-changed", "fluency"]}, {"type": "R", "before": "says", "after": "said,", "start_char_pos": 671, "end_char_pos": 675, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "D", "before": "punishable", "after": null, "start_char_pos": 855, "end_char_pos": 865, "major_intent": "clarity", "raw_intents": ["coherence", "clarity", "clarity"]}, {"type": "R", "before": "5 years", "after": "five years of", "start_char_pos": 938, "end_char_pos": 945, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "fluency"]}], "sents_char_pos": [0, 41, 229, 257, 466, 557, 666, 834], "domain": "news"} |
|
{"doc_id": "47188", "revision_depth": 1, "before_revision": "According to BBC NEWS , Merseyside police have apparently been called in and are working with the Crown Prosecution Service. As yet there has been no confirmation that the blog was acting in or had intended to act in a criminal manner. An Earlier article in the Liverpool Echo, had however suggested that some employees of Liverpool City Council felt postings on the blog may be defamatory, a civil rather criminal matter. Liverpool City Council, has so far declined to comment on the sites dissaperance , although the site had been previosuly blocked from the council's IT systems.", "after_revision": "According to BBC News , Merseyside police have apparently been called in and are working with the Crown Prosecution Service. As yet there has been no confirmation that the blog was acting in or had intended to act in a criminal manner. An earlier article in the Liverpool Echo, had however suggested that some employees of Liverpool City Council felt postings on the blog may be defamatory, a civil rather than criminal matter. Liverpool City Council, has so far declined to comment on the sites disappearance , although the site had been previously blocked from the council's IT systems.", "edit_actions": [{"type": "R", "before": "NEWS", "after": "News", "start_char_pos": 17, "end_char_pos": 21, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "Earlier", "after": "earlier", "start_char_pos": 239, "end_char_pos": 246, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": "than", "start_char_pos": 406, "end_char_pos": 406, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "dissaperance", "after": "disappearance", "start_char_pos": 492, "end_char_pos": 504, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "previosuly", "after": "previously", "start_char_pos": 534, "end_char_pos": 544, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}], "sents_char_pos": [0, 124, 235, 423], "domain": "news"} |
|
{"doc_id": "49966", "revision_depth": 1, "before_revision": "Hewlett-Packard is in a scandal over the alegged illegal investigation of its own board members over leaks from the board to the press. The companys admits that it hired private investigators and that they used pretexting to get private phone records of it's own directors. Pretexting is where someone pretends to be the customer when calling the phone company to get their phone records. It is illegal in the State of California. The aim of the HP investigation was to find out which director(s) had leaked information to journalists. VC investor Tom Perkins resigned from the board on May 22nd 2006 over the issue and requested that HP look into the methods used in it's leak inquiry. The FCC is looking into phone companies who may have supplied the phone records illegally under California law. So", "after_revision": "Hewlett-Packard (HP) is embroiled in a scandal over the alleged illegal investigation of its own board members over leaks from the board to the press. The companys admits that it hired private investigators and that they used pretexting to get private phone records of its own directors. 'Pretexting' is where someone pretends to be the customer when calling the phone company to get their phone records. It is illegal in the state of California. The aim of the HP investigation was to find out which director(s) had leaked information to journalists. Venture Capitalist investor Tom Perkins resigned from the board on May 22, 2006 over the issue and requested that HP look into the methods used in its leak inquiry. The Federal Communications Commission is looking into phone companies who may have supplied the phone records illegally under California law. ", "edit_actions": [{"type": "R", "before": "is", "after": "(HP) is embroiled", "start_char_pos": 16, "end_char_pos": 18, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "alegged", "after": "alleged", "start_char_pos": 41, "end_char_pos": 48, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "it's", "after": "its", "start_char_pos": 254, "end_char_pos": 258, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "Pretexting", "after": "'Pretexting'", "start_char_pos": 274, "end_char_pos": 284, "major_intent": "fluency", "raw_intents": ["others", "fluency", "fluency"]}, {"type": "R", "before": "State", "after": "state", "start_char_pos": 410, "end_char_pos": 415, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "VC", "after": "Venture Capitalist", "start_char_pos": 536, "end_char_pos": 538, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "fluency"]}, {"type": "R", "before": "22nd", "after": "22,", "start_char_pos": 591, "end_char_pos": 595, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "style"]}, {"type": "R", "before": "it's", "after": "its", "start_char_pos": 668, "end_char_pos": 672, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "FCC", "after": "Federal Communications Commission", "start_char_pos": 691, "end_char_pos": 694, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "So", "after": null, "start_char_pos": 799, "end_char_pos": 801, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "clarity"]}], 
"sents_char_pos": [0, 135, 273, 388, 430, 535, 686], "domain": "news"} |
|
{"doc_id": "54370", "revision_depth": 1, "before_revision": " 73-year old retired pastor, Reverend Roland Weisselberg, burned himself alive during Reformation Day services on Tuesday in Erfurt, Germany. His self-immolation was apparently to protest against the spread of Islam, which he felt the Protestant church should take more seriously. His last words were \"Jesus and Oskar,\" which is believed to refer to Oskar Bruesewitz, a priest who burned himself alive to protest the Communist government of East Germany. He was transported to a burns unit in Halle but died en route. Reverend Weisselberg had lived under Communism in East Germany, and had been a publisher in his former vocation. Axel Noack, Bishop of Saxony, said that he was shocked at Rev. Weisselberg's self-immolation, stressing that Christians could not accept a \"clash of cultures\". He confessed that the issue of Islam had been sidelined within the Church and was only spoken about in private. There were few Muslims there with whom they could engage in dialogue, Noack claimed .", "after_revision": "A 73-year old retired pastor, Reverend Roland Weisselberg, burned himself alive during Reformation Day services on Tuesday in Erfurt, Germany. His self-immolation was apparently in protest against the spread of Islam, which he felt the Protestant church should take more seriously. His last words were \"Jesus and Oskar,\" , believed to a reference to Oskar Bruesewitz, a priest who burned himself alive to protest the Communist government of East Germany. He was transported to a burns unit in Halle but died en route. Rev. Weisselberg had lived under Communism in East Germany, and had been a publisher in his former vocation. Axel Noack, Bishop of Saxony, said that he was shocked at Rev. Weisselberg's self-immolation, stressing that Christians could not accept a \"clash of cultures\". He confessed that the issue of Islam had been sidelined within the Church and was only spoken about in private. There were few Muslims there with whom they could engage in dialogue, Noack said .", "edit_actions": [{"type": "A", "before": null, "after": "A", "start_char_pos": 0, "end_char_pos": 0, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "others"]}, {"type": "R", "before": "to", "after": "in", "start_char_pos": 177, "end_char_pos": 179, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "which is believed to refer", "after": ", believed to a reference", "start_char_pos": 320, "end_char_pos": 346, "major_intent": "style", "raw_intents": ["style", "style", "clarity"]}, {"type": "R", "before": "Reverend", "after": "Rev.", "start_char_pos": 518, "end_char_pos": 526, "major_intent": "clarity", "raw_intents": ["fluency", "clarity", "clarity"]}, {"type": "R", "before": "claimed", "after": "said", "start_char_pos": 979, "end_char_pos": 986, "major_intent": "style", "raw_intents": ["clarity", "style", "style"]}], "sents_char_pos": [0, 141, 280, 454, 517, 630, 790, 902], "domain": "news"} |
|
{"doc_id": "55265", "revision_depth": 1, "before_revision": "Nine people are confirmed dead in a fire in a leather bag facory in Kolkata's South 24 Pargana district in Topsia area. Eighteen people were seriously injured and admitted to National Medical College and Hospital. Inspector General of Police (Law and Order) Raj Kanojia said that the fire was broken out at about 3 am when the victims were inside the factory and all the doors were closed. Director General of Fire Brigade Gopal Bhattacharjee said that it was an illegal factory in Kolkata located at third floor of the building. At the time of fire, all the premises were closed and workers could not come out from the factory. The cause of the fire was yet to be ascertained. Kolkata Mayor Bikash Ranjan Bhattacharya predicted short circuit or an unattended cigarette causing fire in the factory .", "after_revision": "Nine people are confirmed dead in a fire in a leather bag factory in Kolkata's South 24 Pargana district in Topsia area. Eighteen people were seriously injured and admitted to the National Medical College and Hospital. Inspector General of Police (Law and Order) Raj Kanojia said that the fire broke out at about 3 a.m. local time, when the victims were inside the factory and all the doors were closed. Director General of Fire Brigade Gopal Bhattacharjee said that it was an illegal factory in Kolkata located at third floor of the building. At the time of fire, all the exits were shut and workers could not come out from the factory. The cause of the fire is yet to be ascertained. Kolkata Mayor Bikash Ranjan Bhattacharya suggested that a short circuit or an unattended cigarette may have caused the fire .", "edit_actions": [{"type": "R", "before": "facory", "after": "factory", "start_char_pos": 58, "end_char_pos": 64, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": "the", "start_char_pos": 175, "end_char_pos": 175, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "was broken", "after": "broke", "start_char_pos": 290, "end_char_pos": 300, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "am", "after": "a.m. local time,", "start_char_pos": 316, "end_char_pos": 318, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "meaning-changed"]}, {"type": "R", "before": "premises were closed", "after": "exits were shut", "start_char_pos": 560, "end_char_pos": 580, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "was", "after": "is", "start_char_pos": 652, "end_char_pos": 655, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "predicted", "after": "suggested that a", "start_char_pos": 720, "end_char_pos": 729, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "causing fire in the factory", "after": "may have caused the fire", "start_char_pos": 771, "end_char_pos": 798, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}], "sents_char_pos": [0, 119, 214, 390, 530, 629, 678], "domain": "news"} |
|
{"doc_id": "18842359", "revision_depth": 1, "before_revision": "The ocean (also the sea or the world ocean) is the body of salt water which covers approximately 71\\% of the surface of the Earth and contains 97\\% of Earth's water. It is also \"any of the large bodies of water into which the great ocean is divided\".\"Ocean.\" Merriam-Webster.com Dictionary, Merriam-Webster, URL Accessed March 14, 2021. A common definition lists five oceans, in descending order by area, the Pacific , Atlantic, Indian, Southern (Antarctic), and Arctic Oceans . Seawater covers approximately of the Earth. As the world's ocean is the principal component of Earth's hydrosphere, it is integral to life on Earth, forms part of the carbon cycle and water cycle, and - as a huge heat reservoir - influences climate and weather patterns.", "after_revision": "The ocean (also the sea or the world ocean) is the body of salt water which covers approximately 71\\% of the surface of the Earth and contains 97\\% of Earth's water. Another definition is \"any of the large bodies of water into which the great ocean is divided\".\"Ocean.\" Merriam-Webster.com Dictionary, Merriam-Webster, URL Accessed March 14, 2021. The ocean is divided into five oceans: Pacific (the largest) , Atlantic, Indian, Southern (Antarctic), and Arctic (the smallest) . Seawater covers approximately of the Earth. As the world's ocean is the principal component of Earth's hydrosphere, it is integral to life on Earth, forms part of the carbon cycle and water cycle, and - as a huge heat reservoir - influences climate and weather patterns.", "edit_actions": [{"type": "R", "before": "It is also", "after": "Another definition is", "start_char_pos": 166, "end_char_pos": 176, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": "A common definition lists five oceans, in descending order by area, the Pacific", "after": "The ocean is divided into five oceans: Pacific (the largest)", "start_char_pos": 337, "end_char_pos": 416, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "R", "before": "Oceans", "after": "(the smallest)", "start_char_pos": 470, "end_char_pos": 476, "major_intent": "clarity", "raw_intents": ["clarity", "meaning-changed", "clarity"]}], "sents_char_pos": [0, 165, 250, 336, 478, 522], "domain": "wiki"} |
|
{"doc_id": "18842359", "revision_depth": 2, "before_revision": "The ocean (also the sea or the world ocean) is the body of salt water which covers approximately 71\\% of the surface of the Earth and contains 97\\% of Earth's water. Another definition is \"any of the large bodies of water into which the great ocean is divided\".\"Ocean.\" Merriam-Webster.com Dictionary, Merriam-Webster, URL Accessed March 14, 2021. The ocean is divided into five oceans : Pacific (the largest) , Atlantic, Indian, Southern (Antarctic), and Arctic (the smallest). Seawater covers approximately of the Earth. As the world's ocean is the principal component of Earth's hydrosphere, it is integral to life on Earth , forms part of the carbon cycle and water cycle, and - as a huge heat reservoir - influences climate and weather patterns. Oceanographers divide the ocean into different vertical and horizontal zones defined by physical and biological conditions. The pelagic zone consists of the water column of the open ocean, and can be divided into further regions categorized by light abundance and by depth. The photic zone includes the oceans from the surface to a depth of 200m; it is the region where photosynthesis can occur and is, therefore, the most biodiverse. Photosynthesis by plants and microscopic algae (free floating phytoplankton) allows them to URLanic matter from chemical precursors including water and carbon dioxide. It is this upper sunlit ocean that creates the food supply that ultimately sustains most of the ocean ecosystem. Light only penetrates to a few hundred meters depth, so the remaining ocean below this is cold and dark. The continental shelf around the edge of the oceans is shallower (a few hundred meters or less) and this region is most impacted by human activity.", "after_revision": "The ocean (also the sea or the world ocean) is the body of salt water which covers approximately 71\\% of the surface of the Earth and contains 97\\% of Earth's water. Another definition is \"any of the large bodies of water into which the great ocean is divided\".\"Ocean.\" Merriam-Webster.com Dictionary, Merriam-Webster, URL Accessed March 14, 2021. Separate names are used to identify five different areas of the ocean : Pacific (the largest) Atlantic, Indian, Southern (Antarctic), and Arctic (the smallest). Seawater covers approximately of the Earth. The ocean is the principal component of Earth's hydrosphere, and therefore integral to life on Earth . The ocean is a huge heat reservoir and influences climate and weather patterns. the carbon cycle and the water cycle. Oceanographers divide the ocean into different vertical and horizontal zones based on physical and biological conditions. The pelagic zone consists of the water column of the open ocean, and can be divided into further regions categorized by light abundance and by depth. The photic zone includes the oceans from the surface to a depth of 200m; it is the region where photosynthesis can occur and is, therefore, the most biodiverse. Photosynthesis by plants and microscopic algae (free floating phytoplankton) allows them to URLanic matter from chemical precursors including water and carbon dioxide. It is this upper sunlit ocean that creates the food supply that ultimately sustains most of the ocean ecosystem. Light only penetrates to a few hundred meters depth, so the remaining ocean below this is cold and dark. 
The continental shelf around the edge of the oceans is shallower (a few hundred meters or less) and this region is most impacted by human activity.", "edit_actions": [{"type": "R", "before": "The ocean is divided into five oceans", "after": "Separate names are used to identify five different areas of the ocean", "start_char_pos": 348, "end_char_pos": 385, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "D", "before": ",", "after": null, "start_char_pos": 410, "end_char_pos": 411, "major_intent": "fluency", "raw_intents": ["coherence", "fluency", "fluency"]}, {"type": "R", "before": "As the world's", "after": "The", "start_char_pos": 523, "end_char_pos": 537, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}, {"type": "R", "before": "it is", "after": "and therefore", "start_char_pos": 595, "end_char_pos": 600, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}, {"type": "R", "before": ", forms part of the carbon cycle and water cycle, and - as", "after": ". The ocean is", "start_char_pos": 627, "end_char_pos": 685, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "coherence"]}, {"type": "R", "before": "-", "after": "and", "start_char_pos": 708, "end_char_pos": 709, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "coherence"]}, {"type": "A", "before": null, "after": "the carbon cycle and the water cycle.", "start_char_pos": 751, "end_char_pos": 751, "major_intent": "meaning-changed", "raw_intents": ["coherence", "meaning-changed", "meaning-changed"]}, {"type": "R", "before": "defined by", "after": "based on", "start_char_pos": 829, "end_char_pos": 839, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}], "sents_char_pos": [0, 165, 261, 347, 478, 522, 750, 875, 1025, 1098, 1186, 1354, 1467, 1572], "domain": "wiki"} |
|
{"doc_id": "18842359", "revision_depth": 3, "before_revision": "Oceanographers divide the ocean into different vertical and horizontal zones based on physical and biological conditions. The pelagic zone consists of the water column of the open ocean , and can be divided into further regions categorized by light abundance and by depth. The photic zone includes the oceans from the surface to a depth of 200m ; it is the region where photosynthesis can occur and is, therefore, the most biodiverse. Photosynthesis by plants and microscopic algae (free floating phytoplankton) allows them to URLanic matter from chemical precursors including water and carbon dioxide. It is this upper sunlit ocean that creates the food supply that ultimately sustains most of the ocean ecosystem . Light only penetrates to a few hundred meters depth , so the remaining ocean below this is cold and dark. The continental shelf around the edge of the oceans is shallower (a few hundred meters or less) and this region is most impacted by human activity.", "after_revision": "Oceanographers divide the ocean into different vertical and horizontal zones based on physical and biological conditions. The pelagic zone consists of the water column from surface to ocean floor throughout the open ocean . The water column is further categorized in other zones depending on light abundance and depth. The photic zone includes the water column from the surface to a depth of 200m , where photosynthesis can occur . This makes the photic zone the most biodiverse. Photosynthesis by plants and microscopic algae (free floating phytoplankton) allows them to URLanic matter from chemical precursors including water and carbon dioxide. This upper sunlit zone of the ocean ultimately sustains most of the ocean ecosystem since that is the origin of the food supply necessary to all life forms . Light only penetrates to a depth of a few hundred meters , so the remaining ocean below is cold and dark. The continental shelf where the ocean touches land is more shallow (a few hundred meters or less) and experiences more impact from human activity.", "edit_actions": [{"type": "R", "before": "of", "after": "from surface to ocean floor throughout", "start_char_pos": 168, "end_char_pos": 170, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "R", "before": ", and can be divided into further regions categorized by", "after": ". The water column is further categorized in other zones depending on", "start_char_pos": 186, "end_char_pos": 242, "major_intent": "coherence", "raw_intents": ["clarity", "coherence", "coherence"]}, {"type": "D", "before": "by", "after": null, "start_char_pos": 263, "end_char_pos": 265, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "clarity"]}, {"type": "R", "before": "oceans", "after": "water column", "start_char_pos": 302, "end_char_pos": 308, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "R", "before": "; it is the region", "after": ",", "start_char_pos": 345, "end_char_pos": 363, "major_intent": "coherence", "raw_intents": ["coherence", "fluency", "coherence"]}, {"type": "R", "before": "and is, therefore, the", "after": ". 
This makes the photic zone the", "start_char_pos": 395, "end_char_pos": 417, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}, {"type": "R", "before": "It is this upper sunlit ocean that creates the food supply that", "after": "This upper sunlit zone of the ocean", "start_char_pos": 603, "end_char_pos": 666, "major_intent": "clarity", "raw_intents": ["clarity", "style", "clarity"]}, {"type": "A", "before": null, "after": "since that is the origin of the food supply necessary to all life forms", "start_char_pos": 715, "end_char_pos": 715, "major_intent": "meaning-changed", "raw_intents": ["meaning-changed", "meaning-changed", "clarity"]}, {"type": "A", "before": null, "after": "depth of a", "start_char_pos": 745, "end_char_pos": 745, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "depth", "after": null, "start_char_pos": 765, "end_char_pos": 770, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "this", "after": null, "start_char_pos": 802, "end_char_pos": 806, "major_intent": "clarity", "raw_intents": ["fluency", "clarity", "clarity"]}, {"type": "R", "before": "around the edge of the oceans is shallower", "after": "where the ocean touches land is more shallow", "start_char_pos": 847, "end_char_pos": 889, "major_intent": "clarity", "raw_intents": ["style", "clarity", "clarity"]}, {"type": "R", "before": "this region is most impacted by", "after": "experiences more impact from", "start_char_pos": 925, "end_char_pos": 956, "major_intent": "style", "raw_intents": ["clarity", "style", "style"]}], "sents_char_pos": [0, 121, 272, 346, 434, 602, 824], "domain": "wiki"} |
|
{"doc_id": "43193013", "revision_depth": 1, "before_revision": "During study of fluid mechanics, researchers attempted to give a correct vision of maritime currents. The origin of currents is owed to physical differences between various water masses, the main parameter being the difference of density that varies in function of the temperature and the concentration of salt . The study of these currents, combined with other factors such as tides (producing a change in sea level) and wind (originally swell) for understanding the marine hydrodynamics and different processes which are linked there as sediment movement and the climate balance.", "after_revision": "During study of fluid mechanics, researchers attempted to give a correct explanation of marine currents. Currents are caused by external driving forces such as wind, gravitational effects, coriolis forces and physical differences between various water masses, the main parameter being the difference of density that varies in function of the temperature and salinity . The study of currents, combined with other factors such as tides and waves is relevant for understanding marine hydrodynamics and linked processes such as sediment transport and climate balance.", "edit_actions": [{"type": "R", "before": "vision of maritime currents. The origin of currents is owed to", "after": "explanation of marine currents. Currents are caused by external driving forces such as wind, gravitational effects, coriolis forces and", "start_char_pos": 73, "end_char_pos": 135, "major_intent": "clarity", "raw_intents": ["clarity", "others", "clarity"]}, {"type": "R", "before": "the concentration of salt", "after": "salinity", "start_char_pos": 285, "end_char_pos": 310, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "D", "before": "these", "after": null, "start_char_pos": 326, "end_char_pos": 331, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "fluency"]}, {"type": "R", "before": "(producing a change in sea level) and wind (originally swell) for understanding the", "after": "and waves is relevant for understanding", "start_char_pos": 384, "end_char_pos": 467, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "style"]}, {"type": "R", "before": "different processes which are linked there as sediment movement and the", "after": "linked processes such as sediment transport and", "start_char_pos": 493, "end_char_pos": 564, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 101, 312], "domain": "wiki"} |
|
{"doc_id": "4914567", "revision_depth": 1, "before_revision": "A trainee is commonly known as an individual taking part in a trainee program or a graduate program within a company after having graduated from university or college . A trainee is an official employee of the firm that is being trained to the job they were originally hired for. Literally, a trainee is an employee in training. Trainee programs and graduate programs are arranged by private companies and public sector employers where the trainee is offered the possibility to take part 6 to 20 months training programs . During the duration of these programs, the trainee is expected to receive a salary as well as is expected to have full-time employment awaiting in the company when the program is over. Often used as an insurance measure by companies, firms typically will have a trainee period (23 months) where the person is still being evaluated after which an official decision to hire on a permanent basis is made.", "after_revision": "A trainee is commonly known as an individual taking part in a trainee program within URLanization after having graduated from higher and technical courses . A trainee is an official employee of the firm that is being trained to the job they were originally hired for. Literally, a trainee is an employee in training. Trainee programs are arranged by private companies and public sector employers where the trainee position has a varied duration depending on the company's program . During the duration of these programs, the trainee is expected to receive a salary as well as is expected to have full-time employment awaiting in the company when the program is over. Often used as an insurance measure by companies, firms typically will have a trainee period where the person is still being evaluated after which an official decision to hire on a permanent basis is made.", "edit_actions": [{"type": "R", "before": "or a graduate program within a company", "after": "within URLanization", "start_char_pos": 78, "end_char_pos": 116, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "R", "before": "university or college", "after": "higher and technical courses", "start_char_pos": 145, "end_char_pos": 166, "major_intent": "clarity", "raw_intents": ["clarity", "others", "clarity"]}, {"type": "D", "before": "and graduate programs", "after": null, "start_char_pos": 346, "end_char_pos": 367, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}, {"type": "R", "before": "is offered the possibility to take part 6 to 20 months training programs", "after": "position has a varied duration depending on the company's program", "start_char_pos": 448, "end_char_pos": 520, "major_intent": "clarity", "raw_intents": ["meaning-changed", "clarity", "clarity"]}, {"type": "D", "before": "(23 months)", "after": null, "start_char_pos": 800, "end_char_pos": 811, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "clarity"]}], "sents_char_pos": [0, 168, 279, 328, 707], "domain": "wiki"} |
|
{"doc_id": "51157723", "revision_depth": 1, "before_revision": "Thenumgal Poulose Jacob is an Indian surgeon specialized in vascular surgery, and the founder Head of the Department of Vascular Surgery at Madras Medical College. Born in Aluva, in the south Indian state of Kerala to Thenumgal Poulose and Mariam in a Malayali family, he did his under-graduate studies at UC College before graduating in medicine from Stanley Medical College, Chennai and secured his MS degree from the same institution. He started his practice under government service at Madras Medical College where he helped establish the department of vascular surgery in 1978 and served as its founder head till his superannuation in 1993. The Government of India awarded him the fourth highest civilian honour of the Padma Shri, in 2014, for his contributions to medical science. He is married to Esther and the couple has a daughter, Siji Jacob, an academic and a son, Hasum Jacob Thenumgal, an engineer at TCS. He is doing private practice at T. P. Jacob Clinic in Royapuram, Chennai and is also a consultant vascular surgeon at MV Hospitals for Diabetes, Royapuram.", "after_revision": "Thenumgal Poulose Jacob is an Indian surgeon specializing in vascular surgery, and the founder Head of the Department of Vascular Surgery at Madras Medical College. Born in Aluva, in the south Indian state of Kerala , to Thenumgal Poulose and Mariam in a Malayali family, he did his under-graduate studies at UC College before graduating in medicine from Stanley Medical College, Chennai , and secured his MS degree from the same institution. He started his practice under government service at Madras Medical College where he helped establish the department of vascular surgery in 1978 and served as its founder head until his superannuation in 1993. The Government of India awarded him the fourth highest civilian honour of the Padma Shri, in 2014, for his contributions to medical science. He is married to Esther and the couple has a daughter, Siji Jacob, an academic and a son, Hasum Jacob Thenumgal, an engineer at TCS. He is in private practice at T. P. Jacob Clinic in Royapuram, Chennai and is also a consultant vascular surgeon at MV Hospitals for Diabetes, Royapuram.", "edit_actions": [{"type": "R", "before": "specialized", "after": "specializing", "start_char_pos": 45, "end_char_pos": 56, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "style"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 215, "end_char_pos": 215, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "A", "before": null, "after": ",", "start_char_pos": 386, "end_char_pos": 386, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "till", "after": "until", "start_char_pos": 615, "end_char_pos": 619, "major_intent": "fluency", "raw_intents": ["clarity", "fluency", "fluency"]}, {"type": "R", "before": "doing", "after": "in", "start_char_pos": 928, "end_char_pos": 933, "major_intent": "fluency", "raw_intents": ["others", "fluency", "fluency"]}], "sents_char_pos": [0, 163, 439, 647, 788, 921], "domain": "wiki"} |
|
{"doc_id": "51355210", "revision_depth": 1, "before_revision": "Fhrancis Oliver Lopez (born February 3, 1989 in Las Pias, Philippines), or better known as Fhrancis Lopez, is a model and endorser in Philippines , He is known as one of the winners in Mister Philippines 2011, which is made to the audience of his appeal , he is look a like an actor in the Philippines, JM de URL Biography Lopez was born and raised in the southern tip of Metro Manila , before he joined the pageant of Mister Philippines, Fhrancis work as a model and he signing contracted in a Go Green Artists his agency and the Project Management , After he won and got the prize, Fhrancis held pageant of Patravadi Theatre, Lopez is one of the finalists Top 16 in Mister International 2011. Mister Philippines 2011 & Mister International 2011 Lopez declared as one of the winner of Mister Philippines 2011, After he won the competition on December 17, 2011, Lopez joined the compete in the Mister International 2011 pageant which is happening in Pattaya, Thailand and Bangkok.", "after_revision": "Fhrancis Oliver Lopez (born February 3, 1989 in Las Pias, Philippines), or better known as Fhrancis Lopez, is a model and endorser in Philippines . He is known for being one of the winners in Mister Philippines 2011, which is made to the audience of his appeal . Biography Lopez was born and raised in the southern tip of Metro Manila . Before he joined the pageant of Mister Philippines, Fhrancis worked as a model . Fhrancis and his agency signed a contract with the Go Green Artists and the Project Management . After he won and got the prize, Fhrancis held a pageant in Patravadi Theatre. Lopez is one of the Top 16 finalists in Mister International 2011. Mister Philippines 2011 & Mister International 2011 Lopez was declared as one of the winners of Mister Philippines 2011. After he won the competition on December 17, 2011, Lopez joined to compete in the Mister International 2011 pageant in Pattaya, Thailand and Bangkok.", "edit_actions": [{"type": "R", "before": ",", "after": ".", "start_char_pos": 146, "end_char_pos": 147, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "as", "after": "for being", "start_char_pos": 160, "end_char_pos": 162, "major_intent": "clarity", "raw_intents": ["clarity", "fluency", "clarity"]}, {"type": "R", "before": ", he is look a like an actor in the Philippines, JM de URL", "after": ".", "start_char_pos": 254, "end_char_pos": 312, "major_intent": "coherence", "raw_intents": ["coherence", "clarity", "coherence"]}, {"type": "R", "before": ", before", "after": ". Before", "start_char_pos": 385, "end_char_pos": 393, "major_intent": "fluency", "raw_intents": ["coherence", "fluency", "fluency"]}, {"type": "R", "before": "work", "after": "worked", "start_char_pos": 448, "end_char_pos": 452, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "style"]}, {"type": "R", "before": "and he signing contracted in a", "after": ". 
Fhrancis and his agency signed a contract with the", "start_char_pos": 464, "end_char_pos": 494, "major_intent": "coherence", "raw_intents": ["meaning-changed", "coherence", "coherence"]}, {"type": "D", "before": "his agency", "after": null, "start_char_pos": 512, "end_char_pos": 522, "major_intent": "clarity", "raw_intents": ["clarity", "clarity", "coherence"]}, {"type": "R", "before": ",", "after": ".", "start_char_pos": 550, "end_char_pos": 551, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "pageant of Patravadi Theatre,", "after": "a pageant in Patravadi Theatre.", "start_char_pos": 598, "end_char_pos": 627, "major_intent": "fluency", "raw_intents": ["fluency", "others", "clarity"]}, {"type": "D", "before": "finalists", "after": null, "start_char_pos": 648, "end_char_pos": 657, "major_intent": "coherence", "raw_intents": ["style", "coherence", "coherence"]}, {"type": "A", "before": null, "after": "finalists", "start_char_pos": 665, "end_char_pos": 665, "major_intent": "clarity", "raw_intents": ["clarity", "others", "coherence"]}, {"type": "A", "before": null, "after": "was", "start_char_pos": 754, "end_char_pos": 754, "major_intent": "fluency", "raw_intents": ["fluency", "meaning-changed", "fluency"]}, {"type": "R", "before": "winner", "after": "winners", "start_char_pos": 778, "end_char_pos": 784, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "2011,", "after": "2011.", "start_char_pos": 807, "end_char_pos": 812, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "fluency"]}, {"type": "R", "before": "the", "after": "to", "start_char_pos": 877, "end_char_pos": 880, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "coherence"]}, {"type": "D", "before": "which is happening", "after": null, "start_char_pos": 930, "end_char_pos": 948, "major_intent": "coherence", "raw_intents": ["coherence", "coherence", "clarity"]}], "sents_char_pos": [0, 39, 57, 71, 106, 147, 209, 695], "domain": "wiki"} |
|
{"doc_id": "65581384", "revision_depth": 1, "before_revision": "Tuanku Panglima Paderap, also called Panglima Deli, was the third ruler of the Deli Sultanate, now part of North Sumatra, Indonesia. He succeeded his father Tuanku Panglima Perunggit, who died around 1700. Paderap has four sons, namely Tuanku Jalaluddin (or Kejuruan Metar), Tuanku Panglima Pasutan (or Kejuruan Padang), Tuanku Tawar (or Kejuruan Santun), and Tuanku Umar (or Kejuruan Junjongan). A power struggle happened in the Deli after Paderap died in 1720. Jalaluddin, Paderap's first son, cannot replace him because of a physical disability. In the end, it was Pasutan who became the fourth ruler of Deli, while his younger brother Umar got a separate territory and became the first ruler of the Serdang Sultanate.", "after_revision": "Tuanku Panglima Paderap, also called Panglima Deli, was the third ruler of the Deli Sultanate, now part of North Sumatra, Indonesia. He succeeded his father Tuanku Panglima Perunggit, who died around 1700. Paderap had four sons, namely Tuanku Jalaluddin (or Kejuruan Metar), Tuanku Panglima Pasutan (or Kejuruan Padang), Tuanku Tawar (or Kejuruan Santun), and Tuanku Umar (or Kejuruan Junjongan). A power struggle happened in the Deli after Paderap died in 1720. Jalaluddin, Paderap's first son, could not replace him because of a physical disability. It was Pasutan who became the fourth ruler of Deli, while his younger brother Umar became the first ruler of the Serdang Sultanate.", "edit_actions": [{"type": "R", "before": "has", "after": "had", "start_char_pos": 214, "end_char_pos": 217, "major_intent": "fluency", "raw_intents": ["coherence", "fluency", "fluency"]}, {"type": "R", "before": "cannot", "after": "could not", "start_char_pos": 496, "end_char_pos": 502, "major_intent": "fluency", "raw_intents": ["fluency", "fluency", "style"]}, {"type": "R", "before": "In the end, it", "after": "It", "start_char_pos": 549, "end_char_pos": 563, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}, {"type": "D", "before": "got a separate territory and", "after": null, "start_char_pos": 644, "end_char_pos": 672, "major_intent": "clarity", "raw_intents": ["clarity", "coherence", "clarity"]}], "sents_char_pos": [0, 132, 205, 396, 462, 548], "domain": "wiki"} |
|
|