---
language: en
license: mit
datasets:
- wall-street-journal
tags:
- coherence
- feature-extraction
inference: false
model-index:
- name: CoherenceMomentum
results:
- task:
type: feature-extraction
name: Coherence-Momentum
dataset:
name: permuted WSJ dataset
type: Permuted dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.988
- task:
type: feature-extraction
name: Coherence-Momentum
dataset:
name: data reported by authors on permuted WSJ dataset
type: Permuted dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.986
---
# Coherence Modelling
You can **test the model** at [coherence modelling](https://huggingface.co/spaces/aisingapore/coherence-modelling).<br />
If you want to find out more information, please contact us at [email protected].
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Model Parameters](#model-parameters)
- [Other Information](#other-information)
## Model Details
**Model Name:** Coherence-Momentum
- **Description:** This is a neural network model that uses a momentum encoder and hard negative mining during training. Given a piece of text, it outputs a coherence score. The score is relative rather than absolute: it is only meaningful when comparing two texts, and the text with the higher score is judged more coherent by the model (a minimal sketch of this kind of ranking objective follows these details).
- **Paper:** Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022 (pp. 6044-6059).
- **Author(s):** Jwalapuram, P., Joty, S., & Lin, X. (2022).
- **URL:** https://aclanthology.org/2022.acl-long.418/
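During training, the model is pushed to score a coherent document higher than permuted, incoherent versions of it, with hard negatives supplied via the momentum encoder. As a minimal, hedged sketch only (the actual loss, momentum-queue mechanics, and negative-mining procedure are described in the paper and the original repository), a margin-based ranking objective of this kind looks roughly like this; the scores and margin below are hypothetical values:
```python
import torch

# Hypothetical scores for illustration: the model should assign a higher score
# to the original (coherent) document than to sentence-permuted (incoherent) ones.
pos_score = torch.tensor([2.1])          # score of the original document
neg_scores = torch.tensor([0.4, -0.3])   # scores of hard negative permutations
margin = 0.1                             # hypothetical margin value

# Margin ranking loss: penalise any negative whose score comes within `margin`
# of the positive score.
loss = torch.clamp(margin - (pos_score - neg_scores), min=0).mean()
print(loss.item())
```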
# How to Get Started With the Model
## Install Python package
SGnlp is an initiative by AI Singapore's NLP Hub. It aims to bridge the gap between research and industry, promote translational research, and encourage adoption of NLP techniques in industry. <br><br> Various NLP models, other than coherence modelling, are available in the Python package. You can try them out at [SGNLP-Demo](https://sgnlp.aisingapore.net/) | [SGNLP-Github](https://github.com/aisingapore/sgnlp).
```bash
pip install sgnlp
```
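As a quick, optional sanity check (this assumes the install succeeded and does not download any model weights), you can confirm that the classes used in the example below import cleanly:
```python
# Verify that the sgnlp package and the coherence_momentum module are importable
from sgnlp.models.coherence_momentum import CoherenceMomentumModel, CoherenceMomentumConfig

print(CoherenceMomentumModel.__name__, CoherenceMomentumConfig.__name__)
```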
## Examples
For the full code (including Coherence-Momentum), please refer to the [GitHub repository](https://github.com/aisingapore/sgnlp). <br> Alternatively, you can try out the [demo](https://huggingface.co/spaces/aisingapore/coherence-modelling) for Coherence-Momentum.
Example usage of the Coherence-Momentum model:
```python
from sgnlp.models.coherence_momentum import (
    CoherenceMomentumModel,
    CoherenceMomentumConfig,
    CoherenceMomentumPreprocessor,
)
# Load Model
config = CoherenceMomentumConfig.from_pretrained(
"https://storage.googleapis.com/sgnlp/models/coherence_momentum/config.json"
)
model = CoherenceMomentumModel.from_pretrained(
"https://storage.googleapis.com/sgnlp/models/coherence_momentum/pytorch_model.bin",
config=config
)
preprocessor = CoherenceMomentumPreprocessor(config.model_size, config.max_len)
# Example text inputs: text1 is an original WSJ paragraph; text2 contains the same sentences in shuffled order
text1 = "Companies listed below reported quarterly profit substantially different from the average of analysts ' " \
"estimates . The companies are followed by at least three analysts , and had a minimum five-cent change in " \
"actual earnings per share . Estimated and actual results involving losses are omitted . The percent " \
"difference compares actual profit with the 30-day estimate where at least three analysts have issues " \
"forecasts in the past 30 days . Otherwise , actual profit is compared with the 300-day estimate . " \
"Source : Zacks Investment Research"
text2 = "The companies are followed by at least three analysts , and had a minimum five-cent change in actual " \
"earnings per share . The percent difference compares actual profit with the 30-day estimate where at least " \
"three analysts have issues forecasts in the past 30 days . Otherwise , actual profit is compared with the " \
"300-day estimate . Source : Zacks Investment Research. Companies listed below reported quarterly profit " \
"substantially different from the average of analysts ' estimates . Estimated and actual results involving " \
"losses are omitted ."
text1_tensor = preprocessor([text1])
text2_tensor = preprocessor([text2])
# Higher score means the model judges the text to be more coherent
text1_score = model.get_main_score(text1_tensor["tokenized_texts"]).item()
text2_score = model.get_main_score(text2_tensor["tokenized_texts"]).item()
print(text1_score, text2_score)  # text1 (original order) is expected to score higher than text2 (shuffled)
```
# Training
The model was trained on a permuted dataset derived from the Linguistic Data Consortium's (LDC) Wall Street Journal (WSJ) dataset.
The dataset is not distributed with this model; if you hold a valid LDC license, please contact the authors to obtain it.
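The permuted examples are incoherent versions of WSJ articles produced by shuffling sentence order. The authors' actual generation script and settings live in the original repository; the sketch below is only an illustration, and the simple `' . '` split mirrors the tokenized example texts shown above rather than the authors' preprocessing:
```python
import random

def make_permuted_negatives(document: str, num_negatives: int = 3, seed: int = 0):
    """Illustrative sketch: build incoherent negatives by shuffling sentence order."""
    rng = random.Random(seed)
    # The example texts in this card are pre-tokenized with ' . ' between sentences,
    # so a naive split is used here purely for illustration.
    sentences = [s.strip() for s in document.split(" . ") if s.strip()]
    negatives, attempts = [], 0
    while len(negatives) < num_negatives and attempts < 100 * num_negatives:
        attempts += 1
        permuted = sentences[:]
        rng.shuffle(permuted)
        if permuted != sentences:  # keep only genuinely reordered versions
            negatives.append(" . ".join(permuted))
    return negatives

original = "Sentence one . Sentence two . Sentence three . Sentence four"
for negative in make_permuted_negatives(original):
    print(negative)
```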
## Training Results
- **Training Time:** ~24 hours for ~46000 steps (batch size of 1) on a single A100 GPU
- **Datasets:** Permuted dataset derived from Linguistic Data Consortium's (LDC) Wall Street Journal (WSJ) dataset.
# Model Parameters
- **Model Weights:** [link](https://storage.googleapis.com/sgnlp/models/coherence_momentum/pytorch_model.bin)
- **Model Config:** [link](https://storage.googleapis.com/sgnlp/models/coherence_momentum/config.json)
- **Model Inputs:** A paragraph of text. During training, each positive example can be paired with one or more negative examples.
- **Model Outputs:** Coherence score for the input text.
- **Model Size:** ~930MB
- **Model Inference Info:** Not available.
- **Usage Scenarios:** Essay scoring, summarization, language generation.
# Other Information
- **Original Code:** [link](https://github.com/ntunlp/coherence-paradigm)