---
language: en
license: mit
datasets:
  - wall-street-journal
tags:
  - coherence
  - conversational
  - text-generation
inference: false
model-index:
  - name: CoherenceMomentum
    results:
      - task:
          type: text-generation
          name: Coherence-Momentum
        dataset:
          name: permuted WSJ dataset
          type: Permuted dataset
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.988
      - task:
          type: text-generation
          name: Coherence-Momentum
        dataset:
          name: data reported by authors on permuted WSJ dataset
          type: Permuted dataset
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.986
---

# Coherence Modelling

You can try out the model in the coherence modelling demo.
If you want to find out more, please contact us at [email protected].

## Table of Contents

- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Model Parameters](#model-parameters)
- [Other Information](#other-information)

## Model Details

- Model Name: Coherence-Momentum
- Description: This is a neural network model that uses a momentum encoder and hard negative mining during training. The model takes a piece of text and outputs a coherence score. The score is only meaningful for comparison: given two texts, the one with the higher coherence score is deemed more coherent by the model. (A minimal sketch of the momentum update follows this list.)
- Paper: Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022 (pp. 6044-6059).
- Author(s): Jwalapuram, P., Joty, S., & Lin, X. (2022).
- URL: https://aclanthology.org/2022.acl-long.418/
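
The momentum encoder mentioned above maintains a slowly updated copy of the main encoder. Below is a minimal sketch of such an update in the style of MoCo; the function name, parameter names, and the 0.999 coefficient are illustrative assumptions, not the sgnlp implementation.

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder: torch.nn.Module,
                    key_encoder: torch.nn.Module,
                    m: float = 0.999) -> None:
    # Move each momentum (key) parameter a small step toward the
    # corresponding query parameter: k = m * k + (1 - m) * q.
    # The coefficient m is an assumed value, not taken from the paper.
    for q_param, k_param in zip(query_encoder.parameters(),
                                key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)
```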

## How to Get Started With the Model

### Install Python package

SGnlp is an initiative by AI Singapore's NLP Hub. It aims to bridge the gap between research and industry, promote translational research, and encourage adoption of NLP techniques in the industry.

Various NLP models, in addition to coherence modelling, are available in the Python package. You can try them out at SGNLP-Demo | SGNLP-Github.

```
pip install sgnlp
```

### Examples

For the full code example (such as Coherence-Momentum), please refer to this GitHub repository.
Alternatively, you can also try out the demo for Coherence-Momentum.

Example of Coherence Momentum modelling:

```python
from sgnlp.models.coherence_momentum import CoherenceMomentumModel, CoherenceMomentumConfig, \
    CoherenceMomentumPreprocessor

# Load Model
config = CoherenceMomentumConfig.from_pretrained(
    "https://storage.googleapis.com/sgnlp-models/models/coherence_momentum/config.json"
)
model = CoherenceMomentumModel.from_pretrained(
    "https://storage.googleapis.com/sgnlp-models/models/coherence_momentum/pytorch_model.bin",
    config=config
)

preprocessor = CoherenceMomentumPreprocessor(config.model_size, config.max_len)

# Example text inputs
text1 = "Companies listed below reported quarterly profit substantially different from the average of analysts ' " \
        "estimates . The companies are followed by at least three analysts , and had a minimum five-cent change in " \
        "actual earnings per share . Estimated and actual results involving losses are omitted . The percent " \
        "difference compares actual profit with the 30-day estimate where at least three analysts have issues " \
        "forecasts in the past 30 days . Otherwise , actual profit is compared with the 300-day estimate . " \
        "Source : Zacks Investment Research"
text2 = "The companies are followed by at least three analysts , and had a minimum five-cent change in actual " \
        "earnings per share . The percent difference compares actual profit with the 30-day estimate where at least " \
        "three analysts have issues forecasts in the past 30 days . Otherwise , actual profit is compared with the " \
        "300-day estimate . Source : Zacks Investment Research. Companies listed below reported quarterly profit " \
        "substantially different from the average of analysts ' estimates . Estimated and actual results involving " \
        "losses are omitted ."

text1_tensor = preprocessor([text1])
text2_tensor = preprocessor([text2])

text1_score = model.get_main_score(text1_tensor["tokenized_texts"]).item()
text2_score = model.get_main_score(text2_tensor["tokenized_texts"]).item()

print(text1_score, text2_score)
```
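
Since text2 is a sentence-permuted version of text1, the model should assign text1 the higher coherence score.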

## Training

The training data is a permuted dataset derived from the Linguistic Data Consortium's (LDC) Wall Street Journal (WSJ) corpus. Please contact the authors for the dataset if you have a valid LDC license. A sketch of how such permutations can be constructed follows.
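
Because the WSJ text itself is licensed, no data can be shown here, but the idea behind the permuted dataset is simple: each coherent article is paired with copies whose sentence order has been shuffled. The helper below is a hypothetical sketch; the function name and the " . " delimiter convention are assumptions based on the tokenized example texts above, not the authors' preprocessing.

```python
import random

def make_permuted_negative(document: str, seed: int = 0) -> str:
    # Split on the tokenized sentence delimiter used in the example
    # texts above; real preprocessing may differ.
    sentences = document.split(" . ")
    permuted = sentences[:]
    rng = random.Random(seed)
    # Re-shuffle until the order actually changes.
    while permuted == sentences and len(sentences) > 1:
        rng.shuffle(permuted)
    return " . ".join(permuted)
```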

### Training Results

- Training Time: ~24 hours for ~46,000 steps (batch size of 1) on a single A100 GPU
- Datasets: Permuted dataset derived from the Linguistic Data Consortium's (LDC) Wall Street Journal (WSJ) corpus.
- Training Config: link

## Model Parameters

- Model Weights: link
- Model Inputs: A paragraph of text. During training, each positive example can be paired with one or more negative examples (see the loss sketch after this list).
- Model Outputs: Coherence score for the input text.
- Model Size: ~930MB
- Model Inference Info: Not available.
- Usage Scenarios: Essay scoring, summarization, language generation.
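
How positives and negatives interact during training is not spelled out in this card. A common way to realise such pairing is a margin ranking loss, where the coherent text must outscore each permuted negative by a margin; the sketch below is illustrative only, and the margin value and score tensors are placeholders rather than the authors' exact objective (the paper uses a contrastive objective with a momentum encoder).

```python
import torch

# Illustrative pairing objective: coherent texts should outscore their
# permuted negatives by at least the margin. Values are placeholders.
margin_loss = torch.nn.MarginRankingLoss(margin=0.1)

pos_scores = torch.tensor([0.8, 0.7])   # scores for coherent texts
neg_scores = torch.tensor([0.2, 0.5])   # scores for permuted negatives
target = torch.ones_like(pos_scores)    # +1: first input should rank higher

loss = margin_loss(pos_scores, neg_scores, target)
print(loss.item())
```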

## Other Information

- Original Code: link