---
license: mit
pipeline_tag: text-classification
---

# RoBERTa Justification Analyst

This model is a fine-tuned version of the RoBERTa architecture, trained for sequence classification of claim-evidence pairs. Fine-tuning was done in PyTorch with the Adagrad optimizer, using a learning rate of 2e-4 and an epsilon of 1e-8.
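As a rough illustration, the optimizer setup described above might look like the following in PyTorch. This is a minimal sketch, not the original training script; the `roberta-base` starting checkpoint and `num_labels=3` are assumptions based on the label set described below:

```python
import torch
from transformers import RobertaForSequenceClassification

# Assumed starting point: a base RoBERTa checkpoint with a 3-way classification head
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

# Adagrad with the hyperparameters quoted above: learning rate 2e-4, epsilon 1e-8
optimizer = torch.optim.Adagrad(model.parameters(), lr=2e-4, eps=1e-8)
```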


## Example Usage

To use the model, first load it in PyTorch:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

# Load the fine-tuned model
model = RobertaForSequenceClassification.from_pretrained('Dzeniks/justification-analyst')

# Load the tokenizer
tokenizer = RobertaTokenizer.from_pretrained('Dzeniks/justification-analyst')

# Tokenize the claim-evidence pair as a single input
claim = "This is a sample claim"
evidence = "This is a sample piece of evidence"
inputs = tokenizer.encode_plus(claim, evidence, return_tensors="pt")

# Use the model to make a prediction
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
prediction = torch.argmax(outputs.logits, dim=1).item()
```
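Alternatively, the model can be served through the `transformers` pipeline API, which handles tokenization in one call. A minimal sketch, assuming a reasonably recent `transformers` version that accepts the `text`/`text_pair` dict form for pair inputs:

```python
from transformers import pipeline

# Build a text-classification pipeline around the fine-tuned checkpoint
classifier = pipeline("text-classification", model="Dzeniks/justification-analyst")

# Claim-evidence pairs are passed as a dict with "text" and "text_pair" keys
result = classifier({"text": "This is a sample claim",
                     "text_pair": "This is a sample piece of evidence"})
print(result)  # e.g. {'label': 'LABEL_0', 'score': ...}
```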

## Classification Labels

The model was trained on a dataset of claim-evidence pairs, where the goal was to classify whether the evidence supports the claim, refutes it, or does not provide enough information to decide. The labels used for this task are as follows (a small mapping sketch follows the list):

- Label 0: Supports
- Label 1: Refutes
- Label 2: Not enough information
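To turn a raw prediction into one of these classes, a small lookup is enough. This continues from the usage example above; the dict below is illustrative and not read from the model config:

```python
# Map label IDs to the human-readable classes listed above
label_names = {0: "Supports", 1: "Refutes", 2: "Not enough information"}
print(label_names[prediction])
```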