Pretrained model for evidence alignment on the cutietestrun28May2020 dataset. The task is binary prediction of whether a claim and a piece of evidence are relevant to each other. The model was built as a part of CEASystem.

Usage

import transformers
import torch

model = transformers.AutoModelForSequenceClassification.from_pretrained("yevhenkost/cutiesRun28-05-2020-roberta-base-evidenceAlignment")
tokenizer = transformers.AutoTokenizer.from_pretrained("yevhenkost/cutiesRun28-05-2020-roberta-base-evidenceAlignment")

# Each pair is [claim, evidence]
claim_evidence_pairs = [
    ["The water is wet", "The sky is blue"],
    ["The car crashed", "Driver could not see the road"]
]

# Tokenize the claim-evidence pairs as a single batch
tokenized_inputs = tokenizer.batch_encode_plus(
    claim_evidence_pairs,
    return_tensors="pt",
    padding=True,
    truncation=True
)

with torch.no_grad():
    preds = model(**tokenized_inputs)

# logits: preds.logits
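
To turn the raw logits into class predictions, apply a softmax over the two classes and take the argmax. This is a minimal sketch; the mapping of class indices to labels (e.g., index 1 meaning "relevant") is an assumption here and should be verified against model.config.id2label.

probs = torch.softmax(preds.logits, dim=-1)        # shape: (batch_size, 2)
predicted_labels = probs.argmax(dim=-1).tolist()   # assumed mapping: 1 = relevant, 0 = not relevant

for (claim, evidence), label, p in zip(claim_evidence_pairs, predicted_labels, probs.tolist()):
    print(f"claim={claim!r} evidence={evidence!r} label={label} probs={p}")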