How to run this?
I do not understand why there is not a single example of how to run this.
If this is an OpenAI detector, people obviously want to be able to use it for detection, yet there is not a single code example here showing them how to do so.
Hi @zokica! The easiest way to run the model is to use the Transformers pipeline method, like the following (once you pip install transformers and have PyTorch installed):
from transformers import pipeline
pipe = pipeline("text-classification", model="roberta-base-openai-detector")
print(pipe("Hello world! Is this content AI-generated?")) # [{'label': 'Real', 'score': 0.8036582469940186}]
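If you want the scores for both labels rather than only the top one, I believe you can pass top_k=None to the pipeline call (this is my understanding of the current text-classification pipeline API, so treat it as a sketch; the second label should be "Fake", mirroring the "Real" label above):

# Sketch: top_k=None should return a score for every label instead of just the best one
all_scores = pipe("Hello world! Is this content AI-generated?", top_k=None)
print(all_scores)  # e.g. [{'label': 'Real', 'score': ...}, {'label': 'Fake', 'score': ...}]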
I'll add this to the model card!
Also, since it's a RoBERTa model, I think you should be able to run it with a code snippet like the one on this page: https://huggingface.co/docs/transformers/main/en/model_doc/roberta#transformers.RobertaForSequenceClassification.forward.example
You'll just have to swap out the model ID, so it should look like:
import torch
from transformers import AutoTokenizer, RobertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base-openai-detector")
model = RobertaForSequenceClassification.from_pretrained("roberta-base-openai-detector")

# Tokenize the input and run a forward pass without tracking gradients
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The predicted class is the index with the highest logit
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]

# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RobertaForSequenceClassification.from_pretrained("roberta-base-openai-detector", num_labels=num_labels)

# Passing `labels` makes the model also return a classification loss
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
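And if you'd rather see probabilities than just the argmax label, a minimal follow-up (reusing the logits and model variables from the snippet above) is to apply a softmax over the classes:

# Turn the raw logits into probabilities and print one per label
probs = torch.softmax(logits, dim=-1)[0]
for class_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[class_id], round(p, 4))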
Awesome, thanks.