Text Generation
Transformers
PyTorch
English
gptj
Inference Endpoints

A question-answering model finetuned from GPT4All-J v1.3 with Direct Preference Optimization (DPO).
Dataset: Dahoas/instruct-synthetic-prompt-responses.

The model was finetuned with the following prompt:

`"Answer the following question in context:\n\nQuestion: " + samples["prompt"] + " Answer: "`

Using the same or a similar prompt for inference should be beneficial.
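The template above can be reproduced at inference time so the model sees the same format it was trained on. A minimal sketch, assuming the model id `Z3R6X/gpt4all_dpo_instruct` from this card; the example question and the commented-out `pipeline` call are illustrative, not part of the card:

```python
# Minimal sketch: reproduce the finetuning prompt template for inference.
# The model id below is taken from this card; loading it requires the
# transformers library and a model download, so that part is only sketched.

def format_prompt(question: str) -> str:
    """Wrap a question in the same template used during finetuning."""
    return (
        "Answer the following question in context:\n\nQuestion: "
        + question
        + " Answer: "
    )

prompt = format_prompt("What is the capital of France?")  # hypothetical question

# To actually generate (requires `pip install transformers` and a download):
# from transformers import pipeline
# generator = pipeline("text-generation", model="Z3R6X/gpt4all_dpo_instruct")
# print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```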

An increase in performance compared to GPT4All-J v1.3 was observed when using two-shot Chain-of-Thought prompting.
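A two-shot Chain-of-Thought prompt can be built by prepending two solved examples, each containing explicit reasoning, before the real question. The sketch below is a hedged illustration of that setup; the two worked examples are hypothetical placeholders, not the ones used in the evaluation:

```python
# Sketch of two-shot Chain-of-Thought prompting using this card's template.
# The example Q/A pairs below are hypothetical placeholders; substitute
# domain-relevant examples with step-by-step reasoning in the answers.

TEMPLATE = "Answer the following question in context:\n\nQuestion: {q} Answer: {a}"

def two_shot_cot_prompt(shots, question):
    """Prepend two solved examples (with reasoning) before the real question."""
    parts = [TEMPLATE.format(q=q, a=a) for q, a in shots]
    parts.append(TEMPLATE.format(q=question, a=""))
    return "\n\n".join(parts)

shots = [
    ("If a train travels 60 km in 1 hour, how far does it go in 3 hours?",
     "The train covers 60 km per hour, so in 3 hours it covers 60 * 3 = 180 km."),
    ("Anna has 5 apples and gives away 2. How many remain?",
     "She starts with 5 and removes 2, leaving 5 - 2 = 3 apples."),
]
prompt = two_shot_cot_prompt(shots, "A book costs $12 and a pen $3. What is the total?")
```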

| HellaSwag | WinoGrande | BoolQ | ARC-c  |
|-----------|------------|-------|--------|
| 62.37%    | 63.3%      | 65.2% | 32.76% |
