
Model Card

Model Description

This is a Large Language Model (LLM) fine-tuned from Llama 3.2 1B on a subset of the "mlabonne/orpo-dpo-mix-40k" dataset.

Evaluation Results

HellaSwag

| Metric   | Value  |
|----------|--------|
| Accuracy | 0.4517 |

How to Use

To use this model, download the checkpoint from the Hugging Face Hub and load it with your preferred deep learning framework, such as Hugging Face Transformers.
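A minimal sketch of loading the model with Transformers, assuming the checkpoint is hosted on the Hub under the repo id `d4niel92/llama-3.2-1B-orpo` and follows the standard causal-LM API; the prompt is only an illustration:

```python
# Load the fine-tuned checkpoint from the Hugging Face Hub.
# Assumes a standard causal-LM layout compatible with AutoModelForCausalLM.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "d4niel92/llama-3.2-1B-orpo"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short completion for an example prompt.
inputs = tokenizer("Explain ORPO in one sentence:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For GPU inference, pass `device_map="auto"` to `from_pretrained` and move the input tensors to the same device before calling `generate`.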

Model size: 1.24B params
Tensor type: F32 (Safetensors)

Model tree for d4niel92/llama-3.2-1B-orpo


Dataset used to train d4niel92/llama-3.2-1B-orpo: mlabonne/orpo-dpo-mix-40k