
# tFINE-900m-e16-d32-instruct

## Model description

This model is a fine-tuned version of BEE-spoke-data/tFINE-900m-e16-d32-flan on the pszemraj/infinity-instruct-7m-T2T_en dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3588
  • Num input tokens seen: 810,173,896

## Usage example

You can also run inference with turboT5 on Ampere+ GPUs for better performance; see the example on Colab.

```python
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="BEE-spoke-data/tFINE-900m-e16-d32-instruct",
    # device_map="auto",  # uncomment if you have a GPU and accelerate installed
)
prompt = "Write me a python script that demonstrates an advanced sorting algorithm"
res = pipe(
    prompt,
    max_new_tokens=384,
    num_beams=4,
    early_stopping=True,
    no_repeat_ngram_size=6,
)
print(res[0]["generated_text"])
```
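If you prefer to call the model directly rather than through `pipeline`, a minimal sketch using the standard `transformers` seq2seq API with the same generation settings as above (the tokenizer/model classes are the generic `Auto*` loaders, not anything specific to this checkpoint):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "BEE-spoke-data/tFINE-900m-e16-d32-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "Write me a python script that demonstrates an advanced sorting algorithm"
inputs = tokenizer(prompt, return_tensors="pt")

# beam search with the same settings as the pipeline example above
output_ids = model.generate(
    **inputs,
    max_new_tokens=384,
    num_beams=4,
    early_stopping=True,
    no_repeat_ngram_size=6,
)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

This is equivalent to the pipeline call but gives you direct access to the token ids, which is useful if you want to batch prompts or tweak decoding per request.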

## Evals

### Open LLM Leaderboard 2

| Model | Average ⬆️ | IFEval | BBH | MATH Lvl 5 | GPQA | MUSR | MMLU-PRO |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BEE-spoke-data/tFINE-900m-e16-d32-instruct | 5.82 | 13.21 | 4.74 | 0 | 0.56 | 13.81 | 2.63 |
| BEE-spoke-data/tFINE-900m-e16-d32-flan | 4.43 | 15.06 | 4.41 | 0 | 0 | 3.72 | 3.41 |
  • Model size: 887M params (safetensors)
  • Tensor type: F32
