---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
language:
- en
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- eo
- es
- et
- eu
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gn
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lg
- li
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- ns
- om
- or
- pa
- pl
- ps
- pt
- qu
- rm
- ro
- ru
- sa
- si
- sc
- sd
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- te
- th
- tl
- tn
- tr
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zu
datasets:
- bigcode/programming-languages-keywords
- bigcode/the-stack-smol-xs
- nampdn-ai/tiny-textbooks
- xu-song/cc100-samples
- m-a-p/CodeFeedback-Filtered-Instruction
- nampdn-ai/tiny-codes
- ajibawa-2023/Maths-College
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee-ai/agent-data
- cognitivecomputations/SystemChat-2.0
- badrex/llm-emoji-dataset
tags:
- litgpt
- litdata
---

# tangled-llama-108m-32k-base-v0.1
A pretrained language model based on the Llama architecture with about 108M parameters. This model has been trained on 9.7B (9,782,206,713) tokens from more than 5.2M (5,285,575) dataset rows.
This model isn't designed for immediate use but rather for Continued Pretraining and Finetuning on a downstream task. While it can handle a context length of up to 32K (32,768) tokens, it was pretrained with sequences of 2K (2,048) tokens.
The objective is to streamline the model's cognitive and reasoning core while eliminating redundant knowledge from the model.
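As a minimal sketch, the checkpoint can be loaded like any Llama-style causal LM through the 🤗 Transformers API. The repo id below is a placeholder inferred from the model name; substitute the actual Hub id or a local path to the converted checkpoint.

```python
# Sketch only: load the checkpoint as a standard causal LM for quick inspection
# or as a starting point for finetuning. The repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tangled-llama-108m-32k-base-v0.1"  # placeholder; use the full Hub id or a local path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Simple generation check; the base model is not instruction-tuned.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```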
## lm-evaluation-harness
```sh
litgpt evaluate --tasks 'leaderboard' --out_dir 'evaluate-0/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
---|---|---|---|---|---|---|---|---|
leaderboard | N/A | |||||||
- leaderboard_bbh | N/A | |||||||
- leaderboard_bbh_boolean_expressions | 1 | none | 3 | acc_norm | ↑ | 0.5040 | ± | 0.0317 |
- leaderboard_bbh_causal_judgement | 1 | none | 3 | acc_norm | ↑ | 0.5134 | ± | 0.0366 |
- leaderboard_bbh_date_understanding | 1 | none | 3 | acc_norm | ↑ | 0.1880 | ± | 0.0248 |
- leaderboard_bbh_disambiguation_qa | 1 | none | 3 | acc_norm | ↑ | 0.3000 | ± | 0.0290 |
- leaderboard_bbh_formal_fallacies | 1 | none | 3 | acc_norm | ↑ | 0.4680 | ± | 0.0316 |
- leaderboard_bbh_geometric_shapes | 1 | none | 3 | acc_norm | ↑ | 0.0560 | ± | 0.0146 |
- leaderboard_bbh_hyperbaton | 1 | none | 3 | acc_norm | ↑ | 0.4960 | ± | 0.0317 |
- leaderboard_bbh_logical_deduction_five_objects | 1 | none | 3 | acc_norm | ↑ | 0.1680 | ± | 0.0237 |
- leaderboard_bbh_logical_deduction_seven_objects | 1 | none | 3 | acc_norm | ↑ | 0.1400 | ± | 0.0220 |
- leaderboard_bbh_logical_deduction_three_objects | 1 | none | 3 | acc_norm | ↑ | 0.3360 | ± | 0.0299 |
- leaderboard_bbh_movie_recommendation | 1 | none | 3 | acc_norm | ↑ | 0.2280 | ± | 0.0266 |
- leaderboard_bbh_navigate | 1 | none | 3 | acc_norm | ↑ | 0.4720 | ± | 0.0316 |
- leaderboard_bbh_object_counting | 1 | none | 3 | acc_norm | ↑ | 0.1000 | ± | 0.0190 |
- leaderboard_bbh_penguins_in_a_table | 1 | none | 3 | acc_norm | ↑ | 0.1918 | ± | 0.0327 |
- leaderboard_bbh_reasoning_about_colored_objects | 1 | none | 3 | acc_norm | ↑ | 0.1480 | ± | 0.0225 |
- leaderboard_bbh_ruin_names | 1 | none | 3 | acc_norm | ↑ | 0.2920 | ± | 0.0288 |
- leaderboard_bbh_salient_translation_error_detection | 1 | none | 3 | acc_norm | ↑ | 0.1520 | ± | 0.0228 |
- leaderboard_bbh_snarks | 1 | none | 3 | acc_norm | ↑ | 0.4494 | ± | 0.0374 |
- leaderboard_bbh_sports_understanding | 1 | none | 3 | acc_norm | ↑ | 0.4600 | ± | 0.0316 |
- leaderboard_bbh_temporal_sequences | 1 | none | 3 | acc_norm | ↑ | 0.2480 | ± | 0.0274 |
- leaderboard_bbh_tracking_shuffled_objects_five_objects | 1 | none | 3 | acc_norm | ↑ | 0.1920 | ± | 0.0250 |
- leaderboard_bbh_tracking_shuffled_objects_seven_objects | 1 | none | 3 | acc_norm | ↑ | 0.1560 | ± | 0.0230 |
- leaderboard_bbh_tracking_shuffled_objects_three_objects | 1 | none | 3 | acc_norm | ↑ | 0.3000 | ± | 0.0290 |
- leaderboard_bbh_web_of_lies | 1 | none | 3 | acc_norm | ↑ | 0.5040 | ± | 0.0317 |
- leaderboard_gpqa | N/A | |||||||
- leaderboard_gpqa_diamond | 1 | none | 0 | acc_norm | ↑ | 0.2222 | ± | 0.0296 |
- leaderboard_gpqa_extended | 1 | none | 0 | acc_norm | ↑ | 0.2711 | ± | 0.0190 |
- leaderboard_gpqa_main | 1 | none | 0 | acc_norm | ↑ | 0.2589 | ± | 0.0207 |
- leaderboard_ifeval | 3 | none | 0 | inst_level_loose_acc | ↑ | 0.2050 | ± | N/A |
| | | none | 0 | inst_level_strict_acc | ↑ | 0.1966 | ± | N/A |
| | | none | 0 | prompt_level_loose_acc | ↑ | 0.1072 | ± | 0.0133 |
| | | none | 0 | prompt_level_strict_acc | ↑ | 0.1035 | ± | 0.0131 |
- leaderboard_math_hard | N/A | |||||||
- leaderboard_math_algebra_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
- leaderboard_math_counting_and_prob_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
- leaderboard_math_geometry_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
- leaderboard_math_intermediate_algebra_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
- leaderboard_math_num_theory_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
- leaderboard_math_prealgebra_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
- leaderboard_math_precalculus_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
- leaderboard_mmlu_pro | 0.1 | none | 5 | acc | ↑ | 0.1151 | ± | 0.0029 |
- leaderboard_musr | N/A | |||||||
- leaderboard_musr_murder_mysteries | 1 | none | 0 | acc_norm | ↑ | 0.4840 | ± | 0.0317 |
- leaderboard_musr_object_placements | 1 | none | 0 | acc_norm | ↑ | 0.3125 | ± | 0.0290 |
- leaderboard_musr_team_allocation | 1 | none | 0 | acc_norm | ↑ | 0.3840 | ± | 0.0308 |
```sh
litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --out_dir 'evaluate-1/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
---|---|---|---|---|---|---|---|---|
arc_challenge | 1 | none | 0 | acc | ↑ | 0.1911 | ± | 0.0115 |
| | | none | 0 | acc_norm | ↑ | 0.2355 | ± | 0.0124 |
gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.0152 | ± | 0.0034 |
| | | strict-match | 5 | exact_match | ↑ | 0.0000 | ± | 0.0000 |
hellaswag | 1 | none | 0 | acc | ↑ | 0.2661 | ± | 0.0044 |
| | | none | 0 | acc_norm | ↑ | 0.2708 | ± | 0.0044 |
mmlu | 2 | none | | acc | ↑ | 0.2315 | ± | 0.0036 |
- humanities | 2 | none | | acc | ↑ | 0.2372 | ± | 0.0062 |
- formal_logic | 1 | none | 0 | acc | ↑ | 0.2937 | ± | 0.0407 |
- high_school_european_history | 1 | none | 0 | acc | ↑ | 0.2424 | ± | 0.0335 |
- high_school_us_history | 1 | none | 0 | acc | ↑ | 0.2451 | ± | 0.0302 |
- high_school_world_history | 1 | none | 0 | acc | ↑ | 0.2321 | ± | 0.0275 |
- international_law | 1 | none | 0 | acc | ↑ | 0.1983 | ± | 0.0364 |
- jurisprudence | 1 | none | 0 | acc | ↑ | 0.2315 | ± | 0.0408 |
- logical_fallacies | 1 | none | 0 | acc | ↑ | 0.1840 | ± | 0.0304 |
- moral_disputes | 1 | none | 0 | acc | ↑ | 0.2110 | ± | 0.0220 |
- moral_scenarios | 1 | none | 0 | acc | ↑ | 0.2380 | ± | 0.0142 |
- philosophy | 1 | none | 0 | acc | ↑ | 0.1961 | ± | 0.0226 |
- prehistory | 1 | none | 0 | acc | ↑ | 0.2315 | ± | 0.0235 |
- professional_law | 1 | none | 0 | acc | ↑ | 0.2503 | ± | 0.0111 |
- world_religions | 1 | none | 0 | acc | ↑ | 0.2865 | ± | 0.0347 |
- other | 2 | none | | acc | ↑ | 0.2385 | ± | 0.0076 |
- business_ethics | 1 | none | 0 | acc | ↑ | 0.2900 | ± | 0.0456 |
- clinical_knowledge | 1 | none | 0 | acc | ↑ | 0.2113 | ± | 0.0251 |
- college_medicine | 1 | none | 0 | acc | ↑ | 0.1965 | ± | 0.0303 |
- global_facts | 1 | none | 0 | acc | ↑ | 0.1900 | ± | 0.0394 |
- human_aging | 1 | none | 0 | acc | ↑ | 0.3004 | ± | 0.0308 |
- management | 1 | none | 0 | acc | ↑ | 0.1748 | ± | 0.0376 |
- marketing | 1 | none | 0 | acc | ↑ | 0.2863 | ± | 0.0296 |
- medical_genetics | 1 | none | 0 | acc | ↑ | 0.2800 | ± | 0.0451 |
- miscellaneous | 1 | none | 0 | acc | ↑ | 0.2350 | ± | 0.0152 |
- nutrition | 1 | none | 0 | acc | ↑ | 0.2255 | ± | 0.0239 |
- professional_accounting | 1 | none | 0 | acc | ↑ | 0.2482 | ± | 0.0258 |
- professional_medicine | 1 | none | 0 | acc | ↑ | 0.1985 | ± | 0.0242 |
- virology | 1 | none | 0 | acc | ↑ | 0.2771 | ± | 0.0348 |
- social sciences | 2 | none | | acc | ↑ | 0.2281 | ± | 0.0076 |
- econometrics | 1 | none | 0 | acc | ↑ | 0.2105 | ± | 0.0384 |
- high_school_geography | 1 | none | 0 | acc | ↑ | 0.1818 | ± | 0.0275 |
- high_school_government_and_politics | 1 | none | 0 | acc | ↑ | 0.2280 | ± | 0.0303 |
- high_school_macroeconomics | 1 | none | 0 | acc | ↑ | 0.2410 | ± | 0.0217 |
- high_school_microeconomics | 1 | none | 0 | acc | ↑ | 0.2353 | ± | 0.0276 |
- high_school_psychology | 1 | none | 0 | acc | ↑ | 0.2055 | ± | 0.0173 |
- human_sexuality | 1 | none | 0 | acc | ↑ | 0.2595 | ± | 0.0384 |
- professional_psychology | 1 | none | 0 | acc | ↑ | 0.2418 | ± | 0.0173 |
- public_relations | 1 | none | 0 | acc | ↑ | 0.2091 | ± | 0.0390 |
- security_studies | 1 | none | 0 | acc | ↑ | 0.2408 | ± | 0.0274 |
- sociology | 1 | none | 0 | acc | ↑ | 0.2040 | ± | 0.0285 |
- us_foreign_policy | 1 | none | 0 | acc | ↑ | 0.3100 | ± | 0.0465 |
- stem | 2 | none | | acc | ↑ | 0.2195 | ± | 0.0074 |
- abstract_algebra | 1 | none | 0 | acc | ↑ | 0.2700 | ± | 0.0446 |
- anatomy | 1 | none | 0 | acc | ↑ | 0.1630 | ± | 0.0319 |
- astronomy | 1 | none | 0 | acc | ↑ | 0.2303 | ± | 0.0343 |
- college_biology | 1 | none | 0 | acc | ↑ | 0.2569 | ± | 0.0365 |
- college_chemistry | 1 | none | 0 | acc | ↑ | 0.2400 | ± | 0.0429 |
- college_computer_science | 1 | none | 0 | acc | ↑ | 0.2100 | ± | 0.0409 |
- college_mathematics | 1 | none | 0 | acc | ↑ | 0.2100 | ± | 0.0409 |
- college_physics | 1 | none | 0 | acc | ↑ | 0.2745 | ± | 0.0444 |
- computer_security | 1 | none | 0 | acc | ↑ | 0.3000 | ± | 0.0461 |
- conceptual_physics | 1 | none | 0 | acc | ↑ | 0.1957 | ± | 0.0259 |
- electrical_engineering | 1 | none | 0 | acc | ↑ | 0.2276 | ± | 0.0349 |
- elementary_mathematics | 1 | none | 0 | acc | ↑ | 0.2302 | ± | 0.0217 |
- high_school_biology | 1 | none | 0 | acc | ↑ | 0.1968 | ± | 0.0226 |
- high_school_chemistry | 1 | none | 0 | acc | ↑ | 0.1527 | ± | 0.0253 |
- high_school_computer_science | 1 | none | 0 | acc | ↑ | 0.2500 | ± | 0.0435 |
- high_school_mathematics | 1 | none | 0 | acc | ↑ | 0.1963 | ± | 0.0242 |
- high_school_physics | 1 | none | 0 | acc | ↑ | 0.2053 | ± | 0.0330 |
- high_school_statistics | 1 | none | 0 | acc | ↑ | 0.2269 | ± | 0.0286 |
- machine_learning | 1 | none | 0 | acc | ↑ | 0.2768 | ± | 0.0425 |
truthfulqa_mc2 | 2 | none | 0 | acc | ↑ | 0.4681 | ± | 0.0159 |
winogrande | 1 | none | 0 | acc | ↑ | 0.5146 | ± | 0.0140 |
Groups | Version | Filter | n-shot | Metric | | Value | | Stderr |
---|---|---|---|---|---|---|---|---|
mmlu | 2 | none | | acc | ↑ | 0.2315 | ± | 0.0036 |
- humanities | 2 | none | | acc | ↑ | 0.2372 | ± | 0.0062 |
- other | 2 | none | | acc | ↑ | 0.2385 | ± | 0.0076 |
- social sciences | 2 | none | | acc | ↑ | 0.2281 | ± | 0.0076 |
- stem | 2 | none | | acc | ↑ | 0.2195 | ± | 0.0074 |