
DiscoLM 120b (Alpha)

DiscoLM 120b (Alpha) is an experimental 120b model based on Alpindale's Goliath 120b, a merge of different Llama2-70b models, further finetuned on a dataset of some of the most popular open-source instruction sets. DiscoLM 120b is a DiscoResearch project and was trained by Björn Plüster.

Many thanks to LAION and HessianAI for scientific supervision, coordination and compute resources provided for this project on supercomputer 42 by HessianAI!

Table of Contents

  1. Download
  2. Benchmarks
  3. Prompt Format
  4. Dataset
  5. Contact
  6. About DiscoResearch
  7. Acknowledgements
  8. Disclaimer

Download

| Huggingface | GPTQ | GGUF | AWQ | Base Model |
|:---|:---|:---|:---|:---|
| Link | Link | Link | Link | Goliath 120b |

Benchmarks

Hugging Face Leaderboard

This model is still an early alpha, and we can't guarantee that there isn't any contamination. However, its average score of 73.198 would earn the #2 spot on the HF leaderboard at the time of writing, and is the highest score yet for a model larger than 70b.

| Metric | Value |
|:---|---:|
| ARC (25-shot) | 69.54 |
| HellaSwag (10-shot) | 86.49 |
| MMLU (5-shot) | 70.32 |
| TruthfulQA (0-shot) | 61.42 |
| Winogrande (5-shot) | 83.03 |
| GSM8k (5-shot) | 68.39 |
| Avg. | 73.198 |
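As a sanity check, the reported average is simply the arithmetic mean of the six benchmark scores above:

```python
# Scores from the leaderboard table above
scores = {
    "ARC (25-shot)": 69.54,
    "HellaSwag (10-shot)": 86.49,
    "MMLU (5-shot)": 70.32,
    "TruthfulQA (0-shot)": 61.42,
    "Winogrande (5-shot)": 83.03,
    "GSM8k (5-shot)": 68.39,
}

average = sum(scores.values()) / len(scores)
print(round(average, 3))  # 73.198
```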

We used the Language Model Evaluation Harness to run the benchmarks above, with the same version used by the Hugging Face LLM Leaderboard.

FastEval

| Metric | Value |
|:---|---:|
| GSM8K | 81.2 |
| MATH | 22.3 |
| BBH | 72.9 |
| MMLU | 67.9 |
| Avg. | 53.3 |

This places DiscoLM 120b firmly ahead of gpt-3.5-turbo-0613, as seen in the screenshot of the current (sadly no longer maintained) FastEval CoT leaderboard: FastEval Leaderboard

MTBench

{
    "first_turn": 8.45,
    "second_turn": 7.45,
    "categories": {
        "writing": 9.4,
        "roleplay": 8.65,
        "reasoning": 6.85,
        "math": 5.55,
        "coding": 4.95,
        "extraction": 9.15,
        "stem": 9.225,
        "humanities": 9.825
    },
    "average": 7.95
}
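The reported overall average of 7.95 is consistent both with the mean of the two turn scores and with the mean of the eight category scores:

```python
first_turn, second_turn = 8.45, 7.45
categories = [9.4, 8.65, 6.85, 5.55, 4.95, 9.15, 9.225, 9.825]

turn_avg = (first_turn + second_turn) / 2
category_avg = sum(categories) / len(categories)
print(round(turn_avg, 2), round(category_avg, 2))  # 7.95 7.95
```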

Screenshot of the current FastEval MT Bench leaderboard: FastEval Leaderboard

Prompt Format

This model follows the ChatML format:

<|im_start|>system
You are DiscoLM, a helpful assistant.
<|im_end|>
<|im_start|>user
Please tell me possible reasons to call a research collective "Disco Research"<|im_end|>
<|im_start|>assistant

This formatting is also available via a pre-defined Transformers chat template, which means that lists of messages can be formatted for you with the apply_chat_template() method:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DiscoResearch/DiscoLM-120b")

chat = [
  {"role": "system", "content": "You are DiscoLM, a helpful assistant."},
  {"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"}
]
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

If you use tokenize=True and return_tensors="pt" instead, then you will get a tokenized and formatted conversation ready to pass to model.generate().
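The tokenized path can be sketched as follows. This is a minimal example, not a tuned inference setup: `device_map="auto"` assumes `accelerate` is installed, and loading a 120b model in FP16 requires very substantial GPU memory (or a quantized variant).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DiscoResearch/DiscoLM-120b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [
    {"role": "system", "content": "You are DiscoLM, a helpful assistant."},
    {"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"},
]

# tokenize=True + return_tensors="pt" yields input IDs ready for generate()
input_ids = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```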

Dataset

The dataset curation for DiscoLM 120b followed a "brute force"/"PoC" approach, as one goal was to see whether a 120b model can "absorb" more instruction data than a 70b model.

The following datasets were used for training DiscoLM 120b:

Many thanks to all dataset providers and curators!

Contact

The best way to reach us is on our Discord.

About DiscoResearch

DiscoResearch is an aspiring open research community. Disco should be a place where researchers from many communities can come together to combine their expertise and create innovative and groundbreaking LLMs. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!

Acknowledgements

DiscoLM 120b is a DiscoResearch project and was trained by Björn Plüster. Jan Harries helped with technical advice, logistics and the model card, and AutoMeta also provided helpful technical advice. The model was trained with compute provided by HessianAI in collaboration with LAION; many thanks in particular to Patrick Schramowski for his support.

We are standing on the shoulders of giants; many thanks in no particular order to LAION and especially to Christoph Schuhmann who got us all connected, alpindale for Goliath 120b (with important contributions by Charles Goddard and Undi95), TheBloke for providing quantized versions, winglian for Axolotl which was used to train the model and the SlimOrca dataset, garage-bAInd, Teknium, Migel Tissera, MetaMath, and LDJnr for their great datasets (please contact us if we forgot to mention you here!).

Built with Axolotl

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be used for research purposes. The original Llama2 license and all restrictions of datasets used to train this model apply.
