---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- nlp
- llm
---
# K2: a fully-reproducible large language model outperforming Llama 2 70B using 35% less compute
With K2, LLM360 demystifies the training recipe behind Llama 2 70B. K2 is fully transparent: we have open-sourced all artifacts, including code, data, model checkpoints, and intermediate results.
## About K2
* 65 billion parameter LLM
* Tokens: 1.4T
* Language: English
* Models released: base and chat
* Trained in 2 stages
* License: Apache 2.0
K2 was developed as a collaboration between [MBZUAI](https://mbzuai.ac.ae/institute-of-foundation-models/), [Petuum](https://petuum.com), and [LLM360](https://llm360.ai).
## LLM360 Model Performance and Evaluation Collection
The LLM360 Performance and Evaluation Collection is a robust suite of general and domain-specific evaluations for assessing model knowledge and function.
It includes standard best-practice benchmarks as well as medical, math, and coding evaluations. More about the evaluations can be found [here](https://llm360.ai/evaluations).
Detailed analysis can be found on the K2 Weights and Biases project [here](https://wandb.ai/llm360/K2?nw=29mu6l0zzqq).
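As an illustration, here is a minimal sketch of how one might score K2 on a few standard benchmarks with EleutherAI's lm-evaluation-harness; the harness choice, task selection, and settings are assumptions for illustration, not the pipeline the LLM360 team used.
```python
# A minimal sketch, assuming EleutherAI's lm-evaluation-harness is installed
# (pip install lm-eval). Task names and settings are illustrative only.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face backend
    model_args="pretrained=LLM360/K2,dtype=bfloat16",
    tasks=["hellaswag", "arc_challenge", "gsm8k"],   # example benchmarks
    batch_size=4,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```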
## K2 Gallery
The K2 gallery lets you browse the outputs of various prompts across intermediate K2 checkpoints, providing an intuitive sense of how the model develops and improves over time. It is inspired by The Bloom Book.
[View K2 gallery here](https://huggingface.co/spaces/LLM360/k2-gallery)
## Datasets and Mix
The following data mix was used to train K2 and achieve results in line with Llama 2 70B; a quick verification sketch follows the table.
The full data sequence can be found [here](https://huggingface.co/datasets/LLM360/K2Datasets/tree/main).
| Dataset | Starting Tokens | Multiplier | Total Tokens | % of Total |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| dm-math | 4.33B | 3x | 13B | 1% |
| pubmed-abstracts | 4.77B | 3x | 14.3B | 1.1% |
| uspto | 4.77B | 3x | 14.3B | 1.1% |
| pubmed-central | 26B | 1x | 26B | 2% |
| [redpajama.arxiv](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 27.3B | 1x | 27.3B | 2.1% |
| [starcoder.spm](https://huggingface.co/datasets/bigcode/starcoderdata) | 67.6B | 0.5x | 33.8B | 2.6% |
| [starcoder.fim](https://huggingface.co/datasets/bigcode/starcoderdata) | 67.6B | 0.5x | 33.8B | 2.6% |
| [redpajama.stackexchange](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 61.1B | 1x | 61.1B | 4.7% |
| [starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 132.6B | 0.5x | 66.3B | 5.1% |
| [pile-of-law](https://huggingface.co/datasets/pile-of-law/pile-of-law) | 76.7B | 1x | 76.7B | 5.9% |
| [redpajama.book](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 80.6B | 1x | 80.6B | 6.2% |
| s2orc | 107.9B | 1x | 107.9B | 8.3% |
| [redpajama.wikipedia](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 22.1B | 6x | 132.6B | 10.2% |
| [refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 612.3B | 1x | 612.3B | 47.1% |
| Totals | - | - | 1.3T | 100% |
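Each row's total is simply starting tokens × multiplier, with percentages taken relative to the grand total. A short sketch to verify the arithmetic, with values copied from the table above (small differences from the table come from rounding):
```python
# Verify the K2 data-mix arithmetic: total = starting tokens x multiplier.
# Values are (starting tokens in billions, multiplier), from the table above.
mix = {
    "dm-math": (4.33, 3), "pubmed-abstracts": (4.77, 3), "uspto": (4.77, 3),
    "pubmed-central": (26.0, 1), "redpajama.arxiv": (27.3, 1),
    "starcoder.spm": (67.6, 0.5), "starcoder.fim": (67.6, 0.5),
    "redpajama.stackexchange": (61.1, 1), "starcoder": (132.6, 0.5),
    "pile-of-law": (76.7, 1), "redpajama.book": (80.6, 1),
    "s2orc": (107.9, 1), "redpajama.wikipedia": (22.1, 6),
    "refinedweb": (612.3, 1),
}

totals = {name: start * mult for name, (start, mult) in mix.items()}
grand_total = sum(totals.values())  # ~1300B, i.e. the 1.3T in the table
for name, tokens in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{name:>24}: {tokens:7.1f}B  ({100 * tokens / grand_total:4.1f}%)")
print(f"{'total':>24}: {grand_total:7.1f}B")
```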
# LLM360 Research Suite
## Stage 2 - Last 10 Checkpoints
| Checkpoints | |
| ----------- | ----------- |
| [Checkpoint 380](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_380) | [Checkpoint 375](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_375) |
| [Checkpoint 379](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_379) | [Checkpoint 374](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_374) |
| [Checkpoint 378](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_378) | [Checkpoint 373](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_373) |
| [Checkpoint 377](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_377) | [Checkpoint 372](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_372) |
| [Checkpoint 376](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_376) | [Checkpoint 371](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_371) |
## Stage 1 - Last 10 Checkpoints
| Checkpoints | |
| ----------- | ----------- |
| [Checkpoint 360](https://huggingface.co/LLM360/K2/tree/ckpt_360) | [Checkpoint 355](https://huggingface.co/LLM360/K2/tree/ckpt_355) |
| [Checkpoint 359](https://huggingface.co/LLM360/K2/tree/ckpt_359) | [Checkpoint 354](https://huggingface.co/LLM360/K2/tree/ckpt_354) |
| [Checkpoint 358](https://huggingface.co/LLM360/K2/tree/ckpt_358) | [Checkpoint 353](https://huggingface.co/LLM360/K2/tree/ckpt_353) |
| [Checkpoint 357](https://huggingface.co/LLM360/K2/tree/ckpt_357) | [Checkpoint 352](https://huggingface.co/LLM360/K2/tree/ckpt_352) |
| [Checkpoint 356](https://huggingface.co/LLM360/K2/tree/ckpt_356) | [Checkpoint 351](https://huggingface.co/LLM360/K2/tree/ckpt_351) |
To list all checkpoint branches after cloning the repository, run `git branch -a`.
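Intermediate checkpoints live on branches of the main model repository, so any of them can be loaded by passing the branch name as the `revision` argument; a minimal sketch:
```python
# Load an intermediate K2 checkpoint by pointing `revision` at its branch.
# Branch names follow the tables above (e.g. "ckpt_360" for stage 1,
# "ministage2_ckpt_380" for stage 2).
from transformers import AutoModelForCausalLM, AutoTokenizer

revision = "ministage2_ckpt_380"  # final stage-2 checkpoint listed above
tokenizer = AutoTokenizer.from_pretrained("LLM360/K2", revision=revision)
model = AutoModelForCausalLM.from_pretrained("LLM360/K2", revision=revision)
```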
## LLM360 Pretraining Suite
We provide step-by-step reproduction tutorials for tech enthusiasts, AI practitioners, and academic or industry researchers who want to learn pretraining techniques [here](https://www.llm360.ai/pretraining.html).
## LLM360 Developer Suite
We provide step-by-step finetuning tutorials for tech enthusiasts, AI practitioners, and academic or industry researchers [here](https://www.llm360.ai/developer.html).
# Loading K2
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the base model from the Hugging Face Hub.
# Note: K2 has 65B parameters, so loading it in full precision requires
# substantial memory; adjust dtype and device placement to your hardware.
tokenizer = AutoTokenizer.from_pretrained("LLM360/K2")
model = AutoModelForCausalLM.from_pretrained("LLM360/K2")

prompt = "what is the highest mountain on earth?"

# Tokenize the prompt and sample up to 128 new tokens.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, max_new_tokens=128)

print("-" * 20 + " Output for model " + "-" * 20)
print(tokenizer.batch_decode(gen_tokens)[0])
```
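The chat variant listed above loads the same way. Assuming it is published under the LLM360 organization as `LLM360/K2-Chat` (an assumption; check the organization page for the exact repository id):
```python
# Assumption: the chat-tuned variant lives at "LLM360/K2-Chat"; verify the
# exact repo id on the LLM360 organization page before running.
chat_tokenizer = AutoTokenizer.from_pretrained("LLM360/K2-Chat")
chat_model = AutoModelForCausalLM.from_pretrained("LLM360/K2-Chat")
```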
## About LLM360
LLM360 is an open research lab enabling community-owned AGI through open-source large model research and development.
We create standards and tools to advance the bleeding edge of LLM capability and to empower knowledge transfer, research, and development.
We believe in a future where artificial general intelligence (AGI) is created by the community, for the community. Through an open ecosystem of equitable computational resources, high-quality data, and flowing technical knowledge, we can ensure ethical AGI development and universal access for all innovators.
[Visit us](https://www.llm360.ai/)
## Citation
**BibTeX:**
```bibtex
@article{llm360k2,
  title={LLM360 K2-65B: Scaling Up Fully Transparent Open-Source LLMs},
  author={The LLM360 Team},
  year={2024},
}
```