---
license: mit
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
model-index:
- name: 1.5-Pints
  results:
  - task:
      type: text-generation
    dataset:
      name: MTBench
      type: ai2_arc
    metrics:
    - name: MTBench
      type: LLM-as-a-Judge
      value: 3.73
    source:
      name: MTBench
      url: https://huggingface.co/spaces/lmsys/mt-bench
pipeline_tag: text-generation
extra_gated_prompt: >-
  Though best efforts have been made to ensure, as much as possible, that all
  texts in the training corpora are royalty free, this does not constitute a
  legal guarantee that such is the case. **By using any of the models, corpora
  or part thereof, the user agrees to bear full responsibility to do the
  necessary due diligence to ensure that he / she is in compliance with their
  local copyright laws.** Additionally, the user agrees to bear any damages
  arising as a direct cause (or otherwise) of using any artifacts released by
  the Pints research team, as well as full responsibility for the consequences
  of his / her usage (or implementation) of any such released artifacts. The
  user also indemnifies the Pints Research Team (and any of its members or
  agents) of any damage, related or unrelated, to the release or subsequent
  usage of any findings, artifacts or code by the team. For the avoidance of
  doubt, any artifacts released by the Pints Research Team are released in
  accordance with the 'fair use' clause of Copyright Law, in hopes that this
  will aid the research community in bringing LLMs to the next frontier.
extra_gated_fields:
  Company: text
  Country: country
  Specific date: date_picker
  I want to use this model for:
    type: select
    options:
    - Research
    - Education
    - label: Other
      value: other
  I agree to use this model in accordance with the afore-mentioned Terms of Use: checkbox
---

# 1.5-Pints -- A model pretrained in 9 days by using high quality data

Pints.ai -- Powerful small language models in 9 days

Join us at Discord: https://discord.com/invite/RSHk22Z29j

**Install dependencies**

```bash
pip install transformers

# Omit `flash-attn` if not supported by your hardware
pip install flash-attn --no-build-isolation
```

**Usage**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# INITIALIZE the model and the tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "pints-ai/1.5-Pints-2k-v0.1",
    device_map=device,
    attn_implementation="flash_attention_2"  # can be omitted if not supported
)
tokenizer = AutoTokenizer.from_pretrained("pints-ai/1.5-Pints-2k-v0.1")

# PREPARE and tokenize the prompt
prompt = "Predict what life will be like 100 years from now."
messages = [
    {"role": "system", "content": "You are an AI assistant that follows instruction extremely well. Help as much as you can."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
input = tokenizer([text], return_tensors="pt").to(device)

# GENERATE the response
generated_ids = model.generate(
    input.input_ids,
    max_new_tokens=512
)

# DECODE the response
input_length = len(input.input_ids[0])  # Remove the input and decode only the output
response = tokenizer.decode(generated_ids[0][input_length:])

print(response)
```

**Compute Infrastructure**
This model can be served with a GPU containing at least 8GB of VRAM.
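As a rough check on that requirement, the float32 weights alone come to under 6 GiB. The back-of-the-envelope sketch below uses the parameter count from the Technical Specifications table; actual serving also needs headroom for activations and the KV cache.

```python
# Rough estimate of weight memory for 1.5-Pints (parameter count from the
# Technical Specifications table below). Activations and the KV cache add
# further overhead on top of these figures.
n_params = 1_565_886_464

gib_fp32 = n_params * 4 / 1024**3  # float32: 4 bytes per parameter
gib_fp16 = n_params * 2 / 1024**3  # float16/bfloat16: 2 bytes per parameter

print(f"fp32 weights: ~{gib_fp32:.1f} GiB")  # ~5.8 GiB -> fits in 8 GB of VRAM
print(f"fp16 weights: ~{gib_fp16:.1f} GiB")  # ~2.9 GiB
```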

## Description

1.5-Pints is a Large Language Model that significantly advances the efficiency of LLM training by emphasizing data quality over quantity. Our [pre-training corpus](https://huggingface.co/datasets/pints-ai/Expository-Prose-V1) is a meticulously curated dataset of 57 billion tokens, which makes pre-training more accessible and environmentally friendly.

## Results

**MTBench**
[MTBench](https://huggingface.co/spaces/lmsys/mt-bench) is a popular evaluation harness that uses strong LLMs like GPT-4 to act as judges and assess the quality of the models' responses.

| Model | Score | Parameter Size | Pretrain Tokens |
|:-:|:-:|:-:|:-:|
| meta-llama/Llama-2-7b-chat-hf | 6.27 | 7B | 2T |
| microsoft/phi-2 | 5.83 | 2.7B | 1.4T |
| google/gemma-2b-it | 5.44 | 2B | 3T |
| stabilityai/stablelm-2-1_6b-chat | 4.7 | 1.6B | 2T |
| **1.5-Pints-2K** | **3.73** | **1.57B** | **0.115T** |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | 3.72 | 1.1B | 3T |
| **1.5-Pints-16K** | **3.40** | **1.57B** | **0.115T** |
| apple/OpenELM-1_1B-Instruct | 3.34 | 1B | 1.8T |
| microsoft/phi-1_5 | 3.33 | 1.3B | 0.15T |
| databricks/dolly-v2-3b | 2.33 | 3B | 0.3T |
| EleutherAI/pythia-2.8b | 1.81 | 2.8B | 0.3T |
| tiiuae/falcon-rw-1b | 1.18 | 1B | 0.35T |

The 16K context window version of 1.5-Pints can be found [here](https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1).

## Technical Specifications

**Architecture**
Llama 2 autoregressive model with a **2K context window** and the Mistral tokenizer. The model uses Float32 precision.

| Parameters | Vocab Size | Embedding Size | Context Length | Layers | Heads | Query Groups | Intermediate Hidden Size |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 1,565,886,464 | 32,064 | 2,048 | 2,048 | 24 | 32 | 4 | 8,192 |
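For illustration, these specifications map roughly onto a Hugging Face `LlamaConfig` as sketched below. This is an assumed mapping; fields not listed in the table (normalization epsilon, RoPE base, etc.) are left at library defaults here, and the checkpoint's `config.json` remains the authoritative source.

```python
from transformers import LlamaConfig

# Approximate mapping of the architecture table onto a LlamaConfig.
# Values not present in the table are assumptions (library defaults);
# consult the released config.json for the exact settings.
config = LlamaConfig(
    vocab_size=32_064,
    hidden_size=2_048,               # Embedding Size
    max_position_embeddings=2_048,   # Context Length (2K variant)
    num_hidden_layers=24,            # Layers
    num_attention_heads=32,          # Heads
    num_key_value_heads=4,           # Query Groups (grouped-query attention)
    intermediate_size=8_192,         # Intermediate Hidden Size
)

print(config)
```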
**Context Lengths**

1.5-Pints comes in 2 context lengths: 16K (16,384) and 2K (2,048).
**Prompt template**

This model has been finetuned and preference-optimized using the ChatML template.

```
<|im_start|>system
{SYSTEM_PROMPT}<|im_end|>
<|im_start|>user
{PROMPT}<|im_end|>
<|im_start|>assistant
```
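The tokenizer's chat template renders this format for you, so you normally do not need to build the string by hand. A minimal sketch, assuming the released tokenizer ships this ChatML template (the message contents below are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pints-ai/1.5-Pints-2k-v0.1")

# Placeholder messages; apply_chat_template renders them into the ChatML
# layout shown above and, with add_generation_prompt=True, appends the
# opening assistant header so the model continues from there.
messages = [
    {"role": "system", "content": "{SYSTEM_PROMPT}"},
    {"role": "user", "content": "{PROMPT}"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```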

## Uses

**Direct Use**
This model is meant to be an efficient, fine-tunable, helpful assistant. It is designed to excel at user assistance and reasoning, and to rely less on internal knowledge and factual recall. For knowledge retrieval, it should therefore be used with Retrieval Augmented Generation (RAG).
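A minimal sketch of that pattern is shown below: retrieved passages are placed in the system prompt so the model answers from the supplied context rather than from its own parametric knowledge. The `build_rag_messages` helper and the prompt wording are hypothetical illustrations, not part of this release, and the passages are assumed to come from your own retriever.

```python
# Hypothetical RAG-style prompting for 1.5-Pints: put retrieved passages in
# the system prompt, then generate exactly as in the Usage snippet above.
def build_rag_messages(question: str, passages: list[str]) -> list[dict]:
    context = "\n\n".join(passages)  # passages come from your own retriever
    system = (
        "You are a helpful assistant. Answer using only the context below.\n\n"
        f"Context:\n{context}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Example: feed the result to tokenizer.apply_chat_template / model.generate.
messages = build_rag_messages(
    "When was the company founded?",
    ["The company was founded in 2012 in Singapore."],
)
```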
**Downstream Use**

Given the size of this model, it is possible to launch multiple instances of it for agentic use cases without breaking the compute bank.
**Recommendations**

- It is recommended to finetune this model for domain adaptation and to use it for specialized tasks.
- To get the best performance, use a repetition penalty of 1.3 rather than 1 (see the sketch below).
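For example, continuing from the Usage snippet above, only the `repetition_penalty` argument changes:

```python
# Same generate call as in the Usage section, with the recommended
# repetition penalty of 1.3 applied.
generated_ids = model.generate(
    input.input_ids,
    max_new_tokens=512,
    repetition_penalty=1.3,
)
```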

## Training Data

**Pre-Train Data**
Dataset: [pints-ai/Expository-Prose-V1](https://huggingface.co/datasets/pints-ai/Expository-Prose-V1)

**Fine-Tune Data**
Corpora:

- [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- [HuggingFaceH4/deita-10k-v0-sft](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft)
- [WizardLM/WizardLM_evol_instruct_V2_196k](https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_V2_196k)
- [togethercomputer/llama-instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)
- [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)

**DPO Data**
Dataset: [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)

## Training Procedure

Both pre-training and finetuning used [our fork](https://github.com/Pints-AI/1.5-Pints) of the [LitGPT Framework](https://github.com/Lightning-AI/litgpt). For DPO, we used the methods set out in [The Alignment Handbook](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_dpo.py). More details can be found in our [paper](https://arxiv.org/abs/2408.03506).

## Training Hyperparameters

**Pre-Train**
| Hyperparameter | Value |
|:-:|:-:|
| Optimizer | AdamW (Beta1=0.9, Beta2=0.95) |
| Learning Rate Scheduler | Cosine |
| Max Learning Rate | 4.0 × 10⁻⁴ |
| Min Learning Rate | 4.0 × 10⁻⁵ |
| Warmup Steps | 2,000 |
| Batch Size | 2,097,152 tokens |
| Weight Decay | 0.1 |
| Gradient Clipping Threshold | 1.0 |
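As a sketch of how these values fit together, the usual cosine-with-warmup formulation is shown below: linear warmup up to the max learning rate, then cosine decay down to the min learning rate. This is illustrative only; whether the training fork deviates from this exact formulation is not specified here.

```python
import math

MAX_LR, MIN_LR = 4.0e-4, 4.0e-5
WARMUP_STEPS = 2_000

def learning_rate(step: int, total_steps: int) -> float:
    """Linear warmup to MAX_LR, then cosine decay down to MIN_LR."""
    if step < WARMUP_STEPS:
        return MAX_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / max(1, total_steps - WARMUP_STEPS)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

# Example: peak LR right after warmup, min LR at the end of training.
print(learning_rate(2_000, 50_000))   # ~4.0e-4
print(learning_rate(50_000, 50_000))  # ~4.0e-5
```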
**SFT**

| Hyperparameter | Value |
|:-:|:-:|
| Optimizer | AdamW (Beta1=0.9, Beta2=0.95) |
| Warmup steps | 1,126 (10%) |
| Peak learning rate | 2e-5 |
| Learning rate scheduler | Cosine |
| Weight Decay | 0.1 |

**DPO**
The DPO parameters used are exactly the same as those specified in [The Alignment Handbook](https://github.com/huggingface/alignment-handbook).

## Citation

**Attribution**

- **Developed by:** [calvintwr](https://huggingface.co/calvintwr), [lemousehunter](https://huggingface.co/lemousehunter)
- **Funded by:** [PintsAI](https://pints.ai/)
- **Released by:** [PintsAI](https://pints.ai/)
- **Model type:** Large Language Model
- **Language(s) (NLP):** English
- **License:** [MIT License](https://opensource.org/license/mit)

**BibTeX:**

```latex
@misc{tan202415pintstechnicalreportpretraining,
  title={1.5-Pints Technical Report: Pretraining in Days, Not Months -- Your Language Model Thrives on Quality Data},
  author={Calvin Tan and Jerome Wang},
  year={2024},
  eprint={2408.03506},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2408.03506},
}
```

**APA**
Tan, C., & Wang, J. (2024). 1.5-Pints Technical Report: Pretraining in days, not months -- Your language model thrives on quality data. arXiv. https://arxiv.org/abs/2408.03506

## Legal Warning

Though best efforts have been made to ensure, as much as possible, that all texts in the training corpora are royalty free, this does not constitute a legal guarantee that such is the case. **By using any of the models, corpora or part thereof, the user agrees to bear full responsibility to do the necessary due diligence to ensure that he / she is in compliance with their local copyright laws**. Additionally, the **user agrees to bear any damages** arising as a direct cause (or otherwise) of using any artifacts released by the Pints research team, as well as full responsibility for the consequences of his / her usage (or implementation) of any such released artifacts. The user also indemnifies the Pints Research Team (and any of its members or agents) of any damage, related or unrelated, to the release or subsequent usage of any findings, artifacts or code by the team. For the avoidance of doubt, **any artifacts released by the Pints Research Team are released in accordance with the "fair use"** clause of Copyright Law, in hopes that this will aid the research community in bringing LLMs to the next frontier.