---
license: apache-2.0
---

Model Card for DCLM-Baseline-1B

DCLM-Baseline-1B is a 1.4 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
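
As a quickstart, the checkpoint can in principle be loaded through the standard transformers causal-LM API. The following is a minimal sketch, not a verified loading recipe: the repository ID is a placeholder, and trust_remote_code may or may not be required depending on how the (OpenLM-based) weights are packaged on the Hub.

```python
# Hedged usage sketch: MODEL_ID is a placeholder, and trust_remote_code is only
# needed if the checkpoint ships custom (OpenLM-based) modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "DCLM-Baseline-1B"  # placeholder -- substitute the actual Hub repository ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

# Decoder-only LM with a 2048-token context window; standard generation applies.
prompt = "Machine learning is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```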

Model Details

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| 1.4B | 2.6T            | 24     | 2048        | 16              | 2048           |
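
As a quick sanity check, the dimensions in the table roughly reproduce the 1.4B parameter figure under standard decoder-only scaling. The vocabulary size and the untied output head in the sketch below are assumptions, since neither is stated in this card.

```python
# Back-of-the-envelope parameter count for the configuration in the table above.
# Per Transformer block: ~4*d^2 for attention projections (Q, K, V, output) plus
# ~8*d^2 for a 4x-expanded MLP, i.e. ~12*d^2 per layer; embeddings add vocab*d.
layers, d_model = 24, 2048
vocab_size = 50_432  # assumption: an OpenLM / GPT-NeoX-style vocabulary, not stated here

block_params = 12 * d_model**2 * layers   # ~1.21B
embedding_params = vocab_size * d_model   # ~0.10B
output_head = vocab_size * d_model        # ~0.10B more if the LM head is untied (assumption)

total = block_params + embedding_params + output_head
print(f"~{total / 1e9:.2f}B parameters")  # ~1.41B, consistent with the stated 1.4B
```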

Model Description

  • Developed by: DataComp for Language Models (DCLM) Team
  • Model type: Decoder-only Transformer language model
  • Language(s): English (primarily)
  • License: Apache 2.0
  • Contact: [email protected]
  • Date: July 2024

Model Sources

  • Paper: DataComp-LM: In search of the next generation of training sets for language models (https://arxiv.org/abs/2406.11794)

Training Details

The model was trained using the following setup (an illustrative optimizer sketch follows the list):

  • Architecture: Decoder-only Transformer
  • Framework: PyTorch with OpenLM
  • Optimizer: AdamW
  • Learning Rate: 1e-2 (peak)
  • Weight Decay: 1e-2
  • Batch Size: 2048 sequences
  • Sequence Length: 2048 tokens
  • Total Training Tokens: 2.6T
  • Hardware: Trained on H100 GPUs
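
The hyperparameters above map onto a standard PyTorch AdamW setup. This is a minimal sketch, assuming a linear-warmup/cosine-decay schedule, which is a common OpenLM-style choice that this card does not actually specify.

```python
# Sketch of the optimizer implied by the hyperparameters above. Only the peak
# learning rate (1e-2) and weight decay (1e-2) come from this card; the betas,
# warmup length, and cosine decay shape are illustrative assumptions.
import math
import torch

model = torch.nn.Linear(2048, 2048)  # stand-in for the actual OpenLM model

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-2,             # peak learning rate (from this card)
    weight_decay=1e-2,   # weight decay (from this card)
    betas=(0.9, 0.95),   # assumption
)

# 2.6T tokens / (2048 sequences * 2048 tokens per step) ~= 620K optimizer steps.
warmup_steps, total_steps = 5_000, 620_000  # warmup length is an assumption

def lr_lambda(step: int) -> float:
    # Linear warmup followed by cosine decay to zero (assumed schedule shape).
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```

With 2048 sequences of 2048 tokens per step, the stated 2.6T training tokens correspond to roughly 620K optimizer steps, which is where the illustrative total_steps above comes from.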

For more detailed training information, please refer to Section 3.4 and Appendix F of the DCLM paper. To ensure our trained model is broadly useful, including for math and coding tasks, we combine our 3.8T DCLM-BASELINE with the StarCoder and ProofPile2 data to arrive at a 4.1T token dataset.

Evaluation

Here are the evaluation results for DCLM-Baseline-1B on various tasks, using the llm-foundry evaluation suite:

| Task | Score |
|------|-------|
| AGI Eval LSAT AR | 0.2348 |
| AGI Eval LSAT LR | 0.3098 |
| AGI Eval LSAT RC | 0.3321 |
| AGI Eval SAT English | 0.3883 |
| AGI Eval SAT Math (CoT) | 0.0182 |
| AQuA (CoT) | 0.0245 |
| ARC (challenge) | 0.4343 |
| ARC (easy) | 0.7290 |
| BBQ | 0.4670 |
| BigBench Conceptual Combinations | 0.4660 |
| BigBench Conlang Translation | 0.0732 |
| BigBench CS Algorithms | 0.4515 |
| BigBench Dyck Languages | 0.1990 |
| BigBench Elementary Math QA | 0.2558 |
| BigBench Language Identification | 0.2911 |
| BigBench Logical Deduction | 0.2480 |
| BigBench Misconceptions | 0.5068 |
| BigBench Novel Concepts | 0.5312 |
| BigBench Operators | 0.2714 |
| BigBench QA Wikidata | 0.6687 |
| BigBench Repeat Copy Logic | 0.1562 |
| BigBench Strange Stories | 0.6839 |
| BigBench Strategy QA | 0.5762 |
| BigBench Understanding Fables | 0.4127 |
| BoolQ | 0.7131 |
| CommonSenseQA | 0.6110 |
| COPA | 0.7900 |
| CoQA | 0.4257 |
| Enterprise PII Classification | 0.5110 |
| GPQA Diamond | 0.2121 |
| GPQA | 0.2344 |
| GSM8K (CoT) | 0.0371 |
| HellaSwag | 0.7087 |
| HellaSwag (zero-shot) | 0.7001 |
| Jeopardy | 0.4218 |
| LAMBADA (OpenAI) | 0.6938 |
| LogiQA | 0.3026 |
| MathQA | 0.2598 |
| MMLU (few-shot) | 0.4193 |
| MMLU (zero-shot) | 0.3543 |
| OpenBookQA | 0.4380 |
| PIQA | 0.7786 |
| PubMedQA (labeled) | 0.2560 |
| Simple Arithmetic (no spaces) | 0.0280 |
| Simple Arithmetic (with spaces) | 0.0300 |
| SIQA | 0.6735 |
| SQuAD | 0.5424 |
| SVAMP (CoT) | 0.1800 |
| TriviaQA (small subset) | 0.3603 |
| Winogender (MC female) | 0.4833 |
| Winogender (MC male) | 0.5000 |
| Winograd | 0.8352 |
| Winogrande | 0.6527 |

Note: All scores are presented as decimal values between 0 and 1, representing the proportion of correct answers or the model's performance on each task.

Limitations and Biases

While DCLM-Baseline-1B demonstrates strong performance across a range of tasks, it's important to note:

  1. The model may exhibit biases present in its training data, which is derived from web crawl data.
  2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
  3. Performance on tasks not included in the evaluation suite may vary.
  4. The model's knowledge is limited to its training data cutoff date.

Ethical Considerations

Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.

Citation

If you use this model in your research, please cite:

@article{Li2024DataCompLM,
  title={DataComp-LM: In search of the next generation of training sets for language models},
  author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
  journal={arXiv preprint arXiv:2406.11794},
  year={2024}
}