---
license: apache-2.0
datasets:
  - PrimeIntellect/fineweb-edu
  - PrimeIntellect/fineweb
  - PrimeIntellect/StackV1-popular
  - mlfoundations/dclm-baseline-1.0-parquet
  - open-web-math/open-web-math
language:
  - en
pipeline_tag: text-generation
---

# INTELLECT-1

## Model Overview

INTELLECT-1 is the first collaboratively trained 10-billion-parameter language model, trained from scratch on 1 trillion tokens of English text and code.

*INTELLECT-1 training visual*

INTELLECT-1 was trained on up to 14 concurrent nodes distributed across 3 continents, with 30 independent community contributors providing compute. The training code uses the prime framework, a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers. The key abstraction that allows dynamic scaling is the ElasticDeviceMesh, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node. The model was trained with the DiLoCo algorithm using 100 inner steps, and the global all-reduce was performed with custom int8 all-reduce kernels, shrinking the communication payload and reducing communication overhead by a factor of 400x.
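
To make the inner/outer structure concrete, below is a minimal, single-process sketch of a DiLoCo-style update: two simulated workers run 100 inner AdamW steps, the pseudo-gradient is averaged (in prime, this average is what the int8 all-reduce computes across the internet), and the outer Nesterov SGD step is applied. The toy `torch.nn.Linear` model, synthetic batches, and hyperparameters are placeholders, not the actual prime implementation.

```python
import copy
import torch

torch.manual_seed(0)
global_model = torch.nn.Linear(16, 16)                    # stands in for the full model
outer_opt = torch.optim.SGD(global_model.parameters(), lr=0.7,
                            momentum=0.9, nesterov=True)  # outer Nesterov SGD

def inner_steps(model, steps=100):
    """Each worker runs `steps` local AdamW updates on its own data shard."""
    inner_opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.randn(8, 16)                            # placeholder batch
        loss = model(x).pow(2).mean()                     # placeholder loss
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()
    return model

# Simulate two workers that start from the same global weights.
workers = [inner_steps(copy.deepcopy(global_model)) for _ in range(2)]

# Pseudo-gradient = global weights - averaged local weights, compressed to int8
# to mimic the reduced communication payload, then applied by the outer optimizer.
outer_opt.zero_grad()
for p_global, *p_locals in zip(global_model.parameters(),
                               *(w.parameters() for w in workers)):
    avg_local = torch.stack([p.data for p in p_locals]).mean(dim=0)
    delta = p_global.data - avg_local
    scale = delta.abs().max() / 127 + 1e-12
    delta_q = (delta / scale).round().clamp(-127, 127).to(torch.int8)
    p_global.grad = delta_q.to(torch.float32) * scale     # dequantized pseudo-gradient
outer_opt.step()
```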

For more detailed technical insights, please refer to our technical paper.

Note: the model will immediately output an EOS token if the BOS token is not set. This is a consequence of the tensor packing used during training and can lead to very poor evaluation scores. A quick sanity check is sketched below.
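
The snippet below is a sketch that assumes the instruct tokenizer exposes the standard `bos_token_id` attribute; it verifies that encoding a prompt prepends the BOS token. If the assertion fails, prepend `tokenizer.bos_token` to the prompt manually.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")
ids = tokenizer("Hello world").input_ids
# The first id should be BOS; otherwise generation collapses to an immediate EOS.
assert ids[0] == tokenizer.bos_token_id, "prepend tokenizer.bos_token to the prompt"
```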

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")
tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")

input_text = "What is the Metamorphosis of Prime Intellect about?"
input_ids = tokenizer.encode(input_text, return_tensors="pt")  # BOS is prepended by the tokenizer
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(output_text)
```

### Example text generation pipeline

```python
import torch
from transformers import pipeline

torch.set_default_device("cuda")

pipe = pipeline("text-generation", model="PrimeIntellect/INTELLECT-1")
print(pipe("What is prime intellect ?"))
```

## Model Details

- Model Contributors: samsja, Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, waiting_, toptickcrypto, sto, Johannes, washout_segment_0b, klee
- Release Date: 29 Nov 2024
- Model License: Apache 2.0

## Technical Specifications

| Parameter | Value |
| --- | --- |
| Parameter Size | 10B |
| Number of Layers | 42 |
| Number of Attention Heads | 32 |
| Hidden Size | 4096 |
| Context Length | 8192 |
| Vocabulary Size | 128256 |
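
These values can be cross-checked against the published model configuration; the sketch below assumes the config exposes the standard Hugging Face field names for these hyperparameters.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")
print(config.num_hidden_layers,        # 42
      config.num_attention_heads,      # 32
      config.hidden_size,              # 4096
      config.max_position_embeddings,  # 8192
      config.vocab_size)               # 128256
```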

**Training Details:**

- Dataset: 55% fineweb-edu, 10% fineweb, 20% Stack V1, 10% dclm-baseline, 5% open-web-math (a mixture sketch is shown below)
- Tokens: 1 Trillion
- Optimizer: DiLoCo/LocalSGD - Inner Optimizer: AdamW, Outer Optimizer: Nesterov SGD
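
For illustration, a mixture like the one above could be expressed with the `datasets` library as weighted interleaving of streaming datasets. The split names and the use of `interleave_datasets` are assumptions for this sketch, not the actual training pipeline.

```python
from datasets import load_dataset, interleave_datasets

names = ["PrimeIntellect/fineweb-edu",               # 55%
         "PrimeIntellect/fineweb",                   # 10%
         "PrimeIntellect/StackV1-popular",           # 20%
         "mlfoundations/dclm-baseline-1.0-parquet",  # 10%
         "open-web-math/open-web-math"]              #  5%
probs = [0.55, 0.10, 0.20, 0.10, 0.05]

streams = [load_dataset(n, split="train", streaming=True) for n in names]
mixture = interleave_datasets(streams, probabilities=probs, seed=42)
```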

## Performance on benchmarks

| Model | Size | Tokens | MMLU | GPQA | GSM8K | ARC-C | Hellaswag |
| --- | --- | --- | --- | --- | --- | --- | --- |
| INTELLECT-Instruct | 10B | 1T | 49.89 | 28.32 | 38.58 | 54.52 | 71.42 |
| MPT-7B-Chat | 7B | 1T | 36.29 | 26.79 | 8.26 | 51.02 | 75.88 |
| Falcon-7B-Instruct | 7B | 1.5T | 25.21 | 26.34 | 4.93 | 45.82 | 70.61 |
| LLM360-AmberChat | 7B | 1.4T | 36.02 | 27.23 | 6.14 | 43.94 | 73.94 |
| LLaMA2-7B-Chat | 7B | 2T | 47.20 | 28.57 | 23.96 | 53.33 | 78.69 |
| LLaMA2-13B-Chat | 13B | 2T | 53.51 | 28.35 | 37.15 | 59.73 | 82.47 |

## Citations

If you use this model in your research, please cite it as follows:

```bibtex
@article{}
```