---
license: apache-2.0
language:
- ko
---
# MPTK-1B
MPTK-1B is a 1.3B-parameter decoder-only transformer language model trained on Korean, English, and code datasets.
It was trained on Cloud TPUs provided through Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/).
## Model Details
### Model Description
MPTK-1B is based on MPT, an architecture that makes a few modifications to the standard decoder-only transformer (see the sketch after this list):

- It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) instead of positional embeddings.
- It does not use biases.
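As a minimal sketch (not the model's actual implementation), ALiBi adds a head-specific linear penalty to the attention scores in place of positional embeddings:

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Build the ALiBi bias that is added to attention scores before softmax."""
    # Geometric slopes per head, as in the ALiBi paper: 2^(-8*1/n), 2^(-8*2/n), ...
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    # Relative distance (j - i): zero on the diagonal, negative for past keys
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).float()  # (seq_len, seq_len)
    # Each head scales the distances by its slope -> (n_heads, seq_len, seq_len)
    return slopes[:, None, None] * distance[None, :, :]

# Added to Q @ K^T / sqrt(d_head) before the causal mask and softmax
bias = alibi_bias(n_heads=16, seq_len=2048)
```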
| Hyperparameter | Value |
|-----------------|-------|
| n_parameters | 1.3B |
| n_layers | 24 |
| n_heads | 16 |
| d_model | 2048 |
| vocab size | 50432 |
| sequence length | 2048 |
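As a rough sanity check (an approximation that ignores layer norms and assumes the standard 4x MLP expansion and tied input/output embeddings), these values reproduce the ~1.3B parameter count:

```python
d_model, n_layers, vocab_size = 2048, 24, 50432

# Per block: 4*d^2 for attention (Q, K, V, output) + 8*d^2 for a 4x MLP
per_block = 4 * d_model**2 + 8 * d_model**2
# Token embedding only; with ALiBi there is no position embedding
embedding = vocab_size * d_model

total = n_layers * per_block + embedding
print(f"{total / 1e9:.2f}B parameters")  # -> 1.31B
```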
## How to Get Started with the Model
Running the model in fp16 may produce NaNs, so we recommend running it in fp32 or bf16.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained("team-lucid/mptk-1b")
model = AutoModelForCausalLM.from_pretrained("team-lucid/mptk-1b")

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')

# Generate under bf16 autocast to avoid the fp16 NaN issue noted above
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe(
            '๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š”',  # "The capital of South Korea is"
            max_new_tokens=100,
            do_sample=True,
        )
    )
```
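Alternatively, the weights can be loaded in bf16 directly (a standard `from_pretrained` option, shown here as a sketch):

```python
model = AutoModelForCausalLM.from_pretrained(
    "team-lucid/mptk-1b",
    torch_dtype=torch.bfloat16,  # load weights in bf16 instead of autocasting
)
```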
## Training Details
### Training Data
The model was trained on Korean data such as [OSCAR](https://oscar-project.org/), mC4, Wikipedia, and Namuwiki,
supplemented with portions of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and
[The Stack](https://huggingface.co/datasets/bigcode/the-stack).
### Training Hyperparameters
| **Hyperparameter** | **Value** |
|--------------------|------------|
| Precision | bfloat16 |
| Optimizer | Lion |
| Learning rate | 2e-4 |
| Batch size | 1024 |
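Lion ([Symbolic Discovery of Optimization Algorithms](https://arxiv.org/abs/2302.06675)) keeps a single momentum buffer and applies sign-based updates. A minimal sketch of one step (default betas from the paper, not the exact training code):

```python
import torch

@torch.no_grad()
def lion_step(p, grad, m, lr=2e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update for parameter p with momentum buffer m."""
    # Update direction: sign of an interpolation of momentum and gradient
    update = (beta1 * m + (1 - beta1) * grad).sign()
    p.add_(update + wd * p, alpha=-lr)          # step with decoupled weight decay
    m.mul_(beta2).add_(grad, alpha=1 - beta2)   # momentum update
```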
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_team-lucid__mptk-1b).
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 17.88 |
| ARC (25-shot) | 22.7 |
| HellaSwag (10-shot) | 25.48 |
| MMLU (5-shot) | 27.11 |
| TruthfulQA (0-shot) | 0.0 |
| Winogrande (5-shot) | 49.72 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.17 |
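The Avg. row is the arithmetic mean of the seven benchmark scores:

```python
scores = [22.7, 25.48, 27.11, 0.0, 49.72, 0.0, 0.17]
print(round(sum(scores) / len(scores), 2))  # -> 17.88
```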