---
license: apache-2.0
datasets:
- wikipedia
language:
- ja
- en
---

# tiny-lm

This repository provides a tiny 16M-parameter language model for debugging and testing purposes.

It was trained on English and Japanese Wikipedia data.

## How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed

# Load the model and tokenizer, then build a text-generation pipeline.
model = AutoModelForCausalLM.from_pretrained("sbintuitions/tiny_lm")
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/tiny_lm", use_fast=False)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Sample a short continuation of the prompt "Hello".
print(generator("Hello", max_length=30, do_sample=True, top_k=1000))
```
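
The example above imports `set_seed` but does not call it. If you want reproducible sampling, a minimal sketch (reusing the `generator` pipeline from the snippet above; the seed value 42 is arbitrary) is:

```python
from transformers import set_seed

# Fix the Python, NumPy, and PyTorch random seeds so repeated runs sample the same text.
set_seed(42)
print(generator("Hello", max_length=30, do_sample=True, top_k=1000))
```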

## Model architecture

A 4-layer, 512-hidden-size Transformer-based language model.
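
If you want to confirm these dimensions programmatically, one option is to load the configuration and read its fields. The attribute names below are assumptions; most causal LM configs expose `num_hidden_layers` and `hidden_size`, possibly as aliases for architecture-specific fields.

```python
from transformers import AutoConfig

# Load only the model configuration (no weights are downloaded).
config = AutoConfig.from_pretrained("sbintuitions/tiny_lm")

# Attribute names assumed here; many config classes map them to
# architecture-specific fields (e.g. n_layer / n_embd for GPT-2-style configs).
print(config.num_hidden_layers)  # expected: 4
print(config.hidden_size)        # expected: 512
```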

## Training

The model was trained on English Wikipedia and Japanese Wikipedia to optimize a conventional causal (next-token prediction) language modelling objective for 25B tokens.
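
For reference, this objective is the cross-entropy loss on next-token prediction. A minimal sketch of evaluating it with this model (reusing `model` and `tokenizer` from the usage example above; the sample sentence is arbitrary):

```python
import torch

# Compute the next-token-prediction (cross-entropy) loss on a sample sentence.
inputs = tokenizer("Wikipedia is a free online encyclopedia.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)  # mean cross-entropy over predicted tokens
```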

## License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)