
PULI GPT-3SX (6.85 billion parameters)

For further details, see our demo site.

  • Hungarian GPT-NeoX model (6.7 billion parameters)
  • Trained with EleutherAI's GPT-NeoX codebase (GitHub)
  • Dataset: 36.3 billion words
  • Checkpoint: 150 000 steps

Limitations

  • max_seq_length = 2048
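Because the context window is limited to 2048 tokens, longer inputs have to be truncated before generation. A minimal sketch of doing this with the tokenizer from the usage examples below (the repeated prompt is only a stand-in for a genuinely long input):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPT-3SX")

# Stand-in for an input that exceeds the context window.
long_prompt = "Elmesélek egy történetet a nyelvtechnológiáról. " * 500

# Truncate to the model's 2048-token limit before passing it to generate().
input_ids = tokenizer(
    long_prompt,
    return_tensors="pt",
    truncation=True,
    max_length=2048,
).input_ids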

Citation

If you use this model, please cite the following paper:

@inproceedings{yang-puli,
    title = {Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre},
    booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
    year = {2023},
    publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
    address = {Szeged, Hungary},
    author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Jelencsik-Mátyus, Kinga and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Vadász, Noémi and Váradi, Tamás},
    pages = {247--262}
}

Usage

from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the model and its tokenizer from the Hugging Face Hub.
model = GPTNeoXForCausalLM.from_pretrained("NYTK/PULI-GPT-3SX")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPT-3SX")
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample up to 100 tokens (prompt included) with temperature 0.9.
gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)

gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
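The published checkpoint is stored in FP16, so on a CUDA GPU the model can be loaded in half precision to roughly halve memory use. A minimal sketch, assuming a GPU is available; torch_dtype and the device placement are the only changes with respect to the example above:

import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the FP16 weights in half precision and move the model to the GPU (assumes CUDA).
model = GPTNeoXForCausalLM.from_pretrained(
    "NYTK/PULI-GPT-3SX", torch_dtype=torch.float16
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPT-3SX")

prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100)
print(tokenizer.batch_decode(gen_tokens)[0])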

Usage with pipeline

from transformers import pipeline, GPTNeoXForCausalLM, AutoTokenizer

# Load the model and tokenizer, then wrap them in a text-generation pipeline.
model = GPTNeoXForCausalLM.from_pretrained("NYTK/PULI-GPT-3SX")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPT-3SX")
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)

print(generator(prompt)[0]["generated_text"])
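The pipeline forwards generation keyword arguments to model.generate, so the same sampling settings as in the example above can be used; a short sketch:

# Generation arguments are passed through the pipeline to model.generate.
print(generator(prompt, do_sample=True, temperature=0.9, max_length=100)[0]["generated_text"])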
