
Generative Pretrained Transformer with 3 billion parameters for Norwegian. The model is continue-trained from the NorGPT-3B model on a selection of documents from the pretraining dataset, including news articles, parliamentary speeches, books, and governmental reports.

It belongs to NorGLM, a suite of pretrained Norwegian Generative Language Models. NorGLM can be used for non-commercial purposes.
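The exact data selection and training configuration are not documented in this card. Purely as an illustration, the sketch below shows what continued pretraining with the Hugging Face Trainer could look like; the base checkpoint id, corpus file, sequence length, and hyperparameters are all assumptions, not the values used for this model.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical base checkpoint; not necessarily the actual training setup.
base_id = "NorGLM/NorGPT-3B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base_id)

# "selected_documents.txt" stands in for the selected news articles,
# parliamentary speeches, books, and governmental reports.
dataset = load_dataset("text", data_files={"train": "selected_documents.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="norgpt-3b-continue", num_train_epochs=1),
    train_dataset=tokenized["train"],
    # mlm=False gives a causal language modeling objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()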

Run the Model

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NorGLM/NorGPT-3B-continue"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='auto',
    torch_dtype=torch.bfloat16
)

text = "Tom ønsket å gå på barene med venner"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
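
generate returns a tensor of token IDs rather than text; decode them to read the completion:

print(tokenizer.decode(outputs[0], skip_special_tokens=True))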

Citation Information

If you find our work helpful, please cite our paper:

@article{liu2023nlebench+,
  title={NLEBench+ NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian},
  author={Liu, Peng and Zhang, Lemei and Farup, Terje Nissen and Lauvrak, Even W and Ingvaldsen, Jon Espen and Eide, Simen and Gulla, Jon Atle and Yang, Zhirong},
  journal={arXiv preprint arXiv:2312.01314},
  year={2023}
}