
Model Card for Munin 7B Alpha

The Munin 7B Alpha Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters, based on Mistral-7B-v0.1.

It has been trained on Danish Gigaword using continual pretraining.

For full details of this model, please read our release blog post. The codebase can be found in our Git repo.

Note: This is an alpha model, and we do not recommend using it in production. If you do use it, please let us know.
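
Since the checkpoint is a standard Hugging Face causal language model (see the Safetensors/BF16 details further down), it can be loaded with the transformers library in the usual way. The snippet below is a minimal sketch for local experimentation, not an official usage example: it assumes transformers, torch, and accelerate are installed, that a GPU with enough memory for a 7B-parameter model in BF16 is available, and the Danish prompt is purely illustrative.

```python
# Minimal sketch: loading Munin 7B Alpha and generating a continuation.
# Assumes transformers, torch, and accelerate are installed and a suitable GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "danish-foundation-models/munin-7b-alpha"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16 (see model details below)
    device_map="auto",
)

# Illustrative Danish prompt; this is a base model, so it simply continues the text.
prompt = "Danmark er et land i Skandinavien, som"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model has no instruction tuning or moderation, prompts should be written as text to be continued rather than as questions or commands.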

Notice

Munin 7B Alpha is, like Mistral 7B, a pretrained base model and therefore does not have any moderation mechanisms.

Development

The model was developed by the Danish Foundation Models Team.


Model size: 7.24B params
Tensor type: BF16
Format: Safetensors

Model tree for danish-foundation-models/munin-7b-alpha

Base model: Mistral-7B-v0.1 (683 finetuned models, including this one)
Adapters: 2 models
Finetunes: 2 models
Merges: 9 models
Quantizations: 2 models
