---
license: apache-2.0
pipeline_tag: text-generation
language:
  - sv
  - en
tags:
  - pretrained
widget:
  - text: Jag tycker att det är roligt med
---

# Model Card for Mistral-7B-v0.1-flashback

Mistral-7B-v0.1-flashback is a continued pretraining of the base Mistral-7B-v0.1 model on roughly 40 GB of forum threads from the Swedish website flashback.org. Training used the QLoRA method, updating about 88 million parameters over a single epoch.
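A minimal generation sketch with the `transformers` library is shown below. The repo id `timpal0l/Mistral-7B-v0.1-flashback` is assumed from context; adjust it to the actual hub path, and note that loading a 7B model requires substantial memory (a GPU with `device_map="auto"` is recommended).

```python
# Assumed hub path for this model card; verify before use.
model_id = "timpal0l/Mistral-7B-v0.1-flashback"
# The example prompt from the widget above ("I think it is fun with...").
prompt = "Jag tycker att det är roligt med"

if __name__ == "__main__":
    # Heavy imports and the model download are kept under the main guard
    # so the constants above can be reused without pulling in the weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=50, do_sample=True)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since this is a base (pretrained) model rather than an instruction-tuned one, it is best prompted with text to continue, as in the widget example.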