Model Card for Mistral-7B-v0.1-flashback
Mistral-7B-v0.1-flashback is a continued pretraining of the base Mistral-7B-v0.1 model on roughly 40 GB of forum threads from the Swedish website flashback.org. Training was done with the QLoRA method, updating about 88 million parameters over a single epoch. A minimal sketch of such a setup is shown below.
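The sketch below illustrates what a QLoRA continued-pretraining setup of this kind could look like with the Hugging Face transformers, peft, and bitsandbytes libraries. The LoRA rank, alpha, target modules, and training arguments are illustrative assumptions, not the configuration actually used; the card only states that roughly 88 million parameters were trained for one epoch.

```python
# Sketch of QLoRA continued pretraining on top of Mistral-7B-v0.1.
# Hyperparameters below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "mistralai/Mistral-7B-v0.1"

# Load the base model in 4-bit NF4 so that only the LoRA adapters
# are trained in full precision (the core idea of QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Assumed LoRA config: rank 32 on all attention and MLP projections
# yields a trainable-parameter count on the order of the ~88M stated above.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # roughly 84M trainable params with these assumed settings

# The tokenized Flashback forum corpus (not included here) would then be
# passed to a standard causal-LM Trainer with num_train_epochs=1.
```

With this setup only the adapter weights receive gradients, which is why a 7B-parameter model can be continued-pretrained on a large corpus while training well under 100 million parameters.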