If I already trained a model on Mistral, do I need to start from scratch because of fine-tuning difficulties?

by brando - opened

I heard rumors that there was a bug in the Mistral 7B tokenizer. I'm asking because I want to know whether I should re-train from scratch or whether using my current checkpoint is OK. What do you suggest?

Hi, I assume you meant to post this on the official model repo? If so, I'd recommend asking in the Discord instead: https://discord.gg/mistralai

PS: This repo was an early port I did before Mistral released their official Hugging Face models, and it shouldn't be used anymore.
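
If you want a quick sanity check before deciding whether to retrain, one option is to compare how your checkpoint's tokenizer and the official release tokenize the same strings. This is a minimal sketch, not a definitive test: the official repo ID `mistralai/Mistral-7B-v0.1` is the current public release, but the local checkpoint path and the probe strings are placeholders you'd substitute with your own.

```python
# Minimal sketch: compare a local checkpoint's tokenizer against the
# official Mistral 7B release. If the token IDs match on a variety of
# inputs, the tokenizer bug likely doesn't affect your checkpoint.
from transformers import AutoTokenizer

official = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
# Hypothetical path -- point this at the checkpoint you trained from.
local = AutoTokenizer.from_pretrained("path/to/your/checkpoint")

# A few probe strings covering plain text, code, and non-ASCII input.
probes = [
    "Hello, world!",
    "def f(x):\n    return x * 2",
    "Ünïcodé and emoji 🙂 edge cases",
]

for text in probes:
    a = official.encode(text)
    b = local.encode(text)
    if a != b:
        print(f"Mismatch on {text!r}:\n  official: {a}\n  local:    {b}")
    else:
        print(f"OK: {text!r}")
```

If the outputs diverge, the tokens your model saw during training differ from what the official tokenizer produces, and retraining (or at least re-tokenizing and continuing training) is worth considering; if they match across a broad set of probes, your checkpoint is probably fine.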
