That was fast!

by rollercoasterX

Thank you, I was looking for these specific files in the original ministral repo but couldn't find them.
May I ask how you converted them? Just so I can learn more about the process.
Thank you!

@prince-canuma Thank you! Can you please explain how you converted the model files from the mistral_inference format to the HF transformers format? I've been racking my brain trying to figure out how to do this. 😋

@bartowski GGUF quant this... 😏

My pleasure!

It required patching both the HF and MLX converters because of the tokenizer.

For HF, I just used convert_to_hf_mistral.py in the transformers repo, changed the vocab size, and skipped the tokenizer.
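
For anyone curious what "changed the vocab size and skipped the tokenizer" amounts to, here is a rough sketch of the general shape of such a conversion. This is not the actual script: the checkpoint file name, the key mapping, the layer count, and the vocab size are all assumptions made to illustrate the idea, and the real converters also permute the q/k projections for HF's rotary embedding convention, which this sketch skips.

```python
# Sketch only: load a consolidated mistral_inference-style checkpoint,
# remap parameter names to the HF Mistral layout, and save (tokenizer
# handled separately). Verify every name against the real files.
import torch
from transformers import MistralConfig, MistralForCausalLM

state = torch.load("consolidated.00.pth", map_location="cpu")  # illustrative file name

num_layers = 36  # whatever params.json says for the model
rename = {
    "tok_embeddings.weight": "model.embed_tokens.weight",
    "norm.weight": "model.norm.weight",
    "output.weight": "lm_head.weight",
}
for i in range(num_layers):
    for src, dst in [
        ("attention.wq", "self_attn.q_proj"),
        ("attention.wk", "self_attn.k_proj"),
        ("attention.wv", "self_attn.v_proj"),
        ("attention.wo", "self_attn.o_proj"),
        ("feed_forward.w1", "mlp.gate_proj"),
        ("feed_forward.w2", "mlp.down_proj"),
        ("feed_forward.w3", "mlp.up_proj"),
        ("attention_norm", "input_layernorm"),
        ("ffn_norm", "post_attention_layernorm"),
    ]:
        rename[f"layers.{i}.{src}.weight"] = f"model.layers.{i}.{dst}.weight"

hf_state = {rename.get(k, k): v for k, v in state.items()}

# The config must match the checkpoint's architecture; 131072 is the
# Tekken-sized vocab -- this is the "changed the vocab size" part.
config = MistralConfig(vocab_size=131072, num_hidden_layers=num_layers)
model = MistralForCausalLM(config)
model.load_state_dict(hf_state, strict=False)  # strict=False while iterating on the mapping
model.save_pretrained("ministral-hf")          # no tokenizer files written here
```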

Thank you @prince-canuma, are there any downsides to skipping the tokenizer conversion? I vaguely recall there being a "slow" and a "fast" tokenizer.
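
For reference on the slow/fast distinction: "fast" tokenizers are the Rust-backed ones from the `tokenizers` library, loaded from a tokenizer.json, while "slow" ones are pure Python. A quick way to check which one you actually got (the repo id below is just an example):

```python
# Check which tokenizer implementation transformers resolved to.
# Substitute the converted model's path or repo id as appropriate.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Ministral-8B-Instruct-2410")
print(type(tok).__name__, tok.is_fast)  # Rust-backed tokenizers report is_fast=True
```

The immediate consequence of skipping the tokenizer during conversion is presumably just that the output directory contains no tokenizer files at all, so one has to be loaded or shipped separately.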
