[AUTO] CVST Tokenizer Badger

#223
Mistral AI org

A scripted PR to update the status of the `transformers` tokenizer.


> [!CAUTION]
> ⚠️ 
> The `transformers` tokenizer might give incorrect results, as it has not been tested by the Mistral team. To make sure that your encoding and decoding are correct, please use `mistral-common` as shown below:
> 
> ```py
> from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
> from mistral_common.protocol.instruct.messages import UserMessage
> from mistral_common.protocol.instruct.request import ChatCompletionRequest
> 
> mistral_models_path = "MISTRAL_MODELS_PATH"
> 
> tokenizer = MistralTokenizer.v1()
> 
> completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
> 
> tokens = tokenizer.encode_chat_completion(completion_request).tokens
> ```
> 
> ## Inference with `mistral_inference`
> 
> ```py
> from mistral_inference.model import Transformer
> from mistral_inference.generate import generate
> 
> model = Transformer.from_folder(mistral_models_path)
> out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
> result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
> 
> print(result)
> ```
> 
> ## Inference with Hugging Face `transformers`
> 
> ```py
> import torch
> from transformers import AutoModelForCausalLM
> 
> device = "cuda"
> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
> model.to(device)
> 
> # mistral-common returns a plain list of token ids, so wrap it in a batch tensor
> input_ids = torch.tensor([tokens], device=device)
> generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)
> 
> # decode with the mistral-common tokenizer, since the transformers one is untested
> decoded = tokenizer.instruct_tokenizer.tokenizer.decode(generated_ids[0].tolist())
> print(decoded)
> ```
> 
> PRs to correct the `transformers` tokenizer so that it gives exactly the same results as the `mistral-common` reference implementation are very welcome!
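
For anyone attempting such a PR, here is a minimal sketch of an equivalence check. The repo id and the single test prompt are illustrative assumptions; a real check should compare token ids across a large, varied set of prompts:

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from transformers import AutoTokenizer

prompt = "Explain Machine Learning to me in a nutshell."

# reference token ids from mistral-common
ref_tokens = MistralTokenizer.v1().encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content=prompt)])
).tokens

# candidate token ids from the transformers chat template (repo id is an example)
hf_tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
hf_tokens = hf_tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}], tokenize=True
)

if ref_tokens == hf_tokens:
    print("Tokenizers agree on this prompt.")
else:
    print(f"Mismatch:\nmistral-common: {ref_tokens}\ntransformers:   {hf_tokens}")
```
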
patrickvonplaten changed pull request status to merged
