About the <|system|>, <|model|>, and <|user|> "tokens"

#6
by ta-ylor - opened

The model card refers to <|system|>, <|model|>, and <|user|> as tokens, but they don't appear in tokenizer.json, and I can't get them to tokenize as single tokens when running the model myself. Is there a typo in the model card, or am I misunderstanding something?
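
For reference, this is roughly how I was checking (the repo id below is just my guess at the right checkpoint - substitute whichever one you're actually loading):

from transformers import AutoTokenizer

# Placeholder repo id - swap in the checkpoint this card belongs to
tokenizer = AutoTokenizer.from_pretrained("PygmalionAI/pygmalion-2-7b")

# If these were real special tokens, each tag would map to a single id;
# for me they get split into several sub-word tokens instead
for tag in ["<|system|>", "<|user|>", "<|model|>"]:
    print(tag, tokenizer.encode(tag, add_special_tokens=False))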

Can anyone teach me how to write a proper prompt?

Though it would make sense for them to be actual tokens, I don't think "token" is meant in the tokenizer.json sense here; I think it's meant in the everyday English sense (so each tag wouldn't get tokenised to a single token). But hopefully someone from the Pygmalion team can confirm.

My understanding is that they should appear as plain text in your prompts; at least, that's how I've had the most success. Here's an example from a project I'm working on.

First things first: I take my conversation history and format it for the model:

from typing import Dict, List

def formatConversation(messages: List[Dict[str, str]]):
    # The f-string shows how I insert the tags into the prompt
    # (which could be entirely wrong, but I've had some success with it when testing)
    return '\n'.join([f"<|{message['role']}|>{message['content']}" for message in messages])

Then I pass in my conversation history and append the <|assistant|> tag to the end of the text:

formattedConversation = formatConversation(conversation.messages) + '<|assistant|>'
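
So for a toy conversation, the final prompt ends up looking something like this (the messages here are made up purely to illustrate the shape of the prompt):

# Hypothetical conversation history, just to show the final prompt layout
conversation_messages = [
    {"role": "system", "content": "Enter RP mode. You are Aria, a cheerful barista."},
    {"role": "user", "content": "Hi! What's good here?"},
]

print(formatConversation(conversation_messages) + '<|assistant|>')
# <|system|>Enter RP mode. You are Aria, a cheerful barista.
# <|user|>Hi! What's good here?
# <|assistant|>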

Which gets passed to the text-generation pipeline to generate responses:

pipeline(
    formattedConversation,  # <= convo history with assistant tag appended
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
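
In case it helps, the pipeline and tokenizer used above are just the standard transformers setup, roughly like this (the model id and dtype are assumptions - use whatever you're actually running):

import torch
import transformers
from transformers import AutoTokenizer

model_id = "PygmalionAI/pygmalion-2-7b"  # placeholder - swap in your checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# This is the `pipeline` callable used in the snippet above
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)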

I should probably add that this is my first major LLM project, so I'm still getting my head around the various config options (the ones provided to the pipeline in the snippet above were taken from the Hugging Face docs for the Llama 2 model this one is built on top of).
