https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L404
Maybe we should organize the prompt like this: `<BOS><PRE> {pre} <SUF>{suf} <MID>`
You can pass `add_special_tokens=True` to the tokenizer so it adds the special tokens automatically.
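A minimal sketch of how that prompt layout could be assembled, assuming the `<PRE>`/`<SUF>`/`<MID>` marker order from the linked `generation.py` (the `build_infill_prompt` helper here is hypothetical, not part of any library):

```python
def build_infill_prompt(pre: str, suf: str) -> str:
    # <BOS> is deliberately omitted: the tokenizer prepends it when
    # called with add_special_tokens=True, so spelling it out here
    # would duplicate it. Marker spacing follows the linked reference:
    # a space after <PRE>, a space before <SUF> and <MID>.
    return f"<PRE> {pre} <SUF>{suf} <MID>"

prompt = build_infill_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(1, 2))",
)
print(prompt)
```

The model is then expected to generate the middle span after `<MID>`; the prefix and suffix strings are only illustrative.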