This model seems to be incomplete?
#1 opened by jeffersonchou
The repo is missing tokenizer.model, tokenizer_config.json, tokenization_baichuan.py, and special_tokens_map.json; it may also be missing added_tokens.json. In addition, config.json has "vocab_size": 64016, which doesn't match the official 64000. Could you upload the missing files? Thanks!
Thanks for the reminder; the files have been uploaded.
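For reference, here is a minimal sketch of how one might verify that the tokenizer files load and that config.json's vocab_size is consistent with the tokenizer; the repo id below is a placeholder, since the actual repository name isn't stated in this thread:

```python
# Minimal consistency check: load the tokenizer and config, then compare vocab sizes.
from transformers import AutoConfig, AutoTokenizer

repo_id = "your-org/your-baichuan-model"  # placeholder; replace with the actual repo id

# trust_remote_code=True is needed because the tokenizer class is defined in
# tokenization_baichuan.py, which ships with the repository.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)

# A mismatch here would surface issues like 64016 in config.json vs. the official 64000.
print("config.vocab_size:", config.vocab_size)
print("tokenizer vocabulary size:", len(tokenizer))
```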
butyuhao changed discussion status to closed
butyuhao changed discussion status to open
butyuhao changed discussion status to closed