Interview request: genAI evaluation & documentation
#42 opened about 1 month ago by meggymuggy
Update README.md
#41 opened about 2 months ago by xianfeng6666
Does baichuan2_13B_chat support function calling?
#40 opened 7 months ago by daisr
Fast tokenizer
#39 opened 7 months ago by soominc
Which framework is best for accelerated inference?
#38 opened 8 months ago by daisr
Adding `safetensors` variant of this model
#37 opened 8 months ago by SFconvertbot
Cannot run inference: deployed Baichuan2-13b-Chat with 2x RTX 3090 24G and FastChat
#36 opened 8 months ago by Rethen
How to stream the output
#35 opened 8 months ago by DeyangKong (3 comments)
Issues with local quantized deployment of the Baichuan model
#34 opened 9 months ago by Jason123321123 (1 comment)
Error during inference with the official example code after deploying Baichuan2-13b-chat
#31 opened 11 months ago by lhlnlp (4 comments)
Update modeling_baichuan.py for TorchScript mode with past_kv
#30 opened 11 months ago by changwangss
How to run inference with the .pth file obtained after fine-tuning baichuan2-13b
#29 opened 11 months ago by ddq2020 (1 comment)
Asking about a new version of the Baichuan LLM
#28 opened 12 months ago by phamvantoan
Update modeling_baichuan.py
#27 opened 12 months ago by ybelkada (1 comment)
Asking about the performance of retrieving local data
#25 opened 12 months ago by phamvantoan
Asking about the prompt template for Baichuan2-13B-Chat
#24 opened 12 months ago by phamvantoan
AttributeError: 'list' object has no attribute 'as_dict'
#23 opened 12 months ago by RR0825 (1 comment)
How to accelerate the inference speed
#22 opened 12 months ago by tobywang (2 comments)
Baichuan 2 192k context length
#21 opened about 1 year ago by Ekolawole
Baichuan 192K weight release?
#20 opened about 1 year ago by Yhyu13
AttributeError: 'BaichuanTokenizer' object has no attribute 'sp_model'
#18 opened about 1 year ago by lucasjin (7 comments)
About ALiBi positional encoding
#17 opened about 1 year ago by Hunter1943 (1 comment)
Local deployment of the Baichuan model
#14 opened about 1 year ago by Yuwh07 (2 comments)
Update modeling_baichuan.py
#12 opened about 1 year ago by JaheimLee
xops usage is inconsistent with the 7B model
#11 opened about 1 year ago by JaheimLee