MiniCPM-MoE-8x2B is a decoder-only, transformer-based generative language model. It adopts a Mixture-of-Experts (MoE) architecture with 8 experts per layer, of which 2 are activated for each token.
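For illustration only, the following is a minimal sketch of top-2 expert routing as described above. The class name, layer dimensions, and expert MLP shape are hypothetical and are not taken from the MiniCPM-MoE-8x2B implementation.

import torch
import torch.nn as nn

class Top2MoELayer(nn.Module):
    """Illustrative MoE feed-forward layer: 8 experts, 2 activated per token."""
    def __init__(self, hidden_size=512, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.SiLU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        ])
        self.top_k = top_k

    def forward(self, x):
        # x: (num_tokens, hidden_size)
        scores = self.router(x).softmax(dim=-1)                # score all 8 experts per token
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)  # keep only the 2 best experts
        top_scores = top_scores / top_scores.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e                   # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += top_scores[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = Top2MoELayer()
y = layer(torch.randn(4, 512))  # four tokens in, four token representations out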
This version of the model has been instruction-tuned but not further aligned with RLHF or other preference-tuning methods. The chat template is applied automatically.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

torch.manual_seed(0)

path = 'openbmb/MiniCPM-MoE-8x2B'
tokenizer = AutoTokenizer.from_pretrained(path)
# trust_remote_code is required because model.chat is defined in the repository's custom modeling code.
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)

# Prompt: "Which is the highest mountain in Shandong Province? Is it higher or lower than Mount Huangshan, and by how much?"
response, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8)
print(response)
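Because the chat template is applied automatically, the same call can also be sketched with the generic transformers API. This is a minimal sketch assuming the tokenizer ships a chat template; max_new_tokens is an illustrative value, while the prompt and sampling settings mirror the example above.

messages = [{"role": "user", "content": "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?"}]
# Apply the chat template manually and generate with sampling enabled.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.8, top_p=0.8)
# Strip the prompt tokens before decoding so only the reply is printed.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))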