
Original code and examples: https://github.com/FlagAI-Open/FlagAI/tree/master/examples/Aquila

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('qhduan/aquila-7b')
model = AutoModelForCausalLM.from_pretrained('qhduan/aquila-7b', trust_remote_code=True)
model = model.eval().half().cuda()  # fp16 inference on GPU

prompt = '北京在哪儿?'  # "Where is Beijing?"
with torch.no_grad():
    ret = model.generate(
        **tokenizer(prompt, return_tensors='pt').to('cuda'),
        do_sample=False,      # greedy decoding
        max_new_tokens=200,
        use_cache=True
    )
print(tokenizer.decode(ret[0]))
# 北京在哪儿? 北京是中国的首都,是中华人民共和国的首都,是全国政治、经济、文化、交通中心,是世界著名古都和现代化国际城市
```
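The call above uses greedy decoding (`do_sample=False`), which always picks the highest-probability token and so returns the same text every run. For more varied output, sampling can be enabled instead. The sketch below assumes `model`, `tokenizer`, and `prompt` are already set up as in the snippet above; the `temperature` and `top_p` values are illustrative, not recommendations from the model authors.

```python
# Sampling-based generation (illustrative settings; assumes `model`,
# `tokenizer`, and `prompt` from the snippet above are already loaded)
with torch.no_grad():
    ret = model.generate(
        **tokenizer(prompt, return_tensors='pt').to('cuda'),
        do_sample=True,       # sample from the token distribution instead of greedy argmax
        temperature=0.8,      # illustrative: <1 sharpens, >1 flattens the distribution
        top_p=0.95,           # nucleus sampling: keep the smallest token set with cumulative prob >= 0.95
        max_new_tokens=200,
        use_cache=True
    )
print(tokenizer.decode(ret[0], skip_special_tokens=True))
```

Because the output is sampled, each run may produce a different continuation.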

The Aquila-7B and Aquila-33B open-source models are released under the BAAI (智源) Aquila series model license agreement; the original code is licensed under Apache License 2.0.
