
How to use

We recommend running this model in an environment with at least 60GB of VRAM, ideally one or more A100 (80GB) GPUs.
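Before loading, it can help to sanity-check the available GPU memory against that recommendation. A minimal sketch using PyTorch's CUDA utilities (the 60GB threshold is simply the figure quoted above):

import torch

# Total VRAM across all visible GPUs, in GB.
total_vram_gb = sum(
    torch.cuda.get_device_properties(i).total_memory
    for i in range(torch.cuda.device_count())
) / 1024**3
print(f"Total VRAM: {total_vram_gb:.1f} GB")
if total_vram_gb < 60:
    print("Warning: less than 60 GB of VRAM; the model may not fit.")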

Hugging Face

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the tokenizer and the AWQ-quantized model, spreading weights across available GPUs.
tokenizer = AutoTokenizer.from_pretrained("lightblue/ao-karasu-72B-AWQ-4bit")
model = AutoModelForCausalLM.from_pretrained("lightblue/ao-karasu-72B-AWQ-4bit", device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Build a Japanese chat: system prompt "You are an AI assistant.",
# user question "Who is the Prime Minister of the United Kingdom?"
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})

# Render the chat into the model's prompt format without tokenizing.
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)

# Greedy generation; the temperature value is ignored when do_sample=False.
pipe(prompt, max_new_tokens=100, do_sample=False, temperature=0.0, return_full_text=False)
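The pipeline returns a list with one dict per prompt; with return_full_text=False, the completion alone is stored under the "generated_text" key, so you can capture and print it like this:

outputs = pipe(prompt, max_new_tokens=100, do_sample=False, return_full_text=False)
print(outputs[0]["generated_text"])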

vLLM

from vllm import LLM, SamplingParams

# Greedy decoding, capped at 100 new tokens.
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/ao-karasu-72B-AWQ-4bit", quantization="awq")

# Same Japanese chat as above: system "You are an AI assistant.",
# user "Who is the Prime Minister of the United Kingdom?"
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})
prompt = llm.get_tokenizer().apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Training details

English dev blog

Japanese blog

Training data

Roughly 20 million characters sampled from a dataset of more than 1.1 billion characters, which was made up of the following sources (a proportional-sampling sketch follows the list):

~450 million characters from Wikipedia-based QA (same as Qarasu)

~200 million characters from technical blogs (new)

~200 million characters from Japanese QA site answers (new)

~100 million characters from LLM generated prompts and responses (same as Qarasu)

~70 million characters from news articles (new)
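For illustration only, the arithmetic behind drawing a ~20 million character sample from sources of these sizes can be sketched as a proportional allocation; the source names and the proportional scheme below are assumptions for the example, not the actual sampling pipeline:

# Hypothetical per-source character counts (millions), mirroring the list above.
sources = {
    "wikipedia_qa": 450,
    "tech_blogs": 200,
    "qa_site_answers": 200,
    "llm_generated": 100,
    "news": 70,
}
budget = 20  # total millions of characters to sample
total = sum(sources.values())

# Allocate the budget in proportion to each source's size.
allocation = {name: round(budget * size / total, 2) for name, size in sources.items()}
print(allocation)  # e.g. wikipedia_qa contributes ~8.82M of the 20M characters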

Training schedule

Trained for ~1 day on an A100 (80GB) GPU
