
Sarashina2-7B

This repository provides large language models trained by SB Intuitions.

How to use

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed
 
model = AutoModelForCausalLM.from_pretrained("sbintuitions/sarashina2-7b", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-7b")
# If you want to use the slow tokenizer:
# tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-7b", use_fast=False)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(123)
 
text = generator(
    "おはようございます、今日の天気は",
    max_length=30,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
    num_return_sequences=3,
)

for t in text:
    print(t)
 
# Example outputs generated by the sarashina2-7b model:
# {'generated_text': 'おはようございます、今日の天気は晴れです。ちょっと風が強い。\n昨日は、久しぶりにゆっくりとしていました。\n2週間位間があいてしまったかも、でもその間に'}
# {'generated_text': 'おはようございます、今日の天気は曇。朝は曇っていてどんよりしていましたね。昼からは晴れそうですが。気温は徐々に上昇しています。昨日は春らしい陽気でした。'}
# {'generated_text': 'おはようございます、今日の天気はくもり、少し寒気がします。 この土日に、家族で一泊二日で旅行に行ってきました。といっても、100キロ'}
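
If you prefer to call the model directly instead of going through the pipeline, the following minimal sketch continues from the snippet above and uses model.generate; the sampling parameters (max_new_tokens, top_p, temperature) are illustrative values, not recommended settings.

inputs = tokenizer("おはようございます、今日の天気は", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=30,        # illustrative value
        do_sample=True,
        top_p=0.95,               # illustrative value
        temperature=0.7,          # illustrative value
        pad_token_id=tokenizer.pad_token_id,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))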

Configuration

Parameters  Vocab size  Training tokens  Architecture  Position type  Layers  Hidden dim  Attention heads
7B          102400      2.1T             Llama2        RoPE           32      4096        32
13B         102400      2.1T             Llama2        RoPE           40      5120        40
70B         102400      2.1T             Llama2        RoPE           80      8192        64
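
The table values for the 7B model can be cross-checked against the published model configuration. A minimal sketch, assuming the standard Llama configuration attribute names in transformers:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("sbintuitions/sarashina2-7b")
print(config.vocab_size)           # 102400 (per the table above)
print(config.num_hidden_layers)    # 32
print(config.hidden_size)          # 4096
print(config.num_attention_heads)  # 32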

Training Corpus

For our Japanese training data, we used the Japanese portion of the Common Crawl corpus, the largest publicly available web corpus. We cleaned the corpus with CCNet and HojiChar; after cleaning, our Japanese training data contains about 1T tokens.

For our English training data, we extracted English documents from SlimPajama, excluding the Books3 corpus because of copyright concerns.
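
The exact cleaning rules are not reproduced here, and neither CCNet nor HojiChar ships with this model. The following is only a hypothetical, simplified illustration of the kind of rule-based document filtering such tools perform; the thresholds and rules are invented for the example.

def keep_document(text: str, min_chars: int = 400, max_symbol_ratio: float = 0.3) -> bool:
    # Hypothetical filter: drop very short documents and documents dominated
    # by non-letter symbols. Real pipelines (CCNet, HojiChar) apply many more rules.
    if len(text) < min_chars:
        return False
    symbols = sum(1 for ch in text if not ch.isalnum() and not ch.isspace())
    return symbols / max(len(text), 1) <= max_symbol_ratio

docs = ["..."]  # raw Common Crawl documents (placeholder)
cleaned = [d for d in docs if keep_document(d)]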

Tokenization

We use a SentencePiece tokenizer with a unigram language model and byte fallback. We do not apply pre-tokenization with a Japanese tokenizer, so raw sentences can be fed directly into the tokenizer.
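
For example, a raw Japanese sentence can be tokenized directly, without any external pre-tokenizer. The sentence and outputs below are illustrative; the actual token splits depend on the trained vocabulary.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-7b")
ids = tokenizer.encode("今日はいい天気ですね。")
print(tokenizer.convert_ids_to_tokens(ids))            # subword pieces from the unigram model
print(tokenizer.decode(ids, skip_special_tokens=True)) # round-trips back to the raw sentence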

Ethical Considerations and Limitations

Sarashina2 has not yet been instruction-tuned. Therefore, it may generate meaningless sequences, inaccurate content, or biased/objectionable outputs. Before using Sarashina2, we ask developers to tune the model based on human preferences and safety considerations.

License

MIT License
