---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- wikipedia
- CulturaX
license:
- other
extra_gated_prompt: >-
  By clicking "Agree", you agree to the [License Agreement](https://huggingface.co/stabilityai/japanese-stablelm-2-base-1_6b/blob/main/LICENSE.txt)
  and acknowledge Stability AI's [Privacy Policy](https://stability.ai/privacy-policy).
extra_gated_fields:
  Name: text
  Email: text
  Country: country
  Organization or Affiliation: text
  Receive email updates and promotions on Stability AI products, services, and research?:
    type: select
    options:
    - 'Yes'
    - 'No'
---

# Japanese Stable LM 2 Base 1.6B

![A beautiful anime-like hummingbird flying with the text "Japanese Stable LM 2" below it, with a lofi anime landscape of Mount Fuji forming the outline of the text "Japanese Stable LM 2"](./japanese-stablelm-bird.png)

> A beautiful anime-like hummingbird flying with the text "Japanese Stable LM 2" below it, with a lofi anime landscape of Mount Fuji forming the outline of the text "Japanese Stable LM 2" — [Stable Diffusion 3](https://stability.ai/news/stable-diffusion-3)

Please note: For commercial use, please refer to [https://stability.ai/membership](https://stability.ai/membership).

## Model Description

`Japanese Stable LM 2 Base 1.6B` is a 1.6B-parameter decoder-only language model based on [Stable LM 2 1.6B](https://huggingface.co/stabilityai/stablelm-2-1_6b) that has been fine-tuned on a diverse collection of Japanese data, with the intent of maximizing downstream performance on Japanese language tasks.

For an instruction-following model, check [Japanese Stable LM 2 Instruct 1.6B](https://huggingface.co/stabilityai/japanese-stablelm-2-instruct-1_6b).

## Usage

Get started generating text with `Japanese Stable LM 2 Base 1.6B` by using the following code snippet:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "stabilityai/japanese-stablelm-2-base-1_6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# The next line may need to be modified depending on your environment
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
    trust_remote_code=True,
)

prompt = """
AI で科学研究を加速するには、
""".strip()

inputs = tokenizer(
    prompt,
    add_special_tokens=True,
    return_tensors="pt"
).to(model.device)

# Fix the random seed for reproducibility;
# feel free to change it to get different results.
seed = 23
torch.manual_seed(seed)

tokens = model.generate(
    **inputs,
    max_new_tokens=128,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```

We suggest experimenting with different generation settings (`top_p`, `repetition_penalty`, etc.) to find the best setup for your task: for example, use a higher temperature for roleplay and a lower temperature for reasoning. A short sketch with adjusted settings follows the Model Details section below.

## Model Details

* **Model type**: `Japanese Stable LM 2 Base 1.6B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: See the [LICENSE file](https://huggingface.co/stabilityai/japanese-stablelm-2-base-1_6b/blob/main/LICENSE.txt).
* **Commercial License**: To use this model commercially, please refer to [https://stability.ai/membership](https://stability.ai/membership).
* **Contact**: For technical questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP).

For future announcements / information about Stability AI models, research, and events, please follow [@StabilityAI_JP](https://twitter.com/StabilityAI_JP).
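As noted in the Usage section, generation settings can be tuned per task. The snippet below is a minimal sketch that reuses the `model` and `tokenizer` objects from the Usage example with lower-temperature sampling and a mild repetition penalty for reasoning-style prompts; the prompt and the specific parameter values are illustrative assumptions, not tuned recommendations.

```python
# Continues from the Usage snippet above (reuses `model` and `tokenizer`).
# Hypothetical reasoning-style prompt; replace with your own.
prompt = "次の問いに答えてください。1から10までの整数の和は、"
inputs = tokenizer(prompt, add_special_tokens=True, return_tensors="pt").to(model.device)

tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.3,         # lower temperature -> more deterministic output
    top_p=0.9,
    repetition_penalty=1.1,  # mildly discourage repeated phrases
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```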
## Model Architecture

The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications:

| Parameters    | Hidden Size | Layers | Heads | Sequence Length |
|---------------|-------------|--------|-------|-----------------|
| 1,644,417,024 | 2048        | 24     | 32    | 4096            |

* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms, as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
* **Biases**: We remove all bias terms from the feed-forward networks and multi-head self-attention layers, except for the biases of the query, key, and value projections ([Bai et al., 2023](https://arxiv.org/abs/2309.16609)).
* **Tokenizer**: We use Arcade100k, a BPE tokenizer extended from OpenAI's [`tiktoken.cl100k_base`](https://github.com/openai/tiktoken). We split digits into individual tokens following findings by [Liu & Low (2023)](https://arxiv.org/abs/2305.14201); a short tokenization sketch follows the Use and Limitations section below.

## Training Dataset

A mixture of the following corpora was used for continued pre-training.

- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [CulturaX](https://arxiv.org/abs/2309.09400)

## Use and Limitations

### Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications. For commercial use, please refer to [https://stability.ai/membership](https://stability.ai/membership).

### Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.
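To make the tokenizer behavior described under Model Architecture concrete, the following minimal sketch loads the tokenizer on its own and prints the tokens for a short Japanese string containing digits. The example string is an assumption chosen for illustration, and the printed token strings may appear as byte-level pieces depending on the tokenizer implementation.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "stabilityai/japanese-stablelm-2-base-1_6b", trust_remote_code=True
)

# Arcade100k splits digits into individual tokens, so "2024" should come out
# as separate single-digit pieces rather than a single multi-digit token.
text = "2024年の人口は約1400万人です。"
print(tokenizer.tokenize(text))
print(tokenizer(text, add_special_tokens=False)["input_ids"])
```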
## Authors

This model was developed by the Research & Development team at Stability AI Japan, and the development was led by Meng Lee (@leemeng) and Naoki Orii (@mrorii). The members of the team are as follows:

- [Meng Lee](https://huggingface.co/leemeng)
- [Naoki Orii](https://huggingface.co/mrorii)
- [Paul McCann](https://huggingface.co/polm-stability)
- [Yusuke Shibui](https://huggingface.co/cvusk)
- [Fujiki Nakamura](https://huggingface.co/fujiki)
- [Duy Phung](https://huggingface.co/pvduy)
- Maksym Zhuravinskyi
- Dakota Mahan
- [Jerry Chi](https://jerrychi.com)

## How to cite

```
@misc{JapaneseStableLM2Base1.6B,
      url={https://huggingface.co/stabilityai/japanese-stablelm-2-base-1_6b},
      title={Japanese Stable LM 2 Base 1.6B},
      author={Lee, Meng and Nakamura, Fujiki and McCann, Paul and Orii, Naoki and Shibui, Yusuke and Phung, Duy and Zhuravinskyi, Maksym and Mahan, Dakota and Chi, Jerry}
}
```