---
license: llama3.1
pipeline_tag: text-generation
language:
- ja
- en
tags:
- japanese
- llama
- llama-3
inference: false
---
# Llama-3.1-70B-Japanese-Instruct-2407
## Model Description
This is a Japanese continually pre-trained model based on [meta-llama/Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct).
## Usage
Llama 3.1 support requires a recent version of `transformers` (>= 4.43.0), so make sure to update your installation via `pip install --upgrade transformers`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the model across available GPUs with its native dtype
model = AutoModelForCausalLM.from_pretrained(
    "cyberagent/Llama-3.1-70B-Japanese-Instruct-2407",
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/Llama-3.1-70B-Japanese-Instruct-2407")

# Stream generated tokens to stdout, omitting the prompt and special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "AIによって私たちの暮らしはどのように変わりますか?"}
]

# Render the conversation with the Llama 3.1 chat template and move it to the model's device
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024, streamer=streamer)
```
## Prompt Format
This model follows the Llama 3.1 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ assistant_message_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
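In practice, `tokenizer.apply_chat_template` renders this format for you. As a rough illustration of the mapping (not the official template; it assumes the standard Llama 3.1 convention of a blank line after each header), a pure-Python sketch might look like:

```python
# Hypothetical helper for illustration only: shows how a message list
# maps onto the Llama 3.1 prompt format above. Use apply_chat_template in real code.
def build_llama31_prompt(messages, add_generation_prompt=True):
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
        prompt += m["content"] + "<|eot_id|>"
    if add_generation_prompt:
        # Open an assistant turn so the model continues as the assistant
        prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
print(build_llama31_prompt(messages))
```

Passing `add_generation_prompt=True` (as in the usage example above) appends the opening assistant header, which is why generation stops at `<|eot_id|>` rather than echoing the prompt structure.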
## License
[Meta Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## How to cite
```tex
@misc{cyberagent-llama-3.1-70b-japanese-instruct-2407,
title={cyberagent/Llama-3.1-70B-Japanese-Instruct-2407},
url={https://huggingface.co/cyberagent/Llama-3.1-70B-Japanese-Instruct-2407},
author={Ryosuke Ishigami},
year={2024},
}
```
## Citations
```tex
@article{llama3.1modelcard,
    title={Llama 3.1 Model Card},
    author={AI@Meta},
    year={2024},
    url={https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md}
}
```