---
base_model:
- tokyotech-llm/Swallow-70b-instruct-hf
- allenai/tulu-2-dpo-70b
tags:
- mergekit
- merge
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---

# Superswallow-70b-baseline

**This model is provided for researchers working on LLM merging.**

This model does not achieve sufficient results in terms of "[Absorbing Abilities](https://arxiv.org/abs/2311.03099)". However, it should still be useful for benchmarking and model-merging experiments.

For the [13B version of this baseline model](https://huggingface.co/nitky/Superswallow-13B-baseline), I will add [elyza/ELYZA-tasks-100](https://huggingface.co/datasets/elyza/ELYZA-tasks-100) benchmark results and a technical report on merging LLMs trained in different languages at a later date.

**Important Notice:**

This model partially utilizes the parameters of Tulu V2 DPO, which was fine-tuned from Llama 2, so it may inherit the AI2 ImpACT license. Please use the model with the understanding that the license may change if AI2 contacts me.

The [AI2 ImpACT license](https://allenai.org/impact-license) includes terms for data artifacts and model artifacts, but it does not cover the case of directly applying parts of a model artifact's LLM parameters to other models. However, I respect their research and great work, so I will change the license immediately if AI2 contacts me.

## Description

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The model was created by injecting the ability to follow user intent from [Tulu 2 DPO](https://arxiv.org/abs/2311.10702) into the [Swallow](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) instruct model.

It was a proof of concept for merging LLMs trained in different languages, and close attention was paid to preserving the linguistic capabilities of the merge's base model.

As far as I know, Swallow is the Llama 2 model family covering the full set of sizes (7B, 13B, 70B) that produces the most natural Japanese output. I therefore used it as the base model for this merge, and I thank the Swallow team for their wonderful work.

## Test environment

This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I used the `Null preset` for generation, which corresponds to the following sampling parameters (a minimal `generate` sketch follows the list):

- temperature: 1
- top_p: 1
- repetition_penalty: 1
- top_k: 0
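
These settings can be reproduced outside the web UI by passing them directly to `generate`. This is a minimal sketch, assuming `model`, `tokenizer`, and `prompt` are prepared as in the usage example further below:

```python
# "Null preset": neutral sampling settings, i.e. plain sampling from the
# model's distribution with no truncation or penalty applied.
input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=200,
    temperature=1.0,         # no temperature scaling
    top_p=1.0,               # nucleus sampling disabled
    repetition_penalty=1.0,  # no repetition penalty
    top_k=0,                 # top-k filtering disabled
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```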

## Prompt template: Swallow (Alpaca format)

```
以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。リクエストを適切に完了するための回答を記述してください。

### 指示:
{instruction}

### 応答:
```
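
For reference, the Japanese template is the standard Alpaca preamble: "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.", where "### 指示:" means "### Instruction:" and "### 応答:" means "### Response:".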

## Use the instruct model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nitky/Superswallow-70b-baseline"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    device_map="auto",
    load_in_4bit=True,
)

PROMPT_DICT = {
    "prompt_input": (
        "以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。"
        "リクエストを適切に完了するための回答を記述してください。\n\n"
        "### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:"
    ),
    "prompt_no_input": (
        "以下に、あるタスクを説明する指示があります。"
        "リクエストを適切に完了するための回答を記述してください。\n\n"
        "### 指示:\n{instruction}\n\n### 応答:"
    ),
}


def create_prompt(instruction, input=None):
    """
    Generates a prompt based on the given instruction and an optional input.
    If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
    If no input is provided, it uses the 'prompt_no_input' template.

    Args:
        instruction (str): The instruction describing the task.
        input (str, optional): Additional input providing context for the task. Default is None.

    Returns:
        str: The generated prompt.
    """
    if input:
        # Use the 'prompt_input' template when additional input is provided
        return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
    else:
        # Use the 'prompt_no_input' template when no additional input is provided
        return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)


# Example usage
instruction_example = "以下のトピックに関する詳細な情報を提供してください。"  # "Please provide detailed information on the following topic."
input_example = "東京工業大学の主なキャンパスについて教えてください"  # "Tell me about the main campuses of Tokyo Institute of Technology."
prompt = create_prompt(instruction_example, input_example)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=200,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.15,
    top_k=20,
    do_sample=True,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
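
Note that `load_in_4bit=True` quantizes the model at load time and requires the `bitsandbytes` package; if you have enough GPU memory (roughly 140 GB for a 70B model in bfloat16), you can drop that argument to run the model unquantized.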

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [tokyotech-llm/Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf) as the base.
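
As background, DARE operates on "task vectors" (the fine-tuned weights minus the base weights): it randomly drops a fraction of each delta and rescales the survivors so that the expected update is preserved, and TIES-style sign election then resolves conflicts between models before the deltas are added back to the base. The following is a toy sketch of the DARE step only, not mergekit's actual implementation:

```python
import torch

def dare_delta(finetuned: torch.Tensor, base: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each element of the task vector with probability `density` and
    rescale the survivors by 1/density, so the expected delta is unchanged."""
    delta = finetuned - base
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# With `density: 1` (as in the configuration below) nothing is dropped, and
# the full Tulu 2 DPO delta is blended in at the per-tensor weights shown there.
```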

### Models Merged

The following models were included in the merge:
* [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: tokyotech-llm/Swallow-70b-instruct-hf
    # no parameters necessary for base model
  - model: allenai/tulu-2-dpo-70b # follow user intent
    parameters:
      density: 1
      weight:
        - filter: mlp
          value: 0.1
        - filter: self_attn
          value: 0.45
        - value: 0 # fallback for rest of tensors
merge_method: dare_ties
base_model: tokyotech-llm/Swallow-70b-instruct-hf
dtype: bfloat16
tokenizer_source: union
```
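
To reproduce the merge, mergekit provides the `mergekit-yaml` command: assuming mergekit is installed (`pip install mergekit`) and the configuration above is saved as `config.yml`, running `mergekit-yaml config.yml ./output-model-directory --cuda` should produce the merged weights.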