---
license: llama2
---

# Dromedary-2 Model Card

## Model details

<div style="display: flex; justify-content: center; align-items: center;">
<img src="https://raw.githubusercontent.com/IBM/SALMON/main/assets/images/salmon_logo_with_text.jpeg" alt="SALMON Logo" style="height: 256px; margin-right: 10px;"/>
<img src="https://raw.githubusercontent.com/IBM/Dromedary/main/assets/images/dromedary_logo.svg" alt="Dromedary Logo" style="height: 256px;"/>
</div>

**Model type:**
Dromedary-2 is an open-source, self-aligned language model trained with minimal human supervision using the SALMON (Self-Alignment with Principle-Following Reward Models) technique.
The base language model is LLaMA-2-70b, based on the transformer architecture.

**NOTE: *Dromedary-2* is trained with [QLoRA](https://github.com/artidoro/qlora) and the bfloat16 data type.** While it is [possible](https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930) to merge the QLoRA weights with the quantized model and thus enable inference with libraries such as [TGI](https://github.com/huggingface/text-generation-inference) and [vLLM](https://github.com/vllm-project/vllm), we found that the merged weights can lead to degraded performance. We therefore recommend loading the QLoRA weights directly with the [PEFT-LoRA](https://github.com/huggingface/peft) framework.

Please check the [inference section](https://github.com/IBM/SALMON/tree/main/inference) of our repo for the complete inference code.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Dromedary-2's dialogue template.
system_prompt = (
    "# Dromedary\n\n## System Overview\n\n"
    "Consider an AI assistant whose codename is Dromedary, developed by the Self-Align team. "
    "Dromedary is trained on data up until Sept-2022, and it endeavors to be a helpful, ethical and reliable assistant.\n\n"
    "## User Conversation\n\n"
)
user_prompt = "### User\n"
assistant_prompt = "### Dromedary\n"
separator = "\n\n"

dtype = torch.bfloat16

model_path = "path/to/llama-2-70b-hf"
qlora_path = "path/to/dromedary-2-70b-qlora-delta-v0"  # i.e., this model hub

# 4-bit NF4 quantization with double quantization; compute in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=dtype,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

# Load the quantized base model on a single GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    device_map={"": "cuda:0"},
    quantization_config=bnb_config,
    torch_dtype=dtype,
)

# Attach the QLoRA weights (frozen, inference-only) on top of the base model.
model = PeftModel.from_pretrained(
    model,
    qlora_path,
    is_trainable=False,
)
```
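
As a minimal sketch of how the loaded model can then be queried, the snippet below continues from the code above and assembles a single-turn conversation from `system_prompt`, `user_prompt`, `separator`, and `assistant_prompt`. The example question and the sampling parameters are illustrative assumptions, not part of the official inference code; please refer to the repo linked above for the reference implementation.

```python
from transformers import AutoTokenizer

# Assumption: the QLoRA delta does not change the vocabulary, so the
# tokenizer of the base LLaMA-2 checkpoint can be used as-is.
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Single-turn prompt: system preamble, user turn, separator, assistant header.
prompt = (
    system_prompt
    + user_prompt
    + "What is the capital of France?"  # hypothetical example question
    + separator
    + assistant_prompt
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,  # illustrative generation settings
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)
```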

**Model date:**
Dromedary-2 was trained between July 2023 and Aug 2023, but its knowledge only goes up until Sept-2022.

**License:**
LLaMA-2's bespoke license

## More Information

**Paper or resources for more information:**
https://arxiv.org/abs/2310.05910

**Where to send questions or comments about the model:**
https://github.com/IBM/SALMON/issues

**Organizations developing the model:**
The Self-Align team is a joint effort between CMU and IBM.

## Intended use

**Primary intended uses:**
The primary use of Dromedary-2 is research on the alignment of large language models.

**Primary intended users:**
The primary intended users of the model are researchers in artificial intelligence.

## Training dataset

- 6 In-Context Learning (ICL) exemplars
- 90K unlabeled prompts from ShareGPT
- 10K unlabeled prompts from databricks-dolly-15k
- 10K unlabeled prompts from OpenAssistant Conversations
- 40K unlabeled prompts from OpenOrca
- 7.5K unlabeled prompts from MATH

## Evaluation dataset

We evaluate Dromedary-2 on:

1. Chatbot benchmarks: Vicuna-Bench, MT-Bench, AlpacaEval
2. Capability benchmarks: Big-Bench Hard (reasoning), HumanEval (coding), TydiQA (multilingualism)
3. Truthfulness benchmarks: TruthfulQA