Upload README.md with huggingface_hub
README.md
CHANGED
@@ -1,101 +1,97 @@
---
tags:
- merlinite
- mistral
- ibm
- lab
- labrador
- labradorite
license: apache-2.0
language:
- en
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Merlinite 7b 🔥 [Paper](https://arxiv.org/abs/2403.01081)
### Performance
| Model | Alignment | Base | Teacher | MTBench (Avg) | MMLU (5-shot) | ARC-C (25-shot) | HellaSwag (10-shot) | Winogrande (5-shot) | GSM8K (5-shot) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) | RLHF | Llama-2-13b | Human Annotators | 6.65 | 54.58 | 59.81 | 82.52 | 75.93 | 34.80 |
| [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) | Progressive Training | Llama-2-13b | GPT-4 | 6.15 | 60.37 * | 59.73 | 79.86 | 78.22 | 48.22 |
| [WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2) | Evol-Instruct | Llama-2-13b | GPT-4 | 7.20 | 54.83 | 60.24 | 82.62 | 76.40 | 43.75 |
| [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.23 | 58.89 | 61.69 | 83.15 | 79.56 | 40.11 |
| [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | SFT | Mistral-7B-v0.1 | - | 6.84 | 60.37 | 63.65 | 84.76 | 76.80 | 41.85 |
| [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | SFT/DPO | Mistral-7B-v0.1 | GPT-4 | 7.34 | 61.07 | 63.74 | 84.19 | 78.06 | 34.04 |
| [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | SFT | Mistral-7B-v0.1 | - | 7.6** | 60.78 | 63.14 | 84.88 | 77.19 | 40.03 |
| Merlinite-7b | Large-scale Alignment for chatBots (LAB) | Mistral-7B-v0.1 | Mixtral-8x7B-Instruct | 7.66 | 64.88 | 63.99 | 84.37 | 78.24 | 44.58 |
1. Taxonomy-driven data curation process
2. Large-scale synthetic data generator
3. Two-phased-training with replay buffers
![Untitled](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Untitled%201.png)
During the synthetic data generation, **unlike previous approaches where seed examples are uniformly drawn from the entire pool (i.e. self-instruct), we use the taxonomy to drive the sampling process**: For each knowledge/skill, we only use the local examples within the leaf node as seeds to prompt the teacher model.
This lets the teacher model better exploit the task distributions defined by the local examples of each node, while the diversity of the taxonomy itself ensures that the overall generation covers a wide range of tasks, as illustrated below. In turn, this allows using Mixtral 8x7B as the teacher model for generation while performing very competitively with models such as Orca-2, WizardLM, and Zephyr Beta that rely on synthetic data generated by much larger and more capable models like GPT-4.
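The contrast between uniform and taxonomy-driven seed selection can be sketched as follows; the taxonomy structure and function names here are illustrative, not the actual pipeline code:

```python
import random

# Illustrative taxonomy: each leaf node carries its own local seed examples.
taxonomy = {
    "skills/writing/poetry": ["seed task A", "seed task B", "seed task C"],
    "skills/math/arithmetic": ["seed task D", "seed task E"],
}

def sample_seeds_self_instruct(taxonomy: dict, k: int) -> list:
    """Self-instruct style: seeds drawn uniformly from the entire pool."""
    pool = [ex for examples in taxonomy.values() for ex in examples]
    return random.sample(pool, k)

def sample_seeds_lab(taxonomy: dict, leaf: str, k: int) -> list:
    """Taxonomy-driven: seeds drawn only from the target leaf node."""
    local = taxonomy[leaf]
    return random.sample(local, min(k, len(local)))

# Seeds for a leaf stay local, keeping the teacher prompt on-task.
seeds = sample_seeds_lab(taxonomy, "skills/math/arithmetic", 2)
assert all(s in taxonomy["skills/math/arithmetic"] for s in seeds)
```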
![intuition.png](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/intuition.png)
For adding new domain-specific knowledge, we provide an external knowledge source (document) and prompt the model to generate questions and answers based on the document.
Foundational skills such as reasoning and compositional skills such as creative writing are generated through in-context learning using the seed examples from the taxonomy.
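A minimal sketch of how such a document-grounded generation prompt might be assembled; the instruction wording and function name are assumptions, since the pipeline's real prompts are not reproduced in this card:

```python
def build_knowledge_prompt(document: str, num_questions: int) -> str:
    """Assemble a teacher prompt asking for QA pairs grounded in a document.

    The instruction wording is illustrative; the actual pipeline prompts
    are not shown in this card.
    """
    return (
        f"Read the following document and write {num_questions} "
        "question-answer pairs that are fully grounded in it.\n\n"
        f"Document:\n{document}\n\n"
        "Questions and answers:"
    )

prompt = build_knowledge_prompt("Ethics is the study of moral principles.", 3)
assert "3 question-answer pairs" in prompt
```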
Additionally, to keep the data high-quality and safe, we employ steps that check whether the generated questions and answers are grounded and safe. This is done using the same teacher model that generated the data.
Our training consists of two major phases: knowledge tuning and skills tuning.
Knowledge tuning consists of two steps: the first step learns simple knowledge (short samples) and the second learns complicated knowledge (longer samples).
The second step uses a replay buffer with data from the first step.
Both foundational skills and compositional skills are learned during the skills tuning phase, where a replay buffer of data from the knowledge phase is used.
Importantly, we use a set of training hyperparameters that differs markedly from standard small-scale supervised fine-tuning: a larger batch size and a carefully optimized learning rate and scheduler.
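The replay-buffer idea can be sketched as follows; the mixing ratio and function names are illustrative assumptions, not the actual LAB training code:

```python
import random

def make_training_mix(new_phase_data, previous_phase_data, replay_ratio=0.2, seed=0):
    """Mix the current phase's data with a replay buffer from an earlier phase.

    replay_ratio is an illustrative knob; the card does not specify the
    actual LAB mixing ratio or schedule.
    """
    rng = random.Random(seed)
    n_replay = int(len(new_phase_data) * replay_ratio)
    replay = rng.sample(previous_phase_data, min(n_replay, len(previous_phase_data)))
    mix = list(new_phase_data) + replay
    rng.shuffle(mix)
    return mix

step1 = [f"short-sample-{i}" for i in range(100)]   # simple knowledge
step2 = [f"long-sample-{i}" for i in range(50)]     # complicated knowledge
mix = make_training_mix(step2, step1)
assert len(mix) == 50 + 10  # step-2 data plus a 20% replay of step-1 data
```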
![Untitled](Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a/Untitled%202.png)
## Model description
- **License:** Apache 2.0
- **Base model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Teacher Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
```
sys_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
```
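As a hypothetical usage sketch, a full prompt can be assembled from the system prompt and a user message. The `<|system|>`/`<|user|>`/`<|assistant|>` layout below is an assumption for illustration only; verify it against the model's actual prompt template before use:

```python
def build_prompt(sys_prompt: str, user_message: str) -> str:
    """Assemble a full prompt from system and user turns.

    The <|system|>/<|user|>/<|assistant|> layout is an assumption for
    illustration; verify it against the model's actual prompt template.
    """
    return f"<|system|>\n{sys_prompt}\n<|user|>\n{user_message}\n<|assistant|>\n"

sys_prompt = "You are an AI language model developed by IBM Research."  # abridged
prompt = build_prompt(sys_prompt, "Summarize the LAB method in one sentence.")
assert prompt.startswith("<|system|>\n")
```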
## Bias, Risks, and Limitations
# Labrador Synthetic Data Generation Pipeline
## Introduction
This repository contains the Labrador synthetic data generation pipeline, which generates synthetic skills and knowledge data from the leaf nodes of the taxonomy repository.
## Run Instructions (Automation)
### Step 1: Environment Setup
1. Initialize a `.env` file with the following [access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#personal-access-tokens-classic):
```
GIT_ACCESS_TOKEN={ACCESS-TOKEN-TO-ACCESS-TAXONOMY-REPO} # personal access token used to access the instruct-lab/taxonomy repo
```
### Step 2: Execution
To run the pipeline:
1. Execute the following command:
NOTE: Depending on whether you are running on the old or the new Vela cluster, change this line in `orchestrator.py` to use the corresponding Vela template: `save_job_with_jinja_template(cfg, "templates/labrador_datagen_vela.yaml.j2", output_dir=f"jobs/{branch}")`
```
python orchestrator.py branch-name
```
This will:
- Create a file with a list of leaf nodes in the `jobs` directory.
- Generate YAML files for each leaf node and store them in the `jobs` directory, named something like `test-7984f9cae729b798bed1ba222715b880.yaml`.
2. To initiate the skill generation pipeline, trigger a job with the YAML generated above:
```
oc create -f jobs/yaml_name.yaml
```
This command will execute the pipeline and store the results in the `new_data/labrador-datagen` directory within the COS bucket mounted on the Vela cluster.
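Internally, `orchestrator.py` renders one job manifest per leaf node via `save_job_with_jinja_template`. The sketch below approximates that idea with plain string formatting; the manifest fields, naming scheme, and helper are illustrative assumptions, not the real Jinja template:

```python
import hashlib

# Illustrative job manifest skeleton; the real pipeline renders
# templates/labrador_datagen_vela.yaml.j2 with Jinja2 instead.
JOB_TEMPLATE = """\
apiVersion: batch/v1
kind: Job
metadata:
  name: {job_name}
spec:
  template:
    spec:
      containers:
      - name: datagen
        env:
        - name: LEAF_NODE
          value: {leaf_node}
"""

def save_job(branch: str, leaf_node: str):
    """Render one job manifest per leaf node, naming it after the branch
    plus a checksum of the leaf node path (an assumed naming scheme)."""
    checksum = hashlib.md5(leaf_node.encode()).hexdigest()
    name = f"{branch}-{checksum}"
    path = f"jobs/{branch}/{name}.yaml"
    return path, JOB_TEMPLATE.format(job_name=name, leaf_node=leaf_node)

path, manifest = save_job("test", "knowledge/textbooks/ethics/qna.yaml")
assert path.startswith("jobs/test/test-")
```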
## Run Instructions (Manual - Testing)
### Step 1: Run model
Run the teacher model (for testing purposes, it can be replaced with any small model):
```
text-generation-launcher -p 8080 --model-id mistralai/Mixtral-8x7B-Instruct-v0.1 --dtype bfloat16 --max-input-length 4096 --max-batch-prefill-tokens 4096 --max-total-tokens 12288
```
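Once the server is up, it can be queried over HTTP. Below is a minimal client sketch against the text-generation-inference `/generate` endpoint; the port matches the launcher command above, and the parameter choices are assumptions:

```python
import json
from urllib import request

TGI_URL = "http://localhost:8080/generate"  # port from the launcher command above

def build_payload(prompt: str, max_new_tokens: int = 256) -> bytes:
    """Build a JSON body for text-generation-inference's /generate endpoint."""
    return json.dumps({
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }).encode()

def generate(prompt: str, url: str = TGI_URL) -> str:
    """POST a generation request; requires the server above to be running."""
    req = request.Request(url, data=build_payload(prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["generated_text"]
```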
Next, set the following environment variables:
```
LEAF_NODE=knowledge/textbooks/ethics/qna.yaml # Path to the leaf node that you want to download
NUM_SAMPLES=30
NUM_GROUNDED_QUESTIONS=3
NUM_GEN_PROC=32
NUM_UTIL_PROC=8
SAVE_PATH=new_data/labrador_datagen # Path where you want to download the data
CONTEXT=0 # Set 0 for freeform and 1 for grounded
DATA_PATH=.
CHECKSUM=test
BRANCH_NAME=test # Branch name to download data from
KNOWLEDGE=1 # Set 0 for skills and 1 for knowledge
PARENT_DIR=$(dirname "$LEAF_NODE")
GIT_ACCESS_TOKEN= # Access token to access taxonomy repo
```
### Skills
Download the data:
```
wget --header "Authorization: token $GIT_ACCESS_TOKEN" --directory-prefix="$DATA_PATH/$PARENT_DIR" "https://raw.githubusercontent.com/instruct-lab/taxonomy/$BRANCH_NAME/$LEAF_NODE"
```
Run the Justfile using:
```
just run
```
The Justfile checks the `CONTEXT` value: if it is set to 1, it runs the scripts for grounded data generation; if it is set to 0, it runs the scripts for freeform data generation and saves the generated files at the root of the repo in the same directory structure.
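That dispatch can be sketched in shell as follows; the script names are illustrative, and the real recipes live in the Justfile:

```sh
#!/bin/sh
# Sketch of the Justfile dispatch on CONTEXT (script names are illustrative;
# the real recipes live in the Justfile).
CONTEXT="${CONTEXT:-0}"   # 0 = freeform, 1 = grounded
if [ "$CONTEXT" -eq 1 ]; then
  MODE="grounded"
  # e.g. sh generate_grounded.sh
else
  MODE="freeform"
  # e.g. sh generate_freeform.sh
fi
echo "running $MODE data generation"
```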
### Knowledge
Download the data:
```
bash download_docs.sh
```
Run the knowledge script:
```
python knowledge_generation_pipeline.py
```