Text Generation
Transformers
Safetensors
English
llama
mergekit
Merge
conversational
text-generation-inference
Inference Endpoints
chargoddard committed
Commit cd740c0
1 Parent(s): f1f21e3

Update README.md

Files changed (1):
1. README.md +36 -24
README.md CHANGED
@@ -6,37 +6,49 @@ library_name: transformers
  tags:
  - mergekit
  - merge
-
  ---
- # prometheus-8b-linear
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [chargoddard/prometheus-llama-3-8b-preference](https://huggingface.co/chargoddard/prometheus-llama-3-8b-preference)
- * [chargoddard/prometheus-llama-3-8b-absolute](https://huggingface.co/chargoddard/prometheus-llama-3-8b-absolute)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- merge_method: linear
- models:
-   - model: chargoddard/prometheus-llama-3-8b-preference
-     parameters:
-       weight: 0.5
-   - model: chargoddard/prometheus-llama-3-8b-absolute
-     parameters:
-       weight: 0.5
- dtype: bfloat16
  ```
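With mergekit installed, a config like the one above is applied with the `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./merged`). Since both weights are 0.5, this linear merge reduces to a plain parameter average; the sketch below illustrates that arithmetic with `transformers`/`torch` rather than mergekit itself, and the output directory is a placeholder.

```python
# Equal-weight linear merge as a parameter average; an illustrative sketch,
# not mergekit's implementation.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained(
    "chargoddard/prometheus-llama-3-8b-preference", torch_dtype=torch.bfloat16
)
model_b = AutoModelForCausalLM.from_pretrained(
    "chargoddard/prometheus-llama-3-8b-absolute", torch_dtype=torch.bfloat16
)

state_b = model_b.state_dict()
# merged = 0.5 * a + 0.5 * b for every parameter tensor
merged = {
    name: 0.5 * tensor + 0.5 * state_b[name]
    for name, tensor in model_a.state_dict().items()
}

model_a.load_state_dict(merged)
model_a.save_pretrained("./merged")  # placeholder output directory
```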

  tags:
  - mergekit
  - merge
+ license: apache-2.0
+ datasets:
+ - prometheus-eval/Preference-Collection
+ - prometheus-eval/Feedback-Collection
+ language:
+ - en
  ---
+ # prometheus-2-llama-3-8b

+ Replication of [prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0) using [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base model.

+ As in their paper, two different models were trained on their preference and feedback datasets, then linearly merged at equal weight.
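Both collections are public on the Hub; a minimal sketch for pulling them with the `datasets` library (the `train` split name is an assumption):

```python
# Minimal sketch: fetch the two Prometheus training collections.
# Assumes each dataset repo exposes a "train" split.
from datasets import load_dataset

feedback = load_dataset("prometheus-eval/Feedback-Collection", split="train")
preference = load_dataset("prometheus-eval/Preference-Collection", split="train")
print(len(feedback), len(preference))
```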

+ Training hyperparameters (one possible trainer mapping is sketched after this list):
+ * 1 epoch
+ * Learning rate 1e-5
+ * Effective batch size 128
+ * Cosine annealing
+ * ~5% warmup
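The card does not name the training framework; read as Hugging Face `TrainingArguments`, the list could map as in the hypothetical sketch below (the per-device/accumulation split, output path, and bf16 choice are assumptions):

```python
# Hypothetical trainer settings matching the listed hyperparameters; the
# actual training stack is not stated on this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./prometheus-2-llama-3-8b-sft",  # placeholder path
    num_train_epochs=1,               # 1 epoch
    learning_rate=1e-5,               # learning rate 1e-5
    per_device_train_batch_size=8,    # assumed split: 8 per device
    gradient_accumulation_steps=16,   # 8 x 16 = effective batch size 128
    lr_scheduler_type="cosine",       # cosine annealing
    warmup_ratio=0.05,                # ~5% warmup
    bf16=True,                        # assumption, matching the bfloat16 merge dtype
)
```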

+ Uses the Llama 3 Instruct prompt format and the same prompts as prometheus-7b-v2.0; see that model's readme for details.
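The format can be rendered with the tokenizer's built-in chat template, as in the sketch below; the repo id is inferred from this card's title, and the message text is a placeholder, not the actual Prometheus grading prompt:

```python
# Minimal sketch: produce the Llama 3 Instruct prompt format via the chat
# template. Placeholder messages; the real grading prompts are documented in
# the prometheus-7b-v2.0 readme.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "chargoddard/prometheus-2-llama-3-8b"  # assumed repo id
)
messages = [
    {"role": "system", "content": "You are a fair judge assistant."},  # placeholder
    {"role": "user", "content": "###Task Description: ..."},           # placeholder
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```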

+ # Citations

+ ```bibtex
+ @misc{kim2023prometheus,
+   title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
+   author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
+   year={2023},
+   eprint={2310.08491},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
  ```
+ ```bibtex
+ @misc{kim2024prometheus,
+   title={Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models},
+   author={Seungone Kim and Juyoung Suk and Shayne Longpre and Bill Yuchen Lin and Jamin Shin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},
+   year={2024},
+   eprint={2405.01535},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```