macadeliccc committed on
Commit
a360270
1 Parent(s): 45abfb8

Update README.md

Files changed (1)
  1. README.md +16 -6
README.md CHANGED
@@ -7,16 +7,26 @@ tags:
 - merge
 
 ---
-# NeuralCorso-7B
+# OmniCorso-7B
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+This model is a finetune of [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) using jondurbin/truthy-dpo-v0.1
 
-## Merge Details
-### Merge Method
+![MBX-v3-orca](MBX-v3-orca.png)
 
-This model was merged using the SLERP merge method.
+## Code Example
 
-### Models Merged
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("macadeliccc/MBX-7B-v3-DPO")
+model = AutoModelForCausalLM.from_pretrained("macadeliccc/MBX-7B-v3-DPO")
+
+messages = [
+    {"role": "system", "content": "Respond to the users request like a pirate"},
+    {"role": "user", "content": "Can you write me a quicksort algorithm?"}
+]
+gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
+```
 
 The following models were included in the merge:
 * [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)
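
The code example added by this commit stops after building `gen_input` and never calls the model. A minimal sketch of the missing generation and decoding step, wrapped in a helper for clarity — the function name `generate_reply`, the repo default, and the `max_new_tokens` value are illustrative assumptions, not part of the commit:

```python
# Hedged sketch: the README's example ends at apply_chat_template, so the
# generate/decode step below is an assumption, not from the commit itself.
def generate_reply(repo_id: str = "macadeliccc/MBX-7B-v3-DPO",
                   max_new_tokens: int = 256) -> str:
    """Build the pirate chat prompt and decode one completion."""
    # Imported inside the helper so merely defining it needs no downloads.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)

    messages = [
        {"role": "system", "content": "Respond to the user's request like a pirate"},
        {"role": "user", "content": "Can you write me a quicksort algorithm?"},
    ]
    gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
    output_ids = model.generate(gen_input, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens so only the newly generated reply is decoded.
    return tokenizer.decode(output_ids[0, gen_input.shape[-1]:],
                            skip_special_tokens=True)
```

Calling `print(generate_reply())` would download and load the full 7B checkpoint, so expect substantial memory and time on first run.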