Weyaxi committed
Commit 1aae4ac
1 Parent(s): bbdb1a2

pre model card

Files changed (1)
  1. README.md +79 -0
README.md CHANGED
@@ -15,4 +15,83 @@ language:
  - en
  base_model: meta-math/MetaMath-Mistral-7B
  ---
+ # 🔢 Einstein-v6-7B
+
+ This model is a fully fine-tuned version of [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) on the following datasets:
+
+ - 🧮 [TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
+ - 📐 [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)
+
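+ Both datasets are public on the Hugging Face Hub; for reference, a minimal sketch of loading them with the 🤗 `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Pull the two instruction corpora used for this fine-tune.
+ math_instruct = load_dataset("TIGER-Lab/MathInstruct", split="train")
+ orca_math = load_dataset("microsoft/orca-math-word-problems-200k", split="train")
+
+ print(math_instruct[0])  # inspect a single training example
+ ```
+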
+ This model was fine-tuned on `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
+
+ This model's training was sponsored by [sablo.ai](https://sablo.ai).
+
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
+
+ ```
+
+ </details><br>
+
+ # 💬 Prompt Template
+
+ You can use the following prompt template with this model:
+
+ ### Alpaca
+
+ ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {instruction}
+
+ ### Response:
+
+ ```
+
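+ For illustration, a minimal sketch of filling this template in Python (the instruction text is just an example):
+
+ ```python
+ ALPACA_TEMPLATE = (
+     "Below is an instruction that describes a task. "
+     "Write a response that appropriately completes the request.\n\n"
+     "### Instruction:\n{instruction}\n\n"
+     "### Response:\n"
+ )
+
+ # Fill the placeholder with an actual task description.
+ prompt = ALPACA_TEMPLATE.format(instruction="Solve for x: 2x + 3 = 11")
+ ```
+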
+ This prompt template is also available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:
+
+ ```python
+ # Assumes `tokenizer` and `model` have already been loaded for this model.
+ messages = [
+     {"role": "system", "content": "You are a helpful AI assistant."},
+     {"role": "user", "content": "Hello!"}
+ ]
+ gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
+ model.generate(gen_input)
+ ```
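+
+ For example, an end-to-end sketch with 🤗 Transformers (the repository id below is an assumption based on the model name):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ repo_id = "Weyaxi/Einstein-v6-7B"  # assumed repository id
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
+
+ messages = [
+     {"role": "system", "content": "You are a helpful AI assistant."},
+     {"role": "user", "content": "What is 7 * 8?"}
+ ]
+ # add_generation_prompt appends the response header so the model starts answering.
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ outputs = model.generate(inputs, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```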
+
+ # 🔄 Quantized versions
+
+ Quantized versions of this model are not currently available. They will be available soon :)
+
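+ In the meantime, a minimal sketch of quantizing the model on the fly to 4-bit with bitsandbytes (the repository id is again an assumption):
+
+ ```python
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+
+ # Load the weights in 4-bit to cut memory use roughly 4x vs. fp16.
+ bnb_config = BitsAndBytesConfig(load_in_4bit=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     "Weyaxi/Einstein-v6-7B",  # assumed repository id
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+ ```
+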
+ # 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+
+ # 🤖 Additional information about training
+
+ This model was fully fine-tuned for 2 epochs.
+
+ The total number of steps was x.
+
+ <details><summary>Loss graph</summary>
+
+
+ </details><br>
+
+ # 🤝 Acknowledgments
+
+ Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.
+
+ Thanks to all the dataset authors mentioned in the datasets section.
+
+ Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for making the framework I used to train this model.
+
+ Thanks to the whole open-source AI community.
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+ If you would like to support me:
+
+ [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)