OwenArli committed on
Commit 2438d08
1 Parent(s): 3447b90

Update README.md

Files changed (1)
  1. README.md +52 -3
README.md CHANGED
@@ -1,3 +1,52 @@
- ---
- license: gemma
- ---
+ ---
+ license: gemma
+ ---
+ # Gemma-2-2B-ArliAI-RPMax-v1.1
+
+ ## RPMax Series Overview
+
+ | [2B](https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1) |
+ [3.8B](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) |
+ [8B](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) |
+ [9B](https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1) |
+ [12B](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) |
+ [20B](https://huggingface.co/ArliAI/InternLM_2_5-20B-ArliAI-RPMax-v1.1) |
+ [70B](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1) |
+
+ RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets, with a focus on variety and deduplication. The models are designed to be highly creative and non-repetitive: no two entries in the dataset share the same characters or situations, which keeps the model from latching onto a single personality and lets it understand and respond appropriately to any character or situation.
+
+ Early user tests suggest that these models do not feel like other RP models; they have a distinct style and generally do not feel "in-bred".
+
+ You can access the models at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/
+
+ We also have a model ranking page at https://www.arliai.com/models-ranking
+
+ Ask questions in our new Discord server! https://discord.gg/aDVx6FZN
+
+ ## Model Description
+
+ Gemma-2-2B-ArliAI-RPMax-v1.1 is an RPMax variant based on Gemma 2 2B Instruct (gemma-2-2b-it).
+
+ ### Training Details
+
+ * **Sequence Length**: 4096
+ * **Training Duration**: Less than 1 day on 2x RTX 3090 Ti
+ * **Epochs**: 1 epoch, to minimize repetition sickness
+ * **QLoRA**: rank 64, alpha 128, resulting in ~2% trainable weights (see the configuration sketch below)
+ * **Learning Rate**: 0.00001
+ * **Gradient Accumulation**: A low 32, for better learning
+
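+ As a rough illustration of these hyperparameters, a QLoRA setup along these lines can be written with the `peft` and `bitsandbytes` libraries. This is only a sketch: the 4-bit settings and target modules below are assumptions, not details taken from the actual training run.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+ from peft import LoraConfig, get_peft_model
+
+ # Load the base model in 4-bit, as in a typical QLoRA setup (assumed settings)
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ base = AutoModelForCausalLM.from_pretrained(
+     "google/gemma-2-2b-it",
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+
+ # LoRA adapter with the rank/alpha listed above (~2% trainable weights)
+ lora_config = LoraConfig(
+     r=64,
+     lora_alpha=128,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed module set
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base, lora_config)
+ model.print_trainable_parameters()
+ ```
+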
+ ## Quantization
+
+ The model is available in the following formats:
+
+ * **FP16**: https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1 (see the loading sketch below)
+ * **GPTQ_Q4**: https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1-GPTQ_Q4
+ * **GPTQ_Q8**: https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1-GPTQ_Q8
+ * **GGUF**: https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1-GGUF
+
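+ For example, the FP16 weights can be loaded with the standard `transformers` API. The snippet below is a minimal sketch; the prompt and sampling settings are placeholders rather than recommendations from the model card.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ repo_id = "ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1"
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     repo_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ # Build a single-turn chat prompt with the model's own chat template
+ messages = [{"role": "user", "content": "Introduce yourself as a grumpy innkeeper."}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+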
+ ## Suggested Prompt Format
+
+ Gemma Instruct Prompt Format
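+
+ For reference, the stock Gemma Instruct chat template renders a conversation roughly as shown below (the `<bos>` token is normally inserted by the tokenizer). Using `tokenizer.apply_chat_template`, as in the loading example above, is the most reliable way to reproduce it exactly.
+
+ ```
+ <bos><start_of_turn>user
+ {prompt}<end_of_turn>
+ <start_of_turn>model
+ {model response}<end_of_turn>
+ ```
+
+ For generation, the prompt ends right after `<start_of_turn>model` and a newline, and the model completes that turn.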