---
license: apache-2.0
---
**General-Stories-Mistral-7B**

This model is based on my dataset [Children-Stories-Collection](https://huggingface.co/datasets/ajibawa-2023/Children-Stories-Collection), which contains over 0.9 million stories meant for young children (ages 6 to 12).
7
+
8
+ Drawing upon synthetic datasets meticulously designed with the developmental needs of young children in mind, Young-Children-Storyteller is more than just a tool—it's a companion on the journey of discovery and learning.
9
+ With its boundless storytelling capabilities, this model serves as a gateway to a universe brimming with wonder, adventure, and endless possibilities.
10
+
11
+ Whether it's embarking on a whimsical adventure with colorful characters, unraveling mysteries in far-off lands, or simply sharing moments of joy and laughter, Young-Children-Storyteller fosters a love for language and storytelling from the earliest of ages.
12
+ Through interactive engagement and age-appropriate content, it nurtures creativity, empathy, and critical thinking skills, laying a foundation for lifelong learning and exploration.
13
+
14
+ Rooted in a vast repository of over 0.9 million specially curated stories tailored for young minds, Young-Children-Storyteller is poised to revolutionize the way children engage with language and storytelling.

Kindly note that this is a qLoRA version, as an exception.

**GGUF & Exllama**

Standard Q_K & GGUF: TBA

Exllama: TBA

**Training**

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 2 epochs took more than 30 hours. The Axolotl codebase was used for training. The model was trained on Mistral-7B-v0.1.

**Example Prompt:**

This model uses the **ChatML** prompt format.

```
<|im_start|>system
You are a Helpful Assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```
You can modify the above prompt as per your requirements.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI and model development.

Thank you for your love & support.

**Example Output**

Example 1

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/mVLGRiYKzFCC2wAJOejLP.jpeg)

Example 2

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/FwCUW9FDDnmBpdnqraWNF.jpeg)

Example 3

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/w0D_eX3xG6MnX5wWD8LT9.jpeg)

Example 4

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/HaJ91YQ9d57SGv7BwTcv_.jpeg)