Steelskull committed
Commit 9f1e8f0
1 Parent(s): 9fb44f1

Update README.md

Files changed (1):
README.md +3 -3
README.md CHANGED
@@ -101,22 +101,22 @@ code {
 <div class="container">
 <div class="header">
 <h1>L3-NA-Aethora-15B</h1>
+<p><strong>This is the NON-Abliterated VERSION and Experimental!!</strong></p>
 </div>
 <div class="info">
 <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/W0qzZK_V1Zt1GdgCIsnrP.png">
 <p>The Skullery Presents L3-NA-Aethora-15B.</p>
+<p><strong>This is the NON-Abliterated VERSION and Experimental!!</strong></p>
 <p><strong>Creator:</strong> <a href="https://huggingface.co/steelskull" target="_blank">Steelskull</a></p>
 <p><strong>Dataset:</strong> <a href="https://huggingface.co/datasets/TheSkullery/Aether-Lite-V1.2" target="_blank">Aether-Lite-V1.2</a></p>
 <p><strong>Trained:</strong> 4 x A100 for 15 hours Using RsLora and DORA</p>
 <h1>About L3-NA-Aethora-15B:</h1>
 <pre><code>L3 = Llama3
-NA = NON-ABLITERATED
-</code></pre>
+NA = NON-ABLITERATED</code></pre>
 <p>L3-NA-Aethora-15B was crafted by using a modified DUS (Depth Up Scale) merge (originally used by @Elinas) by using passthrough merge to create a 15b model, with specific adjustments (zeroing) to 'o_proj' and 'down_proj', enhancing its efficiency and reducing perplexity. This created Meta-Llama-3-15b-Instruct.<br>
 <p>Meta-Llama-3-15b-Instruct was then trained for 4 epochs using Rslora & DORA training methods on the Aether-Lite-V1.2 dataset, containing ~82000 high quality samples, designed to strike a fine balance between creativity, slop, and intelligence at about a 60/40 split</p>
 <p>This model is trained on the L3 prompt format.</p>
 <p></p>
-<p><strong>This is the NON-Abliterated VERSION and a TEST Model</strong></p>
 <h2>Quants:</h2>
 <p></p>
 <h2>Dataset Summary: (Filtered)</h2>
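
For context on the merge step described in the card: an Elinas-style DUS (Depth Up Scale) passthrough merge duplicates a mid-stack block of layers and zeroes 'o_proj' and 'down_proj' in the duplicated copies so they start as near-identity contributions. A minimal mergekit sketch of that idea follows; the base model and exact layer ranges are assumptions for illustration, not the precise recipe used for Meta-Llama-3-15b-Instruct.

```python
# Sketch of a DUS-style passthrough merge config for mergekit.
# Assumptions: base model choice and layer ranges; only the zeroing
# of o_proj/down_proj in the duplicated slice comes from the card.
import yaml

merge_config = {
    "merge_method": "passthrough",
    "dtype": "bfloat16",
    "slices": [
        # First stretch of the base model, kept as-is.
        {"sources": [{"model": "meta-llama/Meta-Llama-3-8B-Instruct",
                      "layer_range": [0, 24]}]},
        # Duplicated middle layers; scale filters zero out o_proj and
        # down_proj so the copies initially pass activations through.
        {"sources": [{"model": "meta-llama/Meta-Llama-3-8B-Instruct",
                      "layer_range": [8, 24],
                      "parameters": {"scale": [
                          {"filter": "o_proj", "value": 0.0},
                          {"filter": "down_proj", "value": 0.0},
                          {"value": 1.0},
                      ]}}]},
        # Remaining top layers.
        {"sources": [{"model": "meta-llama/Meta-Llama-3-8B-Instruct",
                      "layer_range": [24, 32]}]},
    ],
}

with open("merge.yml", "w") as f:
    yaml.safe_dump(merge_config, f, sort_keys=False)
# Then run: mergekit-yaml merge.yml ./Meta-Llama-3-15b-Instruct
```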
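Likewise, the RsLoRA and DoRA training mentioned under "Trained:" maps onto two flags in Hugging Face PEFT's LoraConfig. A hedged sketch, with rank, alpha, dropout, and target modules assumed for illustration:

```python
# Sketch of an RsLoRA + DoRA adapter config with Hugging Face PEFT.
# Only use_rslora/use_dora reflect the card; every hyperparameter
# below is an assumed placeholder, not the training recipe used here.
from peft import LoraConfig

peft_config = LoraConfig(
    r=32,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    use_rslora=True,   # rank-stabilized scaling: alpha / sqrt(r) instead of alpha / r
    use_dora=True,     # weight-decomposed low-rank adaptation
    task_type="CAUSAL_LM",
)
```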
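The "L3 prompt format" referenced in the card is the standard Llama 3 Instruct chat template:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant response}<|eot_id|>
```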