antiven0m committed on
Commit
e314966
1 Parent(s): 5eb9ddf

Update README.md

Files changed (1)
  1. README.md +219 -29
README.md CHANGED
@@ -18,46 +18,236 @@ title: ""
  authors:
  - user: antiven0m
  ---
- ![](https://ii.imgur.com/HRgbsPm.png)
- # Speculor-2.7B
- The integration of AI in mental health care is a hot-button issue. Concerns range from privacy issues to the fear of an impersonal approach in a field that is, at its roots, human. Despite these challenges, I believe in the potential for AI to provide meaningful support in the mental health domain through thoughtful application.
- This model was built to enhance the support network for people navigating mental health challenges, aiming to work alongside human professionals rather than replace them. It could be a step forward in using technology to improve access and comprehension in mental health care.
- To create this Phi-2 fine-tune, I used public datasets like counseling Q&A platforms from sources such as WebMD, synthetic mental health conversations, general instructions, and some personification data.
- ## Proposed Use Case
- To ensure the models application remains safe and thoughtful, I propose the following:
- - Flagging System: Developing an automated flagging system that analyzes user conversations for keywords or phrases indicative of crisis or severe mental health concerns. Upon triggering, the system would direct users towards professional help and relevant mental health information.
- - Third-Party Moderation: Implementing trained human moderators who can assess user input, particularly in cases where responses suggest potential crisis situations. Moderators could then intervene and connect users with appropriate mental health resources.
- ![potential usecase](https://i.imgur.com/T3ooSGA.png)
- ## Why It Might Be Useful
- - **Easy Access**: Since this LLM model can run on a wide variety of hardware, it's available for free to anyone with a decent device. No GPU required. (See [here](https://huggingface.co/antiven0m/speculor-2.7b-GGUF).)
- - **Practice Opening Up**: It can be really hard to openly share your deepest struggles and mental health issues, even with professionals. Chatting with a LLM first could help take some of the pressure off and let you get comfortable putting those thoughts and feelings into words, and develop a more appropriate way to discuss them with a licensed mental health professional.
- - **Exploratory Tool**: The low-pressure nature allows users to safely explore different perspectives, thought experiments, or hypothetical scenarios related to their mental wellbeing.
- - **Available 24/7**: Unlike humans with limited hours, an LLM is available around the clock for whenever someone needs mental health information or wants to talk through their thoughts and feelings.
- The idea is, by having low-stakes conversations with this LLM, you may find it easier to then talk more clearly about your situation with an actual therapist or mental health provider when you're ready. It's like a casual warm-up for the real deal.
- Of course, a LLM can never replace human experts. But sometimes getting the hang of openly discussing this stuff, even with a machine, can be a small stepping stone that leads to getting proper care.
- ## Limitations
- While I've aimed to make this model as helpful as possible, it's imperative that users understand its many constraints:
- - **No Medical Expertise**: This model does not have any clinical training or ability to provide professional medical diagnosis or treatment plans. Its knowledge purely comes from the data it was trained on.
- - **Lack of Human Context**: As an AI system, it cannot fully comprehend the depth and complexity of human experiences, emotions and individual contexts the way an actual human professional would.
- - **Limited Knowledge**: The model's responses are limited to the information contained in its training data. There will inevitably be gaps in its understanding of certain mental health topics, complexities and nuances.
- - **No Crisis Intervention**: This model is not designed or capable of providing any kind of crisis intervention or emergency response and cannot help in those acute, high-risk situations.
- I want to emphasize that this model is intended as a supportive tool, not at all a replacement for professional mental health support and services from qualified human experts, and certainly not an "AI Therapist". Users should approach their conversations with appropriate expectations.
- ## Privacy
- It's important to note that while the model itself does not save or transmit data, some frontends or interfaces used to interact with the LLMs may locally store logs of your conversations on your device. Be aware of this possibility and take appropriate precautions!
- ---
- `Disclaimer: I am not a medical health professional. This LLM is not a substitute for professional mental health services or licensed therapists. It provides general information and supportive conversation, but cannot diagnose conditions, provide treatment plans, or handle crisis situations. Please seek qualified human care for any serious mental health needs.`
- ---
+ <style>
+ body {
+ font-family: 'Poppins', sans-serif;
+ line-height: 1.6;
+ color: #fff;
+ background: radial-gradient(ellipse at bottom, #1b2735 0%, #090a0f 100%);
+ }
+
+ .container {
+ max-width: 1000px;
+ margin: 0 auto;
+ padding: 40px;
+ }
+
+ .panel {
+ background: linear-gradient(135deg, rgba(47, 38, 86, 0.8) 0%, rgba(28, 16, 61, 0.8) 50%, rgba(22, 9, 48, 0.8) 100%);
+ padding: 30px;
+ border-radius: 10px;
+ box-shadow: 0 10px 20px rgba(0, 0, 0, 0.3);
+ margin-bottom: 40px;
+ }
+
+ .panel-header {
+ text-align: center;
+ margin-bottom: 20px;
+ }
+
+ .panel-header img {
+ max-width: 100%;
+ height: auto;
+ border-radius: 10px;
+ }
+
+ h1, h2, h3 {
+ font-weight: 700;
+ color: #fff;
+ text-shadow: 0 2px 4px rgba(0, 0, 0, 0.3);
+ }
+
+ h1 {
+ font-size: 60px;
+ margin-bottom: 20px;
+ text-align: center;
+ background: linear-gradient(90deg, #9370DB, #BA55D3, #9370DB);
+ -webkit-background-clip: text;
+ -webkit-text-fill-color: transparent;
+ }
+
+ h2 {
+ font-size: 32px;
+ margin-top: 40px;
+ margin-bottom: 20px;
+ padding-left: 40px;
+ position: relative;
+ }
+
+ h2::before {
+ content: "";
+ position: absolute;
+ left: 10px;
+ top: 50%;
+ transform: translateY(-50%);
+ width: 10px;
+ height: 10px;
+ background-color: #FFD700;
+ border-radius: 50%;
+ box-shadow: 0 0 10px rgba(255, 215, 0, 0.5);
+ }
+
+ h3 {
+ font-size: 24px;
+ margin-top: 30px;
+ margin-bottom: 15px;
+ }
+
+ p {
+ font-size: 18px;
+ line-height: 1.6;
+ margin-bottom: 20px;
+ text-shadow: 0 1px 2px rgba(0, 0, 0, 0.3);
+ padding: 0 20px;
+ }
+
+ ul {
+ font-size: 18px;
+ line-height: 1.6;
+ margin-bottom: 20px;
+ padding: 0 40px;
+ text-shadow: 0 1px 2px rgba(0, 0, 0, 0.3);
+ }
+
+ li {
+ margin-bottom: 10px;
+ }
+
+ .disclaimer {
+ background: linear-gradient(135deg, rgba(128, 0, 128, 0.8) 0%, rgba(255, 0, 255, 0.8) 50%, rgba(128, 0, 128, 0.8) 100%);
+ color: #fff;
+ padding: 20px;
+ border-radius: 5px;
+ font-size: 16px;
+ line-height: 1.4;
+ text-shadow: 0 1px 2px rgba(0, 0, 0, 0.3);
+ position: relative;
+ display: flex;
+ align-items: center;
+ }
+
+ .disclaimer::before {
+ content: "!";
+ position: absolute;
+ left: 10px;
+ top: 50%;
+ transform: translateY(-50%);
+ width: 30px;
+ height: 30px;
+ background-color: #FF6347;
+ border-radius: 50%;
+ box-shadow: 0 0 10px rgba(255, 99, 71, 0.5), inset 0 0 5px rgba(0, 0, 0, 0.5);
+ display: flex;
+ justify-content: center;
+ align-items: center;
+ font-size: 20px;
+ font-weight: bold;
+ color: #fff;
+ }
+
+ .disclaimer p {
+ margin-bottom: 0;
+ padding-left: 50px;
+ }
+
+ .widget-container {
+ background: linear-gradient(135deg, rgba(0, 191, 255, 0.8) 0%, rgba(0, 128, 128, 0.8) 50%, rgba(0, 191, 255, 0.8) 100%);
+ padding: 20px;
+ border-radius: 5px;
+ margin-bottom: 20px;
+ box-shadow: 0 5px 10px rgba(0, 0, 0, 0.2);
+ }
+
+ .widget-item {
+ margin-bottom: 10px;
+ }
+
+ .widget-item .title {
+ font-weight: 700;
+ font-size: 18px;
+ margin-bottom: 5px;
+ color: #FFD700;
+ text-shadow: 0 1px 2px rgba(0, 0, 0, 0.3);
+ }
+
+ .widget-item .text {
+ font-size: 16px;
+ color: #fff;
+ text-shadow: 0 1px 2px rgba(0, 0, 0, 0.3);
+ }
+
+ .usecase-container {
+ display: flex;
+ justify-content: center;
+ align-items: center;
+ margin-top: 40px;
+ margin-bottom: 40px;
+ }
+
+ .usecase-image {
+ max-width: 100%;
+ width: 100%;
+ height: auto;
+ border-radius: 10px;
+ }
+ </style>
+
+ <body>
+ <div class="container">
+ <h1>Speculor-2.7B</h1>
+
+ <div class="panel">
+ <div class="panel-header">
+ <img src="https://i.imgur.com/HRgbsPm.png" alt="Speculor-2.7B">
+ </div>
+ <p>The integration of AI in mental health care is a hot-button issue. Concerns range from privacy to the fear of an impersonal approach in a field that is, at its roots, human. Despite these challenges, I believe in the potential for AI to provide meaningful support in the mental health domain through thoughtful application.</p>
+ <p>This model was built to enhance the support network for people navigating mental health challenges, aiming to work alongside human professionals rather than replace them. It could be a step forward in using technology to improve access and comprehension in mental health care.</p>
+ <p>To create this Phi-2 fine-tune, I used public datasets such as counseling Q&amp;A platforms from sources like WebMD, synthetic mental health conversations, general instructions, and some personification data.</p>
+ </div>
+
+ <div class="panel">
+ <h2>Proposed Use Case</h2>
+ <p>To ensure the model's application remains safe and thoughtful, I propose the following:</p>
+ <ul>
+ <li><strong>Flagging System:</strong> An automated flagging system that analyzes user conversations for keywords or phrases indicative of crisis or severe mental health concerns. When triggered, the system would direct users toward professional help and relevant mental health information.</li>
+ <li><strong>Third-Party Moderation:</strong> Trained human moderators who can assess user input, particularly where responses suggest a potential crisis, and then intervene to connect users with appropriate mental health resources.</li>
+ </ul>
+ </div>
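As a rough illustration, the flagging step proposed above could be sketched in a few lines of Python. The phrase list, resource message, and function names below are illustrative placeholders of my own, not part of this model and not a clinically validated screening tool:

```python
import re

# Illustrative patterns only; a real deployment would need a vetted,
# much broader list maintained with clinical input.
CRISIS_PATTERNS = [
    r"\bsuicid\w*\b",
    r"\bkill (myself|himself|herself)\b",
    r"\bself[- ]harm\w*\b",
    r"\bend (my|it all)\b",
]

RESOURCE_MESSAGE = (
    "It sounds like you may be in crisis. Please reach out to a professional "
    "right away, e.g. a local emergency number or a suicide prevention hotline."
)


def flag_crisis(message: str) -> bool:
    """Return True if the message matches any crisis-indicative pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in CRISIS_PATTERNS)


def route(message: str, generate) -> str:
    """Redirect flagged messages to resources; otherwise call the LLM backend."""
    if flag_crisis(message):
        return RESOURCE_MESSAGE  # never pass a flagged message to the model
    return generate(message)
```

In practice this check would sit in front of every model call (and a hit could also page the human moderators described above), but the keyword approach is intentionally simple and will miss paraphrases.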
+
+ <div class="usecase-container">
+ <img src="https://i.imgur.com/T3ooSGA.png" alt="Potential use case" class="usecase-image">
+ </div>
+
+ <div class="panel">
+ <h2>Why It Might Be Useful</h2>
+ <ul>
+ <li><strong>Easy Access:</strong> Since this model can run on a wide variety of hardware, it's available for free to anyone with a decent device. No GPU required. (<a href="https://huggingface.co/antiven0m/speculor-2.7b-GGUF" style="color: #FFD700; text-decoration: none;">See here</a>.)</li>
+ <li><strong>Practice Opening Up:</strong> It can be really hard to openly share your deepest struggles and mental health issues, even with professionals. Chatting with an LLM first could take some of the pressure off, let you get comfortable putting those thoughts and feelings into words, and help you find a clearer way to discuss them with a licensed mental health professional.</li>
+ <li><strong>Exploratory Tool:</strong> The low-pressure setting allows users to safely explore different perspectives, thought experiments, or hypothetical scenarios related to their mental wellbeing.</li>
+ <li><strong>Available 24/7:</strong> Unlike humans with limited hours, an LLM is available around the clock whenever someone needs mental health information or wants to talk through their thoughts and feelings.</li>
+ </ul>
+ <p>The idea is that by having low-stakes conversations with this LLM, you may find it easier to talk clearly about your situation with an actual therapist or mental health provider when you're ready. Think of it as a casual warm-up for the real thing.</p>
+ </div>
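For the "no GPU required" route above, a CPU-only chat turn might be wired up via llama-cpp-python roughly as follows. Note the assumptions: the quantization filename is a guess (check the linked GGUF repo for the actual files), and the Alpaca-style prompt template is a common default for Phi-2 fine-tunes rather than a documented format for this model:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in an Alpaca-style instruction format.

    The exact template this fine-tune expects isn't stated in the card;
    this layout is a common default assumption for Phi-2 tunes.
    """
    return f"### Instruction:\n{user_message}\n\n### Response:\n"


def chat_cpu(model_path: str, user_message: str) -> str:
    """Run one chat turn on CPU with llama-cpp-python (pip install llama-cpp-python)."""
    from llama_cpp import Llama

    # model_path points at a local GGUF file, e.g. one downloaded from
    # https://huggingface.co/antiven0m/speculor-2.7b-GGUF (filename varies
    # by quantization level).
    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    out = llm(build_prompt(user_message), max_tokens=256, stop=["### Instruction:"])
    return out["choices"][0]["text"].strip()
```

A hypothetical call would then be `chat_cpu("speculor-2.7b.Q4_K_M.gguf", "I've been feeling anxious lately.")`, with smaller quantizations trading response quality for lower memory use.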
+
+ <div class="panel">
+ <h2>Limitations</h2>
+ <p>While I've aimed to make this model as helpful as possible, it's imperative that users understand its constraints:</p>
+ <ul>
+ <li><strong>No Medical Expertise:</strong> This model has no clinical training and cannot provide professional medical diagnoses or treatment plans. Its knowledge comes purely from the data it was trained on.</li>
+ <li><strong>Lack of Human Context:</strong> As an AI system, it cannot fully comprehend the depth and complexity of human experiences, emotions, and individual contexts the way a human professional would.</li>
+ <li><strong>Limited Knowledge:</strong> The model's responses are limited to the information contained in its training data. There will inevitably be gaps in its understanding of certain mental health topics, complexities, and nuances.</li>
+ <li><strong>No Crisis Intervention:</strong> This model is not designed for, or capable of, any kind of crisis intervention or emergency response, and cannot help in acute, high-risk situations.</li>
+ </ul>
+ <p>I want to emphasize that this model is intended as a supportive tool, not a replacement for professional mental health support and services from qualified human experts, and certainly not an "AI Therapist". Users should approach their conversations with appropriate expectations.</p>
+ </div>
+
+ <div class="panel">
+ <h2>Privacy</h2>
+ <p>While the model itself does not save or transmit data, some frontends or interfaces used to interact with it may locally store logs of your conversations on your device. Be aware of this possibility and take appropriate precautions!</p>
+ </div>
+
+ <div class="disclaimer">
+ <p><strong>Disclaimer:</strong> I am not a mental health professional. This LLM is not a substitute for professional mental health services or licensed therapists. It provides general information and supportive conversation, but cannot diagnose conditions, provide treatment plans, or handle crisis situations. Please seek qualified human care for any serious mental health needs.</p>
+ </div>
+ </div>
+ </body>