---
base_model: google/gemma-2b
datasets:
- generator
library_name: peft
license: gemma
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: results
  results: []
---
# results

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
### Inspiration
The inspiration for WanOne stems from the challenges the LGBTQ+ community faces, especially around mental health. Approximately 15 million adults in the United States identify as LGBTQ+, and this demographic faces unique difficulties such as harassment, discrimination, and barriers to healthcare. We saw a pressing need for accessible, empathetic mental health support for people who may not have the resources for traditional therapy. Our goal was to create a mental health assistant that provides a safe space, offering comfort and guidance with a specific focus on LGBTQ+ experiences.
### What it does
WanOne is an LGBTQ+ friendly AI mental health assistant designed to support users through conversations. The AI uses structured dialogue to build a relationship, check in on the user’s mood, explore deeper emotional concerns, and provide summaries to help users track their mental health over time. The assistant follows a mental health process that’s been fine-tuned for LGBTQ+ users, offering safety features such as conversation-ending controls and feedback buttons to give users more control over their interaction.
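The staged flow described above (relationship-building, mood check-in, deeper emotional exploration, summary) together with the conversation-ending safety control can be pictured as a small state machine. This is an illustrative sketch only; the `Session` class and stage names are ours, not WanOne's actual implementation:

```python
from enum import Enum, auto


class Stage(Enum):
    """The conversation stages, in the order the assistant moves through them."""
    GREETING = auto()     # build rapport with the user
    MOOD_CHECK = auto()   # check in on the user's current mood
    EXPLORE = auto()      # explore deeper emotional concerns
    SUMMARY = auto()      # summarize, so the user can track mood over time
    ENDED = auto()        # terminal state


class Session:
    """Tracks where one user sits in the structured dialogue."""

    ORDER = [Stage.GREETING, Stage.MOOD_CHECK, Stage.EXPLORE, Stage.SUMMARY]

    def __init__(self):
        self.stage = Stage.GREETING
        self.mood_log = []  # check-ins collected for the summary stage

    def record_mood(self, note):
        """Store a mood check-in so the summary stage can reference it."""
        self.mood_log.append(note)

    def advance(self):
        """Move to the next stage; after SUMMARY the session ends."""
        if self.stage is Stage.ENDED:
            return self.stage
        i = self.ORDER.index(self.stage)
        self.stage = self.ORDER[i + 1] if i + 1 < len(self.ORDER) else Stage.ENDED
        return self.stage

    def end_now(self):
        """Safety control: the user may end the conversation at any point."""
        self.stage = Stage.ENDED
```

The point of modeling the flow explicitly is that the end-conversation control works from any stage, rather than only at scripted exit points.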
### How we built it
We built WanOne on the Intel Tiber Developer Cloud and fine-tuned a large language model (LLM) based on google/gemma-2b. Our process involved researching LGBTQ+ mental health challenges, conducting user interviews, and integrating the PHQ-9 depression severity measure to guide the AI's interactions. The AI was designed with a specific persona to simulate the empathetic responses a therapist might give, ensuring the user feels validated and supported throughout the interaction.
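The PHQ-9 is a standard nine-item questionnaire, each item scored 0–3. How WanOne weaves it into dialogue isn't shown here, but the conventional total-score severity banding it would rely on looks like this (the function name is ours):

```python
def phq9_severity(scores):
    """Map nine PHQ-9 item scores (each 0-3) to the standard severity band.

    Returns (total, band), where total is in 0..27 and band follows the
    usual PHQ-9 cut-offs: 0-4 minimal, 5-9 mild, 10-14 moderate,
    15-19 moderately severe, 20-27 severe.
    """
    if len(scores) != 9 or any(s not in range(4) for s in scores):
        raise ValueError("PHQ-9 expects nine item scores, each between 0 and 3")
    total = sum(scores)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band
```

For example, `phq9_severity([1] * 9)` totals 9, landing in the "mild" band.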
### Challenges we ran into
We faced significant challenges using the Intel Tiber Developer Cloud, a tool we had never worked with before. Because the platform is relatively new, there was limited information available online, making it hard to find solutions quickly. We overcame this by methodically testing libraries one by one and diving deep into the documentation to ensure we could use the platform effectively. Though the learning curve was steep, this approach allowed us to build a functional and robust AI model.
### Accomplishments that we're proud of
We are proud of creating an AI that addresses a crucial gap in LGBTQ+ mental health support. The fine-tuning process to tailor WanOne’s responses to be both empathetic and helpful was no small feat. We also integrated critical safety features, allowing users to manage their interactions in a way that promotes security and control. Additionally, we’re proud of building a fully functioning model within the constraints of the hackathon.
### What we learned
We learned a lot about the complexities of designing AI for sensitive and nuanced interactions. Our user interviews underscored the importance of control and safety in AI interactions, leading us to incorporate features that empower users. Additionally, we deepened our understanding of the specific challenges LGBTQ+ individuals face and how AI can support mental health journeys in a meaningful way.
### What's next for WanOne LGBTQ+ Assistant
Our next steps for WanOne include:
- Voice Chat: Many users expressed a desire for more conversational flexibility, and we plan to introduce voice interactions to enhance accessibility.
- Additional Personas: We aim to develop more AI personas to offer diverse treatment approaches and unique feedback styles, making the AI experience more personalized.
- Cybersecurity Enhancements: To safeguard user privacy, we plan to integrate Intel Cyber Trust Services for secure, confidential data handling in AI-powered healthcare.
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- PyTorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1