---
library_name: transformers
tags:
- text-generation-inference
- SmolLM2
license: mit
datasets:
- mrs83/kurtis_mental_health_final
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
pipeline_tag: text-generation
---

# Model Card for Kurtis-SmolLM2-360M-Instruct

This model has been fine-tuned using Kurtis, an experimental fine-tuning, inference, and evaluation tool for Small Language Models.

## Model Details

### Model Description

- **Developed by:** Massimo R. Scamarcia
- **Funded by:** Massimo R. Scamarcia (self-funded)
- **Shared by:** Massimo R. Scamarcia
- **Model type:** Transformer decoder
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** HuggingFaceTB/SmolLM2-360M-Instruct

### Model Sources

- **Repository:** [https://github.com/mrs83/kurtis](https://github.com/mrs83/kurtis)

## Uses

The model is intended for use in a conversational setting, particularly in mental health and therapeutic support scenarios.

### Direct Use

Not suitable for production usage.

### Out-of-Scope Use

This model should not be used for:

- Making critical mental health decisions or diagnoses.
- Replacing professional mental health services.
- Applications where responses require regulatory compliance or are highly sensitive.
- Generating responses without human supervision, especially in contexts involving vulnerable individuals.

## Bias, Risks, and Limitations

Misuse of this model could lead to inappropriate or harmful responses, so it should not be deployed without proper safeguards in place.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

WIP
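Until this section is completed, the following is a minimal sketch of loading the model with the `transformers` library. The repository id `mrs83/Kurtis-SmolLM2-360M-Instruct` is assumed from the model name and may differ; generation settings are illustrative, not recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id, inferred from the model name; adjust if it differs.
model_id = "mrs83/Kurtis-SmolLM2-360M-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# SmolLM2-Instruct models ship a chat template; build the prompt from chat messages.
messages = [
    {"role": "user", "content": "I have been feeling anxious lately. What can I do?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a reply and decode only the newly generated tokens.
outputs = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```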