---
license: cc-by-nc-4.0
language:
- fa
tags:
- LLM
- therapist
- llama-3
- llama
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B
---

# Persian Therapist Model: Dr. Aram and Mohammad

![A modern therapist's office with a Persian aesthetic, featuring a humanoid robot therapist and a visibly sad man. The robot is sleek.](https://huggingface.co/aminabbasi/Persian-Therapist-Llama-3-8B/resolve/main/A%20modern%20therapist's%20office.webp)

This model is fine-tuned from LLaMA-3-8B to simulate therapeutic conversations in Persian between a therapist named Dr. Aram and a client named Mohammad. It is designed to help developers and researchers build applications that require empathetic dialogue in a therapeutic context. Building on LLaMA-3-8B's strong language-understanding capabilities, the model generates nuanced, contextually appropriate responses, making it well suited to digital therapeutic solutions.

## Model Description

This conversational model is fine-tuned on a collection of high-quality simulated therapy session transcripts in Persian that mimic real-world therapeutic conversations. It is intended for scenarios where natural, empathetic dialogue generation is needed.
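
The model expects the instruction-style prompt layout used in the fine-tuning data, with `### Instruction`, `### Human`, and `### Therapist` markers. The sketch below only restates the template from the usage example in the next section; the placeholder names are illustrative:

```python
# Prompt layout expected by the model (taken from the usage example below).
# The system instruction is written in Persian and casts the model as "Dr. Aram";
# each user turn goes under "### Human:" and the model completes after "### Therapist:".
PROMPT_TEMPLATE = """
### Instruction: 
{system_instruction}
### Human:
{user_message}
### Therapist:
"""
```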

### How to Use

To use this model, you can load it with the Hugging Face Transformers and PEFT libraries as follows:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, pipeline

# Load the base model in half precision and attach the fine-tuned adapter.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    device_map="auto",
    torch_dtype=torch.float16
)
model.config.use_cache = False
model = PeftModel.from_pretrained(
    model,
    "aminabbasi/Persian-Therapist-Llama-3-8B"
)

# Text-generation pipeline; token id 14711 is used in place of the default EOS/pad
# tokens, so generation stops as soon as that token is produced.
pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer="meta-llama/Meta-Llama-3-8B",
    max_length=2048,
    do_sample=True,
    temperature=0.9,
    top_p=0.9,
    eos_token_id=14711,
    pad_token_id=14711,
)

# System instruction (in Persian): "You are an intelligent language model named
# 'Dr. Aram', acting as a psychologist. A person named 'Mohammad' has come to you,
# seeking help to manage his emotions and find solutions to his problems. Your task
# is to give supportive, empathetic responses: listen carefully and answer kindly."
chat_text = """
### Instruction: 
شما یک مدل زبانی هوشمند هستید که نام آن "دکتر آرام" است. شما در نقش یک روانشناس عمل می‌کنید. شخصی به نام "محمد" به شما مراجعه کرده است. محمد به دنبال کمک است تا بتواند احساسات خود را مدیریت کند و راه‌حل‌هایی برای مشکلات خود پیدا کند. وظیفه شما ارائه پاسخ‌های حمایت‌کننده و همدردانه است. شما باید به صحبت‌ها با دقت گوش دهید و با مهربانی پاسخ دهید."""

# Interactive chat loop: type a message as Mohammad, or "exit" to quit.
while True:
    user_input = input("Mohammad: ")
    if user_input == "exit":
        break
    chat_text += f"""
### Human:
{user_input}
### Therapist:
"""
    answer = pipe(chat_text)[0]["generated_text"].split("### Therapist:")[-1].replace("#", "").strip()
    print("Dr. Aram:", answer)
    chat_text += answer
```
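
If GPU memory is tight, the base model can also be loaded with 4-bit quantization before attaching the adapter. This is a minimal sketch and not part of the original recipe above; it assumes the optional `bitsandbytes` package is installed:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization config (requires the bitsandbytes package).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the quantized base model, then attach the fine-tuned adapter as before.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    device_map="auto",
    quantization_config=bnb_config,
)
model = PeftModel.from_pretrained(model, "aminabbasi/Persian-Therapist-Llama-3-8B")
```

The pipeline construction and chat loop then stay the same as in the example above.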

## Citation

```bibtex
@misc{Persian-Therapist-Llama-3-8B,
  title={Persian Therapist Model: Dr. Aram and Mohammad},
  author={Mohammad Amin Abbasi},
  year={2024},
  publisher={Hugging Face},
}
```