ryefoxlime committed
Commit 072ee0e
1 Parent(s): 8e0d21f

Updated README with Information about the FER Model

Files changed (1)
  1. README.md +32 -0
README.md CHANGED
@@ -13,8 +13,40 @@ TADBot is a small language model that is trained on the <input_data_set_name> data
  - Flask: A library used to create a server for TADBot.
  - Raspberry Pi: A small, low-cost computer used to host the Text to Speech and Speech to Text models and the TADBot server.
  - FER: A deep learning model used to detect emotions from faces in real time using a webcam.
+ - S2T and T2S: Speech to Text and Text to Speech models used to convert speech to text and text to speech, respectively.
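+ 
+ > A minimal sketch of the speech loop the Raspberry Pi could run. The repository does not name the libraries it uses; `speech_recognition` and `pyttsx3` below are illustrative assumptions, not TADBot's actual implementation.
+ 
+ ```python
+ import speech_recognition as sr  # assumed S2T backend (not confirmed by this repo)
+ import pyttsx3                   # assumed offline T2S engine (not confirmed by this repo)
+ 
+ recognizer = sr.Recognizer()
+ tts_engine = pyttsx3.init()
+ 
+ def listen_once() -> str:
+     """Capture one utterance from the microphone and return it as text."""
+     with sr.Microphone() as source:
+         recognizer.adjust_for_ambient_noise(source)
+         audio = recognizer.listen(source)
+     return recognizer.recognize_google(audio)  # any S2T backend could be swapped in here
+ 
+ def speak(reply: str) -> None:
+     """Read TADBot's reply back to the user through the speakers."""
+     tts_engine.say(reply)
+     tts_engine.runAndWait()
+ ```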

  # Features
+ ## FER Model:
+ - TADBot uses a deep learning model to detect emotions from faces in real time using a webcam. This allows TADBot to better understand the emotional context of a conversation and provide more appropriate and empathetic responses.
+ - The output of the FER model is sent to the TADBot server, where it is used to identify the emotion in the image sent by the client. This information is then used to generate a more appropriate response from the model.
+ - The detected emotions are also logged separately in a text file, which the client can access to track how the emotion changes over the course of the conversation and gain insights into it.
+ - The data is not retained: it is erased after every conversation, adhering to doctor-client confidentiality.
+ > High-level design (HLD) of the FER model
+ 
+ ```mermaid
+ flowchart TD
+     %% User Interface Layer
+     A[Raspberry Pi] -->|Sends image| B[detecfaces.py]
+     B --->|Returns processed data| A
+ 
+     %% Server
+     subgraph Server
+         %% Processing Layer
+         B -->|Captured image| T1[prediction.py]
+         M1[RAFDB trained model] -->|Epoch with best accuracy, 92%| B
+         T1 -->|Top 3 emotions predicted| B
+ 
+         %% Model Layer
+         M1
+ 
+         %% Processing
+         subgraph Processing
+             T1 -->|Send image| T2[detec_faces]
+             T2 -->|Returns a 224x224 face| T1
+         end
+     end
+ ```
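+ 
+ > A minimal sketch of the server-side flow shown above: a Flask route accepts an image from the client, runs the emotion classifier, returns the top 3 emotions, and appends the best guess to the conversation's emotion log. The endpoint name, label order, stand-in classifier, and log path are illustrative assumptions; TADBot's actual prediction.py and detec_faces code may differ.
+ 
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ from flask import Flask, request, jsonify
+ from PIL import Image
+ from torchvision import transforms
+ 
+ app = Flask(__name__)
+ 
+ # RAF-DB's seven basic emotion classes; the repo's label order may differ.
+ EMOTIONS = ["surprise", "fear", "disgust", "happiness", "sadness", "anger", "neutral"]
+ LOG_FILE = "emotion_log.txt"  # hypothetical log path; cleared after each conversation
+ 
+ # Stand-in classifier so this sketch runs end to end; TADBot instead loads its
+ # RAFDB-trained checkpoint (the epoch with the best accuracy, ~92%).
+ model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, len(EMOTIONS)))
+ model.eval()
+ 
+ to_tensor = transforms.Compose([
+     transforms.Resize((224, 224)),  # stand-in for the detec_faces 224x224 face crop
+     transforms.ToTensor(),
+ ])
+ 
+ def predict_top3(face: torch.Tensor) -> list[tuple[str, float]]:
+     """Return the three most likely emotions for a 224x224 face tensor."""
+     with torch.no_grad():
+         probs = F.softmax(model(face.unsqueeze(0)), dim=1).squeeze(0)
+     top = torch.topk(probs, k=3)
+     return [(EMOTIONS[int(i)], round(float(p), 3)) for p, i in zip(top.values, top.indices)]
+ 
+ @app.route("/predict", methods=["POST"])  # hypothetical endpoint name
+ def predict():
+     image = Image.open(request.files["image"].stream).convert("RGB")
+     top3 = predict_top3(to_tensor(image))
+     with open(LOG_FILE, "a") as log:  # emotion trail the client can read back later
+         log.write(f"{top3[0][0]}\n")
+     return jsonify({"top_3_emotions": top3})
+ ```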
+ ## S2T Model and T2S Model:

  # How It Works