---
library_name: adapter-transformers
base_model: openchat/openchat_3.5
license: mit
datasets:
- declare-lab/MELD
metrics:
- f1
tags:
- MELD
- Trigger
- 7B
- LoRA
- llama2
language:
- en
pipeline_tag: text-classification
---

# Model Card for Model ID

The model identifies the trigger utterances for the emotion flip of the last utterance in multi-party conversations.


## Model Details

### Model Description

The model presented here is tailored for the EDiReF shared task at SemEval 2024, specifically addressing Emotion Flip Reasoning (EFR) in English multi-party conversations.

The model leverages the strengths of large language models (LLMs) pre-trained on extensive textual data, enabling it to capture complex linguistic patterns and relationships. To adapt it to the EFR task, the model was fine-tuned with Quantized Low-Rank Adaptation (QLoRA) on the MELD-based EFR dataset, combined with strategic prompt engineering: input prompts are crafted to guide the model toward identifying the trigger utterances responsible for emotion flips in multi-party conversations.
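As a rough illustration of the QLoRA setup described above, the sketch below uses the Hugging Face `peft` and `bitsandbytes` stack: the base model is loaded in 4-bit precision and only low-rank adapter weights are trained. The hyperparameter values (rank, alpha, target modules) are illustrative assumptions, not the ones used for this adapter.

```python
# Minimal QLoRA configuration sketch (hyperparameters are assumptions,
# not the settings used to train this adapter).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantized base weights: the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "openchat/openchat_3.5",
    quantization_config=bnb_config,
)
lora_config = LoraConfig(
    r=16,                                   # low-rank dimension (assumed)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)   # only the LoRA adapters are trainable
```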

In summary, the model pinpoints trigger utterances for emotion flips in English dialogues, demonstrating the combined effectiveness of the openchat_3.5 base model, QLoRA fine-tuning, and strategic prompt engineering.
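The exact prompt template used for fine-tuning is not published; the sketch below only illustrates the general shape of a prompt-engineered EFR input, where each utterance is listed with its speaker and emotion and the model is asked which utterances triggered the final emotion. Every field name and wording here is a hypothetical assumption.

```python
# Hypothetical sketch of building an EFR classification prompt.
# The actual template for this model is not published; the wording
# and layout below are assumptions for illustration only.

def build_efr_prompt(utterances, emotions):
    """Format a multi-party dialogue so an LLM can be asked which
    utterances triggered the emotion of the last one."""
    assert len(utterances) == len(emotions)
    lines = [
        f"{i}. {speaker} [{emotion}]: {text}"
        for i, ((speaker, text), emotion) in enumerate(zip(utterances, emotions), start=1)
    ]
    target = len(utterances)
    return (
        "Below is a multi-party conversation with per-utterance emotions.\n"
        + "\n".join(lines)
        + f"\nWhich utterances trigger the emotion expressed in utterance {target}? "
        "Answer with a comma-separated list of utterance numbers."
    )

prompt = build_efr_prompt(
    [("Joey", "I got the part!"),
     ("Chandler", "They cancelled the show."),
     ("Joey", "What?!")],
    ["joy", "neutral", "surprise"],
)
print(prompt)
```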


- **Developed by:** Hasan et al.
- **Model type:** LoRA Adapter for openchat_3.5 (Text classification)
- **Language(s) (NLP):** English
- **License:** MIT

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [Multi-Party-DialoZ](https://github.com/Zuhashaik/Multi-Party-DialoZ)
- **Paper [Soon]:** [More Information Needed]