---

# ConvoBrief: LoRA-enhanced BART Model for Dialogue Summarization

This model is a variant of the `facebook/bart-large-cnn` model, enhanced with Low-Rank Adaptation (LoRA) for dialogue summarization. LoRA freezes the base model and trains small low-rank update matrices that are added to the attention weight matrices, which keeps fine-tuning cheap while still letting the model pick up the nuances of conversational text.

## LoRA Configuration:
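The exact adapter hyperparameters are not shown in this excerpt. As a purely illustrative sketch, a LoRA setup for a BART-style seq2seq model with `peft` typically looks like the following (all values here are hypothetical, not the ones used for this model):

```python
# Illustrative LoRA configuration -- hypothetical values, not this model's actual settings.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,       # BART is an encoder-decoder (seq2seq) model
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor applied to the update
    lora_dropout=0.1,                      # dropout on the LoRA layers during training
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)
```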
This model has been fine-tuned with the PEFT (Parameter-Efficient Fine-Tuning) library, updating only the LoRA adapter weights against the dialogue summarization objective while the rest of the network stays frozen.

## Usage:
Deploy this LoRA-enhanced BART model for dialogue summarization tasks, leveraging Low-Rank Adaptation to capture contextual dependencies in conversations and produce concise, informative summaries of conversational text.

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# ... (adapter loading, tokenization, and generation elided)
print("Original Dialogue:\n", full_dialogue)
print("Generated Summary:\n", summary[0]['summary_text'])
```

### Framework versions

- PEFT 0.4.0