awaisakhtar committed: Update readme (commit 56fbb87, 1 parent: 6f874c5)
README.md CHANGED
@@ -25,4 +25,51 @@ The following `bitsandbytes` quantization config was used during training:

- PEFT 0.5.0

# Project Title
Short description of your project or the model you've fine-tuned.
## Table of Contents

- [Overview](#overview)
- [Training Procedure](#training-procedure)
- [Quantization Configuration](#quantization-configuration)
- [Framework Versions](#framework-versions)
- [Usage](#usage)
- [Evaluation](#evaluation)
- [Contributing](#contributing)
- [License](#license)
## Overview
Provide a brief introduction to your project. Explain what your fine-tuned model does and its potential applications. Mention any notable achievements or improvements over the base model.
## Training Procedure
Describe the training process for your fine-tuned model. Include details such as the following (a hedged sketch of such a run appears after this list):

- Dataset used (XSum).
- Amount of data used (3% of the dataset).
- Number of training epochs (1 epoch).
- Any specific data preprocessing or augmentation.
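
The full training script is not part of this repository, so the following is only a minimal sketch of how such a run could look with `datasets`, `transformers`, and `peft`: a LoRA adapter trained for one epoch on 3% of the XSum training split. The base checkpoint (`google/flan-t5-base`), the LoRA hyperparameters, and the sequence lengths are illustrative placeholders, not the values actually used for this model.

```python
# Hedged sketch only: base checkpoint, LoRA settings, and lengths are placeholders.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

base_model = "google/flan-t5-base"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model)

# Wrap the base model with a LoRA adapter via PEFT.
model = get_peft_model(model, LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16))

# Use only 3% of the XSum training split, as noted above.
train_data = load_dataset("xsum", split="train[:3%]")

def preprocess(batch):
    # Tokenize articles as inputs and reference summaries as labels.
    enc = tokenizer(batch["document"], max_length=512, truncation=True)
    enc["labels"] = tokenizer(text_target=batch["summary"], max_length=64, truncation=True)["input_ids"]
    return enc

train_data = train_data.map(preprocess, batched=True, remove_columns=train_data.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="xsum-peft-out", num_train_epochs=1),
    train_dataset=train_data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
model.save_pretrained("xsum-peft-out")  # saves only the adapter weights
```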
## Quantization Configuration
Explain the quantization configuration used during training. Include details such as the following (an example configuration sketch follows this list):

- Quantization method (bitsandbytes).
- Whether the model was loaded in 8-bit or 4-bit.
- Threshold and skip modules for int8 quantization.
- Use of FP32 CPU offload and FP16 weights.
- Configuration for 4-bit quantization (fp4, double quant, compute dtype).
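
For reference, here is a minimal sketch of a `transformers.BitsAndBytesConfig` that covers the options listed above. The concrete values (4-bit fp4, float16 compute dtype, int8 threshold of 6.0) are common defaults shown for illustration, not necessarily the exact configuration used for this model.

```python
# Hedged sketch: illustrative quantization settings, not the exact training config.
import torch
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit loading (alternative: load_in_8bit=True)
    bnb_4bit_quant_type="fp4",               # 4-bit quantization type (fp4 or nf4)
    bnb_4bit_use_double_quant=False,         # double quantization of the quantization constants
    bnb_4bit_compute_dtype=torch.float16,    # compute dtype for the 4-bit matmuls
    llm_int8_threshold=6.0,                  # outlier threshold (int8 path)
    llm_int8_skip_modules=None,              # modules to keep unquantized (int8 path)
    llm_int8_enable_fp32_cpu_offload=False,  # FP32 CPU offload for oversized layers
    llm_int8_has_fp16_weight=False,          # keep FP16 main weights (int8 path)
)

# "your-base-model" is a placeholder checkpoint name.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "your-base-model",
    quantization_config=bnb_config,
    device_map="auto",
)
```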
## Framework Versions
List the versions of the frameworks or libraries you used for this project. Include specific versions, e.g., PEFT 0.5.0.
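
A simple way to collect these numbers is to print the installed versions; the snippet below assumes `peft`, `transformers`, and `bitsandbytes` are available in the training environment.

```python
# Print the library versions to record in this section (e.g., PEFT 0.5.0).
import bitsandbytes
import peft
import transformers

print(f"PEFT {peft.__version__}")
print(f"Transformers {transformers.__version__}")
print(f"bitsandbytes {bitsandbytes.__version__}")
```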
## Usage
Provide instructions on how to use your fine-tuned model. Include code snippets or examples on how to generate summaries using the model. Mention any dependencies that need to be installed.
```bash
# Example usage command
python generate_summary.py --model your-model-name --input input.txt --output output.txt
```
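
For loading the adapter directly from Python, a hedged sketch with `peft` and `transformers` could look like the following; `your-username/your-model-name` stands in for the actual adapter repository, and the generation settings are placeholders.

```python
# Hedged sketch: load the PEFT adapter on top of its base model and summarize one article.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "your-username/your-model-name"  # placeholder adapter repo

# Resolve the base checkpoint recorded in the adapter config, then attach the adapter.
peft_config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForSeq2SeqLM.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)

article = "Text of the document you want to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```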