Update README.md
README.md
CHANGED
@@ -28,5 +28,40 @@ that facilitates the fine-tuning of various well-known LLMs on custom data.
Parameter-efficient fine-tuning is achieved via the QLoRA method [Dettmers et al., 2023](https://proceedings.neurips.cc/paper_files/paper/2023/file/1feb87871436031bdc0f2beaa62a049b-Paper-Conference.pdf).
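
For orientation, QLoRA fine-tuning in LLaMA-Factory is driven by a small training config. The sketch below is illustrative only and is not a config shipped with this repository; the model, dataset, and output paths are placeholders:

```bash
# Illustrative QLoRA training config for LLaMA-Factory; every value below is
# a placeholder, not a setting used by this project.
cat > qlora_sft_example.yaml <<'EOF'
model_name_or_path: meta-llama/Llama-2-7b-hf   # placeholder base model
stage: sft
do_train: true
finetuning_type: lora
quantization_bit: 4        # 4-bit quantization is what turns LoRA into QLoRA
lora_target: all
dataset: alpaca_en_demo    # placeholder dataset name
template: default
output_dir: saves/qlora-example
EOF
llamafactory-cli train qlora_sft_example.yaml
```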
## Usage Guide
This project was executed on an Ubuntu 22.04.3 system running Linux kernel 6.8.0-40-generic.
### Installation
To get started, you first need to set up the environment using the **LLaMA-Factory** project. Please refer to the official [LLaMA-Factory repository](https://github.com/hiyouga/LLaMA-Factory) for more details.
You can install LLaMA-Factory by running the following commands:
```bash
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
```
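
If the install succeeded, the `llamafactory-cli` entry point should now be on your `PATH`; a quick sanity check:

```bash
# Should print the installed LLaMA-Factory version without errors.
llamafactory-cli version
```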
### Execution
In the DeMINT project, the fine-tuned model is served through a REST API. Below is an example of how to configure and run it.
**Setting the Server Configuration**

To set the port and the address of the server, use the following environment variables:

```bash
# Default 8000
export KIND_TEACHER_PORT=8000
# Default localhost
export KIND_TEACHER_HOST="localhost"
```
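
The comments above give the defaults used when a variable is unset. As a quick check, the standard shell fallback expansion shows the endpoint the chatbot will target; the `echo` line is only illustrative:

```bash
# Prints the effective endpoint, falling back to the documented defaults.
echo "kind-teacher server: ${KIND_TEACHER_HOST:-localhost}:${KIND_TEACHER_PORT:-8000}"
```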
**Running the Program**
Once the environment is configured, you can launch the API server with the following command:
```bash
llamafactory-cli api run_api_inference_1.yaml
```
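
The YAML file passed to `llamafactory-cli api` is a standard LLaMA-Factory inference config. The contents of `run_api_inference_1.yaml` are not reproduced here; the sketch below is only a hypothetical example of the usual shape, with placeholder model, adapter, and template values:

```bash
# Hypothetical shape of an inference config such as run_api_inference_1.yaml.
# All values are placeholders; consult the actual file in this repository.
cat > api_inference_example.yaml <<'EOF'
model_name_or_path: meta-llama/Llama-2-7b-hf   # placeholder base model
adapter_name_or_path: saves/qlora-example      # placeholder fine-tuned adapter
template: default
finetuning_type: lora
EOF
```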
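
LLaMA-Factory's `api` mode serves an OpenAI-compatible REST interface. Assuming the server ends up listening on the host and port configured above (an assumption, since that wiring lives in this project's own code), a test request might look like:

```bash
# Test request against the OpenAI-compatible chat completions endpoint.
# Host/port reuse the environment variables above; "kind-teacher" is a
# placeholder model name, as the served model is defined by the YAML config.
curl "http://${KIND_TEACHER_HOST:-localhost}:${KIND_TEACHER_PORT:-8000}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "kind-teacher", "messages": [{"role": "user", "content": "Hello!"}]}'
```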