Anthonyg5005 committed on
Commit
9bcb78e
1 Parent(s): 5e17b4b

publish finished auto exl2 script


After hours of writing, troubleshooting, and testing across multiple days of low motivation, I finally finished this (hopefully with no bugs). I may add more comments, a modified version for local quants, and maybe some updates for more features as well.

README.md CHANGED
@@ -16,17 +16,18 @@ Feel free to send in PRs or use this code however you'd like.\
16
 
17
  - [Manage branches (create/delete)](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/manage%20branches.py)
18
 
 
 
19
  - [EXL2 Single Quant V3](https://colab.research.google.com/drive/1Vc7d6JU3Z35OVHmtuMuhT830THJnzNfS?usp=sharing) **(COLAB)**
20
 
21
- - [EXL2 Local Quant Windows](https://huggingface.co/Anthonyg5005/hf-scripts/resolve/main/exl2-windows-local/exl2-windows-local.zip?download=true)
22
 
23
- - [Upload folder to repo](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/upload%20folder%20to%20repo.py)
24
 
25
  ## work in progress/not tested ([unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch)
26
 
27
- - Auto exl2 upload script
28
- - Will create repo and create a specified number of custom quants on individual branches
29
- - Windows/Linux support
30
 
31
  ## other recommended stuff
32
 
@@ -38,19 +39,22 @@ Feel free to send in PRs or use this code however you'd like.\
38
  ## usage
39
 
40
  - Manage branches
41
- - Run script and follow prompts. You will be required to be logged in to HF Hub. If you are not logged in, you will need a WRITE token. You can get one in your [HuggingFace settings](https://huggingface.co/settings/tokens). May get some updates in the future for handling more situations. All active updates will be on the [unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch. Colab and Kaggle keys are supported.
42
 
43
- - EXL2 Single Quant
44
- - Allows you to quantize to exl2 using colab. This version creates a exl2 quant to upload to private repo. Should work on any Linux jupyterlab server with CUDA, ROCM should be supported by exl2 but not tested. Only 7B tested on colab.
45
 
46
  - EXL2 Local Quant Windows
47
- - Easily creates environment to quantize models to exl2 using Windows to your local machine.
48
 
49
  - Upload folder to repo
50
- - Uploads user specified folder to specified repo, can create private repos too. Not the same as git commit and push, instead uploads any additional files.
 
 
 
51
 
52
  - Download models
53
- - Make sure you have [huggingface_hub](https://pypi.org/project/huggingface-hub/) installed as it has the same dependencies. You can install it with '`pip install huggingface-hub`'. To use the script, open a terminal and run '`python download-model.py USER/MODEL:BRANCH`'. There's also a '`--help`' flag to show the available arguments. To download from private repositories, make sure to login using '`huggingface-cli login`' or (not recommended) `HF_TOKEN` environment variable.
54
 
55
  ## extras
56
 
 
16
 
17
  - [Manage branches (create/delete)](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/manage%20branches.py)
18
 
19
+ - [Auto EXL2 upload](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/auto-exl2-upload/auto-exl2-upload.zip?download=true)
20
+
21
  - [EXL2 Single Quant V3](https://colab.research.google.com/drive/1Vc7d6JU3Z35OVHmtuMuhT830THJnzNfS?usp=sharing) **(COLAB)**
22
 
23
+ - [EXL2 Local Quant - Windows](https://huggingface.co/Anthonyg5005/hf-scripts/resolve/main/exl2-windows-local/exl2-windows-local.zip?download=true)
24
 
25
+ - [Upload folder to HF](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/upload%20folder%20to%20repo.py)
26
 
27
  ## work in progress/not tested ([unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch)
28
 
29
+ - EXL2 Multi Quant local
30
+ - Will replace Local Quant Windows on this readme and will have Linux support. A modified version of Auto EXL2 upload without the upload and delete steps.
 
31
 
32
  ## other recommended stuff
33
 
 
39
  ## usage
40
 
41
  - Manage branches
42
+ - Run the script and follow the prompts. You must be logged in to the HF Hub; if you are not, you will need a WRITE token. You can get one in your [HuggingFace settings](https://huggingface.co/settings/tokens). Colab and Kaggle secret keys are supported.
43
 
44
+ - Auto EXL2 upload
45
+ - This script automates quantizing models to EXL2 and uploading the quants to the HF Hub as separate branches. It runs on both Windows and Linux.
46
 
47
  - EXL2 Local Quant Windows
48
+ - Easily creates an environment for quantizing models to exl2 locally on Windows. To be replaced soon.
49
 
50
  - Upload folder to repo
51
+ - Uploads a user-specified folder to a specified repo and can create private repos too. Not the same as git commit and push; it simply uploads any additional files. This is meant more to be modified to your needs than used by itself (see the sketch after this list).
52
+
53
+ - EXL2 Single Quant
54
+ - Allows you to quantize to exl2 using Colab. This version creates an exl2 quant and uploads it to a private repo. Only 7B tested on Colab.
55
 
56
  - Download models
57
+ - To use the script, open a terminal and run '`python download-model.py USER/MODEL:BRANCH`'. There's also a '`--help`' flag to show the available arguments. To download from private repositories, make sure to log in using '`huggingface-cli login`' or (not recommended) the `HF_TOKEN` environment variable.
58
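For reference, here's a minimal sketch of doing the same upload-folder flow directly with `huggingface_hub` (the repo name and folder path are placeholders; assumes you're logged in with a WRITE token):

```python
# Minimal sketch: create a private repo (if needed) and upload a local folder to it.
# Assumes `pip install huggingface-hub` and `huggingface-cli login` with a WRITE token.
from huggingface_hub import create_repo, upload_folder, whoami

user = whoami().get('name')                # your HF username
repo_id = f"{user}/my-test-repo"           # placeholder repo name
create_repo(repo_id, private=True, exist_ok=True)  # no error if it already exists
upload_folder(
    folder_path="path/to/local/folder",    # placeholder local folder
    repo_id=repo_id,
    commit_message="Upload folder contents",
)
```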
 
59
  ## extras
60
 
auto-exl2-upload/INSTRUCTIONS.txt ADDED
@@ -0,0 +1,38 @@
1
+ For NVIDIA cards, install the CUDA toolkit:
2
+
3
+ Nvidia Maxwell or higher
4
+ https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64
5
+
6
+ Nvidia Kepler or higher
7
+ https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Windows&target_arch=x86_64
8
+
9
+
10
+ Haven't done much testing, but on Windows, Visual Studio with the "Desktop development with C++" workload might be required. I've gotten cl.exe errors on a previous install.
11
+
12
+
13
+ This may work with AMD cards, but only on Linux. I can't guarantee it will work on AMD cards since I personally don't have one to test with. You may need to install ROCm first: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html
14
+
15
+
16
+
17
+ First, set up your environment by running either windows-setup.bat or linux-setup.sh. If something fails during setup, delete every file and folder except windows-setup.bat, linux-setup.sh, and exl2-quant.py, then try again.
18
+
19
+ After setup is complete, you'll have a file called start-quant (start-quant.bat on Windows, start-quant.sh on Linux). Use this to run the quant script.
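+ For example, a first run on Linux looks like:
+
+   ./linux-setup.sh
+   ./start-quant.sh
+
+ On Windows, run windows-setup.bat and then start-quant.bat instead.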
20
+
21
+ Make sure that your free storage space is 3x the model's size. To estimate this, take the number of billion parameters and multiply by two to get the rough size in GB (fp16 weights), then multiply by 3 for the recommended free space. There's a chance you may get away with 2.5x the size as well.
22
+ Make sure to also have a lot of RAM depending on the model.
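+ For example, a 7B model is roughly 7 x 2 = 14 GB of weights, so plan for around 14 x 3 = 42 GB of free space.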
23
+
24
+ If you close the terminal or the terminal crashes, check the last BPW it was on and enter the remaining quants you wanted. It should be able to pick up where it left off. Don't re-enter a finished BPW, as that quant will start over from the beginning. You may also use ctrl + c to pause at any time during the quant process.
25
+
26
+ Things may break in the future, as the setup downloads the latest version of every dependency, and those may change names or how they work. If something breaks, please open a discussion at https://huggingface.co/Anthonyg5005/hf-scripts/discussions
27
+
28
+
29
+ Credit to turboderp for creating exllamav2 and the exl2 quantization method.
30
+ https://github.com/turboderp
31
+
32
+ Credit to oobabooga for the original download and safetensors scripts.
33
+ https://github.com/oobabooga
34
+
35
+ Credit to Lucain Pouget for maintaining huggingface-hub.
36
+ https://github.com/Wauplin
37
+
38
+ Only tested with CUDA 12.1 on Windows 11. Linux was half-tested through WSL2; I don't have enough RAM to fully test, but quantization did start.
auto-exl2-upload/auto-exl2-upload.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fb52c9577ce88784c0831237f91a9d3edcdcafb3c85ec94b8dac85e036daf8a4
3
+ size 6692
auto-exl2-upload/exl2-quant.py ADDED
@@ -0,0 +1,190 @@
1
+ #usually it's on the inside that counts, not this time. This script is a mess, but it works.
2
+ #import required modules
3
+ from huggingface_hub import login, get_token, whoami, repo_exists, file_exists, upload_folder, create_repo, upload_file, create_branch
4
+ import os
5
+ import sys
6
+ import subprocess
7
+ import glob
8
+
9
+ #define os differences
10
+ oname = os.name
11
+ if oname == 'nt':
12
+ osclear = 'cls'
13
+ osmv = 'move'
14
+ osrmd = 'rmdir /s /q'
15
+ oscp = 'copy'
16
+ pyt = 'venv\\scripts\\python.exe'
17
+ slsh = '\\'
18
+ elif oname == 'posix':
19
+ osclear = 'clear'
20
+ osmv = 'mv'
21
+ osrmd = 'rm -r'
22
+ oscp = 'cp'
23
+ pyt = './venv/bin/python'
24
+ slsh = '/'
25
+ else:
26
+ sys.exit('This script is not compatible with your machine.')
27
+ def clear_screen():
28
+ os.system(osclear)
29
+
30
+ #get token
31
+ if os.environ.get('KAGGLE_KERNEL_RUN_TYPE', None) is not None: #check if user in kaggle
32
+ from kaggle_secrets import UserSecretsClient # type: ignore
33
+ from kaggle_web_client import BackendError # type: ignore
34
+ try:
35
+ login(UserSecretsClient().get_secret("HF_TOKEN")) #login if token secret found
36
+ except BackendError:
37
+ print('''
38
+ When using Kaggle, make sure to use the secret key HF_TOKEN with a 'WRITE' token.
39
+ This will prevent the need to login every time you run the script.
40
+ Set your secrets with the secrets add-on on the top of the screen.
41
+ ''')
42
+ if get_token() is not None:
43
+ #if the token is found then log in:
44
+ login(get_token())
45
+ tfound = "Where are my doritos?" #doesn't matter what this is, only false is used
46
+ else:
47
+ #if the token is not found then prompt user to provide it:
48
+ login(input("API token not detected. Enter your HuggingFace (WRITE) token: "))
49
+ tfound = "false"
50
+
51
+ #if the token is read only then prompt user to provide a write token:
52
+ while True:
53
+ if whoami().get('auth', {}).get('accessToken', {}).get('role', None) != 'write':
54
+ clear_screen()
55
+ if os.environ.get('HF_TOKEN', None) is not None: #if environ finds HF_TOKEN as read-only then display following text and exit:
56
+ print('''
57
+ You have the environment variable HF_TOKEN set.
58
+ You cannot log in.
59
+ Either set the environment variable to a 'WRITE' token or remove it.
60
+ ''')
61
+ input("Press enter to continue.")
62
+ sys.exit("Exiting...")
63
+ if os.environ.get('COLAB_BACKEND_VERSION', None) is not None:
64
+ print('''
65
+ Your Colab secret key is read-only
66
+ Please switch your key to 'write' or disable notebook access on the left.
67
+ ''')
68
+ sys.exit("Stuck in loop, exiting...")
69
+ elif os.environ.get('KAGGLE_KERNEL_RUN_TYPE', None) is not None:
70
+ print('''
71
+ Your Kaggle secret key is read-only
72
+ Please switch your key to 'write' or unattach from notebook in add-ons at the top.
73
+ Having a read-only key attached will require login every time.
74
+ ''')
75
+ print("You do not have write access to this repository. Please use a valid token with (WRITE) access.")
76
+ login(input("Enter your HuggingFace (WRITE) token: "))
77
+ continue
78
+ break
79
+ clear_screen()
80
+
81
+ #get original model repo url
82
+ repo_url = input("Enter unquantized model repository (User/Repo): ")
83
+
84
+ #look for repo
85
+ if repo_exists(repo_url) == False:
86
+ print(f"Model repo doesn't exist at https://huggingface.co/{repo_url}")
87
+ sys.exit("Exiting...")
88
+ model = repo_url.replace("/", "_")
89
+ modelname = repo_url.split("/")[1]
90
+ clear_screen()
91
+
92
+ #ask for number of quants
93
+ qmount = int(input("Enter the number of quants you want to create: "))
94
+ qmount += 1
95
+ clear_screen()
96
+
97
+ #save bpw values
98
+ print(f"Type the BPW for the following {qmount - 1} quants. Recommend staying over 2.4 BPW. Use the vram calculator to find the best BPW values: https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator")
99
+ qnum = {}
100
+ for i in range(1, qmount):
101
+ qnum[f"bpw{i}"] = float(input(f"Enter BPW for quant {i} (2.00-8.00): ")) #convert input to float for proper sorting
102
+ clear_screen()
103
+
104
+ #collect all values in a list for sorting
105
+ bpwvalue = list(qnum.values())
106
+
107
+ #sort the list from smallest to largest
108
+ bpwvalue.sort()
109
+
110
+ if not os.path.exists(f"models{slsh}{model}{slsh}converted-st"): #check if model was converted to safetensors, skip download if it was
111
+ result = subprocess.run(f"{pyt} download-model.py {repo_url}", shell=True) #download model from hf (Credit to oobabooga for this script)
112
+ if result.returncode != 0:
113
+ print("Download failed.")
114
+ sys.exit("Exiting...")
115
+ clear_screen()
116
+
117
+ if not glob.glob(f"models/{model}/*.safetensors"): #check if safetensors model exists
118
+ convertst = input("Couldn't find safetensors model, do you want to convert to safetensors? (y/n): ")
119
+ while convertst != 'y' and convertst != 'n':
120
+ convertst = input("Please enter 'y' or 'n': ")
121
+ if convertst == 'y':
122
+ print("Converting weights to safetensors, please wait...")
123
+ result = subprocess.run(f"{pyt} convert-to-safetensors.py models{slsh}{model} --output models{slsh}{model}-st", shell=True) #convert to safetensors (Credit to oobabooga for this script as well)
124
+ if result.returncode != 0:
125
+ print("Converting failed. Please look for a safetensors model or convert model manually.")
126
+ sys.exit("Exiting...")
127
+ subprocess.run(f"{osrmd} models{slsh}{model}", shell=True)
128
+ subprocess.run(f"{osmv} models{slsh}{model}-st models{slsh}{model}", shell=True)
129
+ open(f"models{slsh}{model}{slsh}converted-st", 'w').close()
130
+ print("Finished converting")
131
+ else:
132
+ sys.exit("Can't quantize a non-safetensors model. Exiting...")
133
+ clear_screen()
134
+
135
+ #create new repo if one doesn't already exist
136
+ if repo_exists(f"{whoami().get('name', None)}/{modelname}-exl2") == False:
137
+ print("Creating model repository...")
138
+ create_repo(f"{whoami().get('name', None)}/{modelname}-exl2", private=True)
139
+ print(f"Created repo at https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2") #notify user of repo creation
140
+
141
+ #create the markdown file
142
+ print("Writing model card...")
143
+ with open('./README.md', 'w') as file:
144
+ file.write(f"# Exl2 quants for [{modelname}](https://huggingface.co/{repo_url})\n\n")
145
+ file.write("## Automatically quantized using the auto quant from [hf-scripts](https://huggingface.co/anthonyg5005/hf-scripts)\n\n")
146
+ file.write(f"Would recommend {whoami().get('name', None)} to change up this README to include more info.\n\n")
147
+ file.write("### BPW:\n\n")
148
+ for bpw in bpwvalue:
149
+ file.write(f"[{bpw}](https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2/tree/{bpw}bpw)\n\n")
150
+ print("Created README.md")
151
+
152
+ upload_file(path_or_fileobj="README.md", path_in_repo="README.md", repo_id=f"{whoami().get('name', None)}/{modelname}-exl2", commit_message="Add temp README") #upload md file
153
+ print("Uploaded README.md to main")
154
+ else:
155
+ input("repo already exists, are you resuming a previous process? (Press enter to continue, ctrl+c to exit)")
156
+
157
+ #start converting
158
+ for bpw in bpwvalue:
159
+ if os.path.exists(f"{model}-measure{slsh}measurement.json"): # Check if measurement.json exists
160
+ cmdir = False
161
+ mskip = f" -m {model}-measure{slsh}measurement.json" #skip measurement if it exists
162
+ else:
163
+ cmdir = True
164
+ mskip = ""
165
+ print(f"Starting quantization for BPW {bpw}")
166
+ os.makedirs(f"{model}-exl2-{bpw}bpw-WD", exist_ok=True) #create working directory
167
+ os.makedirs(f"{model}-exl2-{bpw}bpw", exist_ok=True) #create compile full directory
168
+ subprocess.run(f"{oscp} models{slsh}{model}{slsh}config.json {model}-exl2-{bpw}bpw-WD", shell=True) #copy config to working directory
169
+ #more settings exist in the convert.py script, to view them go to docs/convert.md or https://github.com/turboderp/exllamav2/blob/master/doc/convert.md
170
+ result = subprocess.run(f"{pyt} exllamav2/convert.py -i models/{model} -o {model}-exl2-{bpw}bpw-WD -cf {model}-exl2-{bpw}bpw -b {bpw}{mskip}", shell=True) #run quantization and exit if failed (Credit to turbo for his dedication to exl2)
171
+ if result.returncode != 0:
172
+ print("Quantization failed.")
173
+ sys.exit("Exiting...")
174
+ if cmdir == True:
175
+ os.makedirs(f"{model}-measure", exist_ok=True) #create measurement directory
176
+ subprocess.run(f"{oscp} {model}-exl2-{bpw}bpw-WD{slsh}measurement.json {model}-measure", shell=True) #copy measurement to measure directory
177
+ open(f"{model}-measure/Delete folder when no more quants are needed from this model", 'w').close()
178
+ try:
179
+ create_branch(f"{whoami().get('name', None)}/{modelname}-exl2", branch=f"{bpw}bpw") #create branch
180
+ except Exception: #branch likely already exists
181
+ print(f"Branch {bpw} already exists, trying upload...")
182
+ upload_folder(folder_path=f"{model}-exl2-{bpw}bpw", repo_id=f"{whoami().get('name', None)}/{modelname}-exl2", commit_message=f"Add quant for BPW {bpw}", revision=f"{bpw}bpw") #upload quantized model
183
+ subprocess.run(f"{osrmd} {model}-exl2-{bpw}bpw-WD", shell=True) #remove working directory
184
+ subprocess.run(f"{osrmd} {model}-exl2-{bpw}bpw", shell=True) #remove compile directory
185
+
186
+ if file_exists(f"{whoami().get('name', None)}/{modelname}-exl2", "measurement.json") == False: #check if measurement.json exists in main
187
+ upload_file(path_or_fileobj=f"{model}-measure{slsh}measurement.json", path_in_repo="measurement.json", repo_id=f"{whoami().get('name', None)}/{modelname}-exl2", commit_message="Add measurement.json") #upload measurement.json to main
188
+
189
+ print(f'''Quants available at https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2
190
+ \nRepo is private, go to https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2/settings to make public if you'd like.''')
auto-exl2-upload/linux-setup.sh ADDED
@@ -0,0 +1,63 @@
1
+ #!/bin/bash
2
+
3
+ # converted from windows-setup.bat by github copilot
4
+
5
+ # check if "venv" subdirectory exists, if not, create one
6
+ if [ ! -d "venv" ]; then
7
+ python -m venv venv
8
+ else
9
+ echo "venv directory already exists. If something is broken, delete everything but exl2-quant.py and run this script again."
10
+ read -p "Press enter to continue"
11
+ exit
12
+ fi
13
+
14
+ # ask if the user has git installed
15
+ read -p "Do you have git and wget installed? (y/n) " gitwget
16
+
17
+ if [ "$gitwget" = "y" ]; then
18
+ echo "Setting up environment"
19
+ else
20
+ echo "Please install git and wget before running this script."
21
+ read -p "Press enter to continue"
22
+ exit
23
+ fi
24
+
25
+ # if CUDA version 12 install pytorch for 12.1, else if CUDA 11 install pytorch for 11.8. If ROCm, install pytorch for ROCm 5.7
26
+ read -p "Please enter your GPU compute version, CUDA 11/12 or AMD ROCm (11, 12, rocm): " pytorch_version
27
+
28
+ if [ "$pytorch_version" = "11" ]; then
29
+ echo "Installing PyTorch for CUDA 11.8"
30
+ venv/bin/python -m pip install torch --index-url https://download.pytorch.org/whl/cu118
31
+ elif [ "$pytorch_version" = "12" ]; then
32
+ echo "Installing PyTorch for CUDA 12.1"
33
+ venv/bin/python -m pip install torch
34
+ elif [ "$pytorch_version" = "rocm" ]; then
35
+ echo "Installing PyTorch for AMD ROCm 5.7"
36
+ venv/bin/python -m pip install torch --index-url https://download.pytorch.org/whl/rocm5.7
37
+ else
38
+ echo "Invalid compute version. Please enter 11, 12, or rocm."
39
+ read -p "Press enter to continue"
40
+ exit
41
+ fi
42
+
43
+ # download stuff
44
+ echo "Downloading files"
45
+ git clone https://github.com/turboderp/exllamav2
46
+ wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/convert-to-safetensors.py
47
+ wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/download-model.py
48
+
49
+ echo "Installing pip packages"
50
+
51
+ venv/bin/python -m pip install -r exllamav2/requirements.txt
52
+ venv/bin/python -m pip install huggingface-hub transformers accelerate
53
+ venv/bin/python -m pip install ./exllamav2
54
+
55
+ # create start-quant.sh
56
+ echo "#!/bin/bash" > start-quant.sh
57
+ echo "venv/bin/python exl2-quant.py" >> start-quant.sh
58
+ echo "read -p \"Press enter to continue\"" >> start-quant.sh
59
+ chmod +x start-quant.sh
60
+ echo "If you use ctrl+c to stop, you may need to also use 'pkill python' to stop running scripts."
61
+ echo "Environment setup complete. run start-quant.sh to start the quantization process."
62
+ read -p "Press enter to exit"
63
+ exit
auto-exl2-upload/windows-setup.bat ADDED
@@ -0,0 +1,63 @@
1
+ @echo off
2
+
3
+ setlocal
4
+
5
+ REM check if "venv" subdirectory exists, if not, create one
6
+ if not exist "venv\" (
7
+ python -m venv venv
8
+ ) else (
9
+ echo venv directory already exists. If something is broken, delete everything but exl2-quant.py and run this script again.
10
+ pause
11
+ exit
12
+ )
13
+
14
+ REM ask if the user has git installed
15
+ set /p gitwget="Do you have git and wget installed? (y/n) "
16
+
17
+ if "%gitwget%"=="y" (
18
+ echo Setting up environment
19
+ ) else (
20
+ echo Please install git and wget before running this script.
21
+ echo winget install wget
22
+ echo winget install git
23
+ pause
24
+ exit
25
+ )
26
+
27
+ REM if CUDA version 12 install pytorch for 12.1, else if CUDA 11 install pytorch for 11.8
28
+ echo CUDA path: %CUDA_HOME%
29
+ set /p cuda_version="Please enter your CUDA version (11 or 12): "
30
+
31
+ if "%cuda_version%"=="11" (
32
+ echo Installing PyTorch for CUDA 11.8...
33
+ venv\scripts\python.exe -m pip install torch --index-url https://download.pytorch.org/whl/cu118
34
+ ) else if "%cuda_version%"=="12" (
35
+ echo Installing PyTorch for CUDA 12.1...
36
+ venv\scripts\python.exe -m pip install torch --index-url https://download.pytorch.org/whl/cu121
37
+ ) else (
38
+ echo Invalid CUDA version. Please enter 11 or 12.
39
+ pause
40
+ exit
41
+ )
42
+
43
+ REM download stuff
44
+ echo Downloading files...
45
+ git clone https://github.com/turboderp/exllamav2
46
+ wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/convert-to-safetensors.py
47
+ wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/download-model.py
48
+
49
+ echo Installing pip packages...
50
+
51
+ venv\scripts\python.exe -m pip install -r exllamav2/requirements.txt
52
+ venv\scripts\python.exe -m pip install huggingface-hub transformers accelerate
53
+ venv\scripts\python.exe -m pip install .\exllamav2
54
+
55
+ REM create start-quant-windows.bat
56
+ echo @echo off > start-quant.bat
57
+ echo venv\scripts\python.exe exl2-quant.py >> start-quant.bat
58
+ echo REM tada sound for fun >> start-quant.bat
59
+ echo powershell -c (New-Object Media.SoundPlayer "C:\Windows\Media\tada.wav").PlaySync(); >> start-quant.bat
60
+ echo pause >> start-quant.bat
61
+ powershell -c (New-Object Media.SoundPlayer "C:\Windows\Media\tada.wav").PlaySync();
62
+ echo Environment setup complete. run start-quant.bat to start the quantization process.
63
+ pause
exl2-windows-local/instructions.txt CHANGED
@@ -12,4 +12,10 @@ make sure you setup the environment by using windows-setup.bat
12
  after everything is done just download a model using download-model.bat
13
  to quant, use convert-model-auto.bat. Enter the model's folder name, then the BPW for the model
14
 
15
- You can always pause the quantization process by pressing Ctrl + C and typing exit. All progress will be stored in the WD (working directory) folder. You can resume where you left off by running the convert-model-auto.bat script with the same arguments you used before.
 
 
 
 
 
 
 
12
  after everything is done just download a model using download-model.bat
13
  to quant, use convert-model-auto.bat. Enter the model's folder name, then the BPW for the model
14
 
15
+ You can always pause the quantization process by pressing Ctrl + C and typing exit. All progress will be stored in the WD (working directory) folder. You can resume where you left off by running the convert-model-auto.bat script with the same arguments you used before.
16
+
17
+ Credit to turboderp for creating exllamav2 and the exl2 quantization method.
18
+ https://github.com/turboderp
19
+
20
+ Credit to oobabooga for the original download script.
21
+ https://github.com/oobabooga
ipynb/Multi_Quant_exl2.ipynb ADDED
@@ -0,0 +1,255 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": null,
6
+ "metadata": {
7
+ "cellView": "form",
8
+ "id": "7PhL3HkpFeU7"
9
+ },
10
+ "outputs": [],
11
+ "source": [
12
+ "#@title Setup environment\n",
13
+ "#@markdown Takes about 15 minutes to finish\n",
14
+ "# download stuff\n",
15
+ "!git clone https://github.com/turboderp/exllamav2\n",
16
+ "!wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/convert-to-safetensors.py\n",
17
+ "!wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/download-model.py\n",
18
+ "!pip install -r exllamav2/requirements.txt\n",
19
+ "!pip install huggingface-hub transformers accelerate --upgrade\n",
20
+ "!pip install ./exllamav2"
21
+ ]
22
+ },
23
+ {
24
+ "cell_type": "code",
25
+ "execution_count": null,
26
+ "metadata": {
27
+ "cellView": "form",
28
+ "id": "CXbUzOmNHyff"
29
+ },
30
+ "outputs": [],
31
+ "source": [
32
+ "#@title Login to Huggingface - Required\n",
33
+ "#import required functions\n",
34
+ "import os\n",
35
+ "import sys\n",
36
+ "from huggingface_hub import login, get_token, whoami\n",
37
+ "\n",
38
+ "#get token\n",
39
+ "if os.environ.get('KAGGLE_KERNEL_RUN_TYPE', None) is not None: #check if user in kaggle\n",
40
+ " from kaggle_secrets import UserSecretsClient # type: ignore\n",
41
+ " from kaggle_web_client import BackendError # type: ignore\n",
42
+ " try:\n",
43
+ " login(UserSecretsClient().get_secret(\"HF_TOKEN\")) #login if token secret found\n",
44
+ " except BackendError:\n",
45
+ " print('''\n",
46
+ " When using Kaggle, make sure to use the secret key HF_TOKEN with a 'WRITE' token.\n",
47
+ " This will prevent the need to login every time you run the script.\n",
48
+ " Set your secrets with the secrets add-on on the top of the screen.\n",
49
+ " ''')\n",
50
+ "if get_token() is not None:\n",
51
+ " #if the token is found then log in:\n",
52
+ " login(get_token())\n",
53
+ "else:\n",
54
+ " #if the token is not found then prompt user to provide it:\n",
55
+ " login(input(\"API token not detected. Enter your HuggingFace (WRITE) token: \"))\n",
56
+ "\n",
57
+ "#if the token is read only then prompt user to provide a write token (Only required if user needs a WRITE token, remove if READ is enough):\n",
58
+ "while True:\n",
59
+ " if whoami().get('auth', {}).get('accessToken', {}).get('role', None) != 'write':\n",
60
+ " if os.environ.get('HF_TOKEN', None) is not None: #if environ finds HF_TOKEN as read-only then display following text and exit:\n",
61
+ " print('''\n",
62
+ " You have the environment variable HF_TOKEN set.\n",
63
+ " You cannot log in.\n",
64
+ " Either set the environment variable to a 'WRITE' token or remove it.\n",
65
+ " ''')\n",
66
+ " input(\"Press enter to continue.\")\n",
67
+ " sys.exit(\"Exiting...\")\n",
68
+ " if os.environ.get('COLAB_BACKEND_VERSION', None) is not None:\n",
69
+ " print('''\n",
70
+ " Your Colab secret key is read-only\n",
71
+ " Please switch your key to 'write' or disable notebook access on the left.\n",
72
+ " ''')\n",
73
+ " sys.exit(\"Stuck in a loop, exiting...\")\n",
74
+ " elif os.environ.get('KAGGLE_KERNEL_RUN_TYPE', None) is not None:\n",
75
+ " print('''\n",
76
+ " Your Kaggle secret key is read-only\n",
77
+ " Please switch your key to 'write' or unattach from notebook in add-ons at the top.\n",
78
+ " Having a read-only key attched will require login every time.\n",
79
+ " ''')\n",
80
+ " print(\"You do not have write access to this repository. Please use a valid token with (WRITE) access.\")\n",
81
+ " login(input(\"Enter your HuggingFace (WRITE) token: \"))\n",
82
+ " continue\n",
83
+ " break"
84
+ ]
85
+ },
86
+ {
87
+ "cell_type": "code",
88
+ "execution_count": null,
89
+ "metadata": {
90
+ "cellView": "form",
91
+ "id": "dxKEA7obHLoO"
92
+ },
93
+ "outputs": [],
94
+ "source": [
95
+ "#@title Start quant\n",
96
+ "#@markdown ### Scripts executed through subprocess don't show output on Colab. If something seems frozen, please wait. Any detected errors will automatically stop Colab\n",
97
+ "#import required modules\n",
98
+ "from huggingface_hub import login, get_token, whoami, repo_exists, model_info, upload_folder, create_repo, upload_file, create_branch\n",
99
+ "import os\n",
100
+ "import sys\n",
101
+ "import subprocess\n",
102
+ "import glob\n",
103
+ "\n",
104
+ "#define os differences\n",
105
+ "oname = os.name\n",
106
+ "if oname == 'nt':\n",
107
+ " osmv = 'move'\n",
108
+ " osrmd = 'rmdir /s /q'\n",
109
+ " oscp = 'copy'\n",
110
+ " pyt = 'venv\\\\scripts\\\\python.exe'\n",
111
+ " slsh = '\\\\'\n",
112
+ "elif oname == 'posix':\n",
113
+ " osmv = 'mv'\n",
114
+ " osrmd = 'rm -r'\n",
115
+ " oscp = 'cp'\n",
116
+ " pyt = 'python'\n",
117
+ " slsh = '/'\n",
118
+ "else:\n",
119
+ " sys.exit('This script is not compatible with your machine.')\n",
120
+ "\n",
121
+ "#get original model repo url\n",
122
+ "#@markdown Enter unquantized model repository (User/Repo):\n",
123
+ "repo_url = \"mistralai/Mistral-7B-Instruct-v0.2\" # @param {type:\"string\"}\n",
124
+ "\n",
125
+ "#look for repo\n",
126
+ "if repo_exists(repo_url) == False:\n",
127
+ " print(f\"Model repo doesn't exist at https://huggingface.co/{repo_url}\")\n",
128
+ " sys.exit(\"Exiting...\")\n",
129
+ "model = repo_url.replace(\"/\", \"_\")\n",
130
+ "modelname = repo_url.split(\"/\")[1]\n",
131
+ "print(\"\\n\\n\")\n",
132
+ "\n",
133
+ "#ask for number of quants\n",
134
+ "#@markdown Enter the number of quants you want to create:\n",
135
+ "quant_amount = \"5\" # @param {type:\"string\"}\n",
136
+ "qmount = int(quant_amount)\n",
137
+ "qmount += 1\n",
138
+ "\n",
139
+ "#save bpw values\n",
140
+ "#@markdown You will be asked the BPW values after running this section.\n",
141
+ "print(f\"Type the BPW for the following {qmount - 1} quants. Recommend staying over 2.4 BPW. Use the vram calculator to find the best BPW values: https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator\")\n",
142
+ "qnum = {}\n",
143
+ "for i in range(1, qmount):\n",
144
+ " qnum[f\"bpw{i}\"] = float(input(f\"Enter BPW for quant {i} (2.00-8.00): \")) #convert input to float for proper sorting\n",
145
+ "print(\"\\n\\n\")\n",
146
+ "\n",
147
+ "#collect all values in a list for sorting\n",
148
+ "bpwvalue = list(qnum.values())\n",
149
+ "\n",
150
+ "#sort the list from smallest to largest\n",
151
+ "bpwvalue.sort()\n",
152
+ "\n",
153
+ "if not os.path.exists(f\"models{slsh}{model}{slsh}converted-st\"): #check if model was converted to safetensors, skip download if it was\n",
154
+ " print(\"Starting download...\")\n",
155
+ " result = subprocess.run(f\"{pyt} download-model.py {repo_url}\", shell=True) #download model from hf (Credit to oobabooga for this script)\n",
156
+ " if result.returncode != 0:\n",
157
+ " print(\"Download failed.\")\n",
158
+ " sys.exit(\"Exiting...\")\n",
159
+ " print(\"Download finished\\n\\n\")\n",
160
+ "\n",
161
+ "#@markdown You will also be asked to convert model to safetensors if needed\n",
162
+ "if not glob.glob(f\"models/{model}/*.safetensors\"): #check if safetensors model exists\n",
163
+ " convertst = input(\"Couldn't find safetensors model, do you want to convert to safetensors? (y/n): \")\n",
164
+ " while convertst != 'y' and convertst != 'n':\n",
165
+ " convertst = input(\"Please enter 'y' or 'n': \")\n",
166
+ " if convertst == 'y':\n",
167
+ " print(\"Converting weights to safetensors, please wait...\")\n",
168
+ " result = subprocess.run(f\"{pyt} convert-to-safetensors.py models{slsh}{model} --output models{slsh}{model}-st\", shell=True) #convert to safetensors (Credit to oobabooga for this script as well)\n",
169
+ " if result.returncode != 0:\n",
170
+ " print(\"Converting failed. Please look for a safetensors model or convert model manually.\")\n",
171
+ " sys.exit(\"Exiting...\")\n",
172
+ " subprocess.run(f\"{osrmd} models{slsh}{model}\", shell=True)\n",
173
+ " subprocess.run(f\"{osmv} models{slsh}{model}-st models{slsh}{model}\", shell=True)\n",
174
+ " open(f\"models{slsh}{model}{slsh}converted-st\", 'w').close()\n",
175
+ " print(\"Finished converting\")\n",
176
+ " print(\"\\n\\n\")\n",
177
+ " else:\n",
178
+ " sys.exit(\"Can't quantize a non-safetensors model. Exiting...\")\n",
179
+ "\n",
180
+ "#create new repo if one doesn't already exist\n",
181
+ "if repo_exists(f\"{whoami().get('name', None)}/{modelname}-exl2\") == False:\n",
182
+ " print(\"Creating model repository...\")\n",
183
+ " create_repo(f\"{whoami().get('name', None)}/{modelname}-exl2\", private=True)\n",
184
+ " print(f\"Created repo at https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2\") #notify user of repo creation\n",
185
+ "\n",
186
+ " #create the markdown file\n",
187
+ " print(\"Writing model card...\")\n",
188
+ " with open('./README.md', 'w') as file:\n",
189
+ " file.write(f\"# Exl2 quants for [{modelname}](https://huggingface.co/{repo_url})\\n\\n\")\n",
190
+ " file.write(\"## Automatically quantized using the auto quant from [hf-scripts](https://huggingface.co/anthonyg5005/hf-scripts)\\n\\n\")\n",
191
+ " file.write(f\"Would recommend {whoami().get('name', None)} to change up this README to include more info.\\n\\n\")\n",
192
+ " file.write(\"### BPW:\\n\\n\")\n",
193
+ " for bpw in bpwvalue:\n",
194
+ " file.write(f\"[{bpw}](https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2/tree/{bpw}bpw)\\n\\n\")\n",
195
+ " print(\"Created README.md\")\n",
196
+ "\n",
197
+ " upload_file(path_or_fileobj=\"README.md\", path_in_repo=\"README.md\", repo_id=f\"{whoami().get('name', None)}/{modelname}-exl2\", commit_message=\"Add temp README\") #upload md file\n",
198
+ " print(\"Uploaded README.md to main\")\n",
199
+ "else:\n",
200
+ " print(f\"WARNING: repo already exists at https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2\")\n",
201
+ "\n",
202
+ "#start converting\n",
203
+ "for bpw in bpwvalue:\n",
204
+ " if os.path.exists(f\"{model}-measure{slsh}measurement.json\"): # Check if measurement.json exists\n",
205
+ " cmdir = False\n",
206
+ " mskip = f\" -m {model}-measure{slsh}measurement.json\" #skip measurement if it exists\n",
207
+ " else:\n",
208
+ " cmdir = True\n",
209
+ " mskip = \"\"\n",
210
+ " print(f\"Starting quantization for BPW {bpw}. Please wait, may take hours\")\n",
211
+ " os.makedirs(f\"{model}-exl2-{bpw}bpw-WD\", exist_ok=True) #create working directory\n",
212
+ " os.makedirs(f\"{model}-exl2-{bpw}bpw\", exist_ok=True) #create compile full directory\n",
213
+ " subprocess.run(f\"{oscp} models{slsh}{model}{slsh}config.json {model}-exl2-{bpw}bpw-WD\", shell=True) #copy config to working directory\n",
214
+ " #more settings exist in the convert.py script, to veiw them go to docs/convert.md or https://github.com/turboderp/exllamav2/blob/master/doc/convert.md\n",
215
+ " result = subprocess.run(f\"{pyt} exllamav2/convert.py -i models/{model} -o {model}-exl2-{bpw}bpw-WD -cf {model}-exl2-{bpw}bpw -b {bpw}{mskip} -ss 2048\", shell=True) #run quantization and exit if failed (Credit to turbo for his dedication to exl2)\n",
216
+ " if result.returncode != 0:\n",
217
+ " print(\"Quantization failed.\")\n",
218
+ " sys.exit(\"Exiting...\")\n",
219
+ " print(f\"Done quantizing BPW {bpw}. Starting upload\")\n",
220
+ " if cmdir == True:\n",
221
+ " os.makedirs(f\"{model}-measure\", exist_ok=True) #create measurement directory\n",
222
+ " subprocess.run(f\"{oscp} {model}-exl2-{bpw}bpw-WD{slsh}measurement.json {model}-measure\", shell=True) #copy measurement to measure directory\n",
223
+ " open(f\"{model}-measure/Delete folder when no more quants are needed from this model\", 'w').close()\n",
224
+ " try:\n",
225
+ " create_branch(f\"{whoami().get('name', None)}/{modelname}-exl2\", branch=f\"{bpw}bpw\") #create branch\n",
226
+ " except Exception:\n",
227
+ " print(f\"Branch {bpw} already exists, trying upload...\")\n",
228
+ " upload_folder(folder_path=f\"{model}-exl2-{bpw}bpw\", repo_id=f\"{whoami().get('name', None)}/{modelname}-exl2\", commit_message=f\"Add quant for BPW {bpw}\", revision=f\"{bpw}bpw\") #upload quantized model\n",
229
+ " subprocess.run(f\"{osrmd} {model}-exl2-{bpw}bpw-WD\", shell=True) #remove working directory\n",
230
+ " subprocess.run(f\"{osrmd} {model}-exl2-{bpw}bpw\", shell=True) #remove compile directory\n",
231
+ "\n",
232
+ "upload_file(path_or_fileobj=f\"{model}-measure{slsh}measurement.json\", path_in_repo=\"measurement.json\", repo_id=f\"{whoami().get('name', None)}/{modelname}-exl2\", commit_message=\"Add measurement.json\") #upload measurement.json to main\n",
233
+ "\n",
234
+ "print(f'''Quants available at https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2\n",
235
+ " \\nRepo is private, go to https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2/settings to make public if you'd like.''')\n"
236
+ ]
237
+ }
238
+ ],
239
+ "metadata": {
240
+ "accelerator": "GPU",
241
+ "colab": {
242
+ "gpuType": "T4",
243
+ "provenance": []
244
+ },
245
+ "kernelspec": {
246
+ "display_name": "Python 3",
247
+ "name": "python3"
248
+ },
249
+ "language_info": {
250
+ "name": "python"
251
+ }
252
+ },
253
+ "nbformat": 4,
254
+ "nbformat_minor": 0
255
+ }