Anthonyg5005 committed on
Commit
4ca4e8f
1 Parent(s): 1e89e6b

add local version of script


Tested. A modified version of the auto EXL2 upload script, plus some other small changes.

README.md CHANGED
@@ -16,14 +16,14 @@ Feel free to send in PRs or use this code however you'd like.\
16
 
17
  - [Manage branches (create/delete)](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/manage%20branches.py)
18
 
19
- - [Auto EXL2 upload](https://huggingface.co/Anthonyg5005/hf-scripts/resolve/main/auto-exl2-upload/auto-exl2-upload.zip?download=true)
20
 
21
- - [EXL2 Single Quant V3](https://colab.research.google.com/drive/1Vc7d6JU3Z35OVHmtuMuhT830THJnzNfS?usp=sharing) **(COLAB)**
22
-
23
- - [EXL2 Local Quant - Windows](https://huggingface.co/Anthonyg5005/hf-scripts/resolve/main/exl2-windows-local/exl2-windows-local.zip?download=true)
24
 
25
  - [Upload folder to HF](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/upload%20folder%20to%20repo.py)
26
 
 
 
27
  ## work in progress/not tested ([unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch)
28
 
29
  - EXL2 Multi Quant local
@@ -31,7 +31,7 @@ Feel free to send in PRs or use this code however you'd like.\
31
 
32
  ## other recommended stuff
33
 
34
- - [Exllama Discord server](https://discord.gg/NSFwVuCjRq) Free Exl2 quantizing bot sponsored by The Bloke and managed by kaltcit.
35
  - existing quants under the HF account [@blockblockblock](https://huggingface.co/blockblockblock)
36
 
37
  - [Download models (download HF Hub models) [Oobabooga]](https://github.com/oobabooga/text-generation-webui/blob/main/download-model.py)
@@ -41,11 +41,11 @@ Feel free to send in PRs or use this code however you'd like.\
41
  - Manage branches
42
  - Run script and follow prompts. You will be required to be logged in to HF Hub. If you are not logged in, you will need a WRITE token. You can get one in your [HuggingFace settings](https://huggingface.co/settings/tokens). Colab and Kaggle secret keys are supported.
43
 
44
- - Auto EXL2 upload
45
- - This script is designed to automate the process of quantizing models to EXL2 and uploading them to the HF Hub as seperate branches. This is both available to run on Windows and Linux.
46
 
47
- - EXL2 Local Quant Windows
48
- - Easily creates environment to quantize models to exl2 using Windows to your local machine. Replacing soon.
49
 
50
  - Upload folder to repo
51
  - Uploads a user-specified folder to a specified repo; can create private repos too. Not the same as git commit and push; instead it uploads any additional files. This is meant to be modified to your needs rather than used by itself.
 
16
 
17
  - [Manage branches (create/delete)](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/manage%20branches.py)
18
 
19
+ - [Auto EXL2 HF upload](https://huggingface.co/Anthonyg5005/hf-scripts/resolve/main/auto-exl2-upload/auto-exl2-upload.zip?download=true)
20
 
21
+ - [EXL2 Local Quants](https://huggingface.co/Anthonyg5005/hf-scripts/resolve/main/exl2-multi-quant-local/exl2-multi-quant-local.zip?download=true)
 
 
22
 
23
  - [Upload folder to HF](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/upload%20folder%20to%20repo.py)
24
 
25
+ - [EXL2 Single Quant V3](https://colab.research.google.com/drive/1Vc7d6JU3Z35OVHmtuMuhT830THJnzNfS?usp=sharing) **(COLAB)**
26
+
27
  ## work in progress/not tested ([unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch)
28
 
29
  - EXL2 Multi Quant local
 
31
 
32
  ## other recommended stuff
33
 
34
+ - [Exllama Discord server](https://discord.gg/NSFwVuCjRq) - A free EXL2 quantizing bot sponsored by The Bloke and Lambda Labs, managed by Kaltcit.
35
  - existing quants under the HF account [@blockblockblock](https://huggingface.co/blockblockblock)
36
 
37
  - [Download models (download HF Hub models) [Oobabooga]](https://github.com/oobabooga/text-generation-webui/blob/main/download-model.py)
 
41
  - Manage branches
42
  - Run script and follow prompts. You will be required to be logged in to HF Hub. If you are not logged in, you will need a WRITE token. You can get one in your [HuggingFace settings](https://huggingface.co/settings/tokens). Colab and Kaggle secret keys are supported.
43
 
44
+ - Auto EXL2 HF upload
45
+ - Automates quantizing models to EXL2 and uploading each quant to the HF Hub as a separate branch. Available for both Windows and Linux. You will need to be logged in to the HF Hub; if you are not, you will need a WRITE token. A minimal sketch of the branch-upload step follows this list.
46
 
47
+ - EXL2 Local Quants
48
+ - Easily sets up an environment to quantize models to EXL2 on your local machine. Supports both Windows and Linux.
49
 
50
  - Upload folder to repo
51
  - Uploads a user-specified folder to a specified repo; can create private repos too. Not the same as git commit and push; instead it uploads any additional files. This is meant to be modified to your needs rather than used by itself.
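For reference, here is a minimal sketch of how the per-quant branch uploads described above can be done with huggingface_hub. The repo name, branch name, and folder path are placeholders; the actual logic lives in auto-exl2-upload/exl2-quant.py.

```python
# Minimal sketch, not the exact code from exl2-quant.py: upload one local quant
# folder to its own branch of a (possibly private) HF Hub repo.
from huggingface_hub import create_branch, create_repo, upload_folder, whoami

user = whoami().get('name')           # assumes you are already logged in with a WRITE token
repo_id = f"{user}/MyModel-exl2"      # placeholder repo name
branch = "4.0bpw"                     # one branch per quant

create_repo(repo_id, private=True, exist_ok=True)      # create the repo if it doesn't exist yet
create_branch(repo_id, branch=branch, exist_ok=True)   # create the per-quant branch
upload_folder(repo_id=repo_id,
              folder_path="MyModel-exl2-4.0bpw",       # placeholder local quant folder
              revision=branch,
              commit_message=f"Upload {branch} quant")
```

The same upload_folder call, minus the branch arguments, is all a plain folder-to-repo upload needs.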
auto-exl2-upload/INSTRUCTIONS.txt CHANGED
@@ -10,7 +10,7 @@ https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Windows&targ
10
  Haven't done much testing but for Windows, Visual Studio with desktop development for C++ might be required. I've gotten cl.exe errors on a previous install
11
 
12
 
13
- This may work with AMD cards but only on linux. I can't guarantee that it will work on AMD cards, I personally don't have one to test with. You may need to install stuff before starting. https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html
14
 
15
 
16
 
 
10
  Haven't done much testing but for Windows, Visual Studio with desktop development for C++ might be required. I've gotten cl.exe errors on a previous install
11
 
12
 
13
+ This may work with AMD cards, but only on Linux and possibly WSL2. I can't guarantee it will work on AMD cards; I personally don't have one to test with. You may need to install ROCm first: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html
14
 
15
 
16
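Whether you are on CUDA or ROCm, it can save time to confirm that the installed PyTorch build actually sees your GPU before starting a long quant run. A small check like this (a hypothetical helper, not part of the setup scripts) works for both, since ROCm builds of PyTorch expose the same torch.cuda interface:

```python
# Hypothetical GPU sanity check, not part of the setup scripts.
# ROCm builds of PyTorch expose the same torch.cuda interface as CUDA builds.
import torch

if torch.cuda.is_available():
    print(f"GPU detected: {torch.cuda.get_device_name(0)}")
else:
    print("No usable GPU detected. Check your CUDA/ROCm install before quantizing.")
```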
 
auto-exl2-upload/auto-exl2-upload.zip CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:fb52c9577ce88784c0831237f91a9d3edcdcafb3c85ec94b8dac85e036daf8a4
3
- size 6692
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2696d6221f7867d2737e95ec1127df4364a22b3464a018baaa9ddd95df87e8a4
3
+ size 6726
auto-exl2-upload/exl2-quant.py CHANGED
@@ -1,4 +1,4 @@
1
- #usually it's on the inside that counts, not this time. This script is a mess, but it works.
2
  #import required modules
3
  from huggingface_hub import login, get_token, whoami, repo_exists, file_exists, upload_folder, create_repo, upload_file, create_branch
4
  import os
@@ -188,3 +188,11 @@ if file_exists(f"{whoami().get('name', None)}/{modelname}-exl2", "measurement.js
188
 
189
  print(f'''Quants available at https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2
190
  \nRepo is private, go to https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2/settings to make public if you'd like.''')
 
 
 
 
 
 
 
 
 
1
+ #usually it's what is on the inside that counts, not this time. This script is a mess, but at least it works.
2
  #import required modules
3
  from huggingface_hub import login, get_token, whoami, repo_exists, file_exists, upload_folder, create_repo, upload_file, create_branch
4
  import os
 
188
 
189
  print(f'''Quants available at https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2
190
  \nRepo is private, go to https://huggingface.co/{whoami().get('name', None)}/{modelname}-exl2/settings to make public if you'd like.''')
191
+
192
+ if tfound == 'false':
193
+ print(f'''
194
+ You are now logged in as {whoami().get('fullname', None)}.
195
+
196
+ To logout, use the hf command line interface 'huggingface-cli logout'
197
+ To view your active account, use 'huggingface-cli whoami'
198
+ ''')
auto-exl2-upload/linux-setup.sh CHANGED
@@ -55,9 +55,10 @@ venv/bin/python -m pip install ./exllamav2
55
  # create start-quant.sh
56
  echo "#!/bin/bash" > start-quant.sh
57
  echo "venv/bin/python exl2-quant.py" >> start-quant.sh
58
- echo "read -p \"Press enter to continue\""
 
59
  chmod +x start-quant.sh
60
  echo "If you use ctrl+c to stop, you may need to also use 'pkill python' to stop running scripts."
61
  echo "Environment setup complete. run start-quant.sh to start the quantization process."
62
  read -p "Press enter to exit"
63
- exit
 
55
  # create start-quant.sh
56
  echo "#!/bin/bash" > start-quant.sh
57
  echo "venv/bin/python exl2-quant.py" >> start-quant.sh
58
+ echo "read -p \"Press enter to continue\"" >> start-quant.sh
59
+ echo "exit" >> start-quant.sh
60
  chmod +x start-quant.sh
61
  echo "If you use ctrl+c to stop, you may need to also use 'pkill python' to stop running scripts."
62
  echo "Environment setup complete. run start-quant.sh to start the quantization process."
63
  read -p "Press enter to exit"
64
+ exit
exl2-multi-quant-local/INSTRUCTIONS.txt ADDED
@@ -0,0 +1,38 @@
1
+ For NVIDIA cards install the CUDA toolkit
2
+
3
+ Nvidia Maxwell or higher
4
+ https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64
5
+
6
+ Nvidia Kepler or higher
7
+ https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Windows&target_arch=x86_64
8
+
9
+
10
+ Haven't done much testing but for Windows, Visual Studio with desktop development for C++ might be required. I've gotten cl.exe errors on a previous install
11
+
12
+
13
+ This may work with AMD cards, but only on Linux and possibly WSL2. I can't guarantee it will work on AMD cards; I personally don't have one to test with. You may need to install ROCm first: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html
14
+
15
+
16
+
17
+ First, set up your environment by running either windows-setup.bat or linux-setup.sh. If something fails during setup, delete every file and folder except windows-setup.bat, linux-setup.sh, and exl2-quant.py, then try again.
18
+
19
+ After setup is complete, you'll have a file called start-quant. Use it to run the quant script.
20
+
21
+ Make sure your free storage is about 3x the model's size, plus roughly one more model's worth per additional quant. To estimate this, take the number of billions of parameters and multiply by two to get the approximate model size in GB, then multiply by 3 for the recommended free space (a short estimator sketch follows this file's instructions). You may be able to get away with 2.5x the size.
22
+ Also make sure you have plenty of RAM; larger models need more.
23
+
24
+ If you close the terminal or it crashes, check the last BPW it was working on and re-enter only the remaining quants; it should pick up where it left off. Don't enter an already-finished BPW, or that quant will start from the beginning. You may also use ctrl + c to pause at any time during the quant process.
25
+
26
+ Things may break in the future, since setup always downloads the latest version of every dependency, and those may change names or behavior. If something breaks, please open a discussion at https://huggingface.co/Anthonyg5005/hf-scripts/discussions
27
+
28
+
29
+ Credit to turboderp for creating exllamav2 and the exl2 quantization method.
30
+ https://github.com/turboderp
31
+
32
+ Credit to oobabooga for the original download and safetensors scripts.
33
+ https://github.com/oobabooga
34
+
35
+ Credit to Lucain Pouget for maintaining huggingface-hub.
36
+ https://github.com/Wauplin
37
+
38
+ Only tested with CUDA 12.1 on Windows 11. Half-tested on Linux through WSL2; I don't have enough RAM there to fully test, but quantization did start.
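To make the storage rule of thumb above concrete, here is a tiny estimator (a hypothetical helper, not part of the scripts):

```python
# Rough storage estimate from the rule of thumb above (hypothetical helper, not part of the scripts):
# billions of parameters x 2 ~= model size in GB (fp16 weights), then x3 for recommended free space.
def recommended_free_gb(billions_of_params: float) -> float:
    model_size_gb = billions_of_params * 2   # ~2 GB per billion parameters
    return model_size_gb * 3                 # add roughly one more model size per extra quant

print(recommended_free_gb(7))   # 7B model: ~14 GB of weights, ~42 GB of free space recommended
```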
exl2-multi-quant-local/exl2-multi-quant-local.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c8ab35ce67e701370b10168a4d745333e2cb70e098c5c469ea518e07bb57fe9d
3
+ size 5883
exl2-multi-quant-local/exl2-quant.py ADDED
@@ -0,0 +1,144 @@
1
+ #usually it's what is on the inside that counts, not this time. This script is a mess, but at least it works.
2
+ #import required modules
3
+ from huggingface_hub import login, get_token, whoami, repo_exists
4
+ import os
5
+ import sys
6
+ import subprocess
7
+ import glob
8
+ import time
9
+
10
+ #define os differences
11
+ oname = os.name
12
+ if oname == 'nt':
13
+ osclear = 'cls'
14
+ osmv = 'move'
15
+ osrmd = 'rmdir /s /q'
16
+ oscp = 'copy'
17
+ pyt = 'venv\\scripts\\python.exe'
18
+ slsh = '\\'
19
+ elif oname == 'posix':
20
+ osclear = 'clear'
21
+ osmv = 'mv'
22
+ osrmd = 'rm -r'
23
+ oscp = 'cp'
24
+ pyt = './venv/bin/python'
25
+ slsh = '/'
26
+ else:
27
+ sys.exit('This script is not compatible with your machine.')
28
+ def clear_screen():
29
+ os.system(osclear)
30
+
31
+ #get token
32
+ if os.environ.get('KAGGLE_KERNEL_RUN_TYPE', None) is not None: #check if user in kaggle
33
+ from kaggle_secrets import UserSecretsClient # type: ignore
34
+ from kaggle_web_client import BackendError # type: ignore
35
+ try:
36
+ login(UserSecretsClient().get_secret("HF_TOKEN")) #login if token secret found
37
+ except BackendError:
38
+ print('''
39
+ When using Kaggle, make sure to use the secret key HF_TOKEN with a 'WRITE' token.
40
+ This will prevent the need to login every time you run the script.
41
+ Set your secrets with the secrets add-on on the top of the screen.
42
+ ''')
43
+ if get_token() is not None:
44
+ #if the token is found then log in:
45
+ login(get_token())
46
+ tfound = "true"
47
+ else:
48
+ #if the token is not found then prompt user to provide it:
49
+ tfound = "false"
50
+ try:
51
+ login(input("API token not detected. Enter your HuggingFace token (empty to skip): "))
52
+ except:
53
+ print("Skipping login... (Unable to access private or gated models)")
54
+ tfound = "false but skipped" #doesn't matter what this is, only false is used
55
+ time.sleep(3)
56
+
57
+ clear_screen()
58
+
59
+ #get original model repo url
60
+ repo_url = input("Enter unquantized model repository (User/Repo): ")
61
+
62
+ #look for repo
63
+ if repo_exists(repo_url) == False:
64
+ print(f"Model repo doesn't exist at https://huggingface.co/{repo_url}")
65
+ sys.exit("Exiting...")
66
+ model = repo_url.replace("/", "_")
67
+ modelname = repo_url.split("/")[1]
68
+ clear_screen()
69
+
70
+ #ask for number of quants
71
+ qmount = int(input("Enter the number of quants you want to create: "))
72
+ qmount += 1
73
+ clear_screen()
74
+
75
+ #save bpw values
76
+ print(f"Type the BPW for the following {qmount - 1} quants. Recommend staying over 2.4 BPW. Use the vram calculator to find the best BPW values: https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator")
77
+ qnum = {}
78
+ for i in range(1, qmount):
79
+ qnum[f"bpw{i}"] = float(input(f"Enter BPW for quant {i} (2.00-8.00): ")) #convert input to float for proper sorting
80
+ clear_screen()
81
+
82
+ #collect all values in a list for sorting
83
+ bpwvalue = list(qnum.values())
84
+
85
+ #sort the list from smallest to largest
86
+ bpwvalue.sort()
87
+
88
+ if not os.path.exists(f"models{slsh}{model}{slsh}converted-st"): #check if model was converted to safetensors, skip download if it was
89
+ result = subprocess.run(f"{pyt} download-model.py {repo_url}", shell=True) #download model from hf (Credit to oobabooga for this script)
90
+ if result.returncode != 0:
91
+ print("Download failed.")
92
+ sys.exit("Exiting...")
93
+ clear_screen()
94
+
95
+ if not glob.glob(f"models/{model}/*.safetensors"): #check if safetensors model exists
96
+ convertst = input("Couldn't find safetensors model, do you want to convert to safetensors? (y/n): ")
97
+ while convertst != 'y' and convertst != 'n':
98
+ convertst = input("Please enter 'y' or 'n': ")
99
+ if convertst == 'y':
100
+ print("Converting weights to safetensors, please wait...")
101
+ result = subprocess.run(f"{pyt} convert-to-safetensors.py models{slsh}{model} --output models{slsh}{model}-st", shell=True) #convert to safetensors (Credit to oobabooga for this script as well)
102
+ if result.returncode != 0:
103
+ print("Converting failed. Please look for a safetensors model or convert model manually.")
104
+ sys.exit("Exiting...")
105
+ subprocess.run(f"{osrmd} models{slsh}{model}", shell=True)
106
+ subprocess.run(f"{osmv} models{slsh}{model}-st models{slsh}{model}", shell=True)
107
+ open(f"models{slsh}{model}{slsh}converted-st", 'w').close()
108
+ print("Finished converting")
109
+ else:
110
+ sys.exit("Can't quantize a non-safetensors model. Exiting...")
111
+ clear_screen()
112
+
113
+ #start converting
114
+ for bpw in bpwvalue:
115
+ if os.path.exists(f"{model}-measure{slsh}measurement.json"): # Check if measurement.json exists
116
+ cmdir = False
117
+ mskip = f" -m {model}-measure{slsh}measurement.json" #skip measurement if it exists
118
+ else:
119
+ cmdir = True
120
+ mskip = ""
121
+ print(f"Starting quantization for BPW {bpw}")
122
+ os.makedirs(f"{model}-exl2-{bpw}bpw-WD", exist_ok=True) #create working directory
123
+ os.makedirs(f"{modelname}-exl2-quants{slsh}{modelname}-exl2-{bpw}bpw", exist_ok=True) #create compile full directory
124
+ subprocess.run(f"{oscp} models{slsh}{model}{slsh}config.json {model}-exl2-{bpw}bpw-WD", shell=True) #copy config to working directory
125
+ #more settings exist in the convert.py script, to view them go to docs/convert.md or https://github.com/turboderp/exllamav2/blob/master/doc/convert.md
126
+ result = subprocess.run(f"{pyt} exllamav2/convert.py -i models/{model} -o {model}-exl2-{bpw}bpw-WD -cf {modelname}-exl2-quants{slsh}{modelname}-exl2-{bpw}bpw -b {bpw}{mskip}", shell=True) #run quantization and exit if failed (Credit to turbo for his dedication to exl2)
127
+ if result.returncode != 0:
128
+ print("Quantization failed.")
129
+ sys.exit("Exiting...")
130
+ if cmdir == True:
131
+ os.makedirs(f"{model}-measure", exist_ok=True) #create measurement directory
132
+ subprocess.run(f"{oscp} {model}-exl2-{bpw}bpw-WD{slsh}measurement.json {model}-measure", shell=True) #copy measurement to measure directory
133
+ open(f"{model}-measure/Delete folder when no more quants are needed from this model", 'w').close()
134
+ subprocess.run(f"{osrmd} {model}-exl2-{bpw}bpw-WD", shell=True) #remove working directory
135
+
136
+ if tfound == 'false':
137
+ print(f'''
138
+ You are now logged in as {whoami().get('fullname', None)}.
139
+
140
+ To logout, use the hf command line interface 'huggingface-cli logout'
141
+ To view your active account, use 'huggingface-cli whoami'
142
+ ''')
143
+
144
+ print("Finished quantizing. Exiting...")
exl2-multi-quant-local/linux-setup.sh ADDED
@@ -0,0 +1,64 @@
1
+ #!/bin/bash
2
+
3
+ # converted from windows-setup.bat by github copilot
4
+
5
+ # check if "venv" subdirectory exists, if not, create one
6
+ if [ ! -d "venv" ]; then
7
+ python -m venv venv
8
+ else
9
+ echo "venv directory already exists. If something is broken, delete everything except the setup scripts and exl2-quant.py, then run this script again."
10
+ read -p "Press enter to continue"
11
+ exit
12
+ fi
13
+
14
+ # ask if the user has git installed
15
+ read -p "Do you have git and wget installed? (y/n) " gitwget
16
+
17
+ if [ "$gitwget" = "y" ]; then
18
+ echo "Setting up environment"
19
+ else
20
+ echo "Please install git and wget before running this script."
21
+ read -p "Press enter to continue"
22
+ exit
23
+ fi
24
+
25
+ # if CUDA version 12 install pytorch for 12.1, else if CUDA 11 install pytorch for 11.8. If ROCm, install pytorch for ROCm 5.7
26
+ read -p "Please enter your GPU compute version, CUDA 11/12 or AMD ROCm (11, 12, rocm): " pytorch_version
27
+
28
+ if [ "$pytorch_version" = "11" ]; then
29
+ echo "Installing PyTorch for CUDA 11.8"
30
+ venv/bin/python -m pip install torch --index-url https://download.pytorch.org/whl/cu118
31
+ elif [ "$pytorch_version" = "12" ]; then
32
+ echo "Installing PyTorch for CUDA 12.1"
33
+ venv/bin/python -m pip install torch
34
+ elif [ "$pytorch_version" = "rocm" ]; then
35
+ echo "Installing PyTorch for AMD ROCm 5.7"
36
+ venv/bin/python -m pip install torch --index-url https://download.pytorch.org/whl/rocm5.7
37
+ else
38
+ echo "Invalid compute version. Please enter 11, 12, or rocm."
39
+ read -p "Press enter to continue"
40
+ exit
41
+ fi
42
+
43
+ # download stuff
44
+ echo "Downloading files"
45
+ git clone https://github.com/turboderp/exllamav2
46
+ wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/convert-to-safetensors.py
47
+ wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/download-model.py
48
+
49
+ echo "Installing pip packages"
50
+
51
+ venv/bin/python -m pip install -r exllamav2/requirements.txt
52
+ venv/bin/python -m pip install huggingface-hub transformers accelerate
53
+ venv/bin/python -m pip install ./exllamav2
54
+
55
+ # create start-quant.sh
56
+ echo "#!/bin/bash" > start-quant.sh
57
+ echo "venv/bin/python exl2-quant.py" >> start-quant.sh
58
+ echo "read -p \"Press enter to continue\"" >> start-quant.sh
59
+ echo "exit" >> start-quant.sh
60
+ chmod +x start-quant.sh
61
+ echo "If you use ctrl+c to stop, you may need to also use 'pkill python' to stop running scripts."
62
+ echo "Environment setup complete. run start-quant.sh to start the quantization process."
63
+ read -p "Press enter to exit"
64
+ exit
exl2-multi-quant-local/windows-setup.bat ADDED
@@ -0,0 +1,63 @@
1
+ @echo off
2
+
3
+ setlocal
4
+
5
+ REM check if "venv" subdirectory exists, if not, create one
6
+ if not exist "venv\" (
7
+ python -m venv venv
8
+ ) else (
9
+ echo venv directory already exists. If something is broken, delete everything except the setup scripts and exl2-quant.py, then run this script again.
10
+ pause
11
+ exit
12
+ )
13
+
14
+ REM ask if the user has git installed
15
+ set /p gitwget="Do you have git and wget installed? (y/n) "
16
+
17
+ if "%gitwget%"=="y" (
18
+ echo "Setting up environment"
19
+ ) else (
20
+ echo Please install git and wget before running this script.
21
+ echo winget install wget
22
+ echo winget install git
23
+ pause
24
+ exit
25
+ )
26
+
27
+ REM if CUDA version 12 install pytorch for 12.1, else if CUDA 11 install pytorch for 11.8
28
+ echo CUDA path: %CUDA_HOME%
29
+ set /p cuda_version="Please enter your CUDA version (11 or 12): "
30
+
31
+ if "%cuda_version%"=="11" (
32
+ echo Installing PyTorch for CUDA 11.8...
33
+ venv\scripts\python.exe -m pip install torch --index-url https://download.pytorch.org/whl/cu118
34
+ ) else if "%cuda_version%"=="12" (
35
+ echo Installing PyTorch for CUDA 12.1...
36
+ venv\scripts\python.exe -m pip install torch --index-url https://download.pytorch.org/whl/cu121
37
+ ) else (
38
+ echo Invalid CUDA version. Please enter 11 or 12.
39
+ pause
40
+ exit
41
+ )
42
+
43
+ REM download stuff
44
+ echo Downloading files...
45
+ git clone https://github.com/turboderp/exllamav2
46
+ wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/convert-to-safetensors.py
47
+ wget https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/download-model.py
48
+
49
+ echo Installing pip packages...
50
+
51
+ venv\scripts\python.exe -m pip install -r exllamav2/requirements.txt
52
+ venv\scripts\python.exe -m pip install huggingface-hub transformers accelerate
53
+ venv\scripts\python.exe -m pip install .\exllamav2
54
+
55
+ REM create start-quant-windows.bat
56
+ echo @echo off > start-quant.bat
57
+ echo venv\scripts\python.exe exl2-quant.py >> start-quant.bat
58
+ echo REM tada sound for fun >> start-quant.bat
59
+ echo powershell -c (New-Object Media.SoundPlayer "C:\Windows\Media\tada.wav").PlaySync(); >> start-quant.bat
60
+ echo pause >> start-quant.bat
61
+ powershell -c (New-Object Media.SoundPlayer "C:\Windows\Media\tada.wav").PlaySync();
62
+ echo Environment setup complete. run start-quant.bat to start the quantization process.
63
+ pause