---
license: mit
---
## About llamafile

- GitHub
- llamafile documentation (in Chinese)
- The llamafile model collection on modelscope.cn
- Qwen1.5-14B-Chat-llamafile on modelscope.cn
## Usage
- Download the model:
  `qwen1.5-14b-chat-q5_k_m.llamafile`
- Run the model
  - Windows
    - Rename the file to `qwen1.5-14b-chat-q5_k_m.exe`
    - Open a terminal window and run:
      ```
      .\qwen1.5-14b-chat-q5_k_m.exe
      ```
    - Open a browser to http://127.0.0.1:8080 to start chatting
  - Linux / macOS
    - Add execute permission:
      ```
      chmod +x ./qwen1.5-14b-chat-q5_k_m.llamafile
      ```
    - Run in a terminal:
      ```
      ./qwen1.5-14b-chat-q5_k_m.llamafile
      ```
    - Open a browser to http://127.0.0.1:8080 to start chatting
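Once the model is running, you can check the built-in server from another terminal. Below is a minimal sketch using curl against the OpenAI-compatible endpoint described in the next section; the default host, port, and the prompt text are assumptions:

```sh
# Send one chat request to the local llamafile server.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "LLaMA_CPP",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```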
- OpenAI API usage
  - API URL:
    `http://127.0.0.1:8080/v1`
  - Python code:
    ```python
    #!/usr/bin/env python3
    from openai import OpenAI

    client = OpenAI(
        base_url="http://127.0.0.1:8080/v1",  # "http://<Your api-server IP>:port"
        api_key="sk-no-key-required"
    )
    completion = client.chat.completions.create(
        model="LLaMA_CPP",
        messages=[
            {"role": "system", "content": "You are an AI assistant."},
            {"role": "user", "content": "Write a story about a dragon"}
        ]
    )
    print(completion.choices[0].message)
    ```
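The same endpoint also supports the standard OpenAI streaming protocol. A sketch with curl, assuming the default address (`-N` disables output buffering so tokens print as they arrive):

```sh
# Stream the response as server-sent events ("data: {...}" lines).
curl -N http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "LLaMA_CPP",
    "stream": true,
    "messages": [{"role": "user", "content": "Write a story about a dragon"}]
  }'
```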
## Parameter Description

- `-ngl 9999` sets how many layers of the model are offloaded to the GPU; the rest run on the CPU. If no GPU is available, set `-ngl 0`. The default is 9999, meaning everything runs on the GPU (GPU drivers and the CUDA runtime must be installed).
- `--host 0.0.0.0` sets the hostname for the web service. If only local access is needed, use `--host 127.0.0.1`; with `0.0.0.0`, the service can be reached via IP from other machines on the network.
- `--port 8080` sets the port for the web service; the default is `8080`.
- `-t 16` sets the number of threads. When running on the CPU, choose this based on the number of CPU cores, as in the example below.
- Other parameters can be viewed with `--help`.
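Putting these flags together, a CPU-only run that is reachable from other machines on the network might look like this (a sketch; the thread count of 8 is an assumption, tune it to your hardware):

```sh
# CPU-only (-ngl 0), network-accessible, 8 threads, default port.
./qwen1.5-14b-chat-q5_k_m.llamafile -ngl 0 --host 0.0.0.0 --port 8080 -t 8
```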