---
license: apache-2.0
---
# Yi-Coder-9B-Chat GGUF Model
This repository contains the GGUF version of the Yi-Coder-9B-Chat model. This quantized version allows for efficient inference on a wide range of hardware.
## Model Description
Yi-Coder-9B-Chat is a language model designed for coding tasks and chat interactions. This GGUF version preserves the base model's capabilities while reducing memory and compute requirements, making it practical for local inference.
## Usage
You can use this model with popular inference frameworks such as llama.cpp or Ollama. Below are instructions for using the model with llama.cpp.
### Using with llama.cpp
1. Clone and build llama.cpp following its installation instructions.
2. Download the GGUF model file from this repository.
3. Run the model using the following command:
```bash
cd llama.cpp/build_cuda/bin
./llama-cli -m /path/to/Yi-Coder-9B-Chat-BF16.gguf -n -1 --color -r "User:" --in-prefix " " -i -e -p "User: Hi"
```
Replace `/path/to/Yi-Coder-9B-Chat-BF16.gguf` with the actual path to the downloaded model file.
This command starts an interactive session with the model. You can type your prompts after the `User:` prefix.
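If you have not built llama.cpp yet, the CUDA build referenced above (`build_cuda`) can be produced roughly as follows. This is a sketch, not official build documentation: the exact CMake flags depend on your llama.cpp version and hardware, and dropping `-DGGML_CUDA=ON` gives a CPU-only build.

```bash
# Clone llama.cpp and build it with CUDA support
# (assumes git, cmake, a C++ toolchain, and the CUDA toolkit are installed).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build_cuda -DGGML_CUDA=ON
cmake --build build_cuda --config Release
```

After the build finishes, the `llama-cli` binary used in the command above should be in `build_cuda/bin`.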
### Using with Ollama
Ollama provides an easy-to-use interface for running large language models. Follow the Ollama documentation to set up and use this model.
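As a minimal sketch, you can register the downloaded GGUF file with Ollama through a Modelfile. The model name `yi-coder-9b-chat` and the file path below are placeholders chosen for illustration, not official names:

```
# Modelfile — points Ollama at the local GGUF file (path is a placeholder)
FROM ./Yi-Coder-9B-Chat-BF16.gguf
```

Then run `ollama create yi-coder-9b-chat -f Modelfile` to import the model, and `ollama run yi-coder-9b-chat` to start an interactive session.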
## License
This model is released under the Apache 2.0 License. Please see the LICENSE file in this repository for full details.
## Acknowledgements
This model is based on the Yi-Coder-9B-Chat model. We acknowledge the original creators and contributors of the base model.
## Issues and Contributions
If you encounter any issues or have suggestions for improvements, please open an issue in this repository. Contributions are welcome!
Happy coding with Yi-Coder-9B-Chat GGUF!