
Yi-Coder-9B-Chat GGUF Model

This repository contains the GGUF version of the Yi-Coder-9B-Chat model. This quantized version allows for efficient inference on a wide range of hardware.

Original model: 01-ai/Yi-Coder-9B-Chat

Model Description

Yi-Coder-9B-Chat is a language model from 01-ai tuned for coding tasks and chat interactions. The quantized GGUF versions preserve most of the base model's capabilities while substantially reducing memory and compute requirements; lower-bit quantizations trade some output quality for smaller size.

Usage

You can use this model with popular inference frameworks such as llama.cpp or Ollama. Below are instructions for using the model with llama.cpp.

Using with llama.cpp

  1. Clone and build llama.cpp following their installation instructions.

  2. Download the GGUF model file from this repository.

  3. Run the model using the following command:

cd llama.cpp/build_cuda/bin
./llama-cli -m /path/to/Yi-Coder-9B-Chat-BF16.gguf -n -1 --color -r "User:" --in-prefix " " -i -e -p "User: Hi"

Replace /path/to/Yi-Coder-9B-Chat-BF16.gguf with the actual path to the downloaded model file, and adjust the cd path to match your build output directory (for example, build/bin for a default CMake build).

This command starts an interactive session with the model. You can type your prompts after the "User:" prefix.
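The flags above set up a plain "User:" reverse-prompt loop. If you instead call the model programmatically (for example, through a server API that takes raw prompts), Yi chat models expect a ChatML-style turn format. The helper below is a minimal sketch of that format; the function name is ours, and the exact template should be confirmed against the tokenizer configuration in the original repository:

```python
# Minimal sketch: render chat messages into a ChatML-style prompt,
# the turn format used by Yi chat models. Verify against the original
# repository's chat template before relying on it.

def build_chatml_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into a prompt."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "user", "content": "Write a Python function that reverses a string."}
])
print(prompt)
```

Use `<|im_end|>` as a stop sequence when generating, so the model halts at the end of its turn.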

Using with Ollama

Ollama provides an easy-to-use interface for running large language models. Follow the Ollama documentation to set up and use this model.
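As a sketch, a downloaded GGUF file can be imported into Ollama with a Modelfile. The filename and quantization level below are placeholders, and the template mirrors the ChatML format used by Yi chat models; check the Ollama Modelfile documentation for the full syntax:

```
# Modelfile (point FROM at your downloaded GGUF file)
FROM ./Yi-Coder-9B-Chat-Q4_K_M.gguf
TEMPLATE """<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop "<|im_end|>"
```

Then build and run the model with `ollama create yi-coder-9b-chat -f Modelfile` followed by `ollama run yi-coder-9b-chat`.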

License

This model is released under the Apache 2.0 License. Please see the LICENSE file in this repository for full details.

Acknowledgements

This model is based on the Yi-Coder-9B-Chat model. We acknowledge the original creators and contributors of the base model.

Issues and Contributions

If you encounter any issues or have suggestions for improvements, please open an issue in this repository. Contributions are welcome!


Happy coding with Yi-Coder-9B-Chat GGUF!

Model Details

Model size: 8.83B parameters
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit GGUF files