---
library_name: transformers
license: llama2
---

Converted version of [CodeLlama-70b](https://huggingface.co/meta-llama/CodeLlama-70b-hf) to 4-bit precision using bitsandbytes. For more information about the model, refer to the [original model page](https://huggingface.co/meta-llama/CodeLlama-70b-hf).

## Impact on performance

The figure below shows the performance of a set of models relative to their RAM requirements. The quantized models achieve performance equivalent to their full-precision counterparts while requiring significantly less RAM.

![constellation](https://i.postimg.cc/QdTqLr0Z/constellation.png)