This is the https://huggingface.co/chavinlo/alpaca-native model converted to the old GGML (alpaca.cpp) format and quantized to 4 bits, so it can run on a CPU with about 5 GB of RAM.
For any additional information, please visit these repos:
alpaca.cpp repo: https://github.com/antimatter15/alpaca.cpp
llama.cpp repo: https://github.com/ggerganov/llama.cpp
original Facebook LLaMA (not GGML) repo: https://github.com/facebookresearch/llama
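A minimal sketch of running these weights with alpaca.cpp on a CPU. The model filename `ggml-alpaca-7b-q4.bin` is an assumption; substitute the name of the file you download from this repo.

```shell
# Build alpaca.cpp (assumes git, make, and a C++ compiler are installed)
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat

# Place the 4-bit quantized weights from this repo next to the binary,
# then start an interactive chat session.
# NOTE: the filename below is an assumption; use your downloaded file's name.
./chat -m ggml-alpaca-7b-q4.bin
```

Because the weights are in the old GGML format, they work with alpaca.cpp but may not load in newer llama.cpp builds, which have since moved to updated file formats.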