---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
language:
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- q4_k_m
- 4bit
- sharegpt
- pretraining
- finetuning
- Q5_K_M
- Q8_0
- uss
- Perú
- Lambayeque
- Chiclayo
datasets:
- ussipan/sipangpt
pipeline_tag: text2text-generation
---

# SipánGPT 0.3 Llama 3.2 1B GGUF

- Modelo pre-entrenado para responder preguntas de la Universidad Señor de Sipán de Lambayeque, Perú.
- Pre-trained model for answering questions about the Universidad Señor de Sipán in Lambayeque, Peru.

## Testing the model

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644474219174daa2f6919d31/bFbrjYj94FxgzwAoz9Lr7.png)

- Entrenado con 50000 conversaciones, el modelo puede generar alucinaciones.
- Trained on 50,000 conversations, the model may still hallucinate.

# Uploaded model

- **Developed by:** ussipan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
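Since the card publishes GGUF quantizations (Q4_K_M, Q5_K_M, Q8_0), a minimal sketch of local inference with `llama-cpp-python` may help; note that the `repo_id` and the quant filename pattern below are assumptions based on this card, not confirmed values, and running it downloads the model weights from the Hub.

```python
# Hypothetical usage sketch with llama-cpp-python (pip install llama-cpp-python huggingface-hub).
# repo_id and filename pattern are assumed from the card's metadata; adjust to the actual repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ussipan/sipangpt",        # assumption: dataset name reused as model repo id
    filename="*Q4_K_M.gguf",           # pick the Q4_K_M quant listed in the card's tags
    n_ctx=2048,                        # context window for the 1B model
)

# Chat-style request, since the base model is an instruct variant
out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "¿Qué carreras ofrece la Universidad Señor de Sipán?"}
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

As the card warns, the model was trained on 50,000 conversations and may hallucinate, so answers about the university should be verified against official sources.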