---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
language:
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- q4_k_m
- 4bit
- sharegpt
- pretraining
- finetuning
- Q5_K_M
- Q8_0
- uss
- Perú
- Lambayeque
- Chiclayo
datasets:
- ussipan/sipangpt
pipeline_tag: text-generation
---

# SipánGPT 0.3 Llama 3.2 1B GGUF
- Pre-trained model for answering questions about the Universidad Señor de Sipán in Lambayeque, Peru (see the loading sketch below).
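
A minimal loading sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The repository id and `.gguf` filename pattern below are assumptions for illustration only; check the actual file list on the Hub before running:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical repo id and filename pattern -- verify both against the Hub.
llm = Llama.from_pretrained(
    repo_id="ussipan/SipanGPT-0.3-Llama-3.2-1B-GGUF",
    filename="*Q4_K_M.gguf",  # glob matching the Q4_K_M quantization
    n_ctx=2048,               # illustrative context window
)
```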

## Testing the model


![Testing SipánGPT in a chat interface](https://cdn-uploads.huggingface.co/production/uploads/644474219174daa2f6919d31/bFbrjYj94FxgzwAoz9Lr7.png)

- Trained on 50,000 conversations, the model may still generate hallucinations (see the query example below).
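
For reference, a hedged example of querying the model loaded above through llama-cpp-python's chat API; the system prompt here is illustrative, not necessarily the one used in training:

```python
response = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Eres un asistente de la Universidad Señor de Sipán."},  # illustrative prompt
        {"role": "user", "content": "¿Qué carreras ofrece la universidad?"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
# As noted above, answers may be hallucinated -- verify them against official sources.
```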

# Uploaded model

- **Developed by:** ussipan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
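
As a rough illustration of that setup (all hyperparameters below are assumptions, not the actual training configuration), an Unsloth fine-tune of the base model typically starts like this:

```python
from unsloth import FastLanguageModel  # pip install unsloth

# Load the 4-bit base model named above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b-instruct-bnb-4bit",
    max_seq_length=2048,   # illustrative value
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and alpha are illustrative, not the values used for SipánGPT.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```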

---

<div style="display: flex; align-items: center; height: fit-content;">
  <img src="https://avatars.githubusercontent.com/u/60937214?v=4" width="40" style="margin-right: 10px;"/>
  <span>Made with ❤️ by Jhan Gómez P.</span>
</div>