---
license: apache-2.0
language:
- it
datasets:
- DeepMount00/gquad_it
base_model: DeepMount00/Minerva-3B-base-RAG
pipeline_tag: text-generation
---

# QuantFactory/Minerva-3B-base-RAG-GGUF
This is a quantized version of [DeepMount00/Minerva-3B-base-RAG](https://huggingface.co/DeepMount00/Minerva-3B-base-RAG), created using llama.cpp.

# Model Card for Minerva-3B-base-RAG

**Minerva-3B-base-RAG** is a specialized question-answering (QA) model obtained by fine-tuning **Minerva-3B-base-v1.0**. The fine-tuning was conducted independently to improve the model's performance on QA tasks, making it well suited for Retrieval-Augmented Generation (RAG) applications.

## Overview
- **Model Type**: Fine-tuned Large Language Model (LLM)
- **Base Model**: [Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0), developed by [Sapienza NLP](https://nlp.uniroma1.it) in collaboration with [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) and [CINECA](https://www.cineca.it/)
- **Specialization**: Question-Answering (QA)
- **Ideal Use Case**: Retrieval-Augmented Generation applications
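As a minimal sketch, the quantized GGUF weights could be run locally with the `llama-cpp-python` bindings in a RAG loop: a retrieved passage and a question are combined into a single prompt, and the model completes the answer. The prompt template and the GGUF file name below are assumptions, as the card does not document either.

```python
# Sketch of using the quantized model for RAG-style QA with llama-cpp-python.
# Assumptions: the "Contesto/Domanda/Risposta" prompt layout and the GGUF
# file name are illustrative only -- the card does not specify them.

def build_rag_prompt(context: str, question: str) -> str:
    """Combine a retrieved passage and a question into one QA prompt."""
    return (
        "Contesto: " + context.strip() + "\n"
        "Domanda: " + question.strip() + "\n"
        "Risposta:"
    )

def answer(llm, context: str, question: str) -> str:
    """Run one completion; `llm` is a llama_cpp.Llama instance."""
    out = llm(build_rag_prompt(context, question), max_tokens=128, stop=["\n"])
    return out["choices"][0]["text"].strip()

# Usage (requires `pip install llama-cpp-python` and a downloaded GGUF file;
# the file name is hypothetical):
#
#   from llama_cpp import Llama
#   llm = Llama(model_path="Minerva-3B-base-RAG.Q4_K_M.gguf", n_ctx=2048)
#   print(answer(llm, "Roma è la capitale d'Italia.",
#                "Qual è la capitale d'Italia?"))
```

Keeping prompt construction in a separate helper makes it easy to swap in the correct template once the fine-tuning format is confirmed.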

---