---
license: apache-2.0
language:
- it
datasets:
- DeepMount00/gquad_it
base_model: DeepMount00/Minerva-3B-base-RAG
pipeline_tag: text-generation
---

# QuantFactory/Minerva-3B-base-RAG-GGUF
This is a quantized version of [DeepMount00/Minerva-3B-base-RAG](https://huggingface.co/DeepMount00/Minerva-3B-base-RAG), created using llama.cpp.

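A minimal usage sketch, not part of the original card: one common way to run a GGUF quant locally is with `llama-cpp-python`. The quant filename below is a placeholder (check this repository's file list for the actual `*.gguf` names), and the `Contesto:`/`Domanda:`/`Risposta:` prompt layout is an assumption; consult the upstream model card for the prompt format the fine-tune actually expects.

```python
# Sketch: download one GGUF quant from this repo and run a RAG-style QA prompt.
# Assumptions: llama-cpp-python and huggingface_hub are installed; the quant
# filename and the prompt layout are placeholders, not confirmed by the card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="QuantFactory/Minerva-3B-base-RAG-GGUF",
    filename="Minerva-3B-base-RAG.Q4_K_M.gguf",  # placeholder: use a real file from the repo
)

llm = Llama(model_path=gguf_path, n_ctx=4096)

# RAG-style input: retrieved passage as context, followed by the question.
context = "La Torre di Pisa si trova in Piazza dei Miracoli, a Pisa."
question = "Dove si trova la Torre di Pisa?"
prompt = f"Contesto: {context}\nDomanda: {question}\nRisposta:"

out = llm(prompt, max_tokens=64, stop=["\n"])
print(out["choices"][0]["text"].strip())
```
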

# Model Card for Minerva-3B-base-QA-v1.0

**Minerva-3B-base-RAG** is a specialized question-answering (QA) model obtained by fine-tuning **Minerva-3B-base-v1.0**. The fine-tuning was conducted independently to improve the model's performance on QA tasks, making it well suited for Retrieval-Augmented Generation (RAG) applications.

## Overview
- **Model Type**: Fine-tuned Large Language Model (LLM)
- **Base Model**: [Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0), developed by [Sapienza NLP](https://nlp.uniroma1.it) in collaboration with [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) and [CINECA](https://www.cineca.it/)
- **Specialization**: Question-Answering (QA)
- **Ideal Use Case**: Retrieval-Augmented Generation applications

---