---
widget:
- text: "Jens Peter Hansen kommer fra Danmark"
language:
- fr
tags:
- llama
license: other
base_model:
- decapoda-research/llama-7b-hf
---
# Model Card: Llama-7b with LoRA Fine-tuning on QACR data
## Model Overview
- **Model Name**: Llama-7b
- **Model Architecture**: Transformer-based Language Model
- **Fine-tuning Method**: LoRA
- **Training Datasets**:
  - Educational Question Generation Dataset (described in the dataset card)
  - Alpaca GPT-4 French dataset (chat instruction task)
- Dolly_fr dataset (chat instruction task)
## Model Details
- **Base Model**: decapoda-research/llama-7b-hf
- **Fine-tuning Approach**: LoRA (Low-Rank Adaptation), which freezes the pretrained base weights and trains small low-rank adapter matrices injected into selected layers, greatly reducing the number of trainable parameters (see the sketch after this list).
- **Training Objective**: The model is trained to generate relevant and useful questions based on educational texts and to handle chat instruction tasks from the Alpaca GPT-4 and Dolly datasets.
- **Training Procedure**: The base Llama-7b model, already pretrained on a large corpus to learn general language patterns and representations, is fine-tuned with LoRA on the combination of the datasets listed above to specialize in educational question generation and chat instruction tasks.
## Intended Use
- **Primary Task**: Question generation for educational purposes and chat instruction tasks (a loading and inference sketch follows the list below).
- **Potential Use Cases**:
- Automated question generation for educational platforms and tutoring systems.
- Chat-based instruction and assistance in various domains.