---
library_name: peft
license: llama2
datasets:
- vicgalle/alpaca-gpt4
language:
- en
pipeline_tag: text-generation
tags:
- llama-2
- llama
- instruct
- instruction
---

# Info

This model is an adapter trained with the [**QLoRA**](https://arxiv.org/abs/2305.14314) technique.
 
* 📜 Model license: [Llama 2 Community License Agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
* 🏛️ Base Model: [Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf)
* 🖥️ Machine: Nvidia A100 (40 GB VRAM)
* 💵 Cost: $3.5
* ⌛ Training Time: 3 hours 22 minutes
* 📊 Dataset Used: [vicgalle/alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4)
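
Since the adapter was tuned on an Alpaca-style dataset, prompts at inference time should follow the same instruction template. A minimal sketch of that template is below; the exact wording used during training is an assumption based on the standard Alpaca format, not confirmed by this card.

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a prompt in the standard Alpaca style (assumed to match
    the vicgalle/alpaca-gpt4 training format)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```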

You can access the Llama 2 paper by clicking [here](https://arxiv.org/abs/2307.09288).
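
A QLoRA adapter is loaded on top of the 4-bit-quantized base model with PEFT. The sketch below shows the usual pattern; the `adapter_id` argument is a placeholder for this repository's Hub id, which is not stated on the card.

```python
def load_adapter(adapter_id: str):
    """Sketch: load the 4-bit base model and attach a QLoRA adapter.
    adapter_id is a placeholder Hub repo id, not confirmed by the card."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftModel

    base_id = "meta-llama/Llama-2-70b-hf"
    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(
        base_id, quantization_config=quant, device_map="auto"
    )
    # Merge the trained LoRA weights into the quantized base at inference time.
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```

Note that loading the 70B base in 4-bit still requires roughly 35 GB of GPU memory, so a single A100 40 GB is about the minimum for inference.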

# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))

|         | Average | ARC (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (0-shot) |
|---------|---------|---------------|---------------------|---------------|--------------------|
| Scores  | 67.3    | 66.38         | 84.51               | 62.75         | 55.57              |


# Loss Graph

![](https://i.imgur.com/xPRcRyM.png)