soujanyaporia committed
Commit • 22eeaa3 • 1 Parent(s): 5d4c8b9
Update README.md
README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
 
 ## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines
 
-📣 We developed Flacuna by fine-tuning Vicuna-13B on the Flan collection. Flacuna is better at problem-solving
+📣 We developed Flacuna by fine-tuning Vicuna-13B on the Flan collection. Flacuna is better than Vicuna at problem-solving. Access the model here [https://huggingface.co/declare-lab/flacuna-13b-v1.0](https://huggingface.co/declare-lab/flacuna-13b-v1.0).
 
 📣 Curious to know the performance of 🍮 🦙 **Flan-Alpaca** on large-scale LLM evaluation benchmark, **InstructEval**? Read our paper [https://arxiv.org/pdf/2306.04757.pdf](https://arxiv.org/pdf/2306.04757.pdf). We evaluated more than 10 open-source instruction-tuned LLMs belonging to various LLM families including Pythia, LLaMA, T5, UL2, OPT, and Mosaic. Codes and datasets: [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval)
 
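The updated line points readers to the Flacuna checkpoint. As a minimal sketch (not part of the commit itself), assuming the checkpoint loads through the standard `transformers` causal-LM API, accessing the model could look like the following; the dtype, device placement, and generation settings are assumptions, and since Flacuna is Vicuna-based it may expect a Vicuna-style prompt template rather than the plain prompt used here:

```python
# Minimal sketch: load the Flacuna checkpoint referenced in the README above.
# Assumptions: standard transformers causal-LM API; fp16 + device_map to fit a 13B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "declare-lab/flacuna-13b-v1.0"  # model ID from the link in the diff
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision; 13B weights are large in fp32
    device_map="auto",          # spread layers across available devices
)

prompt = "Explain instruction tuning in one sentence."  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```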