szymonrucinski committed on
Commit 4502c72
Parent(s): 8ea5f3c
Update README.md
README.md CHANGED

---
library_name: peft
---
<a target="_blank" href="https://colab.research.google.com/drive/1IM7j57g9ZHj-Pw2EXGyacNuKHjvK3pIc?usp=sharing">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

## Introduction
Krakowiak-7B is a fine-tuned version of Meta's [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). It was trained on a modified and updated version of the dataset originally created by [Chris Ociepa](https://huggingface.co/datasets/szymonindy/ociepa-raw-self-generated-instructions-pl), containing ~50K instructions, making it one of the biggest and best openly available Polish LLMs.

The name [krakowiak](https://www.youtube.com/watch?v=OeQ6jYzt6cM) refers to one of the traditional Polish folk dances, originating from the region of Kraków.

## How to test it?
The model can be run using the Hugging Face `transformers` and `peft` libraries, or in the browser using this [Google Colab](https://colab.research.google.com/drive/1IM7j57g9ZHj-Pw2EXGyacNuKHjvK3pIc?usp=sharing).
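
A minimal inference sketch is shown below, assuming the adapter is loaded with `peft` on top of the base Llama-2 checkpoint; the adapter repo id used here is a placeholder, not necessarily this repository's actual model id.

```python
# Hedged sketch: load the Krakowiak-7B PEFT adapter on top of Llama-2-7b-chat.
# "szymonrucinski/krakowiak-7b" is an assumed adapter id -- replace it with the
# actual id of this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "szymonrucinski/krakowiak-7b"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter
model.eval()

prompt = "Napisz krótki wiersz o Krakowie."  # "Write a short poem about Krakow."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
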
## Training procedure
The model was trained for 3 epochs; feel free [to read the report](https://api.wandb.ai/links/szymonindy/tkr343ad).
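
For reference, a minimal LoRA fine-tuning sketch with `peft` is given below; the rank, alpha, and target modules are illustrative assumptions, not the hyperparameters actually used (see the W&B report for those).

```python
# Hedged sketch of attaching a LoRA adapter to the base model before training.
# All hyperparameters here are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

lora_config = LoraConfig(
    r=16,                                 # adapter rank (placeholder)
    lora_alpha=32,                        # scaling factor (placeholder)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (placeholder)
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Train the wrapped model for 3 epochs on the ~50K instruction dataset with your
# preferred trainer (e.g. transformers.Trainer or trl.SFTTrainer).
```
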