---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- TinyLlama
- QLoRA
- Politics
- EU
- sft
language:
- en
---

# TinyParlaMintLlama-1.1B

TinyParlaMintLlama-1.1B is a QLoRA SFT fine-tune of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on a sample of a concentrated version of the English [ParlaMint](https://www.clarin.si/repository/xmlui/handle/11356/1864) dataset. The model was fine-tuned for ~12 hours on a single A100 40GB on ~125M tokens.
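
The training script itself has not been published. For orientation, here is a minimal sketch of what a QLoRA SFT run on this base model typically looks like with `peft`, `bitsandbytes` and `trl` (trl ≤0.8-style arguments); every hyperparameter and the tiny `train_ds` placeholder are illustrative assumptions, not the recipe behind this model:

```python
# Sketch of a typical QLoRA SFT setup for TinyLlama (peft + bitsandbytes + trl).
# NOT the released training script: all hyperparameters are assumptions, and
# `train_ds` stands in for the unpublished concentrated ParlaMint sample.
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Low-rank adapters trained on top of the quantized base: the "LoRA".
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Placeholder dataset with a single Zephyr-style chat example.
train_ds = Dataset.from_dict(
    {"text": ["<|user|>\nDraft a speech on EU policy.</s>\n<|assistant|>\nHonourable members, ...</s>"]}
)

trainer = SFTTrainer(
    model=model,
    train_dataset=train_ds,
    peft_config=peft_config,
    tokenizer=tokenizer,  # newer trl versions move this and the two fields below into SFTConfig
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="tinyparlamint-qlora", per_device_train_batch_size=4, num_train_epochs=1),
)
trainer.train()
```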

The goal of this project is to study the potential for improving the domain-specific (in this case political) knowledge of small (<3B) LLMs by concentrating the training dataset's TF-IDF with respect to the topics found in the original dataset.
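
The concentration procedure is not yet published either. A minimal sketch of one way such topic-driven TF-IDF filtering could work, where the `topic_terms` vocabulary and the `0.2` threshold are purely illustrative assumptions:

```python
# Illustrative sketch only: score each speech by its TF-IDF mass over a topic
# vocabulary and keep the most topically concentrated ones. The topic terms
# and the threshold are assumptions, not the project's actual procedure.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

speeches = ["...parliamentary speech 1...", "...parliamentary speech 2..."]
topic_terms = ["brexit", "migration", "budget", "climate"]  # hypothetical topic vocabulary

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(speeches)  # shape: (n_speeches, n_terms)

# Sum each speech's TF-IDF weights over the columns of the topic vocabulary.
vocab = vectorizer.vocabulary_
topic_cols = [vocab[t] for t in topic_terms if t in vocab]
scores = np.asarray(tfidf[:, topic_cols].sum(axis=1)).ravel()

# Keep only speeches whose topical TF-IDF mass clears the threshold.
concentrated = [s for s, score in zip(speeches, scores) if score > 0.2]
```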

The training data contains speeches from the **Austrian**, **Danish**, **French**, **British**, **Hungarian**, **Dutch**, **Norwegian**, **Polish**, **Swedish** and **Turkish** parliaments. The concentrated ParlaMint dataset, along with more information about the sample used, will be added soon.

## 💻 Usage

```python
# pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "h4rz3rk4s3/TinyParlaMintLlama-1.1B"

messages = [
    {
        "role": "system",
        "content": "You are a professional writer of political speeches.",
    },
    {"role": "user", "content": "Write a short speech on Brexit and its impact on the European Union."},
]

# Build the prompt with the model's chat template, then generate.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,  # half precision keeps the 1.1B model light on GPU memory
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
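
The sampling settings shown above (temperature 0.7, top-k 50, top-p 0.95) favor varied, speech-like output; lower the temperature or set `do_sample=False` for more conservative, deterministic drafts.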