matthieumeeus97 committed on
Commit 4bc57ad
1 Parent(s): 0bafe8e

Update README.md

Files changed (1): README.md +39 -2
README.md CHANGED
@@ -16,9 +16,9 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Llama-2-7b-hf-lora-tokentrans-it
+# ChocoLlama/ChocoLlama-2-7B-tokentrans-instruct
 
-This model is a fine-tuned version of [llama-2-nl/Llama-2-7b-hf-lora-tokentrans-sft](https://huggingface.co/llama-2-nl/Llama-2-7b-hf-lora-tokentrans-sft) on the BramVanroy/ultra_feedback_dutch dataset.
+This model is a fine-tuned version of [ChocoLlama/ChocoLlama-2-7B-tokentrans-base](https://huggingface.co/ChocoLlama/ChocoLlama-2-7B-tokentrans-base) on the BramVanroy/ultra_feedback_dutch dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.3913
 - Rewards/chosen: 0.1776
@@ -30,6 +30,43 @@ It achieves the following results on the evaluation set:
 - Logits/rejected: 1.1696
 - Logits/chosen: 1.5756
 
+# Use the model
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained('ChocoLlama/ChocoLlama-2-7B-tokentrans-instruct')
+model = AutoModelForCausalLM.from_pretrained('ChocoLlama/ChocoLlama-2-7B-tokentrans-instruct', device_map="auto")
+
+messages = [
+    {"role": "system", "content": "Je bent een artificiële intelligentie-assistent en geeft behulpzame, gedetailleerde en beleefde antwoorden op de vragen van de gebruiker."},
+    {"role": "user", "content": "Jacques Brel, Willem Elsschot en Jan Jambon zitten op café. Waar zouden ze over babbelen?"},
+]
+
+input_ids = tokenizer.apply_chat_template(
+    messages,
+    add_generation_prompt=True,
+    return_tensors="pt"
+).to(model.device)
+
+new_terminators = [
+    tokenizer.eos_token_id,
+    tokenizer.convert_tokens_to_ids("<|eot_id|>")
+]
+
+outputs = model.generate(
+    input_ids,
+    max_new_tokens=512,
+    eos_token_id=new_terminators,
+    do_sample=True,
+    temperature=0.8,
+    top_p=0.95,
+)
+response = outputs[0][input_ids.shape[-1]:]
+print(tokenizer.decode(response, skip_special_tokens=True))
+
+```
+
 ## Model description
 
 More information needed
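
The slice `outputs[0][input_ids.shape[-1]:]` in the added snippet keeps only the newly generated tokens, because `model.generate` returns the prompt ids followed by the continuation in one sequence. A minimal sketch of that slicing with made-up token ids (no model download required; the ids here are purely illustrative):

```python
# model.generate returns one sequence: the prompt token ids followed by
# the newly generated ids, so stripping the prompt is a simple slice.
prompt_ids = [12, 7, 99]       # hypothetical prompt token ids
generated = [4, 18, 2]         # hypothetical newly generated ids
full_output = prompt_ids + generated

# Same idea as outputs[0][input_ids.shape[-1]:] in the README snippet.
response_ids = full_output[len(prompt_ids):]
print(response_ids)  # [4, 18, 2]
```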