---
license: mit
---
# Piccolo-2x7b

**In loving memory of my dog Klaus (Piccolo)**

_~ Piccolo (Italian): the little one ~_

![piccolo.png](piccolo.png)

# Code Example

An inference and evaluation Colab notebook is available [here](https://colab.research.google.com/drive/1ZqLNvVvtFHC_4v2CgcMVh7pP9Fvx0SbI?usp=sharing)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response

model_id = "macadeliccc/piccolo-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

prompt = "What is the best way to train Cane Corsos?"

print("Response:")
print(generate_response(prompt), "\n")
```
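
Note that the `load_in_4bit=True` shortcut is deprecated in recent `transformers` releases in favor of an explicit `BitsAndBytesConfig`. A minimal sketch of the equivalent loading step (assuming `bitsandbytes` is installed and a CUDA device is available; the compute dtype choice here is an assumption, not part of the model card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "macadeliccc/piccolo-2x7b"

# Explicit 4-bit quantization config, replacing the deprecated load_in_4bit kwarg.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: float16 also works on older GPUs
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```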

The model handles code, math, and logical reasoning questions well; try whatever prompts you like.

# Evaluations

TODO