Commit 6dfc80e by Ramikan-BR (parent: 70582b4): Update README.md
tags:
- sft
---
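Below are sample inference runs with the fine-tuned model. All three examples reference an `alpaca_prompt` template defined earlier in the training notebook ("copied from above"); that definition is not part of this README. The sketch below is a reconstruction inferred from the prompts echoed in the outputs, so treat it as an assumption rather than the notebook's exact string.

```python
# Hypothetical reconstruction of the template the examples call alpaca_prompt.
# Inferred from the echoed outputs below; the real definition lives in the
# training notebook. Note that str.format ignores the unused third argument
# the examples pass, so a two-slot template still works here.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Input:
{}

### Output:
{}"""
```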
1 - Batch generation with `model.generate` and `tokenizer.batch_decode`:

```python
# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        "Continue the fibonnaci sequence.", # instruction
        "1, 1, 2, 3, 5, 8", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
tokenizer.batch_decode(outputs)
```

Output:

```
['Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Input:\nContinue the fibonnaci sequence.\n\n### Output:\n1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196728, 318101']
```
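`batch_decode` returns the echoed prompt together with the continuation. If only the generated text is wanted, a standard `transformers` pattern (not part of the original notebook) is to slice off the prompt tokens first:

```python
# Decode only the newly generated tokens, dropping the echoed prompt.
gen_only = outputs[:, inputs["input_ids"].shape[1]:]
print(tokenizer.batch_decode(gen_only, skip_special_tokens = True)[0])
```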
2 - Streaming generation with `TextStreamer`:

```python
# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        "What is fibonacci sequence?", # instruction
        "", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 256)
```

Output:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Input:
What is fibonacci sequence?

### Output:
The Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding ones, usually starting with 0 and 1. The sequence goes like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196728, 328101, 544829, 973530, 1518361, 2492891, 4011452, 6504307, 9518768, 15023075
```
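`TextStreamer` prints tokens to stdout as they are generated, including the echoed prompt. To stream only the model's answer, `transformers` also accepts a `skip_prompt` flag (an option not used in the original notebook):

```python
# Stream the completion only, without re-printing the prompt.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
```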
3 - Reloading the saved LoRA adapters for inference (change `if False:` to `if True:` to run this branch):

```python
if False:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model", # YOUR MODEL YOU USED FOR TRAINING
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model) # Enable native 2x faster inference

# alpaca_prompt = You MUST copy from above!

inputs = tokenizer(
[
    alpaca_prompt.format(
        "Create an AI. It will be trained to chat and to write Python code as requested, after being trained for those tasks.", # instruction
        "", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 4096)
```
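The reload branch assumes `max_seq_length`, `dtype`, and `load_in_4bit` are still in scope from the training cells. When running it standalone, values along these lines would be needed (hypothetical, matching common Unsloth defaults):

```python
max_seq_length = 2048  # must match the value used for training
dtype = None           # None lets Unsloth auto-detect (float16 or bfloat16)
load_in_4bit = True    # 4-bit quantization to cut memory use
```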
Streamed output:

````
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Input:
Create an AI. It will be trained to chat and to write Python code as requested, after being trained for those tasks.

### Output:
Here is a simple Python program that uses the OpenAI's ChatGPT API to simulate a chatbot:

```python
import openai
from openai import ChatGPT

# Initialize the ChatGPT API
openai.api_key = "YOUR_API_KEY"

# Create a ChatGPT model
model = ChatGPT(model_name="gpt-3.5-turbo")

# Create a prompt
prompt = "Write a python program that takes a number as input and prints out the square of that number."

# Send the prompt to the ChatGPT model
response = model.create(input=prompt)

# Print the response
print(response)
```

This program will output a Python program that takes a number as input and prints out the square of that number.<|endoftext|>
````
# Uploaded model

- **Developed by:** Ramikan-BR
|