jeffreymeetkai committed
Commit 9d78f40 • 1 Parent(s): 89e6716
Update README.md
README.md CHANGED
@@ -21,12 +21,12 @@ The model determines when to execute functions, whether in parallel or serially,
 
 ## How to Get Started
 
-We provide custom code for
+We provide custom code for parsing raw model responses into a JSON object containing role, content and tool_calls fields. This enables the users to read the function-calling output of the model easily.
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v2.5"
+tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v2.5")
 model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-small-v2.5", device_map="auto", trust_remote_code=True)
 
 tools = [
@@ -51,7 +51,6 @@ tools = [
 messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]
 
 final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
-tokenizer.padding_side = "left"
 inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")
 pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
 print(tokenizer.decode(pred.cpu()[0]))
@@ -61,7 +60,7 @@ print(tokenizer.decode(pred.cpu()[0]))
 
 We convert function definitions to a similar text to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages.
 
-This formatting is also available via our vLLM server which we process the functions into Typescript definitions encapsulated in a system message
+This formatting is also available via our vLLM server which we process the functions into Typescript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. This means that the lists of messages can be formatted for you with the apply_chat_template() method within our server:
 
 ```python
 from openai import OpenAI
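
Pieced together, the updated "How to Get Started" snippet from the first two hunks runs end to end roughly as sketched below. The diff truncates the actual `tools` list, so the `get_current_weather` schema here is only an illustrative assumption in the OpenAI function-calling format; the rest mirrors the README lines shown above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v2.5")
model = AutoModelForCausalLM.from_pretrained(
    "meetkai/functionary-small-v2.5", device_map="auto", trust_remote_code=True
)

# The diff omits the body of `tools`; this get_current_weather schema is a
# hypothetical stand-in in the OpenAI function-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]

# Render the prompt with the model's chat template, generate, and decode.
final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")
pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
print(tokenizer.decode(pred.cpu()[0]))
```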
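The last hunk cuts off right after `from openai import OpenAI`. A minimal sketch of how a client call against such an OpenAI-compatible Functionary vLLM server might look, assuming a local deployment at http://localhost:8000/v1 and reusing the hypothetical weather tool above (the endpoint, api_key, and tool schema are assumptions, not part of the diff):

```python
from openai import OpenAI

# Assumed local Functionary vLLM deployment; adjust base_url and api_key to your setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

response = client.chat.completions.create(
    model="meetkai/functionary-small-v2.5",
    messages=[{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        }
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    tool_choice="auto",
)

# The server returns parsed tool calls in the standard OpenAI response shape.
print(response.choices[0].message.tool_calls)
```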