akjindal53244 committed
Commit 9cb1e56
1 Parent(s): 1d7e631

Update README.md

Files changed (1)
  1. README.md +54 -1
README.md CHANGED
@@ -99,6 +99,7 @@ Llama-3.1-Storm-8B is a powerful generalist model useful for diverse application
 1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
 2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
 3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
+4. Ollama: `ollama run ajindal/llama3.1-storm:8b`
 
 
 
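For orientation on the four distribution formats listed above, here is a minimal sketch of loading the BF16 checkpoint from item 1 with Hugging Face `transformers`. Only the repo ID comes from this README; the dtype, device placement, and prompt are illustrative assumptions, and the README's own usage sections remain the reference.

```python
# Minimal sketch (not part of this commit): load the BF16 variant with transformers.
# The repo ID is from item 1 above; dtype/device_map and the prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 distribution
    device_map="auto",
)

messages = [{"role": "user", "content": "What is 2+2?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```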
@@ -170,7 +171,7 @@ Here are the available functions:
 <tools>{}</tools>
 
 For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
-<tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""
+<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>"""
 
 # Convert the tools list to a string representation
 tools_str = json.dumps(tools_list, ensure_ascii=False)
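The hunk above touches the line that tells the model how to emit tool calls: a dict wrapped in `<tool_call>...</tool_call>` tags, which the expected outputs later in this diff render as a single-quoted, Python-style dict. A minimal sketch of recovering such calls on the caller side follows; the helper name, the regex, and the choice of `ast.literal_eval` over `json.loads` are illustrative assumptions, not part of the README.

```python
import ast
import re

def parse_tool_calls(text: str) -> list[dict]:
    """Extract every <tool_call>...</tool_call> payload from model output.

    The README's expected outputs show a single-quoted dict
    ({'tool_name': ..., 'tool_arguments': {...}}), so ast.literal_eval
    is used rather than json.loads.
    """
    payloads = re.findall(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    return [ast.literal_eval(p.strip()) for p in payloads]

# Example using the expected output shown in the vLLM section of this README.
output = "<tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>"
print(parse_tool_calls(output))
# [{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}]
```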
@@ -217,6 +218,58 @@ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tok
 print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip()) # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
 ```
 
+#### Use with [Ollama](https://ollama.com/)
+```
+import ollama
+
+tools = [{
+    'type': 'function',
+    'function': {
+        'name': 'get_current_weather',
+        'description': 'Get the current weather for a city',
+        'parameters': {
+            'type': 'object',
+            'properties': {
+                'city': {
+                    'type': 'string',
+                    'description': 'The name of the city',
+                },
+            },
+            'required': ['city'],
+        },
+    },
+},
+{
+    'type': 'function',
+    'function': {
+        'name': 'get_places_to_visit',
+        'description': 'Get places to visit in a city',
+        'parameters': {
+            'type': 'object',
+            'properties': {
+                'city': {
+                    'type': 'string',
+                    'description': 'The name of the city',
+                },
+            },
+            'required': ['city'],
+        },
+    },
+},
+]
+
+response = ollama.chat(
+    model='ajindal/llama3.1-storm:8b',
+    messages=[
+        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
+        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
+    ],
+    tools=tools
+)
+
+print(response['message']) # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
+```
+
 
 ## Alignment Note
 While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
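To close the loop on the tool-calling examples in this diff, here is a hedged sketch of dispatching a parsed `<tool_call>` to local Python implementations of the two tools declared in the Ollama example. The registry, the stub function bodies, and the parsing helper are illustrative assumptions; only the tag format and the tool/argument names come from the README.

```python
import ast
import re

# Illustrative stubs for the two tools declared in the Ollama example above.
def get_current_weather(city: str) -> str:
    return f"Weather lookup for {city} would go here"  # a real version would call an API

def get_places_to_visit(city: str) -> str:
    return f"Sightseeing list for {city} would go here"

TOOL_REGISTRY = {
    "get_current_weather": get_current_weather,
    "get_places_to_visit": get_places_to_visit,
}

def run_tool_calls(model_output: str) -> list[str]:
    """Parse each <tool_call> payload (single-quoted dict, as in the README's
    expected outputs) and dispatch it to the matching local function."""
    results = []
    for payload in re.findall(r"<tool_call>(.*?)</tool_call>", model_output, re.DOTALL):
        call = ast.literal_eval(payload.strip())
        fn = TOOL_REGISTRY[call["tool_name"]]
        results.append(fn(**call["tool_arguments"]))
    return results

# Example using the expected response from the Ollama example above.
response_content = "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"
print(run_tool_calls(response_content))
```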