# Custom Models
In addition to hosted and local language models, Open Interpreter also supports custom models.

As long as your system can accept an input and stream an output (and can be interacted with via a Python generator), it can be used as a language model in Open Interpreter.

Simply replace the OpenAI-compatible `completions` function in your language model with one of your own:
```python
from interpreter import interpreter


def custom_language_model(openai_message):
    """
    OpenAI-compatible completions function (this one just echoes what the user said back).
    """
    users_content = openai_message[-1].get("content")  # Get last message's content

    # To make it OpenAI-compatible, we yield this first:
    yield {"delta": {"role": "assistant"}}

    for character in users_content:
        yield {"delta": {"content": character}}


# Tell Open Interpreter to power the language model with this function
interpreter.llm.completions = custom_language_model
```
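A real model follows the same generator contract: yield a role delta first, then content deltas as they arrive. As a rough sketch, here is how you might stream from a local HTTP backend; the endpoint URL and its newline-delimited response format are assumptions for illustration, not part of Open Interpreter.

```python
import requests

from interpreter import interpreter


def http_language_model(openai_message):
    """
    Streams a completion from a hypothetical local HTTP endpoint.
    The URL and line-per-chunk response format are assumptions;
    adapt both to whatever your backend actually exposes.
    """
    response = requests.post(
        "http://localhost:8000/generate",  # hypothetical endpoint
        json={"prompt": openai_message[-1].get("content")},
        stream=True,
    )
    response.raise_for_status()

    # Open Interpreter expects an OpenAI-style delta stream:
    yield {"delta": {"role": "assistant"}}

    for line in response.iter_lines(decode_unicode=True):
        if line:  # skip keep-alive blank lines
            yield {"delta": {"content": line}}


interpreter.llm.completions = http_language_model
```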
Then, set the following settings:
```python
interpreter.llm.context_window = 2000  # In tokens
interpreter.llm.max_tokens = 1000  # In tokens
interpreter.llm.supports_vision = False  # Does this completions endpoint accept images?
interpreter.llm.supports_functions = False  # Does this completions endpoint accept/return function calls?
```
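If you're unsure what values to use, you can estimate how many tokens your typical prompts consume with a tokenizer. The sketch below uses tiktoken's `cl100k_base` encoding as a stand-in; your custom model may tokenize differently, so treat the counts as rough estimates.

```python
import tiktoken

# cl100k_base is an assumption; substitute your model's actual tokenizer.
encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Write a Python script that prints the first ten prime numbers."
print(len(encoding.encode(prompt)))  # Number of tokens this prompt consumes
```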
And start using it:

```python
interpreter.chat("Hi!")  # Returns/displays "Hi!" character by character
```
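Putting the steps together, a minimal end-to-end script (using the echo model from above) looks like this:

```python
from interpreter import interpreter


def custom_language_model(openai_message):
    """Echoes the user's last message back, one character at a time."""
    yield {"delta": {"role": "assistant"}}
    for character in openai_message[-1].get("content"):
        yield {"delta": {"content": character}}


interpreter.llm.completions = custom_language_model
interpreter.llm.context_window = 2000
interpreter.llm.max_tokens = 1000
interpreter.llm.supports_vision = False
interpreter.llm.supports_functions = False

interpreter.chat("Hi!")  # Streams "Hi!" back character by character
```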