# performance-llm-board / app_constants.py
# piotr-szleg-bards-ai, 2024-03-15: "Publish script update" (commit 15822f7)
import pandas as pd
README = """
This project compares different large language models and their providers for real-time applications and mass data processing.
While other benchmarks compare LLMs on human-intelligence tasks, this benchmark focuses on business and engineering aspects such as response times, pricing and data-streaming capabilities.
To perform the evaluation we chose the task of summarizing newspaper articles from the [GEM/xlsum](https://huggingface.co/datasets/GEM/xlsum) dataset, as it represents a very standard type of task where the model has to understand unstructured natural-language text, process it and output text in a specified format.
For this version we chose English, Ukrainian and Japanese, with Japanese representing languages that use logographic writing systems. This also lets us validate the effectiveness of the LLMs across different language groups.
Each of the models was asked to summarize the text using the following prompt:
```
{}
```
Where {{language}} stands for the original language of the text, as we wanted to avoid the model translating the text into English during summarization.
The LLM was asked to return the output in three formats: markdown, JSON and function call. Note that function calls are currently supported only by the OpenAI API.
To do that, we appended the following text to the query:
{}
All of the calls were made from the same machine, on the same internet connection, using the LiteLLM library, which may add some time overhead compared to raw curl calls. Calls were made from Poland (UTC+1).
Please take a look at the following project and let us know if you have any questions or suggestions.
"""
JS = """
function test() {
var google_button = document.querySelector('#google-button')
var open_ai_button = document.querySelector('#open-ai-button')
var filter_textbox = document.querySelector('#filter-textbox textarea')
var filter_button = document.querySelector('#filter-button')
console.log(google_button, filter_textbox, filter_button)
function for_button(button, search_query) {
button.onclick = function() {
filter_textbox.value = search_query
var input_event = new InputEvent('input', {
bubbles: true,
cancelable: true,
composed: true
})
filter_textbox.dispatchEvent(input_event);
setTimeout(
()=>filter_button.click(),
1000
)
}
}
for_button(google_button, "gemini-pro | PaLM 2")
for_button(open_ai_button, "gpt-4 | gpt-4-turbo | gpt-3.5-turbo")
}
"""
TIME_PERIODS_EXPLANATION_DF = pd.DataFrame(
    {
        "time_of_day": [
            "early morning",
            "morning",
            "afternoon",
            "late afternoon",
            "evening",
            "late evening",
            "midnight",
            "night",
        ],
        "hour_range": ["6-8", "9-11", "12-14", "15-17", "18-20", "21-23", "0-2", "3-5"],
    }
)
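# A small helper showing how an hour of day could be bucketed with this table
# (a sketch; the app's own grouping code is not shown here):

```python
import pandas as pd

# Rebuilt locally so the snippet runs standalone; identical to
# TIME_PERIODS_EXPLANATION_DF above.
periods = pd.DataFrame(
    {
        "time_of_day": [
            "early morning", "morning", "afternoon", "late afternoon",
            "evening", "late evening", "midnight", "night",
        ],
        "hour_range": ["6-8", "9-11", "12-14", "15-17", "18-20", "21-23", "0-2", "3-5"],
    }
)

def time_of_day(hour: int) -> str:
    """Return the label whose inclusive hour range contains `hour` (0-23)."""
    for label, hour_range in zip(periods["time_of_day"], periods["hour_range"]):
        start, end = map(int, hour_range.split("-"))
        if start <= hour <= end:
            return label
    raise ValueError(f"hour must be in 0-23, got {hour}")

# time_of_day(7) -> "early morning"; time_of_day(0) -> "midnight"
```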