import pandas as pd
README = """
This project compares different large language models and their providers for real-time applications and mass data processing.
While other benchmarks compare LLMs on various human intelligence tasks, this benchmark focuses on business and engineering aspects such as response times, pricing, and data streaming capabilities.
To perform the evaluation, we chose the task of newspaper article summarization from the [GEM/xlsum](https://huggingface.co/datasets/GEM/xlsum) dataset, as it represents a very standard type of task where the model has to understand unstructured natural language text, process it, and output text in a specified format.
For this version we chose the English, Ukrainian, and Japanese languages, with Japanese representing languages that use logographic writing systems. This also enables us to validate the effectiveness of the LLMs across different language groups.
Each of the models was asked to summarize the text using the following prompt:
```
{}
```
Here {{language}} stands for the original language of the text, as we wanted to avoid the model translating the text into English during summarization.
Each LLM was asked to return the output in three formats: markdown, JSON, and function call. Note that function calls are currently supported only by the OpenAI API.
To do that, we added the following text to the query:
{}
All of the calls were made from the same machine, over the same internet connection, using the LiteLLM library, which may add some time overhead compared to pure curl calls. Calls were made from Poland (UTC+1).
Please take a look at the following project and let us know if you have any questions or suggestions.
"""
JS = """
function test() {
    // Look up the provider shortcut buttons and the model filter controls by element id.
    var google_button = document.querySelector('#google-button')
    var open_ai_button = document.querySelector('#open-ai-button')
    var filter_textbox = document.querySelector('#filter-textbox textarea')
    var filter_button = document.querySelector('#filter-button')
    console.log(google_button, filter_textbox, filter_button)
    // Wire a button so that clicking it fills the filter textbox and applies the filter.
    function for_button(button, search_query) {
        button.onclick = function() {
            filter_textbox.value = search_query
            // Dispatch an input event so the UI framework notices the programmatic change.
            var input_event = new InputEvent('input', {
                bubbles: true,
                cancelable: true,
                composed: true
            })
            filter_textbox.dispatchEvent(input_event);
            // Give the textbox a moment to update before triggering the filter button.
            setTimeout(
                ()=>filter_button.click(),
                1000
            )
        }
    }
    for_button(google_button, "gemini-pro | PaLM 2")
    for_button(open_ai_button, "gpt-4 | gpt-4-turbo | gpt-3.5-turbo")
}
"""
# Descriptive time-of-day labels and the hour ranges (0-23) they cover.
TIME_PERIODS_EXPLANATION_DF = pd.DataFrame(
    {
        "time_of_day": [
            "early morning",
            "morning",
            "afternoon",
            "late afternoon",
            "evening",
            "late evening",
            "midnight",
            "night",
        ],
        "hour_range": ["6-8", "9-11", "12-14", "15-17", "18-20", "21-23", "0-2", "3-5"],
    }
)
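

# Hedged helper (not part of the original module): one possible way to map an hour
# of day to the labels in TIME_PERIODS_EXPLANATION_DF, assuming each "hour_range"
# is inclusive at both ends.
def example_hour_to_time_of_day(hour: int) -> str:
    for _, row in TIME_PERIODS_EXPLANATION_DF.iterrows():
        start, end = (int(part) for part in row["hour_range"].split("-"))
        if start <= hour <= end:
            return row["time_of_day"]
    raise ValueError(f"hour must be in the range 0-23, got {hour}")


# Example: example_hour_to_time_of_day(13) returns "afternoon" (hour range "12-14").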