Etash Guha committed
Commit 5731c25 • 1 Parent(s): d4db51c
changed name:
app.py CHANGED

```diff
@@ -15,7 +15,7 @@ if 'response_content' not in st.session_state:
 # Creating main columns for the chat and runtime notifications
 chat_col = st.container()
 
-chat_col.title("
+chat_col.title("LATS powered by SambaNova")
 description = """This demo is an implementation of Language Agent Tree Search (LATS) (https://arxiv.org/abs/2310.04406) with Samba-1 in the backend. Thank you to the original authors of demo on which this is based from [Lapis Labs](https://lapis.rocks/)!
 
 Given Samba-1's lightning quick inference, not only can we accelerate our system's speeds but also improve our system's accuracy. Using many inference calls in this LATS style, we can solve programming questions with higher accuracy. In fact, this system reaches **GPT-3.5 accuracy on HumanEval Python**, 74% accuracy, with LLaMa 3 8B, taking 8 seconds on average. This is a 15.5% boost on LLaMa 3 8B alone.
```
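For context, a minimal sketch of how the changed line sits in a runnable Streamlit page. Only `chat_col`, the new title, and the `description` string come from this diff; the session-state default and the `chat_col.markdown` call are assumptions about the surrounding app.py, not shown in this hunk.

```python
import streamlit as st

# Assumption: the hunk header shows this guard; the real initial value is not in the diff.
if 'response_content' not in st.session_state:
    st.session_state.response_content = None

# Creating main columns for the chat and runtime notifications
chat_col = st.container()

chat_col.title("LATS powered by SambaNova")
description = """This demo is an implementation of Language Agent Tree Search (LATS)..."""
chat_col.markdown(description)  # assumption: the description is rendered below the title
```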
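The description's claim is that many fast inference calls, spent searching over candidate programs rather than accepting a single completion, buy accuracy. A heavily simplified sketch of that idea follows; it is a greedy best-first loop rather than the full MCTS-with-reflection procedure of the LATS paper, and `generate_candidates` / `score_candidate` are hypothetical placeholders, not the app's actual API.

```python
# Simplified illustration of the search-over-candidates idea, not the real LATS algorithm.
import heapq

def lats_style_search(problem: str, generate_candidates, score_candidate,
                      width: int = 4, depth: int = 3) -> str:
    """Greedy best-first expansion over LLM-generated candidate solutions."""
    # Each frontier entry is (negated score, candidate program text).
    frontier = [(0.0, "")]
    best = (float("-inf"), "")
    for _ in range(depth):
        _, parent = heapq.heappop(frontier)
        # Ask the model for several refinements of the current candidate (hypothetical call).
        for cand in generate_candidates(problem, parent, n=width):
            score = score_candidate(problem, cand)  # e.g. fraction of unit tests passed
            best = max(best, (score, cand))
            heapq.heappush(frontier, (-score, cand))
    return best[1]
```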