ehristoforu committed • Commit ca0f21b • Parent(s): aa4baec

Update app.py
app.py CHANGED
```diff
@@ -13,14 +13,7 @@ DEFAULT_MAX_NEW_TOKENS = 1024
 MAX_INPUT_TOKEN_LENGTH = 4000
 
 DESCRIPTION = """
-#
-
-💻 This Space demonstrates model [Mistral-7b-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) by Mistral AI, a Mistral-chat model with 7B parameters fine-tuned for chat instructions and specialized on many tasks. Feel free to play with it, or duplicate to run generations without a queue! If you want to run your own service, you can also [deploy the model on Inference Endpoints](https://huggingface.co/inference-endpoints).
-
-🔎 For more details about the Mistral family of models and how to use them with `transformers`, take a look [at our blog post](https://huggingface.co/blog/Andyrasika/mistral-7b-empowering-conversation).
-
-🏃🏻 Check out our [Playground](https://huggingface.co/spaces/osanseviero/mistral-super-fast) for a super-fast tasks completion demo that leverages a streaming [inference endpoint](https://huggingface.co/inference-endpoints).
-
+# [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
 """
 
 def clear_and_save_textbox(message: str) -> tuple[str, str]:
```
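For orientation, the sketch below is a minimal, hypothetical reconstruction of how a `DESCRIPTION` constant and the `clear_and_save_textbox` helper visible in the diff context are commonly wired into a Gradio Blocks app. It is not the Space's actual `app.py`; the widget layout and the body of `clear_and_save_textbox` are assumptions.

```python
import gradio as gr

# The constant edited in this commit (new, shortened value).
DESCRIPTION = """
# [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
"""


def clear_and_save_textbox(message: str) -> tuple[str, str]:
    # Assumed body: empty the textbox and pass the submitted message along
    # so a later event handler can read it from a gr.State component.
    return "", message


with gr.Blocks() as demo:
    gr.Markdown(DESCRIPTION)  # renders the description at the top of the Space
    saved_input = gr.State()  # holds the most recently submitted message
    textbox = gr.Textbox(label="Message", placeholder="Type a message...")

    # On submit, clear the textbox and stash its contents for downstream steps.
    textbox.submit(
        fn=clear_and_save_textbox,
        inputs=textbox,
        outputs=[textbox, saved_input],
    )

if __name__ == "__main__":
    demo.queue().launch()
```

Clearing the input inside the submit handler while keeping the raw message in a `gr.State` is a common pattern in these chat demos, since chained events can then consume the saved input without re-reading the UI.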