Update app.py
app.py CHANGED
@@ -16,10 +16,16 @@ examples = [
 ]
 
 title = "InstructionGen: Instructions from Text"
-description = "This demo compares the [flan-t5-small-instructiongen](https://huggingface.co/pszemraj/flan-t5-small-instructiongen) and [bart-base-instructiongen](https://huggingface.co/pszemraj/bart-base-instructiongen) models on 'creating' an instruction for arbitrary text."
+description = "This demo compares the [flan-t5-small-instructiongen](https://huggingface.co/pszemraj/flan-t5-small-instructiongen) and [bart-base-instructiongen](https://huggingface.co/pszemraj/bart-base-instructiongen) models on 'creating' an instruction for arbitrary text. Note that [the dataset](https://huggingface.co/datasets/pszemraj/fleece2instructions) & models are trained to **only** generate an `instruction` relevant to some text, and **do not** expect/recover the (potential) `inputs`."
 article = """---
+These models generate instructions **only** for Large Language Models (LLMs) from arbitrary text. They are fine-tuned on the [fleece2instructions](https://huggingface.co/datasets/pszemraj/fleece2instructions) dataset, which is a filtered/formatted version of the [alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset.
 
-"""
+Example of the difference:
+- LLM instruction: "What is the capital of France?"
+- Instruction+specific inputs:
+    Instruction: "Provide information on the following:"
+    Specific Inputs: {"category": "geography", "question": "capital of France"}
+"""
 
 
 def inference(text):
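For readers who want to try the two checkpoints being compared outside the Space, below is a minimal sketch using the standard `transformers` text2text-generation pipeline. The diff does not show the body of `inference(text)`, so the generation settings (`max_length`, `num_beams`) here are assumptions rather than the demo's actual parameters.

```python
# Sketch only: the Space's real inference() is not part of this diff, and
# max_length / num_beams are assumed values, not the demo's settings.
from transformers import pipeline

text = "Paris is the capital and most populous city of France."

for checkpoint in (
    "pszemraj/flan-t5-small-instructiongen",
    "pszemraj/bart-base-instructiongen",
):
    generator = pipeline("text2text-generation", model=checkpoint)
    out = generator(text, max_length=64, num_beams=4)
    # Each model returns only an instruction relevant to the text,
    # e.g. "What is the capital of France?" -- no `inputs` field.
    print(checkpoint, "->", out[0]["generated_text"])
```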