dhmeltzer committed on
Commit
f10d1e0
1 Parent(s): ce56c40

Update app.py

Files changed (1)
  1. app.py +3 -4
app.py CHANGED
@@ -11,13 +11,12 @@ def main():
  We include the output from four different models, the [BART-Large](https://huggingface.co/dhmeltzer/bart-large_askscience-qg) and [FLAN-T5-Base](https://huggingface.co/dhmeltzer/flan-t5-base_askscience-qg) models \
  fine-tuned on the r/AskScience split of the [ELI5 dataset](https://huggingface.co/datasets/eli5) as well as the zero-shot output \
  of the [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl) model and the [GPT-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) model.\
- \n\n For a more thorough discussion of question generation see this [report](https://wandb.ai/dmeltzer/Question_Generation/reports/Exploratory-Data-Analysis-for-r-AskScience--Vmlldzo0MjQwODg1?accessToken=fndbu2ar26mlbzqdphvb819847qqth2bxyi4hqhugbnv97607mj01qc7ed35v6w8) for EDA on the r/AskScience dataset and this \
+ For a more thorough discussion of question generation see this [report](https://wandb.ai/dmeltzer/Question_Generation/reports/Exploratory-Data-Analysis-for-r-AskScience--Vmlldzo0MjQwODg1?accessToken=fndbu2ar26mlbzqdphvb819847qqth2bxyi4hqhugbnv97607mj01qc7ed35v6w8) for EDA on the r/AskScience dataset and this \
  [report](https://api.wandb.ai/links/dmeltzer/7an677es) for details on our training procedure.\
  \n\nThe two fine-tuned models (BART-Large and FLAN-T5-Base) are hosted on AWS using a combination of AWS Sagemaker, Lambda, and API gateway.\
  GPT-3.5 is called using the OpenAI API and the FLAN-T5-XXL model is hosted by HuggingFace and is called with their Inference API.\
  \n \n **Disclaimer**: When first running this application it may take approximately 30 seconds for the first two responses to load because of the cold start problem with AWS Lambda.\
- You may recieve also an error message when calling the FLAN-T5-XXL model since the Inference API takes around 20 seconds to load the model.\
- Both issues should disappear on any subsequent calls to the application.")
+ The models will respond quicker on any subsequent calls to the application.")

  AWS_checkpoints = {}
  AWS_checkpoints['BART-Large']='https://8hlnvys7bh.execute-api.us-east-1.amazonaws.com/beta/'
@@ -44,7 +43,7 @@ def main():

  # User search
  user_input = st.text_area("Question Generator",
- """Black holes are the most gravitationally dense objects in the universe.""")
+ """Black holes can evaporate by emitting Hawking radiation.""")

  if user_input:
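The disclaimer in the diff describes two slow first calls: AWS Lambda cold starts (~30 s) and the Inference API loading FLAN-T5-XXL (~20 s). A minimal retry-wrapper sketch for riding out that warm-up period — `call_with_retry` is a hypothetical helper, not something shown in the actual app.py:

```python
import time


def call_with_retry(invoke, retries=3, delay=5.0):
    """Call `invoke()` and retry on failure, to ride out a Lambda
    cold start or the Inference API loading a model into memory.
    Raises the last error if all attempts fail."""
    for attempt in range(retries):
        try:
            return invoke()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay)


# Hypothetical usage against one of the endpoints from the diff,
# e.g. AWS_checkpoints['BART-Large'] (requires the `requests` package):
# result = call_with_retry(lambda: requests.post(
#     AWS_checkpoints['BART-Large'],
#     json={'inputs': user_input}).json())
```

On subsequent calls the endpoints stay warm, so the first attempt normally succeeds and the retry loop adds no latency.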