dhmeltzer committed
Commit
694ef2b
1 Parent(s): 8df9ec0

Update app.py

Files changed (1)
  1. app.py +4 -6
app.py CHANGED
@@ -13,11 +13,9 @@ def main():
     of the [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl) model and the [GPT-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) model.\
     \n\n For a more thorough discussion of question generation see this [report](https://wandb.ai/dmeltzer/Question_Generation/reports/Exploratory-Data-Analysis-for-r-AskScience--Vmlldzo0MjQwODg1?accessToken=fndbu2ar26mlbzqdphvb819847qqth2bxyi4hqhugbnv97607mj01qc7ed35v6w8) for EDA on the r/AskScience dataset and this \
     [report](https://api.wandb.ai/links/dmeltzer/7an677es) for details on our training procedure.\
-    \n\n \
-    The two fine-tuned models (BART-Large and FLAN-T5-Base) are hosted on AWS using a combination of AWS Sagemaker, Lambda, and API gateway. \
-    \ GPT-3.5 is called using the OpenAI API and the FLAN-T5-XXL model is hosted by HuggingFace and is called with their Inference API.\
-    \n \n \
-    **Disclaimer**: You may recieve an error message when calling the FLAN-T5-XXL model since the Inference API takes around 20 seconds to load the model.\
+    \n\nThe two fine-tuned models (BART-Large and FLAN-T5-Base) are hosted on AWS using a combination of AWS Sagemaker, Lambda, and API gateway.\
+    GPT-3.5 is called using the OpenAI API and the FLAN-T5-XXL model is hosted by HuggingFace and is called with their Inference API.\
+    \n \n **Disclaimer**: You may recieve an error message when calling the FLAN-T5-XXL model since the Inference API takes around 20 seconds to load the model.\
     ")
 
     AWS_checkpoints = {}
@@ -49,7 +47,7 @@ def main():
 
     if user_input:
 
-        for name, url in AWS_checkpoints.values():
+        for name, url in AWS_checkpoints.items():
             headers={'x-api-key': key}
 
             input_data = json.dumps({'inputs':user_input})
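The second hunk fixes a real bug rather than wording: `dict.values()` yields only the stored URL strings, so unpacking each element into `name, url` raises a `ValueError`, while `dict.items()` yields `(key, value)` pairs as the loop expects. Below is a minimal sketch of the corrected request loop; the checkpoint names, endpoint URLs, `key` value, and the use of `requests.post` are illustrative assumptions, since `app.py` defines them elsewhere.

```python
import json

import requests

# Hypothetical checkpoint mapping; the real app.py builds AWS_checkpoints
# from its own configuration, so these names and URLs are placeholders.
AWS_checkpoints = {
    "BART-Large": "https://example.execute-api.us-east-1.amazonaws.com/prod/bart-large",
    "FLAN-T5-Base": "https://example.execute-api.us-east-1.amazonaws.com/prod/flan-t5-base",
}

key = "my-api-gateway-key"           # placeholder for the real x-api-key value
user_input = "Why is the sky blue?"  # placeholder for the user's question

# .items() yields (name, url) pairs, which is what the unpacking expects.
# The old `for name, url in AWS_checkpoints.values()` iterated over the URL
# strings alone and failed at the unpacking step.
for name, url in AWS_checkpoints.items():
    headers = {"x-api-key": key}
    input_data = json.dumps({"inputs": user_input})
    response = requests.post(url, headers=headers, data=input_data)
    print(name, response.status_code, response.text)
```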
 
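The disclaimer in the first hunk refers to the Hugging Face Inference API's cold start: the first request after the model has idled can return an error while FLAN-T5-XXL loads (roughly 20 seconds). One way to avoid surfacing that error, sketched below under the assumption that the app calls the public Inference API endpoint with `requests`, is to pass the documented `options.wait_for_model` flag so the request blocks until the model is ready; the token and helper name are placeholders, not taken from `app.py`.

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-xxl"
HF_TOKEN = "hf_xxx"  # placeholder; supply your own Inference API token

def query_flan_t5_xxl(prompt: str):
    """Query the hosted FLAN-T5-XXL model, waiting out the cold start."""
    headers = {"Authorization": f"Bearer {HF_TOKEN}"}
    payload = {
        "inputs": prompt,
        # Block until the model is loaded instead of receiving an error
        # response during the ~20 second cold start.
        "options": {"wait_for_model": True},
    }
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()

print(query_flan_t5_xxl("Generate a question about black holes."))
```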