alexpap committed
Commit 0b646bf
1 parent: 24f7c24

Update app.py

Files changed (1):
  1. app.py +2 -2
app.py CHANGED
@@ -136,7 +136,7 @@ elif menu == "Training":
     st.markdown('''
     To train a QA-NLU model on the data we created, we use the `run_squad.py` script from [huggingface](https://github.com/huggingface/transformers/blob/master/examples/legacy/question-answering/run_squad.py) and a SQuAD-trained QA model as our base. As an example, we can use `deepset/roberta-base-squad2` model from [here](https://huggingface.co/deepset/roberta-base-squad2) (assuming 8 GPUs are present):
 
-    ```
+    ````
     mkdir models
 
     python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
@@ -158,7 +158,7 @@ elif menu == "Training":
     --save_steps 100000 \
     --gradient_accumulation_steps 8 \
     --seed $RANDOM
-    ```
+    ````
     ''')
 
 elif menu == "Evaluation":
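
The hunks above only show the start and end of the training command; the flags in between (lines 143-157 of app.py) are not part of this commit. As a rough sketch of what a full `run_squad.py` launch of this shape typically looks like — the middle flag values, data file names, and output path below are illustrative assumptions, not taken from this commit:

```shell
# Sketch of an 8-GPU run_squad.py launch (legacy script from
# huggingface/transformers). Only mkdir, the launch line, and the last
# three flags appear in the commit; everything else is an assumed example.
mkdir -p models

# qanlu_train.json and models/qanlu are hypothetical placeholder paths.
python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
    --model_type roberta \
    --model_name_or_path deepset/roberta-base-squad2 \
    --do_train \
    --train_file qanlu_train.json \
    --output_dir models/qanlu \
    --per_gpu_train_batch_size 4 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --save_steps 100000 \
    --gradient_accumulation_steps 8 \
    --seed $RANDOM
```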