gmurro committed
Commit c81451e (1 parent: b2502b0)

Update README.md

Files changed (1): README.md (+7 −2)
@@ -23,6 +23,7 @@ This model is intended to be used for automatic podcast summarisation. Given the
 ## Training and evaluation data
 In our solution, an extractive module is developed to select salient chunks from the transcript, which serve as the input to an abstractive summarizer.
 An extensive pre-processing of the creator-provided descriptions is performed, selecting a subset of the corpus that is suitable for training the supervised model.
+
 We split the filtered dataset into train/dev sets of 69,336/7,705 episodes.
 The test set consists of 1,027 episodes. Only 1,025 have been used, because two of them did not contain an episode description.
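The description filter mentioned above (dropping the two test episodes without a creator-provided description) can be sketched in plain Python. The episode records and field names here are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical episode records; in the real corpus these would come from the
# dataset's metadata (the "description" field name is an assumption).
episodes = [
    {"id": "ep1", "transcript": "...", "description": "A chat about jazz."},
    {"id": "ep2", "transcript": "...", "description": ""},    # missing description
    {"id": "ep3", "transcript": "...", "description": None},  # missing description
    {"id": "ep4", "transcript": "...", "description": "Weekly news roundup."},
]

def has_description(episode):
    """Keep only episodes whose creator-provided description is non-empty."""
    desc = episode.get("description")
    return bool(desc and desc.strip())

# Episodes without a description have no summarization target, so they are
# dropped (2 of the 1,027 test episodes in this model's setup).
usable = [ep for ep in episodes if has_description(ep)]
print(len(usable))  # 2 of the 4 toy records survive
```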
@@ -41,8 +42,12 @@ print(summary[0]['summary_text'])
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
-- training_precision: float32
+- ```python
+  optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
+  ```
+- ```python
+  training_precision: float32
+  ```
 
 ### Training results
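For reference, AdamWeightDecay is Adam with decoupled weight decay: the decay term is applied to the weights directly rather than folded into the gradient. A minimal pure-Python sketch of one update step with the hyperparameters from the hunk above (an illustration of the update rule, not the transformers implementation):

```python
import math

# Hyperparameter values from the optimizer dictionary above.
LR, BETA1, BETA2, EPS, WD = 2e-05, 0.9, 0.999, 1e-07, 0.01

def adamw_step(w, g, m, v, t):
    """One AdamW update for a single scalar weight.

    m, v are the first/second moment estimates, t the 1-based step count.
    Weight decay is decoupled: subtracted from w directly, not added to g.
    """
    m = BETA1 * m + (1 - BETA1) * g
    v = BETA2 * v + (1 - BETA2) * g * g
    m_hat = m / (1 - BETA1 ** t)  # bias correction
    v_hat = v / (1 - BETA2 ** t)
    w = w - LR * (m_hat / (math.sqrt(v_hat) + EPS) + WD * w)
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adamw_step(w, g=0.5, m=m, v=v, t=1)
print(w)  # slightly below 1.0 after one step
```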