DunnBC22 committed
Commit
d15c8a3
1 Parent(s): bcf19ca

Update README.md

Files changed (1):
  1. README.md (+9 -7)
README.md CHANGED
```diff
@@ -7,11 +7,11 @@ metrics:
 model-index:
 - name: led-base-16384-text_summarization_data
   results: []
+language:
+- en
+pipeline_tag: summarization
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # led-base-16384-text_summarization_data
 
 This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset.
@@ -25,15 +25,17 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This is a text summarization model.
+
+For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Text%20Summarization/Text-Summarized%20Data%20-%20Comparison/LED%20-%20Text%20Summarization%20-%204%20Epochs.ipynb
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended to demonstrate my ability to solve a complex problem using technology.
 
 ## Training and evaluation data
 
-More information needed
+Dataset Source: https://www.kaggle.com/datasets/cuitengfeui/textsummarization-data
 
 ## Training procedure
 
@@ -63,4 +65,4 @@ The following hyperparameters were used during training:
 - Transformers 4.26.1
 - Pytorch 1.12.1
 - Datasets 2.9.0
-- Tokenizers 0.12.1
+- Tokenizers 0.12.1
```
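
Since the updated card adds `pipeline_tag: summarization`, a minimal usage sketch follows. It is not part of the commit: the Hub repo ID `DunnBC22/led-base-16384-text_summarization_data` is an assumption inferred from the commit author and model name, and the snippet simply uses the standard `transformers` summarization pipeline.

```python
# Minimal sketch (assumptions noted below), not part of this commit:
# run the fine-tuned LED checkpoint through the transformers summarization pipeline.
# NOTE: the repo ID is inferred from the commit author and model name; replace it
# with the actual Hub ID if the model is hosted under a different namespace.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="DunnBC22/led-base-16384-text_summarization_data",
)

long_document = "Replace this with the long text you want to summarize."

# LED handles long inputs (up to 16384 tokens); max_length/min_length bound the summary length.
result = summarizer(long_document, max_length=128, min_length=32, do_sample=False)
print(result[0]["summary_text"])
```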