Librarian Bot: Add base_model information to model

#3
Files changed (1)
  1. README.md +190 -176
README.md CHANGED
@@ -1,193 +1,206 @@
1
  ---
2
  license:
3
  - bsd-3-clause
4
  - apache-2.0
 
5
  tags:
6
  - generated_from_trainer
7
  - lay summary
8
  - narrative
9
  - biomedical
10
  - long document summary
11
- metrics:
12
- - rouge
13
  datasets:
14
  - pszemraj/scientific_lay_summarisation-elife-norm
15
- language:
16
- - en
17
- library_name: transformers
18
  pipeline_tag: summarization
19
  widget:
20
- - text: >-
21
- large earthquakes along a given fault segment do not occur at random
22
- intervals because it takes time to accumulate the strain energy for the
23
- rupture. The rates at which tectonic plates move and accumulate strain at
24
- their boundaries are approximately uniform. Therefore, in first
25
- approximation, one may expect that large ruptures of the same fault
26
- segment will occur at approximately constant time intervals. If subsequent
27
- main shocks have different amounts of slip across the fault, then the
28
- recurrence time may vary, and the basic idea of periodic mainshocks must
29
- be modified. For great plate boundary ruptures the length and slip often
30
- vary by a factor of 2. Along the southern segment of the San Andreas fault
31
- the recurrence interval is 145 years with variations of several decades.
32
- The smaller the standard deviation of the average recurrence interval, the
33
- more specific could be the long term prediction of a future mainshock.
34
- example_title: earthquakes
35
- - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a ''toolbox'' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5).'
36
- example_title: scientific paper
37
- - text: >-
38
- Is a else or outside the cob and tree written being of early client rope
39
- and you have is for good reasons. On to the ocean in Orange for time. By's
40
- the aggregate we can bed it yet. Why this please pick up on a sort is do
41
- and also M Getoi's nerocos and do rain become you to let so is his brother
42
- is made in use and Mjulia's's the lay major is aging Masastup coin present
43
- sea only of Oosii rooms set to you We do er do we easy this private
44
- oliiishs lonthen might be okay. Good afternoon everybody. Welcome to this
45
- lecture of Computational Statistics. As you can see, I'm not socially my
46
- name is Michael Zelinger. I'm one of the task for this class and you might
47
- have already seen me in the first lecture where I made a quick appearance.
48
- I'm also going to give the tortillas in the last third of this course. So
49
- to give you a little bit about me, I'm a old student here with better
50
- Bulman and my research centres on casual inference applied to biomedical
51
- disasters, so that could be genomics or that could be hospital data. If
52
- any of you is interested in writing a bachelor thesis, a semester paper
53
- may be mastathesis about this topic feel for reach out to me. you have my
54
- name on models and my email address you can find in the directory I'd Be
55
- very happy to talk about it. you do not need to be sure about it, we can
56
- just have a chat. So with that said, let's get on with the lecture.
57
- There's an exciting topic today I'm going to start by sharing some slides
58
- with you and later on during the lecture we'll move to the paper. So bear
59
- with me for a few seconds. Well, the projector is starting up. Okay, so
60
- let's get started. Today's topic is a very important one. It's about a
61
- technique which really forms one of the fundamentals of data science,
62
- machine learning, and any sort of modern statistics. It's called cross
63
- validation. I know you really want to understand this topic I Want you to
64
- understand this and frankly, nobody's gonna leave Professor Mineshousen's
65
- class without understanding cross validation. So to set the stage for
66
- this, I Want to introduce you to the validation problem in computational
67
- statistics. So the problem is the following: You trained a model on
68
- available data. You fitted your model, but you know the training data you
69
- got could always have been different and some data from the environment.
70
- Maybe it's a random process. You do not really know what it is, but you
71
- know that somebody else who gets a different batch of data from the same
72
- environment they would get slightly different training data and you do not
73
- care that your method performs as well. On this training data. you want to
74
- to perform well on other data that you have not seen other data from the
75
- same environment. So in other words, the validation problem is you want to
76
- quantify the performance of your model on data that you have not seen. So
77
- how is this even possible? How could you possibly measure the performance
78
- on data that you do not know The solution to? This is the following
79
- realization is that given that you have a bunch of data, you were in
80
- charge. You get to control how much that your model sees. It works in the
81
- following way: You can hide data firms model. Let's say you have a
82
- training data set which is a bunch of doubtless so X eyes are the features
83
- those are typically hide and national vector. It's got more than one
84
- dimension for sure. And the why why eyes. Those are the labels for
85
- supervised learning. As you've seen before, it's the same set up as we
86
- have in regression. And so you have this training data and now you choose
87
- that you only use some of those data to fit your model. You're not going
88
- to use everything, you only use some of it the other part you hide from
89
- your model. And then you can use this hidden data to do validation from
90
- the point of you of your model. This hidden data is complete by unseen. In
91
- other words, we solve our problem of validation.
92
- example_title: transcribed audio - lecture
93
- - text: >-
94
- Transformer-based models have shown to be very useful for many NLP tasks.
95
- However, a major limitation of transformers-based models is its O(n^2)O(n
96
- 2) time & memory complexity (where nn is sequence length). Hence, it's
97
- computationally very expensive to apply transformer-based models on long
98
- sequences n > 512n>512. Several recent papers, e.g. Longformer, Performer,
99
- Reformer, Clustered attention try to remedy this problem by approximating
100
- the full attention matrix. You can checkout 🤗's recent blog post in case
101
- you are unfamiliar with these models.
102
-
103
- BigBird (introduced in paper) is one of such recent models to address this
104
- issue. BigBird relies on block sparse attention instead of normal
105
- attention (i.e. BERT's attention) and can handle sequences up to a length
106
- of 4096 at a much lower computational cost compared to BERT. It has
107
- achieved SOTA on various tasks involving very long sequences such as long
108
- documents summarization, question-answering with long contexts.
109
-
110
- BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of
111
- this post is to give the reader an in-depth understanding of big bird
112
- implementation & ease one's life in using BigBird with 🤗Transformers.
113
- But, before going into more depth, it is important to remember that the
114
- BigBird's attention is an approximation of BERT's full attention and
115
- therefore does not strive to be better than BERT's full attention, but
116
- rather to be more efficient. It simply allows to apply transformer-based
117
- models to much longer sequences since BERT's quadratic memory requirement
118
- quickly becomes unbearable. Simply put, if we would have compute & ∞
119
- time, BERT's attention would be preferred over block sparse attention
120
- (which we are going to discuss in this post).
121
-
122
- If you wonder why we need more compute when working with longer sequences,
123
- this blog post is just right for you!
124
-
125
- Some of the main questions one might have when working with standard
126
- BERT-like attention include:
127
-
128
- Do all tokens really have to attend to all other tokens? Why not compute
129
- attention only over important tokens? How to decide what tokens are
130
- important? How to attend to just a few tokens in a very efficient way? In
131
- this blog post, we will try to answer those questions.
132
-
133
- What tokens should be attended to? We will give a practical example of how
134
- attention works by considering the sentence 'BigBird is now available in
135
- HuggingFace for extractive question answering'. In BERT-like attention,
136
- every word would simply attend to all other tokens.
137
-
138
- Let's think about a sensible choice of key tokens that a queried token
139
- actually only should attend to by writing some pseudo-code. Will will
140
- assume that the token available is queried and build a sensible list of
141
- key tokens to attend to.
142
-
143
- >>> # let's consider following sentence as an example >>> example =
144
- ['BigBird', 'is', 'now', 'available', 'in', 'HuggingFace', 'for',
145
- 'extractive', 'question', 'answering']
146
-
147
- >>> # further let's assume, we're trying to understand the representation
148
- of 'available' i.e. >>> query_token = 'available' >>> # We will initialize
149
- an empty `set` and fill up the tokens of our interest as we proceed in
150
- this section. >>> key_tokens = [] # => currently 'available' token doesn't
151
- have anything to attend Nearby tokens should be important because, in a
152
- sentence (sequence of words), the current word is highly dependent on
153
- neighboring past & future tokens. This intuition is the idea behind the
154
- concept of sliding attention.
155
- example_title: bigbird blog intro
156
- - text: >-
157
- To be fair, you have to have a very high IQ to understand Rick and Morty.
158
- The humour is extremely subtle, and without a solid grasp of theoretical
159
- physics most of the jokes will go over a typical viewer's head. There's
160
- also Rick's nihilistic outlook, which is deftly woven into his
161
- characterisation- his personal philosophy draws heavily from Narodnaya
162
- Volya literature, for instance. The fans understand this stuff; they have
163
- the intellectual capacity to truly appreciate the depths of these jokes,
164
- to realise that they're not just funny- they say something deep about
165
- LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
166
- of course they wouldn't appreciate, for instance, the humour in Rick's
167
- existential catchphrase 'Wubba Lubba Dub Dub,' which itself is a cryptic
168
- reference to Turgenev's Russian epic Fathers and Sons. I'm smirking right
169
- now just imagining one of those addlepated simpletons scratching their
170
- heads in confusion as Dan Harmon's genius wit unfolds itself on their
171
- television screens. What fools.. how I pity them. 😂
172
-
173
- And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot
174
- see it. It's for the ladies' eyes only- and even then they have to
175
- demonstrate that they're within 5 IQ points of my own (preferably lower)
176
- beforehand. Nothin personnel kid 😎
177
- example_title: Richard & Mortimer
178
- - text: >-
179
- The tower is 324 metres (1,063 ft) tall, about the same height as an
180
- 81-storey building, and the tallest structure in Paris. Its base is
181
- square, measuring 125 metres (410 ft) on each side. During its
182
- construction, the Eiffel Tower surpassed the Washington Monument to become
183
- the tallest man-made structure in the world, a title it held for 41 years
184
- until the Chrysler Building in New York City was finished in 1930. It was
185
- the first structure to reach a height of 300 metres. Due to the addition
186
- of a broadcasting aerial at the top of the tower in 1957, it is now taller
187
- than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters,
188
- the Eiffel Tower is the second tallest free-standing structure in France
189
- after the Millau Viaduct.
190
- example_title: eiffel
191
  parameters:
192
  max_length: 64
193
  min_length: 8
@@ -197,6 +210,7 @@ parameters:
197
  encoder_no_repeat_ngram_size: 4
198
  length_penalty: 0.4
199
  num_beams: 4
 
200
  ---
201
 
202
 
 
1
  ---
2
+ language:
3
+ - en
4
  license:
5
  - bsd-3-clause
6
  - apache-2.0
7
+ library_name: transformers
8
  tags:
9
  - generated_from_trainer
10
  - lay summary
11
  - narrative
12
  - biomedical
13
  - long document summary
 
 
14
  datasets:
15
  - pszemraj/scientific_lay_summarisation-elife-norm
16
+ metrics:
17
+ - rouge
 
18
  pipeline_tag: summarization
19
  widget:
20
+ - text: large earthquakes along a given fault segment do not occur at random intervals
21
+ because it takes time to accumulate the strain energy for the rupture. The rates
22
+ at which tectonic plates move and accumulate strain at their boundaries are approximately
23
+ uniform. Therefore, in first approximation, one may expect that large ruptures
24
+ of the same fault segment will occur at approximately constant time intervals.
25
+ If subsequent main shocks have different amounts of slip across the fault, then
26
+ the recurrence time may vary, and the basic idea of periodic mainshocks must be
27
+ modified. For great plate boundary ruptures the length and slip often vary by
28
+ a factor of 2. Along the southern segment of the San Andreas fault the recurrence
29
+ interval is 145 years with variations of several decades. The smaller the standard
30
+ deviation of the average recurrence interval, the more specific could be the long
31
+ term prediction of a future mainshock.
32
+ example_title: earthquakes
33
+ - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
34
+ are fed into a neural network that predicts values in the reconstructed domain.
35
+ Then, this domain is mapped to the sensor domain where sensor measurements are
36
+ available as supervision. Class and Section Problems Addressed Generalization
37
+ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
38
+ Representations (Section 3) Computation & memory efficiency, representation capacity,
39
+ editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
40
+ 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
41
+ 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
42
+ in the neural field toolbox each addresses problems that arise in learning, inference,
43
+ and control. (Section 3). We can supervise reconstruction via differentiable forward
44
+ maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
45
+ Section 4) With appropriate network architecture choices, we can overcome neural
46
+ network spectral biases (blurriness) and efficiently compute derivatives and integrals
47
+ (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
48
+ and to achieve editable representations (Section 6). Collectively, these classes
49
+ constitute a ''toolbox'' of techniques to help solve problems with neural fields
50
+ There are three components in a conditional neural field: (1) An encoder or inference
51
+ function that outputs the conditioning latent variable 2 given an observation
52
+ 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
53
+ a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
54
+ parameters O: Y(z) = O; (3) The neural field itself $. The encoder finds the
55
+ most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
56
+ the inverse conditional probability to find the most probable 0 given Z: arg-
57
+ max P(Olz). We discuss different encoding schemes with different optimality guarantees
58
+ (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
59
+ mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
60
+ a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
61
+ prior over the sur- face in its reconstruction domain to generalize to the partial
62
+ observations. A neural network expresses a prior via the function space of its
63
+ architecture and parameters 0, and generalization is influenced by the inductive
64
+ bias of this function space (Section 5).'
65
+ example_title: scientific paper
66
+ - text: 'Is a else or outside the cob and tree written being of early client rope
67
+ and you have is for good reasons. On to the ocean in Orange for time. By''s the
68
+ aggregate we can bed it yet. Why this please pick up on a sort is do and also
69
+ M Getoi''s nerocos and do rain become you to let so is his brother is made in
70
+ use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
71
+ Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
72
+ be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
73
+ As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
74
+ task for this class and you might have already seen me in the first lecture where
75
+ I made a quick appearance. I''m also going to give the tortillas in the last third
76
+ of this course. So to give you a little bit about me, I''m a old student here
77
+ with better Bulman and my research centres on casual inference applied to biomedical
78
+ disasters, so that could be genomics or that could be hospital data. If any of
79
+ you is interested in writing a bachelor thesis, a semester paper may be mastathesis
80
+ about this topic feel for reach out to me. you have my name on models and my email
81
+ address you can find in the directory I''d Be very happy to talk about it. you
82
+ do not need to be sure about it, we can just have a chat. So with that said, let''s
83
+ get on with the lecture. There''s an exciting topic today I''m going to start
84
+ by sharing some slides with you and later on during the lecture we''ll move to
85
+ the paper. So bear with me for a few seconds. Well, the projector is starting
86
+ up. Okay, so let''s get started. Today''s topic is a very important one. It''s
87
+ about a technique which really forms one of the fundamentals of data science,
88
+ machine learning, and any sort of modern statistics. It''s called cross validation.
89
+ I know you really want to understand this topic I Want you to understand this
90
+ and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
91
+ cross validation. So to set the stage for this, I Want to introduce you to the
92
+ validation problem in computational statistics. So the problem is the following:
93
+ You trained a model on available data. You fitted your model, but you know the
94
+ training data you got could always have been different and some data from the
95
+ environment. Maybe it''s a random process. You do not really know what it is,
96
+ but you know that somebody else who gets a different batch of data from the same
97
+ environment they would get slightly different training data and you do not care
98
+ that your method performs as well. On this training data. you want to to perform
99
+ well on other data that you have not seen other data from the same environment.
100
+ So in other words, the validation problem is you want to quantify the performance
101
+ of your model on data that you have not seen. So how is this even possible? How
102
+ could you possibly measure the performance on data that you do not know The solution
103
+ to? This is the following realization is that given that you have a bunch of data,
104
+ you were in charge. You get to control how much that your model sees. It works
105
+ in the following way: You can hide data firms model. Let''s say you have a training
106
+ data set which is a bunch of doubtless so X eyes are the features those are typically
107
+ hide and national vector. It''s got more than one dimension for sure. And the
108
+ why why eyes. Those are the labels for supervised learning. As you''ve seen before,
109
+ it''s the same set up as we have in regression. And so you have this training
110
+ data and now you choose that you only use some of those data to fit your model.
111
+ You''re not going to use everything, you only use some of it the other part you
112
+ hide from your model. And then you can use this hidden data to do validation from
113
+ the point of you of your model. This hidden data is complete by unseen. In other
114
+ words, we solve our problem of validation.'
115
+ example_title: transcribed audio - lecture
116
+ - text: 'Transformer-based models have shown to be very useful for many NLP tasks.
117
+ However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
118
+ & memory complexity (where nn is sequence length). Hence, it''s computationally
119
+ very expensive to apply transformer-based models on long sequences n > 512n>512.
120
+ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
121
+ try to remedy this problem by approximating the full attention matrix. You can
122
+ checkout 🤗''s recent blog post in case you are unfamiliar with these models.
123
+
124
+ BigBird (introduced in paper) is one of such recent models to address this issue.
125
+ BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
126
+ attention) and can handle sequences up to a length of 4096 at a much lower computational
127
+ cost compared to BERT. It has achieved SOTA on various tasks involving very long
128
+ sequences such as long documents summarization, question-answering with long contexts.
129
+
130
+ BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
131
+ post is to give the reader an in-depth understanding of big bird implementation
132
+ & ease one''s life in using BigBird with 🤗Transformers. But, before going into
133
+ more depth, it is important to remember that the BigBird''s attention is an approximation
134
+ of BERT''s full attention and therefore does not strive to be better than BERT''s
135
+ full attention, but rather to be more efficient. It simply allows to apply transformer-based
136
+ models to much longer sequences since BERT''s quadratic memory requirement quickly
137
+ becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
138
+ would be preferred over block sparse attention (which we are going to discuss
139
+ in this post).
140
+
141
+ If you wonder why we need more compute when working with longer sequences, this
142
+ blog post is just right for you!
143
+
144
+ Some of the main questions one might have when working with standard BERT-like
145
+ attention include:
146
+
147
+ Do all tokens really have to attend to all other tokens? Why not compute attention
148
+ only over important tokens? How to decide what tokens are important? How to attend
149
+ to just a few tokens in a very efficient way? In this blog post, we will try to
150
+ answer those questions.
151
+
152
+ What tokens should be attended to? We will give a practical example of how attention
153
+ works by considering the sentence ''BigBird is now available in HuggingFace for
154
+ extractive question answering''. In BERT-like attention, every word would simply
155
+ attend to all other tokens.
156
+
157
+ Let''s think about a sensible choice of key tokens that a queried token actually
158
+ only should attend to by writing some pseudo-code. Will will assume that the token
159
+ available is queried and build a sensible list of key tokens to attend to.
160
+
161
+ >>> # let''s consider following sentence as an example >>> example = [''BigBird'',
162
+ ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
163
+ ''question'', ''answering'']
164
+
165
+ >>> # further let''s assume, we''re trying to understand the representation of
166
+ ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
167
+ empty `set` and fill up the tokens of our interest as we proceed in this section.
168
+ >>> key_tokens = [] # => currently ''available'' token doesn''t have anything
169
+ to attend Nearby tokens should be important because, in a sentence (sequence of
170
+ words), the current word is highly dependent on neighboring past & future tokens.
171
+ This intuition is the idea behind the concept of sliding attention.'
172
+ example_title: bigbird blog intro
173
+ - text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
174
+ The humour is extremely subtle, and without a solid grasp of theoretical physics
175
+ most of the jokes will go over a typical viewer''s head. There''s also Rick''s
176
+ nihilistic outlook, which is deftly woven into his characterisation- his personal
177
+ philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
178
+ understand this stuff; they have the intellectual capacity to truly appreciate
179
+ the depths of these jokes, to realise that they''re not just funny- they say something
180
+ deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
181
+ of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
182
+ catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
183
+ Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
184
+ addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
185
+ wit unfolds itself on their television screens. What fools.. how I pity them.
186
+ 😂
187
+
188
+ And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
189
+ It''s for the ladies'' eyes only- and even then they have to demonstrate that
190
+ they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
191
+ kid 😎'
192
+ example_title: Richard & Mortimer
193
+ - text: The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey
194
+ building, and the tallest structure in Paris. Its base is square, measuring 125
195
+ metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed
196
+ the Washington Monument to become the tallest man-made structure in the world,
197
+ a title it held for 41 years until the Chrysler Building in New York City was
198
+ finished in 1930. It was the first structure to reach a height of 300 metres.
199
+ Due to the addition of a broadcasting aerial at the top of the tower in 1957,
200
+ it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters,
201
+ the Eiffel Tower is the second tallest free-standing structure in France after
202
+ the Millau Viaduct.
203
+ example_title: eiffel
204
  parameters:
205
  max_length: 64
206
  min_length: 8
 
210
  encoder_no_repeat_ngram_size: 4
211
  length_penalty: 0.4
212
  num_beams: 4
213
+ base_model: pszemraj/long-t5-tglobal-base-16384-book-summary
214
  ---
215
 
216
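
For readers of this card, the two pieces of metadata this PR touches most directly are the new `base_model` field and the widget `parameters` block. The sketch below is illustrative only and is not part of the diff: it shows how those generation parameters would typically be passed to a `transformers` summarization pipeline. The repository id is a placeholder, since the diff does not name the repository this README belongs to; `pszemraj/long-t5-tglobal-base-16384-book-summary` is the declared base checkpoint, not the fine-tuned model itself.

```python
# Illustrative sketch only, not part of this PR.
# "your-namespace/your-elife-lay-summarizer" is a hypothetical placeholder for the
# repository this README belongs to; its declared base checkpoint is
# pszemraj/long-t5-tglobal-base-16384-book-summary (the field added by this PR).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="your-namespace/your-elife-lay-summarizer",  # hypothetical repo id
)

text = (
    "large earthquakes along a given fault segment do not occur at random "
    "intervals because it takes time to accumulate the strain energy for the rupture. ..."
)

# The keyword arguments below mirror the widget `parameters` block in the card's YAML
# and are forwarded to the underlying `generate()` call.
summary = summarizer(
    text,
    max_length=64,
    min_length=8,
    encoder_no_repeat_ngram_size=4,
    length_penalty=0.4,
    num_beams=4,
)
print(summary[0]["summary_text"])
```

The `base_model` field itself is declarative metadata: it lets the Hub link this card to the checkpoint the model was fine-tuned from and does not change inference behaviour.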