Librarian Bot: Add base_model information to model
#3
by librarian-bot - opened

README.md CHANGED
@@ -1,9 +1,15 @@
 ---
+language:
+- en
 license: apache-2.0
+library_name: transformers
+tags:
+- lay summaries
+- paper summaries
+- biology
+- medical
 datasets:
 - pszemraj/scientific_lay_summarisation-plos-norm
-language:
-- en
 widget:
 - text: large earthquakes along a given fault segment do not occur at random intervals
     because it takes time to accumulate the strain energy for the rupture. The rates
@@ -18,39 +24,38 @@ widget:
     deviation of the average recurrence interval, the more specific could be the long
     term prediction of a future mainshock.
   example_title: earthquakes
-- text: "… [old double-quoted 'scientific paper' widget text, collapsed in the diff view] …
-    \ this function space (Section 5)."
+- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
+    are fed into a neural network that predicts values in the reconstructed domain.
+    Then, this domain is mapped to the sensor domain where sensor measurements are
+    available as supervision. Class and Section Problems Addressed Generalization
+    (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
+    Representations (Section 3) Computation & memory efficiency, representation capacity,
+    editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
+    5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
+    6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
+    in the neural field toolbox each addresses problems that arise in learning, inference,
+    and control. (Section 3). We can supervise reconstruction via differentiable forward
+    maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
+    Section 4) With appropriate network architecture choices, we can overcome neural
+    network spectral biases (blurriness) and efficiently compute derivatives and integrals
+    (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
+    and to achieve editable representations (Section 6). Collectively, these classes
+    constitute a ''toolbox'' of techniques to help solve problems with neural fields
+    There are three components in a conditional neural field: (1) An encoder or inference
+    function € that outputs the conditioning latent variable 2 given an observation
+    0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
+    a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
+    parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
+    most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
+    the inverse conditional probability to find the most probable 0 given Z: arg-
+    max P(Olz). We discuss different encoding schemes with different optimality guarantees
+    (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
+    mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
+    a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
+    prior over the sur- face in its reconstruction domain to generalize to the partial
+    observations. A neural network expresses a prior via the function space of its
+    architecture and parameters 0, and generalization is influenced by the inductive
+    bias of this function space (Section 5).'
   example_title: scientific paper
 - text: 'Is a else or outside the cob and tree written being of early client rope
     and you have is for good reasons. On to the ocean in Orange for time. By''s the
@@ -102,70 +107,93 @@ widget:
     the point of you of your model. This hidden data is complete by unseen. In other
     words, we solve our problem of validation.'
   example_title: transcribed audio - lecture
-- text: [old 'bigbird blog intro' widget text, collapsed in the diff view]
+- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
+    However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
+    & memory complexity (where nn is sequence length). Hence, it''s computationally
+    very expensive to apply transformer-based models on long sequences n > 512n>512.
+    Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
+    try to remedy this problem by approximating the full attention matrix. You can
+    checkout 🤗''s recent blog post in case you are unfamiliar with these models.
+
+    BigBird (introduced in paper) is one of such recent models to address this issue.
+    BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
+    attention) and can handle sequences up to a length of 4096 at a much lower computational
+    cost compared to BERT. It has achieved SOTA on various tasks involving very long
+    sequences such as long documents summarization, question-answering with long contexts.
+
+    BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
+    post is to give the reader an in-depth understanding of big bird implementation
+    & ease one''s life in using BigBird with 🤗Transformers. But, before going into
+    more depth, it is important to remember that the BigBird''s attention is an approximation
+    of BERT''s full attention and therefore does not strive to be better than BERT''s
+    full attention, but rather to be more efficient. It simply allows to apply transformer-based
+    models to much longer sequences since BERT''s quadratic memory requirement quickly
+    becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
+    would be preferred over block sparse attention (which we are going to discuss
+    in this post).
+
+    If you wonder why we need more compute when working with longer sequences, this
+    blog post is just right for you!
+
+    Some of the main questions one might have when working with standard BERT-like
+    attention include:
+
+    Do all tokens really have to attend to all other tokens? Why not compute attention
+    only over important tokens? How to decide what tokens are important? How to attend
+    to just a few tokens in a very efficient way? In this blog post, we will try to
+    answer those questions.
+
+    What tokens should be attended to? We will give a practical example of how attention
+    works by considering the sentence ''BigBird is now available in HuggingFace for
+    extractive question answering''. In BERT-like attention, every word would simply
+    attend to all other tokens.
+
+    Let''s think about a sensible choice of key tokens that a queried token actually
+    only should attend to by writing some pseudo-code. Will will assume that the token
+    available is queried and build a sensible list of key tokens to attend to.
+
+    >>> # let''s consider following sentence as an example >>> example = [''BigBird'',
+    ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
+    ''question'', ''answering'']
+
+    >>> # further let''s assume, we''re trying to understand the representation of
+    ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
+    empty `set` and fill up the tokens of our interest as we proceed in this section.
+    >>> key_tokens = [] # => currently ''available'' token doesn''t have anything
+    to attend Nearby tokens should be important because, in a sentence (sequence of
+    words), the current word is highly dependent on neighboring past & future tokens.
+    This intuition is the idea behind the concept of sliding attention.'
   example_title: bigbird blog intro
-- text: [old 'Richard & Mortimer' widget text, collapsed in the diff view]
+- text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
+    The humour is extremely subtle, and without a solid grasp of theoretical physics
+    most of the jokes will go over a typical viewer''s head. There''s also Rick''s
+    nihilistic outlook, which is deftly woven into his characterisation- his personal
+    philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
+    understand this stuff; they have the intellectual capacity to truly appreciate
+    the depths of these jokes, to realise that they''re not just funny- they say something
+    deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
+    of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
+    catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
+    Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
+    addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
+    wit unfolds itself on their television screens. What fools.. how I pity them.
+    😂
+
+    And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
+    It''s for the ladies'' eyes only- and even then they have to demonstrate that
+    they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
+    kid 😎'
   example_title: Richard & Mortimer
-- text: [old 'eiffel' widget text, collapsed in the diff view]
+- text: The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey
+    building, and the tallest structure in Paris. Its base is square, measuring 125
+    metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed
+    the Washington Monument to become the tallest man-made structure in the world,
+    a title it held for 41 years until the Chrysler Building in New York City was
+    finished in 1930. It was the first structure to reach a height of 300 metres.
+    Due to the addition of a broadcasting aerial at the top of the tower in 1957,
+    it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters,
+    the Eiffel Tower is the second tallest free-standing structure in France after
+    the Millau Viaduct.
   example_title: eiffel
 parameters:
   max_length: 64
@@ -177,12 +205,7 @@ parameters:
   length_penalty: 0.4
   num_beams: 4
 pipeline_tag: summarization
-tags:
-- lay summaries
-- paper summaries
-- biology
-- medical
-library_name: transformers
+base_model: google/long-t5-tglobal-base
 ---

 # long-t5-tglobal-base-sci-simplify
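Note on what the added key does: `base_model` is a plain entry in the card's YAML front matter (it powers the "finetuned from" link on the model page), so once this PR is merged it is readable programmatically. A minimal sketch with `huggingface_hub`; the repo id `pszemraj/long-t5-tglobal-base-sci-simplify` is inferred from the card title and dataset author, not stated anywhere in this diff:

```python
# Minimal sketch: read the card metadata this PR edits.
# Assumption: the repo id below is inferred from the card title, not from the diff.
from huggingface_hub import ModelCard

card = ModelCard.load("pszemraj/long-t5-tglobal-base-sci-simplify")
print(card.data.base_model)    # expected after merge: google/long-t5-tglobal-base
print(card.data.pipeline_tag)  # summarization
```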
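The `parameters` block in the front matter appears to configure the hosted inference widget, and the same keys map directly onto generation kwargs. A hedged sketch of reproducing the widget locally (same repo-id assumption as above; the diff elides the parameter lines between `max_length` and `length_penalty`, so only the visible settings are passed):

```python
# Sketch: run the summarizer with the generation settings visible in the
# card's `parameters` block (max_length, length_penalty, num_beams).
# Assumption: the repo id is inferred from the card title.
from transformers import pipeline

summarizer = pipeline("summarization", model="pszemraj/long-t5-tglobal-base-sci-simplify")

text = (
    "large earthquakes along a given fault segment do not occur at random "
    "intervals because it takes time to accumulate the strain energy for the rupture."
)
print(summarizer(text, max_length=64, length_penalty=0.4, num_beams=4)[0]["summary_text"])
```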