10 Machine Translation and Encoder-Decoder Models

10.5 Beam Search
Let's see how the scoring works in detail, scoring each node by its log probability. Recall from Eq. 10.10 that we can use the chain rule of probability to break down p(y|x) into the product of the probability of each word given its prior context, which we can turn into a sum of logs (for an output string of length t):
score(y) = log P(y|x)
         = log ( P(y_1|x) P(y_2|y_1, x) P(y_3|y_1, y_2, x) ... P(y_t|y_1, ..., y_{t-1}, x) )
         = \sum_{i=1}^{t} log P(y_i | y_1, ..., y_{i-1}, x)                     (10.19)

Thus at each step, to compute the probability of a partial translation, we simply add the log probability of the prefix translation so far to the log probability of generating the next token. Fig. 10.13 shows the scoring for the example sentence shown in Fig. 10.12, using some simple made-up probabilities. Log probabilities are negative or 0, and the max of two log probabilities is the one that is greater (closer to 0).

Figure 10.13 Scoring for beam search decoding with a beam width of k = 2. We maintain the log probability of each hypothesis in the beam by incrementally adding the log probability of generating each next token. Only the top k paths are extended to the next step.

Fig. 10.14 gives the algorithm:

function BEAMDECODE(c, beam width) returns best paths
    y0, h0 ← 0
    path ← ()
    complete paths ← ()
    state ← (c, y0, h0, path)                       ; initial state
    frontier ← ⟨state⟩                              ; initial frontier
    while frontier contains incomplete paths and beam width > 0
        extended frontier ← ⟨⟩
        for each state ∈ frontier do
            y ← DECODE(state)
            for each word i ∈ Vocabulary do
                successor ← NEWSTATE(state, i, y_i)
                extended frontier ← ADDTOBEAM(successor, extended frontier, beam width)
        for each state in extended frontier do
            if state is complete do
                complete paths ← APPEND(complete paths, state)
                extended frontier ← REMOVE(extended frontier, state)
                beam width ← beam width − 1
        frontier ← extended frontier
    return complete paths

One problem arises from the fact that the completed hypotheses may have different lengths. Because models generally assign lower probabilities to longer strings, a naive algorithm would also choose shorter strings for y. This was not an issue during the earlier steps of decoding; because of the breadth-first nature of beam search, all the hypotheses being compared had the same length. The usual solution to this is to apply some form of length normalization to each of the hypotheses, for example simply dividing the negative log probability by the number of words:
score(y) = -log P(y|x) = \frac{1}{T} \sum_{i=1}^{T} -log P(y_i | y_1, ..., y_{i-1}, x)        (10.20)
Beam search is common in large production MT systems, generally with beam widths k between 5 and 10. What do we do with the resulting k hypotheses? In some cases, all we need from our MT algorithm is the single best hypothesis, so we can return that. In other cases our downstream application might want to look at all k hypotheses, so we can pass them all (or a subset) to the downstream application with their respective scores.
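To make the scoring and length normalization concrete, here is a minimal, self-contained Python sketch of beam search over log probabilities. It is not the pseudocode of Fig. 10.14: the toy next_token_logprobs function, the tiny vocabulary, and all other names are invented stand-ins for a real decoder network, and the made-up probabilities are purely for illustration.

import math

# Toy "model": given a prefix, return log probabilities for the next token.
# In a real system this would be the softmax over the decoder's output.
VOCAB = ["the", "green", "witch", "arrived", "</s>"]

def next_token_logprobs(prefix):
    probs = {w: 0.1 for w in VOCAB}
    if not prefix:
        probs["the"] = 0.6
    elif prefix[-1] == "the":
        probs["green"] = 0.4; probs["witch"] = 0.3
    elif prefix[-1] == "witch":
        probs["arrived"] = 0.5
    elif prefix[-1] == "arrived":
        probs["</s>"] = 0.6
    total = sum(probs.values())
    return {w: math.log(p / total) for w, p in probs.items()}

def beam_decode(beam_width=2, max_len=6):
    # Each hypothesis is (tokens, total log probability), as in Eq. 10.19.
    frontier = [([], 0.0)]
    completed = []
    for _ in range(max_len):
        extended = []
        for tokens, score in frontier:
            for w, lp in next_token_logprobs(tokens).items():
                extended.append((tokens + [w], score + lp))
        extended.sort(key=lambda h: h[1], reverse=True)
        frontier = []
        for tokens, score in extended[:beam_width * 2]:
            if tokens[-1] == "</s>":
                completed.append((tokens, score))   # hypothesis is complete
            else:
                frontier.append((tokens, score))
        frontier = frontier[:beam_width]            # keep only the k best
        if not frontier:
            break
    if not completed:                               # fall back if nothing ended in </s>
        completed = frontier
    # Length-normalize (Eq. 10.20) before choosing the single best hypothesis.
    return max(completed, key=lambda h: h[1] / len(h[0]))

print(beam_decode())

Running the sketch returns the single best length-normalized hypothesis; keeping the whole completed list instead would give the k hypotheses mentioned above.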
10.6 Encoder-Decoder with Transformers
The encoder-decoder architecture can also be implemented using transformers (rather than RNNs/LSTMs) as the component modules. At a high level, the architecture, sketched in Fig. 10.15, is quite similar to what we saw for RNNs. It consists of an encoder that takes the source language input words X = x_1, ..., x_T and maps them to an output representation H^enc = h_1, ..., h_T, usually via N = 6 stacked encoder blocks. The decoder, just like the encoder-decoder RNN, is essentially a conditional language model that attends to the encoder representation and generates the target words one by one, at each timestep conditioning on the source sentence and the previously generated target language words.

Figure 10.15 The encoder-decoder architecture using transformer components. The encoder uses the transformer blocks we saw in Chapter 9, while the decoder uses a more powerful block with an extra encoder-decoder attention layer. The final output of the encoder H^enc = h_1, ..., h_T is used to form the K and V inputs to the cross-attention layer in each decoder block.
But the components of the architecture differ somewhat from the RNN and also from the transformer block we've seen so far. First, in order to attend to the source language, the transformer blocks in the decoder have an extra cross-attention layer. Recall that the transformer block of Chapter 9 consists of a self-attention layer that attends to the input from the previous layer, followed by layer norm, a feedforward layer, and another layer norm. The decoder transformer block includes an extra layer with a special kind of attention, cross-attention (also sometimes called encoder-decoder attention or source attention). Cross-attention has the same form as the multi-headed self-attention in a normal transformer block, except that while the queries as usual come from the previous layer of the decoder, the keys and values come from the output of the encoder.
That is, the final output of the encoder H^enc = h_1, ..., h_T is multiplied by the cross-attention layer's key weights W^K and value weights W^V, but the output from the prior decoder layer H^{dec[i-1]} is multiplied by the cross-attention layer's query weights W^Q:

Figure 10.16 The transformer block for the encoder and the decoder. Each decoder block has an extra cross-attention layer, which uses the output of the final encoder layer H^enc = h_1, ..., h_T to produce its key and value vectors.
Q = H^{dec[i-1]} W^Q ;   K = H^{enc} W^K ;   V = H^{enc} W^V

CrossAttention(Q, K, V) = softmax( (Q K^T) / \sqrt{d_k} ) V
The cross-attention thus allows the decoder to attend to each of the source language words as projected into the final output representations of the encoder. The other attention layer in each decoder block, the self-attention layer, is the same causal (left-to-right) self-attention that we saw in Chapter 9. The self-attention in the encoder, however, is allowed to look ahead at the entire source language text.
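To make the data flow concrete, here is a small numpy sketch of a single cross-attention head. It assumes one head and random stand-ins for the weight matrices and for the encoder and prior-decoder-layer states; a real model would of course use trained parameters.

import numpy as np

rng = np.random.default_rng(0)
d = 8                        # model dimension (toy size)
T_src, T_tgt = 5, 3          # source and target lengths

H_enc = rng.normal(size=(T_src, d))        # final encoder output h_1..h_T
H_dec_prev = rng.normal(size=(T_tgt, d))   # output of the prior decoder layer

W_Q = rng.normal(size=(d, d))              # query weights (applied to decoder)
W_K = rng.normal(size=(d, d))              # key weights   (applied to encoder)
W_V = rng.normal(size=(d, d))              # value weights (applied to encoder)

Q = H_dec_prev @ W_Q         # queries come from the decoder
K = H_enc @ W_K              # keys come from the encoder
V = H_enc @ W_V              # values come from the encoder

scores = Q @ K.T / np.sqrt(d)                             # (T_tgt, T_src)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over source positions
cross_attn_out = weights @ V                              # (T_tgt, d)

print(cross_attn_out.shape)  # each target position now summarizes the source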
In training, just as for RNN encoder-decoders, we use teacher forcing, and train autoregressively, at each time step predicting the next token in the target language, using cross-entropy loss.
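The following is a small numpy sketch of what teacher-forced cross-entropy looks like at the level of a single sentence: the decoder is fed the gold prefix at each step, and the loss is the average negative log probability of each gold next token. The vocabulary, the gold sentence, and the random logits standing in for the decoder's outputs are all invented for illustration.

import numpy as np

vocab = {"<s>": 0, "the": 1, "witch": 2, "arrived": 3, "</s>": 4}
gold = ["<s>", "the", "witch", "arrived", "</s>"]

rng = np.random.default_rng(1)
logits = rng.normal(size=(len(gold) - 1, len(vocab)))   # one prediction per step

def cross_entropy(logits, target_ids):
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = probs / probs.sum(axis=-1, keepdims=True)
    # Negative log probability of the gold next token, averaged over timesteps.
    return -np.mean(np.log(probs[np.arange(len(target_ids)), target_ids]))

inputs  = gold[:-1]                       # what the decoder sees (the gold prefix)
targets = [vocab[w] for w in gold[1:]]    # what it must predict at each step
print(cross_entropy(logits, targets))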
10.7 Some practical details on building MT systems
Machine translation systems generally use a fixed vocabulary. A common way to generate this vocabulary is with the BPE or wordpiece algorithms sketched in Chapter 2. Generally a shared vocabulary is used for the source and target languages, which makes it easy to copy tokens (like names) from source to target, so we build the wordpiece/BPE lexicon on a corpus that contains both source and target language data. Wordpieces use a special symbol at the beginning of each token; here's a resulting tokenization from the Google MT system (Wu et al., 2016):
words:      Jet makers feud over seat width with big orders at stake
wordpieces: _J et _makers _fe ud _over _seat _width _with _big _orders _at _stake
We gave the BPE algorithm in detail in Chapter 2; here are more details on the wordpiece algorithm, which is given a training corpus and a desired vocabulary size and proceeds as follows:
1. Initialize the wordpiece lexicon with characters (for example a subset of Unicode characters, collapsing all the remaining characters to a special unknown character token).

2. Repeat until there are V wordpieces:
(a) Train an n-gram language model on the training corpus, using the current set of wordpieces.

(b) Consider the set of possible new wordpieces made by concatenating two wordpieces from the current lexicon. Choose the one new wordpiece that most increases the language model probability of the training corpus.
A vocabulary of 8K to 32K word pieces is commonly used.
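The following Python sketch shows the shape of the loop in steps 1-2 above. It is deliberately simplified: it uses a unigram language model in place of a full n-gram LM and recomputes corpus likelihood for every candidate merge by brute force. The function names and the tiny toy corpus are invented, and real wordpiece/BPE implementations are far more efficient.

import math
from collections import Counter

def unigram_loglik(tokens):
    """Log-likelihood of the corpus under a unigram LM over wordpieces."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return sum(c * math.log(c / total) for c in counts.values())

def apply_merge(corpus_words, pair):
    """Replace every adjacent occurrence of `pair` inside each word."""
    merged = pair[0] + pair[1]
    new_corpus = []
    for word in corpus_words:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(merged); i += 2
            else:
                out.append(word[i]); i += 1
        new_corpus.append(out)
    return new_corpus

def train_wordpiece(words, vocab_size):
    # Step 1: initialize the lexicon with single characters.
    corpus = [list(w) for w in words]
    vocab = {ch for w in corpus for ch in w}
    # Step 2: repeat until we have vocab_size wordpieces.
    while len(vocab) < vocab_size:
        pairs = Counter((w[i], w[i + 1]) for w in corpus for i in range(len(w) - 1))
        if not pairs:
            break
        base = unigram_loglik([t for w in corpus for t in w])
        # Steps (a)/(b): pick the concatenation that most increases LM probability.
        best_pair, best_gain = None, float("-inf")
        for pair in pairs:
            candidate = apply_merge(corpus, pair)
            gain = unigram_loglik([t for w in candidate for t in w]) - base
            if gain > best_gain:
                best_pair, best_gain = pair, gain
        corpus = apply_merge(corpus, best_pair)
        vocab.add(best_pair[0] + best_pair[1])
    return vocab

print(train_wordpiece(["low", "lower", "newest", "widest"] * 5, vocab_size=15))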
10.7.2 MT corpora
Machine translation models are trained on a parallel corpus, sometimes called a bitext, a text that appears in two (or more) languages. Large numbers of parallel corpora are available. Some are governmental; for example, the Europarl corpus (Koehn, 2005) was extracted from the proceedings of the European Parliament.
Standard training corpora for MT come as aligned pairs of sentences. When creating new corpora, for example for underresourced languages or new domains, these sentence alignments must be created. Fig. 10.17 gives a sample hypothetical sentence alignment. Given two documents that are translations of each other, we generally need two steps to produce sentence alignments:
• a cost function that takes a span of source sentences and a span of target sentences and returns a score measuring how likely these spans are to be translations.
• an alignment algorithm that takes these scores to find a good alignment between the documents.
Since it is possible to induce multilingual sentence embeddings (Artetxe and Schwenk, 2019), cosine similarity of such embeddings provides a natural scoring function (Schwenk, 2018). Thompson and Koehn (2019) give the following cost function between two sentences or spans x, y from the source and target documents respectively:

c(x, y) = \frac{(1 - cos(x, y)) nSents(x) nSents(y)}{\sum_{s=1}^{S} (1 - cos(x, y_s)) + \sum_{s=1}^{S} (1 - cos(x_s, y))}

where nSents() gives the number of sentences (this biases the metric toward many alignments of single sentences instead of aligning very large spans). The denominator helps to normalize the similarities: x_1, ..., x_S and y_1, ..., y_S are randomly selected sentences sampled from the respective documents. Usually dynamic programming is used as the alignment algorithm (Gale and Church, 1993), in a simple extension of the minimum edit distance algorithm we introduced in Chapter 2.
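The following is a sketch of this span-scoring cost, assuming we already have a multilingual sentence encoder. Here embed() is a random placeholder, not a real encoder (a production system would use something like LASER or LaBSE embeddings), and the sampled sentences x_1..x_S, y_1..y_S are drawn from the two documents as in the formula above.

import numpy as np

rng = np.random.default_rng(0)

def embed(span_sentences):
    """Placeholder for a multilingual sentence encoder: one unit vector per span."""
    vecs = [rng.normal(size=16) for _ in span_sentences]   # stand-in vectors
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def cos(u, v):
    return float(np.dot(u, v))          # embeddings are already unit length

def alignment_cost(x_span, y_span, src_doc, tgt_doc, n_samples=5):
    """Cost of aligning a span of source sentences with a span of target sentences."""
    x, y = embed(x_span), embed(y_span)
    numerator = (1 - cos(x, y)) * len(x_span) * len(y_span)
    # Normalizer: dissimilarity against randomly sampled sentences from each document.
    xs = [embed([s]) for s in rng.choice(src_doc, size=n_samples)]
    ys = [embed([s]) for s in rng.choice(tgt_doc, size=n_samples)]
    denom = sum(1 - cos(x, ys_k) for ys_k in ys) + sum(1 - cos(xs_k, y) for xs_k in xs)
    return numerator / denom

src_doc = ["source sentence %d" % i for i in range(10)]
tgt_doc = ["target sentence %d" % i for i in range(10)]
print(alignment_cost(src_doc[0:2], tgt_doc[0:1], src_doc, tgt_doc))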
Figure 10.17 A sample alignment between sentences in English and French, with sentences extracted from Antoine de Saint-Exupery's Le Petit Prince and a hypothetical translation. Sentence alignment takes sentences e_1, ..., e_n and f_1, ..., f_n and finds minimal sets of sentences that are translations of each other, including single sentence mappings like (e_1, f_1), (e_4, f_3), (e_5, f_4), (e_6, f_6) as well as 2-1 alignments (e_2/e_3, f_2), (e_7/e_8, f_7), and null alignments (f_5).
Finally, it's helpful to do some corpus cleanup by removing noisy sentence pairs. This can involve handwritten rules to remove low-precision pairs (for example removing sentences that are too long, too short, have different URLs, or even pairs that are too similar, suggesting that they were copies rather than translations). Or pairs can be ranked by their multilingual embedding cosine score and low-scoring pairs discarded.
10.7.3 Backtranslation
We're often short of data for training MT models, since parallel corpora may be limited for particular languages or domains. However, often we can find a large monolingual corpus, to add to the smaller parallel corpora that are available.
Backtranslation is a way of making use of monolingual corpora in the target language by creating synthetic bitexts. In backtranslation, we train an intermediate target-to-source MT system on the small bitext to translate the monolingual target data to the source language. Now we can add this synthetic bitext (natural target sentences, aligned with MT-produced source sentences) to our training data, and retrain our source-to-target MT model. For example suppose we want to translate from Navajo to English but only have a small Navajo-English bitext, although of course we can find lots of monolingual English data. We use the small bitext to build an MT engine going the other way (from English to Navajo). Once we translate the monolingual English text to Navajo, we can add this synthetic Navajo/English bitext to our training data.

Backtranslation has various parameters. One is how we generate the backtranslated data; we can run the decoder in greedy inference, or use beam search. Or we can do sampling, or Monte Carlo search. In Monte Carlo decoding, at each timestep, instead of always generating the word with the highest softmax probability, we roll a weighted die, and use it to choose the next word according to its softmax probability. This works just like the sampling algorithm we saw in Chapter 3 for generating random sentences from n-gram language models. Imagine there are only 4 words and the softmax probability distribution at time t is (the: 0.6, green: 0.2, a: 0.1, witch: 0.1). We roll a weighted die, with the 4 sides weighted 0.6, 0.2, 0.1, and 0.1, and choose the word based on which side comes up. Another parameter is the ratio of backtranslated data to natural bitext data; we can choose to upsample the bitext data (include multiple copies of each sentence).
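The "weighted die" can be implemented directly by sampling from the softmax distribution. Here is a tiny sketch using the made-up four-word distribution from the example above; in a real decoder the probabilities would come from the model at each timestep.

import numpy as np

rng = np.random.default_rng(0)

# The made-up distribution from the text at one decoding timestep.
vocab = ["the", "green", "a", "witch"]
probs = [0.6, 0.2, 0.1, 0.1]

# Greedy decoding would always pick the argmax ("the"); sampling rolls the
# weighted die instead, choosing each word in proportion to its probability.
samples = rng.choice(vocab, size=10, p=probs)
print(list(samples))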
In general backtranslation works surprisingly well; one estimate suggests that a system trained on backtranslated text gets about 2/3 of the gain it would get from training on the same amount of natural bitext (Edunov et al., 2018).
10.8 MT Evaluation
Translations are evaluated along two dimensions:
1. adequacy: how well the translation captures the exact meaning of the source sentence. Sometimes called faithfulness or fidelity.
2. fluency: how fluent the translation is in the target language (is it grammatical, clear, readable, natural).
Using humans to evaluate is most accurate, but automatic metrics are also used for convenience.
10.8.1 Using Human Raters to Evaluate MT
The most accurate evaluations use human raters, such as online crowdworkers, to evaluate each translation along the two dimensions. For example, along the dimension of fluency, we can ask how intelligible, how clear, how readable, or how natural the MT output (the target text) is. We can give the raters a scale, for example from 1 (totally unintelligible) to 5 (totally intelligible), or 1 to 100, and ask them to rate each sentence or paragraph of the MT output.
We can do the same thing to judge the second dimension, adequacy, using raters to assign scores on a scale. If we have bilingual raters, we can give them the source sentence and a proposed target sentence, and ask them to rate, on a 5-point or 100-point scale, how much of the information in the source was preserved in the target. If we only have monolingual raters but we have a good human translation of the source text, we can give the monolingual raters the human reference translation and a target machine translation and again rate how much information is preserved. An alternative is to do ranking: give the raters a pair of candidate translations, and ask them which one they prefer.
Training of human raters (who are often online crowdworkers) is essential; raters without translation expertise find it difficult to separate fluency and adequacy, and so training includes examples carefully distinguishing these. Raters often disagree (source sentences may be ambiguous, raters will have different world knowledge, raters may apply scales differently). It is therefore common to remove outlier raters, and (if we use a fine-grained enough scale) to normalize raters by subtracting the mean from their scores and dividing by the variance.
10.8.2 Automatic Evaluation
While humans produce the best evaluations of machine translation output, running a human evaluation can be time consuming and expensive. For this reason automatic metrics are often used. Automatic metrics are less accurate than human evaluation, but can help test potential system improvements, and even be used as an automatic loss function for training. In this section we introduce two families of such metrics, those based on character- or word-overlap and those based on embedding similarity.
Automatic evaluation by Character Overlap: chrF
The simplest and most robust metric for MT evaluation is called chrF, which stands for character F-score (Popović, 2015). chrF (along with many other earlier related metrics like BLEU, METEOR, TER, and others) is based on a simple intuition derived from the pioneering work of Miller and Beebe-Center (1956): a good machine translation will tend to contain characters and words that occur in a human translation of the same sentence. Consider a test set from a parallel corpus, in which each source sentence has both a gold human target translation and a candidate MT translation we'd like to evaluate. The chrF metric ranks each MT target sentence by a function of the number of character n-gram overlaps with the human translation.
Given the hypothesis and the reference, chrF is given a parameter k indicating the length of character n-grams to be considered, and computes the average of the k precisions (unigram precision, bigram, and so on) and the average of the k recalls (unigram recall, bigram recall, etc.):
chrP: the percentage of character 1-grams, 2-grams, ..., k-grams in the hypothesis that occur in the reference, averaged.
chrR: the percentage of character 1-grams, 2-grams, ..., k-grams in the reference that occur in the hypothesis, averaged.
The metric then computes an F-score by combining chrP and chrR using a weighting parameter β. It is common to set β = 2, thus weighing recall twice as much as precision:
chrFβ = (1 + β^2) · \frac{chrP · chrR}{β^2 · chrP + chrR}        (10.24)
For β = 2, that would be:
chrF2 = \frac{5 · chrP · chrR}{4 · chrP + chrR}
For example, consider two hypotheses that we'd like to score against the reference translation witness for the past. Here are the hypotheses along with chrF values computed using parameters k = β = 2 (in real examples, k would be a higher number like 6):

REF:  witness for the past,
HYP1: witness of the past,     chrF2,2 = .86
HYP2: past witness             chrF2,2 = .62
Let's see how we computed that chrF value for HYP1 (we'll leave the computation of the chrF value for HYP2 as an exercise for the reader). First, chrF ignores spaces, so we'll remove them from both the reference and hypothesis:

REF:  witnessforthepast,    (18 unigrams, 17 bigrams)
HYP1: witnessofthepast,     (17 unigrams, 16 bigrams)
Next let's see how many unigrams and bigrams match between the reference and hypothesis:

unigrams that match: w i t n e s s f o t h e p a s t ,          (17 unigrams)
bigrams that match:  wi it tn ne es ss th he ep pa as st t,     (13 bigrams)
We use that to compute the unigram and bigram precisions and recalls:

unigram P: 17/17 = 1        unigram R: 17/18 = .944
bigram P:  13/16 = .813     bigram R:  13/17 = .765
Finally we average to get chrP and chrR, and compute the F-score:

chrP = (17/17 + 13/16)/2 = .906
chrR = (17/18 + 13/17)/2 = .855
chrF2,2 = \frac{5 · chrP · chrR}{4 · chrP + chrR} = .86

chrF is simple, robust, and correlates very well with human judgments in many languages (Kocmi et al., 2021). There are various alternative overlap metrics. For example, before the development of chrF, it was common to use a word-based overlap metric called BLEU (for BiLingual Evaluation Understudy), which is purely precision-based rather than combining precision and recall (Papineni et al., 2002). The BLEU score for a corpus of candidate translation sentences is a function of the n-gram word precision over all the sentences combined with a brevity penalty computed over the corpus as a whole. Because BLEU is a word-based metric, it is very sensitive to word tokenization, making it difficult to compare across situations, and it doesn't work as well in languages with complex morphology.
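The worked example above can be reproduced in a few lines of Python. This is a simplified sketch of chrF (character n-grams only, no smoothing and no word n-grams), not an official implementation such as the one in sacrebleu.

from collections import Counter

def char_ngrams(text, n):
    s = text.replace(" ", "")            # chrF ignores spaces
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(reference, hypothesis, k=2, beta=2.0):
    precisions, recalls = [], []
    for n in range(1, k + 1):
        ref, hyp = char_ngrams(reference, n), char_ngrams(hypothesis, n)
        overlap = sum((ref & hyp).values())          # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    chrP = sum(precisions) / k                       # average of the k precisions
    chrR = sum(recalls) / k                          # average of the k recalls
    if chrP + chrR == 0:
        return 0.0
    return (1 + beta**2) * chrP * chrR / (beta**2 * chrP + chrR)

print(round(chrf("witness for the past,", "witness of the past,"), 2))  # 0.86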
Statistical Significance Testing for MT evals
Character or word overlap-based metrics like chrF (or BLEU, etc.) are mainly used to compare two systems, with the goal of answering questions like: did the new algorithm we just invented improve our MT system? To know if the difference between the chrF scores of two MT systems is a significant difference, we use the paired bootstrap test, or the similar randomization test.
To get a confidence interval on a single chrF score using the bootstrap test, recall from Section 4.9 that we take our test set (or devset) and create thousands of pseudo-testsets by repeatedly sampling with replacement from the original test set. We now compute the chrF score of each of the pseudo-testsets. If we drop the top 2.5% and bottom 2.5% of the scores, the remaining scores will give us the 95% confidence interval for the chrF score of our system.

To compare two MT systems A and B, we draw the same set of pseudo-testsets, and compute the chrF scores for each of them. We then compute the percentage of pseudo-testsets in which A has a higher chrF score than B.
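Here is a sketch of the paired bootstrap comparison just described, under the simplifying assumption that we already have a sentence-level quality score (for example a sentence-level chrF) for each test sentence under systems A and B; the random numbers below are stand-ins for real scores, with A constructed to be slightly better.

import numpy as np

rng = np.random.default_rng(0)

n = 500
scores_A = rng.uniform(0.4, 0.9, size=n)                 # stand-in scores for system A
scores_B = scores_A - rng.normal(0.01, 0.05, size=n)     # system B, slightly worse on average

def bootstrap_compare(scores_A, scores_B, n_boot=10_000):
    n = len(scores_A)
    wins, deltas = 0, []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                 # resample sentences with replacement
        a, b = scores_A[idx].mean(), scores_B[idx].mean()
        deltas.append(a - b)
        wins += a > b
    deltas = np.sort(deltas)
    ci = (deltas[int(0.025 * n_boot)], deltas[int(0.975 * n_boot)])   # drop top/bottom 2.5%
    return wins / n_boot, ci

p_better, ci = bootstrap_compare(scores_A, scores_B)
print(f"A beats B in {p_better:.1%} of pseudo-testsets; 95% CI on the gap: {ci}")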
chrF: Limitations
While automatic character and word-overlap metrics like chrF or BLEU are useful, they have important limitations. chrF is very local: a large phrase that is moved around might barely change the chrF score at all, and chrF can't evaluate cross-sentence properties of a document like its discourse coherence (Chapter 22). chrF and similar automatic metrics also do poorly at comparing very different kinds of systems, such as comparing human-aided translation against machine translation, or different machine translation architectures against each other (Callison-Burch et al., 2006). Instead, automatic overlap metrics like chrF are most appropriate when evaluating changes to a single system.
10.8.3 Automatic Evaluation: Embedding-Based Methods
The chrF metric is based on measuring the exact character n-grams a human reference and candidate machine translation have in common. However, this criterion is overly strict, since a good translation may use alternate words or paraphrases. A solution first pioneered in early metrics like METEOR (Banerjee and Lavie, 2005) was to allow synonyms to match between the reference x and candidate x̃. More recent metrics use BERT or other embeddings to implement this intuition.
For example, in some situations we might have datasets that have human assessments of translation quality. Such datasets consist of tuples (x, x̃, r), where x = (x_1, ..., x_n) is a reference translation, x̃ = (x̃_1, ..., x̃_m) is a candidate machine translation, and r ∈ R is a human rating that expresses the quality of x̃ with respect to x. Given such data, algorithms like COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020) train a predictor on the human-labeled datasets, for example by passing x and x̃ through a version of BERT (trained with extra pretraining, and then fine-tuned on the human-labeled sentences), followed by a linear layer that is trained to predict r. The output of such models correlates highly with human labels.
In other cases, however, we don't have such human-labeled datasets. In that case we can measure the similarity of x and x̃ by the similarity of their embeddings. The BERTSCORE algorithm (Zhang et al., 2020), shown in Fig. 10.18, for example, passes the reference x and the candidate x̃ through BERT, computing a BERT embedding for each token x_i and x̃_j. Each pair of tokens (x_i, x̃_j) is scored by its cosine similarity:

(x_i · x̃_j) / (|x_i| |x̃_j|)
Each token in x is matched to a token in x̃ to compute recall, and each token in x̃ is matched to a token in x to compute precision (with each token greedily matched to the most similar token in the corresponding sentence). BERTSCORE provides precision and recall (and hence F_1):
R_BERT = \frac{1}{|x|} \sum_{x_i ∈ x} max_{x̃_j ∈ x̃} x_i · x̃_j

P_BERT = \frac{1}{|x̃|} \sum_{x̃_j ∈ x̃} max_{x_i ∈ x} x_i · x̃_j

F_BERT = \frac{2 · P_BERT · R_BERT}{P_BERT + R_BERT}
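The greedy-matching computation itself is small; here is a sketch with random unit vectors standing in for the contextual BERT embeddings of the reference and candidate tokens. The real BERTSCORE additionally supports IDF weighting and score rescaling, which are omitted here.

import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-ins for the BERT embeddings of reference tokens x_1..x_n and candidate
# tokens x̃_1..x̃_m (in reality these come from running BERT on each sentence).
X = unit(rng.normal(size=(6, 32)))        # reference: 6 tokens
X_tilde = unit(rng.normal(size=(5, 32)))  # candidate: 5 tokens

sim = X @ X_tilde.T        # cosine similarities, since the rows are unit length

# Greedy matching: each reference token takes its best candidate match (recall),
# each candidate token takes its best reference match (precision).
R_bert = sim.max(axis=1).mean()
P_bert = sim.max(axis=0).mean()
F_bert = 2 * P_bert * R_bert / (P_bert + R_bert)
print(P_bert, R_bert, F_bert)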
10.9 Bias and Ethical Issues
Machine translation raises many of the same ethical issues that we've discussed in earlier chapters. For example, consider MT systems translating from Hungarian (which has the gender-neutral pronoun ő) or Spanish (which often drops pronouns) into English (in which pronouns are obligatory, and have grammatical gender). When translating a reference to a person described without specified gender, MT systems often default to male gender (Schiebinger 2014, Prates et al. 2019). And MT systems often assign gender according to cultural stereotypes of the sort we saw in Section 6.11. Fig. 10.19 shows examples from Prates et al. (2019), in which the Hungarian gender-neutral "ő is a nurse" is translated with she, but the gender-neutral "ő is a CEO" is translated with he. Prates et al. (2019) find that these stereotypes can't completely be accounted for by gender bias in US labor statistics, because the biases are amplified by MT systems, with pronouns being mapped to male or female gender with a probability higher than if the mapping was based on actual labor employment statistics.
Hungarian (gender neutral) source    English MT output
ő egy ápoló                          she is a nurse
ő egy tudós                          he is a scientist
ő egy mérnök                         he is an engineer
ő egy pék                            he is a baker
ő egy tanár                          she is a teacher
ő egy esküvőszervező                 she is a wedding organizer
ő egy vezérigazgató                  he is a CEO

Figure 10.19 When translating from gender-neutral languages like Hungarian into English, current MT systems interpret people from traditionally male-dominated occupations as male, and traditionally female-dominated occupations as female (Prates et al., 2019).
Similarly, a recent challenge set, the WinoMT dataset (Stanovsky et al., 2019), shows that MT systems perform worse when they are asked to translate sentences that describe people with non-stereotypical gender roles, like "The doctor asked the nurse to help her in the operation".
Many ethical questions in MT require further research. One open problem is developing metrics for knowing what our systems don't know. This is because MT systems can be used in urgent situations where human translators may be unavailable or delayed: in medical domains, to help translate when patients and doctors don't speak the same language, or in legal domains, to help judges or lawyers communicate with witnesses or defendants. In order to 'do no harm', systems need ways to assign confidence values to candidate translations, so they can abstain from giving incorrect translations that may cause harm.
Another is the need for low-resource algorithms that can translate to and from all the world's languages, the vast majority of which do not have large parallel training texts available. This problem is exacerbated by the tendency of many MT approaches to focus on the case where one of the languages is English (Anastasopoulos and Neubig, 2020). ∀ et al. (2020) propose a participatory design process to encourage content creators, curators, and language technologists who speak these low-resourced languages to participate in developing MT algorithms. They provide online groups, mentoring, and infrastructure, and report on a case study on developing MT algorithms for low-resource African languages.
10.10 Summary
Machine translation is one of the most widely used applications of NLP, and the encoder-decoder model, first developed for MT, is a key tool that has applications throughout NLP.
• Languages have divergences, both structural and lexical, that make translation difficult.
• The linguistic field of typology investigates some of these differences; languages can be classified by their position along typological dimensions like whether verbs precede their objects.
• Encoder-decoder networks (either for RNNs or transformers) are composed of an encoder network that takes an input sequence and creates a contextualized representation of it, the context. This context representation is then passed to a decoder which generates a task-specific output sequence.
• The attention mechanism in RNNs, and cross-attention in transformers, allows the decoder to view information from all the hidden states of the encoder.
• For the decoder, choosing the single most probable token to generate at each step is called greedy decoding.
• In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width.
• Machine translation models are trained on a parallel corpus, sometimes called a bitext, a text that appears in two (or more) languages.
• Backtranslation is a way of making use of monolingual corpora in the target language by running a pilot MT engine backwards to create synthetic bitexts.
• MT is evaluated by measuring a translation's adequacy (how well it captures the meaning of the source sentence) and fluency (how fluent or natural it is in the target language). Human evaluation is the gold standard, but automatic evaluation metrics like chrF, which measure character n-gram overlap with human translations, or more recent metrics based on embedding similarity, are also commonly used.
10.11 Bibliographical and Historical Notes
MT was proposed seriously by the late 1940s, soon after the birth of the computer (Weaver, 1949/1955). In 1954, the first public demonstration of an MT system prototype (Dostert, 1955) led to great excitement in the press (Hutchins, 1997). The next decade saw a great flowering of ideas, prefiguring most subsequent developments. But this work was ahead of its time: implementations were limited by, for example, the fact that pending the development of disks there was no good way to store dictionary information.
As high-quality MT proved elusive (Bar-Hillel, 1960), there grew a consensus on the need for better evaluation and more basic research in the new fields of formal and computational linguistics. This consensus culminated in the famously critical ALPAC (Automatic Language Processing Advisory Committee) report of 1966 (Pierce et al., 1966), which led in the mid-1960s to a dramatic cut in funding for MT in the US. As MT research lost academic respectability, the Association for Machine Translation and Computational Linguistics dropped MT from its name. Some MT developers, however, persevered, and there were early MT systems like Météo, which translated weather forecasts from English to French (Chandioux, 1976), and industrial systems like Systran.
In the early years, the space of MT architectures spanned three general models. In direct translation, the system proceeds word-by-word through the source-language text, translating each word incrementally. Direct translation uses a large bilingual dictionary, each of whose entries is a small program with the job of translating one word. In transfer approaches, we first parse the input text and then apply rules to transform the source-language parse into a target language parse. We then generate the target language sentence from the parse tree. In interlingua approaches, we analyze the source language text into some abstract meaning representation, called an interlingua. We then generate into the target language from this interlingual representation. A common way to visualize these three early approaches was the Vauquois triangle shown in Fig. 10.20. The triangle shows the
increasing depth of analysis required (on both the analysis and generation end) as we move from the direct approach through transfer approaches to interlingual approaches. In addition, it shows the decreasing amount of transfer knowledge needed as we move up the triangle, from huge amounts of transfer at the direct level (almost all knowledge is transfer knowledge for each word) through transfer (transfer rules only for parse trees or thematic roles) to interlingua (no specific transfer knowledge). We can view the encoder-decoder network as an interlingual approach, with attention acting as an integration of direct and transfer, allowing words or their representations to be directly accessed by the decoder.

Statistical methods began to be applied around 1990, enabled first by the development of large bilingual corpora like the Hansard corpus of the proceedings of the Canadian Parliament, which are kept in both French and English, and then by the growth of the Web. Early on, a number of researchers showed that it was possible to extract pairs of aligned sentences from bilingual corpora, using words or simple cues like sentence length (Kay and Röscheisen 1988, Gale and Church 1991, Gale and Church 1993, Kay and Röscheisen 1993).
At the same time, the IBM group, drawing directly on the noisy channel model for speech recognition, proposed two related paradigms for statistical MT. These include the generative algorithms that became known as IBM Models 1 through 5, implemented in the Candide system. The algorithms (except for the decoder) were published in full detail, encouraged by the US government which had partially funded the work. Automatic evaluation metrics were initially word-based: BLEU (Papineni et al., 2002), NIST (Doddington, 2002), TER (Translation Error Rate) (Snover et al., 2006), Precision and Recall (Turian et al., 2003), and METEOR (Banerjee and Lavie, 2005); character n-gram overlap methods like chrF (Popović, 2015) came later. More recent evaluation work, echoing the ALPAC report, has emphasized the importance of careful statistical methodology and the use of human evaluation (Kocmi et al., 2021; Marie et al., 2021).
The early history of MT is surveyed in Hutchins 1986 and 1997; Nirenburg et al. (2002) collects early readings. See Croft (1990) or Comrie (1989) for introductions to linguistic typology.
11 Transfer Learning with Pretrained Language Models and Contextual Embeddings
"How much do we know at any time? Much more, or so I believe, than we know we know." Agatha Christie, The Moving Finger
Fluent speakers bring an enormous amount of knowledge to bear during comprehension and production of language. This knowledge is embodied in many forms, perhaps most obviously in the vocabulary. That is, in the rich representations associated with the words we know, including their grammatical function, meaning, real-world reference, and pragmatic function. This makes the vocabulary a useful lens to explore the acquisition of knowledge from text, by both people and machines.
Estimates of the size of adult vocabularies vary widely both within and across languages. For example, estimates of the vocabulary size of young adult speakers of American English range from 30,000 to 100,000 depending on the resources used to make the estimate and the definition of what it means to know a word. What is agreed upon is that the vast majority of words that mature speakers use in their dayto-day interactions are acquired early in life through spoken interactions in context with care givers and peers, usually well before the start of formal schooling. This active vocabulary is extremely limited compared to the size of the adult vocabulary (usually on the order of 2000 words for young speakers) and is quite stable, with very few additional words learned via casual conversation beyond this early stage. Obviously, this leaves a very large number of words to be acquired by some other means.
A simple consequence of these facts is that children have to learn about 7 to 10 words a day, every single day, to arrive at observed vocabulary levels by the time they are 20 years of age. And indeed empirical estimates of vocabulary growth in late elementary through high school are consistent with this rate. How do children achieve this rate of vocabulary growth given their daily experiences during this period? We know that most of this growth is not happening through direct vocabulary instruction in school since these methods are largely ineffective, and are not deployed at a rate that would result in the reliable acquisition of words at the required rate.
The most likely remaining explanation is that the bulk of this knowledge acquisition happens as a by-product of reading. Research into the average amount of time children spend reading, and the lexical diversity of the texts they read, indicate that it is possible to achieve the desired rate. But the mechanism behind this rate of learning must be remarkable indeed, since at some points during learning the rate of vocabulary growth exceeds the rate at which new words are appearing to the learner!
Many of these facts have motivated approaches to word learning based on the distributional hypothesis, introduced in Chapter 6. This is the idea that something about what we're loosely calling word meanings can be learned even without any grounding in the real world, solely based on the content of the texts we've encountered over our lives. This knowledge is based on the complex association of words with the words they co-occur with (and with the words that those words occur with).
The crucial insight of the distributional hypothesis is that the knowledge that we acquire through this process can be brought to bear during language processing long after its initial acquisition in novel contexts. We saw in Chapter 6 that embeddings (static word representations) can be learned from text and then employed for other purposes like measuring word similarity or studying meaning change over time.
In this chapter, we expand on this idea in two large ways. First, we'll introduce the idea of contextual embeddings: representations for words in context. The methods of Chapter 6 like word2vec or GloVe learned a single vector embedding for each unique word w in the vocabulary. By contrast, with contextual embeddings, such as those learned by popular methods like BERT (Devlin et al., 2019) or GPT (Radford et al., 2019) or their descendants, each word w will be represented by a different vector each time it appears in a different context.
Second, we'll introduce in this chapter the idea of pretraining and fine-tuning. We call pretraining the process of learning some sort of representation of meaning for words or sentences by processing very large amounts of text. We'll call these pretrained models pretrained language models, since they can take the form of the transformer language models we introduced in Chapter 9. We call fine-tuning the process of taking the representations from these pretrained models, and further training the model, often via an added neural net classifier, to perform some downstream task like named entity tagging or question answering or coreference. The intuition is that the pretraining phase learns a language model that instantiates rich representations of word meaning, which thus enables the model to more easily learn ('be fine-tuned to') the requirements of a downstream language understanding task. The pretrain-finetune paradigm is an instance of what is called transfer learning in machine learning: the method of acquiring knowledge from one task or domain, and then applying it (transferring it) to solve a new task.

Of course, adding grounding from vision or from real-world interaction into pretrained models can help build even more powerful models, but even text alone is remarkably useful, and we will limit our attention here to purely textual models.

There are two common paradigms for pretrained language models. One is the causal or left-to-right transformer model we introduced in Chapter 9. In this chapter we'll introduce a second paradigm, called the bidirectional transformer encoder, and the method of masked language modeling, introduced with the BERT model (Devlin et al., 2019), that allows the model to see entire texts at a time, including both the right and left context.
Finally, we'll show how the contextual embeddings from these pretrained language models can be used to transfer the knowledge embodied in these models to novel applications via fine-tuning. Indeed, in later chapters we'll see pretrained language models fine-tuned to tasks from parsing to question answering, from information extraction to semantic parsing.
11.1 Bidirectional Transformer Encoders
Let's begin by introducing the bidirectional transformer encoder that underlies models like BERT and its descendants like RoBERTa (Liu et al., 2019) or SpanBERT (Joshi et al., 2020). In Chapter 9 we explored causal (left-to-right) transformers that can serve as the basis for powerful language models, models that can easily be applied to autoregressive generation problems such as contextual generation, summarization and machine translation. However, when applied to sequence classification and labeling problems, causal models have obvious shortcomings since they are based on an incremental, left-to-right processing of their inputs. If we want to assign the correct named-entity tag to each word in a sentence, or other sophisticated linguistic labels like the parse tags we'll introduce in later chapters, we'll want to be able to take into account information from the right context as we process each element. Fig. 11.1, reproduced here from Chapter 9, illustrates the information flow in the purely left-to-right approach of Chapter 9. As can be seen, the hidden state computation at each point in time is based solely on the current and earlier elements of the input, ignoring potentially useful information located to the right of each tagging decision.
Figure 11.1 A causal, backward-looking transformer model like the one in Chapter 9. Each output is computed independently of the others using only information seen earlier in the context.
Figure 11.2 Information flow in a bidirectional self-attention model. In processing each element of the sequence, the model attends to all inputs, both before and after the current one.
Bidirectional encoders overcome this limitation by allowing the self-attention mechanism to range over the entire input, as shown in Fig. 11.2. The focus of bidirectional encoders is on computing contextualized representations of the tokens in an input sequence that are generally useful across a range of downstream applications. Therefore, bidirectional encoders use self-attention to map sequences of input embeddings (x_1, ..., x_n) to sequences of output embeddings of the same length (y_1, ..., y_n), where the output vectors have been contextualized using information from the entire input sequence.
This contextualization is accomplished through the use of the same self-attention mechanism used in causal models. As with these models, the first step is to generate a set of key, query and value embeddings for each element of the input vector x through the use of learned weight matrices W^Q, W^K, and W^V. These weights project each input vector x_i into its specific role as a key, query, or value.
q_i = W^Q x_i ;   k_i = W^K x_i ;   v_i = W^V x_i        (11.1)
The output vector y_i corresponding to each input element x_i is a weighted sum of all the input value vectors v, as follows:
y_i = \sum_{j=1}^{n} α_{ij} v_j        (11.2)
The α weights are computed via a softmax over the comparison scores between every element of an input sequence considered as a query and every other element as a key, where the comparison scores are computed using dot products.
α_{ij} = \frac{exp(score_{ij})}{\sum_{k=1}^{n} exp(score_{ik})}        (11.3)

score_{ij} = q_i · k_j        (11.4)
Since each output vector, y_i, is computed independently, the processing of an entire sequence can be parallelized via matrix operations. The first step is to pack the input embeddings x_i into a matrix X ∈ ℝ^{N×d_h}. That is, each row of X is the embedding of one token of the input. We then multiply X by the key, query, and value weight matrices (all of dimensionality d × d) to produce matrices Q ∈ ℝ^{N×d}, K ∈ ℝ^{N×d}, and V ∈ ℝ^{N×d}, containing all the key, query, and value vectors in a single step.
Q = XW^Q ;   K = XW^K ;   V = XW^V        (11.5)
Given these matrices we can compute all the requisite query-key comparisons simultaneously by multiplying Q and Kᵀ in a single operation. Fig. 11.3 illustrates the result of this operation for an input with length 5.
Finally, we can scale these scores, take the softmax, and then multiply the result by V, resulting in a matrix of shape N × d, where each row contains a contextualized output embedding corresponding to each token in the input.
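Putting Eq. 11.1-11.5 together, here is a compact numpy sketch of a single bidirectional self-attention head: random weights, a single head, and no masking, since every position is allowed to attend to every other position.

import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 16                            # sequence length and model dimension

X = rng.normal(size=(N, d))             # input embeddings, one row per token

W_Q = rng.normal(size=(d, d))
W_K = rng.normal(size=(d, d))
W_V = rng.normal(size=(d, d))

Q, K, V = X @ W_Q, X @ W_K, X @ W_V     # Eq. 11.5: all queries/keys/values at once

scores = Q @ K.T / np.sqrt(d)           # all query-key comparisons (N x N), scaled
# No causal mask: every token attends to the full sequence, left and right.
alpha = np.exp(scores - scores.max(axis=-1, keepdims=True))
alpha = alpha / alpha.sum(axis=-1, keepdims=True)     # Eq. 11.3: softmax over each row

Y = alpha @ V                           # Eq. 11.2: contextualized outputs (N x d)
print(Y.shape)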