n_chapter | chapter | n_section | section | n_subsection | subsection | text |
---|---|---|---|---|---|---|
5 | Logistic Regression | 5.8 | Advanced: Deriving the Gradient Equation | nan | nan | First, we want to know the derivative of the loss function with respect to a single weight $w_j$ (we'll need to compute it for each weight, and for the bias): |
5 | Logistic Regression | 5.8 | Advanced: Deriving the Gradient Equation | nan | nan | $\frac{\partial L_{CE}}{\partial w_j} = \frac{\partial}{\partial w_j} \left( -\left[ y \log \sigma(w \cdot x + b) + (1-y) \log (1 - \sigma(w \cdot x + b)) \right] \right) = -\left[ \frac{\partial}{\partial w_j} y \log \sigma(w \cdot x + b) + \frac{\partial}{\partial w_j} (1-y) \log \left[ 1 - \sigma(w \cdot x + b) \right] \right]$ (5.41) |
5 | Logistic Regression | 5.8 | Advanced: Deriving the Gradient Equation | nan | nan | Next, using the chain rule, and relying on the derivative of log: |
5 | Logistic Regression | 5.8 | Advanced: Deriving the Gradient Equation | nan | nan | $\frac{\partial L_{CE}}{\partial w_j} = -\frac{y}{\sigma(w \cdot x + b)} \frac{\partial}{\partial w_j} \sigma(w \cdot x + b) - \frac{1-y}{1 - \sigma(w \cdot x + b)} \frac{\partial}{\partial w_j} \left[ 1 - \sigma(w \cdot x + b) \right]$ (5.42) |
5 | Logistic Regression | 5.8 | Advanced: Deriving the Gradient Equation | nan | nan | Rearranging terms: |
5 | Logistic Regression | 5.8 | Advanced: Deriving the Gradient Equation | nan | nan | $\frac{\partial L_{CE}}{\partial w_j} = -\left[ \frac{y}{\sigma(w \cdot x + b)} - \frac{1-y}{1 - \sigma(w \cdot x + b)} \right] \frac{\partial}{\partial w_j} \sigma(w \cdot x + b)$ (5.43) |
5 | Logistic Regression | 5.8 | Advanced: Deriving the Gradient Equation | nan | nan | And now plugging in the derivative of the sigmoid, and using the chain rule one more time, we end up with Eq. 5.44: |
5 | Logistic Regression | 5.8 | Advanced: Deriving the Gradient Equation | nan | nan | $\frac{\partial L_{CE}}{\partial w_j} = -\frac{y - \sigma(w \cdot x + b)}{\sigma(w \cdot x + b)\left[1 - \sigma(w \cdot x + b)\right]} \, \sigma(w \cdot x + b)\left[1 - \sigma(w \cdot x + b)\right] \frac{\partial (w \cdot x + b)}{\partial w_j} = -\frac{y - \sigma(w \cdot x + b)}{\sigma(w \cdot x + b)\left[1 - \sigma(w \cdot x + b)\right]} \, \sigma(w \cdot x + b)\left[1 - \sigma(w \cdot x + b)\right] x_j = -\left[ y - \sigma(w \cdot x + b) \right] x_j = \left[ \sigma(w \cdot x + b) - y \right] x_j$ (5.44) |
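To make Eq. 5.44 concrete, here is a minimal numpy sketch (ours, not from the chapter) that computes the gradient $[\sigma(w \cdot x + b) - y]\,x_j$ for a single training example; the variable names and toy numbers are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy_gradient(x, y, w, b):
    """Gradient of the cross-entropy loss for one example (Eq. 5.44).

    Returns dL/dw, whose j-th entry is [sigma(w.x+b) - y] * x_j,
    and dL/db, the same quantity with x_j = 1 for the bias.
    """
    error = sigmoid(np.dot(w, x) + b) - y   # sigma(w.x + b) - y
    return error * x, error

# Tiny usage example with made-up numbers:
x = np.array([3.0, 2.0])   # feature vector for one example
y = 1                      # gold label
w = np.array([0.0, 0.0])   # initial weights
b = 0.0
dw, db = cross_entropy_gradient(x, y, w, b)
print(dw, db)              # [-1.5 -1. ] -0.5, since sigma(0) = 0.5
```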
5 | Logistic Regression | 5.9 | Summary | nan | nan | This chapter introduced the logistic regression model of classification. |
5 | Logistic Regression | 5.9 | Summary | nan | nan | • Logistic regression is a supervised machine learning classifier that extracts real-valued features from the input, multiplies each by a weight, sums them, and passes the sum through a sigmoid function to generate a probability. A threshold is used to make a decision. • Logistic regression can be used with two classes (e.g., positive and negative sentiment) or with multiple classes (multinomial logistic regression, for example for n-ary text classification, part-of-speech labeling, etc.). • Multinomial logistic regression uses the softmax function to compute probabilities. • The weights (vector w and bias b) are learned from a labeled training set via a loss function, such as the cross-entropy loss, that must be minimized. • Minimizing this loss function is a convex optimization problem, and iterative algorithms like gradient descent are used to find the optimal weights. • Regularization is used to avoid overfitting. |
5 | Logistic Regression | 5.9 | Summary | nan | nan | • Logistic regression is also one of the most useful analytic tools, because of its ability to transparently study the importance of individual features. |
5 | Logistic Regression | 5.10 | Bibliographical and Historical Notes | nan | nan | Logistic regression was developed in the field of statistics, where it was used for the analysis of binary data by the 1960s, and was particularly common in medicine (Cox, 1969). Starting in the late 1970s it became widely used in linguistics as one of the formal foundations of the study of linguistic variation (Sankoff and Labov, 1979). Nonetheless, logistic regression didn't become common in natural language processing until the 1990s, when it seems to have appeared simultaneously from two directions. The first source was the neighboring fields of information retrieval and speech processing, both of which had made use of regression, and both of which lent many other statistical techniques to NLP. Indeed a very early use of logistic regression for document routing was one of the first NLP applications to use (LSI) embeddings as word representations (Schütze et al., 1995). |
5 | Logistic Regression | 5.10 | Bibliographical and Historical Notes | nan | nan | At the same time in the early 1990s logistic regression was developed and applied to NLP at IBM Research under the name maximum entropy modeling or maxent (Berger et al., 1996), seemingly independent of the statistical literature. Under that name it was applied to language modeling (Rosenfeld, 1996), part-of-speech tagging (Ratnaparkhi, 1996), parsing (Ratnaparkhi, 1997), coreference resolution (Kehler, 1997b), and text classification (Nigam et al., 1999). More on classification can be found in machine learning textbooks (Hastie et al. 2001, Witten and Frank 2005, Bishop 2006, Murphy 2012). |
6 | Vector Semantics and Embeddings | nan | nan | nan | nan | Nets are for fish; Once you get the fish, you can forget the net. |
6 | Vector Semantics and Embeddings | nan | nan | nan | nan | Words are for meaning; Once you get the meaning, you can forget the words |
6 | Vector Semantics and Embeddings | nan | nan | nan | nan | 莊子 (Zhuangzi), Chapter 26 |
6 | Vector Semantics and Embeddings | nan | nan | nan | nan | The asphalt that Los Angeles is famous for occurs mainly on its freeways. But in the middle of the city is another patch of asphalt, the La Brea tar pits, and this asphalt preserves millions of fossil bones from the last of the Ice Ages of the Pleistocene Epoch. One of these fossils is the Smilodon, or saber-toothed tiger, instantly recognizable by its long canines. Five million years ago or so, a completely different sabre-tooth tiger called Thylacosmilus lived in Argentina and other parts of South America. Thylacosmilus was a marsupial whereas Smilodon was a placental mammal, but Thylacosmilus had the same long upper canines and, like Smilodon, had a protective bone flange on the lower jaw. The similarity of these two mammals is one of many examples of parallel or convergent evolution, in which particular contexts or environments lead to the evolution of very similar structures in different species (Gould, 1980). The role of context is also important in the similarity of a less biological kind of organism: the word. Words that occur in similar contexts tend to have similar meanings. This link between similarity in how words are distributed and similarity in what they mean is called the distributional hypothesis. The hypothesis was first formulated in the 1950s by linguists like Joos (1950), Harris (1954), and Firth (1957), who noticed that words which are synonyms (like oculist and eye-doctor) tended to occur in the same environment (e.g., near words like eye or examined), with the amount of meaning difference between two words "corresponding roughly to the amount of difference in their environments" (Harris, 1954, 157). In this chapter we introduce vector semantics, which instantiates this linguistic hypothesis by learning representations of the meaning of words, called embeddings, directly from their distributions in texts. These representations are used in every natural language processing application that makes use of meaning, and the static embeddings we introduce here underlie the more powerful dynamic or contextualized embeddings like BERT that we will see in Chapter 11. |
6 | Vector Semantics and Embeddings | nan | nan | nan | nan | These word representations are also the first example in this book of representation learning, automatically learning useful representations of the input text. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Finding such self-supervised ways to learn representations of the input, instead of creating representations by hand via feature engineering, is an important focus of NLP research (Bengio et al., 2013). |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Let's begin by introducing some basic principles of word meaning. How should we represent the meaning of a word? In the n-gram models of Chapter 3, and in classical NLP applications, our only representation of a word is as a string of letters, or an index in a vocabulary list. This representation is not that different from a tradition in philosophy, perhaps you've seen it in introductory logic classes, in which the meaning of words is represented by just spelling the word with small capital letters; representing the meaning of "dog" as DOG, and "cat" as CAT. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Representing the meaning of a word by capitalizing it is a pretty unsatisfactory model. You might have seen a joke due originally to semanticist Barbara Partee (Carlson, 1977): Q: What's the meaning of life? A: LIFE'. Surely we can do better than this! After all, we'll want a model of word meaning to do all sorts of things for us. It should tell us that some words have similar meanings (cat is similar to dog), others are antonyms (cold is the opposite of hot), some have positive connotations (happy) while others have negative connotations (sad). It should represent the fact that the meanings of buy, sell, and pay offer differing perspectives on the same underlying purchasing event (If I buy something from you, you've probably sold it to me, and I likely paid you). More generally, a model of word meaning should allow us to draw inferences to address meaning-related tasks like question-answering or dialogue. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | In this section we summarize some of these desiderata, drawing on results in the linguistic study of word meaning, which is called lexical semantics; we'll return to lexical semantics and expand on this list in Chapter 18 and Chapter 10. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Lemmas and Senses Let's start by looking at how one word (we'll choose mouse) might be defined in a dictionary (simplified from the online dictionary WordNet): mouse (N) 1. any of numerous small rodents... 2. a hand-operated device that controls a cursor... |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Here the form mouse is the lemma, also called the citation form. The form mouse would also be the lemma for the word mice; dictionaries don't have separate definitions for inflected forms like mice. Similarly sing is the lemma for sing, sang, sung. In many languages the infinitive form is used as the lemma for the verb, so Spanish dormir "to sleep" is the lemma for duermes "you sleep". The specific forms sung or carpets or sing or duermes are called wordforms. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | As the example above shows, each lemma can have multiple meanings; the lemma mouse can refer to the rodent or the cursor control device. We call each of these aspects of the meaning of mouse a word sense. The fact that lemmas can be polysemous (have multiple senses) can make interpretation difficult (is someone who types "mouse info" into a search engine looking for a pet or a tool?). Chapter 18 will discuss the problem of polysemy, and introduce word sense disambiguation, the task of determining which sense of a word is being used in a particular context. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Synonymy One important component of word meaning is the relationship between word senses. For example when one word has a sense whose meaning is identical to a sense of another word, or nearly identical, we say the two senses of those two words are synonyms. Synonyms include such pairs as couch/sofa, vomit/throw up, filbert/hazelnut, car/automobile. A more formal definition of synonymy (between words rather than senses) is that two words are synonymous if they are substitutable for one another in any sentence without changing the truth conditions of the sentence, the situations in which the sentence would be true. We often say in this case that the two words have the same propositional meaning. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | While substitutions between some pairs of words like car / automobile or water / H2O are truth preserving, the words are still not identical in meaning. Indeed, probably no two words are absolutely identical in meaning. One of the fundamental tenets of semantics, called the principle of contrast (Girard 1718, Bréal 1897, Clark 1987), states that a difference in linguistic form is always associated with some difference in meaning. For example, the word H2O is used in scientific contexts and would be inappropriate in a hiking guide (water would be more appropriate), and this genre difference is part of the meaning of the word. In practice, the word synonym is therefore used to describe a relationship of approximate or rough synonymy. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Word Similarity While words don't have many synonyms, most words do have lots of similar words. Cat is not a synonym of dog, but cats and dogs are certainly similar words. In moving from synonymy to similarity, it will be useful to shift from talking about relations between word senses (like synonymy) to relations between words (like similarity). Dealing with words avoids having to commit to a particular representation of word senses, which will turn out to simplify our task. The notion of word similarity is very useful in larger semantic tasks. Knowing how similar two words are can help in computing how similar the meaning of two phrases or sentences are, a very important component of tasks like question answering, paraphrasing, and summarization. One way of getting values for word similarity is to ask humans to judge how similar one word is to another. A number of datasets have resulted from such experiments. For example the SimLex-999 dataset (Hill et al., 2015) gives values on a scale from 0 to 10, ranging from near-synonyms (vanish, disappear) to pairs that scarcely seem to have anything in common (hole, agreement). Word Relatedness The meaning of two words can also be related in ways other than similarity, a relation usually called word relatedness or association. Consider the meanings of the words coffee and cup. Coffee is not similar to cup; they share practically no features (coffee is a plant or a beverage, while a cup is a manufactured object with a particular shape). But coffee and cup are clearly related; they are associated by co-participating in an everyday event (the event of drinking coffee out of a cup). Similarly scalpel and surgeon are not similar but are related eventively (a surgeon tends to make use of a scalpel). |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | One common kind of relatedness between words is if they belong to the same semantic field. A semantic field is a set of words which cover a particular semantic domain and bear structured relations with each other. For example, words might be related by being in the semantic field of hospitals (surgeon, scalpel, nurse, anesthetic, hospital), restaurants (waiter, menu, plate, food, chef), or houses (door, roof, kitchen, family, bed). Semantic fields are also related to topic models, like Latent Dirichlet Allocation (LDA), which apply unsupervised learning on large sets of texts to induce sets of associated words from text. Semantic fields and topic models are very useful tools for discovering topical structure in documents. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | In Chapter 18 we'll introduce more relations between senses like hypernymy or IS-A, antonymy (opposites) and meronymy (part-whole relations). |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Semantic Frames and Roles Closely related to semantic fields is the idea of a semantic frame. A semantic frame is a set of words that denote perspectives or participants in a particular type of event. A commercial transaction, for example, is a kind of event in which one entity trades money to another entity in return for some good or service, after which the good changes hands or perhaps the service is performed. This event can be encoded lexically by using verbs like buy (the event from the perspective of the buyer), sell (from the perspective of the seller), pay (focusing on the monetary aspect), or nouns like buyer. Frames have semantic roles (like buyer, seller, goods, money), and words in a sentence can take on these roles. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Knowing that buy and sell have this relation makes it possible for a system to know that a sentence like Sam bought the book from Ling could be paraphrased as Ling sold the book to Sam, and that Sam has the role of the buyer in the frame and Ling the seller. Being able to recognize such paraphrases is important for question answering, and can help in shifting perspective for machine translation. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Connotation Finally, words have affective meanings or connotations. The word connotation has different meanings in different fields, but here we use it to mean the aspects of a word's meaning that are related to a writer or reader's emotions, sentiment, opinions, or evaluations. For example some words have positive connotations (happy) while others have negative connotations (sad). Even words whose meanings are similar in other ways can vary in connotation; consider the difference in connotations between fake, knockoff, and forgery, on the one hand, and copy, replica, and reproduction on the other, or between innocent (positive connotation) and naive (negative connotation). Some words describe positive evaluation (great, love) and others negative evaluation (terrible, hate). Positive or negative evaluation language is called sentiment, as we saw in Chapter 4, and word sentiment plays a role in important tasks like sentiment analysis, stance detection, and applications of NLP to the language of politics and consumer reviews. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Early work on affective meaning (Osgood et al., 1957) found that words varied along three important dimensions of affective meaning: |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | valence: the pleasantness of the stimulus; arousal: the intensity of emotion provoked by the stimulus; dominance: the degree of control exerted by the stimulus. Thus words like happy or satisfied are high on valence, while unhappy or annoyed are low on valence. Excited is high on arousal, while calm is low on arousal. |
6 | Vector Semantics and Embeddings | 6.1 | Lexical Semantics | nan | nan | Controlling is high on dominance, while awed or influenced are low on dominance. Each word is thus represented by three numbers, corresponding to its value on each of the three dimensions. Osgood et al. (1957) noticed that in using these 3 numbers to represent the meaning of a word, the model was representing each word as a point in a three-dimensional space, a vector whose three dimensions corresponded to the word's rating on the three scales. This revolutionary idea that word meaning could be represented as a point in space (e.g., that part of the meaning of heartbreak can be represented as the point [2.45, 5.65, 3.58]) was the first expression of the vector semantics models that we introduce next. |
6 | Vector Semantics and Embeddings | 6.2 | Vector Semantics | nan | nan | Vector semantics is the standard way to represent word meaning in NLP, helping us model many of the aspects of word meaning we saw in the previous section. The roots of the model lie in the 1950s, when two big ideas converged: Osgood's 1957 idea mentioned above to use a point in three-dimensional space to represent the connotation of a word, and the proposal by linguists like Joos (1950), Harris (1954), and Firth (1957) to define the meaning of a word by its distribution in language use, meaning its neighboring words or grammatical environments. Their idea was that two words that occur in very similar distributions (whose neighboring words are similar) have similar meanings. |
6 | Vector Semantics and Embeddings | 6.2 | Vector Semantics | nan | nan | For example, suppose you didn't know the meaning of the word ongchoi (a recent borrowing from Cantonese) but you saw it used in several contexts. The fact that ongchoi occurs with words like rice and garlic and delicious and salty, as do words like spinach, chard, and collard greens, might suggest that ongchoi is a leafy green similar to these other leafy greens. We can do the same thing computationally by just counting words in the context of ongchoi. |
6 | Vector Semantics and Embeddings | 6.2 | Vector Semantics | nan | nan | The idea of vector semantics is to represent a word as a point in a multidimensional semantic space that is derived (in ways we'll see) from the distributions of word neighbors. Vectors for representing words are called embeddings (although the term is sometimes more strictly applied only to dense vectors like word2vec (Section 6.8), rather than sparse tf-idf or PPMI vectors (Section 6.3-Section 6.6)). The word "embedding" derives from its mathematical sense as a mapping from one space or structure to another, although the meaning has shifted; see the end of the chapter. Figure 6.1 A two-dimensional (t-SNE) projection of embeddings for some words and phrases, showing that words with similar meanings are nearby in space. The original 60-dimensional embeddings were trained for sentiment analysis. Simplified from Li et al. 2015, with colors added for explanation. |
6 | Vector Semantics and Embeddings | 6.2 | Vector Semantics | nan | nan | The fine-grained model of word similarity of vector semantics offers enormous power to NLP applications. NLP applications like the sentiment classifiers of Chapter 4 or Chapter 5 depend on the same words appearing in the training and test sets. But by representing words as embeddings, classifiers can assign sentiment as long as they see some words with similar meanings. And as we'll see, vector semantic models can be learned automatically from text without supervision. |
6 | Vector Semantics and Embeddings | 6.2 | Vector Semantics | nan | nan | In this chapter we'll introduce the two most commonly used models. In the tf-idf model, an important baseline, the meaning of a word is defined by a simple function of the counts of nearby words. We will see that this method results in very long vectors that are sparse, i.e. mostly zeros (since most words simply never occur in the context of others). We'll introduce the word2vec model family for constructing short, dense vectors that have useful semantic properties. We'll also introduce the cosine, the standard way to use embeddings to compute semantic similarity between two words, two sentences, or two documents, an important tool in practical applications like question answering, summarization, or automatic essay grading. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | nan | nan | "The most important attributes of a vector in 3-space are {Location, Location, Location}" Randall Munroe, https://xkcd.com/2358/ Vector or distributional models of meaning are generally based on a co-occurrence matrix, a way of representing how often words co-occur. We'll look at two popular matrices: the term-document matrix and the term-term matrix. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.1 | Vectors and documents | In a term-document matrix, each row represents a word in the vocabulary and each column represents a document from some collection of documents. Fig. 6.2 shows a small selection from a term-document matrix showing the occurrence of four words in four plays by Shakespeare. Each cell in this matrix represents the number of times a particular word (defined by the row) occurs in a particular document (defined by the column). Thus fool appeared 58 times in Twelfth Night. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.1 | Vectors and documents | The term-document matrix of Fig. 6.2 was first defined as part of the vector space model of information retrieval (Salton, 1971). In this model, a document is represented as a count vector, a column in Fig. 6.3. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.1 | Vectors and documents | To review some basic linear algebra, a vector is, at heart, just a list or array of numbers. So As You Like It is represented as the list [1,114,36,20] (the first column vector in Fig. 6.3) and Julius Caesar is represented as the list [7,62,1,2] (the third column vector). A vector space is a collection of vectors, characterized by their dimension. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.1 | Vectors and documents | In the example in Fig. 6.3, the document vectors are of dimension 4, just so they fit on the page; in real term-document matrices, the vectors representing each document would have dimensionality |V|, the vocabulary size. The ordering of the numbers in a vector space indicates different meaningful dimensions on which documents vary. Thus the first dimension for both these vectors corresponds to the number of times the word battle occurs, and we can compare each dimension, noting for example that the vectors for As You Like It and Twelfth Night have similar values (1 and 0, respectively) for the first dimension. Figure 6.3 The term-document matrix for four words in four Shakespeare plays. The red boxes show that each document is represented as a column vector of length four. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.1 | Vectors and documents | We can think of the vector for a document as a point in |V|-dimensional space; thus the documents in Fig. 6.3 are points in 4-dimensional space. Since 4-dimensional spaces are hard to visualize, Fig. 6.4 shows a visualization in two dimensions; we've arbitrarily chosen the dimensions corresponding to the words battle and fool. Term-document matrices were originally defined as a means of finding similar documents for the task of document information retrieval. Two documents that are similar will tend to have similar words, and if two documents have similar words their column vectors will tend to be similar. The vectors for the comedies As You Like It [1, 114, 36, 20] and Twelfth Night [0, 80, 58, 15] look a lot more like each other (more fools and wit than battles) than they look like Julius Caesar [7, 62, 1, 2] or Henry V [13, 89, 4, 3]. This is clear with the raw numbers; in the first dimension (battle) the comedies have low numbers and the others have high numbers, and we can see it visually in Fig. 6.4; we'll see very shortly how to quantify this intuition more formally. |
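As a quick illustration of document vectors as columns of a term-document matrix, here is a minimal numpy sketch; the counts are the ones quoted above, the row order (battle, good, fool, wit) follows the chapter's Fig. 6.2, and the code itself is our own illustration rather than anything from the text.

```python
import numpy as np

# Term-document counts quoted in the text (rows: battle, good, fool, wit;
# columns: As You Like It, Twelfth Night, Julius Caesar, Henry V).
words = ["battle", "good", "fool", "wit"]
docs = ["As You Like It", "Twelfth Night", "Julius Caesar", "Henry V"]
counts = np.array([
    [  1,  0,  7, 13],   # battle
    [114, 80, 62, 89],   # good
    [ 36, 58,  1,  4],   # fool
    [ 20, 15,  2,  3],   # wit
])

# Each document is a column vector of length |V| = 4.
as_you_like_it = counts[:, 0]
julius_caesar  = counts[:, 2]
print(docs[0], as_you_like_it)   # As You Like It [  1 114  36  20]
print(docs[2], julius_caesar)    # Julius Caesar [ 7 62  1  2]
```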
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.1 | Vectors and documents | A real term-document matrix, of course, wouldn't just have 4 rows and columns, let alone 2. More generally, the term-document matrix has |V | rows (one for each word type in the vocabulary) and D columns (one for each document in the collection); as we'll see, vocabulary sizes are generally in the tens of thousands, and the number of documents can be enormous (think about all the pages on the web). |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.1 | Vectors and documents | Information retrieval (IR) is the task of finding the document d from the D documents in some collection that best matches a query q. For IR we'll therefore also represent a query by a vector, also of length |V|, and we'll need a way to compare two vectors to find how similar they are. (Doing IR will also require efficient ways to store and manipulate these vectors by making use of the convenient fact that these vectors are sparse, i.e., mostly zeros). Later in the chapter we'll introduce some of the components of this vector comparison process: the tf-idf term weighting, and the cosine similarity metric. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.2 | Words as vectors: document dimensions | We've seen that documents can be represented as vectors in a vector space. But vector semantics can also be used to represent the meaning of words. We do this by associating each word with a word vector: a row vector rather than a column vector, hence with different dimensions, as shown in Fig. 6.5. Figure 6.5 The term-document matrix for four words in four Shakespeare plays. The red boxes show that each word is represented as a row vector of length four. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.2 | Words as vectors: document dimensions | For documents, we saw that similar documents had similar vectors, because similar documents tend to have similar words. This same principle applies to words: similar words have similar vectors because they tend to occur in similar documents. The term-document matrix thus lets us represent the meaning of a word by the documents it tends to occur in. |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.3 | Words as vectors: word dimensions | An alternative to using the term-document matrix to represent words as vectors of document counts is to use the term-term matrix, also called the word-word matrix or the term-context matrix, in which the columns are labeled by words rather than documents. This matrix is thus of dimensionality |V|×|V| and each cell records the number of times the row (target) word and the column (context) word co-occur in some context in some training corpus. The context could be the document, in which case the cell represents the number of times the two words appear in the same document. It is most common, however, to use smaller contexts, generally a window around the word, for example of 4 words to the left and 4 words to the right, in which case the cell represents the number of times (in some training corpus) the column word occurs in such a ±4 word window around the row word. For example here is one example each of some words in their windows: |
6 | Vector Semantics and Embeddings | 6.3 | Words and Vectors | 6.3.3 | Words as vectors: word dimensions | "is traditionally followed by cherry pie, a traditional dessert"; "often mixed, such as strawberry rhubarb pie. Apple pie"; "computer peripherals and personal digital assistants. These devices usually"; "a computer. This includes information available on the internet". If we then take every occurrence of each word (say strawberry) and count the context words around it, we get a word-word co-occurrence matrix. Fig. 6.6 shows a simplified subset of the word-word co-occurrence matrix for these four words computed from the Wikipedia corpus (Davies, 2015). Figure 6.6 Co-occurrence vectors for four words in the Wikipedia corpus, showing six of the dimensions (hand-picked for pedagogical purposes). The vector for digital is outlined in red. Note that a real vector would have vastly more dimensions and thus be much sparser. Note in Fig. 6.6 that the two words cherry and strawberry are more similar to each other (both pie and sugar tend to occur in their window) than they are to other words like digital; conversely, digital and information are more similar to each other than, say, to strawberry. Note that |V|, the length of the vector, is generally the size of the vocabulary, often between 10,000 and 50,000 words (using the most frequent words in the training corpus; keeping words beyond the most frequent 50,000 or so is generally not helpful). Since most of these numbers are zero these are sparse vector representations; there are efficient algorithms for storing and computing with sparse matrices. Now that we have some intuitions, let's move on to examine the details of computing word similarity. Afterwards we'll discuss methods for weighting cells. |
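Here is a minimal sketch (ours) of how such window-based co-occurrence counts could be collected; the toy corpus, the function name, and the window parameter are illustrative assumptions, not the chapter's actual Wikipedia pipeline.

```python
from collections import defaultdict

def cooccurrence_counts(corpus, window=4):
    """Count how often each context word appears within +/- `window`
    tokens of each target word, over a list of tokenized sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for tokens in corpus:
        for i, target in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[target][tokens[j]] += 1
    return counts

# Toy corpus (ours, for illustration only).
corpus = [
    "a traditional dessert is cherry pie with sugar".split(),
    "the new digital computer stores information as data".split(),
]
counts = cooccurrence_counts(corpus, window=4)
print(dict(counts["cherry"]))
# {'a': 1, 'traditional': 1, 'dessert': 1, 'is': 1, 'pie': 1, 'with': 1, 'sugar': 1}
```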
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | To measure similarity between two target words v and w, we need a metric that takes two vectors of the same dimensionality (either both with words as dimensions, hence of length |V|, or both with documents as dimensions, of length |D|) and gives a measure of their similarity. By far the most common similarity metric is the cosine of the angle between the vectors. |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | The cosine, like most measures for vector similarity used in NLP, is based on the dot product operator from linear algebra, also called the inner product: |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | $\text{dot product}(\mathbf{v}, \mathbf{w}) = \mathbf{v} \cdot \mathbf{w} = \sum_{i=1}^{N} v_i w_i = v_1 w_1 + v_2 w_2 + \dots + v_N w_N$ (6.7) |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | As we will see, most metrics for similarity between vectors are based on the dot product. The dot product acts as a similarity metric because it will tend to be high just when the two vectors have large values in the same dimensions. Alternatively, vectors that have zeros in different dimensions (orthogonal vectors) will have a dot product of 0, representing their strong dissimilarity. |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | This raw dot product, however, has a problem as a similarity metric: it favors long vectors. The vector length is defined as |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | $|\mathbf{v}| = \sqrt{\sum_{i=1}^{N} v_i^2}$ (6.8) |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | The dot product is higher if a vector is longer, with higher values in each dimension. More frequent words have longer vectors, since they tend to co-occur with more words and have higher co-occurrence values with each of them. The raw dot product thus will be higher for frequent words. But this is a problem; we'd like a similarity metric that tells us how similar two words are regardless of their frequency. |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | We modify the dot product to normalize for the vector length by dividing the dot product by the lengths of each of the two vectors. This normalized dot product turns out to be the same as the cosine of the angle between the two vectors, following from the definition of the dot product between two vectors a and b: |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | $\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}||\mathbf{b}| \cos\theta \quad\Rightarrow\quad \frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}||\mathbf{b}|} = \cos\theta$ (6.9) |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | The cosine similarity metric between two vectors v and w thus can be computed as: |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | $\text{cosine}(\mathbf{v}, \mathbf{w}) = \frac{\mathbf{v} \cdot \mathbf{w}}{|\mathbf{v}||\mathbf{w}|} = \frac{\sum_{i=1}^{N} v_i w_i}{\sqrt{\sum_{i=1}^{N} v_i^2}\,\sqrt{\sum_{i=1}^{N} w_i^2}}$ (6.10) |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | For some applications we pre-normalize each vector, by dividing it by its length, creating a unit vector of length 1. Thus we could compute a unit vector from a by dividing it by |a|. For unit vectors, the dot product is the same as the cosine. |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | The cosine value ranges from 1 for vectors pointing in the same direction, through 0 for orthogonal vectors, to -1 for vectors pointing in opposite directions. But since raw frequency values are non-negative, the cosine for these vectors ranges from 0 to 1. |
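A minimal numpy sketch (ours) of the cosine of Eq. 6.10, using made-up count vectors rather than the chapter's real counts:

```python
import numpy as np

def cosine(v, w):
    """Cosine similarity of Eq. 6.10: dot product divided by the product of lengths."""
    return np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))

# Placeholder count vectors (illustrative only, not the chapter's counts).
digital     = np.array([0.0, 1.0, 2.0])
information = np.array([1.0, 6.0, 1.0])
print(cosine(digital, information))   # about 0.58
```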
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | Let's see how the cosine computes which of the words cherry or digital is closer in meaning to information, using just the raw counts of how often each word co-occurs with a few context words such as pie and computer. The model decides that information is much closer to digital than it is to cherry, a result that seems sensible. Fig. 6.8 shows a visualization. |
6 | Vector Semantics and Embeddings | 6.4 | Cosine for measuring similarity | nan | nan | Figure 6.8 A (rough) graphical demonstration of cosine similarity, showing vectors for three words (cherry, digital, and information) in the two-dimensional space defined by counts of the words computer and pie nearby. The figure doesn't show the cosine, but it highlights the angles; note that the angle between digital and information is smaller than the angle between cherry and information. When two vectors are more similar, the cosine is larger but the angle is smaller; the cosine has its maximum (1) when the angle between two vectors is smallest (0°); the cosine of all other angles is less than 1. |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | The co-occurrence matrices above represent each cell by frequencies, either of words with documents (Fig. 6.5), or words with other words (Fig. 6.6). But raw frequency is not the best measure of association between words. Raw frequency is very skewed and not very discriminative. If we want to know what kinds of contexts are shared by cherry and strawberry but not by digital and information, we're not going to get good discrimination from words like the, it, or they, which occur frequently with all sorts of words and aren't informative about any particular word. We saw this also in Fig. 6.3 for the Shakespeare corpus; the dimension for the word good is not very discriminative between plays; good is simply a frequent word and has roughly equivalent high frequencies in each of the plays. |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | It's a bit of a paradox. Words that occur nearby frequently (maybe pie nearby cherry) are more important than words that only appear once or twice. Yet words that are too frequent-ubiquitous, like the or good-are unimportant. How can we balance these two conflicting constraints? |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | There are two common solutions to this problem: in this section we'll describe the tf-idf weighting, usually used when the dimensions are documents. In the next we introduce the PPMI algorithm (usually used when the dimensions are words). |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | The tf-idf weighting (the '-' here is a hyphen, not a minus sign) is the product of two terms, each term capturing one of these two intuitions: |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | The first is the term frequency (Luhn, 1957): the frequency of the word t in the document d. We can just use the raw count as the term frequency: |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | $\text{tf}_{t,d} = \text{count}(t, d)$ (6.11) |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | More commonly we squash the raw frequency a bit, by using the log 10 of the frequency instead. The intuition is that a word appearing 100 times in a document doesn't make that word 100 times more likely to be relevant to the meaning of the document. Because we can't take the log of 0, we normally add 1 to the count: |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | $\text{tf}_{t,d} = \log_{10}(\text{count}(t, d) + 1)$ (6.12) |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | If we use log weighting, terms which occur 0 times in a document would have tf = log10(1) = 0, 10 times in a document tf = log10(11) = 1.04, 100 times tf = log10(101) = 2.004, 1000 times tf = log10(1001) = 3.000, and so on. |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | The second factor in tf-idf is used to give a higher weight to words that occur only in a few documents. Terms that are limited to a few documents are useful for discriminating those documents from the rest of the collection; terms that occur frequently across the entire collection aren't as helpful. The document frequency $\text{df}_t$ of a term t is the number of documents it occurs in. Document frequency is not the same as the collection frequency of a term, which is the total number of times the word appears in the whole collection in any document. Consider in the collection of Shakespeare's 37 plays the two words Romeo and action. The words have identical collection frequencies (they both occur 113 times in all the plays) but very different document frequencies, since Romeo only occurs in a single play. If our goal is to find documents about the romantic tribulations of Romeo, the word Romeo should be highly weighted, but not action: |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | Romeo: collection frequency 113, document frequency 1. Action: collection frequency 113, document frequency 31. |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | We emphasize discriminative words like Romeo via the inverse document frequency or idf term weight (Sparck Jones, 1972). The idf is defined using the fraction $N/\text{df}_t$, where N is the total number of documents in the collection, and $\text{df}_t$ is the number of documents in which term t occurs. The fewer documents in which a term occurs, the higher this weight. The lowest weight of 1 is assigned to terms that occur in all the documents. It's usually clear what counts as a document: in Shakespeare we would use a play; when processing a collection of encyclopedia articles like Wikipedia, the document is a Wikipedia page; in processing newspaper articles, the document is a single article. Occasionally your corpus might not have appropriate document divisions and you might need to break up the corpus into documents yourself for the purposes of computing idf. |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | Because of the large number of documents in many collections, this measure too is usually squashed with a log function. The resulting definition for inverse document frequency (idf) is thus |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | $\text{idf}_t = \log_{10}\left(\frac{N}{\text{df}_t}\right)$ (6.13) |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | Here are some idf values for some words in the Shakespeare corpus, ranging from extremely informative words which occur in only one play like Romeo, to those that occur in a few like salad or Falstaff, to those which are very common like fool, or so common as to be completely non-discriminative since they occur in all 37 plays, like good or sweet. |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | The tf-idf weighted value $w_{t,d}$ for word t in document d thus combines term frequency with idf: $w_{t,d} = \text{tf}_{t,d} \times \text{idf}_t$ (6.14) |
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | Fig. 6.9 applies tf-idf weighting to the Shakespeare term-document matrix in Fig. 6.2, using the tf equation Eq. 6.12. Note that the tf-idf values for the dimension corresponding to the word good have now all become 0; since this word appears in every document, the tf-idf weighting leads it to be ignored. Similarly, the word fool, which appears in 36 out of the 37 plays, has a much lower weight. Figure 6.9 A tf-idf weighted term-document matrix for four words in four Shakespeare plays, using the counts in Fig. 6.2. For example the 0.049 value for wit in As You Like It is the product of tf = log10(20 + 1) = 1.322 and idf = .037. Note that the idf weighting has eliminated the importance of the ubiquitous word good and vastly reduced the impact of the almost-ubiquitous word fool. |
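Here is a minimal sketch (ours) of Eqs. 6.12-6.14 that reproduces the 0.049 value for wit in As You Like It from the Fig. 6.9 caption; the document frequency of 34 plays for wit is our assumption, chosen because it yields the caption's idf of about .037.

```python
import numpy as np

def tf(count):
    """Log-squashed term frequency, Eq. 6.12."""
    return np.log10(count + 1)

def idf(n_docs, df):
    """Inverse document frequency, Eq. 6.13."""
    return np.log10(n_docs / df)

def tf_idf(count, n_docs, df):
    """tf-idf weight, Eq. 6.14."""
    return tf(count) * idf(n_docs, df)

# Worked example: wit occurs 20 times in As You Like It, in a collection
# of N = 37 plays; a document frequency of 34 (our assumption) gives the
# idf of about .037 quoted in the Fig. 6.9 caption.
print(tf(20))               # 1.322
print(idf(37, 34))          # 0.0367
print(tf_idf(20, 37, 34))   # about 0.049
```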
6 | Vector Semantics and Embeddings | 6.5 | TF-IDF: Weighting Terms in the Vector | nan | nan | The tf-idf weighting is the standard way to weight co-occurrence matrices in information retrieval, and it also plays a role in many other aspects of natural language processing. It's also a great baseline, the simple thing to try first. We'll look at other weightings like PPMI (Positive Pointwise Mutual Information) in Section 6.6. |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | An alternative weighting function to tf-idf, PPMI (positive pointwise mutual information), is used for term-term matrices, when the vector dimensions correspond to words rather than documents. PPMI draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in our corpus than we would have a priori expected them to appear by chance. |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | Pointwise mutual information (Fano, 1961) is one of the most important concepts in NLP. It is a measure of how often two events x and y occur, compared with what we would expect if they were independent: |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | $I(x, y) = \log_2 \frac{P(x, y)}{P(x)P(y)}$ (6.16) |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | The pointwise mutual information between a target word w and a context word c (Church and Hanks 1989, Church and Hanks 1990) is then defined as: |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | $\text{PMI}(w, c) = \log_2 \frac{P(w, c)}{P(w)P(c)}$ (6.17) |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | The numerator tells us how often we observed the two words together (assuming we compute probability by using the MLE). The denominator tells us how often we would expect the two words to co-occur assuming they each occurred independently; recall that the probability of two independent events both occurring is just the product of the probabilities of the two events. Thus, the ratio gives us an estimate of how much more the two words co-occur than we expect by chance. PMI is a useful tool whenever we need to find words that are strongly associated. |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | PMI values range from negative to positive infinity. But negative PMI values (which imply things are co-occurring less often than we would expect by chance) tend to be unreliable unless our corpora are enormous. To distinguish whether two words whose individual probability is each $10^{-6}$ occur together less often than chance, we would need to be certain that the probability of the two occurring together is significantly different than $10^{-12}$, and this kind of granularity would require an enormous corpus. Furthermore it's not clear whether it's even possible to evaluate such scores of 'unrelatedness' with human judgments. For this reason it is more common to use Positive PMI (called PPMI), which replaces all negative PMI values with zero (Church and Hanks 1989, Dagan et al. 1993, Niwa and Nitta 1994); positive PMI also cleanly solves the problem of what to do with zero counts, using 0 to replace the $-\infty$ that would result from log(0): |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | $\text{PPMI}(w, c) = \max\left(\log_2 \frac{P(w, c)}{P(w)P(c)}, 0\right)$ (6.18) |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | More formally, let's assume we have a co-occurrence matrix F with W rows (words) and C columns (contexts), where $f_{ij}$ gives the number of times word $w_i$ occurs in context $c_j$. (As an aside, PMI is based on the mutual information between two random variables X and Y, defined as: |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | $I(X, Y) = \sum_x \sum_y P(x, y) \log_2 \frac{P(x, y)}{P(x)P(y)}$ (6.15) |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | In a confusion of terminology, Fano used the phrase mutual information to refer to what we now call pointwise mutual information, and the phrase expectation of the mutual information for what we now call mutual information.) |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | The matrix F can be turned into a PPMI matrix where $\text{PPMI}_{ij}$ gives the PPMI value of word $w_i$ with context $c_j$ as follows: |
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | $p_{ij} = \frac{f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}, \quad p_{i*} = \frac{\sum_{j=1}^{C} f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}, \quad p_{*j} = \frac{\sum_{i=1}^{W} f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}$ (6.19) $\quad \text{PPMI}_{ij} = \max\left(\log_2 \frac{p_{ij}}{p_{i*}\, p_{*j}}, 0\right)$ (6.20) |
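A minimal numpy sketch (ours) of Eqs. 6.19-6.20, computing a PPMI matrix from a small made-up count matrix F:

```python
import numpy as np

def ppmi(F):
    """PPMI matrix from a word-by-context count matrix F (Eqs. 6.19-6.20)."""
    total = F.sum()
    p_ij = F / total                              # joint probabilities p_ij
    p_i = F.sum(axis=1, keepdims=True) / total    # row (word) marginals p_i*
    p_j = F.sum(axis=0, keepdims=True) / total    # column (context) marginals p_*j
    with np.errstate(divide="ignore"):            # log2(0) -> -inf, clipped below
        pmi = np.log2(p_ij / (p_i * p_j))
    return np.maximum(pmi, 0)                     # replace negative PMI (and -inf) with 0

# Made-up counts (rows: words, columns: contexts), for illustration only.
F = np.array([
    [2, 1, 0],
    [0, 1, 3],
], dtype=float)
print(ppmi(F))   # 2x3 matrix of non-negative PPMI weights
```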
6 | Vector Semantics and Embeddings | 6.6 | Pointwise Mutual Information (PMI) | nan | nan | Let's see some PPMI calculations. We'll use Fig. 6.10, which repeats Fig. 6.6 plus all the count marginals, and let's pretend for ease of calculation that these are the only words/contexts that matter. Figure 6.10 Co-occurrence counts for four words in 5 contexts in the Wikipedia corpus, together with the marginals, pretending for the purpose of this calculation that no other words/contexts matter. |