Word2vec
Word2vec is a technique in natural language processing for obtaining vector representations of words. These vectors capture information about the meaning of the word based on the surrounding words. The word2vec algorithm estimates these representations by modeling text in a large corpus. Once trained, such a model can detect synonymous words or suggest additional words for a partial sentence. Word2vec was developed by Tomáš Mikolov, Kai Chen, Greg Corrado, Ilya Sutskever and Jeff Dean at Google, and published in 2013.
Word2vec represents a word as a high-dimensional vector of numbers which captures relationships between words. In particular, words which appear in similar contexts are mapped to vectors which are nearby as measured by cosine similarity. This indicates the level of semantic similarity between the words, so for example the vectors for "walk" and "ran" are nearby, as are those for "but" and "however", and "Berlin" and "Germany".
Approach
Word2vec is a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a large corpus of text and produces a mapping of the set of words to a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a vector in the space.
Word2vec can use either of two model architectures to produce these distributed representations of words: continuous bag of words (CBOW) or continuous skip-gram. In both architectures, word2vec considers both individual words and a sliding context window as it iterates over the corpus.
The CBOW architecture can be viewed as a 'fill in the blank' task, where the word embedding represents the way the word influences the relative probabilities of other words in the context window. Words which are semantically similar should influence these probabilities in similar ways, because semantically similar words should be used in similar contexts. The order of context words does not influence prediction.
In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words. The skip-gram architecture weighs nearby context words more heavily than more distant context words. According to the authors' note, CBOW is faster while skip-gram does a better job for infrequent words.
After the model is trained, the learned word embeddings are positioned in the vector space such that words that share common contexts in the corpus — that is, words that are semantically and syntactically similar — are located close to one another in the space. More dissimilar words are located farther from one another in the space.
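As an illustration, the following minimal sketch trains both architectures on a toy corpus using the open-source Gensim library; the library and its parameter names (vector_size, window, sg) are conveniences of this example rather than part of the original word2vec release.

    from gensim.models import Word2Vec

    # Toy corpus; real word2vec training uses corpora of billions of words.
    sentences = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["the", "dog", "sat", "on", "the", "rug"],
    ]

    # sg=0 selects CBOW, sg=1 selects skip-gram; window sets the context size.
    cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
    skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

    # Nearby vectors (by cosine similarity) correspond to words used in similar contexts.
    print(skipgram.wv.most_similar("cat", topn=3))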
Mathematical details
This section is based on expositions in the literature. A corpus is a sequence of words $w_1, w_2, \ldots, w_T$. Both CBOW and skip-gram are methods to learn one vector per word appearing in the corpus.
Let $V$ be the set of all words appearing in the corpus. Our goal is to learn one vector $v_w \in \mathbb{R}^n$ for each word $w \in V$.
The idea of skip-gram is that the vector of a word should be close to the vector of each of its neighbors. The idea of CBOW is that the vector-sum of a word's neighbors should be close to the vector of the word.
Continuous bag-of-words (CBOW)
The idea of CBOW is to represent each word with a vector, such that the word can be predicted from the sum of the vectors of its neighbors. Specifically, for each word $w_i$ in the corpus, the one-hot encodings of its neighboring words are summed and used as the input to the neural network. The output of the neural network is a probability distribution over the dictionary, representing a prediction of the word $w_i$ itself. For example, suppose we want each word in the corpus to be predicted from every other word in a small span of 4 words around it, two on each side. The set of relative indices of the neighbor words is then $N = \{-2, -1, +1, +2\}$, and the objective of training is to maximize $\prod_i \Pr(w_i \mid w_{i+j},\ j \in N)$, the product over the corpus of the probability assigned to each word given its neighbors.
In standard bag-of-words, a word's context is represented by a word-count histogram of its neighboring words. For example, with $N = \{-2, -1, +1, +2\}$, the "sat" in "the cat sat on the mat" is represented by the histogram $x$ with $x_{\text{the}} = 2$, $x_{\text{cat}} = 1$, $x_{\text{on}} = 1$, and all other entries zero. Note that the last word "mat" is not used to represent "sat", because it is outside the neighborhood.
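As a concrete check, here is a short Python snippet (illustrative only) that builds this context histogram:

    # Build the bag-of-words context histogram for "sat"
    # with a window of two words on each side.
    from collections import Counter

    sentence = ["the", "cat", "sat", "on", "the", "mat"]
    center = 2                      # index of "sat"
    offsets = [-2, -1, 1, 2]        # relative indices of the neighbors

    neighbors = [sentence[center + j] for j in offsets
                 if 0 <= center + j < len(sentence)]
    print(Counter(neighbors))       # Counter({'the': 2, 'cat': 1, 'on': 1})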
In continuous bag-of-words, the histogram $x$ is multiplied by a matrix $M$ to obtain a continuous representation of the word's context. The matrix $M$ is also called a dictionary. Its columns are the word vectors: it has $|V|$ columns, one for each word in the dictionary, where $|V|$ is the size of the dictionary. Let $n$ be the length of each word vector. We have $M \in \mathbb{R}^{n \times |V|}$.
For example, multiplying the word histogram for "sat" with $M$, we obtain $Mx = 2v_{\text{the}} + v_{\text{cat}} + v_{\text{on}}$, the sum of the word vectors of its context words.
This is then multiplied with another matrix $M'$ of shape $|V| \times n$. Each row of $M'$ is a word vector. This results in a vector $M'Mx$ of length $|V|$, one entry per dictionary entry. Then, apply the softmax to obtain a probability distribution over the dictionary.
This system can be visualized as a neural network with a linear-linear-softmax architecture, similar in spirit to an autoencoder. The system is trained by gradient descent to minimize the cross-entropy loss.
In full formula, the cross-entropy loss is $L = -\sum_i \ln \frac{\exp(M'_{w_i} \cdot \bar{v}_i)}{\sum_{w \in V} \exp(M'_w \cdot \bar{v}_i)}$, where the outer summation is over the words $w_i$ in the corpus, $x_i$ is the histogram of the neighbors of $w_i$, the quantity $\bar{v}_i = M x_i = \sum_{j \in N} v_{w_{i+j}}$ is the sum of the neighbors' word vectors, and $M'_w$ denotes the row of $M'$ corresponding to word $w$.
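A minimal NumPy sketch of this linear-linear-softmax computation and its cross-entropy loss for a single word, with random toy values standing in for a trained $M$ and $M'$:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "mat"]     # toy dictionary V
    V, n = len(vocab), 8                           # dictionary size and vector length

    M = rng.normal(size=(n, V))    # columns are input word vectors
    M2 = rng.normal(size=(V, n))   # rows are output word vectors (M' above)

    # Context histogram x for "sat" with neighbors {the: 2, cat: 1, on: 1}.
    x = np.zeros(V)
    for w in ["the", "the", "cat", "on"]:
        x[vocab.index(w)] += 1

    hidden = M @ x                 # sum of the context word vectors (M x)
    logits = M2 @ hidden           # one score per dictionary entry (M' M x)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()           # softmax over the dictionary

    loss = -np.log(probs[vocab.index("sat")])   # cross-entropy for the target word "sat"
    print(loss)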
Once such a system is trained, we have two trained matrices $M$ and $M'$. Either the column vectors of $M$ or the row vectors of $M'$ can serve as the dictionary of word vectors. For example, the word "sat" can be represented as either the "sat"-th column of $M$ or the "sat"-th row of $M'$. It is also possible to simply define $M' = M^T$, in which case there would no longer be a choice.
Skip-gram
The idea of skip-gram is to represent each word with a vector, such that it is possible to predict a word's neighbors from the vector of the word itself. The architecture is still linear-linear-softmax, the same as CBOW, but the input and the output are switched. Specifically, for each word $w_i$ in the corpus, the one-hot encoding of $w_i$ is used as the input to the neural network. The output of the neural network is a probability distribution over the dictionary, representing a prediction of the individual words in the neighborhood of $w_i$. The objective of training is to maximize $\prod_i \prod_{j \in N} \Pr(w_{i+j} \mid w_i)$.
In full formula, the loss function is $L = -\sum_i \sum_{j \in N} \ln \frac{\exp(M'_{w_{i+j}} \cdot v_{w_i})}{\sum_{w \in V} \exp(M'_w \cdot v_{w_i})}$, where $v_{w_i}$ is the column of $M$ corresponding to the current word $w_i$. As with CBOW, once such a system is trained, we have two trained matrices $M$ and $M'$. Either the column vectors of $M$ or the row vectors of $M'$ can serve as the dictionary. It is also possible to simply define $M' = M^T$, in which case there would no longer be a choice.
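The same toy NumPy setup illustrates the skip-gram loss for a single word; here the one-hot input simply selects one column of $M$:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "mat"]
    V, n = len(vocab), 8

    M = rng.normal(size=(n, V))    # columns: input word vectors
    M2 = rng.normal(size=(V, n))   # rows: output word vectors (M' above)

    center = "sat"
    neighbors = ["the", "cat", "on", "the"]          # window of 2 on each side

    v_center = M[:, vocab.index(center)]             # one-hot input selects one column
    logits = M2 @ v_center
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                             # softmax over the dictionary

    # Skip-gram sums the cross-entropy over every neighbor of the center word.
    loss = -sum(np.log(probs[vocab.index(w)]) for w in neighbors)
    print(loss)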
Skip-gram and CBOW are thus the same in architecture; they differ only in the objective function used during training.
History
During the 1980s, there were some early attempts at using neural networks to represent words and concepts as vectors. In 2010, Tomáš Mikolov and co-authors applied a simple recurrent neural network with a single hidden layer to language modelling.
Word2vec was created, patented, and published in 2013 by a team of researchers led by Mikolov at Google across two papers. The original paper was rejected by reviewers of the ICLR 2013 conference. It also took months for the code to be approved for open-sourcing. Other researchers helped analyse and explain the algorithm.
Embedding vectors created using the Word2vec algorithm have some advantages compared to earlier algorithms such as those using n-grams and latent semantic analysis. GloVe was developed by a team at Stanford specifically as a competitor, and the GloVe paper noted multiple improvements of GloVe over word2vec. Mikolov argued that the comparison was unfair, as GloVe was trained on more data, and that the fastText project showed that word2vec is superior when trained on the same data.
As of 2022, the straight Word2vec approach was described as "dated". Models that produce contextual embeddings, such as ELMo (built on bidirectional LSTMs) and BERT (built on Transformer attention layers), add multiple neural-network layers on top of a word-embedding layer similar in spirit to Word2vec and have come to be regarded as the state of the art in natural language processing.
Parameterization
Results of word2vec training can be sensitive to parameterization. The following are some important parameters in word2vec training.
Training algorithm
A Word2vec model can be trained with hierarchical softmax and/or negative sampling. To approximate the conditional log-likelihood a model seeks to maximize, the hierarchical softmax method uses a Huffman tree to reduce calculation. The negative sampling method, on the other hand, approaches the maximization problem by minimizing the log-likelihood of sampled negative instances. According to the authors, hierarchical softmax works better for infrequent words, while negative sampling works better for frequent words and with low-dimensional vectors. As the number of training epochs increases, hierarchical softmax stops being useful.
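As an illustration of the negative-sampling objective for a single (word, context) pair, here is a NumPy sketch using the notation from the mathematical section; for simplicity the negatives are drawn uniformly, whereas the original implementation draws them from a smoothed unigram distribution:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    n, V = 8, 5
    M = rng.normal(size=(n, V))    # input word vectors (columns)
    M2 = rng.normal(size=(V, n))   # output word vectors (rows, M' above)

    center, context = 2, 0                       # indices of a (word, context) pair
    negatives = rng.integers(0, V, size=5)       # 5 negatives, sampled uniformly here

    v_w = M[:, center]
    # Negative sampling replaces the full softmax with a few binary classifications:
    # the true context word should score high, sampled "noise" words should score low.
    loss = -np.log(sigmoid(M2[context] @ v_w))
    loss -= np.sum(np.log(sigmoid(-(M2[negatives] @ v_w))))
    print(loss)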
Sub-sampling
High-frequency and low-frequency words often provide little information. Words with a frequency above a certain threshold, or below a certain threshold, may be sub-sampled or removed to speed up training.
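For reference, the sub-sampling rule given in the 2013 follow-up paper discards each occurrence of a word $w_i$ with probability $P(w_i) = 1 - \sqrt{t / f(w_i)}$, where $f(w_i)$ is the word's frequency in the corpus and $t$ is a chosen threshold, typically around $10^{-5}$.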
Dimensionality
The quality of the word embedding increases with higher dimensionality, but after reaching some point the marginal gain diminishes. Typically, the dimensionality of the vectors is set to be between 100 and 1,000.
Context window
The size of the context window determines how many words before and after a given word are included as context words of the given word. According to the authors' note, the recommended value is 10 for skip-gram and 5 for CBOW.
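These parameters map onto the options exposed by common implementations; for example, in the Gensim library (parameter names assume Gensim 4.x, and the corpus path is hypothetical) they might be set as follows:

    from gensim.models import Word2Vec

    model = Word2Vec(
        corpus_file="corpus.txt",  # hypothetical path: one whitespace-tokenized sentence per line
        sg=1,             # 1 = skip-gram, 0 = CBOW
        hs=0,             # 0 = use negative sampling instead of hierarchical softmax
        negative=5,       # number of negative samples per positive example
        sample=1e-5,      # sub-sampling threshold for high-frequency words
        vector_size=300,  # dimensionality of the word vectors
        window=10,        # context window size (10 recommended for skip-gram)
        min_count=5,      # ignore words rarer than this
    )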
Extensions
There are a variety of extensions to word2vec.
doc2vec
doc2vec generates distributed representations of variable-length pieces of text, such as sentences, paragraphs, or entire documents. doc2vec has been implemented in the C, Python and Java/Scala tools, with the Java and Python versions also supporting inference of document embeddings on new, unseen documents.
doc2vec estimates the distributed representations of documents much like how word2vec estimates representations of words: doc2vec utilizes either of two model architectures, both of which are analogues of the architectures used in word2vec. The first, the Distributed Memory Model of Paragraph Vectors (PV-DM), is identical to CBOW except that it also provides a unique document identifier as a piece of additional context. The second architecture, the Distributed Bag of Words version of Paragraph Vector (PV-DBOW), is identical to the skip-gram model except that it attempts to predict the window of surrounding context words from the paragraph identifier instead of the current word.
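A brief sketch using Gensim's Doc2Vec implementation (assuming the Gensim 4.x API; dm=1 corresponds to PV-DM and dm=0 to PV-DBOW):

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Toy documents; each tag plays the role of the unique paragraph identifier.
    docs = [
        TaggedDocument(words=["the", "cat", "sat", "on", "the", "mat"], tags=["doc0"]),
        TaggedDocument(words=["the", "dog", "sat", "on", "the", "rug"], tags=["doc1"]),
    ]

    model = Doc2Vec(docs, vector_size=50, window=2, min_count=1, epochs=40, dm=1)

    # Infer an embedding for a new, unseen document.
    vec = model.infer_vector(["a", "cat", "on", "a", "mat"])
    print(vec[:5])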
doc2vec also has the ability to capture the semantic 'meanings' for additional pieces of 'context' around words; doc2vec can estimate the semantic embeddings for speakers or speaker attributes, groups, and periods of time. For example, doc2vec has been used to estimate the political positions of political parties in various Congresses and Parliaments in the U.S. and U.K., respectively, and various governmental institutions.