Language
Modeling
Introduction to N-grams
Dan Jurafsky
Probabilistic Language Models
• Today’s goal: assign a probability to a sentence
• Machine Translation:
• P(high winds tonite) > P(large winds tonite)
• Spell Correction
• The office is about fifteen minuets from my house
• P(about fifteen minutes from) > P(about fifteen minuets from)
• Speech Recognition
• P(I saw a van) >> P(eyes awe of an)
• + Summarization, question-answering, etc., etc.!!
Dan Jurafsky
Probabilistic Language Modeling
• Goal: compute the probability of a sentence or
sequence of words:
P(W) = P(w1,w2,w3,w4,w5…wn)
• Related task: probability of an upcoming word:
P(w5|w1,w2,w3,w4)
• A model that computes either of these:
P(W) or P(wn|w1,w2…wn-1) is called a language model.
• Better: the grammar! But language model or LM is standard
Dan Jurafsky
How to compute P(W)
• How to compute this joint probability:
• P(its, water, is, so, transparent, that)
• Intuition: let’s rely on the Chain Rule of Probability
Dan Jurafsky
Reminder: The Chain Rule
• Recall the definition of conditional probabilities
P(B|A) = P(A,B)/P(A)   Rewriting: P(A,B) = P(A)P(B|A)
• More variables:
P(A,B,C,D) = P(A)P(B|A)P(C|A,B)P(D|A,B,C)
• The Chain Rule in General
P(x1,x2,x3,…,xn) = P(x1)P(x2|x1)P(x3|x1,x2)…P(xn|x1,…,xn-1)
Dan Jurafsky
The Chain Rule applied to compute joint probability of words in a sentence
P(“its water is so transparent”) =
P(its) × P(water|its) × P(is|its water)
× P(so|its water is) × P(transparent|its water is so)
Dan Jurafsky
How to estimate these probabilities
• Could we just count and divide?
• No! Too many possible sentences!
• We’ll never see enough data for estimating these
Dan Jurafsky
Markov Assumption
• Simplifying assumption:
P(the | its water is so transparent that) ≈ P(the | that)
• Or maybe:
P(the | its water is so transparent that) ≈ P(the | transparent that)
Andrei Markov
Dan Jurafsky
Markov Assumption
• In other words, we approximate each
component in the product
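In equation form (the slide's formulas are not reproduced above), a standard statement of the Markov approximation is:

$$P(w_1 w_2 \ldots w_n) \approx \prod_{i} P(w_i \mid w_{i-k} \ldots w_{i-1})$$

i.e., each component in the product is approximated as

$$P(w_i \mid w_1 w_2 \ldots w_{i-1}) \approx P(w_i \mid w_{i-k} \ldots w_{i-1})$$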
Dan Jurafsky
Simplest case: Unigram model
Some automatically generated sentences from a unigram model
fifth, an, of, futures, the, an, incorporated, a,
a, the, inflation, most, dollars, quarter, in, is,
mass
thrift, did, eighty, said, hard, 'm, july, bullish
that, or, limited, the
Dan Jurafsky
Bigram model
Condition on the previous word:
texaco, rose, one, in, this, issue, is, pursuing, growth, in,
a, boiler, house, said, mr., gurria, mexico, 's, motion,
control, proposal, without, permission, from, five, hundred,
fifty, five, yen
outside, new, car, parking, lot, of, the, agreement, reached
this, would, be, a, record, november
Dan Jurafsky
N-gram models
• We can extend to trigrams, 4-grams, 5-grams
• In general this is an insufficient model of language
• because language has long-distance dependencies:
“The computer which I had just put into the machine room on
the fifth floor crashed.”
• But we can often get away with N-gram models
Language
Modeling
Introduction to N-grams
Language
Modeling
Estimating N-gram
Probabilities
Dan Jurafsky
Estimating bigram probabilities
• The Maximum Likelihood Estimate
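The estimate itself (an equation on the original slide) is the familiar relative frequency:

$$P_{\mathrm{MLE}}(w_i \mid w_{i-1}) = \frac{C(w_{i-1}\,w_i)}{C(w_{i-1})}$$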
Dan Jurafsky
An example
<s> I am Sam </s>
<s> Sam I am </s>
<s> I do not like green eggs and ham </s>
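A minimal Python sketch of how such bigram estimates could be computed from this toy corpus (the code and names are illustrative, not from any particular toolkit):

from collections import Counter, defaultdict

corpus = [
    "<s> I am Sam </s>",
    "<s> Sam I am </s>",
    "<s> I do not like green eggs and ham </s>",
]

unigram_counts = Counter()
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    unigram_counts.update(words)
    for prev, curr in zip(words, words[1:]):
        bigram_counts[prev][curr] += 1

def p_mle(word, prev):
    """Maximum likelihood estimate: P(word | prev) = C(prev word) / C(prev)."""
    return bigram_counts[prev][word] / unigram_counts[prev]

print(p_mle("I", "<s>"))   # 2/3
print(p_mle("Sam", "am"))  # 1/2
print(p_mle("do", "I"))    # 1/3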
Dan Jurafsky
More examples:
Berkeley Restaurant Project sentences
• can you tell me about any good cantonese restaurants close by
• mid priced thai food is what i’m looking for
• tell me about chez panisse
• can you give me a listing of the kinds of food that are available
• i’m looking for a good place to eat breakfast
• when is caffe venezia open during the day
Dan Jurafsky
Raw bigram counts
• Out of 9222 sentences
Dan Jurafsky
Raw bigram probabilities
• Normalize by unigrams:
• Result:
Dan Jurafsky
Bigram estimates of sentence probabilities
P(<s> I want english food </s>) =
P(I|<s>)
× P(want|I)
× P(english|want)
× P(food|english)
× P(</s>|food)
= .000031
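For reference, this product comes from the bigram estimates in the Berkeley Restaurant tables (a few of these values appear on the next slide), roughly:

$$P(\texttt{<s> I want english food </s>}) \approx .25 \times .33 \times .0011 \times .5 \times .68 \approx .000031$$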
Dan Jurafsky
What kinds of knowledge?
• P(english|want) = .0011
• P(chinese|want) = .0065
• P(to|want) = .66
• P(eat | to) = .28
• P(food | to) = 0
• P(want | spend) = 0
• P (i | <s>) = .25
Dan Jurafsky
Practical Issues
• We do everything in log space
• Avoid underflow
• (also adding is faster than multiplying)
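A minimal sketch of the log-space computation (the probability values are the approximate ones from the sentence above):

import math

# Bigram probabilities for "<s> I want english food </s>" (approximate).
probs = [0.25, 0.33, 0.0011, 0.5, 0.68]

# Multiplying many small probabilities risks underflow; summing logs does not.
log_prob = sum(math.log(p) for p in probs)
print(log_prob)            # ≈ -10.39
print(math.exp(log_prob))  # ≈ 0.000031, same result as multiplying directly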
Dan Jurafsky
Language Modeling Toolkits
• SRILM
• http://www.speech.sri.com/projects/srilm/
• KenLM
• https://kheafield.com/code/kenlm/
Dan Jurafsky
Google N-Gram Release, August 2006
…
Dan Jurafsky
Google N-Gram Release
• serve as the incoming 92
• serve as the incubator 99
• serve as the independent 794
• serve as the index 223
• serve as the indication 72
• serve as the indicator 120
• serve as the indicators 45
• serve as the indispensable 111
• serve as the indispensible 40
• serve as the individual 234
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
Dan Jurafsky
Google Book N-grams
• http://ngrams.googlelabs.com/
Language
Modeling
Estimating N-gram
Probabilities
Language
Modeling
Evaluation and Perplexity
How to evaluate N-gram models
"Extrinsic (in-vivo) Evaluation"
To compare models A and B
1. Put each model in a real task
• Machine Translation, speech recognition, etc.
2. Run the task, get a score for A and for B
• How many words translated correctly
• How many words transcribed correctly
3. Compare accuracy for A and B
Intrinsic (in-vitro) evaluation
Extrinsic evaluation not always possible
• Expensive, time-consuming
• Doesn't always generalize to other applications
Intrinsic evaluation: perplexity
• Directly measures language model performance at
predicting words.
• Doesn't necessarily correspond with real application
performance
• But gives us a single general metric for language models
• Useful for large language models (LLMs) as well as n-grams
Training sets and test sets
We train parameters of our model on a training set.
We test the model’s performance on data we
haven’t seen.
◦ A test set is an unseen dataset; different from training set.
◦ Intuition: we want to measure generalization to unseen data
◦ An evaluation metric (like perplexity) tells us how well
our model does on the test set.
Choosing training and test sets
• If we're building an LM for a specific task
• The test set should reflect the task language we
want to use the model for
• If we're building a general-purpose model
• We'll need lots of different kinds of training data
• We don't want the training set or the test set to
be just from one domain or author or language.
Training on the test set
We can’t allow test sentences into the training set
• Or else the LM will assign that sentence an artificially
high probability when we see it in the test set
• And hence assign the whole test set a falsely high
probability.
• Making the LM look better than it really is
This is called “Training on the test set”
Bad science!
Dev sets
• If we test on the test set many times we might
implicitly tune to its characteristics
• Noticing which changes make the model better.
• So we run on the test set only once, or a few times
• That means we need a third dataset:
• A development test set, or devset.
• We test our LM on the devset until the very end
• And then test our LM on the test set once
Intuition of perplexity as evaluation metric: How
good is our language model?
Intuition: A good LM prefers "real" sentences
• Assigns higher probability to “real” or “frequently observed” sentences
• Assigns lower probability to “word salad” or “rarely observed” sentences
Intuition of perplexity 2: Predicting upcoming words
The Shannon Game: How well can we predict the next word?
• Once upon a ____
• That is a picture of a ____
• For breakfast I ate my usual ____
Example next-word distribution (for the first blank):
time 0.9
dream 0.03
midnight 0.02
…
and 1e-100
Claude Shannon
Unigrams are terrible at this game (Why?)
A good LM is one that assigns a higher probability to the next word that actually occurs
Picture credit: Historiska bildsamlingen
https://creativecommons.org/licenses/by/2.0/
Intuition of perplexity 3: The best language model is
one that best predicts the entire unseen test set
• We said: a good LM is one that assigns a higher
probability to the next word that actually occurs.
• Let's generalize to all the words!
• The best LM assigns high probability to the entire test
set.
• When comparing two LMs, A and B
• We compute P_A(test set) and P_B(test set)
• The better LM will give a higher probability to (=be less
surprised by) the test set than the other LM.
Intuition of perplexity 4: Use perplexity
instead of raw probability
• Probability depends on size of test set
• Probability gets smaller the longer the text
• Better: a metric that is per-word, normalized by length
• Perplexity is the inverse probability of the test set,
normalized by the number of words
Intuition of perplexity 5: the inverse
Perplexity is the inverse probability of the test set, normalized
by the number of words
(The inverse comes from the original definition of perplexity
from cross-entropy rate in information theory)
Probability range is [0,1], perplexity range is [1,∞]
Minimizing perplexity is the same as maximizing probability
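Written out (the slide gives this as an equation), the perplexity of a test set W = w1 w2 … wN is:

$$\mathrm{PP}(W) = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 \ldots w_N)}}$$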
Intuition of perplexity 6: N-grams
Chain rule:
Bigrams:
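In standard textbook form, the two expansions are:

$$\mathrm{PP}(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_1 \ldots w_{i-1})}} \qquad\text{(chain rule)}$$

$$\mathrm{PP}(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_{i-1})}} \qquad\text{(bigrams)}$$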
Intuition of perplexity 7:
Weighted average branching factor
Perplexity is also the weighted average branching factor of a language.
Branching factor: number of possible next words that can follow any word
Example: Deterministic language L = {red, blue, green}
Branching factor = 3 (any word can be followed by red, blue, green)
Now assume LM A where each word follows any other word with equal probability ⅓
Given a test set T = "red red red red blue"
Perplexity_A(T) = P_A(red red red red blue)^(-1/5) = ((⅓)^5)^(-1/5) = (⅓)^(-1) = 3
But now suppose "red" was very likely in the training set, such that for LM B:
◦ P(red) = .8, P(green) = .1, P(blue) = .1
We would expect the probability to be higher, and hence the perplexity to be smaller:
Perplexity_B(T) = P_B(red red red red blue)^(-1/5)
= (.8 × .8 × .8 × .8 × .1)^(-1/5) = .04096^(-1/5) = .527^(-1) = 1.89
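A quick numerical check of these two values (a Python sketch using the toy models above):

# Perplexity of T = "red red red red blue" under the two toy LMs.
probs_A = [1/3] * 5                    # LM A: uniform over {red, blue, green}
probs_B = [0.8, 0.8, 0.8, 0.8, 0.1]    # LM B: P(red)=.8, P(blue)=.1

def perplexity(word_probs):
    """Inverse probability of the word sequence, normalized by its length."""
    p = 1.0
    for prob in word_probs:
        p *= prob
    return p ** (-1 / len(word_probs))

print(perplexity(probs_A))  # ≈ 3.0
print(perplexity(probs_B))  # ≈ 1.89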
Holding test set constant:
Lower perplexity = better language model
Training 38 million words, test 1.5 million words, WSJ
N-gram Order   Unigram   Bigram   Trigram
Perplexity     962       170      109
Language
Modeling
Evaluation and Perplexity
Language
Modeling
Sampling and Generalization
The Shannon (1948) Visualization Method
Sample words from an LM
Claude Shannon
Unigram:
REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME
CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO
OF TO EXPERT GRAY COME TO FURNISHES THE LINE
MESSAGE HAD BE THESE.
Bigram:
THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER
THAT THE CHARACTER OF THIS POINT IS THEREFORE
ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO
EVER TOLD THE PROBLEM FOR AN UNEXPECTED.
How Shannon sampled those words in 1948
"Open a book at random and select a letter at random on the
page. This letter is recorded. The book is then opened to another
page and one reads until this letter is encountered. The
succeeding letter is then recorded. Turning to another page this
second letter is searched for and the succeeding letter recorded,
etc."
Sampling a word from a
distribution
Visualizing Bigrams the Shannon Way
Choose a random bigram (<s>, w) according to its probability P(w | <s>)
Now choose a random bigram (w, x) according to its probability P(x | w)
And so on until we choose </s>
Then string the words together

Example:
<s> I
    I want
       want to
          to eat
             eat Chinese
                Chinese food
                   food </s>
I want to eat Chinese food
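A minimal sketch of this sampling procedure in Python (the probability table here is a toy stand-in, not estimated from real data):

import random

# Toy bigram distributions P(next | prev); a real table would be estimated from a
# corpus and would give each context many possible next words.
bigram_probs = {
    "<s>":     {"I": 1.0},
    "I":       {"want": 1.0},
    "want":    {"to": 1.0},
    "to":      {"eat": 1.0},
    "eat":     {"Chinese": 1.0},
    "Chinese": {"food": 1.0},
    "food":    {"</s>": 1.0},
}

def sample_sentence(model, max_len=20):
    """Sample words from P(. | prev), starting at <s>, until </s> (or max_len)."""
    prev, words = "<s>", []
    for _ in range(max_len):
        candidates = list(model[prev].keys())
        weights = list(model[prev].values())
        nxt = random.choices(candidates, weights=weights)[0]
        if nxt == "</s>":
            break
        words.append(nxt)
        prev = nxt
    return " ".join(words)

print(sample_sentence(bigram_probs))  # "I want to eat Chinese food"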
Note: there are other sampling
methods
Used for neural language models
Many of them avoid generating words from the very
unlikely tail of the distribution
We'll discuss when we get to neural LM decoding:
◦ Temperature sampling
◦ Top-k sampling
◦ Top-p sampling
Approximating Shakespeare
Shakespeare as corpus
N=884,647 tokens, V=29,066
Shakespeare produced 300,000 bigram types out of
V² = 844 million possible bigrams.
◦ So 99.96% of the possible bigrams were never seen (have
zero entries in the table)
◦ That sparsity is even worse for 4-grams, explaining why
our sampling generated actual Shakespeare.
The Wall Street Journal is not
Shakespeare
Can you guess the author? These 3-gram
sentences are sampled from an LM trained on who?
1) They also point to ninety nine point six
billion dollars from two hundred four oh
six three percent of the rates of interest
stores as Mexico and gram Brazil on market
conditions
2) This shall forbid it should be branded,
if renown made it empty.
3) “You are uniformly charming!” cried he,
with a smile of associating and now and
then I bowed and they perceived a chaise
and four to wish for.
Choosing training data
If task-specific, use a training corpus that has a similar
genre to your task.
• If legal or medical, need lots of special-purpose documents
Make sure to cover different kinds of dialects and
speakers/authors.
• Example: African-American Vernacular English (AAVE)
• One of many varieties that can be used by African Americans and others
• Can include the auxiliary verb finna that marks immediate future tense:
• "My phone finna die"
The perils of overfitting
N-grams only work well for word prediction if the
test corpus looks like the training corpus
• But even when we try to pick a good training
corpus, the test set will surprise us!
• We need to train robust models that generalize!
One kind of generalization: Zeros
• Things that don’t ever occur in the training set
• But occur in the test set
Zeros
Training set:
… ate lunch
… ate dinner
… ate a
… ate the

Test set:
… ate lunch
… ate breakfast

P(“breakfast” | ate) = 0
Zero probability bigrams
Bigrams with zero probability
◦ Will hurt our performance for texts where those words
appear!
◦ And mean that we will assign 0 probability to the test set!
And hence we cannot compute perplexity (can’t
divide by 0)!
Language
Modeling
Sampling and Generalization
Language
Modeling
Smoothing: Add-one
(Laplace) smoothing
Dan Jurafsky
The intuition of smoothing (from Dan Klein)
• When we have sparse statistics:
P(w | denied the)
3 allegations
2 reports
1 claims
1 request
7 total
• Steal probability mass to generalize better:
P(w | denied the)
2.5 allegations
1.5 reports
0.5 claims
0.5 request
2 other
7 total
Dan Jurafsky
Add-one estimation
• Also called Laplace smoothing
• Pretend we saw each word one more time than we did
• Just add one to all the counts!
• MLE estimate:
• Add-1 estimate:
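In standard form, the two estimates (shown as equations on the original slide) are:

$$P_{\mathrm{MLE}}(w_i \mid w_{i-1}) = \frac{C(w_{i-1}\,w_i)}{C(w_{i-1})}
\qquad
P_{\mathrm{Add\text{-}1}}(w_i \mid w_{i-1}) = \frac{C(w_{i-1}\,w_i) + 1}{C(w_{i-1}) + V}$$

where V is the vocabulary size.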
Dan Jurafsky
Maximum Likelihood Estimates
• The maximum likelihood estimate
• of some parameter of a model M from a training set T
• maximizes the likelihood of the training set T given the model M
• Suppose the word “bagel” occurs 400 times in a corpus of a million words
• What is the probability that a random word from some other text will be
“bagel”?
• MLE estimate is 400/1,000,000 = .0004
• This may be a bad estimate for some other corpus
• But it is the estimate that makes it most likely that “bagel” will occur 400 times in
a million word corpus.
Dan Jurafsky
Berkeley Restaurant Corpus: Laplace
smoothed bigram counts
Dan Jurafsky
Laplace-smoothed bigrams
Dan Jurafsky
Reconstituted counts
Dan Jurafsky
Compare with raw bigram counts
Dan Jurafsky
Add-1 estimation is a blunt instrument
• So add-1 isn’t used for N-grams:
• We’ll see better methods
• But add-1 is used to smooth other NLP models
• For text classification
• In domains where the number of zeros isn’t so huge.
Language
Modeling
Smoothing: Add-one
(Laplace) smoothing
Language
Modeling
Interpolation, Backoff,
and Web-Scale LMs
Dan Jurafsky
Backoff and Interpolation
• Sometimes it helps to use less context
• Condition on less context for contexts you haven’t learned much about
• Backoff:
• use trigram if you have good evidence,
• otherwise bigram, otherwise unigram
• Interpolation:
• mix unigram, bigram, trigram
• Interpolation works better
Dan Jurafsky
Linear Interpolation
• Simple interpolation
• Lambdas conditional on context:
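The simple interpolation equation (not reproduced above) has the standard form:

$$\hat{P}(w_n \mid w_{n-2} w_{n-1}) = \lambda_1 P(w_n \mid w_{n-2} w_{n-1}) + \lambda_2 P(w_n \mid w_{n-1}) + \lambda_3 P(w_n),
\qquad \sum_i \lambda_i = 1$$

In the context-conditioned version, each λ_i becomes a function of the preceding words, λ_i(w_{n-2} w_{n-1}).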
Dan Jurafsky
How to set the lambdas?
• Use a held-out corpus
Training Data | Held-Out Data | Test Data
• Choose λs to maximize the probability of held-out data:
• Fix the N-gram probabilities (on the training data)
• Then search for λs that give largest probability to held-out set:
Dan Jurafsky
Unknown words: Open versus closed
vocabulary tasks
• If we know all the words in advance
• Vocabulary V is fixed
• Closed vocabulary task
• Often we don’t know this
• Out Of Vocabulary = OOV words
• Open vocabulary task
• Instead: create an unknown word token <UNK>
• Training of <UNK> probabilities
• Create a fixed lexicon L of size V
• At text normalization phase, any training word not in L changed to <UNK>
• Now we train its probabilities like a normal word
• At decoding time
• If text input: Use UNK probabilities for any word not in training
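A minimal sketch of the <UNK> replacement step (the lexicon size and function names are illustrative):

from collections import Counter

def replace_rare_with_unk(tokenized_sentences, lexicon_size=50000):
    """Keep the lexicon_size most frequent words; map every other word to <UNK>."""
    counts = Counter(w for sent in tokenized_sentences for w in sent)
    lexicon = {w for w, _ in counts.most_common(lexicon_size)}
    return [[w if w in lexicon else "<UNK>" for w in sent]
            for sent in tokenized_sentences]

# At decoding time, apply the same lexicon to the input text,
# so unseen words receive the <UNK> probabilities learned in training.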
Dan Jurafsky
Huge web-scale n-grams
• How to deal with, e.g., Google N-gram corpus
• Pruning
• Only store N-grams with count > threshold.
• Remove singletons of higher-order n-grams
• Entropy-based pruning
• Efficiency
• Efficient data structures like tries
• Bloom filters: approximate language models
• Store words as indexes, not strings
• Use Huffman coding to fit large numbers of words into two bytes
• Quantize probabilities (4-8 bits instead of 8-byte float)
Dan Jurafsky
Smoothing for Web-scale N-grams
• “Stupid backoff” (Brants et al. 2007)
• No discounting, just use relative frequencies
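The score from Brants et al. (2007), roughly, backs off with a fixed weight instead of a proper discount:

$$S(w_i \mid w_{i-k+1}^{\,i-1}) =
\begin{cases}
\dfrac{\mathrm{count}(w_{i-k+1}^{\,i})}{\mathrm{count}(w_{i-k+1}^{\,i-1})} & \text{if } \mathrm{count}(w_{i-k+1}^{\,i}) > 0\\[1.5ex]
0.4 \cdot S(w_i \mid w_{i-k+2}^{\,i-1}) & \text{otherwise}
\end{cases}
\qquad S(w_i) = \frac{\mathrm{count}(w_i)}{N}$$

Note that S is a score, not a true probability distribution.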
Dan Jurafsky
N-gram Smoothing Summary
• Add-1 smoothing:
• OK for text categorization, not for language modeling
• The most commonly used method:
• Extended Interpolated Kneser-Ney
• For very large N-grams like the Web:
• Stupid backoff
Dan Jurafsky
Advanced Language Modeling
• Discriminative models:
• choose n-gram weights to improve a task, not to fit the
training set
• Parsing-based models
• Caching Models
• Recently used words are more likely to appear
• These perform very poorly for speech recognition (why?)
Language
Modeling
Interpolation, Backoff,
and Web-Scale LMs