Stemming is a method in text processing that eliminates prefixes and suffixes from words, transforming them into their fundamental or root form. The main objective of stemming is to streamline and standardize words, enhancing the effectiveness of natural language processing tasks. This article explores the stemming technique and how to perform stemming in Python.
What is Stemming in NLP?
Simplifying words to their most basic form is called stemming, and it is carried out by stemmers or stemming algorithms. For example, "chocolates" becomes "chocolate" and "retrieval" becomes "retrieve." This step is crucial for natural language processing pipelines, which operate on tokenized words obtained by first splitting a document into its constituent words.
Stemming in natural language processing reduces words to their base or root form, aiding in text normalization for easier processing. This technique is crucial in tasks like text classification, information retrieval, and text summarization. While beneficial, stemming has drawbacks, including potential impacts on text readability and occasional inaccuracies in determining the correct root form of a word.
Why is Stemming important?
It is important to note that stemming is different from lemmatization. Lemmatization also reduces a word to its base form, but unlike stemming it takes the context of the word into account and always produces a valid dictionary word, whereas stemming may produce a root form that is not a real word.
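The difference is easy to see in code. Below is a minimal sketch comparing NLTK's Porter stemmer with the WordNet lemmatizer; it assumes the WordNet corpus can be downloaded in your environment.
Python3
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# The WordNet corpus is required for lemmatization (one-time download)
nltk.download('wordnet')

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

word = "studies"
print("Stemmed:", stemmer.stem(word))            # 'studi' - not a valid English word
print("Lemmatized:", lemmatizer.lemmatize(word)) # 'study' - a valid dictionary word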
Note: Make sure you are familiar with the concept of tokenization before proceeding.
Some more examples of words that stem to the root "like" include:
->"likes"
->"liked"
->"likely"
->"liking"
Types of Stemmer in NLTK
Python's NLTK library provides a variety of stemming algorithms. Let's examine them below.
1. Porter's Stemmer
It is one of the most popular stemming methods, proposed by Martin Porter in 1980. It is based on the idea that the suffixes in the English language are made up of combinations of smaller and simpler suffixes. This stemmer is known for its speed and simplicity. The main applications of the Porter stemmer include data mining and information retrieval. However, it is limited to English words. Also, a group of related words is mapped onto the same stem, and the output stem is not necessarily a meaningful word. The algorithm is fairly lengthy and is one of the oldest stemmers.
Example: EED -> EE means "if the word has at least one vowel and consonant followed by the ending EED, change the ending to EE", so 'agreed' becomes 'agree'.
Implementation of Porter Stemmer
Python3
from nltk.stem import PorterStemmer
# Create a Porter Stemmer instance
porter_stemmer = PorterStemmer()
# Example words for stemming
words = ["running", "jumps", "happily", "running", "happily"]
# Apply stemming to each word
stemmed_words = [porter_stemmer.stem(word) for word in words]
# Print the results
print("Original words:", words)
print("Stemmed words:", stemmed_words)
Output:
Original words: ['running', 'jumps', 'happily', 'running', 'happily']
Stemmed words: ['run', 'jump', 'happili', 'run', 'happili']
- Advantage: It produces good output compared to other stemmers and has a lower error rate.
- Limitation: Morphological variants produced are not always real words.
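In practice, stemming is applied to tokenized text rather than isolated word lists. A minimal sketch combining NLTK's word_tokenize with the Porter stemmer is shown below; it assumes the 'punkt' tokenizer models can be downloaded in your environment.
Python3
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer

# Tokenizer models are required for word_tokenize (one-time download)
nltk.download('punkt')

porter_stemmer = PorterStemmer()

sentence = "The children were running and jumping happily"
tokens = word_tokenize(sentence)
stemmed_tokens = [porter_stemmer.stem(token) for token in tokens]

print("Tokens:", tokens)
print("Stemmed tokens:", stemmed_tokens)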
2. Lovins Stemmer
Proposed by Lovins in 1968, this stemmer removes the longest suffix from a word, and the stem is then recoded to convert it into a valid word.
Example: sitting -> sitt -> sit
- Advantage: It is fast, single-pass, and handles irregular forms such as 'teeth' and 'tooth'.
- Limitation: It is data and space consuming because of its large suffix table, and it frequently fails to produce a valid word from the stem.
3. Dawson Stemmer
It is an extension of Lovins stemmer in which suffixes are stored in the reversed order indexed by their length and last letter.
- Advantage: It is fast in execution and covers more suffixes than the Lovins stemmer.
- Limitation: It is very complex to implement.
4. Krovetz Stemmer
It was proposed by Robert Krovetz in 1993. It follows these steps:
1) Convert the plural form of a word to its singular form.
2) Convert the past tense of a word to its present tense and remove the suffix 'ing'.
Example: 'children' -> 'child'
- Advantage: It is light in nature and can be used as a pre-stemmer for other stemmers.
- Limitation: It is inefficient for large documents.
5. Xerox Stemmer
It is capable of processing extensive datasets and generating valid words, but it has a tendency to over-stem. Because it relies on lexicons, it is language-dependent, which means its effectiveness is limited to specific languages.
Example:
'children' -> 'child'
'understood' -> 'understand'
'whom' -> 'who'
'best' -> 'good'
6. N-Gram Stemmer
This method, aptly named after n-grams (typically n=2 or 3), involves breaking words into character segments of length n and then applying statistical analysis to identify patterns. An n-gram is a sequence of n consecutive characters extracted from a word; similar words share a high proportion of their n-grams.
Example: ‘INTRODUCTIONS’ for n=2 becomes : *I, IN, NT, TR, RO, OD, DU, UC, CT, TI, IO, ON, NS, S*
- Advantage: It is based on simple string comparisons and is largely language independent.
- Limitation: It requires space to create and index the n-grams and it is not time efficient.
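NLTK does not ship an n-gram stemmer, but the idea described above can be sketched in a few lines: extract character bigrams (padded with '*' as in the example) and compare words by the proportion of shared n-grams using the Dice coefficient. The helper names below are illustrative, not a standard API.
Python3
def char_ngrams(word, n=2):
    # Pad with '*' to mark word boundaries, as in the INTRODUCTIONS example
    padded = '*' + word.lower() + '*'
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def dice_similarity(word1, word2, n=2):
    # Proportion of n-grams the two words have in common
    grams1, grams2 = set(char_ngrams(word1, n)), set(char_ngrams(word2, n))
    return 2 * len(grams1 & grams2) / (len(grams1) + len(grams2))

print(char_ngrams("introductions"))
print(dice_similarity("introduction", "introductions"))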
7. Snowball Stemmer
The Snowball Stemmer, compared to the Porter Stemmer, is multi-lingual as it can handle non-English words. It supports various languages and is based on the 'Snowball' programming language, known for efficient processing of small strings.
The Snowball stemmer is somewhat more aggressive than the Porter stemmer and is also referred to as the Porter2 stemmer. Because of the improvements it adds over the Porter stemmer, the Snowball stemmer is also computationally faster.
Implementation of Snowball Stemmer
Python3
from nltk.stem import SnowballStemmer
# Choose a language for stemming, for example, English
stemmer = SnowballStemmer(language='english')
# Example words to stem
words_to_stem = ['running', 'jumped', 'happily', 'quickly', 'foxes']
# Apply Snowball Stemmer
stemmed_words = [stemmer.stem(word) for word in words_to_stem]
# Print the results
print("Original words:", words_to_stem)
print("Stemmed words:", stemmed_words)
Output:
Original words: ['running', 'jumped', 'happily', 'quickly', 'foxes']
Stemmed words: ['run', 'jump', 'happili', 'quick', 'fox']
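Because the Snowball stemmer is multi-lingual, the same class can be used for other languages. A brief sketch is shown below; the exact stems produced depend on the language data bundled with your NLTK version.
Python3
from nltk.stem import SnowballStemmer

# Languages supported by the installed NLTK version
print(SnowballStemmer.languages)

# Stemming a few German words (illustrative)
german_stemmer = SnowballStemmer(language='german')
print([german_stemmer.stem(word) for word in ['Autos', 'laufen', 'Kinder']])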
8. Lancaster Stemmer
The Lancaster stemmer is more aggressive and dynamic than the Porter and Snowball stemmers. It is very fast, but its heavy truncation can produce confusing results for short words, and it is generally not as effective as the Snowball stemmer. The Lancaster stemmer stores its rules externally and uses an iterative algorithm.
Implementation of Lancaster Stemmer
Python3
from nltk.stem import LancasterStemmer
# Create a Lancaster Stemmer instance
stemmer = LancasterStemmer()
# Example words to stem
words_to_stem = ['running', 'jumped', 'happily', 'quickly', 'foxes']
# Apply Lancaster Stemmer
stemmed_words = [stemmer.stem(word) for word in words_to_stem]
# Print the results
print("Original words:", words_to_stem)
print("Stemmed words:", stemmed_words)
Output:
Original words: ['running', 'jumped', 'happily', 'quickly', 'foxes']
Stemmed words: ['run', 'jump', 'happy', 'quick', 'fox']
9. Regexp Stemmer
The Regexp Stemmer, or Regular Expression Stemmer, is a stemming algorithm that utilizes regular expressions to identify and remove suffixes from words. It allows users to define custom rules for stemming by specifying patterns to match and remove.
This method provides flexibility and control over the stemming process, making it suitable for specific applications where custom rule-based stemming is desired.
Implementation of Regexp Stemmer
Python3
from nltk.stem import RegexpStemmer
# Create a Regexp Stemmer with a custom rule
custom_rule = r'ing$'
regexp_stemmer = RegexpStemmer(custom_rule)
# Apply the stemmer to a word
word = 'running'
stemmed_word = regexp_stemmer.stem(word)
print(f'Original Word: {word}')
print(f'Stemmed Word: {stemmed_word}')
Output:
Original Word: running
Stemmed Word: runn
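Several suffix patterns can be combined into one regular expression, and the stemmer's min parameter sets a minimum word length below which words are left untouched. A short sketch:
Python3
from nltk.stem import RegexpStemmer

# Strip common suffixes, but only from words of at least 4 characters
regexp_stemmer = RegexpStemmer(r'ing$|s$|e$|able$', min=4)

for word in ['running', 'cars', 'was', 'advisable']:
    print(word, '->', regexp_stemmer.stem(word))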
Applications of Stemming
- Stemming is used in information retrieval systems like search engines (a small matching sketch follows this list).
- It is used to determine domain vocabularies in domain analysis.
- It is used in indexing: representing documents by word stems keeps the index compact and lets documents be mapped to common topics, so relevant search results can be displayed even when query terms differ in form from the document text.
- Sentiment analysis, which examines reviews and comments made by different users about a product or service, is frequently used for product analysis, for example by online retail stores. Stemming is applied as a text-preparation step before the text is interpreted.
- Document clustering (also known as text clustering) is a method of group analysis applied to textual materials. Important uses include topic extraction, automatic document organization, and fast information retrieval.
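As a concrete illustration of the information-retrieval use case mentioned above, a query term and document words can be reduced to their stems before matching. A minimal sketch using the Porter stemmer:
Python3
from nltk.stem import PorterStemmer

porter_stemmer = PorterStemmer()

document_words = ['retrieval', 'retrieving', 'retrieves', 'systems']
query_word = 'retrieve'

# Compare stems instead of surface forms
document_stems = {porter_stemmer.stem(word) for word in document_words}
matches = porter_stemmer.stem(query_word) in document_stems
print("Query matches document:", matches)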
Disadvantages of Stemming
There are mainly two kinds of errors in stemming:
- Over-stemming: Over-stemming occurs when a stemmer removes too much of a word, so that unrelated words are collapsed to the same stem or the stem itself is not a valid word. This can result in a loss of meaning and readability; for instance, "universal", "university" and "universe" may all be reduced to "univers" even though they mean different things. Choosing a less aggressive stemmer, testing it on sample text, or switching to lemmatization can mitigate over-stemming (see the short demonstration below).
- Under-stemming: Under-stemming occurs when a stemmer removes too little, so that related words are not reduced to the same stem. This hinders text analysis because variants of the same word are treated as unrelated terms; for instance, "data" and "datum" may be stemmed to "data" and "datum" respectively. Selecting an appropriate stemmer, testing it on sample text, or opting for lemmatization can help.
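Both error types can be observed directly with the Porter stemmer. The word pairs below are common textbook examples:
Python3
from nltk.stem import PorterStemmer

porter_stemmer = PorterStemmer()

# Over-stemming: distinct words collapse to the same stem
print([porter_stemmer.stem(word) for word in ['universal', 'university', 'universe']])

# Under-stemming: related words keep different stems
print([porter_stemmer.stem(word) for word in ['data', 'datum']])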
Advantages of Stemming
Stemming in natural language processing offers advantages such as text normalization, simplifying word variations to a common base form. It aids in information retrieval, text mining, and reduces feature dimensionality in machine learning. Stemming enhances computational efficiency, making it a valuable step in text pre-processing for various NLP applications.
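The dimensionality-reduction benefit is easy to demonstrate: several surface forms of a word collapse into one feature after stemming. A minimal sketch:
Python3
from nltk.stem import PorterStemmer

porter_stemmer = PorterStemmer()

tokens = ['connect', 'connected', 'connecting', 'connection', 'connections']
stems = [porter_stemmer.stem(token) for token in tokens]

# Five surface forms reduce to a single feature after stemming
print("Unique tokens:", len(set(tokens)))
print("Unique stems:", len(set(stems)), set(stems))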