Natural Language Understanding: Fundamentals and Applications
Ebook · 102 pages · 1 hour · Artificial Intelligence


About this ebook

What Is Natural Language Understanding


Natural-language understanding (NLU), also known as natural-language interpretation (NLI), is a subfield of natural-language processing in artificial intelligence that deals with machine reading comprehension. Understanding natural language is considered a difficult problem for artificial intelligence.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Natural Language Understanding


Chapter 2: Computational Linguistics


Chapter 3: Natural Language Processing


Chapter 4: Parsing


Chapter 5: Question Answering


Chapter 6: Semantic Role Labeling


Chapter 7: Computational Semantics


Chapter 8: Semantic Parsing


Chapter 9: Natural-language User Interface


Chapter 10: History of Natural Language Processing


(II) Answers to the public's top questions about natural language understanding.


(III) Real-world examples of the use of natural language understanding in many fields.


(IV) 17 appendices briefly explaining 266 emerging technologies in each industry, for a full 360-degree understanding of natural language understanding technologies.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of natural language understanding.

Language: English
Publisher: One Billion Knowledgeable
Release date: Jul 5, 2023


    Book preview

    Natural Language Understanding - Fouad Sabry

    Chapter 1: Natural-language understanding

    Natural-language understanding (NLU), or natural-language interpretation (NLI), is applied in news gathering, text classification, voice-activated search, archiving, and large-scale content analysis.

    One of the earliest attempts at computerized natural-language understanding was STUDENT, a 1964 program written by Daniel Bobrow for his PhD dissertation at MIT. In that dissertation, titled Natural Language Input for a Computer Problem Solving System and written eight years after John McCarthy coined the term artificial intelligence, Bobrow demonstrated how a computer could use natural-language input to solve algebra word problems.

    In 1966, MIT's Joseph Weizenbaum created ELIZA, an interactive program that allowed users to hold a conversation with it in English on any topic, psychotherapy being by far the most popular. Weizenbaum sidestepped the problem of giving ELIZA a database of real-world knowledge or a rich lexicon by having the program work solely through parsing and the substitution of key words into canned phrases. Nonetheless, ELIZA became surprisingly popular as a plaything, and it can be viewed as an early forerunner of modern commercial systems such as the one used by Ask.com. In 1969, Roger Schank at Stanford University introduced conceptual dependency theory for natural-language understanding; this model, influenced in part by the work of Sydney Lamb, was used extensively by Schank's students at Yale University, including Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

    In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural-language input. Instead of phrase-structure rules, ATNs used a set of finite-state automata that were called recursively. ATNs, and their more general form, "generalized ATNs," continued to be used for a number of years.
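    What follows is a minimal, hypothetical sketch of the underlying idea: a recursive transition network, that is, an ATN stripped of the registers and tests the full formalism adds. The toy grammar, lexicon, and function names are invented for illustration and are not taken from Woods's system.

        # A toy recursive transition network: each sub-network is a small
        # finite-state automaton, and an arc labeled with another network's
        # name is followed by invoking that network recursively.
        LEXICON = {"the": "DET", "a": "DET", "dog": "N", "cat": "N", "saw": "V"}

        NETWORKS = {
            "S":  {0: [("NP", 1)], 1: [("V", 2)], 2: [("NP", "final")]},
            "NP": {0: [("DET", 1)], 1: [("N", "final")]},
        }

        def traverse(net, tokens, pos, state=0):
            """Return the position after `net` accepts a prefix of tokens, else None."""
            if state == "final":
                return pos
            for label, next_state in NETWORKS[net].get(state, []):
                if label in NETWORKS:  # push into a sub-network, as ATN arcs do
                    after = traverse(label, tokens, pos)
                    if after is not None:
                        result = traverse(net, tokens, after, next_state)
                        if result is not None:
                            return result
                elif pos < len(tokens) and LEXICON.get(tokens[pos]) == label:
                    result = traverse(net, tokens, pos + 1, next_state)
                    if result is not None:
                        return result
            return None

        tokens = "the dog saw a cat".split()
        print(traverse("S", tokens, 0) == len(tokens))  # True: sentence accepted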

    SHRDLU, the program Terry Winograd developed for his MIT doctoral thesis, completed in 1971, was able to understand basic English sentences and use them to guide a robotic arm within the confines of a world made of children's blocks. The success of SHRDLU's demonstration gave researchers fresh impetus to continue working in the area. Winograd later advised Google co-founder Larry Page when Page was a student at Stanford.

    SRI International's natural language processing group maintained its dedication to the field throughout the 1970s and 1980s. Several for-profit ventures were launched as a direct result of this research; for example, Gary Hendrix founded Symantec Corporation in 1982 with the intention of creating a natural language interface for database queries on personal computers. However, with the introduction of mouse-driven GUIs, Symantec changed direction. Many other commercial initiatives, such as those led by Larry R. Harris at the Artificial Intelligence Corporation and Roger Schank and his students at the Cognitive Systems Corp., were also launched around this time.

    According to cognitive scientist John Ball, creator of Patom Theory, narrowing the scope of an application has allowed natural-language processing to make inroads into applications that support human productivity in service and e-commerce, but conventional natural-language processing still cannot account for the thousands of ways a human can phrase a request. For a meaningful conversation with machines, each word must be matched to its proper meaning based on the meanings of the other words in the sentence, as a three-year-old does without guesswork.
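    As one textbook illustration of picking a word's meaning from the other words in its sentence, the sketch below uses NLTK's simplified Lesk algorithm, which selects the WordNet sense whose dictionary gloss overlaps most with the context. This is not the approach Ball advocates, merely a well-known baseline; it assumes NLTK is installed and its wordnet and punkt data have been downloaded.

        from nltk.tokenize import word_tokenize
        from nltk.wsd import lesk

        sentence = "I deposited the check at the bank before noon"
        tokens = word_tokenize(sentence)

        # Lesk scores each WordNet sense of "bank" against the sentence context
        # and returns the best-overlapping synset (or None if nothing matches).
        sense = lesk(tokens, "bank", pos="n")
        if sense is not None:
            print(sense.name(), "->", sense.definition())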

    The umbrella term natural-language understanding covers a wide range of computer applications, from short, simple tasks such as issuing commands to robots to highly complex undertakings such as the full comprehension of newspaper articles or passages of poetry. At one end of the spectrum lies the management of simple queries against database tables with fixed schemata; many real-world applications fall somewhere in between, such as text classification for the automatic analysis of emails and their routing to the appropriate department in a corporation, which requires no in-depth understanding of the text.
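    A minimal sketch of that email-routing scenario, under the assumption that scikit-learn is available, appears below; the department labels and training messages are invented for illustration, and a real deployment would train on far more data.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Tiny invented training set: message text paired with a department label.
        train_texts = [
            "my invoice is wrong and I was charged twice",
            "please refund my last payment",
            "the app crashes when I open settings",
            "I cannot log in after the latest update",
        ]
        train_labels = ["billing", "billing", "support", "support"]

        # Bag-of-words features plus naive Bayes: no parsing and no semantics,
        # yet often enough to route a message to the right department.
        router = make_pipeline(TfidfVectorizer(), MultinomialNB())
        router.fit(train_texts, train_labels)

        print(router.predict(["I was charged twice this month"]))  # likely ['billing']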

    Over the years there have been numerous attempts, of varying complexity, to have computers process natural language or English-like sentences. While not all of these attempts led to systems with deep comprehension, all of them improved system usability. Wayne Ratliff, for example, created the Vulcan programming language with an English-like syntax, modeled on the English-speaking computer of Star Trek. The dBase system, based on Vulcan, is widely credited with kick-starting the PC database market through its intuitive syntax. Systems with an easy-to-use or English-like syntax are, however, quite distinct from systems that use a rich lexicon and build an internal representation (often as first-order logic) of the semantics of natural-language sentences.
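    As a small, hedged example of the kind of internal representation just mentioned, the snippet below parses a first-order-logic formula with NLTK's logic package. The sentence and the predicate names are invented for illustration; a full system would derive such a formula from parsed text via its lexicon and grammar.

        from nltk.sem import Expression

        # One possible first-order reading of "Every student reads a book".
        formula = Expression.fromstring(
            r"all x.(student(x) -> exists y.(book(y) & reads(x, y)))"
        )
        print(formula)         # the parsed logical form
        print(formula.free())  # set(): no free variables, so the formula is closed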

    A system's complexity (and the challenges that come with it), and the kinds of applications it can handle, both depend on the breadth and depth of understanding it aims to achieve. A system's breadth is measured by the size of its vocabulary and grammar; its depth is the degree to which its comprehension approaches that of a fluent native speaker. At the narrowest and shallowest end are English-like command interpreters that can handle only a limited set of commands. Systems that are narrow but deep explore and model the underlying mechanisms of comprehension, but they still have limited application. Systems that are both very broad and very deep are beyond the current state of the art.

    Natural-language-understanding systems, regardless of their methodology, share a few common components. To convert human language into an internal representation, a system needs a lexicon, a parser, and grammar rules. The WordNet lexicon, for example, was the result of many person-years of work to build a comprehensive lexicon with an appropriate ontology.
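    For a brief look at the WordNet lexicon mentioned above, the sketch below queries it through NLTK's interface; it assumes NLTK is installed and the wordnet corpus has been downloaded.

        from nltk.corpus import wordnet as wn

        # Each synset is a lexical entry embedded in WordNet's ontology.
        for synset in wn.synsets("understand", pos=wn.VERB)[:3]:
            print(synset.name(), "-", synset.definition())

        # Senses are linked by ontological relations such as hypernymy ("is-a").
        dog = wn.synset("dog.n.01")
        print([hyper.name() for hyper in dog.hypernyms()])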


    {End Chapter 1}

    Chapter 2: Computational linguistics

    Computational linguistics is an interdisciplinary field concerned with the computational modeling of natural language, as well as
