Group Assignment: Unit One
Contents
Unit One
o 1.1 Symbolic NLP (1950s – early 1990s)
o 1.2 Statistical NLP (1990s–2010s)
o 1.3 Neural NLP (present)
o 1.4 Methods: Rules, statistics, neural networks
o 1.5 Statistical methods
o 1.6 Neural networks
o 1.7 Common NLP tasks
o 1.8 Text and speech processing
o 1.9 Morphological analysis
o 1.10 Syntactic analysis
o 1.11 Lexical semantics (of individual words in context)
o 1.12 Relational semantics (semantics of individual sentences)
o 1.13 Discourse (semantics beyond individual sentences)
o 1.14 Higher-level NLP applications
o 1.15 General tendencies and (possible) future directions
o 1.16 Cognition and NLP
Unit Two
o Bitcoin’s origin, early growth, and evolution
o What is Bitcoin used for?
o Bitcoin’s basic features
o Bitcoin’s economic features
o Who decides what Bitcoin is?
Unit Three
o Web 3.0 technology
o Web 3.0
o Key features of Web 3.0
o Layers of Web 3.0
o How does Web 3.0 work?
o How will Web 3.0 change our lives?
o Key applications of Web 3.0
o Advantages and disadvantages of Web 3.0
Unit One: Natural Language Processing
Ambiguity
• We say some input is ambiguous if there are multiple alternative linguistic structures that can be
built for it. Example: - I made her duck.
• Possible interpretations:
1. I cooked waterfowl for her
2. I cooked waterfowl belonging to her
3. I created the (plaster?) duck she owns.
4. I caused her to quickly lower her head or body.
5. I waved my magic wand and turned her into undifferentiated waterfowl.
Disambiguation
• We can view the models and algorithms of NLP as ways to resolve, or disambiguate, these ambiguities.
• For example:
deciding whether duck is a verb or a noun can be solved by part-of-speech tagging, and deciding whether
make means “create” or “cook” can be solved by word sense disambiguation.
• Resolution of part-of-speech ambiguity and resolution of word-sense ambiguity are two important kinds of lexical
disambiguation. A wide variety of tasks can be framed as lexical disambiguation problems. For
example,
• A text-to-speech synthesis system reading the word lead needs to decide whether it should be
pronounced as in lead pipe or as in lead me on.
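As a rough illustration (a sketch assuming the NLTK library and its tagger models are installed; resource names can vary between NLTK versions), the code below runs an off-the-shelf part-of-speech tagger over the ambiguous sentence. The tag assigned to duck (noun vs. verb) selects one of the readings listed above, and the same idea applies to homographs such as lead.

import nltk

# First run only: fetch the tokenizer and tagger models (exact names may differ by NLTK version).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "I made her duck."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
print(tagged)  # e.g. [('I', 'PRP'), ('made', 'VBD'), ('her', 'PRP$'), ('duck', 'NN'), ('.', '.')]

# Whether 'duck' receives a noun tag (NN) or a verb tag (VB) picks out one interpretation.
print("duck tagged as:", dict(tagged).get("duck"))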
Unit 2
Blockchain and Bitcoin Technology
What is Blockchain?
Blockchain defined
Blockchain is defined as a ledger of decentralized data that is securely shared. Blockchain
technology enables a collective group of select participants to share data. With blockchain cloud
services, transactional data from multiple sources can be easily collected, integrated, and shared.
Data is broken up into shared blocks that are chained together with unique identifiers in the form
of cryptographic hashes.
Blockchain provides data integrity with a single source of truth, eliminating data duplication and
increasing security.
In a blockchain system, fraud and data tampering are prevented because data can’t be altered
without the permission of a quorum of the parties. A blockchain ledger can be shared, but not
altered. If someone tries to alter data, all participants will be alerted and will know who made the
attempt.
Trust, accountability, transparency, and security are forged into the chain. This enables many
types of organizations and trading partners to access and share data, a phenomenon known as
third-party, consensus-based trust.
All participants maintain an encrypted record of every transaction within a decentralized, highly
scalable, and resilient recording mechanism that cannot be repudiated. Blockchain does not
require any additional overhead or intermediaries. Having a decentralized, single source of truth
reduces the cost of executing trusted business interactions among parties that may not fully trust
each other. In a permissioned blockchain, used by most enterprises, participants are authorized to
participate in the network, and each participant maintains an encrypted record of every
transaction.
Any company or group of companies that needs a secure, real-time, shareable record of
transactions can benefit from this unique technology. There is no single location where
everything is stored, leading to better security and availability, with no central point of
vulnerability.
To learn more about blockchain, its underlying technology, and use cases, here are some
important definitions.
Decentralized trust:
The key reason that organizations use blockchain technology, instead of other data stores, is
to provide a guarantee of data integrity without relying on a central authority. This is called
decentralized trust through reliable data.
Blockchain blocks:
The name blockchain comes from the fact that the data is stored in blocks, and each block is
connected to the previous block, making up a chainlike structure. With blockchain
technology, you can only add (append) new blocks to a blockchain. You can’t modify or
delete any block after it gets added to the blockchain.
Consensus algorithms:
Algorithms that enforce the rules within a blockchain system. Once the participating parties
set up rules for the blockchain, the consensus algorithm ensures that those rules are
followed.
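As a toy illustration of a consensus rule (this is a sketch, not any specific production protocol), the code below accepts a proposed block only when a configurable quorum of known validators approves it; the validator names and the two-thirds threshold are invented parameters.

from fractions import Fraction

VALIDATORS = {"org-a", "org-b", "org-c", "org-d"}  # hypothetical authorized participants
QUORUM = Fraction(2, 3)                            # assumed rule: at least 2/3 must approve

def block_accepted(approvals):
    """Apply the agreed rule: count only approvals coming from known validators."""
    valid_votes = approvals & VALIDATORS
    return Fraction(len(valid_votes), len(VALIDATORS)) >= QUORUM

print(block_accepted({"org-a", "org-b", "org-c"}))  # True: 3 of 4 validators approve
print(block_accepted({"org-a", "mallory"}))         # False: unknown voters are ignored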
Blockchain nodes:
Blockchain blocks of data are stored on nodes—the storage units that keep the data in sync
or up to date. Any node can quickly determine if any block has changed since it was added.
When a new, full node joins the blockchain network, it downloads a copy of all the blocks
currently on the chain. After the new node synchronizes with the other nodes and has the
latest blockchain version, it can receive any new blocks, just like other nodes.
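To make the append-only chain of hashed blocks concrete, here is a minimal, illustrative Python sketch (not a real blockchain implementation). Each block records the hash of the previous block, so new blocks can only be appended, and a node can re-derive every hash to detect that some block has been altered.

import hashlib
import json

def block_hash(contents):
    """Hash a block's contents (which include the previous block's hash)."""
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    contents = {"index": len(chain), "data": data, "prev_hash": prev}
    chain.append({**contents, "hash": block_hash(contents)})

def chain_is_valid(chain):
    """Recompute every hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        contents = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(contents):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
print(chain_is_valid(chain))              # True
chain[0]["data"] = "Alice pays Bob 500"   # tampering with an old block...
print(chain_is_valid(chain))              # False: the recomputed hash no longer matches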
Public blockchain:
A public, or permission-less, blockchain network is one where anyone can participate
without restrictions. Most types of cryptocurrencies run on a public blockchain that is
governed by rules or consensus algorithms.
Blockchain technology delivers specific business benefits that help companies in the following
ways:
Establishes trust among parties doing business together by offering reliable, shared data
Eliminates siloed data by integrating data into one system through a distributed ledger
shared within a network that permissioned parties can access
Offers a high level of security for data
Reduces the need for third-party intermediaries
Creates real-time, tamper-evident records that can be shared among all participants
Allows participants to ensure the authenticity and integrity of products placed into the
stream of commerce
Enables seamless tracking and tracing of goods and services across the supply chain
Bitcoin is now a globally traded financial asset with daily settled volume
measured in the tens of billions of dollars. Although its regulatory status varies
by region and continues to evolve, Bitcoin is most commonly regulated as
either a currency or a commodity, and is legal to use (with varying levels of
restrictions) in all major economies. In June 2021, El Salvador became the first
country to mandate Bitcoin as legal tender.
Distributed: All Bitcoin transactions are recorded on a public ledger that has
come to be known as the 'blockchain.' The network relies on people voluntarily
storing copies of the ledger and running the Bitcoin protocol software. These
'nodes' contribute to the correct propagation of transactions across the
network by following the rules of the protocol as defined by the software
client. There are currently more than 80,000 nodes distributed globally, making
it next to impossible for the network to suffer downtime or lost information.
Peer-to-peer: Although nodes store and propagate the state of the network
(the 'truth'), payments effectively go directly from one person or business to
another. This means there’s no need for any ‘trusted third party’ to act as an
intermediary.
Permissionless: Anyone can use Bitcoin, there are no gatekeepers, and there is
no need to create a 'Bitcoin account.' Any and all transactions that follow the
rules of the protocol will be confirmed by the network according to the defined
consensus mechanisms.
Censorship resistant: Since all Bitcoin transactions that follow the rules of the
protocol are valid, since transactions are pseudo-anonymous, and since users
themselves possess the 'key' to their bitcoin holdings, it is difficult for
authorities to ban individuals from using it or to seize their assets. This carries
important implications for economic freedom, and may even act as a
counteracting force to authoritarianism globally.
Public: All Bitcoin transactions are recorded and publicly available for anyone to
see. While this virtually eliminates the possibility of fraudulent transactions, it also
makes it possible to, in some cases, tie by deduction individual identities to
specific Bitcoin addresses. A number of efforts to enhance Bitcoin's privacy are
underway, but their integration into the protocol is ultimately subject to Bitcoin's
quasi-political governance process.
Disinflationary: The rate that new bitcoins are added to the circulating supply
gradually decreases along a defined schedule that is built into the code.
Starting at 50 bitcoins per block (a new block is added approximately every 10
minutes), the issuance rate is cut in half approximately every four years. In May
2020, the third halving reduced the issuance rate from 12.5 to 6.25 bitcoins per
block. At that point 18,375,000 of the 21 million coins (87.5% of the total) had
been 'mined.' The fourth halving, in 2024, will reduce the issuance to 3.125
BTC, and so on until approximately the year 2136, when the final halving will
decrease the block reward to roughly 0.00000001 BTC (about one satoshi).
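The schedule above can be reproduced with a few lines of Python. This is a sketch of the issuance rule only (the 210,000-block halving interval and the 50 BTC starting subsidy are the protocol's published parameters), not consensus code.

HALVING_INTERVAL = 210_000                 # blocks between halvings (~4 years)
INITIAL_SUBSIDY_SATS = 50 * 100_000_000    # subsidies are tracked in integer satoshis

def subsidy_at_height(height):
    """Block subsidy (in satoshis) for a block at the given height."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_SUBSIDY_SATS >> halvings if halvings < 64 else 0

print(subsidy_at_height(0) / 1e8)          # 50.0 BTC per block at launch
print(subsidy_at_height(630_000) / 1e8)    # 6.25 BTC after the third halving (2020)
print(subsidy_at_height(840_000) / 1e8)    # 3.125 BTC after the fourth halving (2024)

# Summing one era at a time shows why the supply stays just under 21 million BTC.
total_sats = sum(subsidy_at_height(era * HALVING_INTERVAL) * HALVING_INTERVAL
                 for era in range(64))
print(total_sats / 1e8)                    # ~20,999,999.9769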
Bitcoin is not a static protocol. It can and has integrated changes throughout
its lifetime, and it will continue to evolve. While there are a number of
formalized procedures for upgrading Bitcoin (see "How does Bitcoin
governance work?"), governance of the protocol is ultimately based on
deliberation, persuasion, and volition. In other words, people decide what
Bitcoin is.
In several instances, there have been significant disagreements amongst the
community as to the direction that Bitcoin should take. When such
disagreements cannot be resolved through deliberation and persuasion, a
portion of users may - of their own volition - choose to acknowledge a
different version of Bitcoin.
The alternative version of Bitcoin with the greatest number of adherents has
come to be known as Bitcoin Cash (BCH). It arose out of a proposal aiming to
solve scaling problems that had resulted in rising transaction costs and
increasing transaction confirmation times. This version of Bitcoin began on
August 1st, 2017.
Unit 3
Trustless - Everyone will use Zero Trust, and network protection will reach
the edge.
Smart contracts that are open to everyone will relieve people of the need to
rely on a centralized organization (like a bank) to maintain data integrity.
The entertainment sector will significantly increase its revenue from the
metaverse.
It uses machine learning and artificial intelligence. The end result is that Web
3.0 becomes smarter and more responsive to user demands. When these ideas are
paired with Natural Language Processing (NLP), the result is a computer that can
understand and respond to natural language.
3-D graphics are used. In fact, this is already evident in e-commerce, virtual
tours, and computer gaming.
It is applicable to:
Metaverses: A limitless, virtual environment that is 3D-rendered
With Web 3.0, users will be able to sell their own data through decentralized
data networks, ensuring that they maintain ownership control. This data will
be produced by various powerful computing resources, such as mobile phones,
desktop computers, appliances, automobiles, and sensors.
Based on decentralization and open-source software, Web 3.0 will also be trustless
(i.e., participants will be able to interact directly without going through a trusted
intermediary) and permissionless (meaning that anyone can participate
without any governing body's permission). This means that Web 3.0
applications will operate on blockchains,
decentralized peer-to-peer networks, or a hybrid of the two; such
decentralized applications are referred to as dApps.
Artificial intelligence (AI) and machine learning: With the help of the
Semantic Web and natural language processing-based technologies, Web 3.0
will enable machines to comprehend information similarly to humans. Web
3.0 will also make use of machine learning, a subset of artificial intelligence
(AI) that mimics human learning by using data and algorithms, gradually
improving its accuracy. Instead of just targeted advertising, which makes up
the majority of present efforts, these capabilities will result in faster and more
relevant outcomes in a variety of fields like medical development and new
materials.
Connectivity and ubiquity: With Web 3.0, content and information are more
accessible across applications and with a growing number of commonplace
devices connected to the internet. The Internet of Things is one such example.
Decentralized Data Network - Users will own their data on web 3.0 since data
is decentralized. Using decentralized data networks, different data generators can
sell or share their data without losing ownership or relying on intermediaries.
With our guiding principles established, we can start looking at how certain
web3 development features are meant to accomplish these objectives.
Data ownership: When you use a platform like Facebook or YouTube, these
businesses gather, own, and profit from your data. In web3, your data is stored
in your cryptocurrency wallet. On web3, you'll interact with apps and
communities through your wallet, and when you log off, you'll take your data
with you. Since you are the owner of the data, you may theoretically choose
whether to monetize it.
There are services that can help link a cryptocurrency wallet to a real-world
identity if it is used for illegal activity; for everyday use, however, your
identity remains concealed. Although wallets already increase the level of privacy
for bitcoin transactions, privacy coins like Zcash and Monero give transactions
total anonymity: their blockchains allow observers to see that transactions took
place, but not to view the wallets involved.
For everybody, Web 3.0 offers a much more individualized surfing experience.
Websites will be able to automatically adjust to our device, location, and any
accessibility needs we may have, and web apps will become far more receptive
to our usage patterns.
We believe that the emergence of Web 3.0 will improve our lives for the
following three reasons:
2. Improved search
As was already mentioned, using a search engine in natural language is highly
effective. The benefits go far beyond the consumer as the learning curve
virtually disappears, and businesses are increasingly able to optimize their
websites for search engines in a more organic way as opposed to using
complicated keyword techniques.
NFT: Non-fungible tokens (NFTs) are tokens that are individually unique and
are kept on a blockchain with a cryptographic hash (a short sketch of this idea follows this list).
Cross-chain bridges: In the Web 3.0 age, there are numerous blockchains,
and cross-chain bridges provide some kind of connectivity between them.
DAOs: DAOs are poised to potentially take on the role of Web 3.0's governing
bodies, offering some structure and decentralized governance.
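As a rough illustration of tying an NFT's identity to a cryptographic hash (this is a sketch, not how any particular NFT standard such as ERC-721 actually mints tokens), the code below derives a token ID by hashing the token's metadata; the creator names and URIs are invented.

import hashlib
import json

def token_id(metadata):
    """Derive a practically unique ID from the token's metadata."""
    canonical = json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

art_1 = {"creator": "alice", "title": "Sunrise #1", "uri": "ipfs://example-cid-1"}
art_2 = {"creator": "alice", "title": "Sunrise #2", "uri": "ipfs://example-cid-2"}

print(token_id(art_1))                      # 64-character hex digest
print(token_id(art_1) == token_id(art_2))   # False: different metadata, different token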
The data will be provided from any location and on any device.
Disadvantages -
To make the technology accessible to more people worldwide, the devices'
capabilities and qualities will need to be expanded.
Any websites built on web 1.0 technology will become obsolete once web 3.0 is
fully implemented on the Internet.
With easier access to a user's information and reduced privacy thanks to web
3.0, reputation management will be more important than ever.
Augmented Reality
Augmented Reality Interfaces
Mona Singh and Munindar P. Singh

Abstract
A confluence of technological advances (in handheld and wearable sensing, computing and communications), exploding amounts of information, and user receptiveness is fueling the rapid expansion of augmented reality from a novelty concept to potentially the default interface modality in coming years. This article provides a brief overview of AR from the perspective of its application in natural web interfaces, including a discussion of the key concepts involved and the technical and social challenges that remain.

What is Augmented Reality?
Augmented Reality (AR) user interfaces have grown tremendously in the last few years. What is drawing great interest to AR is not only the fact that AR involves novel or “cool” technologies, but that it promises to help users overcome the information overload brought upon them by the Web. AR helps present information in a succinct manner: the information comes to the user in its
“natural” home—where the user can easily benefit from and act on it. We propose the following definition of augmented reality: AR presents a view of the real, physical world that incorporates additional information that serves to augment the view. Of course, all views of the world are just that—views. An implicit intuition is that the first view is somehow direct or canonical in that it can be treated as reality itself and further augmented with additional information. The augmented “information” is information in the broadest sense and could include nonsense or false information and express any data type (text, image, video, and so on). A baseline example of AR according to the above definition would be a bird’s eye view or satellite picture of a city (the “reality”) overlaid with street and building names (the “augmentation”). AR is most naturally associated with settings where the aspect of reality considered is proximal to the user and is current; the augmenting information can likewise be proximal and current, or not, depending on the specific setting. Moreover, the most common settings involve visual representation (whether still images or videos), although in principle one might augment any interface modality. For example, an app may play audio signals from the environment along with commentary on the relevant sounds (such as bird calls for ornithologists or various safety warning chimes for training building occupants).

Examples and Nonexamples of Augmented Reality
We confine our attention to the uses of AR in providing natural web interfaces, and
especially to phone-based AR, which is becoming widely available. Here are some example AR apps:

Navigation. The directions a user is taking are highlighted, e.g., stating whether a turn is coming up. In vehicular displays, the appropriate highway lane or next turn may be identified. Figure 1 shows a screenshot of an Android AR navigation app (https://play.google.com/store/apps/details?id=com.w.argps).
Figure 1: Screenshot from the AR GPS Drive/Walk Navigation app.

Commerce. A common theme is presenting advertisements according to the user’s location or, more specifically, regarding any object recognized in a camera view. The figures below show how the Blippar app (http://blippar.com/) progresses, beginning from the user pointing a phone camera at a grocery item. First, it recognizes the real-world object (bottle). Next, it places an interactive object (recipe book) as an augmentation.
Figure 2: Blippar: identifying a product.
Figure 3: Blippar: augmenting a product with an interactive recipe book.

Captioning. Generalizing from Blippar, a user would point a phone camera at a scene. The phone would display a real-time image of the scene augmented with metadata associated with the scene or its salient parts. For example, a user may point a camera at a remote mountain peak and see its name, height, and current weather. Or, the app may identify landmarks in a city, or provide category descriptions (e.g., “restaurant” or “museum”) of various buildings. Additional examples involve presenting art, education, gaming, and fashion. An example in fashion is showing how the user would appear when wearing specified apparel.

Although our definition of AR is broad, it excludes certain applications even though they may sometimes be described as AR.

Immersive virtual reality (IVR). AR exposes the real world to a user, albeit with virtual information embedded in it, whereas IVR places the user in a virtual world (http://www.kinecthacks.com/augmented-reality-telepresence-via-kinect/).

Photo editing. An example is Mattel’s “digital mirror” (http://mashable.com/2013/02/11/barbie-makeup-mirror/) wherein a user can edit a picture of herself with lipstick or glitter. Another is the Snaps iPhone app (https://itunes.apple.com/us/app/snaps!/id600868427?mt=8). There is no augmentation of reality in these cases. Were the edited pictures used in place of the original faces in a real scene, we could consider such editing as a form of authoring for an AR app.

Augmented media. An example is the Guinness Book of World Records providing 3D animations of some world records (http://www.appsplayground.com/apps/2012/09/03/augmented-reality-sharks-star-inguinness-world-records-2013-app/). The distinction between augmented reality and augmented media falls along a continuum. One would imagine pure AR as the augmentation of “natural” reality.
However, all too often AR would work only when the reality has been suitably prepped. An
example is the Amazon app. Here the user takes a picture of the barcode of a product of
interest. The app finds the product on Amazon and presents a user interface for immediate
purchase. The app relies upon a media object—a barcode—that would be embedded in the
product without regard to AR. Going further, one may affix QR codes on physical artifacts
specifically for AR (http://www.npr.org/2013/07/29/206728515/activists-artists-fightback-
against-baltimores-slumlords), in effect, treating the reality as less natural and more symbolic.
The extreme form is, as in the Guinness example above, where the user interaction follows
purely on the media object and has no bearing on the reality except to access the media object.
Architecture for AR
The figure below shows a conceptual reference architecture of an AR app, including its essential components and some image-related annotations as examples. (AR could potentially apply to any sense, including audio.) A Reality Sensor (camera) observes a part of the reality. It passes the image it obtains along with metadata such as geolocation tags to the Trigger Matcher. The Trigger Matcher checks if its input matches the relevant app-specific trigger, such as the geolocation being near a specific landmark or the image showing the landmark. It produces matched metadata, such as its semantic category and outline. The Augmentation Selector takes the matched metadata from the Trigger Matcher and retrieves relevant information, such as the year the landmark was built. It constructs an augmenting image, such as a text bubble or a map pin placed relative to the original image, and passes it to the Reality Augmenter. The Reality Augmenter combines the images, potentially as simply as overlaying a map pin on the original image, and causes the combined image to be rendered for the user.
[Figure: conceptual reference architecture of an AR app (AR-arch.pdf)]
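To make the data flow concrete, the following is a small, hypothetical Python sketch of the four components named above. The trigger table, landmark facts, and coordinates are invented placeholders; a real app would work with camera frames, recognition models, and a rendering engine.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    image: str        # stand-in for pixel data from the camera
    location: tuple   # (latitude, longitude) metadata

def reality_sensor():
    """Observe a part of the reality (here, a canned frame plus geolocation)."""
    return Observation(image="frame-0042", location=(35.6586, 139.7454))

def trigger_matcher(obs) -> Optional[dict]:
    """Check whether the input matches an app-specific trigger (a known landmark)."""
    landmarks = {(35.6586, 139.7454): "Tokyo Tower"}   # invented lookup table
    name = landmarks.get(obs.location)
    return {"category": "landmark", "name": name} if name else None

def augmentation_selector(matched):
    """Retrieve relevant information and construct the augmenting text bubble."""
    facts = {"Tokyo Tower": "built 1958, 333 m tall"}  # invented knowledge base
    return matched["name"] + ": " + facts.get(matched["name"], "no data")

def reality_augmenter(obs, overlay):
    """Combine the original view with the overlay and render the result."""
    return "[" + obs.image + "] overlaid with [" + overlay + "]"

obs = reality_sensor()
matched = trigger_matcher(obs)
if matched:
    print(reality_augmenter(obs, augmentation_selector(matched)))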
Realizing Augmented Reality
Enabling Technologies
The above architecture highlights the necessary enabling technologies. First, AR needs suitable sensors in the environment and on the user’s person, including fine-grained geolocation and image recognition in order to obtain a sufficiently accurate representation of the reality. Second,
trigger matching and image augmentation require ways to understand the scene in order to
determine the relevant components and display augmentations. These include techniques such
as image processing (with face recognition an important subcategory). Third, trigger matching
and subsequent user interaction presume ways to determine the user’s attention and immediate
context, e.g., via technologies for input modalities including gaze tracking, touch, and gesture
and speech recognition. Fourth, AR presupposes a substantial information infrastructure, e.g.,
accessible via cloud services, for obtaining pertinent components of the user’s longer term
context, including intent and activities and determining what components of the real-world to
augment, with what, and when. Additionally, AR requires significant computing and
communications infrastructure undergirding the above.

User Platforms
The above technologies
are realized on three main types of end-user platforms, each against a backdrop of cloud
services. Mobile phones are the most prevalent of these platforms today with vehicles and
wearable computers to follow soon. Modern phones include high-quality cameras, geolocation
capabilities, numerous other sensors, and sufficient computing and communications
capabilities. A driver in a vehicle needs access to information about nearby and upcoming
locations. The windshield of a vehicle provides an intuitive venue for rendering augmented
information. Vehicles have practically unlimited (electric) power and can support powerful
computing and communications. Wearable computers, of which Google Glass is a well-known
example, are becoming viable. Like smart phones and vehicles, wearable computers provide
numerous sensors and close access to a user’s current environment and the user’s immediate
context and attention. Wearable sensors, including on the user’s skin, clothing, or shoes, offer
access to a user’s biometric and environmental data and can thus enable smart apps. Today’s
wearable computers are, however, limited in power, computing, and communications.
Toward a Taxonomy of AR Apps
The following are the essential ingredients of an AR app; their possible ranges of varieties suggest a classification of AR apps.

Trigger. The event or the
observation upon which the augmentation occurs. Typical values are location or object
recognition (which could occur at multiple levels of granularity, ranging from types of objects to
faces of specific people). A type of a location trigger is matching on GPS coordinates. For
example, Nokia City Lens (http://www.1800pocketpc.com/nokia-city-lens-augmented-reality-
location-app-forlumia-devices/) provides information about places of interest nearby. It enables
a user to search for restaurants, hotels, and shops, and obtain more information about them.
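As a rough sketch of such a location trigger (not the actual logic of Nokia City Lens or any other product), the code below fires when the device's GPS coordinates fall within a chosen radius of a point of interest, using the haversine great-circle distance; the coordinates and the 150-metre radius are made-up values.

from math import atan2, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    earth_radius_m = 6_371_000.0
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * earth_radius_m * atan2(sqrt(a), sqrt(1 - a))

POINT_OF_INTEREST = (48.8584, 2.2945)   # hypothetical landmark coordinates
TRIGGER_RADIUS_M = 150.0

def location_trigger(user_lat, user_lon):
    """Fire the trigger when the user is within the radius of the point of interest."""
    return haversine_m(user_lat, user_lon, *POINT_OF_INTEREST) <= TRIGGER_RADIUS_M

print(location_trigger(48.8582, 2.2950))  # True: a few tens of metres away
print(location_trigger(48.8600, 2.3500))  # False: several kilometres away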
Blippar (Figures 2 and 3) exemplifies object recognition. Having a phone provide relevant
information from a barcode is quite common. The Amazon Mobile app
(https://www.amazon.com/gp/anywhere/sms/android) enables users to obtain the product
description from Amazon for any UPC symbol captured in a camera. Similar apps are available
from Google Shopper and eBay’s RedLaser (https://play.google.com/store/apps/details?
id=com.ebay.redlaser). An example of face recognition is Recognizr, a now-defunct augmented
ID app (http://www.tat.se/blog/tat-augmented-id/), which identifies a person and displays their
online profile and contact details.

Interactivity. The extent to which the user can interact with
the augmented information through the app. In general, in apps where the reality is shown in a
direct view there may be occasion for the user to interact only with the augmented information,
not the reality. An example of no interactivity is road names augmented on a satellite image; an
example of medium interactivity is Blippar wherein a user can request a recipe or video by
selecting the appropriate marker. BMW Service’s app
(http://www.bmw.com/com/en/owners/service/augmented_reality_introduction_1.html) exhibits
medium interactivity: it displays servicing instructions and advances the instructions whenever
its user asks for the next step. An example of high interactivity is advertisement icons that open
up automatically to reveal discounts when approached.

User interface modalities. A user may
interact with the augmented information through gesture, gaze, speech, and touch in addition to
traditional modalities such as joysticks. Touch and speech are common these days. Google Glass
provides a speech interface.

Naturalness of view. The AR app could be triggered based on
natural reality (Recognizr) or require specific features embedded in the environment or
physical objects (Amazon).

Opportunities and Prospects
Modeling and applying user context
remains the key challenge of realizing high-quality user experience. AR promises a way to
present information and support user actions in ways that are sensitive to a user’s current
context.

Usability Challenges
AR faces the same core usability challenges as traditional
interfaces, such as the potential for overwhelming a user with too much information and
making it difficult for the user to determine a relevant action. However, AR exacerbates some of
these challenges because there may be many kinds of augmentation possible at once and apps
that are to be proactive run the risk of overwhelming the user. Can the user tell the difference
between reality and the augmentation? Confusion may lead to user errors by conveying an
erroneous impression of the world. Is the augmentation aligned with reality? Maintaining
alignment is nontrivial because reality can change fast, especially in unanticipated ways. For
example, in an AR navigation app, the traffic signal may change state or an accident may occur
well before the augmented information is updated. How can a user transition between AR and
traditional apps? For example, a user searching for a product may need to move between an
AR-enabled app (to identify relevant products) and a traditional app (to search and purchase).
However, transitions across apps may be confusing if their underlying metaphors are
incompatible. How should the augmenting information be organized? For example, if a relevant
product comes in different varieties, colors, or prices, it would help to group related products in
a way that is coherent with the user’s intent. An AR app that presents all the information at
once may serve only to mislead the user.

Social Challenges
AR is strikingly different from
previous computing technologies both in terms of what it accomplishes and in terms of its
physical trappings. Just as for other new technologies, it might take years before people begin to
widely adopt it except in settings where there is a pressing need or a significant immediate
benefit. Because AR is useful when the augmentations are salient given the user’s context,
including attributes and prior experiences, the violation of privacy of the user or those present
nearby is a potential risk. For example, an advertisement would be most useful if it were for
something the user wanted. However, a user upon receiving such an effective advertisement
might wonder about how his or her personal information has propagated across the value
chain.

Business Models
From the standpoint of business models, we anticipate that AR apps
would function like traditional apps in many respects. A key difference would be in terms of
who owns, i.e., controls, the AR space. Presumably, the current app (or the entity that controls
it) would control the display. For example, instead of advertisements being displayed for
keywords as in today’s web, in AR, advertisements may be displayed for appropriate triggers,
such as particular locations or patterns. However, just that apparently technical change from
keywords to locations or patterns may lead to the emergence of new entities in the business
ecosystem, such as those who would tackle maintaining the augmented information