Logic Gate - Unicode
o Logic gates are the basic structural building blocks of a digital system.
o A logic gate is a block of hardware that produces a signal of binary 1 or 0 when
its input logic requirements are satisfied.
o Each gate has a distinct graphic symbol, and its operation can be described by
means of algebraic expressions.
o The seven basic logic gates include: AND, OR, XOR, NOT, NAND, NOR, and
XNOR.
o The relationship between the input-output binary variables for each gate can be
represented in tabular form by a truth table.
o Each gate has one or two binary input variables designated by A and B and one
binary output variable designated by x.
AND GATE:
The AND gate is an electronic circuit which gives a high output only if all its inputs are
high. The AND operation is represented by a dot (.) sign.
OR GATE:
The OR gate is an electronic circuit which gives a high output if one or more of its
inputs are high. The operation performed by an OR gate is represented by a plus (+)
sign.
NOT GATE:
The NOT gate is an electronic circuit which produces an inverted version of the input at
its output. It is also known as an Inverter.
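The basic gate definitions above can be illustrated with a minimal Python sketch (not part of the original notes) that prints the truth tables for the AND, OR, and NOT operations on one-bit inputs:

```python
# Truth tables for the basic gates, using the integers 0 and 1 as the binary values.

def AND(a, b):
    return a & b   # high (1) only if all inputs are high

def OR(a, b):
    return a | b   # high (1) if one or more inputs are high

def NOT(a):
    return 1 - a   # inverted version of the input

print("A B | AND OR")
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} |  {AND(a, b)}   {OR(a, b)}")

print("A | NOT")
for a in (0, 1):
    print(f"{a} |  {NOT(a)}")
```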
NAND GATE:
The NOT-AND (NAND) gate is equivalent to an AND gate followed by a NOT gate.
The NAND gate gives a high output if any of the inputs are low. The NAND gate is
represented by an AND gate with a small circle on the output. The small circle
represents inversion.
NOR GATE:
The NOT-OR (NOR) gate is equivalent to an OR gate followed by a NOT gate. The
NOR gate gives a low output if any of the inputs are high. The NOR gate is represented
by an OR gate with a small circle on the output. The small circle represents inversion.
EXCLUSIVE-NOR/Equivalence GATE:
The Exclusive-NOR (XNOR) gate is a circuit that performs the inverse operation of the
XOR gate. It gives a low output when exactly one of its inputs is high, but not when
both are. The small circle on its symbol represents inversion.
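The "gate followed by an inverter" descriptions above translate directly into code. The sketch below (an illustrative Python example, not part of the original notes; the XOR helper is included only because XNOR is defined as its inverse) builds the derived gates from the basic operations:

```python
def NOT(a):
    return 1 - a

def NAND(a, b):
    return NOT(a & b)   # AND followed by NOT: low only when both inputs are high

def NOR(a, b):
    return NOT(a | b)   # OR followed by NOT: high only when both inputs are low

def XOR(a, b):
    return a ^ b        # high when exactly one input is high

def XNOR(a, b):
    return NOT(a ^ b)   # inverse of XOR: high when the inputs are equal

print("A B | NAND NOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} |  {NAND(a, b)}    {NOR(a, b)}    {XNOR(a, b)}")
```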
UNICODE
Unicode is a universal character encoding standard. This standard
includes more than 100,000 characters to represent characters of
different languages. While ASCII uses only 1 byte per character,
Unicode encodings can use up to 4 bytes to represent a single
character. Hence, it can encode a far wider range of characters.
Its main encoding forms are UTF-8, UTF-16, and UTF-32. Among them,
UTF-8 is the most widely used; it is also the default encoding for
many programming languages.
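As a quick illustration (a minimal Python sketch, not from the original notes), every character in the standard has a unique code point that is independent of language or platform:

```python
# Each character maps to a unique Unicode code point (shown here in hexadecimal).
for ch in ["A", "é", "अ", "€", "😀"]:
    print(ch, "U+%04X" % ord(ch))

# chr() goes the other way, from code point back to character.
print(chr(0x20AC))   # €
```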
UCS
It is a very common acronym in the Unicode scheme. It stands
for Universal Character Set, the standardized repertoire of
characters on which Unicode text is based.
UTF
The UTF is the most important part of this encoding scheme. It
stands for Unicode Transformation Format, and it defines how
Unicode code points are represented as bytes. Its common forms
are as follows:
UTF-7
This scheme was designed for 7-bit environments, since ASCII uses a
7-bit encoding. It represents Unicode text using only ASCII
characters in emails and messages that use this standard.
UTF-8
It is the most commonly used form of encoding. Furthermore, it has
the capacity to use up to 4 bytes for representing the characters. It
uses 1 byte for ASCII characters and 2 to 4 bytes for other characters.
UTF-32
It is a fixed-width multibyte encoding scheme; it uses 4 bytes to
represent every character.
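The difference between the variable-width UTF-8 form and the fixed-width UTF-32 form can be seen directly in Python (an illustrative sketch, assuming a standard Python 3 interpreter):

```python
# UTF-8 uses 1-4 bytes depending on the character; UTF-32 always uses 4 bytes
# (the explicit "utf-32-le" form is used here to avoid counting a byte-order mark).
for ch in ["A", "é", "अ", "😀"]:
    print(ch,
          "utf-8:", len(ch.encode("utf-8")), "bytes,",
          "utf-32:", len(ch.encode("utf-32-le")), "bytes")
```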
Importance of Unicode
● As it is a universal standard, it allows a single
application to be written for various platforms. This means that we can develop
an application once and run it on various platforms in different
languages. Hence, we do not have to write the code for the same
application again and again, and the development cost is
reduced.
● Moreover, it reduces the risk of text being corrupted when data is exchanged between systems that would otherwise use different encodings.
● It is a common encoding standard for many different languages
and characters.
● We can use it to convert from one coding scheme to another,
since Unicode is a superset of the common encoding schemes.
We can convert text into Unicode and then convert it into
another coding standard (see the sketch after this list).
● It is preferred by many languages and tools. For example, XML tools
and applications use this standard.
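A minimal Python sketch of the conversion idea mentioned in the list above (the encodings named here, Latin-1 and UTF-16, are only illustrative choices): the bytes are first decoded into a Unicode string and then encoded into the target scheme.

```python
# Bytes in one encoding -> Unicode string -> bytes in another encoding.
latin1_bytes = "café".encode("latin-1")     # source data stored in Latin-1
text = latin1_bytes.decode("latin-1")       # step 1: convert into Unicode
utf16_bytes = text.encode("utf-16-le")      # step 2: convert into the target encoding
print(latin1_bytes, "->", utf16_bytes)
```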
Advantages of Unicode
● It is a global standard for encoding.
● It has support for the mixed-script computer environment.
● The UTF-8 encoding is space-efficient for text that is mostly ASCII and hence saves memory.
● A common scheme for web development.
● Increases the interoperability of data across platforms.
● Saves time and development cost of applications.
What is ASCII?
ASCII (American Standard Code for Information Interchange) is the most
common character encoding format for text data in computers and on the
internet. In standard ASCII-encoded data, there are unique values for 128
characters: alphabetic and numeric characters, special characters, and control codes.
ASCII characters can be represented in binary, octal, decimal, or hexadecimal form.
For example, the ASCII code for the lowercase letter "m" is 0110 1101 in binary,
155 in octal, 109 in decimal, and 6D in hexadecimal.
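These representations can be reproduced with a short Python sketch (illustrative only, not part of the original text):

```python
c = "m"
code = ord(c)                                # ASCII code of "m"
print("binary:     ", format(code, "08b"))   # 01101101
print("octal:      ", format(code, "o"))     # 155
print("decimal:    ", code)                  # 109
print("hexadecimal:", format(code, "x"))     # 6d
```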
ASCII characters were initially encoded into 7 bits and stored as 8-bit characters
with the most significant bit -- usually, the left-most bit -- set to 0.
The Internet Engineering Task Force (IETF) adopted ASCII as a standard for
internet data when it published "ASCII format for Network Interchange" as RFC
20 in 1969. That request for comments (RFC) document standardized the use of
ASCII for internet data and was accepted as a full standard in 2015.
In 2003, the IETF standardized the use of UTF-8 encoding for all web content
in RFC 3629.
Almost all computers now use ASCII or Unicode encoding. The exceptions are
some IBM mainframes that use the proprietary 8-bit code called Extended Binary
Coded Decimal Interchange Code (EBCDIC).
Programmers use the design of the ASCII character set to simplify certain
tasks. For example, using ASCII character codes, changing a single bit
easily converts text from uppercase to lowercase.
Uppercase "A" is 0100 0001 and lowercase "a" is 0110 0001. The difference is the
third most significant bit. In decimal and hexadecimal, this corresponds to
65 (0x41) for "A" and 97 (0x61) for "a", a difference of 32 (0x20).
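A brief Python sketch of this bit trick (illustrative only): flipping the bit with value 32 (0x20) toggles an ASCII letter between uppercase and lowercase.

```python
def toggle_case(ch):
    # XOR with 0x20 flips the single bit that separates "A" (0x41) from "a" (0x61).
    return chr(ord(ch) ^ 0x20)

print(toggle_case("A"))   # a
print(toggle_case("a"))   # A
print(toggle_case("M"))   # m
```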