
Logic Gates

o The logic gates are the main structural part of a digital system.
o A logic gate is a block of hardware that produces a signal of binary 1 or 0 when its
input logic requirements are satisfied.
o Each gate has a distinct graphic symbol, and its operation can be described by
means of algebraic expressions.
o The seven basic logic gates include: AND, OR, XOR, NOT, NAND, NOR, and
XNOR.
o The relationship between the input-output binary variables for each gate can be
represented in tabular form by a truth table.
o Each gate has one or two binary input variables designated by A and B and one
binary output variable designated by x.

AND GATE:
The AND gate is an electronic circuit which gives a high output only if all its inputs are
high. The AND operation is represented by a dot (.) sign.

OR GATE:
The OR gate is an electronic circuit which gives a high output if one or more of its
inputs are high. The operation performed by an OR gate is represented by a plus (+)
sign.

NOT GATE:
The NOT gate is an electronic circuit which produces an inverted version of the input at
its output. It is also known as an Inverter.

NAND GATE:
The NOT-AND (NAND) gate is equivalent to an AND gate followed by a NOT gate.
The NAND gate gives a high output if any of the inputs are low. The NAND gate is
represented by an AND gate with a small circle on the output. The small circle
represents inversion.

NOR GATE:
The NOT-OR (NOR) gate is equivalent to an OR gate followed by a NOT gate. The
NOR gate gives a low output if any of the inputs are high. The NOR gate is represented
by an OR gate with a small circle on the output. The small circle represents inversion.

Exclusive-OR/XOR GATE:
The 'Exclusive-OR' gate is a circuit which will give a high output if one of its inputs is
high but not both of them. The XOR operation is represented by an encircled plus sign.

EXCLUSIVE-NOR/Equivalence GATE:
The 'Exclusive-NOR' gate is a circuit that performs the inverse operation of the XOR gate. It
will give a low output if one of its inputs is high but not both of them. It is represented by
an XOR gate with a small circle on the output; the small circle represents inversion.
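
As a quick sketch of the gate definitions above (the Python function and variable names here are illustrative choices, not part of the original text), the following prints the truth table of each gate:

# Each two-input gate expressed as a small Python function of binary inputs.
gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XNOR": lambda a, b: 1 - (a ^ b),
}

def NOT(a):
    # The single-input inverter.
    return 1 - a

# Print the truth table of every two-input gate.
for name, gate in gates.items():
    print(name)
    for a in (0, 1):
        for b in (0, 1):
            print(f"  A={a} B={b} -> x={gate(a, b)}")

# The NOT gate has only one input.
print("NOT")
for a in (0, 1):
    print(f"  A={a} -> x={NOT(a)}")
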
UNICODE
Unicode is a universal character encoding standard. The standard
includes more than 100,000 characters to represent the characters of
many different languages. While ASCII uses only 1 byte per character, Unicode
uses up to 4 bytes to represent a character. Hence, it can encode a very
wide range of characters. Its main encoding forms are UTF-8, UTF-16, and
UTF-32. Among them, UTF-8 is the most widely used; it is also the default
encoding for many programming languages.
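
To make the idea of a universal character set concrete, a tiny Python sketch (the sample characters are only illustrative) shows that every character, from any language, maps to a single Unicode code point:

# Every character has exactly one Unicode code point, whatever its script.
for ch in ["A", "ñ", "अ", "中", "😀"]:
    print(ch, "-> U+%04X" % ord(ch))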

UCS
It is a very common acronym in the Unicode scheme. It stands
for Universal Character Set, the full repertoire of characters that
Unicode encodes. UCS-2 and UCS-4 are schemes for storing that text:

● UCS-2: It uses two bytes to store the characters.


● UCS-4: It uses four bytes to store the characters.

UTF
The UTF is the most important part of this encoding scheme. It
stands for Unicode Transformation Format and defines how Unicode
code points are represented as bytes. Its common forms are as follows:

UTF-7
This scheme is designed around the ASCII standard, since ASCII
uses a 7-bit encoding. It is used to represent characters in
emails and messages that are restricted to 7-bit ASCII.

UTF-8
It is the most commonly used form of encoding. Furthermore, it can
use up to 4 bytes to represent a character. It
uses:

● 1 byte to represent English letters and symbols.


● 2 bytes to represent additional Latin and Middle Eastern letters
and symbols.
● 3 bytes to represent Asian letters and symbols.
● 4 bytes for other additional characters.
Moreover, it is compatible with the ASCII standard.
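
The byte counts listed above can be checked directly in Python; the sample characters below are only illustrative examples of each range:

# UTF-8 is variable-width: 1 to 4 bytes per character.
samples = {
    "English letter":          "m",   # expected 1 byte
    "Additional Latin letter": "é",   # expected 2 bytes
    "Asian letter":            "中",  # expected 3 bytes
    "Other (emoji)":           "😀",  # expected 4 bytes
}
for label, ch in samples.items():
    print(label, ch, "->", len(ch.encode("utf-8")), "byte(s)")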

Its uses are as follows:

● Many protocols use this scheme.


● It is the default standard for XML files
● Some Unix and Linux file systems use it in some files.
● Internal processing of some applications.
● It is widely used in web development today.
● It can also represent emojis, which are today a very important
feature of most apps.
UTF-16
It is an extension of the UCS-2 encoding. It uses 2 bytes to represent
the first 65,536 characters and also supports 4 bytes for
additional characters. Furthermore, it is used for internal processing,
for example in Java, Microsoft Windows, etc.

UTF-32
It is a fixed-width multibyte encoding scheme that uses 4 bytes to
represent every character.
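
A short Python comparison (characters chosen only as examples) illustrates the difference: UTF-16 uses 2 bytes for characters in the first 65,536 code points and 4 bytes beyond that, while UTF-32 always uses 4 bytes. The little-endian codec names avoid counting a byte-order mark:

# Compare UTF-16 and UTF-32 sizes for a few characters.
for ch in ["A", "中", "😀"]:
    utf16 = len(ch.encode("utf-16-le"))
    utf32 = len(ch.encode("utf-32-le"))
    print(ch, "UTF-16:", utf16, "bytes", "UTF-32:", utf32, "bytes")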


Importance of Unicode
● As it is a universal standard, it allows writing a single
application for various platforms. This means that we can develop
an application once and run it on various platforms in different
languages. Hence we don't have to write the code for the same
application again and again, and the development cost is reduced.
● Moreover, it reduces the risk of data corruption when text moves
between systems, because they all share one standard.
● It is a common encoding standard for many different languages
and characters.
● We can use it to convert from one coding scheme to another:
since Unicode is a superset of the other encoding schemes, we can
convert text into Unicode and then convert it into another coding
standard (a sketch of this conversion follows this list).
● It is preferred by many programming languages and tools. For example, XML tools
and applications use this standard.
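
As a minimal sketch of the "convert through Unicode" idea mentioned above (the Latin-1 and UTF-8 pair is just an example of two schemes), text in one encoding can be decoded to Unicode and then re-encoded in another:

# Decode Latin-1 bytes into a Unicode string, then re-encode them as UTF-8.
latin1_bytes = "café".encode("latin-1")   # b'caf\xe9' in the legacy scheme
text = latin1_bytes.decode("latin-1")     # back to Unicode code points
utf8_bytes = text.encode("utf-8")         # b'caf\xc3\xa9' in UTF-8
print(latin1_bytes, "->", utf8_bytes)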

Advantages of Unicode
● It is a global standard for encoding.
● It supports mixed-script computing environments.
● The encoding is space-efficient (UTF-8 needs only one byte for ASCII
characters) and hence saves memory.
● It is a common scheme for web development.
● It increases the interoperability of data across platforms.
● It saves time and development cost of applications.

Difference between Unicode and ASCII


The differences between them are as follows:
● Unicode uses variable bit-length encoding according to the
requirement, for example UTF-8, UTF-16, or UTF-32; ASCII uses 7-bit
encoding (as of now, the extended form uses 8-bit encoding).
● Unicode is a standard form used by people all over the world;
ASCII is not a standard all over the world, since it has only limited
characters and hence cannot be used all over the world.
● The Unicode characters themselves include all the characters of
the ASCII encoding, so we can say that Unicode is a superset of
ASCII; every ASCII character has its equivalent coding character
in Unicode.
● Unicode has more than 128,000 characters; ASCII, in contrast, has
only 256 characters.

What is ASCII?
ASCII (American Standard Code for Information Interchange) is the most
common character encoding format for text data in computers and on the
internet. In standard ASCII-encoded data, there are unique values for 128
alphabetic, numeric, and special characters, as well as control codes.
ASCII characters may be represented in the following ways:

● as pairs of hexadecimal digits -- base-16 numbers, represented as 0 through 9
and A through F for the decimal values of 10-15;
● as three-digit octal (base 8) numbers;
● as decimal numbers from 0 to 127; or
● as 7-bit or 8-bit binary.

For example, the ASCII encoding for the lowercase letter "m" is represented in the
following ways:

Character Hexadecimal Octal Decimal Binary (7 bit) Binary (8 bit)

m 0x6D 155 109 110 1101 0110 1101

ASCII characters were initially encoded into 7 bits and stored as 8-bit characters
with the most significant bit -- usually, the left-most bit -- set to 0.
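
The representations of "m" shown in the table can be reproduced with Python's number formatting (a small illustrative sketch):

# Hexadecimal, octal, decimal and binary forms of the ASCII code for 'm'.
code = ord("m")                     # 109
print(f"hex:          0x{code:X}")  # 0x6D
print(f"octal:        {code:o}")    # 155
print(f"decimal:      {code}")      # 109
print(f"7-bit binary: {code:07b}")  # 1101101
print(f"8-bit binary: {code:08b}")  # 01101101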

Why is ASCII important?


ASCII was the first major character encoding standard for data processing. Most
modern computer systems use Unicode, also known as the Unicode Worldwide
Character Standard. It's a character encoding standard that includes ASCII
encodings.

The Internet Engineering Task Force (IETF) adopted ASCII as a standard for
internet data when it published "ASCII format for Network Interchange" as RFC
20 in 1969. That request for comments (RFC) document standardized the use of
ASCII for internet data and was accepted as a full standard in 2015.

ASCII encoding is technically obsolete, having been replaced by Unicode. Yet
ASCII characters use the same encoding as the first 128 characters of the Unicode
Transformation Format 8 (UTF-8), so ASCII text is compatible with UTF-8.

In 2003, the IETF standardized the use of UTF-8 encoding for all web content
in RFC 3629.
Almost all computers now use ASCII or Unicode encoding. The exceptions are
some IBM mainframes that use the proprietary 8-bit code called Extended Binary
Coded Decimal Interchange Code (EBCDIC).

How does ASCII work?


ASCII offers a universally accepted and understood character set for basic
data communications. It enables developers to design interfaces that both
humans and computers understand. ASCII codes a string of data as ASCII
characters that can be interpreted and displayed as readable plain text for
people and as data for computers.

Programmers use the design of the ASCII character set to simplify certain
tasks. For example, using ASCII character codes, changing a single bit
easily converts text from uppercase to lowercase.

The capital letter "A" is represented by the binary value:

0100 0001

The lowercase letter "a" is represented by the binary value:

0110 0001

The difference is the third most significant bit, which has the value 32 (0x20). In decimal
and hexadecimal, this corresponds to:

Character Binary Decimal Hexadecimal

A 0100 0001 65 0x41

a 0110 0001 97 0x61

Character Binary Decimal Hexadecimal

0 0011 0000 48 0x30


1 0011 0001 49 0x31

2 0011 0010 50 0x32

3 0011 0011 51 0x33

4 0011 0100 52 0x34

5 0011 0101 53 0x35

6 0011 0110 54 0x36

7 0011 0111 55 0x37

8 0011 1000 56 0x38

9 0011 1001 57 0x39

The difference between upper- and lowercase characters is always 32
(0x20 in hexadecimal), so converting from upper- to lowercase and back is
a matter of adding or subtracting 32 from the ASCII character code.
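
A minimal Python sketch of the single-bit trick described above (valid for the basic letters A-Z and a-z):

# Toggling the 0x20 bit (decimal 32) switches the case of an ASCII letter.
def toggle_case(ch):
    return chr(ord(ch) ^ 0x20)

print(toggle_case("A"))       # a
print(toggle_case("a"))       # A
print(ord("a") - ord("A"))    # 32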

Similarly, the hexadecimal values for the ASCII digits 0 through 9 are shown in the table above.

Using this encoding, developers can easily convert ASCII digits to
numerical values by stripping off the four most significant bits of the binary
ASCII values (0011). This calculation can also be done by dropping the first
hexadecimal digit or by subtracting 48 from the decimal ASCII code.
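
Both forms of the digit conversion described above can be sketched in Python and give the same result:

# Convert an ASCII digit character to its numeric value.
ch = "7"
value_by_mask = ord(ch) & 0x0F   # strip the high bits 0011
value_by_sub  = ord(ch) - 48     # subtract the ASCII code of '0'
print(value_by_mask, value_by_sub)   # 7 7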
