Course Contents
Chapter 1: Introduction
Introduction to the Turing machine and related concepts
Chapter 2: Computability
How TMs work, an introduction to recursive functions, …
Chapter 3: Undecidability
Turing-decidable and Turing-acceptable languages, the case of undecidability, …
Chapter 4: Computational Complexity
Basics of algorithm analysis, such as Big-O notation and polynomial time, …
Teaching Method: Lecture, group assignments and presentation
CHAPTER ONE
INTRODUCTION
TURING MACHINE
Topics to be discussed
Introduction to Turing Machine(TM)
Standard Turing Machine
Computability of TM
Techniques of TMs Constructions
Church’s Hypothesis
INTRODUCTION TO COMPUTATIONAL COMPLEXITY
Throughout history people had a notion of a process of producing an output
from a set of inputs in a finite number of steps, and they thought of
“computation” as “a person writing numbers on a scratch pad following
certain rules.”
After their success in defining computation, researchers focused on
understanding what problems are computable. They showed that several
interesting tasks are inherently uncomputable: No computer can solve
them without going into infinite loops (i.e., never halting) on certain inputs.
Computational complexity theory focuses on issues of computational
efficiency — quantifying the amount of computational resources required to
solve a given task.
We will quantify the efficiency of an algorithm by studying how its number of
basic operations scales as we increase the size of the input.
INTRODUCTION … CONT’D
Computation is a mathematically precise notion.
We will typically measure the computational efficiency of an algorithm as the
number of basic operations it performs as a function of its input length.
That is, the efficiency of an algorithm can be captured by a function T from the set
N of natural numbers to itself, i.e.,
T: N→N, such that T(n) is equal to the maximum number of basic operations that the algorithm
performs on inputs of length n.
For the grade-school algorithm we have at most T(n) = 2n², and for repeated addition at least T(n) = n·10^(n−1).
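To make this concrete, here is a small sketch in Python (not from the course materials) that counts basic digit operations for the two multiplication strategies; the exact counting convention is our own simplification.

def gradeschool_mult(x, y):
    # Multiply digit by digit, as in the grade-school algorithm:
    # roughly n^2 single-digit multiplications for n-digit inputs.
    ops, result = 0, 0
    for i, dy in enumerate(reversed(str(y))):
        for j, dx in enumerate(reversed(str(x))):
            result += int(dx) * int(dy) * 10 ** (i + j)
            ops += 1
    return result, ops

def repeated_addition(x, y):
    # Add x to itself y times: the number of additions grows with the
    # numeric VALUE of y, i.e. exponentially in y's digit length.
    ops, result = 0, 0
    for _ in range(y):
        result += x
        ops += 1
    return result, ops

print(gradeschool_mult(1234, 5678))   # (7006652, 16): 4 x 4 digit operations
print(repeated_addition(1234, 5678))  # (7006652, 5678): 5678 additions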
Throughout history people have been solving computational tasks using a wide
variety of methods, ranging from intuition and “eureka” moments to
mechanical devices such as the abacus or the slide rule, to modern computers.
How can we find a simple mathematical model that captures all of these ways to
compute?
INTRODUCTION … CONT’D
Surprisingly enough, it turns out that there is a simple mathematical model
that suffices for studying many questions about computation and its
efficiency: the Turing machine.
It suffices to restrict attention to this single model since it seems able to
simulate all physically realizable computational methods with little loss of
efficiency.
Thus the set of “efficiently solvable” computational tasks is at least as large
for the Turing machine as for any other method of computation. One possible
exception is the quantum computer model, but we do not currently know if it is
physically realizable.
In the following sections we discuss what complexity theory is, and the
theoretical model of computation: the Turing machine (TM).
WHAT IS COMPLEXITY THEORY?
Complexity theory is a central topic in theoretical computer science.
It has direct applications to computability theory and uses computation
models such as Turing machines to help test complexity.
Complexity theory helps computer scientists relate and group problems
together into complexity classes. Sometimes, if one problem can be
solved, it opens a way to solve other problems in its complexity class.
Complexity helps determine the difficulty of a problem, often
measured by how much time and space (memory) it takes to solve a
particular problem. For example, some problems can be solved
in polynomial amounts of time and others take exponential amounts of
time, with respect to the input size.
WHAT IS COMPLEXITY THEORY?
Closely related fields in theoretical computer science are analysis of
algorithms and computability theory.
A key distinction between analysis of algorithms and computational complexity
theory is that the former is devoted to analyzing the amount of resources
needed by a particular algorithm to solve a problem, whereas the latter asks a
more general question about all possible algorithms that could be used to solve
the same problem.
More precisely, computational complexity theory tries to classify problems
that can or cannot be solved with appropriately restricted resources.
In turn, imposing restrictions on the available resources is what distinguishes
computational complexity from computability theory: the latter theory asks
what kinds of problems can, in principle, be solved algorithmically.
WHAT IS COMPLEXITY THEORY?
One basic goal in complexity theory is to separate interesting complexity
classes.
To separate two complexity classes we need to exhibit a machine in one class
that gives a different answer on some input from every machine in the other
class.
Complexity theory encompasses four theories:
Self-organization
Network theory
Adaptive systems theory
Non-linear systems theory
The 4 main theories that contribute to the Complexity Theory
body of knowledge are:
Systems Theory / Self organization:
Often called the mother of Complexity Theory.
It deals with ideas surrounding self-organization and adaptability.
Systems Theory is a major contributor to the understanding of Complexity
Theory in computer science.
Chaos Theory:
This is the study of non-linear systems: things that may look
completely random but still have an underlying cause that may
not be obvious on the surface.
From Chaos Theory, we gain an understanding of feedback loops
and non-linear systems.
Computability is the ability to solve a problem in an effective manner.
It is a key topic of the field of computability theory within mathematical
logic and the theory of computation within computer science.
The computability of a problem is closely linked to the existence of an
algorithm to solve the problem.
Computation is a general term for any type of information processing
that can be represented as an algorithm precisely (mathematically).
Examples:
Adding two numbers in our brains, on a piece of paper or using a calculator.
Converting a decimal number to its binary representation, or vice versa.
Finding the greatest common divisor of two numbers, etc. (see the sketch below).
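As a quick illustration of the last example, here is a minimal Python sketch of Euclid's algorithm for the greatest common divisor (our own illustrative code, not part of the original notes):

def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12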
Network Theory:
Network Theory is a more practical application that relies less on models
and more on real-world data.
It includes the study of the world’s supply chain, communication
networks, and social networks.
Adaptive Systems Theory:
This is the study of actions and reactions to others' behavior.
It studies the organization of things that do not have centralized
control.
They are governed by simple rules that emerge through interaction.
Introduction to Turing Machine (TM)
What is a Turing machine?
A Turing machine is a hypothetical machine thought of by the mathematician
Alan Turing in 1936.
Despite its simplicity, the machine can simulate ANY computer algorithm,
no matter how complicated it is!
(Figure: a very simple representation of a Turing machine.)
Turing Machine
It is also similar to the finite state machine, except that the input is
provided on an execution "tape", which the Turing machine can
read from, write to, and move back and forth past its read/write
"head".
The tape is allowed to grow to arbitrary size.
The Turing machine is capable of performing complex
calculations which can have arbitrary duration.
This model is perhaps the most important model of
computation in computer science, as it simulates computation in
the absence of predefined resource limits.
The machine consists of an infinitely long tape which acts like the memory in
a typical computer, or any other form of data storage.
The squares on the tape are usually blank at the start and can be
written with symbols.
In this case, the machine can only process the symbols 0 and 1 and
" " (blank), and is thus said to be a 3-symbol Turing machine.
At any one time, the machine has a head which is positioned over
one of the squares on the tape.
With this head, the machine can perform three very basic operations:
Read the symbol on the square under the head.
Edit the symbol by writing a new symbol or erasing it.
Move the tape left or right by one square so that the machine can read
and edit the symbol on a neighboring square.
A demonstration
As a trivial example to demonstrate these operations, let's try printing the
symbols "1 1 0" on an initially blank tape:
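A minimal sketch of this demonstration in Python follows; the state names and the rule-table format are our own illustrative choices, not standard notation.

from collections import defaultdict

tape = defaultdict(lambda: " ")   # blank, unbounded tape
head, state = 0, "q0"

# (current state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("q0", " "): ("1", +1, "q1"),
    ("q1", " "): ("1", +1, "q2"),
    ("q2", " "): ("0", +1, "halt"),
}

while state != "halt":
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print(" ".join(tape[i] for i in sorted(tape)))  # 1 1 0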
Finite set of operations/rules
The machine has a finite set of states, denoted Q.
The machine contains a “register” that can hold a single element of Q; this is the
”state” of the machine at that instant.
This state determines its action at the next computational step, which
consists of the following:
(1) read the symbols in the cells directly under the k heads
(2) for the k - 1 read/write tapes replace each symbol with a new symbol (it has the
option of not changing the tape by writing down the old symbol again),
(3) change its register to contain another state from the finite set Q (it has the
option not to change its state by choosing the old state again) and
(4) move each head one cell to the left or to the right.
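As a hedged sketch (the names step and delta are our own, not standard notation), one such step of a k-tape machine might look like this in Python, treating tape 0 as the read-only input tape:

def step(state, tapes, heads, delta):
    # (1) read the symbols under the k heads
    symbols = tuple(t.get(h, " ") for t, h in zip(tapes, heads))
    new_state, writes, moves = delta(state, symbols)
    # (2) rewrite the k - 1 read/write tapes (tape 0 is read-only input)
    for i in range(1, len(tapes)):
        tapes[i][heads[i]] = writes[i]
    # (4) move each head one cell left (-1) or right (+1)
    for i in range(len(heads)):
        heads[i] += moves[i]
    # (3) the register now holds the new state from Q
    return new_state

# demo: a 2-tape machine that copies the input symbol to the work tape
def delta(state, symbols):
    return "halt", (symbols[0], symbols[0]), (+1, +1)

tapes, heads = [{0: "1"}, {}], [0, 0]
print(step("q0", tapes, heads, delta), tapes[1][0])  # halt 1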
The Universal Turing Machine
General-purpose computers are possible: there is a universal Turing machine that
can simulate the execution of every other TM M, given M's description as input.
We are so used to having a universal computer on our desktops or even in our
pockets that today we take this notion for granted.
But it is good to remember why it was once counter-intuitive.
The parameters of the universal TM are fixed: its alphabet size, number of
states, and number of tapes.
The corresponding parameters for the machine being simulated could be much larger.
The reason this is not a hurdle is, of course, the ability to use encodings.
Even if the universal TM has a very simple alphabet, say {0, 1}, this is
sufficient to allow it to represent the other machine's state and transition
table on its tapes, and then follow along in the computation step by step.
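As an illustrative sketch only (this encoding scheme is our own invention, not one from any textbook), a transition-table entry could be serialized over {0, 1} like this:

def encode_rule(state, read, write, move, new_state):
    # Write each field in unary ("1" repeated value+1 times), with a "0"
    # as a delimiter, so the rule is self-delimiting over the alphabet {0, 1}.
    fields = (state, read, write, move, new_state)  # small non-negative ints
    return "".join("1" * (f + 1) + "0" for f in fields)

# rule: in state 0 reading symbol 1, write 1, move right (1), go to state 2
print(encode_rule(0, 1, 1, 1, 2))  # 101101101101110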
Reducibility and NP-completeness
It turns out that the independent set problem is at least as hard
as any other language in NP (Nondeterministic Polynomial time):
if it has a polynomial-time algorithm then so do all the problems
in NP.
This fascinating property is called NP-hardness. Since most
scientists conjecture that NP ≠ P,
the fact that a language is NP-hard can be viewed as evidence
that it cannot be decided in polynomial time.
How can we prove that a language B is at least as hard as some
other language A? The key tool is a reduction: a polynomial-time
transformation of instances of A into instances of B that preserves the answer.
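As a hedged illustration of a reduction (our own example code), recall that a set of vertices is independent in a graph G exactly when it is a clique in G's complement, so INDEPENDENT-SET reduces to CLIQUE in polynomial time:

def complement(vertices, edges):
    # Build the complement edge set: (G, k) is in INDEPENDENT-SET
    # iff (complement of G, k) is in CLIQUE.
    return {frozenset((u, v)) for u in vertices for v in vertices
            if u < v and frozenset((u, v)) not in edges}

V = {1, 2, 3, 4}
E = {frozenset((1, 2)), frozenset((2, 3))}
print(complement(V, E))
# {3, 4} is independent in G, so it forms a clique (an edge) in the complement.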
Computability of TM
The expressive power of Turing machines.
When you encounter Turing machines for the first time, it may not be clear that
they do indeed fully encapsulate our intuitive notion of computation.
It may be useful to work through some simple examples, such as expressing
the standard algorithms for addition and multiplication in terms of Turing
machines computing the corresponding functions.
You can also verify that you can simulate a program in your favorite
programming language using a Turing machine.
(The reverse direction also holds: most programming languages can simulate a
Turing machine.) See next example
Example
This example assumes some background in computing
We argue that Turing machines can simulate any program written in any of the
familiar programming languages such as C or Java.
First, recall that programs in these programming languages can be translated (the
technical term is compiled) into an equivalent machine language program.
This is a sequence of simple instructions to read from memory into one of a finite
number of registers, write a register’s contents to memory, perform basic
arithmetic operations, such as adding two registers, and control instructions that
perform actions conditioned on, say, whether a certain register is equal to zero.
All these operations can be easily simulated by a Turing machine.
The memory and registers can be implemented using the machine's tapes, while the
instructions can be encoded by the machine's transition function.
Quantum Computers
Quantum computers are a new computational model that may be physically realizable and
may have an exponential advantage over "classical" computational models such as
probabilistic and deterministic Turing machines.
This section surveys the basic principles of quantum computation and some of the
important algorithms in this model.
As complexity theorists, the main reason to study quantum computers is that they pose a
serious challenge to the strong Church-Turing thesis that stipulates that any physically
reasonable computation device can be simulated by a Turing machine with polynomial
overhead.
Quantum computers seem to violate no fundamental laws of physics and yet currently we
do not know any such simulation.
In fact, there is some evidence to the contrary: there is a polynomial-time
algorithm for quantum computers to factor integers, whereas despite much effort
no such algorithm is known for deterministic or probabilistic Turing
machines.
In fact, the conjectured hardness of this problem underlies several
cryptographic schemes (such as the RSA cryptosystem) that are currently widely
used for electronic commerce and other applications.
Physicists are also interested in quantum computers as studying them may
shed light on quantum mechanics, a theory which, despite its great success in
predicting experiments, is still not fully understood.
Church’s Hypothesis
In computability theory, the Church–Turing thesis (also known as the
computability thesis, the Turing–Church thesis, the Church–Turing
conjecture, Church's thesis, Church's conjecture, and Turing's
thesis) is a hypothesis about the nature of computable functions.
It states that a function on the natural numbers can be calculated by an effective
method if and only if it is computable by a Turing machine.
The thesis is named after the American mathematician Alonzo Church and the
British mathematician Alan Turing.
Before the precise definition of computable function, mathematicians often used
the informal term effectively calculable to describe functions that are computable
by paper-and-pencil methods.
In the 1930s, several independent attempts were made to formalize the notion of
computability:
Church’s Thesis for Turing Machine
In 1936, Alonzo Church created a method named the lambda calculus, in which
the Church numerals, an encoding of the natural numbers, are well defined
(see the sketch below).
Also in 1936, Alan Turing created Turing machines (originally a theoretical
model of a machine), which manipulate the symbols of a string with the
help of a tape.
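A minimal sketch of Church numerals using Python lambdas follows (illustrative only; names such as to_int are our own):

zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # apply f one more time
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n applications

def to_int(n):
    # Decode a Church numeral by counting how many times it applies f.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(to_int(two), to_int(add(two)(two)))  # 2 4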
Church Turing Thesis :
A Turing machine is defined as an abstract representation of a computing
device, such as the hardware of a computer.
Alan Turing proposed Logical Computing Machines (LCMs), his own term
for Turing machines.
This was done to define algorithms properly.
So, Church described a mechanical method, named 'M', for manipulating strings
using logic and mathematics.
This method M must satisfy the following conditions:
The number of instructions in M must be finite.
Output should be produced after performing a finite number of steps.
It should not be imaginary, i.e. it should be realizable in real life.
It should not require any complex understanding.
Using these conditions, Church proposed a hypothesis called the Church–Turing
thesis that can be stated as:
“The assumption that the intuitive notion of computable functions can be
identified with partial recursive functions.”
In the 1930s, this statement was first formulated by Alonzo Church and is usually
referred to as Church's thesis, or the Church–Turing thesis.
However, this hypothesis cannot be proved.
Recursive functions are computable under the following assumptions:
Each and every basic function must be computable.
Let 'F' be a computable function; if, after performing some elementary
operations on 'F', it is transformed into a new function 'G', then this
function 'G' automatically becomes a computable function as well.
Any function that follows the above two assumptions is regarded as a
computable function (see the sketch below).
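For example, here is a small sketch (our own illustration) showing how addition arises from the basic successor function by primitive recursion, one of the elementary operations referred to above:

def succ(n):
    # the basic (computable) successor function
    return n + 1

def add(m, n):
    # primitive recursion: add(m, 0) = m and add(m, n + 1) = succ(add(m, n)),
    # so add is computable because succ is.
    return m if n == 0 else succ(add(m, n - 1))

print(add(3, 4))  # 7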
End of Chapter One