CSC 403 Lecture 2
Preamble
Computer architecture has had a profound influence on language design: most languages have been developed around the prevalent computer architecture, known as the von Neumann architecture. Pedagogy is another influence: some languages have better "pedagogy" than others. That is, they are intrinsically easier to teach and to learn, they have better textbooks, they are implemented in better program development environments, and they are widely known and used by the best programmers in an application area.
Computer Architecture
In 1945, the computer scientist John von Neumann described a design architecture for an electronic digital computer with subdivisions of a central arithmetic part, a central control part, a memory to store both data and instructions, external storage, and input and output mechanisms. Von Neumann introduced the idea of the stored program, which keeps programmed instructions, as well as data, in read-write, random-access memory (RAM). Previously, data and programs were stored in separate memories. Von Neumann realized that data and programs are indistinguishable and can, therefore, use the same memory. On a large scale, the ability to treat instructions as data is what makes assemblers, compilers and other automated programming tools possible. One can "write programs which write programs". This led to the introduction of compilers, which accept high-level language source code as input and produce binary code as output.
According to Von Neumann Architecture, the basic function performed by a computer is the
execution of a program. A program is a set of machine instructions. An instruction is a form
of control code, which supplies the information about an operation and the data on which
the operation is to be performed. The Von Neumann architecture uses a single processor
which follows a linear sequence of fetch-decode-execute. In order to do this, the processor
has to use some special registers, which are discrete memory locations with special purposes
attached. These are:
Register      Meaning
PC            Program Counter
CIR           Current Instruction Register
MAR           Memory Address Register
MDR           Memory Data Register
IR            Index Register
Accumulator   Holds the result
• The program counter keeps track of where to find the next instruction so that a copy
of the instruction can be placed in the current instruction register. Sometimes the
program counter
is called the Sequence Control Register (SCR) as it controls the sequence in which
instructions are executed.
• The current instruction register holds the instruction that is to be executed.
• The memory address register is used to hold the memory address that contains either the next piece of data or an instruction that is to be used.
• The memory data register acts like a buffer and holds anything that is copied from the memory, ready for the processor to use.
• The central processor contains the arithmetic-logic unit (also known as the arithmetic
unit) and the control unit.
• The arithmetic-logic unit (ALU) is where data is processed. This involves arithmetic
and logical operations. Arithmetic operations are those that add and subtract
numbers, and so on. Logical operations involve comparing binary patterns and
making decisions.
• The control unit fetches instructions from memory, decodes them and synchronises
the operations before sending signals to other parts of the computer.
• The accumulator is in the arithmetic unit, the program counter and the instruction
registers are in the control unit and the memory data register and memory address
register are in the processor.
• An index register is a microprocessor register used for modifying operand addresses
during the run of a program, typically for doing vector/array operations. Index
registers are used for a special kind of indirect addressing (covered in 3.5 (i) ) where
an immediate constant (i.e. which is part of the instruction itself) is added to the
contents of the index register to form the address to the actual operand or data.
The following algorithm shows the steps in the cycle. At the end, the cycle is reset and the algorithm repeated.
1. Load the address that is in the program counter (PC) into the memory address register (MAR).
2. Increment the PC by 1.
3. Load the instruction that is in the memory address given by the MAR into the memory data register (MDR).
4. Load the instruction that is now in the MDR into the current instruction register (CIR).
5. Decode the instruction that is in the CIR.
6. If the instruction is a jump instruction, then
   a. Load the address part of the instruction into the PC.
   b. Reset by going to step 1.
7. Execute the instruction.
8. Reset by going to step 1.
Steps 1 to 4 are the fetch part of the cycle. Steps 5, 6a and 7 are the execute part of the cycle
and steps 6b and 8 are the reset part.
Step 1 simply places the address of the next instruction into the memory address register so
that the control unit can fetch the instruction from the right part of the memory. The program
counter is then incremented by 1 so that it contains the address of the next instruction,
assuming that the instructions are in consecutive locations. The memory data register is used
whenever anything is to go from the central processing unit to main memory, or vice versa.
Thus the next instruction is copied from memory into the MDR and is then copied into the
current instruction register. Now that the instruction has been fetched the control unit can
decode it and decide what has to be done. This is the execute part of the cycle. If it is an
arithmetic instruction, this can be executed and the cycle restarted as the PC contains the
address of the next instruction in order. However, if the instruction involves jumping to an
instruction that is not the next one in order, the PC has to be loaded with the address of the
instruction that is to be executed next. This address is in the address part of the current
instruction, hence the address part is loaded into the PC before the cycle is reset and starts
all over again.
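As an illustration (not part of the original lecture), the whole cycle can be simulated in a few lines of Python, holding the special registers as ordinary variables. The toy instruction set (LOAD, ADD, JUMP, HALT) and the memory contents are assumptions made for the sketch:

```python
# Minimal sketch of the fetch-decode-execute cycle described above.
# Instructions and data share the same memory, as von Neumann proposed.

memory = [
    ("LOAD", 5),     # 0: copy the data at address 5 into the accumulator
    ("ADD", 6),      # 1: add the data at address 6 to the accumulator
    ("JUMP", 4),     # 2: jump straight to address 4
    ("ADD", 6),      # 3: skipped because of the jump
    ("HALT", None),  # 4: stop the simulation
    10,              # 5: data
    32,              # 6: data
]

pc = 0               # program counter
accumulator = 0

while True:
    mar = pc                     # step 1: PC -> MAR
    pc += 1                      # step 2: increment the PC
    mdr = memory[mar]            # step 3: memory[MAR] -> MDR
    cir = mdr                    # step 4: MDR -> CIR
    opcode, operand = cir        # step 5: decode the instruction in the CIR
    if opcode == "JUMP":         # step 6a: load the address part into the PC
        pc = operand
        continue                 # step 6b: reset by restarting the loop
    if opcode == "LOAD":         # step 7: execute the instruction
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "HALT":
        break
                                 # step 8: reset (the loop repeats)
print(accumulator)               # prints 42
```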
CONCLUSION
Von Neumann architecture describes a design architecture for an electronic digital computer
with subdivisions of a central arithmetic part, a central control part, a memory to store both
data and instructions, external storage, and input and output mechanisms. The meaning of the phrase has since evolved to mean any stored-program computer: one that keeps its programmed instructions, as well as its data, in read-write, random-access memory (RAM). Von Neumann thus introduced the idea of the stored program. Previously, data and programs were stored in separate memories. Von Neumann realized that data and
programs are indistinguishable and can, therefore, use the same memory. On a large scale,
the ability to treat instructions as data is what makes assemblers, compilers and other
automated programming tools possible. One can "write programs which write programs".
This led to the introduction of compilers which accepted high level language source code as
input and produced binary code as output.
SUMMARY
The major influences on language design have been machine architecture and software design methodologies. The von Neumann architecture, a theoretical design for a stored-program computer, serves as the basis for almost all modern computers.
Preamble
When programs are developed to solve real-life problems like inventory management,
payroll processing, student admissions, examination result processing, etc. they tend to be
huge and complex. The approach to analyzing such complex problems, planning for
software development and controlling the development process is called programming
methodology. New software development methodologies (e.g. Object-Oriented Software
Development) led to new paradigms in programming and by extension, to new programming
languages. A programming paradigm is a pattern of problem-solving thought that underlies a particular genre of programs and languages; equivalently, it is the set of concepts to which the methodology of a programming language adheres.
Language Paradigm
Paradigm is a model or world view. Paradigms are important because they define a
programming language and how it works. A great way to think about a paradigm is as a set
of ideas that a programming language can use to perform tasks in terms of machine-code at
a much higher level. These different approaches can be better in some cases, and worse in
others. A great rule of thumb when exploring paradigms is to understand what they are good
at. While it is true that most modern programming languages are general-purpose and can
do just about anything, it might be more difficult to develop a game, for example, in a
functional language than an object-oriented language. Many people classify languages into
these main paradigms:
Procedural/Imperative Languages
These are mostly influenced by the von Neumann computer architecture. The problem is broken
down into procedures, or blocks of code that perform one task each. All procedures taken
together form the whole program. It is suitable only for small programs that have a low level
of complexity. Typical elements of such languages are assignment statements, data
structures and type binding, as well as control mechanisms; active procedures manipulate
passive data objects. Example − For a calculator program that does addition, subtraction,
multiplication, division, square root and comparison, each of these operations can be
developed as a separate procedure. In the main program, each procedure would be invoked on the basis of the user's choice, as in the sketch below. Examples: FORTRAN, ALGOL, Pascal, C/C++, C#, Java, Perl, JavaScript, Visual Basic .NET.
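A minimal procedural sketch of that calculator in Python (illustrative, not from the lecture; the procedure names are made up):

```python
# Each operation is developed as a separate procedure; the main program
# invokes one of them on the basis of the user's choice.
import math

def add(a, b):      return a + b
def subtract(a, b): return a - b
def multiply(a, b): return a * b
def divide(a, b):   return a / b
def square_root(a): return math.sqrt(a)
def compare(a, b):  return (a > b) - (a < b)   # -1, 0 or 1

def main():
    operations = {"add": add, "sub": subtract, "mul": multiply, "div": divide}
    choice = input("Operation (add/sub/mul/div)? ")
    a = float(input("First number: "))
    b = float(input("Second number: "))
    print("Result:", operations[choice](a, b))  # square_root and compare
                                                # would be invoked similarly

if __name__ == "__main__":
    main()
```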
Functional Languages
These languages have no assignment statements. Their syntax is closely related to
the formulation of mathematical functions. Thus, functions are central for functional
programming languages. Here the problem, or the desired solution, is broken down into
functional units. Each unit performs its own task and is self-sufficient. These units are then
stitched together to form the complete solution. Example − A payroll processing can have
functional units like employee data maintenance, basic salary calculation, gross salary
calculation, leave processing, loan repayment processing, etc., as in the sketch below. Examples: LISP, Scala, Haskell, Python, Clojure, Erlang; these may also include OO (object-oriented) concepts.
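A rough functional-style sketch of such payroll units in Python (the salary figures and rates are invented for illustration): each unit is a pure function, and the solution is obtained by composing them rather than by assignment to shared state.

```python
def basic_salary(grade):
    """Employee data maintenance unit: look up the basic salary."""
    return {"junior": 50_000, "senior": 90_000}[grade]

def gross_salary(basic, allowance_rate=0.2):
    """Gross salary calculation unit."""
    return basic * (1 + allowance_rate)

def net_salary(gross, tax_rate=0.1):
    """Deduction unit."""
    return gross * (1 - tax_rate)

# Stitch the self-sufficient units together by composition.
def payroll(grade):
    return net_salary(gross_salary(basic_salary(grade)))

print(payroll("senior"))  # 97200.0
```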
Logic Languages
Facts and rules (the logic) are used to represent information (or knowledge), and a logical
inference process is used to produce results. In contrast, control structures are not explicitly defined in a program; they are part of the programming language (inference
mechanism). Here the problem is broken down into logical units rather than functional units.
Example: In a school management system, users have very defined roles like class teacher,
subject teacher, lab assistant, coordinator, academic in-charge, etc. So the software can be
divided into units depending on user roles. Each user can have a different interface, permissions, etc. A rough illustration of rule-based inference follows below. Examples: PROLOG, Perl; these may also include OO concepts.
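Prolog is the natural vehicle for this paradigm. As a rough illustration only, the following Python sketch mimics the idea with hand-rolled forward chaining; the facts, the rule and all the names in it are invented for the example:

```python
# Facts and a rule represent knowledge; an inference loop derives new facts.
facts = {("teaches", "ada", "csc403"), ("enrolled", "bob", "csc403")}

# Rule: if teaches(T, C) and enrolled(S, C), then instructor_of(T, S).
# Variables are strings beginning with "?".
rules = [
    ([("teaches", "?t", "?c"), ("enrolled", "?s", "?c")],
     ("instructor_of", "?t", "?s")),
]

def match(pattern, fact, bindings):
    """Try to extend the variable bindings so that pattern equals fact."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if bindings.setdefault(p, f) != f:
                return None      # conflicting binding
        elif p != f:
            return None
    return bindings

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new facts can be inferred."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            stack = [({}, premises)]
            while stack:
                bindings, remaining = stack.pop()
                if not remaining:    # all premises satisfied: add conclusion
                    new = tuple(bindings.get(t, t) for t in conclusion)
                    if new not in facts:
                        facts.add(new)
                        changed = True
                    continue
                for fact in list(facts):
                    b = match(remaining[0], fact, bindings)
                    if b is not None:
                        stack.append((b, remaining[1:]))
    return facts

forward_chain(facts, rules)
print(("instructor_of", "ada", "bob") in facts)  # True
```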
Object-Oriented Languages
These languages are built around objects, which represent data as well as procedures. Data structures and their appropriate manipulation processes are packed together to form a syntactical unit. Here the solution revolves around entities or objects that are part of the problem. The solution deals with
how to store data related to the entities, how the entities behave and how they interact with
each other to give a cohesive solution. Example − If we have to develop a payroll
management system, we will have entities like employees, salary structure, leave rules, etc.
around which the solution must be built, as in the sketch below. Examples: SIMULA 67, Smalltalk, C++, Java, Python, C#, Perl, Lisp, Eiffel.
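A brief object-oriented sketch in Python (the names and figures are invented for illustration): the employee entity packs its data together with the procedures that manipulate it.

```python
class Employee:
    """Entity packing data (name, salary, leave) with its behaviour."""
    def __init__(self, name, basic, leave_days=20):
        self.name = name
        self.basic = basic
        self.leave_days = leave_days

    def gross_salary(self, allowance_rate=0.2):
        return self.basic * (1 + allowance_rate)

    def take_leave(self, days):
        if days > self.leave_days:
            raise ValueError("insufficient leave balance")
        self.leave_days -= days

# Objects interact through their methods, not through exposed data.
e = Employee("Ada", 90_000)
e.take_leave(5)
print(e.name, e.gross_salary(), e.leave_days)  # Ada 108000.0 15
```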
Software developers may choose one or a combination of these methodologies to develop software. Note that in each of the methodologies discussed, the problem has to be broken down into smaller units. To do this, developers use one of the following two approaches:
Top-down approach: The problem is broken down into smaller units, which may be further
broken down into even smaller units. Each unit is called a module. Each module is a self-
sufficient unit that has everything necessary to perform its task. The following illustration
shows an example of how you can follow a modular approach to create different modules
while developing a payroll processing program.
Bottom-up Approach: In bottom-up approach, system design starts with the lowest level
of components, which are then interconnected to get higher level components. This process
continues till a hierarchy of all system components is generated. However, in a real-life scenario it is very difficult to know all the lowest-level components at the outset, so the bottom-up approach is used only for very simple problems. Let us look at the components of a calculator program in the sketch below.
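A minimal sketch of the bottom-up idea in Python (the component split is an assumption for illustration): the lowest-level components are written first and then interconnected into higher-level ones.

```python
# Lowest-level component, built first.
def add(a, b):
    return a + b

# Next level, built on add().
def multiply(a, n):
    result = 0
    for _ in range(n):          # multiplication as repeated addition
        result = add(result, a)
    return result

# Top-level component, interconnecting the lower ones.
def calculate(op, a, b):
    return {"+": add, "*": multiply}[op](a, b)

print(calculate("*", 6, 7))  # 42
```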
CONCLUSION
All programming paradigms have their benefits, both in education and in practice. Functional languages have historically been very notable in the world of scientific computing. Of course, looking at a list of the most popular languages for scientific computing today, it is obvious that they are all multi-paradigm. Object-oriented languages also have their fair
share of great applications. Software development, game development, and graphics
programming are all great examples of where object-oriented programming is a great
approach to take.
The biggest lesson one can take from all of this is that the future of software and programming languages is multi-paradigm. It is unlikely that anyone will be creating a purely functional or purely object-oriented programming language anytime soon. If you ask me, this isn't such a bad thing, as there are weaknesses and strengths to every programming approach, and a lot of true optimization consists of running tests to see which methodology is more efficient or better overall. This also reinforces the idea that everyone should know multiple languages from multiple paradigms. With paradigms merging through the power of generics, one never knows when one might run into a programming concept from an entirely different programming language.
SUMMARY
This unit has explained what a programming language is, along with the classification and explanation of the different programming language generations and the basic components of each. You also saw the characteristics of each programming language generation in terms of computer characteristics, capabilities, and the trends and developments in computer hardware across the generations.
Preamble
Programming paradigms, like software architecture, have trade-offs. In fact, many of the
same methods for comparing architectural designs apply just as well to language design.
Conceptual design involves a series of trade-off decisions among significant parameters
such as operating speeds, memory size, power, and I/O bandwidth - to obtain a
compromise design which best meets the performance requirements. Both the uncertainty
in these requirements and the important trade-off factors should be ascertained. Those
factors can be used to evaluate the design trade-offs (usually on a qualitative basis).
A trade-off is made when using an interpreted language. One can trade speed of development
for higher execution costs. Because each line of an interpreted program must be translated
each time it is executed, there is a higher overhead. Thus, an interpreted language is generally
more suited to ad hoc requests than predefined requests. That is especially true for programs that are based around manipulating state over a long term. Another example of a trade-off: a clear, logically simple structure makes complex algorithms easy to build correctly and scales well, but makes stateful systems harder to build. There are many trade-off factors in language design, such as:
Reliability: this takes into account the time required for malfunction detection and
reconfiguration or repair.
Development status and cost: these are complex management factors which can have significant effects on the design as well. They require the estimation of a number of items, such as the extent of off-the-shelf hardware use, design risks in developing new equipment using
advanced technologies, potential progress in the state of the art during the design and
development.
SUMMARY
The major influences on language design have been machine architecture and software
design methodologies. Designing a programming language is primarily an engineering feat,
in which a long list of trade-offs must be made among features, constructs, and capabilities.
Implementation Method
Compilation
Compilation Process
Interpretation
Phases of Interpretation
Hybrid
Just-in-Time
Implementation Method
Compilation
Programs can be translated into machine language, which can be executed directly on the
computer. This method is called a compiler implementation and has the advantage of very
fast program execution, once the translation process is complete. The language that a
compiler translates is called the source language. The process of compilation and program
execution takes place in several phases, the most important of which are shown in Figure
below:
Compilation Process
- Lexical analysis: converts characters in the source program into lexical units (e.g. identifiers, operators, keywords).
- Syntactic analysis: transforms lexical units into parse trees, which represent the syntactic structure of the program.
- Semantic analysis: checks for errors that are hard to detect during syntactic analysis and generates intermediate code.
- Code generation: machine code is generated.
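As a small sketch of the first phase only (the token categories and patterns are assumptions for illustration), a lexical analyser can be written in a few lines of Python:

```python
import re

# Token categories and the patterns that recognise them.
TOKEN_SPEC = [
    ("NUMBER",     r"\d+"),
    ("IDENTIFIER", r"[A-Za-z_]\w*"),
    ("OPERATOR",   r"[+\-*/=]"),
    ("SKIP",       r"\s+"),          # whitespace carries no lexical unit
]

def lex(source):
    """Convert characters in the source into (category, lexeme) pairs."""
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)
    for m in re.finditer(pattern, source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(lex("total = count + 42")))
# [('IDENTIFIER', 'total'), ('OPERATOR', '='), ('IDENTIFIER', 'count'),
#  ('OPERATOR', '+'), ('NUMBER', '42')]
```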
Interpretation
Pure interpretation lies at the opposite end (from compilation) of implementation methods.
With this approach, programs are interpreted by another program called an interpreter, with
no translation whatever. The interpreter program acts as a software simulation of a machine
whose fetch-execute cycle deals with high-level language program statements rather than
machine instructions. The process of pure interpretation is shown in Figure below:
Phases of Interpretation
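A minimal sketch of pure interpretation in Python (the tiny statement language is invented for the example): the interpreter's loop fetches one high-level statement at a time and executes it directly, with no translation step.

```python
program = [
    "set x 10",
    "add x 32",
    "print x",
]

def interpret(program):
    variables = {}
    for statement in program:                # fetch a source statement
        op, name, *rest = statement.split()  # decode it
        if op == "set":                      # execute it directly
            variables[name] = int(rest[0])
        elif op == "add":
            variables[name] += int(rest[0])
        elif op == "print":
            print(variables[name])

interpret(program)  # prints 42
```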
Hybrid
Some language implementation systems are a compromise between compilers and pure
interpreters; they translate high-level language programs to an intermediate language
designed to allow easy interpretation. This method is faster than pure interpretation because
the source language statements are decoded only once. Such implementations are called
hybrid implementation systems. The process used in a hybrid implementation system is
shown in Figure below:
Instead of translating intermediate language code to machine code, it simply interprets the
intermediate code.
Just-in-Time
A just-in-time (JIT) implementation goes a step further than a hybrid system: it initially translates programs to an intermediate language, then compiles the intermediate language of a subprogram into machine code the first time that subprogram is called.
The machine-code version is kept for subsequent calls, as in the sketch below. Just-in-time systems are widely used for Java programs, and .NET languages are also implemented with a JIT system.
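A rough sketch of the idea in Python, using Python source text as a stand-in for the intermediate language (the cache and the helper function are assumptions for illustration): the subprogram is compiled the first time it is called, and the compiled version is reused afterwards.

```python
compiled_cache = {}

def call(name, source, *args):
    """Compile the subprogram on its first call; reuse the compiled
    (here: bytecode) version on every subsequent call."""
    if name not in compiled_cache:
        namespace = {}
        exec(compile(source, name, "exec"), namespace)
        compiled_cache[name] = namespace[name]
    return compiled_cache[name](*args)

square_src = "def square(x):\n    return x * x"
print(call("square", square_src, 7))  # compiled on this first call -> 49
print(call("square", square_src, 8))  # reuses the compiled version -> 64
```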
Conclusion
Summary