Chapter 1
Introduction to Programming
1.1. Overview
A computer is an electronic device that accepts data, performs computations, and makes logical
decisions according to instructions given to it, then produces meaningful information in a form
that is useful to the user. The terms computer programs, software programs, or simply programs
refer to the instructions that tell the computer what to do. A computer requires programs to
function, and a computer program does nothing unless its instructions are executed by a CPU.
Computer programming (often shortened to programming or coding) is the process of writing,
testing, debugging/troubleshooting, and maintaining the source code of computer programs.
Writing computer programs means writing the instructions that the computer will follow when it
runs the program. Each instruction is relatively simple, yet because of the computer's speed, a
computer can execute millions of instructions in a second. A computer program usually consists
of two elements:
Data – characteristics
Code – action
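As a small illustration of these two elements, consider the following sketch (Python is used here only for readability; the variable names are illustrative):

```python
# Data: the characteristics the program works on.
radius = 5.0
pi = 3.14159

# Code: the actions performed on that data.
area = pi * radius * radius
print("Area:", area)
```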
Computer programs (also known as source code) are often written by professionals known as
computer programmers (or simply programmers). Source code is written in one of many
programming languages.
A programming language is an artificial language that can be used to control the behavior of a
machine, particularly a computer. Programming languages, like natural language (such as
Amharic), are defined by syntactic and semantic rules which describe their structure and
meaning, respectively. The syntax of a language describes the possible combinations of symbols
that form a syntactically correct program. The meaning given to a combination of symbols is
handled by semantics.
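The distinction can be seen concretely in a short sketch (Python, chosen here only for illustration): the first fragment breaks the syntax rules, while the second is syntactically correct but has no defined meaning.

```python
# A syntactically invalid program: the symbols do not form a legal combination.
try:
    compile("print('hi'", "<example>", "exec")  # missing closing parenthesis
    outcome = "accepted"
except SyntaxError:
    outcome = "syntax error"
print(outcome)

# A syntactically valid program whose meaning (semantics) is undefined:
# adding a number to a piece of text.
try:
    result = 3 + "four"
    meaning = "defined"
except TypeError:
    meaning = "semantic (type) error"
print(meaning)
```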
The main purpose of programming languages is to provide instructions to a computer. As such,
programming languages differ from most other forms of human expression in that they require a
greater degree of precision and completeness.
When using a natural language to communicate with other people, human authors and speakers
can be ambiguous and make small errors and still expect their intent to be understood. Computers,
however, do exactly what they are told to do and cannot infer the code the programmer
"intended" to write, so every task must be spelled out explicitly. The combination
of the language definition, the program, and the program's inputs must fully specify the external
behavior that occurs when the program is executed. Computer languages have relatively few,
exactly defined rules for composition of programs and strictly controlled vocabularies in which
unknown words must be defined before they can be used.
Programming languages can be divided into two major categories: low-level and high-level
languages.
1. Low-level Languages
Computers understand only one language, and that is binary language, the language of 1s and
0s. Binary language is also known as machine language. In the initial years of computer
programming, all instructions were given in binary form. Although the computer easily
understood these programs, it proved too difficult for a normal human being to remember all the
instructions in the form of 0s and 1s. Computers therefore remained a mystery to the common
person until other languages, such as assembly language, were developed, which were easier to
learn and understand.
Assembly language corresponds to symbolic instructions and executable machine code; it was
created to assign letters (called mnemonics) to each machine-language instruction to make
programs easier to remember and write. For example:
ADD A, B – adds the two numbers in memory locations A and B
Assembly language is nothing more than a symbolic representation of machine code, which
allows symbolic designation of memory locations. However, no matter how close assembly
language is to machine code, the computer still cannot understand it. Assembly language must
be translated into machine code by a separate program called an assembler. The machine
instructions created by the assembler from the original program (source code) are called object
code. Assembly languages are thus unique to a specific computer (machine), and an assembler is
written for each unique machine language.
2. High-level Languages
Although programming in assembly language is not as difficult and error prone as stringing
together ones and zeros, it is slow and cumbersome. In addition, it is hardware specific. The lack
of portability of programs between different machines led to the development of high-level
languages, which are closer to natural language and largely machine independent.
Writing a program involves two parts:
The first part focuses on defining the problem and the logical procedures to follow in solving
it.
The second introduces the means by which programmers communicate those procedures
to the computer system so that they can be executed.
Before a program is written, the programmer must clearly understand what data are to be used,
the desired result, and the procedure to be used to produce that result.
An algorithm is defined as a step-by-step sequence of instructions that must terminate and that
describes how the data are to be processed to produce the desired outputs.
There are three commonly used tools to help document program logic (the algorithm):
flowcharts, structured charts, and pseudo code.
Pseudo code is a compact and informal high-level description of a computer algorithm that uses
the structural conventions of programming languages. No standard for pseudo code syntax exists,
as a program in pseudo code is not an executable program.
It is often easier for humans to read than conventional programming languages, and it
serves as a compact, environment-independent generic description of the key principles
of an algorithm.
Writing pseudo code saves time later, during the construction and testing phase of a
program's development.
Example:
Write a program that obtains two integer numbers from the user. It will
print out the sum of those numbers.
Pseudo code:
Read num1
Read num2
sum = num1 + num2
Print sum
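The example above translates almost line for line into a real language. A minimal Python sketch (Python chosen only for illustration; the "read" steps use hard-coded sample values so the sketch is self-contained) might look like:

```python
# "Read num1" and "Read num2": sample values stand in for user input here.
num1 = 12
num2 = 30

# "sum = num1 + num2"
total = num1 + num2

# "Print sum"
print("The sum is", total)
```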
1.2.3. Flowchart
A flowchart is a schematic representation of an algorithm or a process. It does not depend on any
particular programming language, so it can be used to translate an algorithm into more than
one programming language. It uses different symbols (geometrical shapes) to represent different
processes.
The following are some of the common symbols:
Oval – terminal (the start or end of the algorithm)
Parallelogram – input/output (e.g., Read, Print)
Rectangle – a process or computation step
Diamond – a decision (a yes/no branch)
Arrow – a flow line showing the direction of control
Example 1:
Draw a flowchart of an algorithm to add two numbers and display their result.
Algorithm Description:
Read the values of the two numbers (A and B)
Add A and B and assign the sum to C
Display the result (C)
Flowchart:
Start → Read A, B → C = A + B → Print C → End
Example 2:-
Write an algorithm description and draw a flowchart to check whether a number is negative or not.
Algorithm Description:
Read a number x
If x is less than zero, write the message "negative"
Else write the message "not negative"
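The decision step in this algorithm corresponds directly to an if/else branch. A minimal Python sketch (Python and the function name `check_sign` are used here only for illustration):

```python
def check_sign(x):
    # If x is less than zero the message is "negative"; otherwise "not negative".
    if x < 0:
        return "negative"
    return "not negative"

print(check_sign(-3))  # a negative number
print(check_sign(7))   # a non-negative number
```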