The document outlines the Computer Science 1 course at Morsli Abdellah University Center in Algeria, which has transitioned to English instruction as mandated by the Ministry of Higher Education. It covers fundamental concepts of computer science, including hardware, software, algorithms, and programming in Python, with a focus on practical sessions. The course is designed for first-year Bachelor's students in Science and Technology, emphasizing problem-solving and coding skills.


People's Democratic Republic of Algeria

Ministry of Higher Education and Scientific Research


Morsli Abdellah University Center in Tipaza
Institute of Sciences

Course: Computer Science 1

الإعلام الآلي (Computer Science)
GIVEN BY DR. HADJ AMEUR & DR. MANSOURI

YEAR 2023-2024
USAGE OF THE ENGLISH LANGUAGE
• As you may already know, the Ministry of Higher Education and Scientific
Research in Algeria has issued an order for all universities to transition from
teaching in French to English, beginning this academic year.
• In compliance with the ministry's directive, we will be doing both our courses
and practical sessions in English.
WEBSITE FOR SHARING CS1 DOCUMENTS

https://tipazast.wixsite.com/informatique
FACEBOOK PAGE FOR SHARING CS1 INFOS
CONTACT INFORMATION

Module: Computer Science 1


Subject Instructors: Dr. HADJ AMEUR & Dr. MANSOURI
Instructors' Emails: [email protected] & [email protected]
Coefficient: 02
Credits: 04
Total Course Hours: 45 Hours
Weekly Workload: 1h30min of lectures and 1h30 min of practical sessions
Assessment Method: Continuous assessment and final exam
Monitoring Method: Institute of Sciences, University Center of Tipaza
Designed for: First-year students of the Bachelor's degree program in Science and Technology
(ST)
MODULE OBJECTIVE
In this module, we want to show you what a computer is, how it's built, and how
it works. We'll also talk about things called 'algorithms' and 'programming,'
which are like instructions for computers.

This module includes course sessions and practical sessions

Course session : 1h30 Practical session : 1h30


MANAGEMENT NOTES
Attendance:
Attendance is not mandatory; students who do not find the course relevant or engaging
are welcome to request permission to leave if they believe their time would be better
spent elsewhere.
Participation and Questions:
• Students are encouraged to ask questions and actively participate in discussions.
• Please raise your hand before speaking or asking questions to ensure a smooth flow
of the class.
Disturbances:
Respect for one another is a fundamental principle here.
Disciplinary actions will be taken directly against anyone who disrupts the flow of the
course. This includes but is not limited to excessive talking, using electronic devices
unrelated to the course, or any behavior that hinders the learning environment.
GENERAL PRESENTATION OF THE COURSE
The Computer Science 1 course contains two distinct chapters:
1. In the first chapter, we'll start by talking about computers, both the physical stuff
inside them (hardware) and the software they run. We'll learn about how computers
understand binary code, do binary operations, and how they work in general.

2. In the second chapter, we'll begin learning to write computer programs using a
programming language called Python. We'll cover important ideas like using different
types of information, making decisions, repeating actions, and organizing data.

➔ Through a combination of practical exercises and guided instruction, students will
acquire the skills necessary to write code and construct algorithms to address various
computational challenges.

➔All the details of this course are summarized in the concept map.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE

What is Computer Science ?


Computer science is the field that encompasses the design, development,
and use of software and computer hardware for processing and storing
digital data. The most important task in computer science is the writing of
computer programs.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
Why do you need Computer Science ?
Students pursuing degrees in electronic engineering, mechanical engineering, civil engineering, and
chemical engineering (process engineering) can benefit from computer science courses for several
important reasons, for example:

CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
Why do you need Computer Science ?

• Thinking and Problem-Solving: You'll learn how to solve complex problems in a logical manner.

• Programming: You'll learn how to create software, which is important in jobs like automation and control systems.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE

Why do you need Computer Science ?


• Building websites
• Building mobile applications
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE

Evolution of information technology and computers


The history of computing is often divided into four distinct generations, each
marking a significant technological advancement in the development of
computers.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ EVOLUTION OF INFORMATION TECHNOLOGY AND COMPUTERS

1. The first generation of computers (vacuum tubes 1945-1955)


The first computers used delicate glass parts called vacuum tubes. They were big and
heavy, mostly used for math, storing data, and handling information. Even though they
had some problems, these computers were the beginning of technology progress.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ EVOLUTION OF INFORMATION TECHNOLOGY AND COMPUTERS
2. The second generation (transistors 1955-1965)
The second-generation computers used transistors instead of fragile vacuum
tubes. This made them much faster and used less energy. These computers were
smaller, more reliable, and didn't waste as much power.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ EVOLUTION OF INFORMATION TECHNOLOGY AND COMPUTERS

3. The third generation (integrated circuits 1965-1980)


This computer generation was all about something called 'integrated circuits.' They're
tiny electronic parts that have many electronic things inside them. Because of these
integrated circuits, computers during this time were faster and more reliable.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ EVOLUTION OF INFORMATION TECHNOLOGY AND COMPUTERS

4. The fourth generation (microprocessors from 1980 to today)


In the late 1970s, something called 'microprocessors' showed up. These were
special because they put all the parts needed for math, logic, and control on one
tiny chip. Computers using this tech were called 'microcomputers.' They led the
way for even smaller but more powerful computers, changing how we use them
with better features.

[Figure: microprocessors]
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
What are Information Coding Systems?
In the field of computer science, data is stored and manipulated in the form of binary digits 0s and
1s. To facilitate the handling of this data, various numbering systems have been developed, such as
the decimal system, binary system, octal system, and hexadecimal system.

[Figure: numbers (1, 3, 4.5, 60.33, …), text, sound, pictures, and video are all represented as binary digits.]
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ INFORMATION CODING SYSTEMS
Decimal Encoding (base 10)
• In our daily lives, we use a base-10 numerical system (decimal system) composed of
symbols {0,1,2,3,4,5,6,7,8,9}.
• This system relies on the position of symbols, with each symbol holding a different
value based on its position within a number.
For example, the digit 2 in the number 253 has a distinct value compared to that in 523
The digit 2 in 253 represents hundreds, whereas the digit 2 in 523 represents tens. This
positional notation system simplifies numerical representation and calculations.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ INFORMATION CODING SYSTEMS
Binary Encoding (base 2)
Similar to what we've seen with decimal encoding, the binary numbering system uses
only two symbols: 0 and 1 to represent binary numbers.

Decimal (Base 10): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Binary (Base 2): 0, 1
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ INFORMATION CODING SYSTEMS
Binary to decimal transformation
Starting from the right and moving left, the first position
represents 2 to the power of zero (which is 1), the second
position represents 2 to the power of one (which is 2), the
third position represents 2 to the power of two (which is
4), and so on.
For example, the binary number "1001" represents the decimal
number 9
Decimal (Base 10): (253)₁₀ = 3 × 10⁰ + 5 × 10¹ + 2 × 10² = 3 + 50 + 200 = 253
Binary (Base 2): (1101)₂ = 1 × 2⁰ + 0 × 2¹ + 1 × 2² + 1 × 2³ = 1 + 0 + 4 + 8 = 13
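The positional rule above can be sketched in Python, the language used in this course's second chapter. The helper name `binary_to_decimal` is our own, not a standard function:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum digit * 2**position, reading positions from the right (position 0)."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("1101"))  # 13, i.e. 1*2**0 + 0*2**1 + 1*2**2 + 1*2**3
print(binary_to_decimal("1001"))  # 9
```

Python's built-in `int("1101", 2)` performs the same conversion and can be used to check the result.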
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ INFORMATION CODING SYSTEMS
Octal Encoding (base 8)
Similar to what we've seen with decimal and
binary encoding, the octal numbering system
uses 8 symbols: {0, 1, 2, 3, 4, 5, 6, 7} to
represent numbers.

Decimal (Base 10): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Binary (Base 2): 0, 1
Octal (Base 8): 0, 1, 2, 3, 4, 5, 6, 7
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ INFORMATION CODING SYSTEMS
Hexadecimal Encoding (base 16)
Similar to what we've seen with decimal and binary encoding,
the hexadecimal numbering system uses 16 symbols:
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F} to represent
numbers.

Decimal (Base 10): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Hexadecimal (Base 16): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER

Conversion from Any Base to Decimal (X→10)


If we want to convert a number expressed in base X to the decimal base (base 10),
we multiply each digit from that number starting from the right side by the power
of the base X corresponding to its position (position starts from 0) and then add up
all the results to obtain the decimal value of the number.
The explained rule is illustrated in the figure below, where we want to convert a
number that is in base X and contains the digits a𝑛,…., a2, a1, a0 to base 10.
Conversion from Any Base to Decimal (X→10)

(aₙ, …, a₂, a₁, a₀)(X) = ?(10)

(aₙ, …, a₂, a₁, a₀)(X) = a₀ × X⁰ + a₁ × X¹ + a₂ × X² + … + aₙ × Xⁿ
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER

Conversion from Any Base to Decimal (X→10)

(aₙ, …, a₂, a₁, a₀)(X) = a₀ × X⁰ + a₁ × X¹ + a₂ × X² + … + aₙ × Xⁿ

Example:

(1001)(2) = 1 × 2⁰ + 0 × 2¹ + 0 × 2² + 1 × 2³ = 1 + 0 + 0 + 8 = 9
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Decimal Base to Any Base (10→X)

To convert a decimal number (base 10) to any


other base, let's call it base X:
1. Divide the decimal number by the base X. The
result is the quotient, and the remainder is the
first (rightmost) digit in the new base X.
2. Continue this process with the quotient from
the previous step until the quotient becomes 0.
Each time you do this, you'll get another digit in
base X.
3. Write down these digits you found, from the
last one you got to the first one in order, and
you have successfully converted the decimal
number to base X.
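The three steps above can be sketched in Python; the helper name `decimal_to_base` is our own:

```python
def decimal_to_base(n: int, base: int) -> str:
    """Repeatedly divide by the base; the remainders, read from last to first,
    are the digits of the result in base X."""
    if n == 0:
        return "0"
    digits = "0123456789ABCDEF"  # enough symbols for bases up to 16
    result = ""
    while n > 0:
        n, remainder = divmod(n, base)
        result = digits[remainder] + result  # newest digit goes to the left
    return result

print(decimal_to_base(167, 2))  # 10100111
print(decimal_to_base(167, 8))  # 247
```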
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Decimal Base to Any Base (10→X)
Example: Conversion of the decimal number 167 to binary by successive divisions by 2, giving 167(10) = 10100111(2)
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER

Conversion from Any Base to Any Base


To convert a number from base X to base Y, where both bases X
and Y are different from 10, you can follow these steps:
1. Convert the number from Base X to Base 10
2. Convert the resulted number from Base 10 to Base Y

To convert from X → Y
X → 10 and then 10→Y
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Any Base to Any Base
Example: Convert 247 from Octal to Binary

Step 1 (8 → 10): 247(8) = 7 × 8⁰ + 4 × 8¹ + 2 × 8² = 7 + 32 + 128 = 167(10)

Step 2 (10 → 2): successive divisions by 2 give 167(10) = 10100111(2)
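The two-step pivot through base 10 can be written compactly in Python; `convert` is our own helper name, and Python's built-in `int(text, base)` performs the X → 10 step:

```python
def convert(number: str, from_base: int, to_base: int) -> str:
    """Step 1: base X -> decimal (int handles the positional sum).
    Step 2: decimal -> base Y by repeated division."""
    value = int(number, from_base)          # X -> 10
    if value == 0:
        return "0"
    digits = "0123456789ABCDEF"
    out = ""
    while value > 0:
        value, r = divmod(value, to_base)   # 10 -> Y
        out = digits[r] + out
    return out

print(convert("247", 8, 2))  # 10100111
```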
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER

Conversion from Octal Base to Binary Base (8→2)


To convert an octal number to binary, follow these
steps:
1. Write down the octal number you want to convert.
For this example, let's use the octal number 357.
2. Break the octal number into its individual digits: 3,
5, and 7.
3. Convert each octal digit to its 3-bit binary
equivalent using a conversion table:
• 3 in octal = 011 in binary
• 5 in octal = 101 in binary
• 7 in octal = 111 in binary
4. Put the binary equivalents together to form the full
binary number: 357 in octal = 011 101 111 in
binary.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Octal Base to Binary Base (8→2)
Example: Convert the octal number 51 to binary
To convert the octal number 51 to binary,
we replace each octal digit with its binary
equivalent:
• 5 = 101
• 1 = 001
This results in the binary number 101001.
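The digit-by-digit replacement can be sketched in Python; the dictionary `OCT_TO_BIN` stands in for the conversion table mentioned above, and the helper name is ours:

```python
# Each octal digit maps to a fixed 3-bit pattern, e.g. 5 -> "101".
OCT_TO_BIN = {str(d): format(d, "03b") for d in range(8)}

def octal_to_binary(octal: str) -> str:
    """Replace every octal digit with its 3-bit binary equivalent."""
    return "".join(OCT_TO_BIN[d] for d in octal)

print(octal_to_binary("357"))  # 011101111
print(octal_to_binary("51"))   # 101001
```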
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Binary Base to Octal Base (2→8)
To convert a binary number to octal, you group the binary
digits into sets of 3 bits and replace each group with its octal digit.

1. Write down the binary number you want to convert. For


this example, let's use 10010110.

2. Group the binary digits into sets of 3 bits, starting from


the right: 10010110 becomes: 10-010-110

3. Notice the last group on the left side only has 2 bits. We
need to pad it with zeros to make a complete 3-bit group:
010-010-110

4. Replace each 3-bit binary group with its octal equivalent,


based on the binary to octal conversion table.
• 010 in binary = 2 in octal
• 010 in binary = 2 in octal
• 110 in binary = 6 in octal

5. Put the octal equivalents together to form the octal
number: 10010110 in binary = 226 in octal
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Binary Base to Octal Base (2→8)
Example: Convert the binary numbers 101001 and
1011010 to octal

1. To convert the binary number 101001 to octal, we


group the digits in sets of three: 101-001.
2. Replace each 3-bit binary group with its octal
equivalent:
• 101=5 in octal
• 001=1 in octal
➔This results in octal 51.

1. To convert the binary number 1011010 to octal, we


group the digits in sets of three starting from the
right: 1-011-010
2. Replace each 3-bit binary group with its octal
equivalent:
• 001=1 in octal
• 011=3 in octal
• 010=2 in octal
➔This results in octal 132.
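The grouping-and-padding procedure can be sketched in Python; `binary_to_octal` is our own helper name:

```python
def binary_to_octal(bits: str) -> str:
    """Pad on the left to a multiple of 3 bits, then read each
    3-bit group as one octal digit."""
    padded = bits.zfill((len(bits) + 2) // 3 * 3)
    groups = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    return "".join(str(int(g, 2)) for g in groups)

print(binary_to_octal("10010110"))  # 226
print(binary_to_octal("1011010"))   # 132
print(binary_to_octal("101001"))    # 51
```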
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Hexadecimal Base to Binary
Base (16→2)
To convert a hexadecimal number to binary, you need to
break down each hexadecimal digit into its 4-bit binary
equivalent.

1. Write down the hexadecimal number you want to


convert. For this example, let's use the hexadecimal
number A5C.

2. Break the hexadecimal number into its individual digits:


A-5-C

3. Look up the binary equivalent of each hexadecimal digit.


Use a conversion table that we have seen before.
• A in hexadecimal = 1010 in binary
• 5 in hexadecimal = 0101 in binary
• C in hexadecimal = 1100 in binary

4. Put the binary equivalents together to form the full binary


number: A5C in hexadecimal = 1010 0101 1100 in binary
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Hexadecimal Base to Binary Base (16→2)
Example: Convert the hexadecimal number 1A9 to binary

1. Write down the hexadecimal number you want to


convert. For this example, let's use the hexadecimal
number 1A9.

2. Break the hexadecimal number into its individual digits:


1-A-9

3. Look up the binary equivalent of each hexadecimal


digit. Use a conversion table that we have seen before.
• 1 in hexadecimal = 0001 in binary
• A in hexadecimal = 1010 in binary
• 9 in hexadecimal = 1001 in binary

4. Put the binary equivalents together to form the full


binary number: 1A9 in hexadecimal = 0001 1010 1001
in binary
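The 4-bit expansion can be sketched in Python; `hex_to_binary` is our own name, and `format(…, "04b")` plays the role of the conversion table:

```python
def hex_to_binary(hex_str: str) -> str:
    """Replace each hexadecimal digit with its fixed 4-bit pattern."""
    return "".join(format(int(d, 16), "04b") for d in hex_str)

print(hex_to_binary("A5C"))  # 101001011100
print(hex_to_binary("1A9"))  # 000110101001
```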
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Binary to Hexadecimal Base
(2→16)
To convert a binary number to hexadecimal, follow these
steps:
1. Write down the binary number you want to convert. For
this example, let's use 110110101.

2. Group the binary digits into sets of 4 bits, starting from


the right: 110110101 becomes: 1-1011-0101

3. Notice the last group on the left side only has 1 bit. We
need to pad it with zeros to make a complete 4-bit group:
0001-1011-0101

4. Now replace each group with its hexadecimal equivalent:


• 0001 in binary = 1 in hexadecimal
• 1011 in binary = B in hexadecimal
• 0101 in binary = 5 in hexadecimal

5. Put the hexadecimal digits together: 110110101 in binary


= 1B5 in hexadecimal
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ CONVERSION OF NUMBERS FROM ONE CODING SYSTEM TO ANOTHER
Conversion from Binary to Hexadecimal Base (2➔16)
Example: Convert the binary number 110101001 to hexadecimal

1. Write down the binary number you want to convert:


110101001

2. Group the binary digits into sets of 4 bits, starting from


the right: 110101001 becomes: 1-1010-1001

3. Notice the last group on the left side only has 1 bit. We
need to pad it with zeros to make a complete 4-bit
group: 0001-1010-1001

4. Now replace each group with its hexadecimal


equivalent:
• 0001 in binary = 1 in hexadecimal
• 1010 in binary = A in hexadecimal
• 1001 in binary = 9 in hexadecimal

5. Put the hexadecimal digits together: 110101001 in


binary = 1A9 in hexadecimal
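The reverse direction, grouping into 4-bit sets with left-padding, can be sketched the same way; `binary_to_hex` is our own name:

```python
def binary_to_hex(bits: str) -> str:
    """Pad on the left to a multiple of 4 bits, then map each
    4-bit group to one hexadecimal digit."""
    padded = bits.zfill((len(bits) + 3) // 4 * 4)
    return "".join(format(int(padded[i:i + 4], 2), "X")
                   for i in range(0, len(padded), 4))

print(binary_to_hex("110110101"))  # 1B5
print(binary_to_hex("110101001"))  # 1A9
```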
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ OPERATIONS BETWEEN BINARY NUMBERS

In addition to methods of conversion between different numerical bases, it's


also important to understand how to perform basic arithmetic operations with
binary numbers. Here, we will explore basic arithmetic operations in binary,
including addition, subtraction, multiplication, and division. Understanding
these operations is crucial for efficiently manipulating binary data in
computing.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ OPERATIONS BETWEEN BINARY NUMBERS
Binary Addition
Binary addition works similarly to decimal addition, but in this case, we only use the
binary digits 0 and 1. To add two binary numbers, we use the carry-over method.

[Figure: the carry-over method illustrated with a decimal addition]
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ OPERATIONS BETWEEN BINARY NUMBERS
Binary Addition
Example: 1111(2) + 111(2) = 10110(2), that is, 15(10) + 7(10) = 22(10). Working column by
column from the right, 1 + 1 = 10(2): we write 0 and carry 1 to the next column.
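The carry-over method can be sketched column by column in Python; `binary_add` is our own helper name:

```python
def binary_add(a: str, b: str) -> str:
    """Add two bit strings column by column from the right, propagating a carry."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # align the columns
    result, carry = "", 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result = str(total % 2) + result   # written digit
        carry = total // 2                 # carried digit
    if carry:
        result = "1" + result
    return result

print(binary_add("1111", "111"))  # 10110 (15 + 7 = 22)
```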
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ OPERATIONS BETWEEN BINARY NUMBERS
Binary Subtraction
Binary subtraction uses the same process as decimal subtraction, except it is done with
only 0s and 1s. We start by writing the two binary numbers one above the other, with the
number being subtracted on the bottom.

1 10 10 +10 1 10 10
+10

- 1 99 -
11 1
+1 +1
1 1

0 01 0 0 1
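The borrowing method can be sketched in Python, assuming the first number is not smaller than the second; `binary_subtract` is our own helper name:

```python
def binary_subtract(a: str, b: str) -> str:
    """Subtract b from a column by column from the right,
    borrowing 1 from the next column when needed (assumes a >= b)."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, borrow = "", 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        diff = int(bit_a) - int(bit_b) - borrow
        if diff < 0:
            diff += 2    # borrow 2 (one unit of the next column)
            borrow = 1
        else:
            borrow = 0
        result = str(diff) + result
    return result.lstrip("0") or "0"

print(binary_subtract("1010", "111"))  # 11 (10 - 7 = 3)
```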
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ OPERATIONS BETWEEN BINARY NUMBERS
Binary Multiplication

Binary multiplication uses the same process as decimal multiplication, but


only with 0s and 1s. We use the shift-and-add method:
• Write the two binary numbers to multiply one above the other, with the
multiplicand on top and multiplier below.
• Start from the rightmost column of the multiplier.
• If the bit is 1, write down the multiplicand as is. If it is 0, write down
zeros.
• For subsequent bits, shift everything you have written so far by one
column to the left. Then if the bit is 1, add a copy of the multiplicand. If
it is 0, add zeros.

So binary multiplication uses the shift-and-add method, shifting previous


results left and selectively adding the multiplicand based on the multiplier
bits.
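The shift-and-add method above can be sketched in Python; `binary_multiply` is our own helper name, and the left shift `<<` implements the column shift:

```python
def binary_multiply(a: str, b: str) -> str:
    """Shift-and-add: for each 1-bit of the multiplier b (from the right),
    add a copy of the multiplicand a shifted left by that bit's position."""
    total = 0
    for shift, bit in enumerate(reversed(b)):
        if bit == "1":
            total += int(a, 2) << shift  # shifted copy of the multiplicand
    return bin(total)[2:]

print(binary_multiply("101", "11"))  # 1111 (5 * 3 = 15)
```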
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ OPERATIONS BETWEEN BINARY NUMBERS
Binary Division
Binary division uses the same process as decimal division, but only with 0s and 1s. We use
the subtraction-and-shift method:
• Write the dividend on top and the divisor below, aligned on the right.
• Check whether the divisor fits into the leading bits of the dividend. If yes, subtract it
and write 1 as the next quotient bit; if not, write 0.
• Shift the divisor and partial remainder left by one column and bring down the next
digit from the dividend.
• Repeat the last two steps until there are no more digits left in the dividend.

Example: 10101(2) ÷ 100(2) = 101(2) remainder 1(2), that is, 21(10) ÷ 4(10) = 5(10) remainder 1.
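The subtraction-and-shift steps can be sketched in Python; `binary_divide` is our own name and returns the quotient and remainder as bit strings:

```python
def binary_divide(dividend: str, divisor: str) -> tuple[str, str]:
    """Long division: bring down one bit at a time and subtract the
    divisor whenever it fits, emitting one quotient bit per step."""
    d = int(divisor, 2)
    quotient, remainder = "", 0
    for bit in dividend:
        remainder = (remainder << 1) | int(bit)  # bring down the next bit
        if remainder >= d:
            remainder -= d
            quotient += "1"
        else:
            quotient += "0"
    return quotient.lstrip("0") or "0", bin(remainder)[2:]

print(binary_divide("10101", "100"))  # ('101', '1'): 21 // 4 = 5 remainder 1
```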
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
What is a computer ?

A computer is an electronic device that can perform various tasks by executing a set of instructions
or programs. It processes data, performs calculations, stores and retrieves information, and
communicates with other devices. Computers come in various forms, from standard desktop
personal computers (PCs) to mobile devices like laptops, tablets, smartphones, and more, and they are
widely used for a wide range of applications in business, education, entertainment, research, and many
other fields. The basic principles of how a computer operates are generally the same whatever its
purpose is.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer

But how do they work?


The figure presented below provides an overview of the
general structure of a computer and its main components,
which are essential for the proper functioning of the
computer.
It comprises three fundamental components: the
processor (CPU or central processing unit), the main
memory, and input/output devices. The processor is located
at the center of the structure, with the other components
interacting with it. Indeed, the processor serves as the
computer's brain, responsible for executing instructions and
coordinating the operations of all other components.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
Central Processing Unit (CPU)
"Central Processing Unit" (CPU) refers to the primary hardware component of a
computer that carries out instructions and performs calculations in a computer
program. It is often considered the "brain" of the computer because it executes most
of the actual computations and controls the other hardware components (such as
input and output data (I/O)).

•Arithmetic Logic Unit (ALU): The ALU is responsible for performing arithmetic
and logical operations. It can perform addition, subtraction, multiplication, division,
and logical operations like AND, OR, and NOT. The ALU's results are used for data
processing.
•Control Unit: The Control Unit manages and coordinates the operations of various
CPU components. It fetches instructions from memory, decodes them, and controls
the execution of those instructions. It ensures that instructions are executed in the
correct sequence.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
The main memory
Main memory, known as RAM, is vital for a computer. It's fast, temporary storage that the CPU
uses for active tasks. Unlike non-volatile ROM, which holds permanent data and boot instructions,
RAM is volatile and loses its content when the computer restarts. It significantly boosts computer
speed and responsiveness by facilitating quick data access for running programs.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer

Input/Output devices

Input/Output (I/O) devices are hardware components that


allow a computer to interact with the outside world, either by
receiving data from external sources (inputs) or by sending data
to external devices (outputs). These devices facilitate
communication and data transfer between the computer and its
users, as well as with other devices or systems.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
Input/Output devices

Computer devices fall into three main types: input, output, and storage.
• Input Devices: Collect data for the computer, e.g., keyboard, mouse, scanner.
• Output Devices: Display information for users, e.g., screen, printer, speakers.
• Storage Devices: Store data permanently, e.g., hard drives, SSDs, external drives.

This classification clarifies the roles of these devices in gathering, presenting, and preserving data
in the computer system.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
Input/Output devices

Storage devices vary in types, each with unique features:


• Hard Disk Drives (HDDs): Traditional, slower, high-capacity
devices with spinning disks.
• Solid-State Drives (SSDs): Faster, durable, energy-efficient
NAND flash memory storage.
• USB Flash Drives: Small, portable flash memory devices
connecting via USB ports.
• Memory Cards: Removable flash memory cards for cameras,
smartphones, etc.
• CDs and DVDs: Optical discs for distributing software, media,
read-only or read-write.

Choosing the right device relies on factors such as storage


capacity, speed, durability, and portability, which should
match your specific needs. In many cases, a combination of
these devices can fulfill different storage needs.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
The speed of a CPU
The speed of a CPU, often referred to as its "clock speed" or "clock
frequency," is measured in Hertz (Hz) and represents how many
operations the CPU can execute in one second. Commonly, you'll see
CPU speeds expressed in gigahertz (GHz), where 1 GHz is equal to
one billion hertz.
Converting between CPU speed units can be done using
the following conversions:

1 Hertz (Hz) = 1 operation per second
1 Kilohertz (kHz) = 1,000 Hertz
1 Megahertz (MHz) = 1,000,000 Hertz
1 Gigahertz (GHz) = 1,000,000,000 Hertz
1 Terahertz (THz) = 1,000,000,000,000 Hertz
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
The speed of a CPU
[Figure: the unit ladder Hz → kHz → MHz → GHz → THz. Moving from a larger unit to a
smaller one multiplies by 1000 (10³); moving from a smaller unit to a larger one
multiplies by 1/1000 (10⁻³).]

Examples:
35 THz = 35 × 10⁶ MHz
4.5 MHz = 4.5 × 10⁶ Hz
1000 GHz = 1000 × 10⁶ kHz = 10⁹ kHz
20 Hz = 20 × 10⁻⁶ MHz = 2 × 10⁻⁵ MHz
4.5 MHz = 4.5 × 10⁻³ GHz
1000 kHz = 1000 × 10⁻⁹ THz = 10⁻⁶ THz
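These conversions amount to multiplying or dividing by powers of 10, which a small Python sketch makes mechanical; the table and function names are our own:

```python
# Each unit's value expressed in Hz; dividing by the target unit's
# factor converts in the other direction.
FACTORS = {"Hz": 1, "kHz": 10**3, "MHz": 10**6, "GHz": 10**9, "THz": 10**12}

def convert_freq(value: float, src: str, dst: str) -> float:
    """Convert a frequency between units by going through Hz."""
    return value * FACTORS[src] / FACTORS[dst]

print(convert_freq(35, "THz", "MHz"))  # 35000000.0, i.e. 35 * 10**6 MHz
print(convert_freq(4.5, "MHz", "Hz"))  # 4500000.0
print(convert_freq(20, "Hz", "MHz"))   # 2e-05
```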
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
Mass storage units
Mass Storage Units refer to devices or technologies used for the storage capacity of hard
drives, USB drives, memory cards etc. These units are designed to store significant amounts of
information.
As we have seen earlier, the most basic storage unit is the bit, which can take values of 0 or 1.
There is also the byte (octet in French), which is equivalent to 8 bits.

[Figure: a byte, e.g. 01011110, is a group of 8 bits.]

Example:
Question 1: 4 bits = ? Bytes. Solution: 4 bits = 4/8 Byte = ½ Byte
Question 2: 4 octets = ? bits. Solution: 4 octets = 4 × 8 bits = 32 bits
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES
The general structure of a computer
Mass storage units
When dealing with mass storage units and data sizes, you'll often encounter the following units:
➢ 1 Byte = 8 bits = 2³ bits
➢ 1 Kilobyte (KB) = 2¹⁰ Bytes
➢ 1 Megabyte (MB) = 2¹⁰ KB
➢ 1 Gigabyte (GB) = 2¹⁰ MB
➢ 1 Terabyte (TB) = 2¹⁰ GB

[Figure: the unit ladder Byte → KB → MB → GB → TB. Moving from a smaller unit to a
larger one divides by 1024 (× 2⁻¹⁰); moving the other way multiplies by 1024 (× 2¹⁰).]

Examples:
10 MB = 10 × 2¹⁰ × 2¹⁰ = 10 × 2²⁰ Bytes
2.5 KB = 2.5 × 2⁻¹⁰ × 2⁻¹⁰ = 2.5 × 2⁻²⁰ GB
8 GB = 2³ × 2⁻¹⁰ = 2⁻⁷ TB
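Like the frequency units, these conversions are just powers of 2; a small Python sketch (table and helper names are ours):

```python
# Exponent of 2 relative to 1 Byte: KB = 2**10 Bytes, MB = 2**20 Bytes, etc.
UNITS = {"B": 0, "KB": 10, "MB": 20, "GB": 30, "TB": 40}

def convert_size(value: float, src: str, dst: str) -> float:
    """Convert a storage size between units by shifting powers of 2."""
    return value * 2 ** (UNITS[src] - UNITS[dst])

print(convert_size(10, "MB", "B"))  # 10485760, i.e. 10 * 2**20 Bytes
print(convert_size(8, "GB", "TB"))  # 0.0078125, i.e. 2**-7 TB
```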
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES

What is Computer Hardware?


Computer hardware refers to the physical components of a computer system, including the central
processing unit (CPU), memory, storage devices, input and output devices, and all other tangible parts that
make up the computer. These components work together to process data and execute instructions in a
computer.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES

What is Computer Software?


Computer software encompasses a set of programs and instructions that
enable a computer's hardware to perform various tasks and functions,
controlling and coordinating its operations. It's the digital component that
tells the physical computer what to do.
Here are some common kinds of computer software:

1- Operating Systems (OS)


An operating system (OS) is system software that
manages computer hardware and provides services
for software applications. It acts as an intermediary
between users and the computer hardware,
facilitating the execution of programs and the
management of hardware resources.
CHAPTER 1: INTRODUCTION TO COMPUTER SCIENCE
➔ COMPUTER OPERATING PRINCIPLES

2- Application Software

Application software, often called "apps," refers to programs designed for specific
tasks or applications, such as word processing, spreadsheet analysis, web browsing, or
gaming. These applications enable users to perform various functions on computers
and devices.

3- Programming Languages

Programming languages are formal systems used to communicate instructions to
computers and create software. They have specific syntax and rules for writing code
that performs various tasks, from web development to data analysis.
