Gnagster Harikere... Information Security
Usakere wanguu
Assignment 1
1. CIA Triad:
a. The CIA Triad is a model designed to guide policies for information security within an organization.
Confidentiality (C):
This is a set of rules that limits access to information, ensuring that sensitive data is only
accessible to authorized individuals.
Integrity (I):
This refers to the assurance that the information is trustworthy and accurate, hence data is
protected from deletion or modification from any unauthorized party.
Availability (A):
This is the guarantee of reliable and timely access to information for authorized users, maintained through measures such as hardware upkeep, redundancy, and disaster recovery.
A related model is the Biba integrity model, which focuses on maintaining data integrity by preventing unauthorized users from modifying data. It operates on the principle of "No write up, No read down," ensuring that users cannot write to a higher integrity level or read from a lower integrity level, thus protecting the integrity of the data.
i. Banking Systems:
Software Eng Class Onlyyyy !!!!!!!!!!! Usakere wanguu
Two-factor authentication (a debit card together with a PIN code) provides
confidentiality by verifying the user before authorizing access to sensitive data.
The ATM and bank software ensure data integrity by maintaining records of all transfers
and withdrawals made via the ATM in the user's bank account.
The ATM provides availability, as it is intended for public use and is accessible at any time.
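The integrity property above can be sketched with a hash-based tamper check; the transaction record and its fields are invented for illustration, not a real bank format:

```python
import hashlib

def fingerprint(record: str) -> str:
    """Return a SHA-256 digest that acts as a tamper-evidence seal."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

# A withdrawal record as it was written by the ATM software (illustrative).
original = "2024-05-01 10:32 withdraw 200.00 acct=12345678"
seal = fingerprint(original)

# Later, verify the record has not been modified.
assert fingerprint(original) == seal          # unchanged record -> seal matches

tampered = original.replace("200.00", "900.00")
assert fingerprint(tampered) != seal          # any modification changes the digest
```

Comparing stored digests against freshly computed ones is one simple way software can detect unauthorized modification.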
2. Principles of Security:
a. This principle states that no matter what you do, who you are, or how much you spend, you will never have a 100% secure environment.
b. The Principle of Least Privilege enhances organizational security by ensuring that users have
only the minimum level of access necessary to perform their job functions. By ensuring that
accounts have only the privileges necessary to do their job, you ensure that if an attacker
compromises an account, you minimize what information they can access. This limits the
damage of the attack.
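The principle of least privilege can be sketched as a deny-by-default permission check; the role and action names below are hypothetical, invented purely for illustration:

```python
# Hypothetical role-to-privilege table: each account gets only what its job needs.
PRIVILEGES = {
    "teller":  {"view_balance", "process_deposit"},
    "auditor": {"view_balance", "read_logs"},
    "admin":   {"view_balance", "process_deposit", "read_logs", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: an unknown role or an unlisted action is refused."""
    return action in PRIVILEGES.get(role, set())

assert is_allowed("teller", "process_deposit")
# A compromised teller account cannot reach user management:
assert not is_allowed("teller", "manage_users")
```

Because every action not explicitly granted is denied, compromising one account exposes only that account's narrow slice of the system.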
A fence or wall would be the first line of defence, with only one or two entry points
that are well guarded and protected by security measures such as crash barriers.
The second line of defence would be a buffer zone between the fence and the main
complex. This zone should be free of obstructions and should be under 24x7
surveillance.
The third line of defence would be the walls of the building itself, with no windows,
single-opening fire doors, alarms, etc.
High security inside the building is the fourth line of defence, with a third level of
authorisation. No food or liquid should be allowed.
The fifth line of defence could be a further authorisation step for the server floors,
which are segmented, with access granted to very few people.
b. Keeping a low profile for a data center is important because it reduces the likelihood of
attracting attention from potential attackers. A discreet location and minimal signage help prevent
unauthorized individuals from targeting the facility, thereby enhancing overall security. This
approach is typically underpinned by robust physical security, including access control and
surveillance, that deters unauthorized entry. A low-profile strategy also shields sensitive
data and critical infrastructure from environmental threats and supports operational
resiliency. In short, a low profile substantially reinforces the security posture of a data
center and protects key organizational assets.
4. Importance of Logs:
Logs are essential to a network because they give the ability to troubleshoot, secure, investigate,
or debug problems that arise in the system. Logs record messages and the times of events
occurring on the system. They can also identify system problems that could result in server downtime.
From a security point of view, the purpose of a log is to act as a red flag when something bad is
happening. Reviewing logs regularly can help identify malicious attacks on the system.
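Such a review can be partly automated. A short sketch, assuming a simplified sshd-style log format (the log lines and IP addresses below are invented for illustration):

```python
import re
from collections import Counter

# Toy log lines in a simplified sshd-like format (illustrative, not real output).
LOG = """\
Jan 10 10:00:01 sshd: Failed password for root from 203.0.113.7
Jan 10 10:00:03 sshd: Failed password for root from 203.0.113.7
Jan 10 10:00:05 sshd: Accepted password for alice from 198.51.100.2
Jan 10 10:00:06 sshd: Failed password for admin from 203.0.113.7
"""

# Count failed login attempts per source IP.
failed = Counter()
for line in LOG.splitlines():
    m = re.search(r"Failed password for \S+ from (\S+)", line)
    if m:
        failed[m.group(1)] += 1

# Repeated failures from one source are a red flag worth investigating.
suspicious = {ip for ip, n in failed.items() if n >= 3}
assert suspicious == {"203.0.113.7"}
```

Real deployments would feed such checks from a log aggregator and alert on the result rather than assert.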
1. Where to Log
The location and media on which logs will reside must be decided. Logs can be stored as files on
local file systems, on SAN or NAS storage, or as rows in relational, transactional, or NoSQL databases.
Moreover, this storage can be on-premises or in cloud networks.
2. What to Log
Events which need to be logged are dependent on each organization. Some events which
can be logged are: Password changes; Unauthorized logins; Door access (Both entry and
exit); Server access (SSH, Database, etc); Data center environmental metrics
(temperature, humidity). Saved logs need to be analyzed so logging correct parameters is
important.
3. How Long to Keep Logs
The retention period for logs should be based on regulatory, operational, and business
requirements. Some industries, like finance and healthcare, have specific log-retention
obligations under regulations such as GDPR and HIPAA. Logs that are crucial to historical analysis,
audits, or investigations may need to be retained longer. Long-term storage is
expensive, so a balance needs to be struck between the cost of storage and the ease of
access to logs.
Assignment 2
1. a. Discuss the role of the OSI model in securing communications and highlight its
importance. (10 marks)
The OSI (Open Systems Interconnection) model is a conceptual framework used to understand
and implement networking protocols in seven layers. Each layer has specific functions and
protocols that contribute to the overall network communication process. Below are the roles of
the OSI model in securing communications and its importance:
Physical layer:
This is the lowest layer of the OSI model, which relates to the physical aspects such as cabling
and infrastructure used for networks to communicate. This layer is about the bit stream, which
can be in the form of electrical, light, or radio impulses. Two common threats associated with
this level are Denial-of-Service attacks and Data Duplication. Denial-of-Service attacks occur
when physical disconnection or cutting of cables disrupts communication, and Data Duplication
involves tapping into network cables to intercept and read the data stream without detection.
To minimize these threats, you should implement multiple circuits and utilize concealed cabling,
as this can significantly enhance security and prevent unauthorized access.
Data Link Layer:
This layer of the OSI model handles frames, which consist of bits and contain source and
destination addresses known as MAC (Media Access Control) addresses. The Data Link layer is
responsible for fundamental functions like managing flow, handling errors, and preventing
switching loops via the Spanning Tree Protocol. Common threats associated with this layer include ARP Spoofing, DHCP
Starvation Attacks, Spanning Tree Protocol Attacks, VLAN attacks, Cisco Discovery Protocol
Attacks, MAC flooding, and Content Addressable Memory (CAM) Table Overflow Attacks.
Network Layer:
This layer is responsible for transferring data among different networks: it routes each
packet to the appropriate destination based on the logical (IP) addresses the packet
carries. However, this layer is
susceptible to significant attacks, especially ICMP DDoS attacks, such as the Smurf attack (an
amplified ping flood) and standard ping floods that inundate the network with ICMP packets.
Additionally, attackers can exploit routing protocols to execute Man-in-the-Middle (MitM)
attacks, posing substantial threats to network security and data integrity.
Transport Layer
This layer delivers data and provides error checking, flow control, and sequencing of data packets. It is
responsible for end-to-end communication between two devices, taking data from the Session
layer and breaking it into smaller units called segments before passing them to the Network
layer. The protocols used in this layer are TCP and UDP. The most common
attack on this layer is the SYN flood, a DDoS attack that exploits poorly handled exceptions
in the TCP stack by leaving many connections half-open. A common mitigation is an
Intrusion Prevention System (IPS), which detects and prevents such attacks at the transport
layer. The IPS also monitors network traffic for suspicious activity and can automatically act to
block or mitigate threats.
Session Layer
This layer is responsible for creating communication channels, called sessions, between different
devices; its role is to establish, manage, and terminate connections between two hosts. It is
also responsible for authentication and reconnection, and it can set checkpoints during a data
transfer. The most dangerous threat to this layer is session hijacking, where an
attacker takes over a live session and acts as a fake client to receive data from the server.
It can be prevented by encrypting all data using Transport Layer Security (TLS), thus
making it difficult for attackers to tamper with the data.
Presentation Layer
This layer is responsible for preparing data so that it can be used by the application layer,
making the data presentable for applications to consume. It formats and encrypts data
before sending it to lower layers and is also called the syntax layer. The presentation layer
handles translation, encryption, and compression of data and is critical for the security of
applications: it acts as a barrier protecting data integrity and confidentiality. A serious
attack on this layer is the breaching of TLS (Transport Layer Security), which can compromise
the encryption and protection mechanisms intended to secure data during transmission.
Application Layer
This is the only layer that directly interacts with data from the user. Software applications like
web browsers and email clients rely on the application layer to initiate communications and
technologies used in this layer are HTTP (Hypertext Transfer Protocol), FTP (File Transfer
Protocol), SMTP (Simple Mail Transfer Protocol), DNS (Domain Name System), NFS (Network
File System), NTP (Network Time Protocol). Some of the attacks are HTTP floods and cache-
busting which degrade the performance of data transmission. Ensuring security at this layer is
essential for protecting user data and maintaining the integrity of application communications.
Standardization
The OSI model provides a standardized framework that guides the development and
implementation of security protocols and technologies. This ensures interoperability and
compatibility among different systems and devices, facilitating secure communication across
networks.
Vulnerability Identification
Security professionals can identify potential vulnerabilities at each layer and implement
adequate countermeasures. This helps to minimize the risk of attacks and data breaches.
Flexibility and Scalability
The OSI model's 7-layered structure allows for flexible and scalable security solutions.
Organizations can build and upgrade security measures at specific layers without disrupting the
whole network.
Security
By addressing security at each layer, the OSI model ensures a comprehensive security posture.
This layered defence approach makes it harder for attackers to exploit vulnerabilities, as they
would need to penetrate multiple layers of security.
b) Explain common attacks on the Physical and Data Link layers and methods to secure these
layers.
Data Link Layer attacks:
MAC Address Spoofing- Impersonating another device by using its MAC address.
Switch Attacks- Exploiting switch vulnerabilities to gain unauthorized access or disrupt
network traffic.
MAC Address Flooding- Overwhelming a switch with fake MAC addresses to cause a denial
of service.
Methods to secure these layers:
Physical Barriers- Use locks, access control systems, and cameras to protect network
hardware.
Wireless Security- Use strong encryption protocols like WPA3 for Wi-Fi networks and ensure
proper configuration of access points.
Environmental Controls: Implement fire suppression systems, climate control, and power
backup systems to protect against environmental threats.
Port Security- Implement port security features on switches to limit the number of MAC
addresses allowed on a port.
The evolution of firewalls involves first generation firewalls such as Packet Filtering Firewall,
second generation firewalls such as Stateful Inspection Firewall, third generation firewalls such
as Application Layer Firewall and Next Generation Firewalls which are explained further below:
Packet Filtering Firewalls:
The earliest form of firewall, these inspect packets at the network layer and allow or deny traffic
based on source and destination IP addresses and ports. While simple, they lack the ability to
understand higher-level applications.
Stateful Inspection Firewalls:
Introduced in the 1990s, these firewalls maintain a state table to track active connections. They
provide better security than packet filtering by allowing only packets that are part of an
established connection.
Application Layer Firewalls:
These firewalls operate at the application layer, inspecting the payload of packets. They can
filter traffic based on specific applications, providing a higher level of security.
Next-Generation Firewalls:
These combine traditional firewall capabilities with advanced features such as intrusion
prevention, deep packet inspection, and application awareness. NGFWs can identify and block
sophisticated threats based on behaviour rather than just signatures.
Cloud-based Firewalls:
Due to the rise of cloud computing, firewalls have also evolved to operate in cloud environments,
providing scalable security solutions for distributed networks.
In conclusion, firewalls serve as the first line of defence against unauthorized access and attacks.
They enforce security policies and help protect sensitive data. By controlling traffic flow, firewalls
reduce the risk of malware infections and data breaches.
Firewall evolution reflects the increasing complexity of network threats and the need for more
sophisticated security measures.
3. a) Detail session hijacking at the Session Layer and its mitigation techniques.
Session hijacking is a type of cyber-attack where an attacker takes control of an existing session
between two systems. This allows the attacker to intercept and manipulate data, steal sensitive
information, or even impersonate legitimate users. Once an attacker gains control of a session,
they can perform actions as if they were the legitimate user, potentially leading to data
breaches and unauthorized transactions.
Session Token Interception: The attacker captures a session token by sniffing network traffic
or exploiting vulnerabilities.
Session Token Prediction: The attacker predicts the session token using weak algorithms or
patterns.
Session Token Fixation: The attacker sets a predetermined session token for the victim, which
the attacker can later use.
The consequences include unauthorized access to user accounts or sensitive data, and
impersonation of the victim to perform malicious actions.
Mitigation Techniques
Network Segmentation- Divide the network into smaller segments to limit the impact of a
successful attack.
Firewall Configuration- Configure firewalls to block unauthorized access to network
resources.
Patch Management- Keep all systems and software up-to-date with the latest security
patches.
Employee Training- Educate employees about the risks of social engineering attacks and
how to recognize phishing attempts.
Intrusion Detection Systems (IDS) - Monitor network traffic for signs of suspicious activity,
such as unusual login attempts or abnormal data transfers.
HTTPS- Use HTTPS to encrypt communication between the client and server, making it more
difficult for attackers to intercept session IDs.
Strong Authentication- Implement strong authentication mechanisms like multi-factor
authentication (MFA) to make it harder for attackers to gain access to accounts.
Secure Session Initiation- Use secure protocols like TLS to establish encrypted sessions.
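Two of the mitigations above, unpredictable session tokens (which defeat token prediction) and careful token comparison, can be sketched with Python's standard library:

```python
import hmac
import secrets

def new_session_token() -> str:
    """Generate an unpredictable session token (mitigates token prediction)."""
    return secrets.token_urlsafe(32)   # 32 random bytes, URL-safe encoding

def token_matches(stored: str, presented: str) -> bool:
    """Compare tokens in constant time (avoids timing side channels)."""
    return hmac.compare_digest(stored, presented)

token = new_session_token()
assert token_matches(token, token)
assert not token_matches(token, new_session_token())
assert len(token) >= 32   # plenty of entropy; infeasible to guess
```

A server would additionally issue a fresh token after login (defeating session fixation) and transmit it only over TLS with the `Secure` and `HttpOnly` cookie flags.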
3. b. Explain how HTTP floods at the Application Layer disrupt services and methods to prevent them.
An HTTP flood attack is a type of volumetric distributed denial-of-service (DDoS) attack designed
to overwhelm a targeted server with HTTP requests. Once the target has been saturated with
requests and is unable to respond to normal traffic, denial-of-service will occur for additional
requests from actual users.
How it works
When an HTTP client like a web browser “talks” to an application or server, it sends an HTTP
request – generally one of two types of requests: GET or POST. A GET request is used to retrieve
standard, static content like images while POST requests are used to access dynamically
generated resources.
The attack is most effective when it forces the server or application to allocate the maximum
resources possible in response to each single request, leading to resource exhaustion and, in
turn, slow performance or complete service outages. The perpetrator will therefore generally
aim to overpower the server or application with multiple requests that are each as processing-
intensive as possible, slowing and delaying the service and degrading the user experience.
For this reason, HTTP flood attacks using POST requests tend to be the most resource-effective
from the attacker’s perspective; as POST requests may include parameters that trigger complex
server-side processing. On the other hand, HTTP GET-based attacks are simpler to create, and
can more effectively scale in a botnet scenario.
HTTP GET attack - in this form of attack, multiple computers or other devices are coordinated
to send multiple requests for images, files, or some other asset from a targeted server. When
the target is inundated with incoming requests and responses, denial-of-service will occur to
additional requests from legitimate traffic sources.
HTTP POST attack - typically when a form is submitted on a website, the server must handle
the incoming request and push the data into a persistence layer, most often a database. The
process of handling the form data and running the necessary database commands is
relatively intensive compared to the amount of processing power and bandwidth required
to send the POST request. This attack exploits that disparity in relative resource consumption
by sending many POST requests directly to a targeted server until its capacity is saturated
and denial-of-service occurs.
Traffic profiling.
By monitoring traffic and comparing IP addresses with data from an IP reputation database,
security teams can track and block abnormal activity that may be part of an HTTP flood attack.
Web application firewalls (WAFs).
WAFs deploy various techniques, such as CAPTCHA and crypto challenges, to detect HTTP flood
attacks.
Load balancers.
Load balancers may offer buffering and multiple connection management techniques that
prevent HTTP GET and POST requests from impacting web server resources.
Cloud-based DDoS protection.
Deploying a cloud-based service for DDoS protection can provide access to tools that identify
suspicious activity and respond quickly.
Capacity scaling.
By increasing the number of concurrent HTTP connections that can be processed, organizations
may reduce their vulnerability to HTTP flood attacks.
Rate limiting.
Restricting the number of incoming requests from a given IP address may prevent DDoS attacks.
However, standard rate-based detection may be ineffective at detecting HTTP floods, since the
volume of traffic may not exceed the assumed threshold.
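Per-IP rate limiting of this kind is often implemented as a token bucket. A minimal sketch (the rate, capacity, and IP address are illustrative):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second per client IP, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = {}   # ip -> remaining tokens
        self.last = {}     # ip -> time of last request

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.last.get(ip, now)
        self.last[ip] = now
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens[ip] = min(self.capacity,
                              self.tokens.get(ip, self.capacity) + elapsed * self.rate)
        if self.tokens[ip] >= 1:
            self.tokens[ip] -= 1
            return True
        return False

limiter = TokenBucket(rate=1.0, capacity=3)
# A burst of 5 requests in the same instant: the first 3 pass, the rest are dropped.
results = [limiter.allow("203.0.113.7", now=100.0) for _ in range(5)]
assert results == [True, True, True, False, False]
# 1.5 seconds later enough tokens have refilled for one more request.
assert limiter.allow("203.0.113.7", now=101.5)
```

As the section notes, such a limiter catches bursty floods but not a slow distributed flood that stays under the per-IP threshold, which is why it is combined with WAFs and traffic profiling.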
A Virtual Local Area Network (VLAN) is a network configuration that partitions a physical network
into multiple logical networks. Devices within a VLAN communicate as if they were on the same
physical network, regardless of their physical location. VLANs operate at the data link layer of
the OSI model. A VLAN allows the creation of multiple, isolated broadcast domains within a
single switch or across multiple switches. Devices in different VLANs require a router or Layer 3
switch to communicate.
Traffic Segmentation- VLANs segregate network traffic by isolating devices into different
groups, ensuring that traffic from one VLAN cannot reach another VLAN without routing. For
example, isolating the finance, HR, and IT departments within their own VLANs secures sensitive data.
Access Control- Access to sensitive resources can be restricted based on VLAN assignments;
unauthorized devices are prevented from communicating with VLAN-protected systems.
Guest Network Security- VLANs can create isolated environments for guest users or IoT
devices, preventing access to internal network resources.
VLAN Access Policies- Policies can be enforced based on VLANs to allow or deny traffic to
certain resources.
Improved Threat Isolation- Infected or compromised devices in one VLAN cannot easily
affect devices in another VLAN.
Advantages of VLANs:
Security – In a flat network, if one system is compromised, all other machines can be attacked
easily; VLANs contain the damage.
Redundancy – Flat networks offer less network redundancy.
Performance – Fewer broadcast messages are sent because each VLAN contains fewer machines,
so bandwidth is better utilised.
Scalability – Adding more IP addresses becomes easy in a VLAN.
Implementation of VLAN:
VLANs are typically implemented using network switches that support VLAN tagging. Each
packet is tagged with a VLAN ID, which identifies the VLAN to which the packet belongs. The
switch then forwards the packet only to the appropriate VLAN.
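The tagging described above can be sketched by packing and parsing the 4-byte 802.1Q tag directly; this is a simplified illustration of the tag alone, not a full Ethernet frame:

```python
import struct

TPID = 0x8100  # EtherType value that marks an 802.1Q-tagged frame

def make_vlan_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID + (3-bit PCP | DEI | 12-bit VLAN ID)."""
    assert 0 <= vlan_id <= 4095 and 0 <= priority <= 7
    tci = (priority << 13) | vlan_id          # DEI bit left as 0
    return struct.pack("!HH", TPID, tci)

def read_vlan_id(tag: bytes) -> int:
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return tci & 0x0FFF                       # low 12 bits carry the VLAN ID

tag = make_vlan_tag(vlan_id=20, priority=5)
assert read_vlan_id(tag) == 20
```

A switch reads this VLAN ID from each incoming frame and forwards the frame only to ports belonging to that VLAN.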
b. Describe the use of Generic Routing Encapsulation (GRE) in securing network communications
GRE stands for Generic Routing Encapsulation. It is a communication protocol developed by Cisco
that encapsulates packets in order to route other protocols over an IP network.
GRE encapsulates a data packet inside another IP packet, creating a virtual connection between
two endpoints.
GRE supports the transmission of a wide range of protocols, including IPv4, IPv6, and even non-IP
traffic, over an IP network.
Generic Routing Encapsulation (GRE) is a versatile tunnelling protocol that, when used in
conjunction with security protocols like IPsec, plays a vital role in securing network
communications. By enabling the encapsulation of various protocols and supporting the
creation of secure point-to-point connections, GRE facilitates the secure transport of sensitive
data across untrusted networks. Its flexibility and simplicity make it a valuable tool in the
implementation of secure networking solutions, particularly in creating site-to-site VPNs and
maintaining network integrity.
Assignment 3
1. Encryption Fundamentals:
a) Symmetric vs asymmetric encryption:
i) Symmetric: a type of encryption where only one key is used to both encrypt and decrypt electronic information. Asymmetric: a type of encryption that uses two keys, a public key for encryption and a private key for decryption.
ii) Symmetric: the encryption process is very fast. Asymmetric: the encryption process is slow.
iii) Symmetric: only provides confidentiality. Asymmetric: provides confidentiality, authenticity, and non-repudiation.
iv) Symmetric: the ciphertext is the same size as or smaller than the original plaintext. Asymmetric: the ciphertext is the same size as or larger than the original plaintext.
v) Symmetric: used when a large amount of data must be transferred. Asymmetric: used to transfer small amounts of data.
vi) Symmetric: example algorithms are AES (Advanced Encryption Standard) and DES. Asymmetric: example algorithms are DSA (Digital Signature Algorithm) and RSA (Rivest-Shamir-Adleman).
vii) Symmetric: resource utilization is low compared to asymmetric key encryption. Asymmetric: resource utilization is high.
viii) Symmetric: typically shorter key lengths. Asymmetric: longer key lengths.
ix) Symmetric: less scalable for large networks, since each pair of users requires a unique key. Asymmetric: more scalable, as one public key can be shared with many users.
x) Symmetric: security depends on the secrecy of the key; if the key is compromised, so is the data. Asymmetric: provides a higher level of security for key exchange; the private key remains secure at all times.
b) Describe the Diffie-Hellman key exchange and its importance in cryptography. (10 marks)
- This is a method for securely exchanging cryptographic keys over a public communications channel.
- This approach eliminates the necessity for prior secret sharing by enabling users to create a shared
secret key that can be used for symmetric encryption.
How it works
- The Diffie-Hellman key exchange enables two parties to securely establish a shared secret over an
insecure channel.
- The two parties agree on a large prime number p and a base g.
- Each party selects a private key and computes their public key.
- After exchanging public keys, each party calculates the shared secret.
- Both arrive at the same shared secret, S, which may be used for symmetric encryption to provide
secure communication without exchanging keys directly.
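The steps above can be demonstrated with deliberately tiny numbers; real deployments use primes of 2048 bits or more:

```python
# Toy Diffie-Hellman with a small prime, for illustration only.
p, g = 23, 5                 # publicly agreed prime and base

a = 6                        # Alice's private key (kept secret)
b = 15                       # Bob's private key (kept secret)

A = pow(g, a, p)             # Alice's public key, sent to Bob
B = pow(g, b, p)             # Bob's public key, sent to Alice

s_alice = pow(B, a, p)       # Alice computes (g^b)^a mod p
s_bob   = pow(A, b, p)       # Bob computes (g^a)^b mod p

assert s_alice == s_bob      # both sides derive the same shared secret
```

An eavesdropper sees p, g, A, and B but not a or b; recovering the secret would require solving the discrete logarithm problem.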
Importance
- Secure key exchange: parties can securely establish a shared secret key over an insecure communication
channel without prior arrangement.
- Resistance to eavesdropping: even if an attacker intercepts the public keys exchanged between parties, the
shared secret remains secure because of the difficulty of solving the discrete logarithm problem.
- Foundation for secure communication: it underpins many security protocols, e.g. SSL/TLS.
- Scalability: the technique can be applied in many situations, such as protecting two-way
communications or facilitating group key exchanges between several users.
- A typical illustration of a hybrid encryption model is the use of AES (Advanced Encryption Standard)
for data encryption and RSA (Rivest-Shamir-Adleman) for key exchange.
How it works
- Key generation: the sender creates a random symmetric key for AES encryption; the actual message
will be encrypted with this key.
- Data encryption: using the created symmetric key, the sender encrypts the message with AES,
which encrypts the data quickly and efficiently.
- Key encryption: the sender then uses the recipient's public RSA key to encrypt the AES key;
it can only be decrypted with the recipient's private RSA key.
- Transmission: the recipient receives both the RSA-encrypted AES key and the AES-encrypted message
from the sender.
- Decryption: the recipient decrypts the AES key using their RSA private key, then uses the
decrypted AES key to decrypt the actual message.
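The hybrid flow above can be sketched end to end with deliberately insecure stand-ins: a XOR keystream in place of AES and textbook RSA with tiny primes in place of real RSA. None of this is secure; it only shows the shape of the protocol:

```python
import hashlib

# --- Toy symmetric cipher: XOR with a SHA-256-derived keystream (NOT secure).
def xor_crypt(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))   # XOR is its own inverse

# --- Textbook RSA with tiny primes, standing in for real RSA key wrapping.
p, q = 61, 53
n = p * q                   # 3233, the public modulus
e = 17                      # public exponent
d = 2753                    # private exponent (e * d == 1 mod 3120)

session_key = b"\x07"       # the "AES" key, kept tiny so it fits below n
wrapped = pow(int.from_bytes(session_key, "big"), e, n)   # RSA-encrypt the key
ciphertext = xor_crypt(session_key, b"transfer 200 to acct 42")

# Recipient side: unwrap the key with the private exponent, then decrypt.
unwrapped = pow(wrapped, d, n).to_bytes(1, "big")
assert unwrapped == session_key
assert xor_crypt(unwrapped, ciphertext) == b"transfer 200 to acct 42"
```

The design point survives the toy scale: the slow asymmetric step touches only the short key, while the fast symmetric step handles the bulk of the data.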
a) Compare block cipher and stream cipher in terms of performance and security. (10 marks)
ii) Block cipher: generally slower due to the overhead of processing entire blocks. Stream cipher: typically faster, as it encrypts data continuously and does not require padding.
iii) Block cipher: uses blocks of 64 bits or more. Stream cipher: operates on 8 bits at a time.
iv) Block cipher: provides strong security due to complex algorithms and modes of operation, e.g. CBC. Stream cipher: if not implemented correctly, can be less secure and exposed to attacks if the same key is reused.
v) Block cipher: uses both the confusion and diffusion principles for the conversion required for encryption. Stream cipher: uses only the confusion principle.
vi) Block cipher: an error in one block only affects that block during decryption. Stream cipher: an error in the stream can affect all subsequent bits.
vii) Block cipher: reversing the encryption (decryption) is comparatively complex, since a combination of more bits gets encrypted at once. Stream cipher: uses XOR for encryption, which can easily be reversed to recover the plaintext.
viii) Block cipher: complex algorithms with various modes of operation to enhance security. Stream cipher: simple algorithms that are easy to implement.
ix) Block cipher: uses modes such as Electronic Code Book (ECB) and Cipher Block Chaining (CBC) for encryption of plaintext. Stream cipher: uses CFB (Cipher Feedback) and OFB (Output Feedback) modes.
x) Block cipher: requires more memory for block storage and processing. Stream cipher: uses less memory, since it processes data in smaller amounts.
b) Explain the working of AES and its advantages over DES. (15 marks)
- AES was standardized by the National Institute of Standards and Technology (NIST) in 2001 to
replace the earlier Data Encryption Standard (DES).
-AES is reliable and effective for contemporary applications since it uses different key lengths and
operates on fixed block sizes.
-block and key size: operates on 128-bit blocks of data and supports key sizes of 128, 192, and 256
bits.
- Key expansion: a key schedule expands the original key into a set of round keys; the key
size determines the number of rounds (128-bit keys have 10 rounds, 192-bit keys have 12, and
256-bit keys have 14).
- Encryption process: the encryption process consists of several rounds, each involving the
following steps:
SubBytes: Each byte of the block is replaced with another byte using a substitution table (S-box).
ShiftRows: Rows of the block are shifted cyclically to the left.
MixColumns: Each column of the block is mixed (combined) to increase diffusion.
AddRoundKey: The current block is combined with a round key derived from the original key.
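The ShiftRows step and the round counts can be sketched directly; the 4x4 state below holds ordinal values rather than real AES bytes, purely for illustration:

```python
# ShiftRows on AES's 4x4 state matrix: row i is rotated left by i positions.
def shift_rows(state):
    return [row[i:] + row[:i] for i, row in enumerate(state)]

state = [
    [ 0,  1,  2,  3],   # row 0: no shift
    [ 4,  5,  6,  7],   # row 1: rotate left by 1
    [ 8,  9, 10, 11],   # row 2: rotate left by 2
    [12, 13, 14, 15],   # row 3: rotate left by 3
]

assert shift_rows(state) == [
    [ 0,  1,  2,  3],
    [ 5,  6,  7,  4],
    [10, 11,  8,  9],
    [15, 12, 13, 14],
]

# The number of rounds depends on the key size (bits -> rounds):
ROUNDS = {128: 10, 192: 12, 256: 14}
```

Shifting rows spreads each column's bytes across all columns, which, combined with MixColumns, gives AES its diffusion.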
- Decryption process: similar steps are taken in reverse order, using inverse tables and shifts,
to recover the original plaintext.
Advantages of AES over DES:
- AES provides a significantly larger key space than DES, making it much more resistant to brute-force
attacks.
- The structure of AES provides better diffusion and confusion, giving stronger security against various
cryptographic attacks than DES.
- AES is resistant to cryptanalysis, whereas DES has been compromised by increases in processing power
and is vulnerable due to its short key length.
- AES is more flexible than DES, since different key lengths can be chosen, allowing AES to be tailored
to various security requirements.
- AES is an encryption standard certified by NIST and widely used in many different applications;
its standardization guarantees a higher degree of interoperability and trust than DES.
- AES is more future-proof against attacks than DES, as many advancements have been introduced.
3.Authentication Mechanisms:
a) Discuss the "Something You Know," "Something You Have," and "Something You Are" authentication
methods. (15 marks)
Something You Know
- This is a way to identify a user using something like a PIN, a password, or a passphrase.
- To guarantee the other party's identity, a pre-shared secret between the parties is disclosed.
Prohibiting the use of the company name or anything related to the organisation in passwords.
Prohibiting the use of personal information, e.g. birthdays, in PINs and passwords.
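Systems should never store the "something you know" itself; a standard-library sketch of salted, slow password hashing (the iteration count is a common recommendation and is adjustable):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, expected)   # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("password123", salt, stored)
```

The per-user salt defeats precomputed rainbow tables, and the deliberately slow derivation makes offline brute force expensive even if the database leaks.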
Something You Have
- This is a method of authentication that requires a user to possess a physical item that verifies their identity.
- The physical items used for authentication include smart cards, mobile devices, security tokens, and
others.
Enhanced security
Two-factor authentication: mostly combined with something you know to strengthen security
Something You Are
- This kind of authentication uses biometric verification to authenticate the user.
Iris scans
Face recognition
Retina recognition
Fingerprint
Voice
Typing rhythm
Handwritten signature
- Its advantages are high security and user convenience.
- Its drawbacks are implementation costs and occasional false acceptance.
b) Explain the need for Multi-Factor Authentication (MFA) with real-world examples. (10 marks)
- Multi-factor authentication generally combines two or more independent credentials: what the user
knows, such as a password; what the user has, such as a security token; and what the user is, verified
using biometric methods.
- Each of the three authentication mechanisms has weaknesses; for that reason they should not be used
alone but combined to form a strong authentication system.
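In many MFA deployments the "something you have" factor is a TOTP code on the user's phone. A minimal sketch of RFC 6238 (SHA-1, 30-second steps) using only the standard library:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, unix_time, step=30, digits=6):
    """Compute a time-based one-time password per RFC 6238."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", unix_time // step)          # time-based counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
assert totp(SECRET, unix_time=59) == "287082"   # matches the RFC test vector
```

Because the code changes every 30 seconds and depends on a secret only the enrolled device holds, a stolen password alone is not enough to log in.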
i) Increased security
- MFA reduces the danger of compromised credentials, e.g. passwords, by demanding extra verification steps.
- example: a high-profile breach on Twitter in 2020 resulted in many accounts being compromised; even where passwords were obtained, MFA might have prevented unwanted access had it been implemented globally.
ii) Protection against phishing
- user credentials are a frequent target of phishing attacks; by mandating extra authentication factors, MFA can defeat such attacks.
- example: Google Accounts support multi-factor authentication by delivering a verification code to the user's mobile device; if a user unintentionally gives their password to a phishing website, an attacker would still require the second factor to gain access.
iii) Protection of sensitive data
- sensitive information should be protected through MFA to prevent unauthorised access to it.
- example: a data breach on Facebook in 2019 exposed millions of user records; with MFA that breach might have been avoided.
iv) Defence against identity theft
- by adding layers of protection against identity theft, MFA makes it harder for hackers to impersonate legitimate users.
- example: for online transactions, banks and other financial institutions frequently require MFA; a user might be asked to enter a password and then receive a code on their mobile device to verify a transaction.
v) Adaptability to threats
- by using different authentication techniques, such as biometric verification or one-time passwords, MFA can adjust to changing threats.
- example: numerous mobile banking applications provide a customisable security solution by letting customers authenticate with their passwords plus fingerprint or face recognition.
vi) User confidence
- putting MFA into practice increases user confidence in a platform's security measures.
- example: many online services, such as Dropbox and Microsoft 365, advertise that they use multi-factor authentication (MFA) to give users peace of mind that their data is safe.
4. Access Control Models:
a) Describe the Mandatory Access Control (MAC) and Discretionary Access Control (DAC) models. (15 marks)
Mandatory Access Control (MAC):
- a model of access control in which the operating system grants users access based on data confidentiality and user clearance levels.
- in this model, access is granted on a need-to-know basis: users have to prove a need for the information before gaining access.
- uses an implicit-deny strategy: the system denies users access to data unless they are explicitly given access to it.
- a user with top-secret clearance can access information at lower clearance levels, but not the other way round.
- the MAC model's advantages are that it reduces the possibility of unwanted access by offering a strong security environment, and that it is characterised by consistent enforcement.
- its drawbacks are inflexibility and the complex administration of security labels.
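The MAC rules above (clearance vs. classification, implicit deny, reads allowed only at or below one's own level) can be sketched in a few lines. The level names and function are illustrative, not from any particular system.

```python
# Ordered sensitivity labels: higher number = higher clearance.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def mac_can_read(user_clearance, data_classification):
    """A user may read data at or below their clearance level.

    Unknown labels fall through to a deny decision (implicit deny):
    an unknown user clearance maps to -1, an unknown data label to 99.
    """
    return LEVELS.get(user_clearance, -1) >= LEVELS.get(data_classification, 99)

print(mac_can_read("top_secret", "confidential"))  # True: lower level readable
print(mac_can_read("confidential", "secret"))      # False: no access upward
```

Note that the decision is made by the system from the labels alone; unlike DAC, the data owner cannot grant exceptions.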
Discretionary Access Control (DAC):
- an identity-based access control model that gives users a certain amount of control over their data.
- access rights for particular users or groups of users can be defined by data owners or by any users with the authority to control the data.
- grants access based on the user's identity rather than their clearance level.
- each piece of data's access permissions are kept in an access-control list (ACL).
- its advantages are flexibility and ease of use, although it does not provide a high level of security.
- its drawbacks are the potential for mismanagement through the granting of excessive permissions, and the potential weaknesses created because security policies may vary between owners.
- for example: a level 2 user is granted only the specific level 1 resource he needs; full level 1 access is not granted.
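An ACL lookup of the kind described above can be sketched as follows. The file name, users and permission strings are hypothetical examples.

```python
# Each object carries an ACL set by its owner, mapping user -> allowed actions.
acl = {
    "report.docx": {"alice": {"read", "write"}, "bob": {"read"}},
}

def dac_allowed(user, obj, action):
    """Implicitly deny unless the object's ACL grants this user the action."""
    return action in acl.get(obj, {}).get(user, set())

print(dac_allowed("bob", "report.docx", "read"))   # True
print(dac_allowed("bob", "report.docx", "write"))  # False: not in Bob's entry
```

Because the owner (here, presumably alice) edits the ACL directly, access follows identity, which is exactly the flexibility and the mismanagement risk noted above.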
b) Compare Role-Based Access Control (RBAC) and Lattice-Based Access Control (LBAC). (10 marks)
vi) RBAC: roles can be narrow or broad, affecting access levels | LBAC: granular control based on defined lattice levels
vii) RBAC: separation of duties can be enforced through role assignments | LBAC: enforces strict read/write policies based on hierarchy
viii) RBAC: easily scalable | LBAC: can grow cumbersome as security levels are raised
ix) RBAC: multiple roles can be assigned to users for varying access | LBAC: users can only access data at or below their security level
x) RBAC: roles refer to the levels of access that employees have to the network | LBAC: a lattice is used to define the levels of security that a resource may have
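The RBAC side of the comparison, including the point that a user may hold multiple roles, can be sketched briefly. The role names, users and permissions below are invented for illustration.

```python
# Permissions attach to roles; users are assigned one or more roles.
role_permissions = {
    "auditor": {"view_logs"},
    "teller":  {"view_accounts", "post_transaction"},
}
user_roles = {"tendai": {"teller", "auditor"}}

def rbac_permissions(user):
    """Union of permissions over all roles assigned to the user."""
    perms = set()
    for role in user_roles.get(user, set()):
        perms |= role_permissions.get(role, set())
    return perms

print(sorted(rbac_permissions("tendai")))
# ['post_transaction', 'view_accounts', 'view_logs']
```

An LBAC system would instead compare a user's lattice level against the resource's level, as in the MAC sketch earlier, rather than enumerating job-function roles.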