
CYBER SECURITY

An Internship Report

Submitted in partial fulfillment of the requirements of

Summer Internship (CS-353)

III/IV B.Tech (V Semester)

Submitted By

KURRA.HARSHA VARDHAN SAI (Y22CS099)

October, 2024
R.V.R. & J.C. COLLEGE OF ENGINEERING (Autonomous)
(Affiliated to Acharya Nagarjuna University)
Chandramoulipuram :: Chowdavaram
GUNTUR – 522 019


ACKNOWLEDGEMENT

The successful completion of any task would be incomplete without proper suggestions, guidance, and a supportive environment. The combination of these three factors formed the backbone of our internship on “CYBER SECURITY”.

We would like to express our profound gratitude to Dr. M. Sreelatha, Head of the Department of Computer Science and Engineering, and Dr. Kolla Srinivas, Principal, for their encouragement and support in carrying out this internship successfully.

We are very much thankful to PALO ALTO NETWORKS for having allowed us to conduct the study needed for the internship.

Finally, we submit our heartfelt thanks to all the staff in the Department of Computer Science and Engineering and to all our friends for their cooperation during the internship work.

KURRA.HARSHA VARDHAN SAI(Y22CS099)

ABSTRACT

The objective of this Cybersecurity Virtual Internship Program is to describe Wi-Fi vulnerabilities, attacks, and advanced persistent threats. Through structured learning and iterative exercises, we studied how to protect critical systems and sensitive information from digital attacks.

Cybersecurity is the practice of protecting internet-connected systems such as computers, servers, mobile devices, electronic systems, networks, and data from malicious attacks. The term can be divided into two parts: cyber and security. Cyber refers to the technology, including systems, networks, programs, and data; security is concerned with the protection of those systems, networks, applications, and information. In some cases, it is also called electronic information security or information technology security. The Palo Alto Networks certification learning plan focuses on the essential skills and techniques required to configure a simple Palo Alto Networks cybersecurity deployment.

CONTENTS
Page No.

Title Page i
Certificate ii
Acknowledgement iii
Abstract iv
Contents v
List of Figures vi
1: INTRODUCTION TO CYBER SECURITY 1

1.1 Key Concepts in Cybersecurity 1

1.2 Challenges in Cybersecurity 5

2: Fundamentals Of Cybersecurity 6

2.1 Distinguish between Web 2.0 and 3.0 7

2.2 Identify applications by their port number 10

2.3 circumvent port-based firewalls 12

2.4 MITRE ATT&CK Framework 15

3: Network Security Components 20

3.1 Describe the use of VLAN 20

3.2 IoT connectivity technologies 26

4: Cloud Technologies 28

4.1 Cloud Service Models 28

4.2 Cloud Deployment Models 30

4.3 Cloud multitenancy Enabling multitenancy 36

5: Elements of Security Operations 39

5.1 SOC business objectives 39

5.2 SOC business management and operations 42

CONCLUSION 46

REFERENCE
List of Figures
S.No Fig.No Figure Description Pg.No
1 1.1.1 Threats 1
2 1.1.2 Vulnerabilities 2
3 1.1.3 Attack Vectors 2
4 1.1.4 Protection Measures 3
5 1.1.5 Incident Response 3
6 1.1.6 Ethical Hacking 3
7 1.1.7 Data Privacy 4
8 1.1.8 Regulatory Compliance 4
9 1.2.1 Evolving Threat Landscape 5
10 1.2.2 Skill Shortage 5
11 1.2.3 Rapid Technology Advancements 6
12 2.1.1 Web 2.0 applications 7
13 2.1.2 Web content management (WCM) 9
14 2.2.1 Sweep scan 11
15 2.3.1 Hiding within SSL encryption 12
16 2.3.2 public-private partnerships 14
17 2.4.1 MITRE’S Approach 16
18 2.4.2 ATT&CK Framework 17
19 2.4.3 cyberattack lifecycle 18
20 3.1.1 Hold-down timers 23
21 3.1.2 IoT devices 25
22 3.2.1 IoT devices 27
23 3.2.2 Z-Wave 28
24 3.2.3 Zigbee/802.14 28
25 4.1.1 Platform as a service 29
26 4.1.2 Infrastructure as a service 30
27 4.2.1 cloud security responsibilities 31
28 4.2.2 Cloud computing 32
29 4.2.3 Security 33
30 4.3.1 Cloud multitenancy 36
31 4.3.2 Public Cloud 37
32 4.3.3 Private Cloud 38
33 4.3.4 Hybrid Cloud 39
34 5.1.1 SOC business objectives 40
35 5.1.2 Mission 40
36 5.1.3 Governance 42
37 5.1.4 Planning 42
38 5.2.1 SOC business management 42
39 5.2.2 Case Management 43
INTRODUCTION TO CYBER SECURITY
Cybersecurity is the practice of protecting computer systems, networks, and data from

unauthorized access, attacks, damage, or theft. As our world becomes increasingly connected

through the internet and digital technologies, the importance of cybersecurity has grown

exponentially. It is a crucial aspect of ensuring the confidentiality, integrity, and availability of

information and resources in the digital realm.

1.1 Key Concepts in Cybersecurity:

● Threats:
Cybersecurity addresses a wide range of threats, including malware (viruses, worms,

ransomware), phishing attacks, social engineering, denial-of-service (DoS) attacks, insider

threats, and more. These threats can be launched by hackers, cybercriminals, or even nation-

states.

Fig 1.1.1: Threats

● Vulnerabilities:
Vulnerabilities refer to weaknesses or flaws in computer systems, software, or networks that

can be exploited by attackers. Cybersecurity aims to identify and mitigate these vulnerabilities

to reduce the risk of attacks.

Fig 1.1.2: Vulnerabilities

● Attack Vectors:
Attack vectors are the paths or methods used by attackers to gain unauthorized access or

launch an attack. Common attack vectors include email attachments, malicious websites,

software vulnerabilities, and weak passwords.

Fig 1.1.3: Attack Vectors

● Protection Measures:
Cybersecurity employs various protective measures to safeguard systems and data. These

measures include firewalls, antivirus software, encryption, access controls, multi-factor

authentication, intrusion detection systems (IDS), and more.

Fig 1.1.4: Protection Measures

● Incident Response:
Despite the best preventive measures, cyber incidents may still occur. Incident response

involves identifying, containing, and mitigating the effects of a cybersecurity breach to

minimize damage and recover from the incident effectively.

Fig 1.1.5: Incident Response

● Ethical Hacking:
Ethical hacking, also known as penetration testing, involves simulating cyber-attacks on

systems or networks with permission to identify vulnerabilities and weaknesses.

Fig 1.1.6: Ethical Hacking

● Data Privacy:
Protecting the privacy of sensitive information is a critical aspect of cybersecurity. Data

breaches can lead to severe consequences, including financial losses and damage to an

organization's reputation.

Fig 1.1.7: Data Privacy

● Regulatory Compliance:
Many industries and countries have specific regulations and standards related to cybersecurity,

which organizations must adhere to. Non-compliance can result in legal consequences and

penalties.

Fig 1.1.8: Regulatory Compliance

1.2 Challenges in Cybersecurity:

● Evolving Threat Landscape:


Cyber threats are continually evolving, with attackers employing sophisticated techniques to

bypass traditional security measures.

Fig 1.2.1: Evolving Threat Landscape

● Skill Shortage:
There is a shortage of skilled cybersecurity professionals, making it challenging for organizations to find and retain qualified personnel.

Fig 1.2.2: Skill Shortage

● Rapid Technology Advancements:
As technology advances, new devices and services introduce new security risks that need to

be addressed.

Fig 1.2.3: Rapid Technology Advancements

● Human Factor:

Humans can often be the weakest link in cybersecurity, as social engineering attacks target

individuals to gain unauthorized access.

As technology continues to advance, the importance of cybersecurity will only grow, and

organizations and individuals alike must prioritize security measures to protect their digital

presence.

Fundamentals Of Cybersecurity

The modern cybersecurity landscape is a rapidly evolving, hostile environment filled with

advanced threats and increasingly sophisticated threat actors. This section describes computing

trends that are shaping the cybersecurity landscape, application frameworks and attack (or

threat) vectors, cloud computing and SaaS application security challenges, various information

security and data protection regulations and standards, and some recent cyberattack examples.

● Note:

The terms “enterprise” and “business” are used throughout this guide to describe organizations,

networks, and applications in general. The use of these terms is not intended to exclude other

types of organizations, networks, or applications, and should be understood to include not only

large businesses and enterprises but also small and medium-size businesses (SMBs),

government, state-owned enterprises (SOEs), public services, military, healthcare, and

nonprofits, among others.

2.1 Distinguish between Web 2.0 and 3.0:

The nature of enterprise computing has changed dramatically over the past decade. Core

business applications now are commonly installed alongside Web 2.0 apps on a variety of

endpoints, and networks that were originally designed to share files and printers are now used

to collect massive volumes of data, exchange real-time information, transact online business,

and enable global collaboration. Many Web 2.0 apps are available as software-as-a-service

(SaaS), web-based, or mobile apps that can be easily installed by end users or that can be run

without installing any local programs or services on the endpoint.

Fig 2.1.1: Web 2.0 applications

● Accounting software:
Used to process and record accounting data and transactions such as accounts payable, accounts receivable, payroll, trial balances, and general ledger (GL) entries. Examples of accounting software include Intacct, Microsoft Dynamics AX and GP, NetSuite, QuickBooks, and Sage.

● Business intelligence (BI):
Consists of tools and techniques used to surface large amounts of raw, unstructured data from a variety of sources (such as data warehouses and data marts). BI and business analytics software performs a variety of functions, including business performance management.

● Enterprise content management (ECM):
Systems used to store and organize files from a central management interface, with features such as indexing, publishing, search, workflow management, and versioning. Examples of CMS and ECM software include EMC Documentum, HP Autonomy, Microsoft SharePoint, and OpenText.

● Customer relationship management (CRM):

software is used to manage an organization’s customer (or client) information, including lead

validation, past sales, communication and interaction logs, and service history. Examples of

CRM suites include Microsoft Dynamics CRM, Salesforce.com, SugarCRM, and ZOHO.

● Database management systems (DBMS):

are used to administer databases, including the schemas, tables, queries, reports, views, and

other objects that comprise a database. Examples of DBMS software include Microsoft SQL

Server, MySQL, NoSQL, and Oracle Database.

● Enterprise resource planning (ERP):
Systems that provide an integrated view of core business processes, such as product and cost planning, manufacturing or service delivery, and inventory management. SAP is a well-known example of ERP software.

● Enterprise asset management (EAM):

software is used to manage an organization’s physical assets throughout their entire lifecycle,

including acquisition, upgrade, maintenance, repair, replacement, decommissioning, and

disposal. EAM is commonly implemented as an integrated module of ERP systems. Examples

of EAM software include IBM Maximo, Infor EAM, and SAP.

● Supply chain management (SCM):

software is used to manage supply chain transactions, supplier relationships, and various

business processes, such as purchase order processing, inventory management, and warehouse

management. SCM software is commonly integrated with ERP systems. Examples of SCM

software include Fishbowl Inventory, Freightview, Infor Supply Chain Management, and Sage.

● Web content management (WCM):

software is used to manage website content, including administration, authoring,

collaboration, and publishing. Examples of web content management software include Drupal,

IBM FileNet, Joomla, and WordPress.

Fig 2.1.2: Web content management (WCM)

2.1.1 Port-scanning methodologies:
A port scan is a method for determining which ports on a network are open. As ports on a

computer are the place where information is sent and received, port scanning is analogous to

knocking on doors to see if someone is home. Running a port scan on a network or server

reveals which ports are open and listening (receiving information), as well as revealing the

presence of security devices such as firewalls that are present between the sender and the target.

This technique is known as fingerprinting. It is also valuable for testing network security and

the strength of the system’s firewall. Due to this functionality, it is also a popular

reconnaissance tool for attackers seeking a weak point of access to break into a computer.

2.1.2 Non-standard ports:

Ports vary in the services they offer. They are numbered from 0 to 65535, but certain ranges are more frequently used. Ports 0 to 1023 are identified as the “well-known ports” or standard ports and have been assigned services by the Internet Assigned Numbers Authority (IANA).
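The IANA port ranges described above can be captured in a short helper. The function and the small service map below are illustrative sketches of our own, not part of any standard library:

```python
# Classify a TCP/UDP port number into its IANA-defined range.
def classify_port(port: int) -> str:
    if not 0 <= port <= 65535:
        raise ValueError("port must be between 0 and 65535")
    if port <= 1023:
        return "well-known"       # assigned to standard services (e.g., 22/SSH, 80/HTTP)
    if port <= 49151:
        return "registered"       # registered with IANA for specific applications
    return "dynamic/private"      # ephemeral ports, assigned locally by the OS

# A small illustrative subset of standard services on well-known ports.
WELL_KNOWN_SERVICES = {22: "ssh", 25: "smtp", 53: "dns", 80: "http", 443: "https"}
```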

2.2 Identify applications by their port number:

A port scan sends a carefully prepared packet to each destination port number. The basic techniques that port scanning software is capable of include:

● Vanilla:
The most basic scan: an attempt to connect to all 65,536 ports one at a time. A vanilla scan is a full connect scan, meaning it sends a SYN flag (request to connect) and, upon receiving a SYN-ACK (acknowledgement of connection) response, sends back an ACK.

● SYN Scan:
Also referred to as a half-open scan, it only sends a SYN and waits for a SYN-ACK response from the target. If a response is received, the scanner never replies, so the TCP connection is not completed; the system doesn’t log the interaction, but the sender has learned whether the port is open.

● XMAS and FIN Scans:
Examples of a suite of scans used to gather information without being logged by the target system. In a FIN scan, an unsolicited FIN flag (normally used to end an established session) is sent to a port. The system’s response to this unexpected flag can reveal the state of the port or insight about the firewall. For example, a closed port that receives an unsolicited FIN packet will respond with a RST (an instantaneous abort) packet, but an open port will ignore it. An XMAS scan simply sends a set of all the flags, creating a nonsensical interaction. The system’s response can be interpreted to better understand the system’s ports and firewall.

● Sweep scan:
Pings the same port across a number of computers to identify which computers on the network are active. This does not reveal information about the port’s state; instead, it tells the sender which systems on a network are active. Thus, it can be used as a preliminary scan.

Fig 2.2.1: Sweep scan
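A vanilla (full-connect) scan as described above can be sketched with Python’s standard socket module: connect_ex() completes the SYN / SYN-ACK / ACK handshake when the port is open and listening. The helper names are our own, and a real scanner (e.g., Nmap) does far more:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Vanilla (full-connect) probe: attempt a complete TCP handshake.

    connect_ex() sends a SYN; a return value of 0 means the target answered
    with SYN-ACK and the handshake completed (port open and listening).
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def vanilla_scan(host: str, ports) -> list[int]:
    """Return the subset of `ports` that accept a full TCP connection."""
    return [p for p in ports if is_port_open(host, p)]
```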

2.3 circumvent port-based firewalls:

Exploitation of vulnerabilities in core business applications has long been an attack vector, but

threat actors are constantly developing new tactics, techniques, and procedures (TTPs).

● Hiding within SSL encryption:

which masks the application traffic, for example, over TCP port 443 (HTTPS). More than half of all

web traffic is now encrypted.

Fig 2.3.1: Hiding within SSL encryption

2.3.1 Common cloud computing service models:


Cloud computing is not a location but rather a pool of resources that can be rapidly provisioned

in an automated, on-demand manner.

2.3.2 Software as a service (SaaS):

Customers are provided access to an application running on a cloud infrastructure. The

application is accessible from various client devices and interfaces, but the customer has no

knowledge of, and does not manage or control, the underlying cloud infrastructure. The

customer may have access to limited user-specific application settings, and security of the

customer data still is the responsibility of the customer.

2.3.3 Platform as a service (PaaS):
Customers can deploy supported applications onto the provider’s cloud infrastructure, but the

customer has no knowledge of, and does not manage or control, the underlying cloud

infrastructure. The customer has control over the deployed applications and limited

configuration settings for the application-hosting environment. The company owns the

deployed applications and data, and therefore it is responsible for the security of those

applications and data.


Around the world, governments as well as private sector organizations are focused on

identifying and mitigating risks to the information and communications technology (ICT)

supply chain. In fact, efforts to disrupt or exploit supply chains have become, in the words of

a senior US Homeland Security Department official, a “principal attack vector” for

adversarial nations seeking to take advantage of vulnerabilities for espionage, sabotage or

other malicious activities. In this environment, strong supply chain security practices are a

differentiator for critical infrastructure organizations. But what, exactly, does a strong supply

chain security program look like? Recently, the U.S. Department of Commerce’s National

Institute of Standards and Technology (NIST) published a case study highlighting how Palo

Alto Networks uses supply chain best practices.

2.3.4 End-to-end risk management:

We identify supply chain risks across our entire product lifecycle – design, sourcing,

manufacturing, fulfillment and service – and take proactive action to ensure the integrity of our

products. Risk assessments are performed early in the product development lifecycle to help

determine the feasibility of product design decisions.

2.3.5 Hardware manufacturing:

We work with a limited set of manufacturing partners and locations, which enables us to more easily manage personnel, facility, and product security. In fact, we regularly consider geopolitical implications when making decisions to forgo suppliers and manufacturing locations because it’s simply the right decision for product security.

2.3.6 Public-private partnerships:

designed to increase collaboration between public and private sector organizations and make

recommendations for enhancing supply chain security, such as our executive committee role

on the DHS ICT Supply Chain Risk Management Task Force.

Fig 2.3.2: public-private partnerships

2.3.7 Roles within a SaaS environment:


A role defines the type of access that an administrator has to the firewall. The Administrator

Types are:

● Role Based:

This allows custom roles you can configure for more granular access control over the

functional areas of the web interface, CLI, and XML API. For example, you can create an

Admin Role profile for your operations staff that provides access to the firewall and network

configuration areas of the web interface, then create a separate profile for your security

administrators that provides access to security policy definitions, logs, and reports. On a

firewall with multiple virtual systems, you can select whether the role defines access for all

virtual systems or specific virtual systems. When new features are added to the product, you

must update the roles with corresponding access privileges because the firewall does not automatically add new features to custom role definitions.

● Dynamic:

These include built-in roles that provide access to the firewall. When new features are added,

the firewall automatically updates the definitions of dynamic roles; you never need to manually

update them.
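The difference between custom (role-based) and built-in dynamic roles can be sketched as follows. This is purely illustrative Python, not the PAN-OS configuration interface; the role names and functional-area names are invented:

```python
# Illustrative only -- not a real firewall API. A custom role enumerates its
# permissions explicitly, so a newly shipped feature defaults to "no access"
# until an administrator updates the role definition.
CUSTOM_ROLES = {
    "ops":      {"network-config", "device-config"},
    "secadmin": {"security-policies", "logs", "reports"},
}

ALL_AREAS = {"network-config", "device-config", "security-policies",
             "logs", "reports", "new-feature"}  # "new-feature" just shipped

def can_access(role: str, area: str) -> bool:
    """A custom role grants access only to the areas it explicitly lists."""
    return area in CUSTOM_ROLES.get(role, set())

def dynamic_superuser_can_access(area: str) -> bool:
    """A built-in (dynamic) superuser role tracks every area automatically."""
    return area in ALL_AREAS
```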

2.4 MITRE ATT&CK Framework :

The MITRE ATT&CK™ framework is a comprehensive matrix of tactics and techniques

designed for threat hunters, defenders, and red teams to help classify attacks, identify attack

attribution and objective, and assess an organization's risk. Organizations can use the

framework to identify security gaps and prioritize mitigations based on risk.
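As a minimal illustration of using the framework to find security gaps, the sketch below maps a few real ATT&CK technique IDs to tactics and flags techniques with no recorded detection. The detection results and the helper function are invented for illustration:

```python
# A tiny illustrative slice of the ATT&CK matrix: technique ID -> (name, tactic).
# The IDs and names are real ATT&CK techniques; the coverage data is invented.
TECHNIQUES = {
    "T1566": ("Phishing", "Initial Access"),
    "T1059": ("Command and Scripting Interpreter", "Execution"),
    "T1078": ("Valid Accounts", "Defense Evasion"),
}

# Techniques our (hypothetical) tooling detected during an evaluation.
DETECTED = {"T1566", "T1059"}

def coverage_gaps(techniques, detected):
    """Return technique IDs with no detection -- candidates for prioritized mitigation."""
    return sorted(t for t in techniques if t not in detected)
```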

● MITRE’s Approach:
MITRE’s approach is focused on articulating how detections occur rather than assigning scores to vendor capabilities. MITRE categorizes each detection that it captures, and detections are then organized according to each technique. Techniques may have more than one detection if the capability detects the technique in different ways, and all detections MITRE observes are included in the results. While MITRE makes every effort to capture different detections, vendor capabilities may be able to detect procedures in ways that MITRE did not capture.

For a detection to be included for a given technique, the detection must apply to that technique

specifically. For example, just because a detection applies to one technique in a step or sub-

step, that does not mean it applies to all techniques of that step.

Fig 2.4.1: MITRE’S Approach

To determine the appropriate category for a detection, MITRE reviews the screenshot(s)

provided, notes taken during the evaluation, results of follow-up questions to the vendor, and

vendor feedback on draft results.

MITRE also independently tests procedures in a separate lab environment as well as reviews

open-source tool detections and forensic artifacts. This testing informs what is considered a

detection for each technique.

FIG 2.4.2: ATT&CK Framework

● cyberattack lifecycle:
Modern cyberattack strategy has evolved from a direct attack against a high-value server or

asset (“shock and awe”) to a patient, multistep process that blends exploits, malware, stealth,

and evasion in a coordinated network attack (“low and slow”). The cyberattack lifecycle (see

following figure) illustrates the sequence of events that an attacker goes through to infiltrate a

network and exfiltrate (or steal) valuable data. Blocking just one step breaks the chain and can effectively defend an organization’s network and data against an attack.

FIG 2.4.3: cyberattack lifecycle

● Reconnaissance:
Like common criminals, attackers meticulously plan their cyberattacks. They research,

identify, and select targets, often extracting public information from targeted employees’ social

media profiles or from corporate websites, which can be useful for social engineering and

phishing schemes. Attackers also will scan for network vulnerabilities and exposed services.

● Weaponization:
Next, attackers determine which methods to use to compromise a target endpoint. They may

choose to embed intruder code within seemingly innocuous files such as a PDF or Microsoft

Word document or email message. Or, for highly targeted attacks, attackers may customize

deliverables to match the specific interests of an individual within the target organization.

Breaking the cyberattack lifecycle at this phase of an attack is challenging because

weaponization typically occurs within the attacker’s network. However, analysis of artifacts

(both malware and weaponizer) can provide important threat intelligence to enable effective

zero-day protection when delivery (the next step) is attempted.

● Delivery:
Attackers next attempt to deliver their weaponized payload to a target endpoint, for example,

via email, instant messaging (IM), drive-by download (an end user’s web browser is redirected

to a webpage that automatically downloads malware to the endpoint in the background), or

infected file share. Breaking the cyberattack lifecycle at this phase of an attack requires

visibility into all network traffic (including remote and mobile devices) to effectively block

malicious or risky websites, applications, and IP addresses, and preventing known and

unknown malware and exploits.

● Exploitation:
After a weaponized payload is delivered to a target endpoint, it must be triggered. An end user

may unwittingly trigger an exploit, for example, by clicking a malicious link or opening an

infected attachment in an email, or an attacker may remotely trigger an exploit against a known

server vulnerability on the target network. Breaking the cyberattack lifecycle at this phase of

an attack, as during the Reconnaissance phase, begins with proactive and effective end-user

security awareness training that focuses on recognizing and avoiding phishing and other social engineering techniques.

● Installation:
Next, an attacker will escalate privileges on the compromised endpoint, for example, by

establishing remote shell access and installing root kits or other malware. With remote shell

access, the attacker has control of the endpoint and can execute commands in privileged mode

from a command line interface (CLI) as if physically sitting in front of the endpoint. The

attacker then will move laterally across the target’s network, executing attack code, identifying

other targets of opportunity, and compromising additional endpoints to establish persistence.

The way to break the cyberattack lifecycle at this phase of an attack is to limit or restrict the

attackers’ lateral movement within the network.

● Command and Control:


Attackers establish encrypted communication channels back to command-and-control (C2)
servers across the internet so that they can modify their attack objectives and methods as

additional targets of opportunity are identified within the victim network, or to evade any new
security countermeasures that the organization may attempt to deploy if attack artifacts are
discovered.

Network Security Components

A hub (or concentrator) is a network device that connects multiple devices such as desktop computers, laptop docking stations, and printers on a LAN. Network traffic that is sent to a hub is broadcast out of all ports on the hub, which can create network congestion and introduces potential security risks (because broadcast data can be intercepted).

3.1 Describe the use of VLANs:

A VLAN is a set of devices or network nodes that communicate with each other as if they were on a single LAN, when in fact they may be spread across one or more LAN segments. Virtual local-area networks (VLANs) segment broadcast domains in a LAN, typically into logical groups (such as business departments). VLANs are created on network switches.
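The broadcast-domain segmentation described above can be modeled in a few lines. This is a toy model rather than switch configuration; the VLAN IDs and port names are invented:

```python
# Toy model of VLAN segmentation: a broadcast frame entering a switch port is
# flooded only to other ports in the same VLAN, not to the whole switch.
VLANS = {
    10: {"ge-0/0/1", "ge-0/0/2"},   # e.g., an engineering VLAN
    20: {"ge-0/0/3", "ge-0/0/4"},   # e.g., a finance VLAN
}

def flood_broadcast(vlans: dict, ingress_port: str) -> set:
    """Return the ports a broadcast frame is flooded to (its broadcast domain)."""
    for members in vlans.values():
        if ingress_port in members:
            return members - {ingress_port}
    return set()  # unknown port: frame is dropped in this toy model
```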

● Static and dynamic routing protocols:
A static routing protocol requires that routes be created and updated manually on a router or

other network device. If a static route is down, traffic can’t be automatically rerouted unless an

alternate route has been configured. Also, if the route is congested, traffic can’t be

automatically rerouted over the less congested alternate route. Static routing is practical only

in very small networks or for very limited, special-case routing scenarios (for example, a

destination that’s used as a backup route or is reachable only via a single router). However,

static routing has low bandwidth requirements (routing information isn’t broadcast across the

network) and some built-in security (users can route only to destinations that are specified in

statically defined routes).

• Distance-vector:
A distance-vector protocol makes routing decisions based on two factors: the distance (hop

count or other metric) and vector (the egress router interface). It periodically informs its peers

and/or neighbors of topology changes. Convergence is the time required for all routers in a

network to update their routing tables with the most current information (such as link status

changes), and it can be a significant problem for distance-vector protocols.

Without convergence, some routers in a network may be unaware of topology changes, which

causes the router to send traffic to an invalid destination. During convergence, routing

information is exchanged between routers, and the network slows down considerably.

Convergence can take several minutes in networks that use distance-vector protocols. Routing

Information Protocol (RIP) is an example of a distance-vector routing protocol that uses hop

count as its routing metric. To prevent routing loops, in which packets effectively get stuck

bouncing between various router nodes, RIP implements a hop limit of 15, which limits the

size of networks that RIP can support. After a data packet crosses 15 router nodes (hops)

between a source and a destination, the destination is considered unreachable. In addition to

hop limits, RIP employs several other mechanisms to prevent routing loops, including:

● Split horizon:

Prevents a router from advertising a route back out through the same interface from which the

route was learned.

● Triggered updates:

When a change is detected, the update is sent immediately instead of after the 30-second time

delay normally required to send a RIP update.

● Hold-down timers:

Cause a router to start a timer when the router first receives information that a destination is unreachable. Subsequent updates about that destination will not be accepted until the timer expires. This timer also helps avoid problems associated with flapping, which occurs when a route (or interface) repeatedly changes state (up, down, up, down) over a short period of time.

Fig 3.1.1: Hold-down timers
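The hop-limit and split-horizon behavior described above can be sketched in a few lines of Python. This is an illustrative simulation only, not code from any router implementation; the function names and the simple dictionary routing table are invented for the example:

```python
# Minimal sketch of RIP-style distance-vector updates (illustrative only).
# RIP treats a hop count of 16 as "infinity", so any route that would
# require more than 15 hops is considered unreachable.

RIP_INFINITY = 16

def update_route(table, destination, advertised_hops):
    """Apply a neighbor's advertisement: our cost is its cost plus one hop."""
    new_hops = min(advertised_hops + 1, RIP_INFINITY)
    if new_hops < table.get(destination, RIP_INFINITY + 1):
        table[destination] = new_hops

def is_reachable(table, destination):
    return table.get(destination, RIP_INFINITY) < RIP_INFINITY

def advertise(table, learned_from, out_interface):
    """Split horizon: never advertise a route back out the same
    interface it was learned on."""
    return {dest: hops for dest, hops in table.items()
            if learned_from.get(dest) != out_interface}

routes, learned_from = {}, {}
update_route(routes, "10.0.0.0/8", 3)      # 4 hops away -> reachable
learned_from["10.0.0.0/8"] = "eth0"
update_route(routes, "172.16.0.0/16", 15)  # 15 + 1 = 16 hops -> unreachable

print(is_reachable(routes, "10.0.0.0/8"))       # True
print(is_reachable(routes, "172.16.0.0/16"))    # False
print(advertise(routes, learned_from, "eth0"))  # omits the eth0-learned route
```

Note how the route learned on eth0 is silently dropped from advertisements sent out eth0; real RIP can instead apply poison reverse and advertise it with a metric of 16.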

● Security risks and the IoT:


Identity of Things (IDoT) refers to Identity and Access Management (IAM) solutions for the

IoT. These solutions must be able to manage human-to-device, device-to-device, and/or device-

to-service/system IAM by:

● Creating a well-defined process for registering IoT devices. The type of data that the device

will be transmitting and receiving should shape the registration process.

● Defining security safeguards for data streams from IoT devices

● Outlining well-defined authentication and authorization processes for admin local access to

connected devices

● Creating safeguards for protecting different types of data, making sure to create privacy

safeguards for personally identifiable information (PII)
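The IDoT requirements above can be roughly modeled as a device registry in which registration declares the data types a device may handle, and authentication and authorization checks enforce that scope. The class, method names, and checks below are hypothetical illustrations, not any actual IAM product's API:

```python
# Illustrative-only IoT device registry: registration defines the data
# types a device may transmit/receive, and access checks enforce that scope.

class DeviceRegistry:
    def __init__(self):
        self._devices = {}

    def register(self, device_id, secret, data_types):
        """Well-defined registration: the declared data types shape
        what the device is later authorized to handle."""
        self._devices[device_id] = {"secret": secret,
                                    "data_types": set(data_types)}

    def authenticate(self, device_id, secret):
        dev = self._devices.get(device_id)
        return dev is not None and dev["secret"] == secret

    def authorize(self, device_id, data_type):
        dev = self._devices.get(device_id, {"data_types": set()})
        return data_type in dev["data_types"]

registry = DeviceRegistry()
registry.register("thermostat-01", "s3cret", ["telemetry"])

print(registry.authenticate("thermostat-01", "s3cret"))  # True
print(registry.authorize("thermostat-01", "telemetry"))  # True
print(registry.authorize("thermostat-01", "pii"))        # False: PII not in scope
```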

Though the IoT presents innovative new approaches and services in all industries, it also

presents new cybersecurity risks. According to research conducted by the Palo Alto Networks

Unit 42 threat intelligence team, the general security posture of IoT devices is declining, thus

leaving organizations vulnerable to new IoT-targeted malware and older attack techniques that

IT teams have long forgotten.

● IoT device vulnerabilities:
Ninety-eight percent of all IoT device traffic is unencrypted, thus exposing personal and

confidential data on the network. Attackers that have successfully bypassed the first line of

defense (most frequently via phishing attacks) and established C2 can listen to unencrypted

network traffic, collect personal or confidential information, and then exploit that data for profit

on the dark web. Fifty-seven percent of IoT devices are vulnerable to medium-severity or high-

severity attacks, thus making IoT the “low-hanging fruit” for attackers. Because of the

generally low patch level of IoT assets, the most frequent attacks are exploits via long-known

vulnerabilities and password attacks using default device passwords.

In 2019, 83 percent of medical imaging devices ran on unsupported operating systems, a 56

percent jump from 2018, as a result of the Windows 7 operating system reaching its end of

life. This general decline in security posture presents opportunities for new attacks.

Fig 3.1.2: IoT devices

3.2 IoT connectivity technologies:

● 2G/2.5G:

2G connectivity remains a prevalent and viable IoT connectivity option due to the low cost of

2G modules, relatively long battery life, and large installed base of 2G sensors and M2M

applications.

● 3G:

IoT devices with 3G modules use either Wideband Code Division Multiple Access (W-CDMA)

or Evolved High Speed Packet Access (HSPA+ and Advanced HSPA+) to achieve data transfer

rates of 384Kbps to 168Mbps.

● 4G/Long-Term Evolution (LTE):

4G/LTE networks enable real-time IoT use cases, such as autonomous vehicles, with 4G LTE

Advanced Pro delivering speeds in excess of 3Gbps and less than 2 milliseconds of latency.

● C-band:

C-band satellite operates in the 4 to 8 gigahertz (GHz) range. It is used in some Wi-Fi devices

and cordless phones, and in surveillance and weather radar systems.

● L-band:

L-band satellite operates in the 1 to 2GHz range. It commonly is used for radar, global

positioning systems (GPSs), radio, and telecommunications applications

● Adaptive Network Technology:

ANT is a proprietary multicast wireless sensor network technology primarily used in personal

wearables, such as sports and fitness sensors.

● Wi-Fi/802.11:

The Institute of Electrical and Electronics Engineers (IEEE) defines the 802 LAN protocol

standards. 802.11 is the set of standards used for Wi-Fi networks typically operating in the

2.4GHz and 5GHz frequency bands. The most common implementations today include 802.11n, 802.11ac, and 802.11ax.

Fig 3.2.1: IoT devices

● Z-Wave:

Z-Wave is a low-energy wireless mesh network protocol primarily used for home automation

applications such as smart appliances, lighting control, security systems, smart thermostats,

windows and locks, and garage doors.

Fig 3.2.2: Z-Wave

● Zigbee/802.15.4:

Zigbee is a low-cost, low-power wireless mesh network protocol based on the IEEE 802.15.4

standard. Zigbee is the dominant protocol in the low-power networking market, with a large

installed base in industrial environments and smart home products.

Fig 3.2.3: Zigbee/802.15.4

Cloud Technologies
4.1 Cloud Service Models:

● Software as a service (SaaS):

Customers are provided access to an application running on a cloud infrastructure. The

application is accessible from various client devices and interfaces, but the customer has no

knowledge of, and does not manage or control, the underlying cloud infrastructure. The

customer may have access to limited user-specific application settings, and security of the

customer data still is the responsibility of the customer.

● Platform as a service (PaaS):

Customers can deploy supported applications onto the provider’s cloud infrastructure, but the

customer has no knowledge of, and does not manage or control, the underlying cloud

infrastructure. The customer has control over the deployed applications and limited

configuration settings for the application-hosting environment. The company owns the

deployed applications and data, and therefore it is responsible for the security of those

applications and data.

Fig 4.1.1: Platform as a service

● Infrastructure as a service (IaaS):

Customers can provision processing, storage, networks, and other computing resources, and

deploy and run operating systems and applications. However, the customer has no knowledge

of, and does not manage or control, the underlying cloud infrastructure. The customer has

control over operating systems, storage, and deployed applications.

Fig 4.1.2: Infrastructure as a service

4.2 Cloud Deployment Models :

● Public:

A cloud infrastructure that is open to use by the general public. It is owned, managed, and

operated by a third party (or parties), and it exists on the cloud provider’s premises.

● Community:

A cloud infrastructure that is used exclusively by a specific group of organizations

● Private:

A cloud infrastructure that is used exclusively by a single organization. It may be owned,

managed, and operated by the organization or a third party (or a combination of both), and it

may exist on-premises or off-premises.

● Hybrid:

A cloud infrastructure that comprises two or more of the aforementioned deployment models,

bound by standardized or proprietary technology that enables data and application portability

(for example, fail over to a secondary data center for disaster recovery or content delivery

networks across multiple clouds).

4.2.1 Cloud security responsibilities:


The security risks that threaten your network today do not change when you move to the cloud.

The shared responsibility model defines who (customer and/or provider) is responsible for what

(related to security) in the public cloud.

In general terms, the cloud provider is responsible for security of the cloud, including the

physical security of the cloud data centers, and foundational networking, storage, compute, and

virtualization services. The cloud customer is responsible for security in the cloud, which is

further delineated by the cloud service model

Fig 4.2.1: cloud security responsibilities

● Cloud computing doesn't mitigate existing network security risks:

The division of security responsibility is further delineated by the cloud service model. For example, in an infrastructure-as-a-service

(IaaS) model, the cloud customer is responsible for the security of the operating systems,

middleware, runtime, applications, and data. In a platform-as-a-service (PaaS) model, the cloud

customer is responsible for the security of the applications and data, and the cloud provider is

responsible for the security of the operating systems, middleware, and runtime. In a SaaS

model, the cloud customer is responsible only for the security of the data, and the cloud

provider is responsible for the full stack, from the physical security of the cloud data centers to

the application.

Fig 4.2.2: Cloud computing
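The service-model split described above can be summarized programmatically. The stack layer names and boundary indices below are an informal sketch of the shared responsibility model for illustration, not any provider's official matrix:

```python
# Informal sketch: which stack layers the *customer* secures under each
# cloud service model, per the shared responsibility description above.
STACK = ["physical", "network", "compute", "virtualization",
         "operating_system", "middleware", "runtime", "application", "data"]

# Index of the first layer that is the customer's responsibility;
# everything below that boundary is secured by the cloud provider.
CUSTOMER_BOUNDARY = {"IaaS": STACK.index("operating_system"),
                     "PaaS": STACK.index("application"),
                     "SaaS": STACK.index("data")}

def customer_layers(model):
    """Return the layers the customer must secure under a given model."""
    return STACK[CUSTOMER_BOUNDARY[model]:]

print(customer_layers("IaaS"))  # OS through data
print(customer_layers("PaaS"))  # ['application', 'data']
print(customer_layers("SaaS"))  # ['data']
```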

● Security requires isolation and segmentation; the cloud relies on shared resources:

Security best

practices dictate that mission-critical applications and data be isolated in secure segments on

the network using the Zero Trust principle of “never trust, always verify.” On a physical

network, Zero Trust is relatively straightforward to accomplish using firewalls and policies

based on application and user identity. In a cloud computing environment, direct

communication between VMs within a server and in the data center (east-west traffic) occurs

constantly, in some cases across varied levels of trust, thus making segmentation a difficult

task. Mixed levels of trust, when combined with a lack of intra-host traffic visibility by

virtualized port-based security offerings, may weaken an organization’s security posture.

Fig 4.2.3: Security

● Traditional network and host security models don’t work in the cloud for serverless

applications. Defense in depth mostly has been performed through Network layer controls.

Advanced threat prevention tools can recognize the applications that traverse the network and

determine whether they should be allowed. This type of security still is very much required in

cloud native environments, but is no longer sufficient on its own. Public cloud providers offer

a rich portfolio of services, and the only way to govern and secure many of them is through

Identity and Access Management (IAM). IAM controls the permissions and access for users

and cloud resources. IAM policies are sets of permission policies that can be attached to either

users or cloud resources to authorize what they access and what they can do with what they

access.

● Your business applications segmented using Zero Trust principles: To fully maximize the use

of computing resources, a relatively common current practice is to mix application workload

trust levels on the same compute resource. Although mixed levels of trust are efficient in

practice, they introduce security risks in the event of a compromise. Your cloud security

solution needs to be able to implement security policies based on the concept of Zero Trust as

a means of controlling traffic between workloads while preventing lateral movement of threats.
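A Zero Trust "never trust, always verify" rule for traffic between workloads can be sketched as a default-deny allow-list check. The tier names and policy format below are hypothetical; real enforcement would sit in a firewall, not application code:

```python
# Illustrative Zero Trust check: east-west traffic between workloads is
# denied unless an explicit rule allows that (source, destination, app) flow.
ALLOW_RULES = {
    ("web-tier", "app-tier", "https"),
    ("app-tier", "db-tier", "postgres"),
}

def is_allowed(src, dst, app):
    """Default deny: only explicitly verified flows pass."""
    return (src, dst, app) in ALLOW_RULES

print(is_allowed("web-tier", "app-tier", "https"))    # True
print(is_allowed("web-tier", "db-tier", "postgres"))  # False: no lateral shortcut
```

Because every flow not explicitly listed is denied, a compromised web-tier workload cannot move laterally to the database tier.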

● Centrally managed business applications; streamlined policy updates: Physical network

security still is deployed in almost every organization, so the ability to manage both hardware

and virtual form factor deployments from a centralized location using the same management

infrastructure and interface is critical. To ensure that security keeps pace with the speed of

change that your workflows may exhibit, your security solution should include features that

will allow you to reduce, and in some cases eliminate, the manual processes that security policy

updates often require. Regardless of which type of cloud service you use, the burden of securing

certain types of workloads will always fall on you instead of your vendor.

● Review default settings:

Although certain settings are automatically set by the provider, some must be manually

activated. You should have your own set of security policies rather than assume that the vendor

is handling a particular aspect of your cloud native security.

● Adapt data storage and authentication configurations to your organization:

All locations where

data will be uploaded should be password protected. Password expiration policies also should

be carefully selected to meet the needs of your organization.

● Don’t assume your cloud data is safe:

Never assume that vendor-encrypted data is totally safe. Some vendors provide encryption

services before upload, and some do not. Whichever the case, make sure to encrypt your data

in transit and at rest by using your own keys.

● Integrate with your cloud’s data retention policy:

You must understand your vendor’s data retention and deletion policy. You must have multiple

copies of your data and a fixed data retention period. But what happens when you delete data

from the cloud? Is it still accessible to the vendor? Are there other places where it might have

been cached or copied? You should verify these issues before you set up a new cloud

environment.

● Set appropriate privileges:

Appropriate settings for privilege levels are helpful for making your cloud environment more

secure. When you use role-based access controls (RBACs) for authorization, you can ensure

that every person who views or works with your data has access only to the things that are

absolutely necessary.
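The role-based access control described above can be sketched in a few lines; the roles, permissions, and user names here are invented for illustration:

```python
# Illustrative RBAC: users get roles, roles get permissions, and a user
# can only do what some assigned role explicitly grants (least privilege).
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "analyst": {"read", "annotate"},
    "admin": {"read", "annotate", "delete"},
}

USER_ROLES = {"alice": {"viewer"}, "bob": {"analyst", "admin"}}

def can(user, permission):
    """Authorize only if some role assigned to the user grants the permission."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(can("alice", "read"))    # True
print(can("alice", "delete"))  # False: least privilege
print(can("bob", "delete"))    # True
```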

4.3 Cloud multitenancy:

Enabling multitenancy allows you to host multiple instances of Prisma Access on a single Panorama appliance. Each

instance is known as a Tenant. Prisma Access tenants get their own dedicated Prisma Access

instances. They are not shared between tenants.

Fig 4.3.1: Cloud multitenancy


● Security tools in various cloud environments:
Cloud security, or cloud computing security, consists of various technologies and tools

designed to protect each aspect of the Shared Responsibility Model. Although cloud users aren't

responsible for the security of the underlying infrastructure, they are responsible for protecting

their information from theft, data leakage, and deletion. Many security approaches in the cloud

are the same as those of traditional IT security, but there are some fundamental differences.

Whether you implement public, private, or hybrid cloud environments, it’s important to adopt

security controls that facilitate frictionless deployment and don't hinder the dynamic, agile

nature for which cloud environments are renowned.

● Public Cloud :
The public cloud is a cloud computing model in which IT services are delivered via the public

internet. In this case, the entire underlying infrastructure is completely owned and operated by

a third-party cloud provider, such as Google Cloud, Amazon or Microsoft. Public cloud

deployments are often used to provide common services like web-based applications or storage,

but they can also be used for complex computations or to test and develop new services. These

environments are generally billed via annual or use-based subscriptions based on the number

of cloud resources used and traffic processed. Within a public cloud environment, you share

the foundational infrastructure with other organizations, and you can access your services as

well as deploy and manage your resources through your account. The public cloud yields many

potential advantages for businesses, including the ability to deploy highly scalable, globally

available applications quickly and without costly upfront investments.

Fig 4.3.2: Public Cloud

● Private Cloud :
In a private cloud, infrastructure is provisioned for exclusive use by a single business or

organization. It can be owned, managed and operated by the business, a third-party service

provider, or a combination of the two. It can also be located on the business’s premises or off,

similar to the public cloud. Any application can be run in a private cloud environment,

including websites, big data and machine learning applications, and databases. The private

cloud offers many of the same benefits as the public cloud, such as elastic scalability and cost

savings, but it also guarantees resource availability, total control, privacy, and regulatory

compliance. This makes private clouds highly desirable to organizations with strict compliance

requirements or that demand absolute control over their data location, such as government

agencies or financial institutions.

Fig 4.3.3: Private Cloud

● Hybrid Cloud :
A hybrid cloud is a combination of on-premises, private, and/or public cloud environments

that remain separate yet orchestrated. In a hybrid cloud environment, data and applications can

move between environments, enabling greater flexibility – especially for organizations looking

to extend their existing on-premises footprints with specific use cases ideally suited for the

cloud. As an example, public clouds can be used for high-volume, lower-security needs, such

as web-based applications, while private clouds can be used for more sensitive, business-

critical operations like financial reporting. Often referred to as the best of both worlds, its

adaptability makes it attractive for many enterprises.

Fig 4.3.4: Hybrid Cloud

Elements of Security Operations

5.1 SOC business objectives :

Security operations centers can go by many names, including Cyber Defense Center or Security
Intelligence Center. A security operations center, or SOC, is typically thought of as a physical
room or area in an organization’s office where cybersecurity analysts work to monitor
enterprise systems. Security operations can be defined more broadly as a function that
identifies, investigates, and mitigates threats. If there is a person in an organization responsible
for looking at security logs, that fits the role of security operations. Continuous improvement
is also a key activity of a security operations organization.

Fig 5.1.1: SOC business objectives

● Mission:
Developing, documenting, and socializing the mission statement for your security operations

is one of the most important elements of the organization. It will define to you, and to the

business, the purpose of the SOC. This should include the objectives of the security operations

organization and the goals the organization expects to achieve for the business. Socializing the

mission statement and getting buy-in from executives provides clear expectations and scope of

the security operations team’s responsibilities.

Fig 5.1.2: Mission

● Governance:
Governance measures performance against the defined and socialized mission statement. It

defines the rules and processes put in place to ensure proper operation of the organization. It

can include principles, mandates, standards, enforcement criteria, and SLAs. Additionally, it

defines how the security operations team will be managed and who is responsible for ensuring

the team continually meets the mission of the business. This should include actions performed

to ensure the mission objectives are met.

Fig 5.1.3: Governance

● Planning:
Planning includes details on how the security operations organization will achieve its goals.

Main business drivers must be identified and documented. Other inclusions consist of vision,

strategy, service scope, deliverables, responsibilities, accountability, operational hours,

stakeholders and a statement of success.

Planning ought to include a three-year vision, ensuring the continuation of operations – even

in times of rotating executives that may have execution variances – to provide the expected

value to the business. Planning also ought to incorporate an investment strategy. This not only

includes technology purchases but automation goals and investment in people. It should tightly

align to the business. If there is a large M&A strategy or digital transformation to the cloud, for

example, the investment plan should align to those initiatives.

Fig 5.1.4: Planning

5.2 SOC business management and operations :

Fig 5.2.1: SOC business management and operations

● Case Management :
An SOC’s necessary capability includes a clear protocol for documenting and escalating

incidents. Case management is a collaborative process that involves documenting, monitoring,

tracking and notifying the entire organization of security incidents and their current status. The

minimum set of data points that should be captured in a case should be defined, and the tool

selected for this function should be capable of handling this data. Often, organizations will utilize

multiple tools (ticketing, SOAR, email, etc.) for case management. However, this path is ill-

advised, as it severs data continuity and incident handling efficiency takes a hit.

Fig 5.2.2: Case Management

● Budget :
A financial plan for the costs of running the SOC should begin with an agreement on the

mission of the SOC. Then, the technology, staff, facility, training, and additional needs to

achieve that mission are identified. From there, a budget can be established to meet the

minimum requirements of the team. Often, a SOC budget is set from the top-down or assigned

a percentage of an IT budget. This approach is not business focused and will result in a gap

between capabilities and the expectations of the business.

Once the budget is established, it should be followed by a regular review to identify additional

needs or surplus. The timeline for regular budget requests and approval should be documented

to avoid surprises or a last-minute rush to defend the organization’s needs. Define the process

needed to change the allocated budget, as well as a process for emergency budget relief.

A business-savvy budgeting resource can help the security operations organization navigate

CapEx spending vs. OpEx spending and the expectations of the business. Be aware that

government SOCs have additional considerations around the timing of elections and possible

party-switching, which could result in dramatic budget shifts.

● Metrics :
If analysts spend time gathering metrics that cannot drive change, then this process will prove,

at best, a waste of time. Worse, this method can drive the wrong behavior. Mean Time to

Resolution (MTTR) provides a clear example of this danger. MTTR is a fine metric when used

in an NOC (where uptime is key) but can be detrimental when used in an SOC. Holding

analysts accountable for MTTR will result in rushed and incomplete analyses; analysts will rush

to close incidents rather than do full investigations that can feed learning back into the controls

to prevent future attacks. This will not produce better outcomes or reduced risk for the business.

Another poor metric is counting the number of firewall rules deployed. Organizations can put

in place 10,000 firewall rules, but if the first is inaccurate, then the rest are useless. This is

similar to measuring the number of data feeds into a SIEM. If there are 15 data feeds but only

one use-case, then the data feeds aren’t being properly utilized and are a potentially expensive

waste. Caution should be taken when measuring people's performance. Ranking top performers

by number of incidents handled can have skewed results and may lead to analysts “cherry-

picking” incidents that they know are fast to resolve. Additionally, evaluating individual

performance in this way violates the law in various countries.
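As a concrete illustration of the MTTR caution above, consider computing the metric from case open/close timestamps (the data is hypothetical):

```python
# MTTR computed from (opened, closed) hour pairs. A batch of rushed,
# shallow closures lowers the average, which is exactly the behavior the
# text warns against rewarding in a SOC.
def mttr(cases):
    """Mean time to resolution, in hours."""
    return sum(closed - opened for opened, closed in cases) / len(cases)

thorough = [(0, 6), (0, 8), (0, 7)]           # full investigations
rushed = thorough + [(0, 1), (0, 1), (0, 1)]  # quick, incomplete closes added

print(mttr(thorough))  # 7.0
print(mttr(rushed))    # 4.0 -- "better" MTTR, worse security outcome
```

The rushed queue reports a lower MTTR even though half its cases were never properly investigated.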

● Reporting :
Reporting ought to give an account of what analysts have observed, heard, done, or

investigated. It should quantify activity and demonstrate the value the security operations team

provides to the business or client organizations in the case of an MSSP. Reporting outcomes

will not necessarily drive changes in behavior but can track current activity. Reports are

typically generated daily, weekly and monthly.

Daily reports should include open incidents, with details centered on daily activity. Weekly

reports should identify security trends to initiate threat-hunting activities, which includes the

number of cases opened and closed and conclusions of the tickets (malicious, benign, false

positives). Include such information as how many different security use cases were triggered

and their severity, as well as how they were distributed through the hours of the day.
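The weekly counts described above are straightforward to derive from case records; the field names and sample cases below are invented for illustration:

```python
from collections import Counter

# Illustrative weekly rollup: tally case conclusions for the report
# (malicious, benign, false positive).
cases = [
    {"id": 1, "conclusion": "malicious"},
    {"id": 2, "conclusion": "benign"},
    {"id": 3, "conclusion": "false positive"},
    {"id": 4, "conclusion": "benign"},
]

weekly_summary = Counter(case["conclusion"] for case in cases)
print(weekly_summary["benign"])     # 2
print(weekly_summary["malicious"])  # 1
```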

Monthly reports should focus on the overall effectiveness of the SecOps function. These reports

should cover topics such as how long events are sitting in queue before being triaged, if the

staffing in the SOC is appropriate (do more resources need to be added or reassigned), the

efficacy of rule fires, and if rules that never fire or always fire result in a false-positive.

● Business liaisons:
A growing trend is for security organizations to hire business liaisons. This role ties

in to the different aspects of the business and helps to identify and explain the impact of

security. This includes keeping up to date with new product launches and development

schedules, onboarding new branch offices, and handling mergers and acquisitions where legacy

networks/applications need to be brought into the main security program. This role can also

assume responsibilities for partner, vendor, and team interface management.

● Governance, Risk, and Compliance :
The governance, risk, and compliance (GRC) function is responsible for creating the guidelines

to meet business objectives, manage risk, and meet compliance requirements. Common

compliance standards include PCI-DSS, HIPAA, GDPR, etc. These standards require different

levels of protection/encryption and data storage. Those requirements are typically handled by

other groups; however, the breach disclosure requirements directly involve the security

operations team. The SOC team must interface with the GRC team to define escalation

intervals, contacts, documentation and forensic requirements.

● DevOps :
The DevOps team’s responsibilities include developing, implementing, and maintaining

company-created applications. This role has evolved greatly with the adoption of cloud apps

and agile development, where application upgrades are now rolled out within minutes, rather

than the long cycles where we would see major releases only every six to 12 months. The

DevOps team’s main motivation is to push bug-free features out to users as rapidly as possible.

Some groups work security protocols into their release cycles, but so far most do not.

CONCLUSION
Training from Palo Alto Networks and our Authorized Training Partners delivers the

knowledge and expertise to prepare you to protect our way of life in the digital age. Our trusted

security certifications give you the Palo Alto Networks product portfolio knowledge necessary

to prevent successful cyberattacks and to safely enable applications.

● Digital Learning :
For those of you who want to keep up to date on our technology, a learning library of free

digital learning is available. These on-demand, self-paced digital-learning classes are a helpful

way to reinforce the key information for those who have been to the formal hands-on classes.

They also serve as a useful overview and introduction to working with our technology for those

unable to attend a hands-on, instructor-led class.

Simply register in Beacon and you will be given access to our digital-learning portfolio. These

online classes cover foundational material and contain narrated slides, knowledge checks, and,

where applicable, demos for you to access. New courses are being added often, so check back

to see new curriculum available.

REFERENCE

● Instructor-Led Training :
Looking for a hands-on, instructor-led course in your area?

Palo Alto Networks Authorized Training Partners (ATPs) are located globally and offer a

breadth of solutions from onsite training to public, open-environment classes. About 42

authorized training centers are delivering online courses in 14 languages and at convenient

times for most major markets worldwide. For class schedule, location, and training offerings,

see

https://www.paloaltonetworks.com/services/education/atc-locations

● Learning through the community:
You also can learn from peers and other experts in the field. Check out
our communities site at

https://live.paloaltonetworks.com/

● Discover reference material

● Learn best practices

The end
