Social Engineering
What is social engineering? Understood in the context of information security, it is the act of manipulating people
to gain advantage, often at the expense of those targeted.
All social engineering attacks are designed to benefit the attacker. Most successful social engineering attacks are
detrimental to their victims. Social engineering attacks can result in a loss of confidential data, blackmail or
embezzlement, disruption or damage to a network, the denial of network services, the alignment of the target’s
opinion with the attacker’s as a result of manipulation, or some combination of these outcomes.
There are many different methods, both physical and digital, of social engineering attack. Examples include a bad
actor convincing a target to allow them access to a restricted area, or an email sent to a target convincing them to
click a provided link. In both examples, manipulation is the means used to get the desired action from the targeted
individual. Social engineering works because people are vulnerable to manipulation. How do you protect yourself
from such attacks?
By being educated about and more alert to social engineering techniques, you will be less likely to fall victim to this
type of attack.
The lessons in the Social Engineering module provide the fundamental understanding you need to recognize and
resist these attacks.
You have already seen two examples, but there are many more methods and techniques used by bad actors to
achieve their goals.
Bad actors rely on your complacency to enact their plans, and the more secure you feel, the happier bad actors
are. Feelings of security, however, are deceptive. You may feel most secure around those with whom you work,
and this may cause you to let down your guard. You can be certain that bad actors will take advantage of this.
Fraud and scams are generally short-term transactions intended to swindle people out of their money. In contrast,
influence campaigns can be protracted and sophisticated operations that affect people’s opinions, or even their
morale. Although the purposes of fraud, scams, and influence campaigns can differ from those of phishing,
baiting, or watering hole attacks, they all rely on psychological manipulation to advance the instigator's interests,
often at the expense of the human targets. That is social engineering.
After completing this lesson, you will be able to achieve these objectives:
l Explain social engineering.
l Describe the different methods of social engineering.
What is social engineering? Social engineering is a term that you have been introduced to already but, given its
importance, it is worth revisiting. According to one source, social engineering remains a leading cause of network
breaches. Social engineering refers to a wide range of attacks that leverage human emotions to manipulate a
target. The attack may incite the target to action or inaction. Ultimately, social engineering aims to steer the target
in a direction prescribed by the orchestrator and often to the detriment of the target. Examples of social
engineering goals include disclosing confidential information, transferring money, and influencing a person or
persons to think in a certain way.
Perhaps the most famous social engineering bad actor is Frank Abagnale. His criminal life was depicted in the
movie Catch Me If You Can and in an autobiography by the same name. The book and movie reveal how
Abagnale successfully impersonated a doctor, lawyer, and airplane pilot to gain people’s trust and take advantage
of them.
In 2011, an attacker compromised the network of a well-respected security company by sending phishing emails
to groups of employees, using a method known as spear phishing. The emails had an Excel spreadsheet
attached. The spreadsheet had malicious code embedded in it, which exploited a vulnerability in Adobe Flash to
install a backdoor into the host computer. Unfortunately, at least one employee opened the attachment. It only
takes one.
As you can see from these two examples, there are many methods or attack vectors that a bad actor can use to
execute a social engineering attack. A social engineering bad actor can speak to the human target directly, like
Frank Abagnale, or communicate through email, or through some other method. In this lesson, you will learn the
different methods of a social engineering attack.
You are already familiar with the term phishing, and you likely have experience with phishing emails. Hopefully
you were not a victim of one. The two important characteristics of phishing are that it uses email as the attack
vector and that it targets anyone with an email address. The attack is indiscriminate with respect to who receives
the email. Phishing is simply malicious spam that is sent to as many people as possible in the hope that at least
one recipient will be taken in. However, phishing can also have a categorical meaning: it can refer to all electronic
social engineering attacks, such as spear phishing, whaling, smishing, and vishing.
There are many variants of phishing, some of which could be a part of a sophisticated campaign and may demand
research and reconnaissance on the part of the attacker. Spear phishing and whaling fit into this category. Both
spear phishing and whaling are social engineering attacks that use email to target a specific individual or group
with the intent of stealing confidential information or profiting in some way. Like phishing, spear phishing and
whaling use email as the attack vector, but with specific targets in mind.
What, then, is the difference between spear phishing and whaling? In a spear phishing attack, the bad actor
targets an individual or category of individuals with lower profiles, such as the employees at that security
company. In a whaling attack, the bad actor targets high-ranking individuals within an organization. When creating
a whaling email, the attacker often does research on their target so that they can personalize the email in order to
gain their target’s trust.
The extra work very often pays off because executives and board members can be just as susceptible to an email
attack as those who work for them. One of the most successful whaling attacks, and indeed one of the most
successful social engineering attacks of all time, was conducted against a Belgian bank. The CEO of the bank
didn’t even know he had been compromised until after a routine internal audit disclosed that the attackers
escaped with over 70 million euros. The attackers have never been caught or brought to justice.
Since the introduction of email as a ubiquitous means of communication, other methods have become popular,
such as instant messaging, live chat, and SMS or text messaging. These methods can also act as threat vectors
for a social engineering attack in the same way that email is used. A phishing-type attack that uses these media is
known as smishing. Its name is derived from combining SMS with phishing.
Other social engineering attacks take place over a telephone or mobile device. This type of attack is known as
vishing, a name derived from combining voice and phishing. In March 2019, the CEO of a UK energy
provider received a phone call from someone who sounded exactly like his boss. Fraudsters are known to use AI
voice technology to impersonate other people. The call was so convincing that the CEO ended up transferring
$243,000 to a “Hungarian supplier”, which was a bank account belonging to the bad actor.
These types of attacks are identical to phishing, spear phishing, or whaling, except that the attack vectors are
different. They can also be equally profitable.
In 2020, a group of hackers took control of 130 accounts on a well-known social media platform, including those
belonging to several celebrities. They downloaded users’ data, accessed direct messaging, and made tweets
requesting donations to a Bitcoin wallet. Within minutes, the bad actors had grossed $110,000 through 320
transactions. While the method used to take control of these accounts remains unknown, it is speculated that
employees of the media platform were tricked into revealing account credentials, which allowed access to these
accounts.
Social engineering bad actors use other tactics that are not restricted to any one attack vector. They could use
email, messaging, voice, or even speak to the target face-to-face. One tactic is called quid pro quo, a Latin phrase
that means “one thing for another”. In the context of social engineering, a quid pro quo attack is when a bad actor
offers a service, usually tech support, in exchange for access to information, such as user credentials.
Pretexting is another attack tactic. It involves a situation, or pretext, devised to provoke an emotional response from
the target. Here is a real-life example. The target, a grandfather, receives a telephone call from someone who
claims to be a police officer—the bad actor. The police officer tells the grandfather that his grandson has been
arrested for possession of narcotics. The police officer also tells the grandfather that his grandson resisted arrest
and that, during the ensuing fight, his grandson’s nose was broken. The police officer then puts the grandson—
another bad actor—on the phone, who sobs and pleads with the grandfather to post bail. The grandson’s voice
doesn’t sound quite right to the grandfather, but he reasons that his grandson is sobbing and has a broken nose,
after all. Fortunately, the grandfather decides to hang up and phone the grandson’s parents to verify whether the
story is true.
Another tactic used by bad actors is baiting. Baiting can take a number of forms and is similar to phishing. Baiting
can occur in the form of an email, text, or telephone call claiming that you have won a prize or that you qualify for a
rebate. Baiting relies on positive emotions, such as a reward, to entice you to act. A baiting attack can also be
subtle. For example, a bad actor leaves a USB memory stick in a public place, such as the parking lot, lobby, or
washroom of the targeted organization. An intriguing label, such as “managers’ salaries and compensation,” is
affixed to the drive. The bad actor is relying on you to be overcome with curiosity and to connect the USB drive
to your computer. When you do, the malware on the USB drive installs a backdoor on your computer, and the bad
actor now has a gateway into the network.
You have completed the lesson. You are now able to:
l Explain social engineering.
l Describe the different methods of social engineering.
After completing this lesson, you will be able to achieve these objectives:
l Explain social engineering.
l Describe the different methods of social engineering.
In the Social Engineering Techniques, Part A lesson, bad actors initiated actions against targets. In this lesson,
the social engineering attacks are less intrusive, and some of them could even be described as clandestine. The
social engineering methods described here all depend on you going to the bad actor, or at least this is how it is
made to appear.
You are surfing the internet when a warning appears. The warning states that your computer is infected by
malware and that to clean your computer, you should download the antivirus software using the provided link. If
this has ever happened to you, then you’ve experienced a scareware attack. The antivirus software that is being
offered, either for free or at a cost, is very likely fake or is itself malware. Scareware is also known as rogue
security software.
In a watering hole attack, the attacker might compromise a site that is likely to be visited by a particular target
group. Bad actors are known to exploit social media sites, such as Facebook and LinkedIn, to start and groom
relationships with targets.
This also applies to the physical world, where a bad actor or agent would orchestrate a chance encounter at a
place that the target is known to frequent. If the agent is talented, they can develop this initial contact into a
relationship that they can exploit. In film, a good example of an orchestrated encounter can be seen in the movie
Red Sparrow, where the female agent, played by Jennifer Lawrence, goes to swim at a public pool where she
knows she can meet her target. Her attractiveness ensures that he will approach her, instead of her engaging him,
which might arouse suspicion.
This leads to another type of attack that has its origins, at least in terms of popular culture, in the world of
espionage. This type of attack is known as a honeypot trap. The secret services of some nations recruit beautiful,
educated women and groom them to act as professional temptresses. In the previous example, the agent is
the honeypot, and the public pool is the watering hole.
In cybersecurity, white hat analysts can apply the same concept to strengthen network defenses. A honeypot is a
security mechanism that creates a virtual trap to lure attackers. An intentionally compromised computer system
allows attackers to exploit vulnerabilities. Then, analysts can study the attacker’s tactics and use what they learn
to improve network security. Many network security companies sell honeypot systems. The name of the Fortinet
honeypot product is FortiDeceptor.
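The core honeypot idea can be illustrated with a short sketch. The code below is a minimal, hypothetical example of the concept only, and says nothing about how FortiDeceptor or any commercial product is implemented: it listens on an otherwise unused port and records every connection attempt, on the reasoning that no legitimate traffic should ever arrive there.

```python
import socket
from datetime import datetime, timezone

def run_honeypot(host="127.0.0.1", port=45321, max_events=1):
    """Listen on an otherwise unused port and log each connection attempt.

    Because no legitimate service runs on this port, any connection to it
    is suspicious by definition. The host, port, and event format here are
    illustrative choices, not any product's behavior.
    """
    events = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while len(events) < max_events:
            conn, (src_ip, src_port) = srv.accept()
            with conn:
                # Record who connected and when; a real deception system
                # would also capture what the attacker sends next.
                events.append({
                    "time": datetime.now(timezone.utc).isoformat(),
                    "source": f"{src_ip}:{src_port}",
                })
    return events
```

A real deception platform adds convincing decoy services and content on top of this basic trap, so that attackers linger long enough for analysts to study their tactics.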
The last social engineering attack method covered in this lesson is called tailgating. Tailgating involves a bad
actor following someone with security clearance into a secure building or an access-controlled room. The bad
actor, or tailgater, relies on the courtesy or sympathy of the target to gain access. For example, the target might
hold a door open for someone who is following behind them, not knowing that person has malicious intent. Or
perhaps the bad actor approaches their target pretending to be burdened with parcels and needing help to open
the door. Or perhaps the attacker claims that they forgot their security pass.
While it is professional and correct to be courteous, if you don’t recognize the person or know their security status,
then ensure that you involve reception or security to help the individual. To be clear, if a tailgater slips through the
door without you noticing, then this does not qualify as social engineering. There must be psychological
manipulation between the attacker and the target for an attack to be considered social engineering. An attack can
be physical, such as tailgating; active, such as whaling and pretexting; or subtle, such as baiting.
In real life, bad actors often rely on several methods and attack vectors to achieve their ends. As the attack
campaign progresses, and new targets of opportunity arise, the tactics and methods used by bad actors might
evolve and change. Think of social engineering techniques and various malware types, such as ransomware,
Trojan horse, backdoor, worm, bot, and zero-day attacks, as tools in a toolbox, which an attacker can use as the
situation requires. Just like a carpenter needs more than a hammer to do their job, a cyberattacker requires
different tools and techniques to do theirs.
The following story is a real-life example of a successful social engineering attack carried out by a blue hat
penetration testing team. The penetration team analyst, who reported the results at the RSA Europe security
conference in 2012, did not disclose the name of the organization, but did reveal that the organization specializes
in cybersecurity.
The team began preparing for the attack by building a credible online identity on LinkedIn for an attractive fictitious
woman named Emily Williams. The team also set up information about her on other websites so people would be
able to match the information on her social media profiles with the information obtained through Google searches.
This meant that fabricated profiles set up on Facebook and other social media sites were used to corroborate the
information found on LinkedIn. They even posted on some university forums using her name. According to her
fake social media profiles, she was a 28-year-old MIT graduate with ten years’ experience who claimed to have
just been hired by the targeted organization.
Within the first 15 hours, Emily Williams had 60 Facebook friends and 55 LinkedIn connections with employees
from the targeted organization and its contractors. After 24 hours, she had three job offers from other companies.
Soon, she received LinkedIn endorsements for skills, and men working for the targeted organization offered to
help her get started faster in her alleged new job. The team, acting as Emily, was able to manipulate the men into
providing her with a work laptop and network access while skirting proper channels and normal procedures. In
fact, she was granted more network privileges than she normally would have received as a new hire.
The penetration team decided not to use the laptop and network access, but rather to continue their social
engineering assault on the organization. Around the Christmas holidays, the team created a site with a Christmas
card and posted a link to it on Emily’s social media profiles. People who visited the site were prompted to execute
a signed Java applet that opened a reverse shell back to the attack team, by way of an SSL connection. Once they
had a shell, the team used privilege escalation exploits to gain administrative rights and was able to sniff
passwords, install other applications, and steal documents containing sensitive information.
Even though it was not part of the plan, contractors for the targeted organization were also deceived by the
Christmas card attack, including employees from antivirus companies. At least one of the victims was a developer
with access to source code. It might have been possible to compromise the source code that was being written for
the targeted organization, which would have made detecting an attack on the organization even more difficult.
Through Facebook, the penetration team learned from two employees that the head of information security at their
organization was about to celebrate his birthday. While that individual did not have any social media exposure, the
penetration team sent him an email with a birthday card that appeared to come from one of the two employees
who had been talking about the event on Facebook. After he clicked the link in the malicious birthday card, his
computer became infected with malware, and because of his elevated privileges within the organization, much of
the network and sensitive information became compromised too.
However, the team did not focus solely on high-profile individuals. The Emily Williams attack started by targeting
low-level employees, like sales and accounting staff. By connecting to or befriending these employees first, the
team gained credibility and increased trust, thereby making inroads with the higher-ranking individuals easier. As
the social network around Emily Williams grew, the attack team was able to target more technical people, security
people, and even executives.
In short, the penetration testing team’s success confirmed that even technically sophisticated organizations can
fall victim to social engineering attacks.
You have completed this lesson. You are now able to:
l Explain social engineering.
l Describe the different methods of social engineering.
After completing this lesson, you will be able to achieve these objectives:
l Define insider threat.
l Explain the different types of insiders.
l Describe the different threat vectors and methods used by or against insiders.
l Describe the threat indicators of insider threats and mitigation methods.
When addressing cybersecurity, organizations tend to focus on external threats. However, given that a significant
number of security breaches are due to insiders, cybersecurity teams should address insider threats through
initiatives such as employee awareness training. But what is an insider threat?
An insider threat is an individual or individuals who work for an organization or have authorized access to its
networks or systems and who pose a physical threat or cyberthreat to the organization. The insider is typically a
current employee, but it could also be a former employee, contractor, business partner, board member, or even an
imposter who gains access to sensitive information or network privileges.
There are different ways to categorize insider threats and assign them names, but essentially there are two main
groups: those who unintentionally assist bad actors because of human error, poor judgement, or carelessness
and those on the inside who act maliciously. The first group could be named negligent, and the second group
malicious. These two categories can be further subdivided.
The principal goals of malicious insider threats, sometimes called turncloaks, include espionage, fraud,
intellectual property theft, and sabotage. Here is an example: In 2020, a former executive of company A stole
trade secrets from its self-driving-car division and handed them to his new employer. He was sentenced to 18
months in prison.
Malicious insider threats can be divided into three types: moles, collaborators, and lone wolves. Lone wolves may
seem as harmless as sheep, but they harbor malicious intent. And as the name implies, lone wolves work
independently and without outside influence. Collaborators are authorized users who work with a third party. The
third party may be a competitor, nation-state, organized criminal network, or an individual. A third type of malicious
insider is an outsider who has gained access to the organization’s network. They may gain access to the
organization by posing as a vendor, partner, contractor, or employee. This type of malicious insider is known as a
mole. The Emily Williams attack, discussed in the Social Engineering Techniques, Part B lesson, is an example
where a mole was used. In the attack, Emily Williams gained access to the targeted organization’s network by
posing as an employee.
The more benign, yet equally dangerous, category is the careless insider who inadvertently helps a bad actor.
They fall victim to phishing and other social engineering attacks. The careless insider category can be subdivided
into two groups: pawns and goofs. A pawn is an authorized user who has been manipulated into unintentionally
helping the bad actor, often through social engineering techniques, like tailgating or spear phishing. An example of
a pawn is the head of information security who was duped by a digital birthday card in the Emily Williams attack. A
goof is an insider who deliberately takes potentially harmful actions but harbors no malicious intent. This type of
insider could be described as arrogant, ignorant, or incompetent: someone who refuses to recognize the need to
follow security policies and procedures.
Malicious agents use many methods and attack vectors against the careless insider. The attack vector can be
physical. Methods used against a physical attack vector include tailgating or piggybacking, shoulder surfing,
dumpster diving, and eavesdropping. Tailgating, or piggybacking, is a type of social engineering attack in which an
unauthorized person gains physical access to a restricted area by following someone who is authorized. Shoulder
surfing is using direct observation techniques, such as looking over someone’s shoulder, to get information.
Dumpster diving is looking for information in someone else’s garbage or recycle bin. Dumpster diving can also
occur in a general access area, such as a printing room where confidential information is printed and not
immediately retrieved. Eavesdropping in a physical setting could be listening in on a conversation where
confidential information is discussed. Digitally speaking, eavesdropping refers to network snooping or sniffing,
which occurs when a malicious actor exploits an insecure or vulnerable network to read or steal information as it
travels between two devices. Eavesdropping occurs more commonly in wireless communications than on
Ethernet networks because wireless networks are more accessible.
Within digital attack vectors, a plethora of methods, including spear phishing attacks and whaling attacks, are
used to hoodwink careless insiders. In other attack vectors, such as messaging or telephoning, insiders can be
victims of smishing, vishing, or pretexting. In the social media attack vector, watering hole attacks can be used.
These are terms that you should already be familiar with from previous lessons.
An insider attack can be more difficult to detect than an external attack. In part, this is because an insider has
access to and knowledge about the network that an external attacker likely does not. However, there are
behavioral and digital indicators that can help you to detect a possible insider threat.
If an insider appears to be dissatisfied with the organization, appears to hold a grudge against the organization, or
starts to take on more tasks with excessive enthusiasm, these could be indicators of a potential insider threat.
Context is everything. There are go-getter or type A personalities who aggressively take on new challenges.
However, if an employee’s behavior changes appreciably without some logical explanation, then this may be a
threat indicator. Routine violations, open contempt of organization policies, or attempts to circumvent security are
also possible behavioral indicators of an insider threat.
Anomalous activity at the network level is a digital indicator. Several activities are trackable, such as:
l Activity at unusual times, such as logging into the network at 4 AM, or always working late.
l Volume of traffic, such as transferring unusual amounts of data within the network.
l Type of activity, such as accessing resources that are atypical or not needed for the insider’s job.
You should also be alerted to digital activities, such as:
l Repeated requests for access to systems not relevant to their job function.
l Using unauthorized devices, such as USB drives.
l Network crawling and deliberate searches for sensitive information.
l Emailing sensitive information outside the organization.
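Digital indicators like these lend themselves to simple rule-based checks. The sketch below is a hypothetical illustration, not a real monitoring product: the field names (`login_hour`, `mb_transferred`, `resource`, `role_resources`) and both thresholds are assumptions chosen for the example.

```python
# Illustrative thresholds; a real deployment would tune these per organization.
SUSPICIOUS_HOURS = range(1, 5)     # 01:00-04:59 local time, unusual for this org
MAX_DAILY_TRANSFER_MB = 500        # example data-volume threshold

def flag_insider_indicators(event):
    """Return the names of the insider-threat indicators an activity record trips."""
    flags = []
    if event["login_hour"] in SUSPICIOUS_HOURS:
        flags.append("unusual-hours")          # activity at unusual times
    if event["mb_transferred"] > MAX_DAILY_TRANSFER_MB:
        flags.append("high-volume-transfer")   # unusual amounts of data moved
    if event["resource"] not in event["role_resources"]:
        flags.append("atypical-access")        # resource not needed for the job
    return flags
```

In practice, such checks are one small part of larger monitoring systems that correlate many signals before alerting a security team.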
Your behavior as a person working for an organization can either jeopardize or enhance security. To help make
your organization more secure, follow this list of recommendations:
l Learn your organization’s security policies.
l Do not take shortcuts around security protocols.
l Do not leave login credentials exposed.
l Do not allow tailgating.
l Do not store confidential digital documents unencrypted or leave physical documents unsecured.
l Do not disable endpoint security and controls.
l Do not share proprietary or confidential information with unauthorized individuals.
l Patch your devices as soon as OS and software updates are available.
There are measures that you can take to protect your organization’s assets from internal threats. First, identify
your organization’s critical assets, both logical and physical. These include networks, systems, confidential data,
facilities, and people. Rank and prioritize each asset and identify the current state of each asset’s protection. By
prioritizing the assets, you can focus on securing the most important assets first.
Tools like machine learning (ML) applications can help analyze the data stream and prioritize the most relevant
alerts. You can use digital forensics and analytics tools, such as user and entity behavior analytics (UEBA), to help
detect, analyze, and alert the security team to any potential insider threats. User and device behavior analytics
can establish a baseline for normal data access activity, while database activity monitoring can help identify policy
violations.
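The baseline idea can be made concrete with a rough sketch. This is an assumption about how such a heuristic could work, not how any UEBA product is implemented: it summarizes a user's past daily activity and flags observations that deviate sharply from that user's own norm.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Summarize a user's past daily activity (e.g., MB of data accessed per day)."""
    return {"mean": mean(history), "stdev": stdev(history)}

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations above
    the user's own baseline: a simple z-score heuristic."""
    if baseline["stdev"] == 0:
        # No historical variation at all; anything above the mean stands out.
        return observed > baseline["mean"]
    z = (observed - baseline["mean"]) / baseline["stdev"]
    return z > threshold
```

Real UEBA systems model many behaviors at once, with machine learning rather than a single statistic, but the principle of comparing current activity against an established norm is the same.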
Deploy tools that monitor user activity as well as aggregate and correlate activity information from multiple
sources. Deception software, such as FortiDeceptor by Fortinet, establishes traps to attract malicious insiders and
tracks their actions to better understand their intentions. The information gathered by a honeypot solution can be
shared with other intelligence to improve detection and to mitigate attacks and breaches.
Define, document, and disseminate the organization’s security policies. Then, provide training to those who work
for your organization, and follow up with testing to ensure comprehension. This prevents ambiguity and
establishes a foundation for enforcement. They should recognize their responsibility to comply and respect the
organization’s security policies.
This is a good segue to the final recommendation: Promote a culture of security awareness. Promoting a security-
aware culture is key to mitigating the insider threat. Instilling the right beliefs and attitudes combats negligence
and reduces the opportunity for malicious behavior.
You have completed the lesson. You are now able to:
l Define insider threat.
l Explain the different types of insiders.
l Describe the different threat vectors and methods used by or against insiders.
l Describe the threat indicators of insider threats and mitigation methods.
After completing this lesson, you will be able to achieve these objectives:
l Describe fraud, scams, and influence campaigns.
l List examples of cyber fraud and cyber scams.
l Describe how a typical online influence campaign unfolds.
Increasingly, the internet has become a platform for bad actors to stage large-scale fraud, scams, or malevolent
influence to sway people to a particular point of view. The methods used to achieve the sordid goals of fraud and
scams often involve social engineering techniques, such as phishing, coupled with malware. In this lesson, you
will also learn about influence campaigns, which use social media to spread ideas and manipulate opinion.
What is cyber fraud? Cyber fraud is a social engineering technique, malware, or other type of deception that is
used to defraud or take advantage of a person or organization for financial or personal gain.
What are cyber scams? Cyber scams are a type of fraud, but they are generally classified as petty or not as
serious as cyber fraud. This is not to suggest that cyber scams are trivial, however. According to the FBI, elderly
Americans lose more than three billion dollars annually to various types of scams. Senior citizens are often
targeted because they tend to be more trusting than younger adults and because they have a lifetime of savings
for bad actors to prey upon.
Fraud and scams are criminal activities that use the same threat vectors and methods you have seen in previous
lessons.
This is how an influence campaign might unfold:
1. The bad actor creates fake user accounts on social media platforms.
2. The bad actor creates content to promote a given narrative.
3. They post this content as fake users on numerous social media sites.
4. Real people see the content and begin to share it.
5. After reaching a certain threshold, mass media picks up the story, further amplifying the narrative.
This is the strength of influence campaigns. With little cost and effort, the bad actor can manipulate the opinions of
hundreds of thousands of people. The nature of social media allows the bad actor to operate secretly and without
fear of being identified as the source of the attack.
Publicly attacking an adversary is likely to result in undesirable consequences, but secretly turning public opinion
against them is harder to prove and harder to retaliate against. Consider this scenario. Two restaurant owners are
bitter rivals. Restaurant owner A uses anonymous social media accounts to spread disinformation about
restaurant owner B. Restaurant owner A claims that restaurant owner B refused to hire an individual based on that
individual’s race. Others jump onto social media, demand retribution, and boycott the restaurant. If restaurant
owner B accuses their rival of circulating lies, this could easily backfire and further provoke the virtue-signaling
mob. Regardless of what restaurant owner B does—remains silent, denies the accusation, or accuses their rival of
foul play—it’s a losing proposition. On the other hand, if someone openly makes a false claim or accusation, then
the victim can take legal recourse, putting the reputation of the accuser at stake.
Influence campaigns can also be part of hybrid warfare, as conducted by military psychological operations (psyops) units. In
this scenario, traditional warfare tactics are combined with political strategy and cyber warfare, which can include
hacking, social engineering, influence campaigns, and promoting fake news. In hybrid warfare, the objective of
influence campaigns is to weaken the enemy’s resolve by sowing confusion and division.
You have completed the lesson. You are now able to:
l Describe fraud, scams, and influence campaigns.
l List examples of cyber fraud and cyber scams.
l Describe how a typical online influence campaign unfolds.