Cyberhate
Thorsten Quandt & Ruth Festl
Department of Communication, University of Münster
[email protected]

Abstract
Cyberhate describes various forms of online communication by hate groups with the purpose of attracting new members, building and strengthening group identity, coordinating group action, distributing propagandistic messages, indoctrinating followers, provoking counter-reactions as part of propagandistic campaigns, and attacking societal groups and individuals with hateful messages. A large segment of research under the label “cyberhate” has focused on racist and
xenophobic groups (especially white supremacist groups in the United States) and their use of e-mail, websites, blogs and online social networks. However, the general principles and
functions of cyberhate can be identified in the online communication of other extremist groups
as well, such as religious extremists and terrorist groups with comparable supremacist, separatist
or extermination ideologies. Typically, these groups portray themselves as being oppressed or
endangered by a far more powerful, misguided or despicable enemy, therefore justifying
unrestrained hate and extreme actions.
The Internet has profoundly changed societal communication processes by offering new tools
and channels for existing communicative needs, but also by opening up completely new
possibilities of communication, for better or for worse. Some extremist – mostly supremacist,
nationalist or religious – groups in society interpreted the term ‘online revolution’ in a very literal sense, appropriating online technologies for their own interests,
spreading hateful comments and (mis)information with proclaimed ultimate goals of
separatism, annihilation of other societal groups, or overthrowing the existing societal order
and replacing it with a new one. While there is some overlap in terms of tools and messages
with other forms of aggressive, hateful, deviant and problematic internet use – like
cyberbullying or cyber harassment – cyberhate is different, as it is typically planned and/or
carried out by a community or group of extremists, often with explicit political, racist or
religious goals. It follows a long-term strategy, and it usually targets one or multiple opposing
groups (and not just a single individual). While there are unifying characteristics and
functions, there is still a lot of internal variation between different cases of cyberhate, and the
respective research is somewhat fragmented in terms of groups under analysis, goals of
research, methodology and involved disciplines.
Historically, cyberhate predates the introduction of the World Wide Web: Extremist groups
were already using computers and computer networks in the early 1980s (Marks, 1996). Bulletin boards with hateful content appeared in the mid-1980s (Schafer, 2002). The first hate website credited
in the literature is ‘Stormfront’ with roots in the mid-1990s (Meddaugh & Kay, 2009; Perry &
Olsson, 2009). This site supported a white supremacist ideology, as did numerous other
hate sites that followed this early example. Indeed, a large part of the literature on cyberhate
focuses on nationalist, racist cyberhate, including the Ku Klux Klan and anti-Black hate
groups, and other right-wing extremists and xenophobic communities (Adams & Roscigno,
2005; Bostdorff, 2004; Pollock, 2009). However, while early work in the field seemed to be
preoccupied with these groups, and often implicitly equated cyberhate with the online
version of the American white supremacist hate movement, other authors pointed out parallels
to other forms of extremism (for example religious terrorism) and expanded the concept to a
more general understanding of cyberhate (Post, McGinnis & Moody, 2014). Indeed, while
there are notable differences between the various involved groups in terms of ideology and
background, there seem to be striking parallels in terms of cyberhate strategies and functions.
Definitions: What is cyberhate?
There is no standard definition of what constitutes cyberhate or a hate group website. Early
public attention to the topic emerged on the Internet itself, with the “Guide to Hate Groups on
the Internet” set up by the Harvard Law School librarian David Goldman in 1995 (Southern
Poverty Law Center, 2001). This ‘watchdog’ website was later renamed “HateWatch”, and
it defined the originators of hate sites as “an organization or individual that advocates
violence against or unreasonable hostility toward those persons or organizations identified by
their race, religion, national origin, sexual orientation, gender or disability. Also including
organizations or individuals that disseminate historically inaccurate information with regards
to these persons or organizations for the purpose of vilification” (cited in Schafer, 2002, p.
73). Even this early working definition did not necessarily restrict cyberhate to the white supremacist movement in the United States, despite numerous studies focusing on its
online activities. Notably, it has been discussed that the Internet fosters a globalization of
cyberhate, with racist groups working together and building virtual communities across
national borders, not confining themselves to concepts of nation, but making racist ideology their unifying principle (Perry & Olsson, 2009). Similarities in ideological features,
although rooted in religion, and a comparable move towards globalization of online activities
can also be found in the Jihadist movement, although the online activities of these religious
extremists are often discussed in relation to (religious) terror (Post, McGinnis & Moody,
2014). Furthermore, cyberhate is not restricted to the portrayal of the underlying ideology via
websites only. Indeed, there is a wide range of online activities, and some of them are also
directed at specific groups or individuals: “Vilified people and groups are targeted directly
through text messages, emails, blogs etc., often containing malicious threats, or indirectly in
forums, virtual communities or chat groups” (Perry & Olsson, 2009, pp. 187-8). In contrast to
other hateful forms of online communication such as cyberbullying, cyberhate is typically
embedded in the actions of larger, enduring hate movements or hate campaigns directed
against target groups defined by race, ethnicity, gender, religion etc. (and mostly not limited
to single victims or based on singular events). Therefore, cyberhate communication usually
has specific characteristics and defined functions, and is strategically planned.
Typical characteristics of cyberhate groups
Despite notable differences in proclaimed ideological background, cyberhate groups portray
and position themselves in similar ways. As they primarily follow the logic of a social or religious movement, their working principles are comparable. Cyberhate groups often portray themselves as
enlightened communities that have discovered societal imbalances or dangers to specific parts
of society. They claim to offer access to a hidden truth and information not accessible in
public, as these are suppressed by a misguided societal mainstream or by evil and powerful
oppositional forces. Divergent information is denounced as lies spread by the enemy or
oppressor, which is also used as an explanation for obvious discrepancies between the
selected (mis)information spread by the hate group and publicly available sources (Adams &
Roscigno, 2005; Bostdorff, 2004; McNamee, Peterson & Peña, 2010).
By putting themselves in an oppressed underdog role, they can excuse radical actions as necessary for the survival of those they claim to protect, and as an act of pre-emptive self-defence. The binary ‘us vs. them’ logic, with a supposedly immoral, inhumane, evil, or depraved yet powerful enemy, not only provides a reason for hateful and aggressive actions; such actions also seem to become a moral or religious duty for the respective in-group. It has
been argued that cyberhate groups systematically use interpretational framing to establish a specific worldview in order to form identity and advocate extreme actions (Adams & Roscigno,
2005).
However, the underlying reasoning and message structure may vary according to the specifics
of the intended segment of the in-group. While the older adolescent and young adult male
audience may be targeted by aggressive messages offering them reassurance and also an
outlet for their anger, younger groups are lured into a seemingly ‘cool’ group that
accommodates their need for identity building and may be compatible with aspects of youth
culture (Schafer, 2002). Previous research has also shown gendered differences, with some messages and sites aimed at a female audience and women often “serving secondary (support) function” in the underlying ideologies (Schafer, 2002, p. 77).
The differentiation of messages according to intended ‘in-group’ audience(s) shows that
cyberhate communication serves multiple purposes that go beyond simply conveying a hateful
message in order to denounce a specific group in society. It can be argued that there are
always several audiences, not only the ‘in-group’ – these may include the victimized target
group of the hate communication, potential other (political, societal) opponents, the
oppressing majority, ideological friends and competitors, potentially interested groups of
persons that may be recruited, as well as the general audience. Accordingly, cyberhate
communication serves several functions.
Functions of cyberhate
Cyberhate often serves several functions at once, because it addresses multiple audiences with
differing interests. Most obviously, cyberhate websites and other tools of online hate are used
to spread (mis)information, extremist viewpoints and ideological messages (Bostdorff, 2004;
Chau & Xu, 2007; McNamee, Peterson & Peña, 2010). For many of the supporting groups, it
is an inexpensive and direct information channel that cannot be easily controlled and
regulated (Adams & Roscigno, 2005; Levin, 2002). As the communication serves an
ideological, political or religious goal, it typically works on several levels and is not meant to
be neutral. Often, it has a subtext only accessible by understanding the groups’ codes, or
viewpoints and messages are deliberately disguised (Chris Hale, 2012; Daniels, 2009). The
respective messages are used to educate the supporters, foster participation, invoke a specific
understanding of group identity and indict potential opponents as enemies, making them
victims of hate speech (Adams & Roscigno, 2005; Bostdorff, 2004; McNamee, Peterson &
Peña, 2010).
Furthermore, the public information can be used to reach groups of yet uninitiated, but potentially interested persons. Many groups try to inflame potential supporters among fragile, disoriented and insecure persons who can be easily influenced, especially
children and teenagers or young men (Chris Hale, 2012; Schafer, 2002). The persuasiveness
of the messages is increased by using targeted strategies, for example colourful and
entertaining websites including music downloads for young persons, or a seemingly ‘innocent’, neutral tone for reaching out to the mainstream of a population (Meddaugh &
Kay, 2009; Schafer, 2002). A subtle attitude-influencing strategy might be used to convert and recruit persons who would otherwise not turn to such hate groups. In many cases, the in-group
is portrayed as innocent, endangered, and superior, while others are described as unjust,
inferior, evil, and as directly or indirectly threatening the in-group or other innocent and endangered groups (Adams & Roscigno, 2005; Douglas et al., 2005; Pollock, 2009). From this, urgency and necessity are inferred, and any potential actions are excused as moral and
just (Adams & Roscigno, 2005; Bostdorff, 2004). Proposed actions may include extremism
and terror, although many cyberhate groups just ‘imply’ actions for legal reasons or do not
directly propose any action at all (Douglas et al., 2005; Gerstenfeld, Grant & Chiang, 2003; Leets, 2001). Such groups may instead count on self-radicalization and self-organization.
The recruitment function has been identified as one of the core features of cyberhate
communication (Adams & Roscigno, 2005; Chau & Xu, 2007; Chris Hale, 2012). It often
starts with information, which is followed by the influencing, changing and radicalizing of attitudes, and it culminates in conversion to the group (sometimes followed by further
radicalization resulting in extreme action). Recruitment and subsequent radicalization go hand
in hand with identity building (Adams & Roscigno, 2005; McNamee, Peterson & Peña, 2010;
Post, McGinnis & Moody, 2014). Cyberhate communication, while addressing opposing
groups and enemies as potential targets for hate attacks, is also and primarily meant to focus
on the in-group and to help build a collective identity.
Group identity processes are supported by self-aggrandizement of the group and its members
(argued on the basis of divine choice, superior genetic or human qualities, and similar
‘outstanding’ qualities of group members; see Douglas et al., 2005; McNamee, Peterson &
Peña, 2010). This often stands in stark contrast to the actual life situation and self-perception
of the intended audience, which is frequently described in the literature as insecure, fragile,
troubled and angry, usually consisting of young men, sometimes adolescents and even
children (McNamee, Peterson & Peña, 2010). Diverging and oppositional opinions are either
not mentioned or denounced as false and misleading, produced solely to
weaken the group’s cohesion (Adams & Roscigno, 2005; Bostdorff, 2004; McNamee,
Peterson & Peña, 2010). Group identity is not only (re)produced by image-building, image-
control, information and education, but also by offering (or implying) options for participation
(Adams & Roscigno, 2005; Bostdorff, 2004). In that respect, cyberhate communication can
have a very practical function in coordinating group action and even organizing the financing of
the group or specific sub-groups. Coordination can also take place on an international level by
bringing together similar groups with comparable backgrounds or aims (which even happens
for nationalist-racist groups, see Gerstenfeld, Grant & Chiang, 2003; Perry & Olsson, 2009).
It has been noted in the literature, though, that there are forms of extremist and even terrorist
cyberhate that do not aim at radicalization within organizational structures, but count
on independent actions of supporters not directly linked to the communicators. Such forms of
cyberhate communication may be informed by concepts like leaderless resistance and lone
wolf terrorism (Adams & Roscigno, 2005; Levin, 2002; Post, McGinnis & Moody, 2014). As
lone wolf action is not centrally coordinated, it is not actively planned by the communicators
and cannot be directly traced back to them (which may be legally relevant) – but this does not
mean that it is not strategically intended. Indeed, there are several works that regard this as a
new form of extremism and terrorism that is enabled – or at least simplified – by online
communication (Adams & Roscigno, 2005; Levin, 2002; Post, McGinnis & Moody, 2014).
Last but not least, cyberhate communication may contribute to an erosion of the public
discourse (Citron & Norton, 2011) – not only directly by adding destructive communication
and hate attacks, but also by setting negative reference standards for communication that may
be copied by other persons in their own online communication. Such erosion may be a slow,
but sweeping process aimed at the very foundations of a given society.
Research on cyberhate
Research on cyberhate is multifaceted and also somewhat fragmented, as several disciplines
have covered differing aspects of the phenomenon – ranging from legal studies and criminology through political science, media sociology and communication studies to psychology. Naturally, the
interest and findings vary according to the respective disciplinary focus. The approaches and
methods range from case studies, content analyses, network analyses, observations, interviews
and experiments to numerous historical studies and analytic works.
Case studies have described some significant forms of cyberhate communication, especially
hate websites. For example, ‘Stormfront’, reportedly the first cyberhate site, has been analysed
several times (e.g., Levin, 2002; Meddaugh & Kay, 2009). Other analyses discussed certain
variants of cyberhate, like cloaked websites or websites of the Ku Klux Klan (Daniels, 2009;
Bostdorff, 2004). Content analyses of cyberhate sites revealed general functions of this form
of communication, as outlined above, and specific differences between the various groups and
their online strategies, as visible in their communicative output (Chau & Xu, 2007;
Gerstenfeld, Grant & Chiang, 2003). Furthermore, the argumentation and persuasion strategies can
be uncovered using textual analysis (Adams & Roscigno, 2005). Such forms of content-oriented analysis typically approach the phenomenon via its visible – and therefore readily analysable – output, which can be explained by the relatively easy access to the material (at least if it is not protected or hidden from the public).
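To make the notion of content-oriented analysis more concrete, the following minimal Python sketch illustrates one basic building block of such studies: an automated frequency count of coded terms across a locally archived corpus of web pages. It is purely illustrative and not taken from any of the studies cited above; the corpus directory, the file layout and the two-category code book are hypothetical stand-ins, and a real content analysis would rest on a validated coding scheme and human coders.

# Minimal illustrative sketch of one content-analysis step: counting
# occurrences of coded terms in locally archived web pages. Directory
# name, file layout and the code book are hypothetical examples.
import re
from collections import Counter
from html.parser import HTMLParser
from pathlib import Path

class TextExtractor(HTMLParser):
    """Collects the visible text of an HTML document, ignoring markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def code_page(path, code_book):
    """Returns per-category counts of code-book terms found on one page."""
    parser = TextExtractor()
    parser.feed(path.read_text(encoding="utf-8", errors="ignore"))
    text = " ".join(parser.chunks).lower()
    counts = Counter()
    for category, terms in code_book.items():
        for term in terms:
            # Whole-word matches only, so that e.g. 'us' does not match 'justice'.
            counts[category] += len(re.findall(rf"\b{re.escape(term)}\b", text))
    return counts

if __name__ == "__main__":
    # Hypothetical two-category code book, loosely operationalizing the
    # in-group aggrandizement and out-group vilification themes discussed
    # above; a real study would validate such categories.
    code_book = {
        "in-group": ["we", "us", "our people", "heritage"],
        "out-group": ["them", "enemy", "invaders", "traitors"],
    }
    totals = Counter()
    for page in Path("corpus").glob("*.html"):  # locally archived pages
        totals.update(code_page(page, code_book))
    print(totals)

In practice, such automated counts would only be a first step; the studies cited above combine quantitative output measures with qualitative readings of rhetoric, framing and design.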
In contrast, direct access to producers and users, for example via interview studies or
observations, is problematic for many reasons, and therefore rare. Indeed, methodological
difficulties in relation to studying online hate speech have been discussed, and options – for
example virtual ethnographies – have been proposed (Pollock, 2009). Other researchers
focused on (potential) audiences and the effects of cyberhate communication on them (Leets,
2001), uncovering how hate messages are perceived and processed, allowing for a better
understanding of the underlying psychological mechanics.
Overall, the research on cyberhate allows for a better understanding of the phenomenon and
some of its mechanics. However, the analyses typically focus on what is openly accessible.
The producers, their aims, motivations and strategies are rarely objects of empirical research,
and hidden layers of communication (happening in protected, hidden, semi-private or private
contexts) are less frequently researched, for obvious reasons.
Open questions and future directions of research
Cyberhate describes various forms of online communication by hate groups with varying aims
and strategies. While general characteristics and functions have been analysed by previous
studies, and also some idiosyncrasies of specific groups have been described in much detail,
research has yet to cover all potential and relevant aspects of the phenomenon.
So far, many studies under the label ‘cyberhate’ have focused on the activities of white supremacists and similar nationalist and racist groups, especially in the US. There have been
recent works that directly or indirectly linked the concept to other extremist groups and also
religious terrorism (Post, McGinnis & Moody, 2014). Indeed, the principles seem to be
comparable, and it may be worthwhile to expand the scope of groups under analysis to
understand the more basic logics of hate, conspiracy and extremist groups online. What looks
like idiosyncrasies of some nationalist/racist groups at first, may be indeed a feature of all
kinds of groups that try to radically delimit themselves from the mainstream, deliberately and
ideologically close their way of thinking, depict themselves as oppressed, and identify others
in society as the reason for their perceived suffering, unjust treatment and societal positioning.
Furthermore, research needs to go beyond a description and analysis of what is easily and
openly accessible. Inevitably, studies focusing on what can be reached directly via search
engines only scratch the surface of cyberhate communication. Protected or hidden
communication may reveal how the radicalization of the already initiated works – something that is hinted at numerous times in the literature, but typically only backed up by anecdotal
evidence. Furthermore, both the producers and the (potential) audiences of cyberhate
communication still remain somewhat under-researched. However, interview studies/surveys
and observations are very difficult to implement for both practical and ethical reasons. Still,
that type of research would be crucial for an understanding of the phenomenon, and for
developing useful prevention strategies and counter-measures against cyberhate propaganda
addressing fragile and vulnerable groups in society, such as children and adolescents
(Bostdorff, 2004; Chris Hale, 2012; Schafer, 2002).
On a meta-level, one might ask why cyberhate communication has apparently grown and branched out, i.e., what motivates people to turn to the respective messages or even to start
cyberhate groups. Such research may need to address questions of orientation and
communication in an increasingly complex world, and unite previously separate discussion
threads and disciplinary perspectives in a more holistic approach. The benefits would be a
more complete understanding of the phenomenon, its reasons, mechanics and effects – and
potential solutions to grave problems and societal risks connected to cyberhate.
SEE ALSO: Cyberbullying; Social Networking; Online Communication; Audience Effects;
Children/Adolescents; Intergroup Communication: Media Influence on; Parental Mediation of
Media; Social Context of Media Use
References
Adams, J., & Roscigno, V. J. (2005). White supremacists, oppositional culture and the World Wide
Web. Social Forces, 84(2), 759–778.
Bostdorff, D. M. (2004). The Internet rhetoric of the Ku Klux Klan: A case study in web site
community building run amok. Communication Studies, 55(2), 340–361.
Chau, M., & Xu, J. (2007). Mining communities and their relationships in blogs: A study of online
hate groups. International Journal of Human-Computer Studies, 65(1), 57–70.
Chris Hale, W. (2012). Extremism on the World Wide Web: A research review. Criminal Justice
Studies, 25(4), 343–356.
Citron, D. K., & Norton, H. (2011). Intermediaries and hate speech: Fostering digital citizenship
for our information age. Boston University Law Review, 91(4), 1435–1484.
Daniels, J. (2009). Cloaked websites: Propaganda, cyber-racism and epistemology in the digital
era. New Media & Society, 11(5), 659–683.
Douglas, K. M., McGarty, C., Bliuc, A.-M., & Lala, G. (2005). Understanding cyberhate: Social
competition and social creativity in online white supremacist groups. Social Science Computer
Review, 23(1), 68–76.
Gerstenfeld, P. B., Grant, D. R., & Chiang, C.-P. (2003). Hate online: A content analysis of extremist
Internet sites. Analyses of Social Issues and Public Policy, 3(1), 29–44.
Leets, L. (2001). Responses to Internet hate sites: Is speech too free in cyberspace?
Communication Law and Policy, 6(2), 287–317.
Levin, B. (2002). Cyberhate: A legal and historical analysis of extremists’ use of computer
networks in America. American Behavioral Scientist, 45(6), 958–988.
Marks, K. (1996). Faces of right wing extremism. Boston, MA: Braden Publishing.
McNamee, L. G., Peterson, B. L., & Peña, J. (2010). A call to educate, participate, invoke and indict:
Understanding the communication of online hate groups. Communication Monographs, 77(2),
257–280.
Meddaugh, P. M., & Kay, J. (2009). Hate speech or “reasonable racism?” The other in Stormfront.
Journal of Mass Media Ethics, 24(4), 251–268.
Perry, B., & Olsson, P. (2009). Cyberhate: The globalization of hate. Information &
Communications Technology Law, 18(2), 185–199.
Pollock, E. (2009). Researching white supremacists online: Methodological concerns of
researching ‘hate speech’. Internet Journal of Criminology, 1–19.
http://www.internetjournalofcriminology.com/Pollock_Researching_White_Supremacists_Onlin
e.pdf
Post, J. M., McGinnis, C., & Moody, K. (2014). The changing face of terrorism in the 21st century:
The communications revolution and the virtual community of hatred. Behavioral Sciences & the
Law, 32(3), 306–334.
Schafer, J. A. (2002). Spinning the web of hate: Web-based hate propagation by extremist organisations. Journal of Criminal Justice and Popular Culture, 9, 69–88.
Southern Poverty Law Center (2001). Harvard Law School librarian discusses cyberhate.
Intelligence Report, 101, n. p. http://www.splcenter.org/get-informed/intelligence-report/browse-all-issues/2001/spring/the-year-in-hate/cyberhate-revisited
Further Readings
Daniels, J. (2013). Race and racism in Internet Studies: A review and critique. New Media &
Society, 15(5), 695–719.
Lee, E., & Leets, L. (2002). Persuasive storytelling by hate groups online: Examining its effects on
adolescents. American Behavioral Scientist, 45(6), 927–957.
Lennings, C. J., Amon, K. L., Brummert, H., & Lennings, N. J. (2010). Grooming for terror: The
internet and young people. Psychiatry, Psychology and Law, 17(3), 424–437.