
Received: 30 October 2021 | Accepted: 9 February 2022

DOI: 10.1002/poi3.287

RESEARCH ARTICLE

From content moderation to visibility moderation: A case study of platform governance on TikTok

Jing Zeng1 | D. Bondy Valdovinos Kaye2

1 University of Zurich, Zurich, Switzerland
2 Queensland University of Technology, Brisbane, Australia

Correspondence
Jing Zeng, University of Zurich, IKMZ, Andreasstrasse 15, Zurich 8050, Switzerland.
Email: [email protected]

Abstract
TikTok, a short‐video app featuring video content typically between 15 and 60 s long, has become immensely popular around the world in the last few years. However, the worldwide popularity of TikTok requires the platform to constantly negotiate with the rules, norms and regulatory frameworks of the regions where it operates. Failure to do so has had significant consequences. For example, for content‐related reasons, the platform has been (temporarily and permanently) banned in several countries, including India, Indonesia and Pakistan. Moreover, its Chinese ownership and popularity among underage users have made the platform subject to heightened scrutiny and criticism. In this paper, we introduce the notion of visibility moderation, defined as the process through which digital platforms manipulate the reach of user‐generated content through algorithmic or regulatory means. We discuss particular measures TikTok implements to shape visibility and issues arising from it. This paper presents findings from interviews with content creators, taking a user‐centric approach to understand their sense‐making of and negotiation with TikTok's visibility moderation. Findings from this study also highlight concerns that leave these stakeholders feeling confused, frustrated or powerless, which offer important directions for further research.

KEYWORDS
platform governance, platform studies, qualitative, TikTok,
visibility moderation

This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial License, which permits use,
distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.
© 2022 The Authors. Policy & Internet published by Wiley Periodicals LLC on behalf of Policy Studies Organization.


INTRODUCTION AND BACKGROUND


As societies around the world grow ever more digitalised, our political lives and cultural
activities have been increasingly configured into a platformed global ecosystem (van Dijck
et al., 2018). Against this background, how platforms govern and how they are governed
have vast political and societal impacts. Alongside established dominant platforms such as
Facebook, Twitter and Google, one emerging platform whose platform governance has
been under close scrutiny and criticism is the short video platform TikTok. In September
2021, TikTok reached one billion monthly active users globally (TikTok, 2021b), but the
trajectory of TikTok's rise has been tortuous. The platform has been sued (Ridley, 2021),
banned (Ellis‐Petersen, 2020) and forced to sell (Allyn, 2020) in some of its most important
international markets. Although geopolitics, especially the platform's Chinese roots, plays an
important role in causing some of these controversies (Gray, 2021), TikTok's failure in
governing content shared on its platform is also a crucial factor.
While content regulation is an important consideration on many digital platforms, certain
platform specificities of TikTok have posed unique governance challenges. First is TikTok's
young user demographic. Compared to other mainstream social media platforms, the user
demographic of TikTok is quite young. It is estimated that over 60% of TikTok users are
members of the Generation Z cohort (born between 1997 and 2012) (Muliadi, 2020). Ac-
cording to a 2021 usage report, 60% of US young adults (18–24) use the platform daily
(TikTok, 2021a). In the United Kingdom, a survey conducted by the media regulator Ofcom
(2021) reveals that almost half of children aged between 5 and 15 used TikTok to watch
short video content in 2020. Because of its popularity with underaged users, TikTok has
been under tight scrutiny from regulators from around the world. Since its launch, the
platform has faced multiple high‐profile lawsuits in the United States and Europe, the ma-
jority of which concern minor safety (Allyn, 2021; Ridley, 2021).
Second is the visual centrality of TikTok. As a platform hosting vast quantities of video
content, TikTok is an obvious arena for propagating graphic, violent and pornographic
content. In India, Indonesia and Pakistan, TikTok has been temporarily and permanently
banned for hosting these types of content (Kaye et al., 2022). In comparison to text‐rich or
static image‐centric platforms, such as Twitter and Instagram, moderating video content is
more technologically demanding and time‐consuming. The technological threshold to au-
tomate the tasks of detecting and removing videos is much higher than for text or images (Gray
& Suzor, 2020). As a result, human moderators play a crucial role in evaluating the ‘ap-
propriateness’ of videos on digital platforms, including TikTok (Shead, 2020).
Another factor that contributes to the complexity of governance on TikTok is its virality‐
centric platform logic. Social media's platform logic can be defined as ‘the strategies, me-
chanisms, and economies underpinning their dynamics’ (van Dijck & Poell, 2013, p. 3).
Content discovery on TikTok centres on its algorithm recommender system that pushes
trending videos, which, in turn, shapes users’ practices surrounding visibility. As Abidin
(2021a) points out, on other mainstream social media, such as Instagram and YouTube,
influence is ‘persona‐based or profile‐anchored’ (p. 79). On TikTok, however, a large fol-
lower base does not guarantee visibility. Creators’ influence is based on the performance of
individual posts. Therefore, creating viral content is the primary means for visibility and
sociality (Zulli & Zulli, 2020). In attempts to achieve intentional or ‘accidental virality’ (Kaye,
2020), TikTok creators may seek to push boundaries, producing erratic or freakish content
as well as harmful viral trends. The unpredictability and uncertainty of virality make it difficult
for human content moderators to keep trends, particularly harmful or dangerous trends, in
check. For example, in 2020 a video depicting an individual taking their own life spread
quickly on TikTok and proved challenging to scrub from the platform as copies of the video
were continually reuploaded or inserted into other videos using TikTok's internal video
creation features (Matamoros‐Fernández & Kaye, 2020).
Considering the specificities and complexities of platform governance on TikTok, this
study addresses issues concerning TikTok's governing strategies by focusing on its ma-
nipulation of content visibility. In the sections below, we first review key literature on content
moderation. We then introduce the concept of visibility moderation. We define visibility
moderation as the process through which digital platforms manipulate (i.e., amplify or
suppress) the reach of user‐generated content through algorithmic or regulatory means. We
use this notion to conceptualise and underscore the measures TikTok implements to shape
user activities and the connected issues arising from it. We then follow with an empirical
discussion informed by 14 qualitative interviews with TikTok creators. We take a user‐centric
approach to understand TikTok content creators’ experience and perception of the visibility
moderation on the platform. Through insights from individual creators, we reveal how Tik-
Tok's practices and strategies to manipulate visibility are understood and negotiated by
creators. We highlight concerns that cause these stakeholders to feel confused, frustrated or
powerless. Visibility moderation encompasses the desire to be seen, the threat of invisibility,
and logics of manipulation, suppression and amplification on digital platforms. This study
offers directions for future work on content moderation and platform governance that fore-
ground visibility.

LITERATURE REVIEW
Content moderation

Content moderation is an expansive and complex sociotechnical phenomenon (Gillespie
et al., 2020; Gorwa, 2019) defined as the process in which platforms shape information
exchange and user activity through deciding and filtering what is appropriate according to
policies, legal requirements and cultural norms (Flew et al., 2019; Gillespie, 2018; Witt et al.,
2019). It entails cooperation between human and nonhuman actors (Roberts, 2016), ne-
gotiation between platforms’ epistemic authority and users’ agency to resist (Cotter, 2019,
2021) and contestation between corporate interests and regulatory obligations
(Suzor, 2019).
Throughout the 2010s, against a background of increasingly visible impacts of mis-
information, hate crimes, online bullying and other toxic communication (Citron, 2014;
Matamoros‐Fernández, 2017; Noble, 2018; Saurwein & Spencer‐Smith, 2020), how digital
platforms moderate user‐generated content has grown more relevant in public debates. In
academic research, scholars from media and communication studies (Bucher, 2012;
Gillespie et al., 2020), informatics (e.g., Binns et al., 2017; Pavlopoulos et al., 2017), and
legal and policy studies (e.g., Helberger, 2020; Suzor et al., 2019) have developed extensive
bodies of work to interrogate the complex systems of regulating online content, as well as
social–political controversies emerging around these practices.
While voices pressuring digital platforms to regulate information and user behaviours
continue to grow among regulators and the public, the academic community critically in-
terrogates risks associated with platforms’ growing power in arbitrating what is deemed to be
‘irrelevant’ ‘false’ or ‘harmful’ (Gillespie, 2020; Helberger, 2020). Related scholars pro-
blematise platforms’ decision‐making processes that introduce and reinforce prejudice and
bias (Crawford & Gillespie, 2016; Katzenbach & Ulbricht, 2019; Wilkinson & Berry, 2020).
For instance, in Crawford and Gillespie's (2016) study of user flagging on social media, the
authors note that by outsourcing the moderation tasks to users, the flagging system imposes
and legitimises platforms’ own governing norms and logic. In their study of content
moderation on Instagram, Pinterest and Tumblr, Gerrard and Thornham (2020) use the
concept of sexist assemblages to discuss how content moderation assembles human and
technological elements to perpetuate normative gender roles and police representations of
women's bodies.
Algorithms have become ubiquitous in governing online communication. In response,
scholars call for further research devoted to understanding the affective dimensions and
perceptions of algorithmic moderation (Bucher, 2016). Algorithmic governance largely relies
on ‘black‐box’ machine‐learning mechanisms to predict, classify and filter, introducing errors
and opacity. The combination of algorithmic opacity and errors makes it difficult for users to
hold the platform accountable. Inappropriate implementation of algorithmic content mod-
eration also triggers senses of injustice, insignificance and distrust among users (Gillespie,
2020; Jhaver, 2019; West, 2018).
Prior research examining content moderation from users’ perspectives investigates how
users circumvent moderation systems (Gerrard, 2018; Zeng, 2020) and how they make
sense of the opaque algorithmic systems by developing folk theories (Bishop, 2020; West,
2018) surrounding how systems of governance function. Bishop (2019) describes this form
of unauthoritative knowledge about algorithmic systems created and shared by members of
online communities as ‘algorithmic gossip', which she argues is
productive as a collective resource for knowledge production. One example of such algo-
rithmic gossip is claims about shadow bans, or when platforms make users’ content less
likely to be seen without entirely removing it or notifying users (Cotter, 2021; West, 2018).
Although social media platforms regularly deny the practice of shadow‐banning, frequent
creator disputes exemplify the power and information asymmetry between platforms and
users (Cotter, 2021). Bucher (2016) argues it is important to understand how users imagine,
perceive and experience algorithms—their algorithmic imaginary—as they ‘not only shape
the expectations users have towards computational systems, but also help shape the al-
gorithms themselves’ (p. 33). As we discuss in the sections below, these mutually shaping
processes impact norms of moderation and visibility on platforms.

Visibility moderation on TikTok


Scholarly discussion of content moderation has traditionally focused on governing through
removal (Jhaver, Bruckman, et al., 2019; Witt et al., 2019), filtering (Coluccia, 2020;
Diakopoulos et al., 2012) and suspension (Thomas et al., 2011; Zeng et al., 2019). With the
rise of algorithmic curation and recommendation, platforms can effectively regulate user‐
generated content without relying on deleting, but by intentionally promoting, amplifying and
prioritising the visibility of content perceived as more relevant and appropriate (Chen et al.,
2021; Gillespie, 2017; Suzor, 2019). Developing upon Bucher's (2012) conceptualisation of
visibility as a disciplinary diagram, we propose visibility moderation as a lens to discuss the
specificities of TikTok's content moderation mechanism. With a Foucauldian approach to
visibility and governance, Bucher (2012) proposes a framework to examine the disciplinary
power of social media content ranking algorithms. Using Facebook as a case study, Bucher
(2012) argues that ‘discipline denotes a type of power that economises its functioning by
making subjects responsible for their own behaviour’ (p. 1175). Social media exercise such
power through ‘threat of invisibility’—the constant possibility of becoming irrelevant and
obsolete (Bucher, 2012, p. 1164). To remain or become relevant and noticeable, social
media users adjust their behaviours, which Cotter (2019, p. 895) describes as playing ‘the
visibility game'. Due to its particular content distribution architecture and platform culture,
TikTok offers a productive case study to examine the disciplinary power of visibility.

On TikTok, the disciplinary power of visibility is embodied in its complex visibility mod-
eration systems. At the macro level, TikTok implements platform‐wide recommendation
algorithms to personalise, curate and prioritise content. The main content viewing interface
and landing page on the mobile app, the For You Page (FYP), pushes highly personalised
feeds to individual users based on their interests and usage data. The FYP's underlying
algorithmic recommender system, which we refer to as the For You algorithm, selects and
matches videos with viewers by analysing engagement data of videos and user profiles
(TikTok, 2020a). With its ‘power to grant visibility’ (Gillespie, 2017, p. 63), the For You
algorithm detects which videos will make an impact on the platform. Within this system,
one's video cannot make an impact on the platform unless it is ‘favoured’ by the For You
algorithm. The power of this algorithm is vast, but so is the bias when considering the human
subjectivity and values embedded in its design. Researchers have warned of human bias
introduced in algorithmic governance, drawing attention to issues of fairness and efficiency
(Gorwa et al., 2020; Zarsky, 2016). Fairness concerns the treatment of particular groups
under digital platforms’ governing systems and efficiency considers issues arising from
inaccurate automated decision‐making (Zarsky, 2016). These two closely intertwined issues
are prominent in the governing logic of the FYP, exemplified in the ways the platform
moderates visibility of the perceived vulnerable. In attempts to prevent online bullying,
TikTok takes intentional measures to constrain the visibility of users deemed vulnerable to such abuse. Two sets of internal documents from TikTok were leaked and
reviewed by reporters in 2019 and 2020 (Biddle et al., 2020; Köver & Reuter, 2019), re-
vealing that visibility was deliberately restricted for videos featuring individuals with traits of
‘unattractiveness’ (e.g., in body shape, appearance, age) and developmental disorders
(e.g., autism, Down syndrome). Visibility‐restricted videos would not be recommended by
the For You algorithm outside users’ home country and were not allowed to be viewed over
10,000 times. Similar reports allege TikTok's For You algorithm has suppressed content that
mentioned ‘Black’ or ‘Black Lives Matter’ in profiles on the TikTok Creator Marketplace
(Brown, 2021) and was also accused of preventing LGBTQ+ content from going viral
(Bacchi, 2020).
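To make the mechanism sketched above more concrete, the following is a minimal, hypothetical illustration of how an engagement‑driven recommender combined with category‑based visibility rules of the kind described in the leaked documents could cap a video's reach. TikTok's actual system is not public; every function, weight and field name here is an assumption for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    creator_country: str
    likes: int = 0
    shares: int = 0
    completion_rate: float = 0.0  # share of viewers who watched to the end
    views: int = 0
    restricted: bool = False      # flagged by a hypothetical internal visibility rule

def engagement_score(video: Video) -> float:
    # Toy weighting: full watches are treated as the strongest signal,
    # echoing TikTok's public statement that watching a video to the end
    # is a 'strong indicator of interest'. The weights are invented.
    return 3.0 * video.completion_rate + 0.5 * video.likes + 1.0 * video.shares

def eligible_for_feed(video: Video, viewer_country: str, view_cap: int = 10_000) -> bool:
    # Hypothetical rules modelled on the leaked policies: restricted videos
    # are not recommended outside the creator's home country and stop being
    # recommended once they reach a hard view cap.
    if video.restricted and viewer_country != video.creator_country:
        return False
    if video.restricted and video.views >= view_cap:
        return False
    return True

def rank_for_you(candidates: list[Video], viewer_country: str) -> list[Video]:
    # Apply visibility rules first, then order the remainder by engagement.
    visible = [v for v in candidates if eligible_for_feed(v, viewer_country)]
    return sorted(visible, key=engagement_score, reverse=True)
```

Under such a scheme, a video marked as restricted can perform well on every engagement signal and still never travel beyond its home market or its view cap, which is precisely the kind of opaque suppression that creators describe as shadow banning.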
At the mesolevel, TikTok implements visibility moderation by nudging creator commu-
nities and users toward social justice campaigns, particularly those that promote the plat-
form's overall corporate image by serving socially responsible goals. In the years since its
international release in 2018, TikTok has become a prominent online space for youth‐led
activism on issues such as racial injustice and the climate crisis (Kaye et al., 2022). Because hashtags
that are officially endorsed by the platform are believed to be favoured by the For You
algorithm, bandwagoning such campaigns has become a shortcut for creators to gain vis-
ibility (Hautea et al., 2021; Zeng & Abidin, 2021). From TikTok's perspective, by introducing
and endorsing ‘positive’ trends, the platform can indirectly regulate content creation prac-
tices that benefit its own image. In early 2019, TikTok was warned and temporarily banned
by the Indian government for hosting pornography and other forms of potentially harmful
content (Kalra & Varadhan, 2019). In response, the platform launched a campaign called
#EduTok that attempted to counterbalance perceived ‘harmful’ content with educational
videos. Similarly, in response to the Chinese government's criticism over pornographic
content, Douyin, the Chinese counterpart of TikTok, launched an initiative to cultivate and
incentivise platform‐wide trends of making more appropriate, patriotic content through a
nation‐wide ‘positive energy’ hashtag campaign (Chen et al., 2021).
On the microlevel, visibility moderation can also be exemplified by content creators and
users’ collective curation of comments. Through features such as ‘like’, ‘dislike’, ‘upvote’ or
‘downvote’, the practises of participatory ranking and curation of user‐generated content are
widely implemented on social media (Graham & Rodriguez, 2021; Massanari, 2017). On
TikTok, video creators can directly moderate their comments and viewers participate in
ranking comments through interaction. The impact of the collective moderating of comments is ‘both semiotic and material' (Davis & Graham, 2021, p. 652). Semiotically, TikTok
views, likes and replies connote approval, interest and controversy. Materially, such reac-
tions are translated to metadata, which helps the algorithm make ranking calculations to
determine how much exposure to grant certain types of content.
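As a concrete illustration of this 'material' translation, the following is a minimal, hypothetical sketch of participatory comment ranking, in which viewer interactions become metadata that determines which comments are surfaced first under a video. The weights, field names and the creator‑moderation flag are assumptions for illustration, not TikTok's documented behaviour.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    likes: int = 0                   # approval signal from viewers
    replies: int = 0                 # interest or controversy signal
    creator_liked: bool = False      # the video's creator endorsed the comment
    hidden_by_creator: bool = False  # creator-level comment moderation

def comment_rank_score(comment: Comment) -> float:
    # Interactions are translated into a score; the ranking calculation
    # decides how much exposure each comment receives.
    score = 1.0 * comment.likes + 2.0 * comment.replies
    if comment.creator_liked:
        score *= 1.5
    return score

def visible_comments(comments: list[Comment]) -> list[Comment]:
    # Creator moderation removes comments outright; the rest are ordered
    # by the participatory ranking score.
    shown = [c for c in comments if not c.hidden_by_creator]
    return sorted(shown, key=comment_rank_score, reverse=True)
```

In practice, such creator‑level moderation only shapes what is visible under a single video; it does not alter how the platform‑wide recommender treats the video itself.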
The concept of visibility moderation is a useful analytical framework, which helps to
systematically delineate the complex system used by TikTok, as well as other social media,
to ‘discipline’ content creation. In the following sections, we present an empirical study of
visibility moderation on TikTok focusing on content creators’ perspectives guided by three
research questions:

RQ1: How do content creators make sense of TikTok's visibility moderation?
RQ2: How do content creators respond to the visibility moderation system?
RQ3: What are content creators' main concerns regarding TikTok's visibility moderation?

METHODOLOGY
We present an empirical study of creators both to illustrate the concept of visibility mod-
eration and to centre users in scholarly debates of platform moderation and governance. We
follow Ruckenstein and Turunen (2020) who advocate for research that moves to re-
humanize platform operation, ‘by re‐establishing the human as a critical and creative actor in
current and future platform arrangements’ (p. 1027). Whereas Ruckenstein and Turunen
focus on the complexities of the practices of human content moderators, in this study we expand
their rehumanizing approach by centring on the experiences of users, specifically content
creators, who are at the receiving end of governance practices. As discussed above, plat-
forms’ design and implementation of content governance embed bias and errors. As
Gillespie (2020, p. 3) notes ‘the margin of error typically lands on the marginal: who these
tools over‐identify, or fail to protect, is rarely random’. Taking a user‐centric angle to re-
search content governance helps to illuminate power dynamics and injustices on platforms.
Data for this study were collected via in‐depth semi‐structured interviews conducted in
January and February 2021. The interviews were conducted as part of a larger empirical
study of TikTok platformisation. Twenty‐two (n = 22) English‐speaking TikTok creators from
the United States, Canada and Australia participated in 45–90 min Zoom interviews to share
insights regarding how they understood and navigated the infrastructures, markets and
governance systems on TikTok. The focus of these interviews was broader than platform
governance; however, interviewees’ perspectives and experiences helped inform our con-
ceptualisation of visibility moderation on TikTok. In this study, we draw on 14 (n = 14)
interviews.
Interviewees were members of a smaller, discrete TikTok community purposively se-
lected to study artistic content creators on TikTok. Interviews were audio‐recorded, tran-
scribed and analysed in NVivo qualitative data analysis software using a grounded approach
to thematic coding (Corbin & Strauss, 2008). Qualitative coding of interviews proceeded in
two stages. The first stage involved open coding, in which interview transcripts were an-
notated with labels and concepts that are potentially relevant to our three research ques-
tions. The second stage involved axial coding, which aims to refine, align and group codes
established in the first stage. To do so, we established semantic relationships between
labels we used for annotation and merged related labels into broader themes.
Interviews were conducted, transcribed and coded by one author. The transcripts were reviewed by both authors, and the categorisation of thematic codes involved participation, discussion and reconciliation between both authors (McDonald et al., 2019). Each
interviewee provided informed consent before participating. The interview questionnaire and
consent forms were reviewed and approved by the interviewer's University human research
ethics committee. As part of the consent process, interviewees were asked their preference
to be named or remain anonymous. All interviewees consented to be named and to have
their TikTok username included in publications, which we include below. In the following
sections, we analyse creators’ understandings of and experiences with visibility moderation,
framed as sense‐making, negotiation and concerns.

TIKTOK CREATORS AND VISIBILITY MODERATION


How do content creators make sense of visibility moderation?

In interviews, creators described their attempts to interrogate the black box For You algo-
rithm in service of understanding and improving visibility. Creators’ perceptions were in-
formed by folk theories (West, 2018), based on their own experiences with virality or
experiences shared by others. They also shared snippets of algorithmic gossip (Bishop,
2019) informed by conversations with friends and mutuals, the vernacular term for users
who mutually follow each other on TikTok.
A recurring theme connected to visibility involved how creators with massive followings
interpreted not being seen. A commonly cited explanation was that they had been shadow
banned. Like on other digital platforms (Cotter, 2021; West, 2018), shadow bans are a
nebulous concept on TikTok as Erynn, an American creator, summarised:

It's hard to trace exactly what constitutes a shadow ban but something doesn't
add up when you have 600,000 followers and videos get like a hundred views…
I've had multiple followers tell me that they have to seek out my content. They
never actually see it naturally (Erynn, @rynnstar).

Some felt shadow bans were a natural occurrence on an algorithmically curated platform
owned and operated by a commercial entity. Shout, an American creator, highlighted the
impacts of feeling suppressed alongside what they viewed as TikTok's imperative to push
trendy content that kept viewers watching:

I feel like to an extent TikTok does suppress videos that aren't as trendy as, you
know, stuff that you would see coming from Charlie D'Amelio or some other
people in that high follower count… I feel like my FYP went from seeing a bunch
of different things to every third video being a verified celebrity, you know? So
we're out here thinking, ‘Oh, we can't compete with that.’ I see friends who have
quit TikTok because of that (Shout, @vocaloutburst).

Damoyee, an American creator, felt that she had not been shadow banned, per se, but
was acutely aware that the success of her content depended on a mixture of user interaction
and the For You algorithm:

The Watchers of the FYP help out by triggering the algorithm, even by just
viewing the whole video and maybe like it or share it to three friends or even one.
If you just have one friend, you can help them find their way through the FYP. I
know if one person triggers it it might get sent to five more people (Damoyee,
@damoyee).

Damoyee's view contests the power of the FYP as the chief ‘governor' of visibility on
TikTok, by reinserting humans, the watchers of the FYP, as gatekeepers. In a press release
detailing certain aspects of how the FYP functions, TikTok (2020a, p. 1) notes that users
watching a video in its entirety before scrolling down was a ‘strong indicator of interest’ that
would lead to higher ranking and a greater likelihood of being recommended. This supports Damoyee's
assertion and aligns with previous research on the role of user practices in participatory
ranking systems (Graham & Rodriguez, 2021).
Interviewees also shared experiences of videos going viral that made them temporarily
‘TikTok famous’, showing up on the FYPs of thousands or sometimes millions of users
seemingly all at once. After their visibility had been exponentially amplified by virality, they
watched as their content found its way to FYPs far beyond their usual followers or TikTok
communities. Violet, an American nonbinary creator who posted content about gender and
sexuality on their account, questioned the efficacy of automated governing systems at the
macro level that should, according to TikTok's guidelines and policies (TikTok, 2020b),
prevent underaged individuals from using the platform at all. Violet explained that they had
received numerous community guidelines violations on videos with no clear explanation as
to why or what aspect of their video resulted in a violation. In one example, the specific
guideline allegedly being violated was Minor Safety, a guideline that restricts sexual de-
piction of underaged users. Violet, who was over 18 years old at the time, explained that:

It's just so weird to me that me simply talking about how my videos are getting
down, because my collarbones are showing gets taken down for something that
isn't for minor safety (Violet, @violetbutnotaflower).

Violet's interpretation, that human viewers who sought to have their content removed were reporting the videos for community guidelines violations, aligns with previous findings on the strategic use of thin or ambiguous flags on digital platforms (Crawford &
Gillespie, 2016). Ambiguous or erroneous flagging was also mentioned in the context of
automated reports, as Tai, a Canadian creator, explained:

I have had a couple of community guideline violations. Not that many, like they
used to happen a lot when I first started posting… But a couple of the other ones
were just like, I don't know, [TikTok] automatically banned me a couple of times
for saying the ‘n‐word.’ And I'm like, I'm literally a Black person, leave me be
(Tai, @thctai).

Previous research on African‐American Vernacular English has explored the complex
and nuanced use of the ‘n‐word’ in American English (Jones & Hall, 2019), yet that com-
plexity and nuance are lost in AI moderation. As with other visibility issues noted in inter-
views, the effects still disproportionately impact Black, Indigenous and people of colour on
TikTok, as Alex, an American creator, noted:

The sense I've gotten is that TikTok just doesn't do a good of a job of giving
people the tools to get rid of nasty people… A friend who was a Black creator
made a music remix video where he sings the ‘n‐word’ kind of in a comedic way.
His video got taken down, but there are still white people saying the n‐word and
like not getting taken down. There's clearly a problem with the fairness of how
things get reported (Alex @alexengleberg).

As West (2018) identified on Facebook, confusion and lack of transparency surrounding
community guidelines violations on TikTok were often followed by subsequent violations.

Interviewees who had received violations explained that options for appeal were limited and
equally confusing, as Jake described:

There's zero way to contact TikTok…There's no phone number. There's no
nothing. I could get on the phone with someone or email somebody directly, but
no, I'm basically it's within the app. Um, you press submit an appeal and it gives
you like a character limit of like 50 or 200 characters to voice your complaints
about the video (Jake @jakedoesmusicsometimes).

Jake's observation aligns with previous research on YouTube in which creators posi-
tioned the process of appealing platform community guidelines violations as being confusing
and ineffective (Kaye & Gray, 2021), offering little clarity or comfort when content is
removed (Jhaver, Appling, et al., 2019). Visibility was cited as a motivating factor for creators
to engage with the appeals, particularly when the videos removed were viral hits. These
perceptions informed creators’ actions and decisions to increase their own visibility or to
intervene when they noticed TikTok's moderation failing to prevent harm and abuse.

How do content creators negotiate visibility moderation?


Alongside sharing their interpretation of how visibility moderation works on TikTok, inter-
viewees also spoke about how they negotiated visibility on TikTok. As mentioned, TikTok
practices visibility moderation on a macro level through its For You algorithm. For creators,
landing their content on the FYP is crucial to their visibility on the platform. To improve their
chance to be picked by the For You algorithm, or to become ‘algorithmically recognizable’
(Gillespie, 2017, p. 63), creators noted the ways in which they were encouraged to use
certain hashtags either by the platform or other users. TikTok encourages users to in-
corporate trending hashtags by suggesting popular hashtags. Before publishing a video,
creators have the option to add captions, mentions and hashtags. Typing any letter after
pressing the ‘#’ key displays suggestions for the most popular hashtag (i.e., typing ‘f’ dis-
plays suggestions #fyp, #foryou, #foryoupage next to the number of views associated with
each). Other users can influence the decision to incorporate certain hashtags by repeated
exposure or viral success.
Interviewees mentioned they would use hashtags that were trending, even those that
had nothing to do with the video they were posting or hashtags that were nonsensical, such
as in the case of #xyzbca, a trending hashtag with over 1.5 trillion views as of October 2021.
Tai, a Canadian creator, admitted:

I have no idea what [xyzbca] means but I used to use it all the time… I've
watched videos that have this hashtag and they seem to not have any corre-
lation between them all. So I guess I thought I'll just start using it then. Right now
I think I've been seeing ‘Chobani Flip’ or something. I know it's a brand but I have
no idea why this yoghurt is showing up on all these random videos. It's just
trending (Tai, @thcthai).

In instances where trending hashtags relate to specific branded marketing campaigns,
such as in Tai's example of the yoghurt company Chobani, creators were helping brands
market despite not receiving any compensation or creating content that related to the overall
campaign; they participated to improve visibility.
Using unrelated hashtags to capitalise on trends and increase visibility, or hashtag hi-
jacking, is not unique to TikTok (Bruns & Burgess, 2012), and has been previously found to

be a strategy to improve visibility on TikTok (Abidin, 2021b; Hautea et al., 2021). Shout
(@vocaloutburst) recalled a period in late 2020 and early 2021 when #blackvoices and
#blacklivesmatter were trending on TikTok, corresponding with the widespread social justice
movement taking place around the world. While millions of videos included BLM hashtags,
still more millions were examples of hashtag hijacking that had nothing to do with the cause.
Another popular TikTok‐oriented group of hashtags, #fyp, #foryou or #foryoupage, were
mentioned several times in interviews. While ‘for you’ hashtags are among the most widely
used hashtags on TikTok, influencer marketing groups acknowledge their status as folk
theories (Geyser, 2021), a view affirmed by interviewees who expressed their doubts that including
‘for you’ hashtags did anything to impact visibility.
Another perceived strategy to influence visibility was to post new videos every day:

I joined the cult of posting every day a long time ago. So many people were like
‘post everyday post every day. It's good for the algorithm. And it lets you know
that your pages are alive, post every day’ (RJ @rjthecomposer).

RJ, an American creator, explained that he saw other users posting videos dissecting the
TikTok algorithm and heard stories from mutuals that posting every day was a key to
improving visibility. Despite posting something new every day, RJ immediately noticed that
his videos were not being seen by his tens of thousands of followers. On the contrary, he
remarked that often videos were only seen by a fraction of a percent of his followers.
It is worth mentioning that interviewees' attempts to negotiate visibility moderation were not always aimed at achieving wider reach. In contrast to strategies used to improve visibility, creators also
described strategies they employed when they did not want to be seen by certain groups of
users. For many interviewees, a viral video meant sudden exposure to new audiences who
might be more likely to leave abusive comments, such as users out to troll creators or
individuals with radically different social or political views. Sudden spikes could increase the
likelihood of experiencing harassment or receiving spurious reports and also influence the
kinds of content interviewees would see when scrolling their own feeds. To mitigate this,
some interviewees described strategies to reduce their visibility to avoid harassment, re-
ports or shifts in content recommendations. For example, Damoyee (@damoyee) would
deliberately interact with videos that aligned with her personal views to see more similar
content and cause a knock‐on effect of her content being seen by other like‐minded users as
well. Others would post videos asking viewers to interact with their content to help
steer them back into familiar territories on the FYP where they were less likely to be seen by
hostile users.
Strategies for negotiating visibility are crucial on platforms like TikTok that rely heavily on
algorithmic curation. While many more visibility strategies have been offered by marketing
firms and influencers, those strategies mentioned by interviewees demonstrate creators’
attempts to ‘play the visibility game’ (Cotter, 2019, p. 895) and assert a sense of agency
against the dynamic, black box For You algorithm. Discussions of creators’ negotiated
visibility on the platform also led to additional points of critique that creators felt were beyond
their control, and possibly beyond TikTok's control.

What are content creators’ concerns regarding visibility moderation?


Interviewees' concerns related to visibility moderation reflect broader societal issues and cultural norms that manifest on TikTok, as on other platforms (e.g., Katzenbach & Ulbricht,
2019; Noble, 2018; Witt et al., 2019). A principal source of frustration among creators was
the high visibility of harmful comments that expressed racist, antisemitic, sexist and

homophobic views. Some interviewees had the impression that the problem of harmful
comments was more prominent on TikTok than on other social media. Gabbi, an Australian
creator, said she was shocked by:

…the audacity of commenters to just write literally whatever they want is so
much stronger on TikTok than anywhere else. On Twitter, I find that people can
track you pretty easily based on where you are, your workplace, like they can
find you… Facebook it's obvious… because your whole personal profile is right
there. But TikTok you can make 65 accounts and just comment whatever you
want from all of them (Gabbi @fettucinefettuqueen).

Later in the interview, Gabbi mentioned that the verification requirements to create new
TikTok accounts seemed more relaxed than on other platforms, allowing users to create
dozens of new accounts to circumvent bans or avoid the reputational consequences of
commenting, such as losing followers for posting critical, argumentative or abusive com-
ments. The practice of creating alternate accounts for the sole purpose of commenting,
while not unique to TikTok, is another example of users moderating visibility (Cotter, 2019),
in this case, to obfuscate links to their main account.
In discussing comment moderation systems, Stacey, a Canadian creator, compared
TikTok to YouTube:

YouTube does a really, really good job with [comment moderation]. In YouTube
Studio you can see comments that didn't make it through because they were
maybe hateful, maybe marked as spam, maybe something else. But on TikTok
there's nothing… there's no one going into your comments, deleting them. If
you're getting plenty of hate, no one will do anything about it, except for the
creator (Stacey, @staceyryanmusic).

This also came up in the context of minor safety, given that TikTok is home to an
infamously young user base (Muliadi, 2020; TikTok, 2021a). Violet explained that they
sometimes felt the need to self‐censor content geared towards more adult audiences after
repeatedly finding ostensibly underaged users appearing in their comment section. Violet
commented on the irony that their videos had been erroneously flagged or removed for
Minor Safety, while minor users were evidently able to use TikTok and comment on Violet's
videos.

When my videos blow up then they have thousands of comments… I don't read
them all, but I try. I look at people's accounts. I see who's following me. There
were people who were under nine years old, like six or seven maybe… I don't
know how they have the access to even be on the app (Violet,
@violetbutnotaflower).

In response to this problem, some interviewees explained that they felt a need to take
personal responsibility to moderate their comment sections. Rachel, an American creator,
discussed the burden of moderating anti‐Semitic content shouldered by individual creators:

I do think that, that the anti‐Semitism on TikTok has run rampant at times. I
understand why others would [leave the platform]. I know it has been exhausting
for other creators that I follow to be faced with what feels like attacks on all
sides (Rachel, @rvmillz).

Emerson, an Australian creator, explained that he felt it was his responsibility to mod-
erate his comment section even when the harassment was not addressed to him directly:

I've probably read more comments than I should have… I don't think things ever
get deleted. There were times earlier, before I started really deleting comments
there would be… argumentative comment[s] and then other people on TikTok
would be having that argument and I can see it happening. It was a time where I
didn't have to moderate but I felt a sense of responsibility that it was on my video
and it's an argument that's going absolutely nowhere (Emerson,
@emersonbrophy).

Emerson explained that arguments and abusive comments became more frequent the
further he waded into American political debates with his videos. As noted, the majority of
creators interviewed as part of the larger research programme on TikTok were musical
creators. Many professed that they normally posted videos that were uncontroversial; they just
enjoyed playing music on the app with their friends. The creators quoted in this study, how-
ever, described more instances of controversy due to the politically charged nature of their
content. Discussions about racism, homophobia and misogyny on TikTok reflected broader
feelings that content moderation is unfair and inefficient (Zarsky, 2016), as Erynn explained:

… being a Black woman on the internet in general you're always a target for
harassment. Lots of people seem to love the opportunity to take you down or to
criticise you for various things or to be racist… I think the more frustrating thing is
that a lot of times you don't feel like you're given appropriate support from the
platform because you report racist videos and stuff to TikTok and oftentimes
they don't take it down (Erynn, @rynnstar).

In addition to concerns about comments, creators pointed to ways in which the report
function could be used as a tool to inflict harm and abuse, particularly against Black and Queer
creators. We adopt the term report bombing to describe instances of coordinated mass
reporting by human users to have content flagged or removed. The notion of online bombing
is a form of brigading that has been used, for example, to describe review bombing, in which
groups of users artificially deflate review scores for various types of media (Wordsworth,
2019). As with other folk theories, interviewees could not provide direct evidence that they had been the victims of report bombing, but it figured into their attempts to make sense of
otherwise nonsensical reports they had received.
Creators explained that report bombing amounted, in their view, to attacks from trolls intended to
reduce their visibility or their likelihood of showing up on others’ FYPs. In one instance,
despite self‐identifying as ‘not a very political person', Anthony (@ewokbeats), an American
creator who normally only posted videos playing drums, shared that he had a video removed
for community guidelines violations in which he voiced his support for the Black Lives Matter
movement. He explained that the video had received multiple reports for violations that had
nothing to do with the content of his video, such as hateful behaviour, harassment and
bullying. Other interviewees described the same phenomenon, which they took as evidence
that the reports were being made by other users as opposed to automated systems or teams
of moderators. Like on other digital platforms, violating community guidelines on TikTok
carries increasingly severe consequences for repeat offences, as Violet described:

One of my favourite creators is trans and Black[…] Recently she had her ac-
count banned because she had exceeded the amount of community guidelines
violations, and then they deleted her account (Violet, @violetbutnotaflower).

Beyond the risks of impacting visibility or resulting in accounts being taken down, report
bombing can delegitimise internal methods of platform governance. Interviewees were ex-
asperated that reporting was effective at removing their own videos that did not violate
guidelines but ineffective at removing content they reported for being harmful, proble-
matic and legitimately in violation of community guidelines. These individuals questioned
why they should even bother reporting legitimately abusive content while they were spending time
trying to appeal illegitimate abusive reports.
These concerns illustrate tensions produced by heightened visibility on TikTok. Whether
strategic or unexpected, creators may find themselves seen by unintended audiences of
age‐inappropriate or hostile users, which can result in flags, reports or bans. In addition to
reiterating the contested nature of algorithmic governance (Katzenbach & Ulbricht, 2019),
these criticisms also provide a human dimension of the impacts of moderation from a creator
standpoint, to complement research on content moderators (Ruckenstein & Turunen, 2020).

CONCLUSION
In this paper, we propose the concept of visibility moderation as a dimension to explore,
understand and critique platform governance. Rather than emphasising how platforms make
rules about what is not to be seen, visibility moderation concerns a wider range of practices
and logic involved in suppressing and amplifying the reach of certain user‐generated con-
tent. Alongside gatekeeping videos for individual users, visibility moderation disciplines
content creators through their desire to be seen, as well as through the threat of invisibility.
To illustrate and problematise visibility moderation, we empirically examined TikTok
content creators’ interpretation and negotiation of visibility through in‐depth interviews. First,
creators in this study made sense of visibility moderation on TikTok by sharing gossip and
lore. Their understandings were rationalised by personal experiences, such as with shadow
bans, and reinforced by policy violations, such as being reported for community guidelines
violations. Second, creators attempted to negotiate visibility using strategies that might
improve their chances of landing on certain users’ algorithmically curated feeds while at-
tempting to avoid landing on the FYPs of unwanted others. Third, creators raised concerns
with visibility moderation, such as a lack of human or automated oversight of comments and
the prevalence of reporting systems being used as tools of suppression. We present these
limited understandings, strategies and concerns as instructive starting points for further
critique, policy work and research on visibility moderation.
TikTok content creators' strategies to negotiate visibility demonstrate the effects of visibility as a disciplinary diagram. As discussed in the literature review, social
media platforms govern through visibility, especially through the threat of invisibility (Bucher,
2012). As indicated in our findings of content creators’ attempts to negotiate the influence of
their content, the desperation to be seen and the threat of invisibility are evident. On the one
hand, strategies reported by creators showcase a sense of agency in negotiating their
influence. On the other hand, creators directly pointed to information and power asymme-
tries between users and platforms (Cotter, 2021). Relying on folk theories, creators em-
ployed strategies to improve their chances to appear on the FYP, while encountering what
they perceived as unjust and unfair treatment by the platform and other users. Overall,
participants expressed feelings of confusion, desperation and powerlessness towards
moderating visibility on TikTok.
While visibility moderation is not unique to short video platforms, TikTok is a unique site
to explore visibility governance due to the centrality of its algorithmic recommender system
and the many different approaches creators take to interrogate black boxes, negotiate
power and mitigate harms on the platform. This study joins previous research that calls for

more ground‐level stakeholder involvement to shape understandings of platform govern-
ance (Cotter, 2021; Edwards & Moss, 2020; Kaye & Gray, 2021). With algorithms becoming
the norm to moderate content, platforms are increasingly ruled by statistical mindsets, which
carry errors and result in human costs (Gillespie, 2020). Future research should continue to
centre the people carrying the burdens and suffering the costs. There are no easy
ways out or clear‐cut solutions to address the concerns associated with visibility moderation
on digital platforms. Further research can productively explore visibility moderation to inform
scholarly debates, make policy recommendations and influence public deliberation over
platform governance.
ACKNOWLEDGEMENT
Open access funding provided by Universität Zürich.
ORCID
Jing Zeng http://orcid.org/0000-0001-5970-7172

REFERENCES
Abidin, C. (2021a). Mapping internet celebrity on TikTok: Exploring attention economies and visibility labours.
Cultural Science Journal, 12(1), 77–103. https://doi.org/10.5334/csci.140
Abidin, C. (2021b). From ‘networked publics’ to ‘refracted publics’: A companion framework for researching ‘below
the radar' studies. Social Media + Society, 7, 1–13. https://doi.org/10.1177/2056305120984458
Allyn, B. (2020, November 13). Trump's TikTok sell‐by date extended by 15 days. NPR. https://www.npr.org/2020/
11/13/933916944/trump‐ordered‐tiktok‐to‐be‐sold‐off‐but‐then‐ignored‐the‐deadline
Allyn, B. (2021, February 25). TikTok to pay $92 million to settle class‐action suit over 'Theft' of personal data.
NPR. https://choice.npr.org/index.html?origin=https://www.npr.org/2021/02/25/971460327/tiktok‐to‐pay‐92‐
million‐to‐settle‐class‐action‐suit‐over‐theft‐of‐personal‐data
Bacchi, U. (2020, September 20). TikTok apologises for censoring LGBT+ content. Reuters. https://www.reuters.
com/article/britain‐tech‐lgbt‐idUSL5N2GJ459
Biddle, S., Ribeiro, P. V. & Dias, T. (2020, March 16). Invisible censorship: TikTok told moderators to suppress
posts by ‘ugly' people and the poor to attract new users. The Intercept. https://theintercept.com/2020/03/16/
tiktok‐app‐moderators‐users‐discrimination/
Binns, R., Veale, M., Van Kleek, M. & Shadbolt, N. (2017, September). Like trainer, like bot? Inheritance of bias in
algorithmic content moderation. In International conference on social informatics (pp. 405–415). Springer.
Bishop, S. (2019). Managing visibility on YouTube through algorithmic gossip. New Media & Society, 21(11–12),
2589–2606.
Bishop, S. (2020). Algorithmic experts: Selling algorithmic lore on YouTube. Social Media + Society, 6(1), 1–11.
https://doi.org/10.1177/2056305119897323
Brown, A. (2021, July 7). TikTok influencer of color faced 'Frustrating’ obstacle trying to add the word ‘Black’ to his
creator marketplace bio. Forbes. https://www.forbes.com/sites/abrambrown/2021/07/07/tiktok‐black‐creators‐
creator‐marketplace‐black‐lives‐matter/?sh=75898e0b6d24
Bruns, A., & Burgess, J. (2012). Researching news discussion on Twitter: New methodologies. Journalism Studies,
13(5), 801–814.
Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media
& Society, 14(7), 1164–1180.
Chen, X., Valdovinos Kaye, D. B., & Zeng, J. (2021). #PositiveEnergy Douyin: Constructing ‘playful patriotism’ in a
Chinese short‐video application. Chinese Journal of Communication, 14(1), 97–117. https://doi.org/10.1080/
17544750.2020.1761848
Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.
Coluccia, A. (2020). On the probabilistic modeling of fake news (hoax) persistency in online social networks and the
role of debunking and filtering. Internet Technology Letters, 3(5), 204.
Corbin, J. M., & Strauss, A. L. (2008). Basics of qualitative research: Techniques and procedures for developing
grounded theory (3rd ed.). Sage.
Cotter, K. (2019). Playing the visibility game: How digital influencers and algorithms negotiate influence on
Instagram. New Media & Society, 21(4), 895–913.
Cotter, K. (2021). ‘Shadowbanning is not a thing’: Black box gaslighting and the power to independently know and
credibly critique algorithms. Information, Communication & Society, Online First, 1–18. https://doi.org/10.
1080/1369118X.2021.1994624
Crawford, K., & Gillespie, T. (2016). What is a flag for? Social media reporting tools and the vocabulary of
complaint. New Media & Society, 18(3), 410–428. https://doi.org/10.1177/1461444814543163
Davis, J. L., & Graham, T. (2021). Emotional consequences and attention rewards: The social effects of ratings on
Reddit. Information, Communication & Society, 24(5), 649–666.
Diakopoulos, N., De Choudhury, M. & Naaman, M. (2012). Finding and assessing social media information sources
in the context of journalism. In Proceedings of the SIGCHI conference on human factors in computing systems
(pp. 2451–2460). New York: ACM.
Edwards, L., & Moss, G. (2020). Evaluating justifications of copyright: An exercise in public engagement.
Information, Communication & Society, 23(7), 927–946. https://doi.org/10.1080/1369118X.2018.1534984
Ellis‐Petersen, H. (2020, June 29). India bans TikTok after Himalayan border clash with Chinese troops. The
Guardian. https://www.theguardian.com/world/2020/jun/29/india‐bans‐tiktok‐after‐himalayan‐border‐clash‐with‐
chinese‐troops
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50.
Gerrard, Y. (2018). Beyond the hashtag: Circumventing content moderation on social media. New Media &
Society, 20(12), 4492–4511. https://doi.org/10.1177/1461444818776611
Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media's sexist assemblages. New Media &
Society, 22(7), 1266–1286.
Geyser, W. (2021, September 24). How to feature on TikTok's ‘For You’ page. Influencer Marketing Hub. https://
influencermarketinghub.com/tiktok‐for‐you‐page/
Gillespie, T. (2017). Algorithmically recognizable: Santorum's Google problem, and Google's Santorum problem.
Information, Communication & Society, 20(1), 63–80.
Gillespie, T. (2018). Platforms are not intermediaries. Georgetown Law Technology Review, 22, 198–216.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234. https://doi.org/10.1177/2053951720943234
Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros‐Fernández, A., Roberts, S. T.,
Sinnreich, A., & West, S. M. (2020). Expanding the debate about content moderation: Scholarly research
agendas for the coming policy debates. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1512
Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://
doi.org/10.1080/1369118X.2019.1573914
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945.
Graham, T., & Rodriguez, A. (2021). The sociomateriality of rating and ranking devices on social media: A case study of Reddit's voting practices. Social Media + Society, 7(3), 20563051211047667.
Gray, J. (2021). The geopolitics of ‘platforms’: The TikTok challenge. Internet Policy Review, 10(2), 1–26. https://
doi.org/10.14763/2021.2.1557
Gray, J., & Suzor, N. (2020). Playing with machines: Using machine learning to understand automated copyright
enforcement at scale. Big Data & Society, 7, 1–13. https://doi.org/10.1177/2053951720919963
Hautea, S., Parks, P., Takahashi, B., & Zeng, J. (2021). Showing they care (or don't): Affective publics and ambivalent climate activism on TikTok. Social Media + Society, 7(2), 20563051211012344. https://doi.org/10.1177/20563051211012344
Helberger, N. (2020). The political power of platforms: How current attempts to regulate misinformation amplify opinion power. Digital Journalism, 8(6), 842–854.
Jhaver, S., Bruckman, A., & Gilbert, E. (2019). Does transparency in moderation really matter? User behavior after content removal explanations on Reddit. Proceedings of the ACM on Human‐Computer Interaction, 3(CSCW), 1–27.
Jhaver, S., Appling, D. S., Gilbert, E., & Bruckman, A. (2019). Did you suspect the post would be removed? Understanding user reactions to content removals on Reddit. Proceedings of the ACM on Human‐Computer Interaction, 3(CSCW), 1–33.
Jones, T., & Hall, C. (2019). Grammatical reanalysis and the multiple n‐words in African American English. American Speech, 94(4), 478–512. https://doi.org/10.1215/00031283‐7611213
Kalra, A., & Varadhan, S. (2019, April 16). Indian court refuses to suspend ban order on Chinese app TikTok.
Reuters. https://www.reuters.com/article/tiktok‐india‐court‐idINL3N21Y1DU
Katzenbach, C., & Ulbricht, L. (2019). Algorithmic governance. Internet Policy Review, 8(4), 1–18.
Kaye, D. B. (2020). Make this go viral: Building musical careers through accidental virality on TikTok. Flow, 27(1).
https://www.flowjournal.org/2020/09/make‐this‐go‐viral/.
Kaye, D. B. V., & Gray, J. E. (2021). Copyright gossip: Exploring copyright opinions, theories, and strategies on YouTube. Social Media + Society, 7(3), 1–12. https://doi.org/10.1177/20563051211036940
Kaye, D. B. V., Zeng, J., & Wikström, P. (2022). TikTok: Creativity and culture in short video. Polity Press.
Köver, C., & Reuter, M. (2019). TikTok curbed reach for people with disabilities. https://netzpolitik.org/2019/
discrimination‐tiktok‐curbed‐reach‐for‐people‐with‐disabilities/
Massanari, A. (2017). #Gamergate and the fappening: How Reddit's algorithm, governance, and culture support
toxic technocultures. New Media & Society, 19(3), 329–346.
Matamoros‐Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race‐based
controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946.
Matamoros‐Fernández, A., & Kaye, D. B. V. (2020, September 8). TikTok suicide video: It's time platforms
collaborated to limit disturbing content. The Conversation. https://theconversation.com/tiktok‐suicide‐video‐
its‐time‐platforms‐collaborated‐to‐limit‐disturbing‐content‐145756
McDonald, N., Schoenebeck, S., & Forte, A. (2019). Reliability and inter‐rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice. Proceedings of the ACM on Human‐Computer Interaction, 3(CSCW), 1–23.
Muliadi, B. (2020, July 7). What the rise of TikTok says about generation Z. Forbes. https://www.forbes.com/sites/
forbestechcouncil/2020/07/07/what‐the‐rise‐of‐tiktok‐says‐about‐generation‐z/?sh=6ea64c656549
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Ofcom. (2021). Children and parents: Media use and attitudes report. https://www.ofcom.org.uk/__data/assets/
pdf_file/0025/217825/children‐and‐parents‐media‐use‐and‐attitudes‐report‐2020‐21.pdf
Pavlopoulos, J., Malakasiotis, P. & Androutsopoulos, I. (2017). Deeper attention to abusive user content moderation.
In Proceedings of the 2017 conference on empirical methods in natural language processing (pp. 1125–1135).
Ridley, K. (2021, April 21). TikTok faces claim for billions in London child privacy lawsuit. Reuters. https://www.
reuters.com/technology/tiktok‐faces‐claim‐billions‐london‐child‐privacy‐lawsuit‐2021‐04‐20/
Roberts, S. T. (2016). Digital refuse: Canadian garbage, commercial content moderation and the global circulation of social media's waste. Media Studies Publications, 14(1), 1–12. https://ir.lib.uwo.ca/commpub/14
Ruckenstein, M., & Turunen, L. L. M. (2020). Re‐humanizing the platform: Content moderators and the logic of care. New Media & Society, 22(6), 1026–1042.
Saurwein, F., & Spencer‐Smith, C. (2020). Combating disinformation on social media: Multilevel governance and
distributed accountability in Europe. Digital Journalism, 8(6), 820–841.
Shead, S. (2020). TikTok is luring Facebook moderators to fill new trust and safety hubs. CNBC. https://www.cnbc.
com/2020/11/12/tiktok‐luring‐facebook‐content‐moderators.html
Suzor, N. P. (2019). Lawless: The secret rules that govern our digital lives. Cambridge University Press.
Suzor, N. P., West, S. M., Quodling, A., & York, J. (2019). What do we mean when we talk about transparency?
Toward meaningful transparency in commercial content moderation. International Journal of Communication,
13(2019), 1526–1543.
Thomas, K., Grier, C., Song, D., & Paxson, V. (2011). Suspended accounts in retrospect: An analysis of Twitter
spam. In Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference (pp.
243–258).
TikTok. (2020a, June 19). How TikTok recommends videos #foryou. TikTok Newsroom. https://newsroom.tiktok.
com/en‐us/how‐tiktok‐recommends‐videos‐for‐you
TikTok. (2020b, December 1). Community guidelines. TikTok Resources. https://www.tiktok.com/community‐
guidelines?lang=en#31
TikTok. (2021a, February 24). TikTok transparency report. TikTok Safety. https://www.tiktok.com/safety/resources/
transparency‐report‐2020‐2?lang=en&appLaunch=
TikTok. (2021b, September 27). Thanks a billion! TikTok Newsroom. https://newsroom.tiktok.com/en‐us/1‐billion‐
people‐on‐tiktok
van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1(1), 2–14. https://
ssrn.com/abstract=2309065
van Dijck, J., Poell, T., & De Waal, M. (2018). The platform society: Public values in a connective world. Oxford
University Press.
West, S. M. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social
media platforms. New Media & Society, 20(11), 4366–4383. https://doi.org/10.1177/1461444818773059
Wilkinson, W. W., & Berry, S. D. (2020). Together they are Troy and Chase: Who supports demonetization of gay content on YouTube? Psychology of Popular Media, 9(2), 224–235.
Witt, A., Suzor, N., & Huggins, A. (2019). The rule of law on Instagram: An evaluation of the moderation of images depicting women's bodies. The University of New South Wales Law Journal, 42(2), 557–596.
Wordsworth, R. (2019, March 25). The secrets of review‐bombing: Why do people write zero‐star reviews? The
Guardian. https://www.theguardian.com/games/2019/mar/25/review‐bombing‐zero‐star‐reviews
Zarsky, T. (2016). The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in
automated and opaque decision making. Science, Technology, & Human Values, 41(1), 118–132. https://doi.
org/10.1177/0162243915605575
Zeng, J. (2020). #MeToo as connective action: A study of the anti‐sexual violence and anti‐sexual harassment campaign on Chinese social media in 2018. Journalism Practice, 14(2), 171–190.
Zeng, J., & Abidin, C. (2021). ‘#OkBoomer, time to meet the Zoomers': Studying the memefication of intergenerational politics on TikTok. Information, Communication & Society, 25(16), 2459–2481. https://doi.org/10.1080/1369118X.2021.1961007
Zeng, J., Burgess, J., & Bruns, A. (2019). Is citizen journalism better than professional journalism for fact‐checking
rumours in China? How Weibo users verified information following the 2015 Tianjin blasts. Global Media and
China, 4(1), 13–35.
Zulli, D., & Zulli, D. J. (2020). Extending the Internet meme: Conceptualizing technological mimesis and imitation publics on the TikTok platform. New Media & Society. Advance online publication. https://doi.org/10.1177/1461444820983603

How to cite this article: Zeng, J., & Kaye, D. B. V. (2022). From content moderation
to visibility moderation: A case study of platform governance on TikTok. Policy &
Internet, 14, 79–95. https://doi.org/10.1002/poi3.287
