Policy Brief 1

Hypothetical policy brief concerning the governance strategies of the platform Reddit


Brad Schurtz

MDST 3510
10/08/23

Executive Summary
This policy brief examines the lack of transparency around Reddit’s front-page content-sorting
algorithm and proposes greater clarity about which subreddits, accounts, and content are
prioritized in rising to the front page, as well as what kinds of content the algorithm attempts to
filter out. Reddit’s purportedly community-driven content-sorting platform has become
increasingly opaque as the company works to broaden its appeal to advertisers. Rather than
relying on community engagement for feeds such as “Best,” Reddit has moved toward
personalized, machine-learning-based recommendations while still presenting content as
curated by the community. This can enable misinformation and manipulation, as seen with
r/The_Donald in 2016, and fabricate a false sense of communal thinking. Greater transparency
would create a more inclusive and unobscured online environment, and allowing users to see
what content is filtered out could encourage them to be more careful about contributing
harmful content, including misinformation or bigoted speech.

Background and Methodology


Reddit’s history with harmful content is well documented and much discussed. Given the
nature of a community-driven content-curation system, the potential for “groupthink” and
ideological echo chambers is high; this likely counts among the reasons machine-learning
algorithmic sorting was implemented. However, according to a study by Jasser et al., content
marked as “controversial” by Reddit’s algorithm attracts more user engagement than
non-controversial content. If more controversial and harmful content attracts more activity, an
algorithm built on “engagement” is likely to spread it to more users, exposing more Redditors
to harmful content than a true community upvote-downvote system would. While the potential
for vote manipulation and misinformation existed before this new algorithm, it was more easily
tracked and prevented by moderators within the communities they controlled. The new
algorithm manipulates an unwitting user’s feed without their knowledge. An understanding of
how and why Reddit promotes certain posts and subreddits onto a user’s default feed would
restore user agency and community curation. As a study by Brady et al. found, the combination
of user interaction and algorithmic amplification can spread misinformation farther than it
otherwise would. The study also found that while a majority of what it dubbed “negative
moralized content” is generated by a minority of users, algorithmic promotion and
over-representation may lead users to believe those views are more common than they are.

Key Findings
From the research conducted and studies collected, the best path forward for Reddit to slow or
stop the spread of misinformation and hateful speech is to increase transparency, better
informing users of how their feeds and experiences are shaped by machine-learning systems.
Brady et al. identify two approaches to algorithmically promoted misinformation and biased
social learning. In the design-centered approach, more diverse content algorithms would lessen
the formation of echo chambers and curb the more inflammatory and tribalistic tendencies of
online community building; however, partisan sorting could still be an issue, and any algorithm
produced by a tech company will primarily serve the interest of profit rather than a safer
experience. As a result, the person-centered approach of increasing transparency around
algorithmic influence is the better solution, allowing users to understand how algorithms affect
their experience.

Recommendations
Rather than a complete overhaul of Reddit’s algorithmic content-moderation infrastructure,
which would be expensive, ambitious, and not certain to work, Reddit should instead inform
users of how its current algorithm works and at what points in the user experience it operates.
Additionally, Reddit should let users control the degree to which the content they see is shaped
by algorithmic influence, giving them the ability to opt in to or out of the algorithm’s
manipulation. These two steps would still allow Reddit to promote the posts the algorithm
pushes to the top for users who opt in, while giving those who want more control over their
time on the platform the ability to choose when and where the algorithm affects their feed.

Conclusion
Reddit is far from the only social media platform whose algorithmic content-moderation
practices deserve more scrutiny. However, it is a platform from which many users get the
majority of their news, and the company’s claims of community-driven content curation,
together with Reddit’s role as a news aggregator, make the spread of misinformation on the
platform particularly dangerous. Reddit’s history bears this out: from the mistaken “doxxing” of
multiple innocent people in the aftermath of the Boston Marathon bombing to the undeniable
role r/The_Donald played in the 2016 election, the power of Reddit as a platform and a
dispenser of information is clear. More transparency around the workings of the company’s
content algorithm, and around the potential biases its opacity may obscure, would improve the
user experience and help maintain Reddit’s reputation as a reliable and open platform.
Discussion of the new algorithm has been picking up on Reddit itself, with one user saying it
feels “lazier” in that it shows the same posts over and over, or saturates the home feed with
content they are not interested in because of what the algorithm registers as engagement.
Moreover, given the recent controversy over the company’s decision to effectively ban
third-party apps built to browse its platform, it is more important than ever to be transparent
and keep the userbase happy. The steps this brief outlines should be effective in satisfying both
stakeholders and users.
Works Cited
Brady, William J., Joshua Conrad Jackson, Björn Lindström, and M. J. Crockett. “Algorithm-Mediated
Social Learning in Online Social Networks.” Trends in Cognitive Sciences 27, no. 10 (2023):
947–960.

Jasser, J., Garibay, I., Scheinert, S., et al. “Controversial Information Spreads Faster and Further
than Non-Controversial Information in Reddit.” Journal of Computational Social Science 5
(2022): 111–122.

https://www.reddit.com/r/beta/comments/1007q6u/reddit_feeds_have_gotten_significantly_worse_in/
