Deepfakes, Misinformation, and Disinformation in The Era of Frontier AI, Generative AI, and Large AI Models

This paper appears in IEEE International Conference on Computer Applications (ICCA), 2023.

Abstract—With the advent of sophisticated artificial intelligence (AI) technologies, the proliferation of deepfakes and the spread of m/disinformation have emerged as formidable threats to the integrity of information ecosystems worldwide. This paper provides an overview of the current literature. Within frontier AI's crucial application in developing defense mechanisms for detecting deepfakes, we highlight the mechanisms through which generative AI based on large models (LM-based GenAI) crafts seemingly convincing yet fabricated content. We explore the multifaceted implications of LM-based GenAI for society, politics, and individual privacy, underscoring the urgent need for robust defense strategies. To address these challenges, we introduce an integrated framework that combines advanced detection algorithms, cross-platform collaboration, and policy-driven initiatives to mitigate the risks associated with AI-Generated Content (AIGC). By leveraging multi-modal analysis, digital watermarking, and machine learning-based authentication techniques, we propose a defense mechanism adaptable to the ever-evolving capabilities of AI. Furthermore, the paper advocates for a global consensus on the ethical usage of GenAI and the implementation of cyber-wellness educational programs to enhance public awareness and resilience against m/disinformation. Our findings suggest that a proactive and collaborative approach combining technological innovation and regulatory oversight is essential for safeguarding netizens in cyberspace against the insidious effects of deepfakes and GenAI-enabled m/disinformation campaigns.

Index Terms—Deepfakes, disinformation, misinformation, large AI models, frontier AI, foundation models, AI-generated content (AIGC), generative AI.

I. INTRODUCTION

Frontier AI, characterized by its advanced capabilities and cutting-edge applications, significantly enhances the realism of deepfakes [1]. Concurrently, it is instrumental in devising innovative solutions to detect and counter m/disinformation. Frontier AI encompasses new, innovative AI technologies that could exhibit sufficiently dangerous capabilities, such as generative AI, advanced machine learning algorithms, and large models. The implications of frontier AI technologies extend beyond technological advancement, necessitating a global consensus on the ethical use of these tools and the implementation of comprehensive cyber-wellness educational programs¹. Such measures are critical in equipping society to navigate the complex dynamics of information dissemination and integrity in the era of frontier AI².

Over the last decade, precipitous advances in Generative Artificial Intelligence with large models (LM-based GenAI) have made revolutionary progress in crafting human-like multimedia content (e.g., text, image, video, or audio). Foundation models, a form of adaptable large model, have become the backbone of significant technological progress, driving innovations from autonomous vehicles to personalized medicine [2]. However, given the power of LM-based GenAI tools, they may bring unprecedented risks and unintended consequences to our society, for instance by empowering malicious actors to conduct cyber-scamming or cyberbullying in the form of deepfake advertisements on social media platforms [3]. This paper delves into these phenomena by discussing the possible outcomes of LM-based GenAI models, their societal impacts, and today's urgent need for comprehensive defense mechanisms and sufficient cyber-wellness programs [4].

Deepfakes, a portmanteau of "deep learning" and "fake" media, are digital fabrications in which realistic likenesses of things are synthetically generated or entirely altered to say or do something that never occurred [5]. Due to the public accessibility of sophisticated LM-based GenAI tools (e.g., ChatGPT and LivePerson), anyone can craft deepfake content. As these capabilities become democratized, the potential for misuse scales exponentially. Mis/disinformation, closely related to but not limited to deepfakes, encompasses all forms of false or misleading information deliberately spread to deceive netizens (active Internet users). This phenomenon is not new; however, the advent of LM-based GenAI models has supercharged its potential reach and believability. Multimedia content (e.g., text, images, audio, and video) produced by LM-based GenAI tools can now fabricate reality so convincingly that distinguishing truth from fiction becomes increasingly challenging (e.g., family voice cloning threats) [6].

¹ https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
² https://www.channelnewsasia.com/singapore/ai-safety-summit-singapore-pm-lee-frontier-3892476
The implications of LM-based GenAI technologies are profound and multifaceted. Democracies worldwide grapple with the ramifications of AI-generated content (AIGC) for electoral processes and public opinion [5]. Netizens face unprecedented threats to their privacy and security, as deepfakes created from their public data may circulate without their consent or even knowledge. Furthermore, the media landscape, the traditional outlet for factual information, is undergoing a seismic shift as journalists and content creators confront the existential question of what constitutes trustworthy information in the post-deepfake era [7].

II. BACKGROUND

This section gives the background for our discussion.

A. Historical Context of Information Manipulation

Historically, information manipulation was labor-intensive and required significant resources, restricting its practice to powerful entities such as state officials or large organizations [8]. The infamous propaganda of wartime misinformation campaigns, psychological operations, and political machinations testifies to how such entities sought to sway public opinion or discredit opposition [9]. The advent of digital technology began a shift, enabling broader participation in information manipulation through the rise of Photoshop, video editing, social media platforms, and LM-based GenAI tools that disseminate such content widely and rapidly [10].

B. Frontier AI Amplifying and Combating Digital Deception

Frontier AI has reshaped the challenges in information manipulation. Its advances in neural networks and machine learning have heightened the realism of deepfakes, complicating the distinction between real and fake content³. Concurrently, frontier AI is crucial in developing tools to counter misinformation and disinformation, as highlighted in recent studies. This dual role underscores its potential for both generating and detecting digital falsehoods.

³ https://www.gov.uk/government/publications/frontier-ai-taskforce-first-progress-report/frontier-ai-taskforce-first-progress-report

C. Evolution of LM-based GenAI Tools in Media Creation

The role of LM-based GenAI tools in media creation started benignly enough, with techniques designed to enhance image quality, recommend content, or power voice assistants. As machine learning models advanced, they transcended these supportive roles, becoming regular tools in content creation. Generative adversarial networks (GANs) [11], introduced in 2014, represented a significant leap forward, enabling the creation of photorealistic images indistinguishable from actual photographs by unaided human vision. The evolution continues with LM-based GenAI that can synthesize human voices, compose music, and create realistic video footage [2].

D. AI-Generated Mis/Disinformation

Technically, if deepfakes are generated based on event-related concepts, they can constitute mis/disinformation [12]. While text manipulation is less technologically complex than manipulation of other media, the implications are no less severe. Automated chatbots can disseminate false information by deploying LM-based GenAI tools that craft fake news articles claiming to be written by reputable sources. Malicious actors can deploy these chatbots to spread such articles on social media platforms, which can inadvertently prioritize and amplify misleading content.

E. Previous Efforts in Combating Digital Misinformation

In the literature, many researchers have taken promising steps to counter digital misinformation involving content moderation, community reporting, and algorithmic detection [2], [3]. However, these methods face challenges such as overwhelming content volume and evolving misinformation techniques. LM-based GenAI models play a significant role in spreading deepfakes that may cause mis/disinformation, necessitating a deeper understanding of effective defense strategies [13], [14].

III. THE RISE OF LARGE AI MODELS

The third decade of the 21st century marks a turning point in the capabilities of artificial intelligence, primarily through the advent of LM-based GenAI. AI foundation models and LM-based GenAI models (i.e., LLMs, LVMs, LAMs, or LMMs) have demonstrated unprecedented proficiency in understanding and generating human-like text, images, and sounds, leading to significant advancements in AIGC [15]. This section outlines the development of LM-based GenAI models, their capabilities, and their associated risks.

A. Overview of LM-based GenAI Models

LM-based GenAI models, such as OpenAI's GPT (Generative Pre-trained Transformer) series [16], Google's BERT (Bidirectional Encoder Representations from Transformers) [17], and others, represent the cutting edge of automatic content generation. These models, which include Large Language Models (LLMs), Large Vision Models (LVMs), Large Audio Models (LAMs), and Large Multimodal Models (LMMs), are characterized by deep neural networks consisting of millions or even billions of parameters that enable them to process and generate complex data patterns. The "large" in their name denotes not only their size in parameters but also the vast training datasets and substantial computational power required for their operation.

B. Training and Functioning

Training LM-based GenAI models involves feeding them enormous datasets, often sourced from the Internet, including books, articles, websites, and other publicly available media. This training allows the models to learn the nuances of human language, visual cues, and audio patterns [18]. They function by predicting the next word in a sentence, the next pixel in an image, or the next waveform segment in an audio file, learning from context and mimicking the style and texture of their training data [19].
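To make this next-token principle concrete, the following minimal Python sketch generates text autoregressively from a toy bigram count table. The corpus, the temperature parameter, and the fallback rule are illustrative assumptions; production LLMs replace the count table with a deep neural network over billions of parameters, but the generation loop is conceptually the same.

```python
import math, random
from collections import defaultdict

# Toy "language model": bigram counts from a tiny corpus. Real LM-based
# GenAI replaces this table with a deep neural network.
corpus = "the model predicts the next word given the previous word".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev: str, temperature: float = 1.0) -> str:
    """Sample the next token from the conditional distribution P(next | prev)."""
    candidates = counts[prev]
    if not candidates:                 # unseen context: fall back to uniform
        return random.choice(corpus)
    # Temperature rescales log-counts before sampling.
    weights = [math.exp(math.log(c) / temperature) for c in candidates.values()]
    return random.choices(list(candidates), weights=weights)[0]

# Autoregressive generation: each prediction is fed back as the next context.
token, output = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```

Scaling this loop to transformer-scale models is what gives LM-based GenAI its fluency, and also what makes synthetic text hard to distinguish from human writing.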
C. Case Studies of Deepfakes and their Associated M/disinformation

Real-world instances of misuse of LM-based GenAI technologies provide sobering case studies. Deepfake videos have been used to create fake celebrity advertisements, pornographic videos, fabricated political speeches, and voice clones of CEOs used to commit fraud. AIGC has been employed to create fake news articles and social media posts that have gone viral, influencing public opinion and potentially affecting election outcomes [20].

D. Risks Associated with LM-based GenAI Capabilities

Evidently, the risks these models pose extend beyond their intended capabilities, as they provide opportunities for academic misconduct [21], deepfake phishing [22], and many as-yet-undiscovered threats. Because LM-based GenAI tools can be deployed to produce convincing m/disinformation, virtually anyone with the requisite technical know-how can launch sophisticated misleading campaigns. The potential for these technologies to be used for blackmail, electoral interference, and social unrest is a pressing concern [23]. Moreover, the speed at which AIGC can be produced outstrips the ability of current detection and moderation systems to keep up, creating a game of digital cat-and-mouse in which the mouse is increasingly agile.

IV. SOCIETAL IMPLICATIONS

The societal implications of deepfakes and mis/disinformation generated by LM-based GenAI are unprecedented, touching every facet of modern life, from politics and security to individual rights and societal trust [24]. In the following, we provide an overview of the far-reaching consequences of these phenomena and underscore the critical need for a robust societal response.

A. Effects on Democracy and Public Opinion

In democratic societies, the integrity of public discourse is foundational. Deepfakes deployed as LM-based GenAI-generated misinformation threaten the integrity of news propagation, as they can be exploited to fabricate scandals, falsify records of public statements, and manipulate electoral processes. When voters cannot distinguish between real and falsified representations of candidates or policies, the very fabric of democratic decision-making is undermined [25]. The dissemination of spurious information can sway elections, fuel political polarization, and erode the public's trust in democratic governments.

B. Impact on Privacy and Personal Security

The ability to create convincing fake images and videos of individuals without their consent has raised alarm bells regarding privacy and personal security. Deepfakes can be weaponized to discredit individuals, exploit them for blackmail, or invade their privacy in egregious ways, as seen in the creation of non-consensual deepfake pornography [26]. Such fabrications chill free expression and create a pervasive sense of vulnerability as individuals grapple with the potential for their likeness to be used in harmful ways.

C. Consequences for Media and Journalism

Journalism's role as the fourth estate is predicated on the ability to provide accurate, reliable information. Deepfakes and AIGC pose existential challenges to this role. Journalists are forced to contend with the additional burden of verifying content authenticity while the public grows increasingly skeptical of media reports [27]. This skepticism can lead to a 'cry wolf' scenario in which even legitimate news content is doubted, contributing to a disconcerting post-truth era where facts are fungible and truth is subjective.

D. Erosion of Public Trust

The cumulative effect of unchecked deepfakes and misinformation is the erosion of public trust [28]. When netizens cannot trust their eyes or ears, they can become cynical and disengaged. This disengagement poses risks not just to political processes but to the social fabric that binds communities together. Without trust, conspiracy theories flourish, scientific consensus is questioned, and social polarization deepens.

E. Legal and Ethical Dilemmas

The rise of AIGC has also precipitated legal and ethical dilemmas [29]. Current laws are ill-equipped to handle the nuances of deepfakes, often lagging behind technological advancements. Ethically, the implications are just as complex: creating and distributing deepfakes of people without their consent violates their rights.

V. TECHNICAL DEFENSE MECHANISMS

This section discusses the technological, strategic, and policy-oriented defense approaches that can mitigate the risks associated with AIGC. Since the construction of realistic deepfakes and the dissemination of mis/disinformation have become more sophisticated with the advancement of LM-based GenAI tools, developing robust technical defense mechanisms is a complex agenda. Below, we outline current and emerging technologies aimed at detecting and countering AIGC, and the challenges inherent in their deployment.

A. Detection Algorithms

Detection is the first line of defense against AI-generated false content. Algorithms designed to identify deepfakes typically analyze data points that may indicate manipulation, such as inconsistencies in lighting, unnatural blinking patterns, or irregularities in skin texture. Advances in machine learning have led to models that can scrutinize video frames for signs of alteration at the pixel level, often with the aid of deep learning techniques similar to those used to create deepfakes [3]. Audio deepfake detection similarly analyzes vocal patterns, looking for subtle signs of manipulation that may not be apparent to the human ear, including irregularities in speech patterns, breathing sounds, and background noise [3]. The challenge lies in the fact that as detection algorithms become more sophisticated, so too do the methods for creating deepfakes, leading to an ongoing arms race between creators and detectors.
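To ground the frame-level analysis described above, here is a deliberately small sketch of a per-frame deepfake scorer with clip-level aggregation. The tiny CNN, the 64x64 input size, and the 0.5 threshold are illustrative assumptions; real detectors use much deeper networks trained on labeled corpora of genuine and manipulated video.

```python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that scores a single RGB frame as real (0) or fake (1).
    Illustrative architecture only; production detectors are far deeper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) -> per-frame fake probability in [0, 1]
        feats = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(feats)).squeeze(1)

def score_clip(model: nn.Module, clip: torch.Tensor, threshold: float = 0.5) -> bool:
    """Aggregate per-frame scores; flag the clip if the mean exceeds the threshold."""
    with torch.no_grad():
        return model(clip).mean().item() > threshold

# Example: 16 random 64x64 frames stand in for a decoded video clip.
model = FrameClassifier()
clip = torch.rand(16, 3, 64, 64)
print("flagged as deepfake:", score_clip(model, clip))
```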
B. AI-Driven Authentication Methods

In addition to detection, authentication methods aim to verify the origin and integrity of content. Digital watermarking, for instance, involves embedding a hidden and unique pattern or code within the content at the time of creation, which can later be used to confirm its authenticity [30]. Blockchain technology offers another layer of security by providing a decentralized and immutable ledger of content creation and distribution, making unauthorized alterations easily traceable. Another approach is the use of biometric authentication, which employs unique biological characteristics such as facial recognition patterns, voiceprints, or even typing rhythms to confirm the identity of individuals in digital media [31]. These methods, however, must balance the need for security with concerns about privacy and the potential for misuse.
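As a concrete, simplified instance of digital watermarking, the sketch below embeds a keyed integrity tag (an HMAC) into the least significant bits of an image array and checks it on verification. The LSB carrier, the shared key, and the tag length are illustrative assumptions; production watermarking schemes are designed to survive compression and resizing, which this toy scheme does not.

```python
import hmac, hashlib
import numpy as np

KEY = b"shared-secret-key"   # held by the content creator (illustrative)
MARK_BITS = 256              # length of the embedded HMAC-SHA256 tag

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Hide an HMAC of the image's high-order bits in its LSBs."""
    carrier = pixels.copy().ravel()
    content = (carrier >> 1).tobytes()          # high bits are unaffected by embedding
    tag = hmac.new(KEY, content, hashlib.sha256).digest()
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    carrier[:MARK_BITS] = (carrier[:MARK_BITS] & 0xFE) | bits
    return carrier.reshape(pixels.shape)

def verify_watermark(pixels: np.ndarray) -> bool:
    """Recompute the HMAC over the high bits and compare with the embedded tag."""
    carrier = pixels.ravel()
    content = (carrier >> 1).tobytes()
    expected = hmac.new(KEY, content, hashlib.sha256).digest()
    embedded = np.packbits(carrier[:MARK_BITS] & 1).tobytes()
    return hmac.compare_digest(embedded, expected)

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_watermark(image)
print(verify_watermark(marked))        # True: content intact
marked[32, 32] ^= 0x80                 # simulate tampering with one pixel
print(verify_watermark(marked))        # False: alteration detected
```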
C. Machine Learning-Based Authentication Techniques

Machine learning is not only used to create deepfakes but can also be harnessed to combat them. Models can be trained to recognize the digital 'fingerprints' left by the AI models that generate deepfakes. These fingerprints are often subtle flaws or patterns in the generated content that are consistent with the training data or generation method used [32]. By analyzing these fingerprints, machine learning algorithms can identify whether content has been artificially generated or altered.
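One way such fingerprints can be operationalized, sketched below under simplifying assumptions, is to extract a radially averaged Fourier spectrum from each image and train a linear classifier to separate authentic from generated samples; upsampling layers in generative networks are known to leave periodic spectral artifacts. The synthetic data here merely stands in for a real corpus of genuine and generated media.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_fingerprint(image: np.ndarray) -> np.ndarray:
    """Radially averaged log-magnitude spectrum: a cheap 'fingerprint' feature."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2).astype(int)
    profile = np.bincount(radius.ravel(), weights=np.log1p(spectrum).ravel())
    counts = np.bincount(radius.ravel())
    return profile / np.maximum(counts, 1)

# Stand-in data: 'real' images are smooth noise; 'fake' images carry a weak
# periodic artifact of the kind generator upsampling layers can leave behind.
rng = np.random.default_rng(0)
def fake(img: np.ndarray) -> np.ndarray:
    img = img.copy()
    img[::4, :] += 0.5   # inject a subtle grid pattern
    return img

reals = [rng.normal(size=(64, 64)) for _ in range(50)]
fakes = [fake(rng.normal(size=(64, 64))) for _ in range(50)]
X = np.array([spectral_fingerprint(i) for i in reals + fakes])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))   # near 1.0 on this toy task
```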
D. Limitations and Challenges of Current Technologies

While these technologies show promise, they are not without limitations. Deepfake creation techniques are evolving rapidly, and detection methods must continually adapt to keep pace [33]. Moreover, the computational resources required to analyze large volumes of content in real time are substantial, and false positives remain a concern. Another challenge is the ease of access to deepfake generation tools, which can be used by individuals with minimal technical expertise, further complicating detection efforts [34]. Additionally, the adaptability of AI means that as soon as a detection method becomes effective, new techniques are developed to circumvent it. This cat-and-mouse dynamic requires a proactive and dynamic approach to developing defense mechanisms.

E. The Need for Open Collaboration

Given the scale and complexity of the challenge, open collaboration among academia, industry, and government is necessary. Sharing data, research findings, and strategies can accelerate the development of effective defense mechanisms [35]. Transparency in the functioning of detection and authentication technologies is also crucial to build trust and ensure these tools are used responsibly.

VI. CROSS-PLATFORM STRATEGIES

The digital ecosystem's interconnected nature necessitates cross-platform strategies to combat the spread of deepfakes and mis/disinformation effectively. This section outlines a collaborative approach that spans various stakeholders, including social media companies, technology firms, content creators, and end-users.

A. The Role of Social Media and Technology Companies

Social media platforms are the primary battlegrounds for the spread of deepfakes and mis/disinformation due to their vast reach and the speed at which content can go viral. These companies have a responsibility to actively monitor and mitigate the spread of fake content. Strategies include [36]:
• Content Moderation Enhancements: Using a combination of AI-driven and human moderation to detect and flag deepfakes.
• Partnerships with Fact-Checkers: Collaborating with independent fact-checking organizations to verify content.
• User Reporting Mechanisms: Empowering users to report suspicious content, which can then be reviewed by specialized teams.
• Transparency Reports: Publishing regular reports on the number of deepfakes detected and the actions taken.
• User Education: Providing educational resources to help users spot and understand the nature of deepfakes.

B. Collaborative Filtering and Fact-Checking Initiatives

Collaborative filtering involves leveraging the collective effort of platform users to identify and filter out disinformation [37]. This can be facilitated through:
• Community-Driven Moderation: Enabling community moderators to review and moderate content within their domains of expertise.
• Crowdsourced Verification: Utilizing crowdsourcing to gather user input on the authenticity of content.
• Real-Time Fact-Checking: Implementing systems that provide live fact-checking during events, speeches, and debates.

C. User-Centric Approaches

Putting users at the center of the defense strategy involves education and empowerment [38]. This includes:
• Digital Literacy Programs: Educating the public on digital media, the existence of deepfakes, and the importance of critical thinking online.
• Critical Media Literacy: Encouraging users to question the source and intent behind the content they consume.
• Promotion of Verified Content: Boosting the visibility of content from verified and reputable sources.

D. Community Guidelines and Enforcement

Platforms must establish clear community guidelines that define acceptable use and the consequences of spreading deepfakes and mis/disinformation [39]. Enforcement actions may include:
• Content Removal: Removing or demoting content that violates platform policies.
• Account Suspension: Temporarily or permanently suspending accounts that repeatedly disseminate fake content.
• User Feedback: Informing users when they have interacted with or shared false content.

E. Developing Standardized Protocols

To streamline cross-platform efforts, there is a need for standardized protocols for content verification, data sharing, and incident response. This could involve the following (a minimal sketch of a verification tag follows the list) [40]:
• Interoperable Verification Tags: Creating tags that indicate content has been verified, which can be recognized across different platforms.
• Data Sharing Agreements: Establishing agreements to share data on deepfake and misinformation trends and techniques.
• Joint Response Frameworks: Developing coordinated response plans for widespread disinformation campaigns.
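To illustrate what an interoperable verification tag could look like, the sketch below signs a minimal provenance record so that any platform holding the issuer's published public key can check it. The field names, the Ed25519 signature scheme, and the example issuer are illustrative assumptions, not an existing standard; the snippet requires the third-party cryptography package.

```python
import json, time
from dataclasses import dataclass, asdict
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

@dataclass
class VerificationTag:
    """Hypothetical cross-platform tag asserting that content was verified."""
    content_sha256: str      # hash of the media file being vouched for
    verdict: str             # e.g. "authentic", "manipulated", "unverified"
    verifier: str            # organization issuing the tag (illustrative)
    issued_at: float         # Unix timestamp

    def payload(self) -> bytes:
        # Canonical serialization so signer and verifier hash identical bytes.
        return json.dumps(asdict(self), sort_keys=True).encode()

# The fact-checking organization signs the tag with its private key...
signing_key = Ed25519PrivateKey.generate()
tag = VerificationTag("9f2a...e1", "authentic", "example-factcheck.org", time.time())
signature = signing_key.sign(tag.payload())

# ...and any cooperating platform holding the public key can verify it.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, tag.payload())
    print("tag accepted")
except InvalidSignature:
    print("tag rejected")
```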
VII. ETHICAL CONSIDERATIONS

The ethical implications of deepfakes and misinformation are as vast and complex as their technical and social counterparts [41]. This section explores the moral landscape that AIGC presents, the responsibilities of creators and disseminators, and the overarching need for ethical guidelines to shape the evolution of AI technologies.

A. Ethical AI Development and Use

The development of AI technologies is not value-neutral; it reflects the biases, priorities, and ethical orientations of its creators. Therefore, the following need to be addressed:
• Bias and Fairness: There is a need for ethical AI development that actively seeks to minimize biases in training data and algorithms, ensuring fairness and non-discrimination [42].
• Transparency: AI systems should be developed with transparency in mind, allowing for traceability and explainability in the AI's decision-making processes [43].
• Accountability: Developers and users of AI must be accountable for the outcomes of their technologies, particularly when they impact public opinion or infringe on personal rights [44].

B. The Balance between Innovation and Regulation

There is a delicate balance to be maintained between encouraging innovation in AI and implementing regulations that protect against its misuse:
• Innovation-Friendly Policies: Policies should aim to foster innovation and the beneficial applications of AI while guarding against risks.
• Proactive Ethical Design: AI should be designed proactively with ethical considerations in mind, rather than retroactively applying ethical standards to existing technologies.

C. Future Outlook and Philosophical Implications

AI's capabilities force us to confront deep philosophical questions about the nature of truth, reality, and human experience:
• Ontological Questions: As AI blurs the lines between reality and simulation, we must address the ontological status of experiences and entities created by AI.
• Epistemological Considerations: The proliferation of deepfakes calls into question the basis of knowledge and the conditions under which we can claim to know something as true or false.
• Human Agency and Autonomy: There is a need to consider how AI impacts human agency and autonomy, particularly when individuals are subject to AI-generated representations without their consent.

D. The Ethical Use of Deepfakes

While deepfakes are often discussed in negative terms, they also have potentially positive applications:
• Artistic and Educational Uses: Deepfakes can be used for legitimate artistic expression or educational purposes, such as recreating historical speeches [5].
• Medical and Therapeutic Applications: There are possibilities for using deepfake technology in medical simulations or therapeutic settings [45].

VIII. PROPOSED INTEGRATED DEFENSE FRAMEWORK

The multifaceted nature of the threats posed by deepfakes and mis/disinformation necessitates a comprehensive response [46]. This section proposes an integrated defense framework that synthesizes technological, strategic, policy-oriented, and educational responses to these threats.

A. Design of the Integrated Defense Framework

The proposed framework is designed around four key pillars:
• Technological Solutions: Incorporating advanced detection algorithms, AI-driven authentication methods, and machine learning-based authentication techniques.
• Strategic Initiatives: Implementing cross-platform strategies, including content moderation enhancements and collaborative filtering.
• Policy and Regulation: Developing new legislation and ethical guidelines that clearly define and impose penalties for the creation and distribution of deepfakes.
• Education and Public Awareness: Launching comprehensive educational programs and public awareness campaigns to improve media literacy and critical thinking.

B. Implementation of the Framework

For effective implementation, the framework requires:
• Multi-Stakeholder Collaboration: Coordination among governments, tech companies, academia, and civil society to ensure a united front against deepfakes.
• Resource Allocation: Commitment of financial, human, and technological resources to support the framework's initiatives.
• Adaptive Strategies: Continuous adaptation of strategies to address the evolving nature of deepfake and misinformation tactics [47].

C. Case Study: Applying the Framework in a Simulated Environment

To validate the framework, a simulated environment that replicates the complex ecosystem of media platforms and AIGC can be created [48]. Here, the framework's components would be tested against various attack scenarios to assess their effectiveness and identify areas for improvement, as sketched below.
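A minimal sketch of such a simulated evaluation follows: synthetic posts with known ground truth flow through a pluggable detector, and the loop tallies detections, misses, and false positives per attack scenario. The scenario names and the probabilistic stand-in detector are illustrative assumptions; a real testbed would plug in actual detection models and platform dynamics.

```python
import random
from collections import Counter

random.seed(7)
SCENARIOS = ["voice-clone scam", "fabricated speech", "fake news article"]

def make_post(scenario: str) -> dict:
    """Synthetic post with ground truth; half of each scenario's posts are fake."""
    return {"scenario": scenario, "is_fake": random.random() < 0.5}

def detector(post: dict) -> bool:
    """Stand-in detector: 80% recall on fakes, 10% false-positive rate."""
    hit_rate = 0.8 if post["is_fake"] else 0.1
    return random.random() < hit_rate

# Run each attack scenario through the detector and tally the outcomes.
tally = Counter()
for scenario in SCENARIOS:
    for _ in range(1000):
        post = make_post(scenario)
        flagged = detector(post)
        if post["is_fake"]:
            tally[(scenario, "detected" if flagged else "missed")] += 1
        elif flagged:
            tally[(scenario, "false positive")] += 1

for (scenario, outcome), count in sorted(tally.items()):
    print(f"{scenario:20s} {outcome:15s} {count}")
```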
D. Analysis of Framework Effectiveness

Evaluating the effectiveness of the defense framework involves:
• Monitoring and Evaluation: Regular assessment of each pillar's performance in detecting and countering deepfakes.
• Feedback Mechanisms: Systems for collecting feedback from stakeholders to inform the iterative improvement of the framework.
• Benchmarking: Setting benchmarks for success and conducting comparative analysis with other defense strategies.

E. Potential Unforeseen Consequences and Mitigation Strategies

While the framework aims to be comprehensive, there may be unforeseen consequences, such as over-censorship or the stifling of innovation. Mitigation strategies include:
• Ethical Oversight: Establishing ethical oversight committees to review the impact of defense measures.
• Balanced Approach: Ensuring a balanced approach that respects freedom of expression while protecting against misinformation.
• Rapid Response Protocols: Developing protocols for rapidly addressing negative consequences as they arise.

IX. DISCUSSION

The emergence of deepfakes and the proliferation of mis/disinformation through advanced AI models pose a significant threat to the integrity of information, necessitating a multi-pronged approach to mitigation [49]. This discussion evaluates the proposed solutions, explores potential unintended consequences, and highlights ongoing challenges and areas for future research.

A. Analysis of the Proposed Solutions' Effectiveness

The proposed integrated framework's effectiveness hinges on the synergy between its components:
• Technological Efficacy: The rapid detection of deepfakes is crucial. However, as the technology to create deepfakes becomes more sophisticated, detection methods may need to become more specialized, potentially leading to an arms race between creation and detection capabilities.
• Strategic Resilience: Cross-platform strategies emphasize the need for a coordinated response to misinformation. The scalability of such initiatives is vital, as is the ability to adapt quickly to new forms of disinformation.
• Policy Impact: The effectiveness of policy measures will largely depend on their enforcement and the international community's willingness to adopt and implement harmonized standards.
• Educational Outcomes: In the long term, the success of educational programs in enhancing the public's ability to discern true from false information may be one of the most sustainable defenses against misinformation.

B. Open Challenges and Areas for Future Research

Several challenges remain open, requiring ongoing attention:
• Technological Advancement: Keeping defensive measures up to date with the latest advancements in AI and deepfake technologies.
• Global Cooperation: Achieving consensus on international standards and cooperation in the face of geopolitical tensions and differing national interests.
• Public Engagement: Ensuring continued public engagement and understanding in the face of "fatigue" around the topic of misinformation.

Future research areas are plentiful, including:
• Behavioral Insights: Gaining a deeper understanding of why people create and spread misinformation, and how they are influenced by it.
• Economic Models: Developing economic models to understand the incentives behind the spread of deepfakes and misinformation [50].
• Technological Innovations: Exploring new technological innovations that can preemptively address the creation of deepfakes.

X. CONCLUSION

This paper emphasizes the critical role of frontier AI in countering the profound threat that deepfakes and generative AI pose to global information ecosystems. It underscores the need for a comprehensive, multi-faceted defense strategy that evolves in tandem with frontier AI advancements. The paper highlights the importance of developing sophisticated technological solutions, adopting adaptable international policies, and enhancing public education in media literacy to effectively combat these threats. Advocating a collaborative approach, it integrates advancements in frontier AI with regulatory strategies and media literacy education, framing the battle against deepfakes as not only a technical challenge but a broader societal issue.

ACKNOWLEDGEMENT

This research is partly supported by the Singapore Ministry of Education Academic Research Fund under Grant Tier 1 RG90/22, Grant Tier 1 RG97/20, Grant Tier 1 RG24/20, and Grant Tier 2 MOE2019-T2-1-176; and partly by the Nanyang Technological University (NTU)-Wallenberg AI, Autonomous Systems and Software Program (WASP) Joint Project.

REFERENCES

[1] M. Anderljung, J. Barnhart, J. Leung, A. Korinek, C. O'Keefe, J. Whittlestone, S. Avin, M. Brundage, J. Bullock, D. Cass-Beggs et al., "Frontier AI regulation: Managing emerging risks to public safety," arXiv preprint arXiv:2307.03718, 2023.
[2] Y. Cao, S. Li, Y. Liu, Z. Yan, Y. Dai, P. S. Yu, and L. Sun, "A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT," arXiv preprint arXiv:2303.04226, 2023.
[3] Y. Mirsky and W. Lee, "The creation and detection of deepfakes: A survey," ACM Computing Surveys (CSUR), vol. 54, no. 1, pp. 1–41, 2021.
[4] W. Shin and M. O. Lwin, "Parental mediation of children's digital media use in high digital penetration countries: Perspectives from Singapore and Australia," Asian Journal of Communication, vol. 32, no. 4, pp. 309–326, 2022.
[5] V. Danry, J. Leong, P. Pataranutaporn, P. Tandon, Y. Liu, R. Shilkrot, P. Punpongsanon, T. Weissman, P. Maes, and M. Sra, "AI-generated characters: Putting deepfakes to good use," in CHI Conference on Human Factors in Computing Systems Extended Abstracts, 2022, pp. 1–5.
[6] N. Amezaga and J. Hajek, "Availability of voice deepfake technology and its impact for good and evil," in Proceedings of the 23rd Annual Conference on Information Technology Education, 2022, pp. 23–28.
[7] J. Fletcher, "Deepfakes, artificial intelligence, and some kind of dystopia: The new faces of online post-fact performance," Theatre Journal, vol. 70, no. 4, pp. 455–471, 2018.
[8] R. W. Zmud, "Opportunities for strategic information manipulation through new information technology," Organizations and Communication Technology, pp. 95–116, 1990.
[9] D. Silverman, K. Kaltenthaler, and M. Dagher, "Seeing is disbelieving: The depths and limits of factual misinformation in war," International Studies Quarterly, vol. 65, no. 3, pp. 798–810, 2021.
[10] M. T. Ahvanooey, Q. Li, X. Zhu, M. Alazab, and J. Zhang, "ANiTW: A novel intelligent text watermarking technique for forensic identification of spurious information on social media," Computers & Security, vol. 90, p. 101702, 2020.
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, vol. 27, 2014.
[12] J. Zhou, Y. Zhang, Q. Luo, A. G. Parker, and M. De Choudhury, "Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions," in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023, pp. 1–20.
[13] B. He, Y. Hu, Y. Lee, S. Oh, G. Verma, and S. Kumar, "A survey on the role of crowds in combating online misinformation: Annotators, evaluators, and creators," arXiv, 2023, accessed November 21, 2023.
[14] S. Siwakoti, J. N. Shapiro, and N. Evans, "Less reliable media drive interest in anti-vaccine information," Harvard Kennedy School Misinformation Review, 2023.
[15] M. U. Hadi, R. Qureshi, A. Shah, M. Irfan, A. Zafar, M. B. Shaikh, N. Akhtar, J. Wu, S. Mirjalili et al., "Large language models: A comprehensive survey of its applications, challenges, limitations, and future prospects," 2023.
[16] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[17] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[18] S. Biderman, H. Schoelkopf, Q. G. Anthony, H. Bradley, K. O'Brien, E. Hallahan, M. A. Khan, S. Purohit, U. S. Prashanth, E. Raff et al., "Pythia: A suite for analyzing large language models across training and scaling," in International Conference on Machine Learning. PMLR, 2023, pp. 2397–2430.
[19] G. Xiao, J. Lin, M. Seznec, H. Wu, J. Demouth, and S. Han, "SmoothQuant: Accurate and efficient post-training quantization for large language models," in International Conference on Machine Learning. PMLR, 2023, pp. 38087–38099.
[20] D. Xu, S. Fan, and M. Kankanhalli, "Combating misinformation in the era of generative AI models," in Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 9291–9298.
[21] S. A. Bin-Nashwan, M. Sadallah, and M. Bouteraa, "Use of ChatGPT in academia: Academic integrity hangs in the balance," Technology in Society, vol. 75, p. 102370, 2023.
[22] Y. Mirsky, A. Demontis, J. Kotak, R. Shankar, D. Gelei, L. Yang, X. Zhang, M. Pintor, W. Lee, Y. Elovici et al., "The threat of offensive AI to organizations," Computers & Security, vol. 124, p. 103006, 2023.
[23] M. Mustak, J. Salminen, M. Mäntymäki, A. Rahman, and Y. K. Dwivedi, "Deepfakes: Deceptions, mitigations, and opportunities," Journal of Business Research, vol. 154, p. 113368, 2023.
[24] S. Gregory, "Fortify the truth: How to defend human rights in an age of deepfakes and generative AI," p. huad035, 2023.
[25] K. J. Schiff, D. S. Schiff, and N. Bueno, "The liar's dividend: The impact of deepfakes and fake news on trust in political discourse," 2023.
[26] U. A. Ciftci, G. Yuksek, and I. Demir, "My face my choice: Privacy enhancing deepfakes for social media anonymization," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023, pp. 1369–1379.
[27] K. Wahl-Jorgensen and M. Carlson, "Conjecturing fearful futures: Journalistic discourses on deepfakes," Journalism Practice, vol. 15, no. 6, pp. 803–820, 2021.
[28] M. Pawelec, "Deepfakes and democracy (theory): How synthetic audio-visual media for disinformation and hate speech threaten core democratic functions," Digital Society, vol. 1, no. 2, p. 19, 2022.
[29] C. Öhman, "Introducing the pervert's dilemma: A contribution to the critique of deepfake pornography," Ethics and Information Technology, vol. 22, no. 2, pp. 133–140, 2020.
[30] Y. Wang, "Synthetic realities in the digital age: Navigating the opportunities and challenges of AI-generated content," 2023.
[31] C. Campbell, K. Plangger, S. Sands, and J. Kietzmann, "Preparing for an era of deepfakes and AI-generated ads: A framework for understanding responses to manipulated advertising," Journal of Advertising, vol. 51, no. 1, pp. 22–38, 2022.
[32] A. Mitra, S. P. Mohanty, P. Corcoran, and E. Kougianos, "A machine learning based approach for deepfake detection in social media through key video frame extraction," SN Computer Science, vol. 2, pp. 1–18, 2021.
[33] H. Zhao, W. Zhou, D. Chen, T. Wei, W. Zhang, and N. Yu, "Multi-attentional deepfake detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2185–2194.
[34] M. Masood, M. Nawaz, K. M. Malik, A. Javed, A. Irtaza, and H. Malik, "Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward," Applied Intelligence, vol. 53, no. 4, pp. 3974–4026, 2023.
[35] D. Krishna, "Deepfakes, online platforms, and a novel proposal for transparency, collaboration, and education," Rich. JL & Tech., vol. 27, p. 1, 2020.
[36] J. T. Hancock and J. N. Bailenson, "The social impact of deepfakes," pp. 149–152, 2021.
[37] A. Ünver, "Emerging technologies and automated fact-checking: Tools, techniques and algorithms," Techniques and Algorithms (August 29, 2023), 2023.
[38] P. Gupta, K. Chugh, A. Dhall, and R. Subramanian, "The eyes know it: FakeET, an eye-tracking database to understand deepfake perception," in Proceedings of the 2020 International Conference on Multimodal Interaction, 2020, pp. 519–527.
[39] K. Kikerpill, A. Siibak, and S. Valli, "Dealing with deepfakes: Reddit, online content moderation, and situational crime prevention," in Theorizing Criminality and Policing in the Digital Media Age. Emerald Publishing Limited, 2021, vol. 20, pp. 25–45.
[40] K. Shiohara and T. Yamasaki, "Detecting deepfakes with self-blended images," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 18720–18729.
[41] N. Diakopoulos and D. Johnson, "Anticipating and addressing the ethical implications of deepfakes in the context of elections," New Media & Society, vol. 23, no. 7, pp. 2072–2098, 2021.
[42] B. Giovanola and S. Tiribelli, "Beyond bias and discrimination: Redefining the AI ethics principle of fairness in healthcare machine-learning algorithms," AI & Society, vol. 38, no. 2, pp. 549–563, 2023.
[43] U. Ehsan, Q. V. Liao, M. Muller, M. O. Riedl, and J. D. Weisz, "Expanding explainability: Towards social transparency in AI systems," in ACM CHI Conference on Human Factors in Computing Systems, 2021, pp. 1–19.
[44] A. Henriksen, S. Enni, and A. Bechmann, "Situated accountability: Ethical principles, certification standards, and explanation methods in applied AI," in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021, pp. 574–585.
[45] H.-C. Yang, A. R. Rahmanti, C.-W. Huang, and Y.-C. J. Li, "How can research on artificial empathy be enhanced by applying deepfakes?" Journal of Medical Internet Research, vol. 24, no. 3, p. e29506, 2022.
[46] D. Kelly and J. Burkell, "It's not (all) about the information: The role of cognition in creating and sustaining false beliefs," Cambridge Studies on Governing Knowledge Commons, 2024.
[47] P. T. Jaeger and N. G. Taylor, "Arsenals of lifelong information literacy: Educating users to navigate political and current events information in a world of ever-evolving misinformation," The Library Quarterly, vol. 91, no. 1, pp. 19–31, 2021.
[48] Y. Liu, H. Du, D. Niyato, J. Kang, Z. Xiong, C. Miao, and A. Jamalipour, "Blockchain-empowered lifecycle management for AI-generated content (AIGC) products in edge networks," arXiv preprint arXiv:2303.02836, 2023.
[49] K. Langmia, Black Communication in the Age of Disinformation: DeepFakes and Synthetic Media. Springer Nature, 2023.
[50] N. Kshetri, "The economics of deepfakes," Computer, vol. 56, no. 8, pp. 89–94, 2023.