
International Journal of Computer Science Trends and Technology (IJCST) – Volume 11 Issue 4, Jul-Aug 2023

RESEARCH ARTICLE OPEN ACCESS

Comparative Analysis of Deep-Fake Algorithms


Nikhil Sontakke [1], Sejal Utekar [2], Shivansh Rastogi [3], Shriraj Sonawane [4]
[1][2][3][4] Department of Computer Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra

ABSTRACT
Due to the widespread availability of smartphones with high-quality digital cameras, easy access to software for recording, editing, and sharing videos and images, and the rise of deep learning AI platforms, a new phenomenon of 'faked' videos has emerged. Deepfake algorithms can create fake images and videos that are virtually indistinguishable from authentic ones, so technologies that can detect and assess the integrity of digital visual media are crucial. Deepfakes have become a major concern in recent years because they can be used for malicious purposes such as spreading misinformation, impersonating individuals, and creating fake news. Deepfake detection technologies use approaches such as facial recognition, motion analysis, and audio-visual synchronization to identify and flag fake videos, but the rapid advancement of deepfake generation has made it increasingly difficult to detect these videos with high accuracy. In this paper, we provide a comprehensive review of the current state of deepfake creation and detection technologies. We examine the deep learning-based approaches used for creating deepfakes as well as the techniques used for detecting them, analyze the limitations and challenges of current detection methods, and discuss future research directions in this field. Overall, the paper highlights the importance of continued research and development in deepfake detection in order to combat the negative impact of deepfakes on society and ensure the integrity of digital visual media.
Keywords: Deep Learning, Python, Deepfake, Videos, Digital Forensics, Manipulating, Detecting, Classifying, Segmenting, Machine Learning.

I. INTRODUCTION
Deepfake detection is a relatively new field of research that emerged in response to the growing threat of manipulated media, particularly videos, in which the appearance and/or behavior of individuals is artificially altered using deep learning techniques. The term "deepfake" was first coined in 2017, and since then the technology has evolved rapidly, making it easier and cheaper to create convincing deepfake videos.

Technological advancements, particularly in handheld devices with high-definition cameras, combined with the widespread use of artificial intelligence tools, models, and apps, have resulted in a large number of videos of world-famous celebrities and leaders that have been doctored to convey fake news for political gain or to ridicule specific individuals [1]. Photographic and video evidence is routinely used in courtrooms and police investigations and is considered trustworthy. Video evidence, however, is becoming potentially untrustworthy as video manipulation techniques progress. It is likely that, in the not-too-distant future, video evidence will need to be reviewed for signs of tampering before being considered admissible in court.

To train models to create photorealistic images and videos, deepfake methods often require a vast amount of image and video data. Celebrities and politicians are the first targets of deepfakes since they have a vast number of videos and photographs online. In pornographic photographs and movies, deepfakes were used to superimpose the faces of celebrities and politicians onto other bodies. In 2017, the first deepfake video was released, in which a celebrity's face was replaced with a porn actor's face. Deepfake methods can also be used to fabricate speeches by world leaders, which poses a threat to global security. In 2018, a fake video of Barack Obama was made using statements he never said. DeepFakes have also been used to distort footage of Joe Biden, showing his tongue out, during the 2020 US election. These detrimental applications of deepfakes can have a significant impact on our society and can lead to the spread of false information, particularly on social media [3].

Several big companies have decided to take action against this phenomenon. Google has created a database of fake videos to support researchers who are developing new detection techniques, while AWS, Facebook, Microsoft, the Partnership on AI's Media Integrity Steering Committee, and academics have come together to build the Deepfake Detection Challenge (DFDC). The goal of the challenge is to spur researchers around the world to build innovative new technologies that can help detect deepfakes and manipulated media.

Deepfake detection is a rapidly growing field of research that aims to identify and mitigate the threat of manipulated media, particularly videos, in which the appearance and/or
ISSN: 2347-8578 www.ijcstjournal.org Page 109



behavior of individuals is artificially altered using deep learning techniques. The use of deepfake technology has the potential to cause significant harm in areas such as politics, entertainment, and personal relationships. In response, researchers have developed various algorithms and techniques using artificial intelligence (AI) to detect deepfakes and preserve the integrity of digital media. This paper aims to provide an overview of the current state of the art in deepfake detection, including the most widely used algorithms and approaches in the field, and to give a comprehensive introduction to deepfake detection using AI while highlighting the challenges and opportunities in this rapidly evolving field of research. We examine various strategies for the development and detection of deepfakes in this article.

II. LITERATURE REVIEW
Thanh Thi Nguyen, Cuong M. Nguyen, Dung Tien Nguyen, Duc Thanh Nguyen, and Saeid Nahavandi noted that deep learning has been used to handle a variety of complicated challenges, including big data analytics, computer vision, and human-level control. Deep learning technologies, on the other hand, have also been used to develop software that poses a risk to privacy, democracy, and national security; "deepfake" is a recent example of such a deep learning-powered application. Deepfake algorithms can make fake photos and videos that people cannot tell apart from the real thing, so the development of tools that can detect and analyze the integrity of digital visual material is critical. Their study surveyed deepfake-creation algorithms and, more crucially, the deepfake detection methods proposed in the literature to date, with in-depth discussion of the difficulties, research trends, and future prospects of deepfake technologies [1].

Marissa Koopman, Andrea Macarulla Rodriguez, and Zeno Geradts stated that deepfakes pose a threat to global security and the integrity of multimedia: the deepfake algorithm allows a user to photorealistically swap the face of one actor in a video with the face of another, which raises forensic issues regarding the dependability of video evidence. They examined photo response non-uniformity (PRNU) analysis for its effectiveness at identifying deepfake video tampering, and their PRNU study reveals a substantial difference between authentic and deepfake videos in mean normalized cross-correlation scores [4].

Siwei Lyu stated that the status of videos and audio as definitive evidence of events has begun to be challenged by high-quality fake videos and audio generated by AI algorithms (deepfakes). He explored research opportunities in this area and highlighted a number of open problems in his publication [5].

Luca Guarnera, Oliver Giudice, and Sebastiano Battiato noted that deep learning is a powerful and versatile technology that has been widely used in domains such as computer vision, machine vision, and natural language processing. Deepfakes employ deep learning to modify photographs and videos of people to the point where humans are unable to distinguish them from the genuine thing. Many studies have been undertaken in recent years to better understand how deepfakes work, and many deep learning-based algorithms have been proposed to detect deepfake videos or photos. They conducted a thorough review of deepfake production and detection technologies based on deep learning, along with an examination of various technologies and their use in deepfake identification [3].

The deepfake phenomenon has exploded in popularity in recent years as a result of the ability to make very realistic images using deep learning methods, mostly ad hoc Generative Adversarial Networks (GANs). The authors concentrate their research on deepfakes of human faces, with the goal of developing a novel detection approach that can find a forensic trail buried in images, similar to a fingerprint left by the image-generation process. They introduced a technique that uses an Expectation-Maximization (EM) algorithm to extract a set of local characteristics specifically targeted at representing the underlying neural generation process. Experiments with naive classifiers on five alternative architectures (GDWCT, STARGAN, ATTGAN, STYLEGAN, STYLEGAN2), with the CELEBA dataset as ground truth for non-fakes, were used for ad hoc validation [6].

III. METHODOLOGIES
Deepfake is a methodology for creating false photos and videos that employs the concepts of Generative Adversarial Networks (GANs). We start with an overview of the current applications and technologies for creating deepfake images and videos. Then, to address this issue, we describe different deep learning detection strategies.

A. DeepFake Generation Methodologies
Deepfakes can be generated and detected using a variety of models. GANs (Generative Adversarial Networks) are the model most commonly used to generate deepfakes. A GAN is a type of generative model that employs deep learning techniques such as convolutional neural networks.

Generative modeling belongs to the unsupervised learning category in machine learning. A generative model discovers and learns the regularities or patterns in input data, and can then be used to produce new examples that could plausibly have been drawn from the original dataset.

As generative models, GANs are exciting and rapidly advancing, promising to provide realistic examples across a wide range of problem domains, most notably in image-to-image translation tasks, such as converting images of summer to winter or day to night, and in generating photorealistic photos of objects, scenes, and people that even humans cannot discern as fake.

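The generative-modeling idea above — learn the regularities of a dataset, then draw new examples that could plausibly have come from it — can be illustrated with a deliberately simple sketch. Here the "model" is just a one-dimensional Gaussian fitted by its mean and standard deviation; all names and numbers are illustrative, not from the paper:

```python
import random
import statistics

def fit(data):
    # "Learn the regularities" of the data: for a 1-D Gaussian,
    # the mean and standard deviation summarize the distribution.
    return statistics.mean(data), statistics.stdev(data)

def generate(mu, sigma, n, rng):
    # Produce new examples that could plausibly have been drawn
    # from the original dataset.
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
real = [rng.gauss(5.0, 2.0) for _ in range(10_000)]  # stand-in "dataset"
mu, sigma = fit(real)
fake = generate(mu, sigma, 1_000, rng)
```

A GAN replaces these hand-picked summary statistics with a neural generator, and replaces the explicit density with an adversarial discriminator, but the goal is the same: new samples indistinguishable from the data.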



Supervised vs. unsupervised learning and discriminative vs. generative modeling are often discussed in the context of GANs. GANs are a system for automatically training a generative model by treating an unsupervised problem as a supervised one and employing both a generative and a discriminative model. GANs offer a route to advanced domain-specific data augmentation as well as a solution to problems that demand generative solutions, such as image-to-image translation.

The GAN model architecture is made up of two sub-models: a generator model for creating new instances and a discriminator model for determining whether an example is fake (generated by the generator model) or real (drawn from the domain).

Fig. 1. Working of Generative Adversarial Network Model

The Generator Model:
A random input vector of fixed length is used as input for the generator model in order to generate a sample in the domain. A Gaussian distribution is employed to generate the vector, which is then used to seed the generative process. During training, points in this multidimensional vector space come to correspond to points in the problem domain, forming a compressed representation of the data distribution.

This vector space is called a latent space, or a vector space made up of latent variables. Latent variables, often known as hidden variables, are variables that are relevant to a domain but are not directly observable. A latent space is thus a compressed or high-level representation of observable raw data, such as the distribution of the input data. In the case of GANs, the generator model assigns meaning to points in a given latent space, so that new points drawn from the latent space can be supplied as input and used to produce new and varied output examples.

Fig. 2. Generator Model of GAN

The Discriminator Model:
The discriminator model takes an example from the domain (real or generated) as input and predicts whether it is real or fake (generated). Real examples come from the training dataset, and generated examples are produced by the generator model. A conventional (and well-understood) classification model serves as the discriminator. The discriminator model is discarded after the training procedure because we are only interested in the generator.

Because it has learned to extract characteristics from examples in the problem domain, the generator can sometimes be repurposed: using the same or similar input data, some or all of its feature-extraction layers can be employed in transfer learning applications.
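As a concrete (if toy) instance of the generator/discriminator interplay described above, the following pure-Python sketch trains an affine generator against a logistic discriminator on one-dimensional data. The architecture, learning rate, and data distribution are illustrative choices for exposition, not the paper's:

```python
import math
import random

def sigmoid(x):
    # Clamp to avoid overflow in math.exp for extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

class Generator:
    """Maps a latent point z ~ N(0, 1) to a sample in the data domain."""
    def __init__(self):
        self.a, self.b = 1.0, 0.0
    def sample(self, z):
        return self.a * z + self.b

class Discriminator:
    """Logistic classifier: probability that x came from the real data."""
    def __init__(self):
        self.w, self.c = 0.0, 0.0
    def predict(self, x):
        return sigmoid(self.w * x + self.c)

rng = random.Random(1)
G, D = Generator(), Discriminator()
lr = 0.05
for step in range(3000):
    x_real = rng.gauss(4.0, 1.0)      # real data distribution (illustrative)
    z = rng.gauss(0.0, 1.0)           # latent point
    x_fake = G.sample(z)

    # Discriminator step: push D(x_real) toward 1 and D(x_fake) toward 0
    # (manual gradient ascent on log D(x_real) + log(1 - D(x_fake))).
    p_r, p_f = D.predict(x_real), D.predict(x_fake)
    D.w += lr * ((1 - p_r) * x_real - p_f * x_fake)
    D.c += lr * ((1 - p_r) - p_f)

    # Generator step (non-saturating loss): push D(G(z)) toward 1.
    p_f = D.predict(G.sample(z))
    grad = (1 - p_f) * D.w            # d/dx log D(x) at x = G(z)
    G.a += lr * grad * z
    G.b += lr * grad
```

After training, the generator's output distribution has been pulled toward the real data purely through the discriminator's feedback — neither network ever sees the other's internals, which is the adversarial dynamic the text describes.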



Fig. 3. Discriminator Model of GAN

B. DeepFake Detection Techniques
DeepFakes employ a unique approach that involves modifying fixed areas on the face that are used as a foundation for superimposition. The algorithm generates different deepfakes in a similar fashion, resulting in characteristic differences introduced during the editing process. Compression differences, illumination differences, and temporal discrepancies such as lip and eye movements can all be used to train algorithms to detect deepfake movies [14]. Below are some techniques that are used for deepfake detection.

1) Convolutional Neural Networks:
Convolutional Neural Networks (CNNs) have been a popular choice among the approaches offered for deepfake detection. Compared to other supervised learning approaches in artificial intelligence, CNNs have shown significant aptitude and scalability for applications involving image and video operations. A CNN has the unique ability to extract information from images that can subsequently be used in a variety of ways; other supervised learning technologies can then be utilized for final classification, producing stronger and more precise models for deepfake detection.

CNNs have an input and output layer, as well as one or more hidden layers, similar to other neural networks. The hidden layers read the inputs from the first layer and apply a convolution operation to the input values; convolution here denotes a matrix multiplication or other dot product. Following the convolution, a CNN applies a nonlinear activation function such as the Rectified Linear Unit (ReLU), followed by further layers such as pooling layers. The main purpose of pooling layers is to reduce the dimensionality of the data by computing outputs using functions like maximum pooling or average pooling.

Fig. 4. Working of CNN model

Transfer learning can be used to detect deepfakes for practical reasons. Transfer learning makes use of pre-trained neural network weights to train a fine-tuned version of the same network on a different dataset for a given purpose. VGG Net, Xception, Inception, and ResNet are just a few of the pre-trained models that have been released open source. A fine-tuned model based on a pre-trained convolutional network, such as VGG Net, can be used for image analysis on human faces.

2) RNN (Recurrent Neural Networks):
Long Short-Term Memory (LSTM) networks are a form of Recurrent Neural Network (RNN) first used to learn long-term data dependencies. When a deep learning architecture includes both an LSTM and a CNN, it is referred to as "deep in space" and "deep in time," two distinct system modalities. CNNs have had great success in visual identification tasks, while LSTMs are commonly utilized for long-sequence processing problems. Convolutional LSTM architectures have been extensively studied for other computer vision tasks involving sequences (e.g. activity recognition or human re-identification in videos) and have yielded significant improvements due to their inherent properties (rich visual description, long-term temporal memory, and end-to-end training).

An RNN is an artificial neural network that can learn characteristics from sequence data. Like other neural networks, an RNN is made up of numerous hidden layers, each with weights and biases, but the relationships between its nodes form a directed cycle. RNNs have the advantage of capturing temporal dynamic behavior. Unlike feed-forward networks (FFNs), RNNs use an internal memory to remember sequences of information from earlier inputs, making them helpful in a range of applications such as natural language processing and audio recognition. An RNN can handle a temporal sequence by introducing a recurrent hidden state that captures interdependence across time scales.

3) LSTM (Long Short-Term Memory):
LSTM is a type of recurrent neural network (RNN) that handles long-term dependencies. The LSTM uses feedback connections to learn from the whole data stream, and has been used in a variety of domains that involve time series data, such as classification, processing, and prediction. LSTMs share a common architecture that includes an input gate, a forget gate, and an output gate. The cell state is a form of long-term memory that stores values from prior intervals in the LSTM cell. The input gate, for starters, is in



charge of picking the values that should be entered into the cell state. The forget gate uses a sigmoid function (with outputs between 0 and 1) to decide which information to forget. The output gate specifies which current-time information should be carried into the next phase.

Given an image sequence, a convolutional LSTM is used to produce a temporal sequence descriptor for image manipulation of the shot frames. A stack of fully-connected layers is employed to convert the high-dimensional LSTM description into a final detection probability, aiming for end-to-end learning. To reduce over-fitting during training, the shallow network comprises two fully connected layers and one dropout layer; a CNN followed by an LSTM together form the convolutional LSTM.

For sequence processing with the LSTM, assume a two-node neural network whose output is the probability of the sequence being part of a deepfake video or an untampered video, and whose input is a series of CNN feature vectors for the input frames. The main problem to solve is the creation of a model that can recursively analyze a sequence in a meaningful way. A 2048-wide LSTM unit with a dropout rate of 0.5 is employed for this, which is capable of doing exactly what is required. In particular, the LSTM model takes a sequence of 2048-dimensional ImageNet feature vectors during training. A 512-unit fully-connected layer with a dropout rate of 0.5 follows the LSTM. Finally, the odds of the frame sequence being pristine or deepfake are computed using a softmax layer. The LSTM module is an intermediate unit in the pipeline that is trained end to end without the need for auxiliary loss functions.

IV. DATASETS FOR DEEPFAKE DETECTION
There are seven publicly available datasets for deepfake detection which can be very helpful for researchers: FFHQ, 100K-Faces, DFFD, CASIA-WebFace, VGGFace2, the eye-blinking dataset, and DeepfakeTIMIT.

A. Flickr-Faces-HQ (FFHQ)
Karras et al. [7] proposed a human face dataset, Flickr-Faces-HQ (FFHQ). The FFHQ dataset contains a collection of 70,000 high-resolution face photos generated using generative adversarial networks (GANs). The photographs were gathered via the Flickr platform and include images of people wearing eyeglasses, sunglasses, hats, and other accessories. According to the authors, the dataset was pre-processed to reduce the collection and eliminate noise from the photos.

B. 100K-Faces
100K-Faces [8] is a well-known public dataset that contains 100,000 unique human photos created with StyleGAN [7]. StyleGAN was used to create shots with a flat background from a large dataset of over 29,000 images of 69 different models.

C. Diverse Fake Face Dataset (DFFD)
Dang et al. [9] recently published a new dataset named the Diverse Fake Face Dataset (DFFD). DFFD contains 100,000 and 200,000 fake images created with state-of-the-art ProGAN and StyleGAN models, respectively. The collection contains roughly 47.7% male photographs and 52.3% female photographs, with the majority of the samples ranging in age from 21 to 50 years old.

D. CASIA-WebFace
Dong et al. [10] published the CASIA-WebFace database, which has approximately 10,000 subjects and 500,000 photographs. The information was gathered from the IMDb database, which covers 10,575 well-known actors and actresses; clustering methods were then used to retrieve the images of those celebrities.

E. VGGFace2
Cao et al. [11] introduced the VGGFace2 dataset, a large-scale face dataset. Over three million face photographs from over nine thousand different persons are included in this collection, with an average of more than 300 images per subject. Images were acquired using the Google search engine and exhibit wide variation in ethnicity, lighting, age, and occupation (e.g., actors, athletes, and politicians).

F. The Eye-Blinking Dataset
Previously available datasets were not created with eye-blinking detection in mind. Li et al. [13] published eye-blinking datasets created specifically for this purpose. This dataset contains 50 interview and film clips of participants, each lasting approximately thirty seconds and involving at least one eye blink. The authors then tagged the left and right eye states for each video clip using their own tools.

G. DeepfakeTIMIT
Korshunov et al. [12] created a dataset of videos called DeepfakeTIMIT, using a GAN-based method to build a collection of face-swapped videos. A lower-quality model with a 64×64 input/output size and a higher-quality model with a 128×128 input/output size were used to create the dataset. There are 32 subjects in each non-real video collection, and for each subject the authors made ten fake videos.

V. COMPARATIVE ANALYSIS OF DEEPFAKE DETECTION
Following is a thorough analysis of deepfake creation and detection approaches on the basis of parameters like architecture, network structure, input/output, variants, and their use cases.
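The LSTM cell described in Section III — an input gate choosing what to write, a sigmoid forget gate choosing what to keep, an output gate choosing what to expose, and a cell state acting as long-term memory — can be sketched for scalar inputs as follows. The random weights, stand-in frame features, and two-way softmax head are illustrative only, not the paper's trained 2048-wide model:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """One LSTM cell step on scalar inputs; p holds scalar weights."""
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate: what to write
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget gate: what to keep
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate: what to expose
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate values
    c = f * c_prev + i * g         # cell state: long-term memory
    h = o * math.tanh(c)           # hidden state passed to the next step
    return h, c

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

rng = random.Random(0)
params = {k: rng.uniform(-0.5, 0.5)
          for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                    "wo", "uo", "bo", "wg", "ug", "bg")}

# Run a short sequence of stand-in per-frame features through the cell,
# then score the whole sequence with a toy two-way [pristine, deepfake] head.
h, c = 0.0, 0.0
for x in [0.2, -0.1, 0.4, 0.0, 0.3]:
    h, c = lstm_step(x, h, c, params)
probs = softmax([h, -h])
```

In the real pipeline the scalar `x` would be a 2048-dimensional CNN feature vector per frame and the gates would be matrix-valued, but the recurrence — forget, write, expose, repeat — is exactly this.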




TABLE I
ANALYSIS OF MAJOR DEEPFAKE ALGORITHMS

Parameters | GAN | CNN | RNN
Architecture | Deep learning algorithmic architecture that uses two neural networks | Feed-forward neural networks using filters and pooling | Recurrent network that feeds results back into the network
Network Structure | Discrimination model, generation model | Input layer, convolution layer, pooling layer, fully connected layer | Input layer, hidden layer, output layer
Input/Output | The sizes of the input and the resulting output are fixed | The sizes of the input and the resulting output are fixed | The sizes of the input and the resulting output may vary
Variants | DCGAN | LeNet, AlexNet, VggNet | LSTM
Use Cases | Image generation, video generation | Image recognition and classification, face detection, medical analysis, drug discovery | Text translation, natural language processing, conversational intelligence, sentiment analysis

TABLE II
ANALYSIS OF VARIANTS OF DEEPFAKE DETECTION ALGORITHMS

CNN Variant | Accuracy | Precision | F1-score | AUC
VGG19 | 0.94 | 0.91 | 0.94 | 0.987
VGG16 | 0.92 | 0.93 | 0.92 | 0.977
VGGFace | 0.99 | 0.99 | 0.99 | 0.998
DenseNet169 | 0.95 | 0.99 | 0.95 | 0.996
DenseNet201 | 0.96 | 0.96 | 0.96 | 0.994
DenseNet121 | 0.97 | 0.99 | 0.82 | 0.971
ResNet50 | 0.97 | 0.99 | 0.97 | 0.997
LSTM | 0.94 | 0.85 | 0.98 | 0.893

VI. CONCLUSIONS
Deepfake is a new technique for deceiving large numbers of people. Though not all deepfake content is harmful, it must be identified because some of it is genuinely dangerous. The major goal of this research was to analyze the different methods available for detecting deepfake photos and videos. Using a variety of approaches, many researchers have been working tirelessly to detect deepfake content, and their ground-breaking work will have a huge impact on our society. With this technology, deepfake victims can rapidly evaluate whether images are real or fake, and people will be alert because they will be able to recognize a deepfake image.

Many more trials and tests will be carried out in the future, and as this technology advances we may be able to use more efficient models to detect deepfake photos and videos, in order to reduce crime in our communities and, more broadly, the world, and to strengthen global security.

REFERENCES
[1] Nguyen, Thanh Thi, Cuong M. Nguyen, Dung Tien Nguyen, Duc Thanh Nguyen, and Saeid Nahavandi. "Deep learning for deepfakes creation and detection: A survey." arXiv preprint arXiv:1909.11573 (2019).
[2] Koopman, Marissa, Andrea Macarulla Rodriguez, and Zeno Geradts. "Detection of deepfake video manipulation." In The 20th Irish Machine Vision and Image Processing Conference (IMVIP), pp. 133-136, 2018.
[3] Almars, A. (2021) "Deepfakes Detection Techniques Using Deep Learning: A Survey." Journal of Computer and Communications, 9, 20-35. doi: 10.4236/jcc.2021.95003.
[4] Lyu, Siwei. "Deepfake detection: Current challenges and next steps." In 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pp. 1-6. IEEE, 2020.
[5] Guarnera, Luca, Oliver Giudice, and Sebastiano Battiato. "Deepfake detection by analyzing convolutional traces."




In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 666-667, 2020.
[6] Karras, T., Laine, S. and Aila, T. (2019) "A Style-Based Generator Architecture for Generative Adversarial Networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, 15-20 June 2019, 4401-4410. https://doi.org/10.1109/CVPR.2019.00453
[7] 100,000 Faces Generated by AI, 2018. https://generated.photos
[8] Dang, H., Liu, F., Stehouwer, J., Liu, X. and Jain, A.K. (2020) "On the Detection of Digital Face Manipulation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, 13-19 June 2020, 5781-5790. https://doi.org/10.1109/CVPR42600.2020.00582
[9] Yi, D., Lei, Z., Liao, S. and Li, S.Z. (2014) "Learning Face Representation from Scratch."
[10] Cao, Q., Shen, L., Xie, W., Parkhi, O.M. and Zisserman, A. (2018) "VGGFace2: A Dataset for Recognising Faces across Pose and Age." 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi'an, 15-19 May 2018, 67-74. https://doi.org/10.1109/FG.2018.00020
[11] Korshunov, P. and Marcel, S. (2018) "Deepfakes: A New Threat to Face Recognition? Assessment and Detection."
[12] Li, Y., Chang, M.-C. and Lyu, S. (2018) "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking." 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Hong Kong, 11-13 December 2018, 1-7. https://doi.org/10.1109/WIFS.2018.8630787
[13] Karandikar, Aarti, Vedita Deshpande, Sanjana Singh, Sayali Nagbhidkar, and Saurabh Agrawal. "Deepfake Video Detection Using Convolutional Neural Network." International Journal of Advanced Trends in Computer Science and Engineering 9(2):1311-1315, April 2020.
