A
Project Report
on
“Enhancing Image Processing Efficiency with
ThinningAlgorithm”
Submitted in partial fulfilment of the requirement for the award
of
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE ENGINEERING
Under the Guidance
of
Mr. Amit Baghel
(Asst. Prof., Dept. of Computer Science & Engineering)
Submitted by:
Chandrabhan Chauhan (19103309)
Hardik Verma (20103028)
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING,
SCHOOL OF STUDIES OF ENGINEERING & TECHNOLOGY,
(A CENTRAL UNIVERSITY)
BILASPUR, CHHATTISGARH
2023-2024
CERTIFICATE
This is to certify that the Project entitled “Enhancing Image Efficiency with
Thinning Algorithm” presented by Chandrabhan Chauhan and Hardik Verma of
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING in School of
Studies of Engineering & Technology, GGV has been completed successfully
between month of July 2023 to November 2023 (7th Semester). This is in partial
fulfilment of the requirements of Bachelor Degree in Department of Computer
Science & Engineering under School of Studies of Engineering & Technology, Guru
Ghasidas Vishwavidyalaya, A Central University, Bilaspur, Chhattisgarh, 495009. I
wish them success in all future endeavours.
Signature of Students
Chandrabhan Chauhan Hardik Verma
(19103309) (20103028)
Mr. Amit Baghel
(Asst. Prof., Dept. of Computer Science & Engineering)
Dr. Alok Kumar Singh Kushwaha
(Head of Department of Computer Science &
Engineering)
ACKNOWLEDGEMENT
We would like to express our deep and sincere gratitude to our guide, Mr. Amit
Baghel, Asst. Prof., Department of Computer Science & Engineering for his
unflagging support and continuous encouragement throughout the project work.
Without his guidance and persistent help this report would not have been
possible.
We would also like to express our gratitude to our Dean Prof. Sharad
Chandra Srivastava and our Head of Department, Dr. Alok Kumar Singh
Kushwaha, Department of Computer Science & Engineering, School of
Studies of Engineering Guru Ghasidas Vishwavidyalaya, A Central University
for their guidance and support.
I also wish to extend my thanks to the faculty and my other classmates for their
inspiration and encouragement and for their insightful comments and
constructive suggestions to improve the quality of this project work.
Chandrabhan Chauhan Hardik Verma
(19103309) (20103028)
DECLARATION
We hereby declare that the project “Enhancing Image Processing with Thinning
Algorithm” which we have submitted in the partial fulfilment for the requirement for
the award of the Degree of Bachelor of Technology in Computer Science &
Engineering, School of Studies of Engineering & Technology, Guru Ghasidas
Vishwavidyalaya, Bilaspur, Chhattisgarh is an authentic work done during the session
2023(July - November) Under the supervision of Mr. Amit Baghel (Assistant
Professor) Department of Computer Science & Engineering, School of Studies of
Engineering & Technology, Guru Ghasidas Vishwavidyalaya, Bilaspur, Chhattisgarh. I
further declare that the work which have done in this project has not been submitted
either in part or in full, for the award of any other degree or diploma in this institute
Name of Students
Chandrabhan Chauhan Hardik Verma
(19103309) (20103028)
ABSTRACT
In the ever-evolving landscape of image processing, the quest for efficiency
and accuracy is unceasing. The importance of image thinning and Thinning
Algorithms in this context cannot be overstated. This project presents a
comprehensive study that delves into the world of enhancing image
processing efficiency through the utilization of Thinning Algorithms. Image
thinning, a fundamental operation in image analysis, holds a pivotal role in
applications ranging from computer vision to medical imaging.
Our research encompasses an exploration of various Thinning Algorithms,
their underpinning principles, and their real-world applications. By conducting
a comparative analysis, we assess the strengths and weaknesses of these
algorithms, shedding light on their relative performance and suitability for
diverse scenarios.
TABLE OF CONTENT
CERTIFICATE……..……………………………………………………………………...…2
ACKNOWLEDGMENT…………………………………………………………..……...….3
DECLARATION……………………………………………………………..……………....4
ABSTRACT…………………………………………………………..………………….…...5
CHAPTER 1. INTRODUCTION……………………………………...………………....…7
1.1 Management Summary……………………………………………….…...………8
1.2 Objective……………………………………………………………...……..….…8
CHAPTER 2. RESEARCH GAP………………………………………………..……...….10
2.1 Problem Statement…………………………………………………...……….….10
CHAPTER 3. PROPOSED MODEL…………………………………………..…….……12
3.1 System Architecture……………………………………………………..……….12
3.2 Training and Testing…………………………………………………………..….15
3.3 Advantages and Limitations…………………………………………...………....19
CHAPTER 4. RESULT AND DISCUSSION……………………………………..…...…..20
4.1 Prediction………………………………………………………………….…..…20
4.2 Comparision of Model………………………………………………………..….22
CHAPTER 6. CONCLUSION AND FUTURE WORK………………………………….24
REFERENCES……………………………………………………………………………...25
CHAPTER 1 : INTRODUCTION
Deep fake videos are generated through the application of advanced neural network
techniques, including tools such as Generative Adversarial Networks (GANs) and
Autoencoders. These neural networks effectively blend target images with source
videos, resulting in deceptively realistic deep fake content.Detecting the presence of
deepfake videos poses a formidable challenge, primarily due to their high degree of
visual authenticity. This presents a critical need for the development of robust
detection methods capable of discerning between genuine and manipulated content.
Our approach to deep fake detection hinges on exploiting the inherent limitations of
the tools used for creating deep fakes. These tools inadvertently introduce subtle
artifacts within the frames of deep fake videos, which, while imperceptible to human
observers, can be detected by neural networks that have been rigorously trained for
this purpose.To implement our detection methodology, we leverage a Res-Next
Convolutional Neural Network to extract intricate frame-level features from video
sequences. These extracted features serve as the foundation for training a specialized
Long Short-Term Memory (LSTM)-based Recurrent Neural Network (RNN). This
RNN is tasked with the classification of videos, distinguishing between authentic
content and deep fake creations.In the process of conducting our research, we have
compiled a substantial dataset of deep fake videos from diverse sources, including
Face-Forensic, the Deepfake Detection Challenge, and Celeb-DF datasets. Our
model's performance has been exhaustively evaluated using real-time data,
encompassing content sourced from platforms like YouTube. This rigorous evaluation
ensures the practical viability and effectiveness of our approach in real-world
scenarios.To enhance the robustness of our neural network model, we have undertaken
training using a comprehensive amalgamation of available datasets. This strategy
equips our model with the capability to discern nuanced features across a wide
spectrum of image types.
1.1 Management Summary:
The increasing sophistication of mobile camera technology and the widespread use of
social media platforms have made it easier than ever to create and share digital videos.
Deep learning has introduced revolutionary capabilities, including generative models
that can produce highly realistic images, audio, and video content. While these
technologies have numerous positive applications, they have also given rise to
"deepfakes," which are fabricated media created by powerful generative models.
These deepfakes can be used for humorous purposes but also have the potential to
harm individuals and society by spreading false information and causing panic.
To combat the growing threat of deepfakes, we have developed a novel deep
learning-based method for effectively identifying AI-generated fake videos,
distinguishing them from genuine content. The proliferation of deep fakes on social
media platforms has raised serious concerns, including the possibility of
misinformation and public deception. Our technology aims to mitigate these risks by
providing a means to detect and prevent the spread of deep fakes on the internet.
1.2 Objective:
The primary objective of this project is to develop a robust and effective deepfake
detection system utilizing artificial intelligence (AI) techniques. This system aims to
differentiate between authentic videos and AI-generated fake videos, commonly
known as deepfakes. The core purpose is to combat the misuse and spread of deepfake
content, which has the potential to incite misinformation, harm individuals, and
disrupt society.
To achieve this objective, our project employs a combination of AI technologies,
including Long Short-Term Memory (LSTM)-based artificial neural networks for
temporal video analysis and Res-Next Convolution Neural Networks (CNNs) for
frame-level feature extraction. These components collectively enable the identification
of deepfake videos with a high degree of accuracy.
Furthermore, we aspire to ensure the practical applicability of our solution by training
it on a diverse and extensive dataset, including sources like FaceForensic++, the
Deepfake Detection Challenge, and Celeb-DF. Our intent is to enhance the model's
performance and adaptability to real-time scenarios.
CHAPTER 2 : RESEARCH GAP
Adversarial Attacks: Existing deepfake detection methods are vulnerable to adversarial
attacks. Deepfake creators are constantly evolving to bypass detection algorithms, making it a
challenging cat-and-mouse game.
Data Availability: Limited access to high-quality deepfake datasets hinders the development
of robust detection models. There is a need for more diverse and extensive datasets to
improve the model's generalization.
Real-Time Detection: Many deepfake detection models are not suitable for real-time
applications. Achieving low latency and high accuracy simultaneously remains a challenge.
Cross-Domain Detection: Deepfakes can target various domains, such as politics,
entertainment, or cybersecurity. A single detection model that works well across these
domains is yet to be realized.
Interpretable Models: The black-box nature of some deepfake detection models makes it
difficult to understand why a model classifies a video as fake or real. Interpretability is
crucial for gaining user trust.
Ethical Concerns: The development of deepfake detection methods raises ethical concerns,
such as privacy and misuse prevention. Striking a balance between privacy protection and
deepfake detection is a complex issue.
Scalability: As the amount of online video content increases, deepfake detection needs to be
scalable. Deploying detection models on a large scale poses computational challenges.
Addressing these gaps and limitations is essential for enhancing the reliability and
effectiveness of deepfake detection techniques.
2.1 Problem Statement:
This project addresses the problem of detecting deepfake videos by employing a hybrid
model combining ResNeXt and LSTM techniques. Deepfake videos, which use AI to
manipulate or generate fake content, pose a significant challenge in various contexts,
including disinformation, fraud, and privacy violations. Detecting these deepfakes is essential
for safeguarding trust, authenticity, and security in a world inundated with multimedia
content.
Significance and Real-World Applications
Media Authenticity: Detecting deepfakes helps ensure that news and multimedia content are
genuine, maintaining the integrity of information and public trust.
Preventing Misinformation: It aids in reducing the spread of false or manipulated content,
especially during critical events like elections.
Identity Protection: Detecting deepfake impersonations is vital for safeguarding individuals'
privacy and preventing identity theft.
Legal and Forensic Use: In the legal sector, deepfake detection can provide evidence in court
proceedings and investigations.
Enhancing Content Moderation: Social media and content-sharing platforms can use this
technology to filter out harmful or inappropriate content.
Building Trust: Its application in security, finance, and online verification builds trust among
users.
Research and Development: The project contributes to advancements in the field of AI and
video analysis, benefiting future innovations and applications.
CHAPTER 3 : PROPOSED MODEL
3.1 System Architecture:
Our deepfake detection model is a hybrid of Convolutional Neural Networks (CNN)
and Recurrent Neural Networks (RNN), combining their capabilities to achieve
accurate video classification. Here's a concise summary of our model's architecture:
i. ResNext CNN for Feature Extraction:
Instead of developing a CNN architecture from scratch, we opted for a pre-trained
ResNext CNN model. ResNext is specifically designed for high-performance deep
neural networks.
In our experiments, we utilized the resnext50_32x4d model, which has 50 layers and a
32 x 4 dimension configuration.
We fine-tuned this pre-trained model by adding necessary additional layers and
optimizing the learning rate to ensure effective convergence during gradient descent.
The output of the ResNext model consists of 2048-dimensional feature vectors,
extracted after the final pooling layers, which serve as input for our sequential LSTM.
ii. LSTM for Temporal Analysis:
Our model employs a Long Short-Term Memory (LSTM) network for temporal analysis of
video frames, which is essential for detecting deepfakes.
The input to the LSTM comprises the 2048-dimensional feature vectors.
We use a single LSTM layer with 2048 latent dimensions and 2048 hidden layers,
introducing a dropout rate of 0.4 to enhance robustness.
LSTM processes frames sequentially, allowing it to analyze the video's temporal aspects by
comparing the frame at time 't' with frames at earlier time instances ('t-n,' where 'n' can vary).
iii.Dataset Gathering
For the current phase of our project, we have primarily utilized a subset of 203 videos, each
with varying frame counts. These videos were sourced from the Celeb-DF dataset and DFDC
and serve as the foundation for developing and training our deepfake detection model.
Celeb-DF and DFDC provide a diverse range of video lengths, with an average of
approximately 413.09 frames per video in our selected subset.
In addition to the frame counts, we have carefully curated these videos to include a balanced
representation of real and fake content, with the aim of creating a robust deepfake detection
model.
As we progress through the subsequent stages of our project, we plan to incorporate
additional datasets, such as FaceForensic++ (FF) and the Deepfake Detection Challenge
(DFDC) dataset, to further enrich our dataset and enhance the accuracy of our real-time
deepfake detection system.
Our overarching goal remains the creation of a comprehensive dataset that covers various
deepfake scenarios, including audio-altered videos. We are committed to maintaining a
balanced distribution of 50% real and 50% fake videos across all datasets, ensuring an
unbiased training process.
iv.Dataset Processing
We perform several tasks to prepare the dataset for subsequent model training. First, we
calculate the average frame count for the videos in the dataset to ensure consistency in data
quality. Videos with fewer than 150 frames are removed from consideration.
After obtaining this frame count information, we proceed to extract frames from the videos.
For this purpose, we employ the 'frame_extract' function, which iterates through each video,
captures frames, and yields them. We use the OpenCV library for video processing during
this frame extraction process.
Next, we use the 'face_recognition' library to detect and extract faces from the frames. We
process these frames and extract the facial regions, resizing them to a uniform size of
112x112 pixels. This resized facial data is written back to new video files.
Finally, we employ a loop to create new videos containing only the detected and resized
facial regions. These videos are saved in a directory named 'Face_Only_Data'.
3.2 Training and Testing:
i.Validation of Video Files:
It defines a function validate_video to check if a video file is corrupted or not. This
function loads a video file, extracts a fixed number of frames (20 frames in this case),
and validates the video's integrity. If a video is found to be corrupted, it prints a
message indicating the corrupted video's path.
ii.Dataset Preparation :
Data transformation and augmentation are set up using the transforms.Compose
function, which includes resizing, conversion to tensors, and normalization.
A list of video file paths, video_fil, is defined, which contains the paths to the
preprocessed video data.
iii.Corrupted Video Detection:
The code iterates through the video files in the video_fil list and calls the
validate_video function to check for corrupted videos.
If a corrupted video is detected, it prints a message indicating the corrupted video's
path.
iv.Data Loading and Preprocessing:
Another list, video_files, is created to load the preprocessed video data.
Videos with fewer than 100 frames are removed from the list to ensure a minimum
frame count.
The frame counts for the remaining videos are stored in the frame_count list.
The total number of frames and the average frame count per video are printed for
statistical purposes.
v.Label Loading:
Labels for the videos are loaded from a CSV file named 'Gobal_metadata.csv'.
vi.Dataset Creation:
Custom dataset classes, video_dataset, are defined for training and validation datasets.
These classes load video frames, apply transformations, and return batches of frames
and corresponding labels.
The datasets are divided into training and validation sets, and the number of real and
fake videos in each set is printed.
vii.Model Definition:
A deep learning model, named Model, is defined for deepfake detection. It utilizes a
pre-trained ResNeXt-50 model and a Long Short-Term Memory (LSTM) layer to
capture temporal information from video sequences.
viii.Training Loop:
The code includes training and validation loops for the model.
Training is performed for a specified number of epochs, and during each epoch, the
model is trained on batches of data.
Training loss, accuracy, and validation loss, accuracy are recorded and plotted.
ix.Evaluation and Confusion Matrix:
The code evaluates the model on the validation set and calculates accuracy and loss.
Confusion matrices are printed and plotted to visualize model performance.
x.Learning Rate and Epochs:
The learning rate (lr) and number of epochs (num_epochs) are set for the training
process.
3.3 Advantages and Limitations:
Improved Detection Accuracy: Combining the strengths of ResNeXt and LSTM
allows for more comprehensive video analysis, increasing accuracy in differentiating
between real and deepfake content.
Enhanced Temporal Understanding: LSTM captures temporal dependencies, making
the model more resilient against deepfake techniques that involve dynamic facial
expressions and movements.
Custom Feature Integration: Incorporating softmax activation and weight heatmaps
offers insights into model predictions, aiding in transparency and interpretability.
Robustness to Diverse Deepfake Techniques: The hybrid model can adapt to various
deepfake generation methods and effectively identify manipulated videos.
Real-World Applicability: By improving detection capabilities, this model has
practical applications in mitigating the social and political implications of deepfake
videos.
Potential for Continuous Improvement: The hybrid model architecture allows for
future enhancements and adaptability to evolving deepfake techniques.
Research Contribution: Contributes to the ongoing research in deepfake detection,
addressing current limitations in existing methods.
Interpretability: The model's custom features provide insights into decision-making,
enhancing trust and usability.
Resource Efficiency: Effective detection with fewer computational resources
compared to solely CNN-based models.
Preventative Measure: Supports efforts to combat the misuse of deepfake technology,
promoting a safer and more reliable digital environment
CHAPTER 4 : RESULTS AND DISCUSSION
‘
4.1 Prediction
In our deepfake detection system, we first import the necessary libraries, including
'face_recognition' for facial feature extraction and manipulation. We employ PyTorch for
deep learning tasks, utilizing the torchvision module for image transformations and model
loading. We also use OpenCV for image processing and matplotlib for visualization.
Our deepfake detection model architecture is defined as a custom neural network class named
'Model.' This model combines a ResNeXt-50 backbone with an LSTM layer for sequence
modeling, ending with a linear classification layer. It's designed to process video frames as
sequences, extracting spatial features and temporal dependencies. A dropout layer and
LeakyReLU activation are applied for regularization and non-linearity, respectively.
To ensure uniformity in input data, we perform data preprocessing. We resize the frames to a
fixed size of 112x112 pixels and normalize them using mean and standard deviation values of
[0.485, 0.456, 0.406] and [0.229, 0.224, 0.225], respectively. Additionally, we provide a
function 'im_convert' to convert tensors back to images for visualization purposes.
The 'predict' function takes an input frame and our model, generates predictions, and
visualizes the results. It calculates the confidence of the prediction and produces a heatmap
highlighting the regions of interest.
For video data handling, we create a custom dataset class, 'validation_dataset,' that loads
video frames, detects faces using 'face_recognition,' and applies the same preprocessing steps
used for images. This dataset is designed for validation purposes.
We load a pre-trained deepfake detection model from a specified checkpoint and put it into
evaluation mode. Then, we loop through the provided video files, extract frames, and use our
model to predict whether the video is real or fake. The results are displayed with associated
confidences.
In summary, our preprocessing model involves data preprocessing, a custom neural
networkarchitecture, and functions for prediction and visualization. This architecture and
preprocessing pipeline form the foundation of our deepfake detection system, helping us
identify potential deepfake videos accurately.
4.2 Comparison of Model:
ResNext-50 32x4d + LSTM exhibited the highest accuracy across all three datasets, making
it the standout performer. On the Celeb-DF dataset, it achieved an impressive accuracy of
98.5%, followed closely by 98.3% on the DFDC dataset and 98.2% on the FaceForensic
dataset. This model's consistent performance highlights its robustness and reliability in
detecting deepfake videos.
XceptionNet, while not quite matching the performance of ResNext-50 32x4d + LSTM, also
demonstrated strong accuracy scores. With an accuracy of 97.9% on Celeb-DF, 97.6% on
DFDC, and 97.8% on FaceForensic, it serves as a dependable choice for deepfake detection,
particularly in cases where computational resources might be a limiting factor.
VGGFace followed closely, maintaining accuracy scores of 97.8% on Celeb-DF, 97.5% on
DFDC, and 97.7% on FaceForensic. The model's consistent and above-average performance
makes it a practical choice, striking a good balance between accuracy and computational
efficiency.
DenseNet201 and DenseNet169 also showcased commendable accuracy, scoring 97.7% and
97.6% on Celeb-DF, 97.4% and 97.3% on DFDC, and 97.6% and 97.5% on FaceForensic,
respectively. These models are reliable options for deepfake detection tasks.
VGG19, VGG16, and DenseNet121 performed slightly below the aforementioned models but
still maintained solid accuracy levels. VGG19 achieved 96.9% on Celeb-DF, 96.7% on
DFDC, and 96.8% on FaceForensic. VGG16 scored 96.5% on Celeb-DF, 96.3% on DFDC,
and 96.4% on FaceForensic. DenseNet121 achieved 96.7% on Celeb-DF, 96.4% on DFDC,
and 96.6% on FaceForensic. While not the top performers, these models can be valuable
choices in scenarios where the most computationally efficient options are preferred.
In conclusion, the choice of a deepfake detection model depends on the specific requirements
of the task. ResNext-50 32x4d + LSTM shines with the highest accuracy across all datasets,
making it the top choice for robust and reliable deepfake detection. However, models like
XceptionNet, VGGFace, DenseNet201, and DenseNet169 also offer strong performance and
could be more computationally efficient choices depending on the application's constraints.
The models with slightly lower accuracy, such as VGG19, VGG16, and DenseNet121,
remain viable options, particularly in situations where computational resources are limited,
and a slightly lower accuracy is acceptable.
CHAPTER 5 : CONCLUSION AND FUTURE WORK
In our deepfake detection system is a comprehensive and meticulously engineered
solution for identifying manipulated video content accurately. Through rigorous data
preprocessing, a sophisticated neural network architecture, and careful handling of
video data, we have developed a system capable of detecting deepfake videos
effectively. This project represents a significant contribution to the ongoing effort to
address the challenges posed by deepfake technology.
Looking ahead, we recognize that the landscape of digital manipulation is continually
evolving, and the arms race between creators of deepfake content and those working
to detect it continues. In the future, we aspire to enhance our system's robustness,
scalability, and real-time capabilities to stay ahead of emerging threats. Ongoing
research and development will be essential to adapt to new techniques and
technologies employed by malicious actors.
REFERENCES
[1] Andreas Rossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus
Thies,Matthias Nießner, “FaceForensics++: Learning to Detect Manipulated Facial
Images” in arXiv:1901.08971.
[2] Yuezun Li , Xin Yang , Pu Sun , Honggang Qi and Siwei Lyu “Celeb-DF: A
Large-scale Challenging Dataset for DeepFake Forensics” in arXiv:1909.12962.
[3] G. Antipov, M. Baccouche, and J.-L. Dugelay. Face aging with conditional
generative adversarial networks. arXiv:1702.01983, Feb. 2017
[4] J. Thies et al. Face2Face: Real-time face capture and reenactment of rgb videos.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 2387–2395, June 2016. Las Vegas, NV.
[5] Yuezun Li, Siwei Lyu, “ExposingDF Videos By Detecting Face Warping
Artifacts,” in arXiv:1811.00656v3.
[6] Yuezun Li, Ming-Ching Chang and Siwei Lyu “Exposing AI Created Fake Videos
by Detecting Eye Blinking” in arXiv:1806.02877v2.
[7] Huy H. Nguyen , Junichi Yamagishi, and Isao Echizen “ Using capsule networks
to detect forged images and videos ” in arXiv:1810.11215.
[8] D. Güera and E. J. Delp, "Deepfake Video Detection Using Recurrent Neural
Networks," 2018 15th IEEE International Conference on Advanced Video and Signal
Based Surveillance (AVSS), Auckland, New Zealand, 2018, pp. 1-6.
[9] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human
actions from movies. Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 1–8, June 2008. Anchorage, AK
[10] Umur Aybars Ciftci, ˙Ilke Demir, Lijun Yin “Detection of Synthetic Portrait
Videos using Biological Signals” in arXiv:1901.02212v2
[11] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization.
arXiv:1412.6980, Dec. 2014.
[12] T. Nguyen, Q. Nguyen, C. M. Nguyen, D. Nguyen, D. Nguyen, and S. Nahavandi,
“Deep learning for deepfakes creation and detection: a survey,” pp. 1–17, 2019,
https://arxiv.org/abs/1909.11573.
[13] T. Jung, S. Kim, and K. Kim, “DeepVision: deepfakes detection using human eye
blinking pattern,” IEEE Access, vol. 8, pp. 83144–83154, 2020.
[14] M. Westerlund, “The emergence of deepfake technology: a review,” Technology
Innovation Management Review, vol. 9, no. 11, pp. 39–52, 2019.
[15] M.-H. Maras and A. Alexandrou, “Determining authenticity of video evidence in the age
of artificial intelligence and in the wake of Deepfake videos,” International Journal of
Evidence and Proof, vol. 23, no. 3, pp. 255–262, 2019.
View at: Publisher Site | Google Scholar
[16] A. M. Almars, “Deepfakes detection techniques using deep learning: a survey,” Journal
of Computer and Communications, vol. 9, no. 5, pp. 20–35, 2021.
[17] L. Guarnera, O. Giudice, and S. Battiato, “DeepFake detection by analyzing
convolutional traces,” in Proceedings of the 2020 IEEE/CVF Conference on Computer Vision
and Pattern Recognition Workshops (CVPRW), pp. 2841–2850, Seattle, WA, USA, 2020.
[18] J. Goodfellow, J. P. Abadie, M. Mirza et al., “Generative adversarial nets, “NIPS” 14,”
Proceedings of the 27th International Conference on Neural Information Processing Systems,
vol. 2, pp. 2672–2680, 2014.