Anteneh Birhan
Thesis proposal on
CLASSIFYING ETHIOPIAN ORTHODOX TEWAHIDO
SACRED IMAGES USING DEEP LEARNING
TECHNIQUES
By
Injibara, Ethiopia
October, 2024
Proposal Approval Sheet
Injibara University
College of Engineering and Technology
Department of Information Technology
Submitted by:
Approved by:
2.
3.
Acknowledgments
I would like to express my heartfelt gratitude to all those who have supported me throughout the
writing process of this proposal.
Firstly, I extend my sincere thanks to my advisor, Dr. Yirga Y., whose unwavering guidance and
insightful feedback have been invaluable in shaping this work. Your expertise in the field and
encouragement during challenging times has motivated me to push the boundaries of my
proposal.
Additionally, I would like to acknowledge my family and friends, whose love and
encouragement have provided me with the strength to persevere. A special thanks to my brother
Kerebih, who has been a constant source of motivation and has helped me stay focused and
positive throughout this journey.
Abbreviations
No. Abbreviation Full Form
1 ANN Artificial Neural Network
2 CNN Convolutional Neural Network
3 CPU Central Processing Unit
4 DL Deep Learning
5 EOTC Ethiopian Orthodox Tewahido Church
6 F1 Score F-measure (harmonic mean of precision and recall)
7 IDE Integrated Development Environment
8 LSTM Long Short Term Memory
9 Python Programming Language
10 SVM Support Vector Machine
11 YOLO You Only Look Once
Table of Contents
Proposal Approval Sheet.................................................................................................................................i
Acknowledgments..........................................................................................................................................ii
Abbreviations.................................................................................................................................................iii
Table of Figures..............................................................................................................................................v
List of Tables..................................................................................................................................................v
Executive Summary.......................................................................................................................................vi
CHAPTER ONE............................................................................................................................................1
INTRODUCTION..........................................................................................................................................1
1.1 Background of the Study..........................................................................................................................1
1.2 Literature Review.....................................................................................................................................2
1.3 Motivation of the Study............................................................................................................................7
1.4 Statement of the Problem.........................................................................................................................8
1.5 Objectives of the Study..........................................................................................................................10
1.5.1 General Objective............................................................................................................................10
1.5.2 Specific Objectives..........................................................................................................................10
1.6 Scope of the study..................................................................................................................................10
1.7 Significance of the Study.......................................................................................................................10
1.8 Research Methodology...........................................................................................................................11
1.8.1 Research Design..............................................................................................................................11
1.8.2 Proposed System Architecture........................................................................................................12
1.8.3 Methods of Data Collection............................................................................................................13
1.8.4 Data Quality Control.......................................................................................................................16
1.8.5 Experimental Setup.........................................................................................................................17
1.8.6 Methods of Data Analysis...............................................................................................................18
1.9 Work plan...............................................................................................................................................20
1.10 Budget plan..........................................................................................................................................21
1.11 Conclusion............................................................................................................................................21
References....................................................................................................................................................22
Table of Figures
Figure 1 Proposed System Architecture.......................................................................................................12
List of Tables
Table 1 Summary of Related Works..............................................................................................................7
Table 2 Project Work Plan...........................................................................................................................20
Table 3 Project Budget Plan.........................................................................................................................21
Executive Summary
This study investigates the use of DL techniques, specifically CNNs and YOLO algorithms, for
the automatic identification and classification of sacred images within the EOTC. The increasing
influence of globalization and western culture has led to a rise in fake Saint Images that look like
the real ones, making manual identification difficult, even for domain experts. This study aims to
address this issue by developing a model capable of distinguishing authentic images based on the
EOTC's doctrines and dogmas. Data for the study will be collected from historical archives,
digital repositories, field research, and direct collaboration with EOTC painters. These images
will undergo preprocessing, including resizing, normalization, and data augmentation, to prepare
them for analysis. The DL model will then classify the images into acceptable or unacceptable
categories based on EOTC standards. The scope of this study is limited to the identification and
classification of 12 EOTC saints' images. The goal is to create a robust model that can classify
EOTC Saint Images with high accuracy and reliability.
CHAPTER ONE
INTRODUCTION
1.1 Background of the Study
In the EOTC, pictures of Saints play a crucial role in depicting a person who is recognized as having an exceptional degree of likeness or closeness to God, as well as their struggle against the world to protect their soul from sin. Followers use pictures of Saints in their homes, churches, and places of prayer, or in their prayer books, to show their love, for contemplation and evocation, and to receive blessings as well as intercession from God. In the EOTC, pictures of Saints should be prepared in accordance with the dogmas and doctrines. However, due to the influence of globalization and the culture of western countries, EOTC pictures are mixed with western pictures, so that the identification process is difficult for human beings through direct observation, even for domain experts [2].
Pictures of Saints will be identified and verified using digital image processing technology to differentiate them from other pictures and to check whether a picture is acceptable or unacceptable according to EOTC doctrines and dogmas. In this research, we will capture pictures of Saints using a digital camera; each Saint picture is used as input to the algorithm, and the model will finally classify it as acceptable or unacceptable by EOTC doctrines and dogmas.
In the context of preserving and analyzing images, DL techniques emerge as powerful tools.
Among these, CNNs [3] and YOLO [4] algorithms stand out for their capabilities in image
detection and classification. CNNs are particularly adept at recognizing patterns and features in
images, making them ideal for tasks involving visual data. These networks mimic the human
visual processing system, allowing for the extraction of hierarchical features through multiple
layers, which enhances their accuracy in identifying complex patterns. On the other hand, YOLO
is renowned for its efficiency in real-time object detection. Unlike traditional methods that
process images in multiple stages, YOLO takes an end-to-end approach, predicting bounding
boxes and class probabilities directly from full images in a single evaluation. This capability is
especially relevant for analyzing large datasets of sacred images, enabling quicker and more
accurate classification.
The integration of these advanced deep learning techniques into the analysis of EOTC sacred
images not only facilitates a deeper understanding of their artistic and theological significance
but also contributes to the preservation of cultural heritage in a digital format. By employing
such methodologies, this research aims to bridge the gap between tradition and technology,
ensuring that the rich legacy of the EOTC is both recognized and safeguarded for future
generations.
1.2 Literature Review
One study [6] proposed automatic image forgery detection based on the Gabor wavelet transform and Local Phase Quantization. The experiments were performed by taking the Cr channel in the YCbCr color space and applying Gabor filters and LBP to the Cr image. The feature vector thus obtained is given as input to an SVM for classification. The method was tested on the CASIA v1 and DVMM (color) datasets. The results show that the proposed method outperforms state-of-the-art methods in detecting image forgery, with a precision of over 99%. One drawback of this method is the large dimensionality of the feature vector, which leads to high processing time; the authors plan dimensionality reduction of the feature set as a future improvement to their work.
Another study [7] performed digital image forgery detection using a DL approach. The algorithm was based on the VGG-16 convolutional neural network. The researchers used CNNs, which in the past few years have achieved very good results in many computer vision applications, such as image classification, object recognition, image segmentation, and face recognition. The proposed network architecture takes image patches as input and returns a classification result for each patch: original or forged. The obtained results demonstrate high classification accuracy (97.8% for the fine-tuned model and 96.4% for the zero-stage trained model) on a set of images containing artificial distortions, in comparison with existing solutions. The experimental research was conducted on the CASIA dataset.
In [8], automated detection and localization of image forgeries using resampling features and DL, the researchers propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features is computed on overlapping image patches; deep learning classifiers and a Gaussian conditional random field model are then used to produce a heatmap, and tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through an LSTM-based network for classification and localization. The experimental results show that the proposed model achieves 94.86% accuracy.
In [9], automated CNN-based image forgery detection using a pre-trained AlexNet model, the researcher proposed an image forgery detection approach that uses a pre-trained AlexNet model to extract deep features without investing much time in training. The proposed approach also exploits an SVM as the classifier. The researcher used the MICC-F220 dataset, comprising 220 forged and non-forged images, which were classified using the SVM classifier. The performance of the deep features extracted from the pre-trained AlexNet-based model is very satisfactory, even in the presence of rotational and geometric transformations, and the results of the approach were compared with existing state-of-the-art approaches.
In [10], digital image forgery detection using an artificial neural network and autoregressive coefficients, the researcher used Auto-Regressive (AR) coefficients as the feature vector for detecting the location of digital forgery in a test image. 300 feature vectors from different images were used to train an Artificial Neural Network (ANN), and the ANN was tested with another 300 feature vectors. Two experiments were conducted: in Experiment 1, manipulated images were used to train the ANN; in Experiment 2, a database of forged images was used. The hit rate in detecting digital forgery was 77.67% in Experiment 1 and 94.83% in Experiment 2.
In [2], automatic recognition of pictures of Saints using a convolutional neural network was carried out for the EOTC. This research was proposed to identify and verify acceptable pictures of Saints in line with EOTC dogmas and doctrines. The work mainly focused on five pictures of saints: Medhanialem, St. Mariam, St. Michael, St. Gebriel, and St. Arsema. To achieve this, 2140 acceptable and unacceptable pictures of saints were used. The researcher used the proposed model together with the AlexNet, ResNet, and VGGNet models, achieving 93.89% training and 91.3% test accuracy. Similarly, Tiruwork Assefa's recent work on enhancing recognition of EOTC Saint icons using ensemble deep CNN models demonstrated the potential of DL in boosting classification accuracy for sacred images, achieving a promising accuracy of 99.5% with an ensemble method [11]. The depth of the dataset and an increased number of saints' images should be considered in further study.
CNNs have established themselves as a powerful tool in image processing, owing to their ability
to learn hierarchical features directly from raw images. This capability makes them well-suited
for tasks such as object detection, classification, and segmentation. Early work by Krizhevsky et
al. [12] using CNNs on the ImageNet dataset demonstrated the network's potential in achieving
unprecedented accuracy in large-scale image classification. Subsequent developments, such as
the introduction of architectures like ResNet [13] and VGGNet [14], have further enhanced the
depth and performance of CNN models, enabling them to handle more complex image
classification tasks with greater accuracy and efficiency.
YOLO is a state-of-the-art, real-time object detection system that frames the detection task as a
single regression problem, enabling the simultaneous prediction of bounding boxes and class
probabilities directly from full images. Introduced by Redmon et al. [15], YOLO has been
instrumental in applications requiring quick and accurate object detection, such as surveillance,
autonomous driving, and the classification of sacred images. The algorithm's speed and accuracy
make it ideal for scenarios where real-time processing is critical.
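As a concrete illustration of how detection quality is scored in YOLO-style systems, a predicted bounding box is typically compared to a ground-truth box using Intersection over Union (IoU). The sketch below is purely illustrative and not part of the proposed system:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Identical boxes overlap perfectly; disjoint boxes not at all.
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a chosen threshold (0.5 is a common choice).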
Recent iterations, such as YOLOv3 [16] and YOLOv4 [17], have further optimized the
algorithm's performance, allowing for even more efficient detection of objects in complex
scenes. By integrating YOLO's detection capabilities with CNN's classification, the system
ensures fast, accurate detection and classification of multiple objects (sacred images) in a single
pass, optimizing the identification process. Combining YOLO's object detection with CNN's region-based feature extraction boosts accuracy, reduces false positives, and accelerates inference time, which is critical for real-time image recognition systems [18].
Beyond cultural heritage, the integration of CNNs and YOLO has found applications in various
fields, including medical imaging, security, and surveillance. For instance, Zhang et al. [19] applied YOLO to detect tumors in chest X-rays, demonstrating the model's utility in healthcare. Similarly, Chao et al. [20] utilized YOLO for real-time object detection in crowded environments, underscoring its potential in public safety and security applications.
The integration of YOLO's real-time detection capabilities with CNN's deep feature extraction results in improved detection accuracy and faster processing speeds, which is useful for complex environments. Combining YOLO with CNN architectures such as ResNet or EfficientNet enhances both classification and detection accuracy, balancing speed and precision, especially in tasks requiring the recognition of subtle object differences [21].
Based on this assumption, a new approach of DL towards the identification and classification of
EOTC pictures of Saints through their images will be used in this thesis.
Table 1 Summary of Related Works

Ref | Author | Approach | Dataset | Result | Limitation
[6] | — | Gabor wavelet transform + LBP features on the Cr channel, SVM classifier | CASIA v1, DVMM (color) | Precision over 99% | Large feature-vector dimensionality, high processing time
[7] | — | VGG-16 CNN on image patches | CASIA | 97.8% accuracy (fine-tuned) | —
[8] | — | Resampling features + DL (CRF heatmap / LSTM network) | — | 94.86% accuracy | —
[9] | — | Pre-trained AlexNet features + SVM | MICC-F220 (220 images) | Satisfactory, robust to rotational and geometric transformation | —
[10] | — | AR coefficients + ANN | 300 training / 300 test feature vectors | 77.67% and 94.83% hit rates | —
[2] | — | CNN (proposed model, AlexNet, ResNet, and VGGNet) for recognition of EOTC saints' pictures | 2140 acceptable and unacceptable pictures of 5 saints | 93.89% training, 91.3% test accuracy | Dataset size could be expanded
[11] | T. Assefa | Ensemble of deep CNN models to enhance recognition of EOTC saint icons | EOTC saint images | 99.5% accuracy | Dataset was narrow, limiting the generalizability of the model
1.3 Motivation of the Study
In their day-to-day prayer times, Orthodox Tewahido religion followers use pictures of Saints to seek the saints' help as intercession before the holy God. However, many other pictures are printed in the same way as the correct ones. Therefore, to educate Ethiopian Orthodox Tewahido followers and readers about the features used to recognize accepted pictures of Saints, and to overcome this problem, a technological approach such as developing a model that recognizes and separates acceptable pictures of Saints from unacceptable ones using the dogma and doctrine of the EOTC is helpful. The meaningful value that differentiating acceptable pictures of Saints from unaccepted ones through technology provides to the faithful is the motivation for this research.
1.4 Statement of the Problem
The EOTC has a rich and deep cultural heritage that spans centuries, with its sacred images
playing a significant spiritual and historical role. These images, characterized by complex
iconography, color symbolism, and religious narratives, are beyond simple artistic
representations; they serve as crucial mediums for spiritual belief, education, and cultural identity among EOTC followers. However, in today's rapidly digitizing world, the
preservation and accurate identification of these sacred images are becoming increasingly
difficult, especially as traditional forms of art are at risk of being overlooked or replaced by fake
representations influenced by globalization and Western artistic styles.
In EOTC tradition, sacred images of Saints are prepared according to strict doctrinal and
dogmatic guidelines. Yet, the spread of global culture has led to the creation of images that are similar to authentic ones but do not align with the Church's dogmas and doctrines. This
creates confusion, as real Saint images are often mixed with fake representations, making it
difficult, even for experts, to distinguish between them. Traditionally, identifying these images
relies on manual methods, requiring deep knowledge of EOTC doctrines and iconography.
Unfortunately, many EOTC followers lack the detailed understanding necessary to distinguish
authentic Saints images from fake ones, leading to misclassification or loss of these culturally
and religiously significant artifacts.
Furthermore, current methods of image identification are not only prone to human error but also
lack systematic approaches, resulting in many sacred images remaining unidentified or
misclassified. This reduces their accessibility and appreciation in both academic and religious
contexts. While technological frameworks for image recognition have advanced, they are not
customized to the distinct features of EOTC sacred images. Traditional image processing
techniques often fail to capture the complex details and iconographic elements essential for
accurate identification, highlighting the limitations of existing systems.
Previous research has explored machine learning and DL techniques for image classification and
forgery detection, but these studies have several limitations when applied to EOTC imagery:
Limited Scope of Datasets: Many studies rely on general image datasets or a small selection of
EOTC Saints images, which fail to capture the diversity and richness of the EOTC
tradition [2][7]. Even Tiruwork Assefa's work, which achieved high accuracy (99.5%) using an ensemble of CNN models, was limited by the narrow dataset of Saints images used, affecting its generalizability [11].
High Processing Times and Computational Complexity: Techniques like Gabor wavelet
transforms and extensive feature extraction result in high processing times, limiting their feasibility for real-time applications [6][8].
Inadequate Focus on EOTC Doctrines: Existing approaches do not adequately incorporate the
doctrinal significance necessary for classifying EOTC sacred images. As a result, they risk misclassifying authentic images as fakes or vice versa [5][9].
Reliability and Accuracy Issues: While some models report high accuracy [7][10], their
performance often does not reflect the complexity of real-world EOTC sacred images. The
limited scope of datasets in these studies restricts the generalizability of their results, particularly
when faced with the unique iconography of EOTC art.
These limitations point to the need for a more integrated and systematic strategy that takes
advantage of modern DL techniques like CNNs and YOLO, specifically designed to handle the
complexity of EOTC sacred images. This research aims to develop an automated system that
improves the accuracy and reliability of saint image recognition of EOTC art.
Research Questions
How can an automatic model be developed using DL techniques such as CNNs and YOLO to classify authentic versus fake Saints images?
Which feature extraction methods and techniques can be used to build and validate a recognition model for Saint pictures?
How can the performance of the model be evaluated using appropriate measurement metrics?
1.5 Objectives of the Study
1.5.1 General Objective
To develop a DL model capable of automatically identifying and classifying EOTC sacred
images as either authentic or fake based on EOTC doctrines.
To identify and verify real and fake EOTC Saint pictures.
For the EOTC fathers, it allows teaching and transferring the right saints' pictures to the next generation.
For new painters, it shows the right ways of painting, which depend on the dogmas and doctrines of the church.
It can solve problems with the existing systems and previous studies by considering parameters that have not yet been considered.
It increases the experience of other researchers regarding EOTC Saints' pictures.
The pictures collected and prepared can help future research in this direction.
Data Collection: Images of EOTC Saints will be gathered from historical archives, digital
repositories, field research, and collaboration with EOTC painters. The dataset will include both
authentic and counterfeit Saints images, ensuring diversity in style and quality.
Model Selection: DL algorithms such as CNNs and YOLO will be used for image recognition and classification.
Feature Extraction: Key characteristics such as color, texture, and shape will be extracted from
the images to train the model.
Training and Testing: The dataset will be divided into training and testing sets. The model will
be trained using the training set, and its performance will be evaluated on the test set using
metrics such as accuracy, precision, recall, and F1 score.
Classification: The trained model will classify Saints images as either acceptable or
unacceptable based on EOTC doctrinal standards.
This research design aims to develop a robust model that accurately identifies and classifies
EOTC sacred images.
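The evaluation metrics named above (accuracy, precision, recall, and F1 score) can be computed directly from predicted and true labels. The following is a minimal sketch for the two-class case; the label names "acceptable" and "unacceptable" mirror the categories in this proposal, and the tiny example data is invented for illustration:

```python
def binary_metrics(y_true, y_pred, positive="acceptable"):
    """Accuracy, precision, recall, and F1 score for a two-class problem."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

truth = ["acceptable", "acceptable", "unacceptable", "unacceptable"]
pred  = ["acceptable", "unacceptable", "unacceptable", "acceptable"]
print(binary_metrics(truth, pred))  # (0.5, 0.5, 0.5, 0.5)
```

In practice a library implementation (e.g. scikit-learn's metrics module) would likely be used, but the definitions are as above.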
The input data are images of Saints from the EOTC; these images will undergo a series of processes for the purpose of classification. Preprocessing is the initial stage, where raw saint images are cleaned and prepared for further steps. Common preprocessing steps include:
Resize: Adjusting image dimensions to a uniform size suitable for the neural network model.
Normalize: Scaling the pixel values (usually to a range between 0 and 1 or -1 and 1) to ensure
consistent input.
Data Augmentation: Applying transformations like rotation, flipping, zooming, etc., to
artificially increase the dataset size and improve the model's robustness.
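The three steps above can be sketched in a few lines of NumPy. The 64x64 target size and the horizontal-flip augmentation are illustrative choices, not values fixed by this proposal:

```python
import numpy as np

def preprocess(img, size=(64, 64)):
    """Nearest-neighbour resize to a uniform size, then scale pixels to [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0  # normalization step

def augment(img):
    """A minimal augmentation: the original image plus its horizontal flip."""
    return [img, img[:, ::-1]]

raw = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # dummy photo
x = preprocess(raw)
batch = augment(x)
print(x.shape, len(batch))  # (64, 64, 3) 2
```

A real pipeline would use a library resampler (e.g. bilinear interpolation) and a richer set of augmentations such as rotation and zoom, but the structure is the same.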
After preprocessing, the feature extraction step identifies and captures key characteristics (e.g.,
color, texture, shape) from the images. These extracted features are used as inputs to the
classification model. The dataset is then split into training data and testing data. The algorithm refers to the chosen model for learning the relationship between the input data and its corresponding labels; in this case, the algorithm could be a CNN or a YOLO model for classification. Once the algorithm has been trained on the training data, it produces a trained
model. This model has learned to map input features (from Saint Images) to their correct
categories. The final stage is classification. Using the trained model, the system can predict the
category of a new, unseen Saint image. This prediction is based on the learned features during
training and is validated by comparing it to the actual label in the testing phase.
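The train/test split described above can be sketched as follows; the 80/20 ratio and the fixed random seed are illustrative choices only:

```python
import numpy as np

def train_test_split(features, labels, test_fraction=0.2, seed=0):
    """Shuffle once, then hold out the last fraction of samples for testing."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(features))
    n_test = int(len(features) * test_fraction)
    test_idx, train_idx = order[:n_test], order[n_test:]
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])

X = np.arange(100).reshape(50, 2).astype(np.float32)  # 50 dummy feature vectors
y = np.array([i % 2 for i in range(50)])              # dummy binary labels
X_tr, y_tr, X_te, y_te = train_test_split(X, y)
print(len(X_tr), len(X_te))  # 40 10
```

Shuffling before splitting matters here: without it, images collected from the same source (e.g. one archive) could end up entirely in one split and bias the evaluation.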
Sources of Data
Data for this project will be sourced from multiple repositories to ensure a comprehensive and
varied dataset. Key sources include:
Historical Archives: Institutions such as the EOTC archives, monasteries, ancient churches, and
national museums will provide access to historical artifacts and images. These archives often
contain valuable resources that represent sacred images from different historical periods.
Field Research: Visits to local churches and religious institutions will be conducted to
photograph sacred images that may not be represented in existing collections. This fieldwork is
vital for capturing contemporary expressions of EOTC art.
Painters: EOTC sacred image painters can be an important source of data for this study. These
painters adhere to specific religious guidelines, doctrines, and traditional styles when creating
sacred images. Including their works in the dataset helps ensure the authenticity and doctrinal accuracy of the images used to train and test the deep learning model.
Additionally, collaborating with these painters can provide valuable insights into the unique
iconographic and stylistic elements that are essential for distinguishing real images from fake ones.
Their input could be useful in annotating the images and establishing the criteria for classifying
the pictures based on the EOTC doctrines.
Iconographic Relevance: Images must show themes, figures, or narratives central to the EOTC
faith, including Saints, biblical stories, and liturgical symbols.
Diversity of Styles: The dataset will aim to include a variety of artistic styles and techniques,
representing different regions and periods within the EOTC tradition. This diversity is essential
for training a model that accurately reflects the richness of the art form.
Image Quality: High-resolution images will be prioritized to ensure that the intricate details of
the artworks are preserved. Quality is crucial for training deep learning models, as lower quality
images may lead to suboptimal learning and classification outcomes.
By employing these methods of data collection, this research aims to build a robust and
representative dataset that will serve as the foundation for developing an effective DL model for
the identification and classification of EOTC sacred images.
Pre-processing Phases
Pre-processing is a crucial step in preparing collected images for DL algorithms, ensuring that
the data is optimized for effective analysis and classification. The phases involved in pre-
processing EOTC sacred images include resizing, normalization, augmentation, and annotation,
each holding significant importance in the overall workflow.
Resizing: Is the first step, where images are adjusted to a consistent size. This is vital because
DL models typically require input images to be of uniform dimensions. By resizing images to a
standard format, we eliminate inconsistencies that could lead to poor model performance.
Additionally, resizing can help reduce computational load and speed up the training process,
allowing the model to process batches of images efficiently.
Normalization: Follows resizing and involves scaling pixel values to a specific range, often
between 0 and 1 or -1 and 1. Normalization improves the convergence of the training process by
ensuring that the model learns effectively from the input data. It standardizes the input features,
allowing the model to focus on learning the underlying patterns rather than being influenced by
varying brightness or contrast levels present in the original images.
Augmentation: Plays a key role in enhancing the dataset's diversity and size, which can
significantly improve the robustness of the model. Techniques such as rotation, flipping,
zooming, and cropping can be applied to generate variations of the existing images. This not
only helps prevent overfitting by introducing variability but also allows the model to generalize
better when faced with new, unseen data during classification.
Annotation: Is essential for creating a labeled dataset that DL models can learn from. Each
image needs to be tagged with relevant information, such as icon type and style. Accurate
annotation enables the model to understand and differentiate between various categories,
facilitating precise classification. The quality of annotations directly impacts the model's
performance, making this step critical for achieving reliable results.
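As a sketch of what such a labeled record could look like, the entry below stores one annotation as structured metadata. Every field name ("file", "saint", "style", "acceptable", "source") is a hypothetical choice for illustration, not a schema fixed by this proposal:

```python
import json

# Hypothetical annotation record; all field names and values are illustrative.
annotation = {
    "file": "images/icon_0001.jpg",   # assumed relative path to the image
    "saint": "St. Gebriel",           # one of the saints named in this proposal
    "style": "traditional",
    "acceptable": True,               # conforms to EOTC doctrinal guidelines
    "source": "field research",
}
print(json.dumps(annotation, indent=2))
```

Storing annotations in a plain, machine-readable format like this makes it easy to audit labels with domain experts before training.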
Overall, these pre-processing phases are foundational for optimizing the dataset, ensuring that
the DL algorithms can effectively learn from the provided images and yield accurate
classifications.
Extracting Key Features: The following features will be targeted for extraction:
Color Features: Color histograms will be generated to capture the distribution of colors in each
image, which can provide insights into the stylistic elements unique to EOTC sacred images.
Color spaces (such as RGB, HSV, or LAB) may be employed to represent color information
more effectively.
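A per-channel RGB color histogram of the kind described can be sketched in NumPy as follows (the bin count is illustrative; OpenCV's calcHist would serve the same purpose in practice):

```python
import numpy as np

def color_histogram(img, bins=16):
    """Concatenated per-channel color histogram, with each channel
    normalized to sum to 1, yielding a 3 * bins feature vector."""
    hists = []
    for c in range(img.shape[2]):
        counts, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        hists.append(counts / counts.sum())
    return np.concatenate(hists)

img = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
feature = color_histogram(img)
print(feature.shape)  # (48,)
```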
Texture Features: Texture analysis will be conducted using techniques like Gray Level Co-
occurrence Matrix (GLCM) or Local Binary Patterns (LBP) to characterize the visual patterns
and textures present in the images. These texture descriptors will help distinguish between
different painting styles and techniques used in EOTC art.
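As an illustration of a texture descriptor, a simplified Local Binary Patterns computation can be written in plain NumPy (production code would more likely use scikit-image's local_binary_pattern, which also supports rotation-invariant variants):

```python
import numpy as np

def lbp(gray):
    """Simplified Local Binary Patterns: compare each interior pixel
    with its 8 neighbours and pack the comparisons into an 8-bit code."""
    center = gray[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << bit
    return code

gray = np.random.randint(0, 256, (32, 32)).astype(np.uint8)
codes = lbp(gray)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))
print(codes.shape, int(hist.sum()))  # (30, 30) 900
```

The 256-bin histogram of LBP codes, rather than the code image itself, is what would typically be fed to a classifier as a texture feature.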
Data Cleaning
The first step in data quality control is data cleaning, which involves identifying and rectifying
inaccuracies or inconsistencies in the dataset. This can include removing duplicate images,
correcting mislabeled entries, and addressing issues related to image quality, such as low
resolution or excessive noise. Automated scripts can be employed to streamline this process,
utilizing techniques such as image hashing to detect duplicates and image processing algorithms
to assess quality metrics. Moreover, images that do not meet predefined quality thresholds must
be systematically excluded to maintain a high standard for the dataset.
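Duplicate detection via image hashing, as mentioned above, can be sketched with a simple "average hash" in NumPy; libraries such as imagehash implement more robust variants, and the hash size here is illustrative:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Perceptual 'average hash': downsample to hash_size x hash_size and
    mark each cell as above/below the mean grey level. Exact and near
    duplicates yield identical or near-identical bit vectors."""
    gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
    h, w = gray.shape
    rows = np.arange(hash_size) * h // hash_size
    cols = np.arange(hash_size) * w // hash_size
    small = gray[rows][:, cols]
    return (small > small.mean()).astype(np.uint8).flatten()

a = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
b = a.copy()  # an exact duplicate
distance = int((average_hash(a) != average_hash(b)).sum())  # Hamming distance
print(distance)  # 0 -> flag b as a duplicate of a
```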
Data Validation
Following the cleaning process, data validation serves to ensure that the dataset accurately
represents the intended subjects and adheres to the established criteria for inclusion. This
involves cross-referencing the collected images with reliable sources, such as historical texts or
expert annotations, to verify their authenticity and relevance. Peer reviews and expert evaluations
can also be incorporated at this stage, where scholars familiar with EOTC art assess the dataset
for both completeness and accuracy, providing valuable insights and recommendations.
Data preparation
Data preparation is a vital aspect of maintaining data quality, as it involves the organization and
management of the dataset to facilitate efficient access and usability. This includes creating a
structured database with detailed metadata for each image, encompassing information about the
iconographic elements, historical context, and theological significance. Proper preparation not
only enhances the dataset's integrity but also aids in the reproducibility of the research.
Additionally, establishing clear documentation on data collection methods and decisions made
during the preparation process ensures transparency and clarity for future researchers.
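A metadata record of the kind described could be stored as one JSON document per image; the field names and values below are purely hypothetical, not a finalized schema:

```python
import json

# One hypothetical metadata record; every field shown is illustrative.
record = {
    "image_id": "eotc_0001",
    "file": "images/eotc_0001.jpg",
    "icon_type": "Saint",
    "style": "Gondarine",
    "source": "church archive scan",      # provenance, used in validation
    "collected_on": "2024-10-01",
    "notes": "cross-checked against expert annotation",
}
print(json.dumps(record, indent=2))
```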
Continuous Monitoring
Lastly, implementing a system for continuous monitoring of data quality throughout the research
project is essential. This can involve periodic audits of the dataset to identify potential issues that
may arise as new data is added or as the model evolves. By fostering a culture of quality
assurance within the research team, the integrity and reliability of the dataset can be upheld,
ultimately contributing to the success of the DL model in accurately identifying and classifying
EOTC sacred images.
Hardware Requirements
To facilitate the training and evaluation of deep learning models, a robust hardware configuration
is essential. The following components are recommended:
CPU: A multi-core CPU, such as an Intel i7 or AMD Ryzen 7, is recommended to handle data
preprocessing and other computational tasks that may not be GPU-accelerated. Sufficient RAM
(at least 16 GB) is also necessary to support the processing of large datasets and complex models.
GPU: A dedicated GPU (for example, an NVIDIA card with at least 8 GB of VRAM) is strongly
recommended, since training CNN and YOLO models is dominated by GPU-accelerated tensor
operations.
Storage: At least 1 TB of storage capacity is necessary, preferably on a solid-state drive (SSD)
rather than a conventional hard disk, since fast data access and retrieval are crucial when reading
large datasets during training.
Software Requirements
The software environment plays a significant role in the implementation of DL models. The
following software components are essential:
DL Frameworks: The primary frameworks utilized for this research will be TensorFlow and
PyTorch. These frameworks offer extensive support for building and training DL models, as well
as providing pre-trained models that can facilitate transfer learning.
Programming Language: Python will be the primary programming language used for
developing the models. Its extensive libraries, such as NumPy, Pandas, and OpenCV, support
data manipulation, processing, and visualization, making it an ideal choice for this research.
Development Environment: An IDE such as Jupyter Notebook or PyCharm will be used for
writing and testing code. Jupyter Notebooks offer the advantage of interactive coding and
immediate feedback, which is beneficial during the experimentation phase.
Performance Metrics
The primary performance metrics that will be utilized to evaluate the classification accuracy of
the algorithms include accuracy, precision, recall, and F1 score.
Accuracy: Measures the overall correctness of the model's predictions, calculated as the ratio of
correctly predicted images to the total number of images in the dataset. A high accuracy indicates
that the model is effectively identifying sacred images.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision: Reflects the proportion of true positive predictions among all positive predictions
made by the model. This metric is particularly important in the context of image classification, as
it helps to understand how many of the identified sacred images are indeed correct.
Precision = TP / (TP + FP)
Recall: Measures the proportion of true positive predictions among all actual positive cases in
the dataset. This metric is crucial for assessing the model's ability to identify all relevant sacred
images, ensuring that few images go unrecognized.
Recall = TP / (TP + FN)
F1 Score: Is the harmonic mean of precision and recall, providing a single metric that balances
the trade-off between the two. This score is particularly useful when there is an uneven class
distribution, ensuring that both false positives and false negatives are accounted for.
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
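The four metrics can be computed directly from binary confusion-matrix counts; a minimal sketch, with made-up example counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 computed from binary
    confusion-matrix counts, following the formulas above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts: 80 TP, 90 TN, 10 FP, 20 FN.
acc, prec, rec, f1 = classification_metrics(80, 90, 10, 20)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# → 0.85 0.889 0.8 0.842
```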
Comparative Analysis
In addition to the primary performance metrics, a comparative analysis will be conducted against
existing image classification methods. This involves benchmarking the developed models against
traditional techniques and other deep learning architectures to highlight improvements in
classification accuracy and efficiency. The comparative analysis will help to determine the
advantages of using DL techniques such as CNNs and YOLO for identifying EOTC sacred
images, providing insights into the strengths of the newly developed models.
Error Analysis
To further enhance the understanding of the model's performance, an error analysis will be
performed. This will involve examining misclassified images to identify patterns or common
characteristics that led to incorrect predictions. By categorizing these errors, such as confusion
between similar iconographies or challenges posed by low-quality images, the research can
inform potential improvements in model architecture and training processes. This iterative
feedback loop is vital for refining the DL models and ensuring higher accuracy in subsequent
versions.
Visualization Techniques
Data visualization techniques will also be employed to present the analysis results effectively.
Confusion matrices will be generated to visually represent the performance of the classification
models, illustrating how well different classes are recognized. Additionally, precision-recall
curves and ROC curves may be utilized to provide a more nuanced view of the models'
performance across various thresholds.
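As a sketch of how a confusion matrix is assembled, the snippet below builds one from illustrative label vectors (libraries such as scikit-learn provide an equivalent confusion_matrix function plus plotting utilities):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)
# Diagonal entries are correct predictions; off-diagonal entries reveal
# which classes the model confuses with which.
```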
By systematically employing these methods of data analysis, this research aims to thoroughly
evaluate the success of the algorithms in accurately classifying EOTC sacred images, ultimately
contributing to the preservation and appreciation of this vital aspect of cultural heritage.
1.10 Budget plan
1.11 Conclusion
This study aims to address the challenge of identifying and classifying EOTC sacred images
using DL techniques such as CNNs and YOLO. With the rise of fake images driven by
globalization, an automated system is needed that can distinguish authentic images of Saints
from counterfeit ones on the basis of EOTC doctrines and dogmas. The research design involves
collecting a diverse dataset of images of Saints, applying preprocessing techniques, and utilizing
advanced DL models for feature extraction and classification. By enhancing the accuracy and
reliability of sacred image recognition, the study will benefit scholars, clergy, and followers of
the EOTC.
References
[1] W. Engedayehu, “The Ethiopian Orthodox Tewahedo Church in the Diaspora: Expansion
in the Midst of Division,” African Soc. Sci. Rev., vol. 6, no. 1, 2013, [Online]. Available:
http://digitalscholarship.bjmlspa.tsu.edu/assr/vol6/iss1/8
[3] L. Alzubaidi et al., "Review of deep learning: concepts, CNN architectures, challenges,
applications, future directions," J. Big Data, vol. 8, no. 1, 2021, doi: 10.1186/s40537-021-00444-8.
[4] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-
time object detection,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.,
vol. 2016-Decem, pp. 779–788, 2016, doi: 10.1109/CVPR.2016.91.
[5] S. Ranjan, P. Garhwal, A. Bhan, M. Arora, and A. Mehra, “Framework for Image Forgery
Detection and Classification Using Machine Learning,” Proc. 2nd Int. Conf. Intell.
Comput. Control Syst. ICICCS 2018, no. Iciccs, pp. 1872–1877, 2018, doi:
10.1109/ICCONS.2018.8663168.
[6] M. M. Isaac and M. Wilscy, “Image Forgery Detection Based on Gabor Wavelets
and Local Phase Quantization,” Procedia Comput. Sci., vol. 58, pp. 76–83, 2015, doi:
10.1016/j.procs.2015.08.016.
[7] A. Kuznetsov, “Digital image forgery detection using deep learning approach,” J. Phys.
Conf. Ser., vol. 1368, no. 3, 2019, doi: 10.1088/1742-6596/1368/3/032028.
[8] J. Bunk et al., “Detection and Localization of Image Forgeries Using Resampling Features
and Deep Learning,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work.,
vol. 2017-July, pp. 1881–1889, 2017, doi: 10.1109/CVPRW.2017.235.
[9] A. Doegar, M. Dutta, and G. Kumar, “CNN based Image Forgery Detection using pre-
trained AlexNet Model,” Proc. Int. Conf. Comput. Intell. IoT 2018, pp. 402–407, 2018.
[10] E. S. Gopi, N. Lakshmanan, T. Gokul, S. K. Ganesh, and P. R. Shah, “Digital image
forgery detection using artificial neural network and auto regressive coefficients,” Can.
Conf. Electr. Comput. Eng., no. May, pp. 194–197, 2006, doi:
10.1109/CCECE.2006.277398.
[11] T. Assefa, "Enhance Recognition of EOTC Saint Icons Using Ensemble of Deep
CNN Models," 2023.
[12] N. L. W. Keijsers, “Neural Networks,” Encycl. Mov. Disord. Three-Volume Set, pp. V2-
257-V2-259, 2010, doi: 10.1016/B978-0-12-374105-9.00493-7.
[13] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,”
Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-Decem, pp.
770–778, 2016, doi: 10.1109/CVPR.2016.90.
[14] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image
recognition,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–14,
2015.
[15] S. D. Joseph Redmon, “You Only Look Once: Unified, Real-Time Object Detection,”
ACM Int. Conf. Proceeding Ser., 2018, doi: 10.1145/3243394.3243692.
[17] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal Speed
and Accuracy of Object Detection,” 2020, [Online]. Available:
http://arxiv.org/abs/2004.10934
[20] G. Lavanya and S. D. Pande, “Enhancing Real-time Object Detection with YOLO
Algorithm,” EAI Endorsed Trans. Internet Things, vol. 10, pp. 1–9, 2024, doi:
10.4108/eetiot.4541.
[21] F. Dumitrescu, C. A. Boiangiu, and M. L. Voncila, “Fast and Robust People Detection in
RGB Images,” Appl. Sci., vol. 12, no. 3, 2022, doi: 10.3390/app12031225.