


Global Journal of Research in Engineering & Computer Sciences
ISSN: 2583-2727 (Online)
Volume 03 | Issue 02 | March-April | 2023
Journal homepage: https://gjrpublication.com/gjrecs/
© 2023 | Published by GJR Publication, India

Review Article

A Smart Attendance System Based on Face Recognition: Challenges and Effects

¹Muhammad Abubakar Falalu, ²Ibrahim Umar, ³Amina Ibrahim, ⁴Abdulkadir Shehu Bari, ⁵Muhammad Ahmad Baballe*, ⁶Aminu Ya'u

¹Department of Computer Science, Audu Bako College of Agriculture Danbatta, Kano, Nigeria
²Department of Building Technology, School of Environmental Studies Gwarzo, Kano State Polytechnic, Kano, Nigeria
³Department of Computer Science, School of Technology, Kano State Polytechnic, Kano, Nigeria
⁴Department of Computer Science, Audu Bako College of Agriculture Danbatta, Kano, Nigeria
⁵Department of Computer Engineering Technology, School of Technology, Kano State Polytechnic, Kano, Nigeria
⁶Department of Architecture Technology, School of Environmental Studies Gwarzo, Kano State Polytechnic, Kano, Nigeria

DOI: 10.5281/zenodo.7843852 | Submission Date: 10 April 2023 | Published Date: 19 April 2023

*Corresponding author: Muhammad Ahmad Baballe
Department of Computer Engineering Technology, School of Technology, Kano State Polytechnic, Kano, Nigeria
ORCID: 0000-0001-9441-7023

Abstract
Modern high technology has advanced significantly thanks to the fourth industrial revolution, and artificial intelligence has made tremendous strides. Facial recognition is one of the most important computer vision tasks in real life, with applications ranging from intelligent services to security and attendance systems.

Keywords: RFID; Artificial intelligence; Attendance system; Facial recognition; Internet of Things; MobileNets.

INTRODUCTION
Today's fourth industrial revolution has achieved a remarkable confluence of cutting-edge technological advances, providing a chance to address global concerns [69]. Attendance is crucial in influencing students' academic achievement across a wide range of real-life and practical systems [1-9]. Systems for nearly automatic human identification are based on time-tested techniques such as user IDs, passwords, and fingerprints [1, 2]. However, problems such as losing an ID card or forgetting a password are inconvenient. To address them, newer technologies have been adopted, such as radio frequency identification (RFID) systems [5, 6, 65, 66, 67] and quick response (QR) codes [3, 4]. Linear barcode scanners cannot read QR codes; these require camera-based image processing systems with a fixed reading distance. Sultana et al. [7] presented an Android-based attendance tracking system that uses geolocation information. To enhance the quality of monitoring, Mahesh et al. [8] combined a smartphone with a smart classroom built on facial recognition technology. RFID performance depends on the physical operating frequency: obstacles or wave-absorbing materials close to the RFID reader can reduce quality. Other technologies, such as voice, fingerprint, and face recognition, use biometric markers to track attendance [9-14]. Fingerprints have also been employed in [10-12] to identify individuals and determine attendance percentages from time-duration reports. Face recognition, however, has several limitations in real-world situations due to poor image processing and lighting [9, 13, 14]. Improved facial recognition algorithms, in conjunction with supporting systems and equipment, ensure stable and effective attendance quality in working-environment settings [13].

The face recognition problem [13-18] stands out because it demands both high accuracy and good processing speed in real-world applications. Despite recent developments in face recognition technology, the widespread application of reliable facial recognition and verification places severe demands on current methodologies: the system must check whether the preselected facial features from the picture database match the current image's information and whether the person's face is enrolled in the system at all. Pawar et al. [15] utilized local binary pattern (LBP) descriptors to transform the input image into a binary image. A regional feature vector was then created by concatenating the descriptors at multiple resolutions, and the histogram feature was determined by computing the histogram density on each block. However, extrinsic factors like input image quality, light, and other variables may affect feature extraction from histograms. The article used FaceNet to calculate
the separation between face vectors in an effort to lessen this impact. Shebani et al. [16] suggested a modified facial recognition architecture that combines three- and four-patched LBP with Linear Discriminant Analysis (LDA) [16] and a Support Vector Machine (SVM) [18-21] to increase the accuracy of face recognition by encoding similarities between nearby pixel patches. Among the remaining issues, we consider the problem of limited training data. To satisfy the required accuracy and frame rate within the constraints of the system resources, the performance of the facial recognition system in particular needs to be optimized. Previous papers combining a Neural Network (NN) with LDA [22-29] achieved facial recognition accuracy of more than 95% at a processing speed of 4 FPS, but the many layers of a multi-layer perceptron (a network with n (n ≥ 2) layers, typically not counting the input: an output layer (the nth layer) and (n-1) hidden layers) increase the computational volume. Pawar et al. [15] proposed real-time face recognition with an LBP model applied to a smart-city setting, achieving only 80% accuracy [24]. The facial recognition model was later enhanced by T. V. Dang et al. [28] to raise the accuracy to 87-90%. Moreover, the recognition process is affected by feature extraction from the histogram.

Convolutional neural networks (CNNs) have therefore been developed as effective models for image recognition problems involving vast quantities of labeled training data. CNNs are difficult to apply to problems with only a small amount of training data, since fitting the millions of parameters of a deep CNN takes a huge number of labeled examples [25]. Deep Convolutional Neural Networks (DCNNs) enhance facial recognition as a result of the successful implementation of deep learning [26]. One of the most widely used DCNN architectures in computer vision is MobileNetV2 [26, 27]. Replacing the standard Conv2D layer with a depth-wise separable convolution layer further improves the efficiency of MobileNetV2's face recognition system by lightening the load on the convolutional network layers. Moreover, the MobileNetV2 output serves as the face detection input for the Single-Shot Detector (SSD). For the purposes of this study, the paper makes use of an improved FaceNet model built on the MobileNetV2 backbone and SSD subsection, which resolves the object identification problem and can be used in security or attendance systems on mobile devices with limited resources while still achieving high accuracy and speed. The following results demonstrate that this is a suitable model for low-resource mobile and embedded devices such as the Jetson Nano (128-core Maxwell GPU, quad-core ARM Cortex-A57 CPU) [29, 50-64]. Practical examples show good detection performance on the produced face images, with 95% accuracy and an inference speed of 25 frames per second. The model is more effective and quicker than state-of-the-art models that need larger datasets for training and processing, while using fewer resources in the training model. Ultimately, the IoT and the upgraded facial recognition technology were effectively integrated into the smart attendance system [30-36].
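The efficiency gain from replacing a standard Conv2D layer with a depth-wise separable convolution, as discussed above, can be sketched with a parameter count. The kernel size and channel counts below are illustrative values, not figures from the paper:

```python
# Parameter counts for a standard convolution vs. the depth-wise separable
# convolution used by MobileNet-style backbones. Kernel size and channel
# counts are illustrative examples, not values from the paper.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # A standard k x k convolution mixes space and channels in one step.
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    # Depth-wise step: one k x k filter per input channel.
    # Point-wise step: a 1x1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 32, 64)   # 3*3*32*64 = 18432
sep = separable_conv_params(3, 32, 64)  # 3*3*32 + 32*64 = 2336
print(std, sep, round(std / sep, 1))    # roughly 7.9x fewer parameters
```

The spatial filtering and the channel mixing are factored into two cheap steps, which is why such backbones fit on devices like the Jetson Nano.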

PROPOSED METHOD
The automatic attendance system is composed of two primary processing stages. For the best accuracy in identification and attendance, the user interface layer provides two layers of identification: the first based on the fingerprint system and the second on the camera system. Data gathered at the first layer is sent to the second layer for processing, where it is fully computed. The lecturers then assess the final grade for each student and save the information to the database at the conclusion of the course. Lastly, the data can be imported from and exported to an Excel file.

The model can be operated on the Jetson Nano 4GB embedded computer using only 5 to 10 watts of power. The Jetson Nano 4GB developer kit includes: CPU: quad-core ARM Cortex-A57 @ 1.43 GHz; memory: 4 GB 64-bit LPDDR4 (25.6 GB/s); storage: 16 GB eMMC; GPU: 128-core Maxwell. Using the advantages that the Jetson Nano offers, especially its GPU, we can install a workable attendance system with numerous recognition cameras. This system calls for cameras of sufficient quality delivering between 25 and 30 frames per second (FPS).

The R305 fingerprint scanning module is a product with a compact design, reliable performance, and straightforward construction. Its specifications include a power supply of 3.6 to 6 VDC, USB 1.1 or TTL-UART communication, a multi-fingerprint recognition mode, a baud rate of 57600 bps, and an average recognition time of less than 0.8 s. It captures and stores the user's fingerprint identification data, allowing it to be distinguished from other fingerprints. Consequently, to improve the functionality of the attendance system and verify identity, we employ the fingerprint scanning module in conjunction with the Jetson Nano.

The graphical user interface (GUI) is created with PyQt5, a Python binding for Qt, one of the most well-known and renowned cross-platform GUI toolkits. Deep learning frameworks such as Keras and TensorFlow are then employed for the learning models [30]. When the attendance system uses only a standard scan module, the entire data system is handled on a single thread; a system that combines facial recognition with fingerprinting, however, uses a Jetson Nano-based face recognition system capable of handling multiple video streams [31]. The Jetson Nano is connected to the camera module for storing the acquired camera images, and a TV or LCD monitor connected to the Jetson Nano serves as a means of observation. Mobile phones, laptops, and desktop computers may all access the system over the Internet through the Jetson Nano [31-33].
MobileNets are built on a streamlined design that creates lightweight deep neural networks using depth-wise separable convolutions [37-44]. In comparison to MobileNetV1 [26, 44], MobileNetV2 [37-43] achieves higher accuracy with fewer input parameters and calculations. Using depth-wise separable convolutions with linear bottlenecks and inverted residual blocks (shortcut connections between bottlenecks) [43], we introduce the key features of MobileNetV2, optimize the loss function, and use the improved model architecture from the FaceNet model to illustrate MobileNetV2. Because standard residual architectures have more channels at a block's input and output than
at its intermediate layers, MobileNetV2's residual block is the polar opposite of earlier residual architectures. To reduce the number of model parameters, one of the layers uses an inverted residual block, and depth-separated convolution transforms are also applied; this allows a modest reduction in the size of the MobileNet model. Real-time, lightweight networking is becoming more and more important in the age of mobile networks, yet many recognition networks cannot meet real-time criteria because of an excessive number of parameters and computations. Compared with other contemporary approaches to this problem, the proposed solution leveraging the MobileNetV2 backbone performs better on the database of facial expressions and features. In the MobileNetV2 block, the input channels are first expanded by a 1x1 point-wise convolution; depth-wise convolution then extracts features, and a linear 1x1 convolution integrates the output features while lowering the size of the network. After this size reduction, ReLU6 is replaced with a linear function so that the low-dimensional output preserves information and the output channel size can match the input. When combined with SSD, MobileNetV2 is very helpful in lowering latency and increasing processing performance.
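The expand, depth-wise, and linear-project flow described above can be traced with toy tensors. The channel sizes, the expansion factor t = 6, and the per-channel scaling used as a stand-in for the depth-wise convolution are simplifying assumptions, not details from the paper:

```python
# Shape flow through one MobileNetV2-style inverted-residual block, with 1x1
# convolutions modelled as per-pixel matrix multiplies. Channel sizes
# (16 -> 96 -> 16) and expansion factor t=6 are illustrative assumptions.
import numpy as np

def relu6(x):
    # ReLU6 clips activations to the range [0, 6].
    return np.clip(x, 0.0, 6.0)

rng = np.random.default_rng(0)
h, w, c_in, t = 8, 8, 16, 6
x = rng.standard_normal((h, w, c_in))

# 1) 1x1 point-wise expansion: c_in -> t*c_in channels, followed by ReLU6.
w_expand = rng.standard_normal((c_in, t * c_in))
e = relu6(x @ w_expand)                      # shape (8, 8, 96)

# 2) The 3x3 depth-wise convolution acts independently per channel; it is
#    modelled here as a per-channel scaling to keep the sketch short
#    (the channel count is unchanged, which is the point being shown).
dw_scale = rng.standard_normal(t * c_in)
d = relu6(e * dw_scale)                      # shape (8, 8, 96)

# 3) 1x1 linear projection back down to c_in channels -- no ReLU6 here,
#    so the low-dimensional output is not clipped.
w_project = rng.standard_normal((t * c_in, c_in))
y = d @ w_project                            # shape (8, 8, 16)

# The residual shortcut works because input and output shapes match.
out = x + y
print(e.shape, d.shape, out.shape)
```

The wide intermediate tensor and narrow input/output are exactly the inversion of a classical residual block.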
SSD [45, 46] is one of several fast and effective object detectors, alongside YOLO [14] and Faster R-CNN [46]. The SSD subsection extracts the feature map from the MobileNetV2 backbone and adds extra layers to predict the object. FaceNet uses Inception modules in its blocks to minimize the number of trainable parameters [13]. This model creates a 128-dimensional embedding vector for each picture from a 160x160 RGB image. FaceNet can thus be used to vectorize facial features, and the triplet loss function can be used to determine how far apart the face vectors are from one another. To make mathematical comparison and recognition easier, the face is first represented as a vector [71]. For face identification, we must essentially compute the similarity and difference between the faces we receive, that is, the distance between the triplet-loss vectors. A triplet consists of three images: the anchor (query), a second face image of the same subject, and a third, unrelated face image [28, 47, 48].

Face detection and face recognition are the two primary steps in the recognition process, and the algorithms used in each step differ. In this study, the authors employ FaceNet feature extraction for face identification and a multi-layer convolutional neural network to recognize faces in frames. During face recognition, the SSD subsection uses the MobileNetV2 backbone network to define the face's bounding box. The output of MobileNetV2 is used as the feature maps on which detection of the input photos is based, while the last few layers of the network, including the FC, MaxPool, and SoftMax layers, are discarded. Finally, given a face image as input, FaceNet generates a vector of 128 values that reflects the most crucial facial traits of each individual.
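The triplet loss described above can be sketched in a few lines. The embeddings below are toy 3-d vectors rather than real 128-d FaceNet outputs, and the margin value is an illustrative choice:

```python
# Triplet loss over embedding vectors: pull the anchor towards the positive
# (same person) and push it away from the negative (different person).
# Toy 3-d embeddings stand in for real 128-d FaceNet outputs.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # L = max(0, ||a - p||^2 - ||a - n||^2 + margin)
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 1.0, 0.0])   # anchor: the query face
p = np.array([0.0, 0.9, 0.1])   # positive: same person, another image
n = np.array([1.0, 0.0, 0.0])   # negative: a different person

# The positive pair is much closer than the negative pair, so the loss is 0.
print(triplet_loss(a, p, n))
```

Training drives same-person embeddings together and different-person embeddings apart by at least the margin, which is what makes a plain distance threshold usable at inference time.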
This vector is known as an embedding in machine learning. The distance between facial feature vectors is then measured by a classifier to differentiate between identities. Due to their effectiveness in multi-class classification, support vector machines (SVM) [18, 19] and K-Nearest Neighbors (KNN) [50] are two of the most often employed algorithms in face recognition applications. The FaceNet model extracts features for face identification after detecting faces in the frame.

The Labeled Faces in the Wild (LFW) dataset, which contains more than 13,000 photos of human faces, is used as the pre-training image database for the multi-layer convolutional network [28, 47, 48, 49]. The attendance training data consists of photos of 150 tagged subjects from three classrooms; 50 students from each class uploaded their photos for face recognition training. Most of the faces in the dataset are direct frontal views containing all of the feature information.

Using a larger face dataset, we assess the FaceNet model based on the MobileNetV2 backbone network and SSD subsection and contrast it with models such as MTCNN employing O-Net, P-Net, and R-Net [7, 43, 48], and the RetinaFace model built with the R512 and R50 backbones [47]. The models in [7, 43, 48] all achieve similarly high accuracy at a low frame rate, as does the RetinaFace result based on MobileNetV2. In particular, the accuracy of the FaceNet model based on the attendance system's MTCNN backbone [28] is approximately 87-90%. We use the faces of 14 randomly selected students to assess the FaceNet model's performance in the face recognition method based on MobileNetV2-SSD; the model's accuracy ranges from 91% to 94%. The experimental outcomes significantly outperformed the FaceNet model based on the MTCNN backbone [28]. Based on the suggested experimental approach, which uses smaller datasets and fewer resources in the training model, the new FaceNet model is more effective and quicker than the previous state-of-the-art models, which require larger datasets for training and processing.

The intended system exemplified two crucial characteristics in relation to its capacity to enhance the veracity of the attendance. First, face recognition is used for automatic attendance verification because it is the least invasive method and needs only simple acquisition tools. Second, it integrates advanced mobile devices that have constrained resources (such as limited RAM and on-device storage) with small datasets and few resources in the training model [68].
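The embedding-distance classification described above can be sketched with a simple nearest-neighbor rule. The gallery names, the toy 4-d embeddings, and the distance threshold are illustrative assumptions, not values from the paper:

```python
# Nearest-neighbor identity matching over face embeddings: compare a query
# embedding against an enrolled gallery and accept the closest match only if
# it falls under a distance threshold. Gallery names, 4-d embeddings, and
# the threshold are toy assumptions, not values from the paper.
import numpy as np

def identify(query, gallery, threshold=0.8):
    best_name, best_dist = "unknown", float("inf")
    for name, ref in gallery.items():
        dist = float(np.linalg.norm(query - ref))  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Reject matches that are too far away: the face is not enrolled.
    return best_name if best_dist < threshold else "unknown"

gallery = {
    "alice": np.array([1.0, 0.0, 0.0, 0.0]),
    "bob":   np.array([0.0, 1.0, 0.0, 0.0]),
}
print(identify(np.array([0.9, 0.1, 0.0, 0.0]), gallery))  # -> alice
print(identify(np.array([0.0, 0.0, 1.0, 0.0]), gallery))  # -> unknown
```

A trained SVM or KNN classifier, as cited in the text, generalizes this same idea beyond a single reference embedding per person.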

Pros of Facial Recognition


Technology like facial recognition can help society in various ways, such as by minimizing human interaction, enhancing
safety and security, and preventing crimes. Consider some advantages of facial recognition:
1. Helps find missing people
2. Protects businesses against theft
3. Improves medical treatment
4. Strengthens security measures
5. Makes shopping more efficient
6. Reduces the number of touchpoints

7. Improves photo organization [72].

Cons of Facial Recognition


A cutting-edge technology called facial recognition has the potential to alter our future. However, the introduction of this
new system into society carries some dangers and implications, just like every invention.
1. Threatens privacy
2. Imposes on personal freedom
3. Violates personal rights
4. Data vulnerabilities
5. Misuse causing fraud and other crimes
6. Technology is still new
7. Errors can implicate innocent people
8. Technology can be manipulated [72].

CONCLUSION
In order to reduce the model size and compute volume, this paper uses a FaceNet model built on the foundation of MobileNetV2 with the SSD subsection to detect faces using depth-separated convolutional networks, and the results are impressive. The authors experimented with and assessed the proposed model by contrasting it with various RetinaFace models and the MTCNN backbone. The SSD in conjunction with the MobileNetV2 backbone has enabled accuracy of roughly 99% in simulated studies and 91-95% in real-world applications using the same dataset as WIDER FACE. A frame rate of 20-23 FPS substantially speeds up the procedure. The revised FaceNet model is more effective and quicker than the previous state-of-the-art models, which need larger datasets for training and processing; it also requires fewer resources in the training model. In addition, the deep learning-based solution may be able to maximize the resources of a variety of low-capacity hardware systems. The effects and difficulties of the facial recognition system are also examined.

REFERENCES
1. G. Hua, M. H. Yang, E. L. Miller, T. M. Ma, D. J. Kriegman, and T. S. Huang, “Introduction to the special section
on real world face recognition,” IEEE Trans. Pat. Anal. Mach. Intell., vol. 33, pp. 1921- 1924, 2011.
2. F. P. Filippidou, and G. A. Papakostas, “Single Sample Face Recognition Using Convolutional Neural Networks for
Automated Attendance Systems,” 2020 Fourth Int. Conf. Intell. Comput. Data Sci. (ICDS), 2020.
3. M. G. M. Johar, and M. H. Alkawaz, "Student's activity management system using QR code and C4.5 algorithm," Int. J. Med. Toxicology & Legal Med., vol. 21, pp. 105-107, 2018.
4. F. Masalha, and N. Hirzallah, “A students attendance system using QR code,” Int. J. Adv. Comput. Sci. Appl., vol.
5, pp. 75-79, 2014.
5. O. Arulogun, A. Olatunbosun, O. Fakolujo, and O. Olaniyi, “RFID based students attendance management system,”
Int. J. Sci. & Eng. Res., vol. 4, pp. 1-9, 2013.
6. F. Silva, V. Filipe, and A. Pereira, “Automatic control of students' attendance in classrooms using RFID,” ICSNC
08. 3rd Int. Conf. Syst. Netw., pp. 384-389, 2008.
7. M. Karunakar, C. A. Sai, K. Chandra, and K. A. Kumar, “Smart Attendance Monitoring System (SAMS): A Face
Recognition Based Attendance System for Classroom Environment,” Int. J. Recent Develop. Sci. Technol., vol. 4,
no. 5, pp. 194-201, 2020.
8. S. Wenhui, and J. Mingyan, “Face Recognition Based on Multi-view - Ensemble Learning,” Chin. Conf. Pat.
Recognit. Comput. Vision (PRCV), pp. 127-136, 2018.
9. A. S. Al-Waisy, R. Qahwaji, S. Ipson, and S. Al-Fahdawi, “A Robust Face Recognition System Based on Curvelet
and Fractal Dimension Transforms,” IEEE Int. Conf. Comp. Inf. Technol., pp. 548-555, 2015.
10. A. Fassio et al., “Prioritizing Virtual Screening with Interpretable Interaction Fingerprints,” J. Chem. Inf. Model.,
vol. 62, no. 18, pp. 1- 53, 2022.
11. Y. Jiang, “Space Debris Fingerprint Definition and Identification,” The Journal of Brief Ideas, 2022.
12. A. B. V. Wyzykowski, M. P. Segundo, and R. D. P. Lemes, "Multiresolution synthetic fingerprint generation," IET Biometrics, vol. 11, no. 1, 2022.
13. W. Chunming, and Z. Ying, “MTCNN and FaceNet Based Access Control System for Face Detection and
Recognition,” Autom. Control Comput. Sci., vol. 55, pp. 102-112, 2021.
14. A. Bochkovskiy, C.Y. Wang, and H. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” arXiv
preprint arXiv:2004.10934., 2020.
15. S. Pawar, V. Kithani, S. Ahuja, and S. Sahu, “Local Binary Patterns and Its Application to Facial Image Analysis,”
2011 Int. Conf. Recent Trends Inf. Technol. (ICRTIT), pp. 782-786, 2011.
16. Q. A. Shebani, “A Hybrid Feature Extraction Technique for Face Recognition,” Int. Proc. Comput. Sci. Inf.
Technol., vol 3, no. 2, 2012.
17. A. Sharma and S. Chhabra, “A Hybrid Feature Extraction Technique for Face Recognition,” Int. J. Adv. Res.
Comput. Sci. Softw. Eng., vol. 7, no. 5, pp. 341-350, 2017.

18. M. Mia, R. Islam, M. F. Wahid, and S. Biswas, “Image Reconstruction Using Pixel Wise Support Vector Machine
SVM Classification,” Comput. Sci. Math. Int. J. Sci. Technol. Res., vol. 4, no. 2, pp. 232-235, 2015.
19. U. Maulik and D. Chakraborty, “Remote Sensing Image Classification: A survey of support-vector-machine-based
advanced techniques,” IEEE Geosci. Remote Sens. Mag., vol. 5, no. 1, pp. 33-52, 2017.
20. T. V. Dang, “Smart home Management System with Face Recognition based on ArcFace model in Deep
Convolutional Neural Network,” J. Robot. Control (JRC), vol. 3, no. 6, pp. 754-761, 2022.
21. T. V. Dang et al., "Design of a Face Recognition Technique based MTCNN and ArcFace," MMMS 2022, LNME, 2022.
22. Z. Haitao, L. Zhihui, L. Henry and Z. Xianyi, “Linear Discriminant Analysis,” Feature Learn. Understanding, pp 71-
85, 2020.
23. F. Zuo, and P. H. N. de With, “Real-time Face Recognition for Smart Home Applications,” 2005 Digest of Tech.
Papers, Int. Conf. Consum. Electron., pp. 35-36, 2005.
24. S. Pawar, V. Kithani, S. Ahuja, and S. Sahu, “Smart Home Security using IoT and Face Recognition,” 2018 Fourth
Int. Conf. Comput. Commun. Control and Automat. (ICCUBEA), vol. 176, no. 13, pp. 45- 47, 2018.
25. F. P. Filippidou, and G. A. Papakostas, “Single Sample Face Recognition Using Convolutional Neural Networks for
Automated Attendance Systems,” 2020 Fourth Int. Conf. Intell. Comput. Data Sci. (ICDS), 2020.
26. A. G. Howard, M. Zhu, Bo Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam,
"MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," Comput. Vision Pat.
Recognit., 2017.
27. A. Wirdiani, P. Hridayami, and A. Widiari, “Face Identification Based on K-Nearest Neighbor,” Sci. J. Inform., vol.
6, no. 2, pp. 150-159, 2019.
28. T. V. Dang, and D. K. Nguyen, "Research and Design the Intelligent Mechatronics system applying with face recognition and deep learning in student's diligence," 7th Nat. Conf. Sci. Eng. Meas., pp. 239-246, 2020.
29. A. Süzen, B. Duman, and B. Sen, “Benchmark Analysis of Jetson TX2, Jetson Nano and Raspberry PI using Deep-
CNN,” 2020 Int. Congr. Human-Comput. Interact. Optim. Robot. Appl. (HORA), pp. 1-5, 2020.
30. S. Kurzadkar, "Hotel management system using Python Tkinter GUI," Int. J. Comput. Sci. Mobile Comput., vol. 11, no. 1, pp. 204-208, 2022.
31. V. Sati, S.M. Sánchez, N. Shoeibi, A. Arora, and J.M. Corchado, “Face Detection and Recognition, Face Emotion
Recognition Through NVIDIA Jetson Nano,” 11th Int. Symp. Ambient Intell., pp. 177-185, 2020.
32. M. H. Widianto, A. Sinaga, and M. A. Ginting, “A Systematic Review of LPWAN and Short-Range Network using
AI to Enhance Internet of Things,” J. Robot. Control (JRC), vol. 3, no. 2, pp. 505-518, 2022.
33. G. P. N. Hakim, D. Septiyana, and I. Suwarno, “Survey Paper Artificial and Computational Intelligence in the
Internet of Things and Wireless Sensor Network,” J. Robot. Control (JRC), vol. 3, no. 4, pp. 439-454, 2022.
34. A. A. Sahrab and H. M. Marhoon, “Design and Fabrication of a Low-Cost System for Smart Home Applications,” J.
Robot. Control (JRC), vol. 3, no. 4, pp 409-414, 2022.
35. M. H. Widianto, A. Sinaga, and M. A. Ginting, “A Systematic Review of LPWAN and Short-Range Network using
AI to Enhance Internet of Things,” J. Robot. Control (JRC), vol. 3, no. 4, pp. 505-518, 2022.
36. N. S. Irjanto and N. Surantha, “Home Security System with Face Recognition based on Convolutional Neural
Network,” Int. J. Adv. Comput. Sci. Appl., vol. 11, no. 11, pp. 408-412, 2020.
37. A. Verma and M. K. Srivastava, “Real-time Face mask Detection Using Deep Learning and MobileNet V2,” VLSI,
Microwave and Wireless Technologies, Lecture Notes in Electrical Engineering, vol. 877, pp. 297-305, 2022.
38. M. Sandler, A. Howard, M. Zhu, Andrey Zhmoginov, and L. C. Chen. “MobileNetV2: Inverted Residuals and
Linear Bottlenecks,” The IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 4510-4520, 2018.
39. V. Patel and D.Patel, “Face Mask Recognition Using MobileNetV2,” Int. J. Sci. Res. Comp. Sci. Eng. Inf. Technol,
vol. 7, no. 5, pp. 35-42, 2021.
40. H. Sun, S. Zhang, R. Ren, and L. Su “Maturity Classification of “Hupingzao” Jujubes with an Imbalanced Dataset
Based on Improved MobileNet V2,” Agriculture, vol. 12, no. 9, pp. 1-17, 2022.
41. G. Edel and V. Kapustin, “Exploring of the MobileNet V1 and MobileNet V2 models on NVIDIA Jetson Nano
microcomputer,” J. Phys. Conf. Series, vol. 2291, no. 1, pp.1-7, 2022.
42. C. L. Amar et al., “Enhanced Real-time Detection of Face Mask with Alarm System using MobileNetV2,” Inter. J.
Sci. Eng., vol.7, no. 7, pp. 1-12, 2022.
43. M. Sandler, A. Howard, M. Zhu, Andrey Zhmoginov, and L. C. Chen. “MobileNetV2: Inverted Residuals and
Linear Bottlenecks,” The IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), pp. 4510-4520, 2018.
44. Y. Nan, J. Ju, Q. Hua, H. Zhang, and B. Wang, “A-MobileNet: An approach of facial expression recognition,”
Alexandria Eng. J., vol. 61, no. 6, pp. 4435-4444, 2021.
45. A. Srivastava, A. Dalvi, C. Britto, H. Rai, and K. Shelke, "Explicit Content Detection using Faster R-CNN and SSD MobileNet v2," Int. Res. J. Eng. Technol. (IRJET), vol. 7, no. 3, pp. 5572-5577, 2020.
46. J. Jeong, H. Park, and N. Kwak, “Enhancement of SSD by concatenating feature maps for object detection,” arXiv
preprint arXiv:1705.09587, 2017.
47. R. Redondo and J. Gilbert, “Extended Labeled Faces in-the-Wild (ELFW): Augmenting Classes for Face
Segmentation,” Comput. Sci., 2020.
48. J. Deng, J. Guo, E. Ververas, I. Kotsia, and S. Zafeiriou, “RetinaFace: Single-shot Multi-level Face Localisation in
the Wild,” 2020 IEEE/CVF Conf. Comput. Vision and Pattern Recognit. (CVPR), pp. 5203-5212, 2020.
49. A. Wirdiani, P. Hridayami, and A. Widiari, “Face Identification Based on K-Nearest Neighbor,” Sci. J. Inform., vol.
6, no. 2, pp. 150-159, 2019.
50. R. S. Peres, X. Jia, J. Lee, K. Sun, A. W. Colombo, and J. Barata, “Industrial Artificial Intelligence in Industry 4.0 -
Systematic Review, Challenges and Outlook,” IEEE Access, vol. 8, pp. 220121-220139, 2020.

51. P. Vrushali, and M. Sanjay, "Autonomous Vehicle using Computer Vision and LiDAR," I-manager's J. Embedded Syst., vol. 9, pp. 7-14, 2021.
52. W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, "SphereFace: Deep Hypersphere Embedding for Face Recognition," 2017 IEEE Conf. Comput. Vision and Pattern Recognit. (CVPR), pp. 6738-6746, 2017.
53. S. Qureshi et al., "Face Recognition (Image Processing) based Door Lock using OpenCV, Python and Arduino," Int. J. Res. Appl. Sci. & Eng. Technol., vol. 8, no. 6, pp. 1208-1214, 2020.
54. N. Saxena, and D. Varshney, "Smart Home Security Solutions using Facial Authentication and Speaker Recognition through Artificial Neural Networks," Int. J. Cognitive Comput. Eng., vol. 2, no. 2, pp. 154-164, 2021.
55. M. H. Khairuddin, S. Shahbudin, and M. Kassim, "A smart building security system with intelligent face detection and recognition," IOP Conf. Series: Mater. Sci. Eng., no. 1176, 2021.
56. M. R. Dhobale, R. Y. Biradar, R. R. Pawar, and S. A. Awatade, "Smart Home Security System using IoT, Face Recognition and Raspberry Pi," Int. J. Comput. Appl., vol. 176, no. 13, pp. 45-47, 2020.
57. T. Bagchi et al., "Intelligent security system based on face recognition and IoT," Materials Today: Proc., vol. 62, pp. 2133-2137, 2022.
58. P. Kaur, H. Kumar, and S. Kaushal, "Effective state and learning environment based analysis of students' performance in online assessment," Int. J. Cognitive Comput. Eng., vol. 2, pp. 12-20, 2021.
59. N. A. Othman, and I. Aydin, "A face recognition method in the Internet of Things for security applications in smart homes and cities," 2018 6th Int. Istanbul Smart Grids and Cities Congr. Fair (ICSG), pp. 20-24, 2018.
60. T. V. Dang, "Design the encoder inspection system based on the voltage waveform using finite impulse response (FIR) filter," Int. J. Modern Phys. B, vol. 34, p. 2040146, 2020.
61. T. V. Dang, and N. T. Bui, "Design the abnormal object detection system using template matching and subtract background algorithm," MMMS 2022, LNME, 2022.
62. T. V. Dang, and N. T. Bui, "Research and Design Obstacle Avoidance Strategy for Indoor Autonomous Mobile Robot using Monocular Camera," J. Adv. Transp., 2022.
63. T. V. Dang, and N. T. Bui, "Multi-scale Fully Convolutional Network based Semantic Segmentation for Mobile Robot Navigation," Electron., vol. 12, no. 3, p. 533, 2022.
64. T. V. Dang, and D.-S. Nguyen, "Optimal Navigation based on Improved A* Algorithm for Mobile Robot," Int. Conf. Intell. Syst. Netw., 2023.
65. M. B. Ahmad, and F. A. Nababa, "A Comparative Study on Radio Frequency Identification System and its various Applications," Int. J. Adv. Appl. Sci. (IJAAS), vol. 10, no. 4, pp. 392-398, 2021, DOI: 10.11591/ijaas.v10.i4.pp392-398.
66. M. B. Ahmad, and F. A. Nababa, "The need of using a Radio Frequency Identification (RFID) System," Int. J. New Comput. Archit. Appl. (IJNCAA), vol. 11, no. 2, pp. 22-29, 2021.
67. M. A. Baballe, Y. I. Muhammad, A. Abba, S. A. Ibrahim, A. A. Yako, A. S. Aliyu, and I. Abba, "The Implementation of a Bluetooth and GSM Module-Based Student Attendance System," Control Sci. Eng., vol. 6, no. 1, pp. 10-16, 2022, DOI: 10.11648/j.cse.20220601.12.
68. T. Dang, "Smart Attendance System based on Improved Facial Recognition," J. Robot. Control (JRC), vol. 4, no. 1, 2023, DOI: 10.18196/jrc.v4i1.16808.
69. M. Izuddeen, M. K. Naja'atu, M. U. Ali, M. B. Abdullahi, A. M. Baballe, A. U. Tofa, and M. Gambo, "FPGA Based Facial Recognition System," J. Eng. Res. Rep., vol. 22, no. 8, pp. 89-96, 2022.
70. M. A. Baballe, and M. I. Bello, "Impact and Challenges of Implementing Management Information System," Global J. Res. Eng. Comput. Sci., vol. 1, no. 2, 2021.
71. A. S. Muhammad, A. S. Iliyasu, A. U. Abdullahi, B. A. Imam, S. H. Ayagi, and M. A. Baballe, "Gabor Based Band Selection For Multispectral Palmprint Recognition System Using Feature Fusion," J. Image Process., vol. 6, no. 2, 2019.
72. https://senstar.com/senstarpedia/pros-and-cons-of-facial-recognition/

CITE AS
M. A. Falalu, I. Umar, A. Ibrahim, A. S. Bari, M. A. Baballe, and A. Ya'u (2023). A Smart Attendance System Based on Face Recognition: Challenges and Effects. Global Journal of Research in Engineering & Computer Sciences, 3(2), 25-30. https://doi.org/10.5281/zenodo.7843852
