TSP Jiot 52325
DOI: 10.32604/jiot.2024.052325
ARTICLE
ABSTRACT
Impressive advancements and novel techniques have been witnessed in AI-based Human Intelligent-Things Interaction (HITI) systems. Several technological breakthroughs have contributed to HITI, such as the Internet of Things (IoT), deep and edge learning for deducing intelligence, and 6G for ultra-fast and ultra-low-latency communication
between cyber-physical HITI systems. However, despite the many advancements that have been made, human-AI teaming still presents several unaddressed challenges. Allowing human stakeholders to understand AI's decision-making process is a novel challenge: Artificial Intelligence (AI) needs to adopt diversified human-understandable features, such as ethics, non-biases, trustworthiness, explainability, safety guarantees, data privacy, system security, and auditability. While adopting these features, high system performance should be maintained, and the transparent processing involved in the 'human intelligent-things teaming' should be
conveyed. To this end, we introduce the fusion of four key technologies, namely an ensemble of deep learning,
6G, IoT, and corresponding security/privacy techniques to support HITI. This paper presents a framework that
integrates the aforementioned four key technologies to support AI-based Human Intelligent-Things Interaction.
Additionally, this paper presents a comprehensive review of existing techniques for fusing security and privacy within future HITI applications. As proof of concept, it demonstrates two security applications that use the fusion of the four key technologies to offer next-generation HITI services, namely intelligent smart city surveillance and handling emergency services. The proposed research outcome is envisioned to democratize the use of AI within smart city surveillance applications.
KEYWORDS
Deep edge learning; human intelligent-things interaction; Internet of Things
1 Introduction
1.1 Overview
Advanced technologies, such as the Internet of Things (IoT), Artificial Intelligence (AI), and 6G
network connectivity, have become an integral part of the civilized modern lifestyle and how businesses
are run [1]. Yet, there are obstacles to relying on them completely, especially when human judgment is required to assess values such as safety, ethics, and trustworthiness. Building a near real-time Human Intelligent-Things Interaction (HITI) application is therefore an emerging challenge. The current research work addresses these challenges. One approach is to allow humans to understand how decisions are made by the HITI algorithms, which can be achieved by establishing HITI processing transparency to eliminate the black-box syndrome of the processing step. Mutual understandability between humans and the edge processing endpoints must also be established to support this transparency.
A fusion of advanced technologies is used in the current research work to enable a clear
understanding of non-functional processing features, such as ethics, trustworthiness, explainability,
and auditability. Hence, this research investigates the possibility of building edge GPU devices that
can train and run multiple ensembles of deep learning models on different types of edge IoT nodes
with constrained resources. In addition, a 6G architecture-based framework will be built to fuse edge
IoT devices and an ensemble of GPU-enabled deep-learning processing units. This framework will be
supported with edge 6G communication capability [2], edge storage system [3], and system security
and data privacy techniques. As a proof of concept, a smart city surveillance application and handling
emergency services will be implemented to showcase the research idea.
c) Live alerts
d) Live events
3. The framework will showcase next-generation demonstrable AI-based services to different
stakeholders, such as city offices and ministries, for further technology transfer.
1.5.1 6G Networks
The number of intelligent devices connected to the Internet is growing rapidly, which requires more reliable, resilient, and secure wireless connectivity. Thus, a wireless network beyond 5G (i.e., 6G) is essential for building connectivity between humans and machines [10–12].
Moreover, recent advances in deep learning support many remarkable applications, such as robotics
and self-driving cars. This increases the demand for more innovations in 6G wireless networks.
Such networks will be needed to support extensive AI services in all the network layers. A potential
architecture for 6G was introduced in [13]. Many other works also describe the roadmap towards 6G
[14–16]. Zhang et al. [17] discussed several possible technology transformations that will define the 6G
networks. Additionally, many recent applications are now using 6G to upgrade both performance and
service quality [18–21]. Some other noteworthy research that focuses on 6G can be reached at [22–26].
to achieve the personalized, responsive, and private learning. This task is challenging since it requires
learning the value of millions of parameters by performing a huge number of operations using simple
edge devices with limited memory and computation power. Thus, it is impossible to benefit from the
great power of deep learning applied on edge devices without either compressing or distilling the large
deep learning model.
The compression of an already-trained model is done using weights sharing [31], quantization
[32], or pruning [33,34]. It is also possible to combine more than one of these compression techniques
[35]. However, the compressed models cannot be retrained to capture user-specific requirements, nor can they be retrained to capture new data that becomes available at runtime. The distillation approaches are based on knowledge transfer, using knowledge distilled from a "teacher" (a large cloud-based deep model) to improve the accuracy of a "student" (a small on-device model) [36–39]. Distillation approaches suffer from limited accuracy [40,41] and an unpredictable training time before the model reaches acceptable accuracy. Moreover, these approaches assume that all required training data is available at training time and that the tasks of the student and the teacher remain the same; neither assumption is realistic in practice. The limitations of the current studies are summarized in Table 1 for easier readability and reference. Other noteworthy research using deep learning on edge devices is presented in [42,43].
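The teacher-student knowledge transfer described above can be sketched as a loss computation. Below is a minimal NumPy sketch of the classic softened-softmax distillation loss; the temperature and weighting values are illustrative choices, not parameters taken from the cited works.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer probabilities."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.5):
    """Weighted sum of (a) cross-entropy against the hard label and
    (b) KL divergence between softened teacher and student outputs."""
    hard_loss = -np.log(softmax(student_logits)[true_label] + 1e-12)
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft_loss = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)))
    # The T^2 factor keeps the soft-loss gradient magnitude comparable across temperatures.
    return alpha * hard_loss + (1 - alpha) * (T ** 2) * soft_loss
```

A student that matches the teacher's softened output incurs zero soft loss, so training pulls the small on-device model toward the teacher's inter-class similarity structure rather than toward the hard labels alone.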
The next section discusses the research methodology and design, explaining the advanced architectures in communication technologies, secure big data sharing and transmission techniques, and advanced deep learning algorithms that can support HITI. In addition, this section demonstrates a use-case scenario that explains the incorporation of human-understandable features (i.e., ethics, trust, security, privacy, and non-biases) in the use of AI for HITI applications. In this section, the framework design is demonstrated via a high-level diagram that shows all employed technologies
JIOT, 2024, vol.6 47
required to run the research project implementation, as well as the techniques used for preserving the security and privacy of human-AI interaction data. It then describes the implementation of two types of services: smart city surveillance and emergency handling scenarios. After that, the research implementation is tested, and results are discussed and evaluated using measures such as accuracy, recall, precision, and F1 score. An assessment of the use of edge learning and of Blockchain-based provenance is also presented in this section. The last section concludes the research paper and identifies potential future work.
of a tactile internet for human-computing device interactions. Fig. 2 shows how the fusion of different entities forms the HITI ecosystem. AI is one of the areas that has taken full benefit of this ultra-low-latency and massive-bandwidth communication [13]. Owing to the availability of massive datasets, very high-bandwidth communication, and abundant memory and GPU resources, researchers have developed and proposed innovative AI applications, and AI processing can now even be performed at the edge nodes [46].
Figure 3: Fusion of deep edge learning models to support surveillance and emergency situations
Figure 4: Fusion of human intelligence and AI interaction for emergency health applications within a
smart city
By avoiding the black-box syndrome during model development, the AI algorithm can be made explainable, auditable, and debuggable. Focusing on the security of datasets, models, and the computing environment, in addition to encryption mechanisms, helps satisfy features like safety, privacy awareness, and responsibility. Measures like differential privacy, Blockchain-based provenance, and homomorphic encryption support non-biases and trustworthiness. Fig. 6 shows a scenario in which human intelligence interacts with an AI that understands human semantics. Such semantic AI would build trust in human subjects and convince HITI application designers to democratize AI for more and more critical applications.
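As an illustration of one such privacy measure, differential privacy for a simple counting query (e.g., the number of persons detected in a surveillance frame) can be sketched with the Laplace mechanism. The query and epsilon value below are illustrative assumptions, not parameters of the proposed framework.

```python
import numpy as np

def private_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1, so adding Laplace noise with
    scale 1/epsilon is sufficient for the epsilon-DP guarantee."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Larger epsilon -> less noise (weaker privacy, better utility).
noisy = private_count(50, epsilon=100.0, rng=np.random.default_rng(0))
```

The released value hides any single individual's contribution while still allowing aggregate surveillance analytics over many frames.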
Figure 6: Fusion of human intelligence with edge AI for deducing different types of objects and events
As shown in Fig. 8, 6G will allow User Plane (UP) functions that provide personalized quality-of-service and experience-related functionalities with the help of an AI functional plane. The Control Plane (CP) functionalities use AI algorithms that manage the network functions, security functions, user mobility, and other control mechanisms needed to support the UP requirements. 6G will use built-in AI plane functions to offer custom, dynamic, and personalized network slices that serve the HITI applications' needs [13]. AI-based Multiple-Input Multiple-Output (MIMO) beamforming will use only the bandwidth needed by the HITI vertical, and network slices and network functions are virtually configured according to the QoS needed.
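To make the slice-selection idea concrete, the sketch below maps an application's latency and bandwidth requirements to a slice. The slice names (URLLC, eMBB, mMTC) are standard 5G/6G service classes, but the catalogue values and the selection policy here are illustrative assumptions, not part of the proposed framework.

```python
# Illustrative slice catalogue: guaranteed worst-case latency and minimum bandwidth.
SLICES = {
    "urllc": {"max_latency_ms": 1.0, "min_bandwidth_mbps": 50.0},
    "embb": {"max_latency_ms": 20.0, "min_bandwidth_mbps": 1000.0},
    "mmtc": {"max_latency_ms": 100.0, "min_bandwidth_mbps": 1.0},
}

def select_slice(req_latency_ms, req_bandwidth_mbps):
    """Return the slice with the loosest latency guarantee that still
    meets both requirements, conserving premium network resources."""
    candidates = [
        (spec["max_latency_ms"], name)
        for name, spec in SLICES.items()
        if spec["max_latency_ms"] <= req_latency_ms
        and spec["min_bandwidth_mbps"] >= req_bandwidth_mbps
    ]
    return max(candidates)[1] if candidates else None
```

Under this policy, a latency-critical HITI surveillance alert would map to the URLLC slice, while undemanding bulk sensor telemetry would fall back to mMTC.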
As shown in Fig. 9, 6G will use intelligence to manage applications across all Open Systems Interconnection (OSI) layers, from the application layer down to the physical layer [51]. Intelligent computing offloading and caching can be
managed by application-layer AI algorithms. Intelligent traffic prediction can be done by transport-layer AI algorithms, traffic clustering by network-layer AI algorithms, and intelligent channel allocation by datalink-layer AI algorithms. OSI layers can also be fused to offer hybrid services. For example, the physical and datalink layers can be jointly optimized by AI algorithms to offer adaptive configuration; the fusion of the network and datalink layers can offer radio resource scheduling; the network and transport layers can offer intelligent network traffic control; and the remaining upper layers offer data rate control. 6G will also use AI algorithms at different tiers, e.g., the edge, fog, and cloud computing layers. At the edge computing layer, where sensing takes place, intelligent control functions and access strategies are managed by AI algorithms. In the fog computing layer, intelligent resource management, slice orchestration, and routing are managed by AI algorithms. Finally, in the cloud computing layer, i.e., the service space, computation and storage resource allocation are performed by AI algorithms.
Figure 9: 6G intelligent network configuration based on HITI performance and needs, for mobility-, low-latency-, and high-bandwidth-requiring HITI surveillance applications
Figure 10: 6G intelligent network will allow real-time and live AI-based HITI surveillance applications
deep learning models will ensemble a set of edge- and cloud-capable models. This will allow targeting any type of GPU capability within the complete end-to-end smart city application scenario. Fig. 13 shows the different open-source frameworks that we have researched, tested, and finally considered in this research.
Figure 12: Security and privacy models that have been studied, implemented, and tested
Figure 13: Families of edge learning models that have been studied, implemented, and tested
3.1 Implementation
As a proof of concept, we have implemented two types of services in which human-AI teaming is realized, namely intelligent smart city surveillance and handling emergency services. For pre-training, two classification learning algorithms were used: Logistic Regression (LR) and Support Vector Machine (SVM) classification models for selective feature extraction and recognition. To improve the classification accuracy, the number of iterations was increased, and the model parameters were continuously observed and adjusted for best performance. Input images can thus be accurately classified into a specific class based on the features extracted and recognized.
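The iterate-and-adjust pre-training loop described above can be sketched with a minimal NumPy logistic-regression trainer; the toy features, learning rate, and iteration count below are illustrative assumptions, not the actual experimental configuration.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, iters=500):
    """Plain gradient-descent logistic regression. Increasing `iters`
    mirrors the strategy of adding iterations until accuracy settles."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted class-1 probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log-loss w.r.t. weights
        b -= lr * np.mean(p - y)                # gradient w.r.t. the bias
    return w, b

# Toy "extracted features": class 1 whenever feature 0 exceeds feature 1.
X = np.array([[2.0, 0.0], [1.5, 0.5], [0.0, 2.0], [0.5, 1.5]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = train_logistic(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
```

The learned weights separate the two classes on this toy data; in the actual system the inputs would be the selectively extracted image features.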
The deep learning model training step is performed by running the training data iteratively through the Convolutional Neural Network (CNN) model while updating the parameters between the neural network layers for higher detection accuracy. These parameters represent the weights assigned to each neuron in each hidden layer; in the input layer, neurons represent image features. Hence, adjusting the parameters or weights is reflected in how accurately the model can recognize an image feature. What is extracted from the training images at this stage are internal neural network rules, or intricate patterns, that enable the algorithm to learn the analysis process of those images and to detect them independently in the future. This deep learning model configuration proved to improve the classification performance compared to traditional feature selection methods.
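The feature-extraction behaviour of a convolutional layer described above can be sketched in a few lines of NumPy. The toy image and hand-set edge kernel below stand in for the learned weights; in training, these kernel values are exactly the parameters being updated.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    take a weighted sum at each position (the core CNN-layer operation)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 image: dark on the left half, bright on the right half.
image = np.zeros((5, 5))
image[:, 2:] = 1.0
# A hand-set vertical-edge detector; a trained CNN learns such weights itself.
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0], [-1.0, 1.0]])
feature_map = conv2d(image, edge_kernel)
```

The feature map responds strongly only where pixel intensity changes, which is precisely the kind of internal pattern the training loop tunes the weights to capture.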
Figs. 15 and 16 show surveillance of different suspicious objects, such as drones, ships, and mines, and show how these objects of interest are separated with different labels, such as harmless (i.e., drones) or regular (i.e., birds) objects.
Figs. 17 and 18 show a scenario in which public places (e.g., train stations, airports, mosques) can be monitored for luggage that is not accompanied by any human.
Fig. 19 shows deep learning-based surveillance on satellite imagery.
Figs. 20–22 show live surveillance of a construction site and airspace for possible violations.
Figs. 23 and 24 show live surveillance capability of the city’s public spaces.
Figs. 25–28 show our deep learning model running on drone images and video feed to detect
objects of interest.
Figs. 29 and 30 show deep learning-based surveillance on satellite imagery to detect airplanes,
bridges, and other objects of interest.
Finally, Fig. 31 shows the proposed live video analytics on social distancing in Mecca Grand
Mosque.
implemented and tested. For example, Fig. 32 shows a crowd-sensing scenario in which a citizen application we developed is used to capture an emergency car accident and share the captured image with the emergency 911 department. The evidence shared with the 911 department for further action is also included.
Fig. 33 shows video analytics describing the automated emergency environment with the 911
department.
Fig. 34 shows a detailed report generated by our developed deep learning algorithms to support the description of emergency events by the 911 department.
For example, two experiments were done to compare the performance of the Convolutional Fuzzy Neural Network (CFNN) with a basic CNN model. Both models have the same CNN architecture, except that the CFNN has additional Fuzzy Neural Network (FNN) layers. Experiment 1 was done with a small dataset containing 1500 smart city images: 1080 images were used for training, 120 for validation, and 300 for testing, as shown in Table 2.
In this case, the CFNN performs better, with a test accuracy of 95.67% and a processing speed of 56.21 Frames Per Second (FPS), whereas the CNN has a test accuracy of 94.67% and a processing speed of 54.43 FPS. The CFNN also has higher recall, precision, and F1 scores than the CNN on this small dataset, as shown in Table 3. The experiment trials are compared with each other in further detail in Fig. 40, and a detailed report is provided in Figs. 41 and 42.
Table 3: Performance comparison between CFNN and CNN with small dataset

                 Train            Validation        Test
                 CFNN     CNN     CFNN     CNN      CFNN     CNN
Recall           1.0      1.0     0.9655   0.9655   0.9371   0.9182
Specificity      1.0      1.0     0.9677   0.9677   0.9787   0.9787
Accuracy         1.0      1.0     0.9667   0.9667   0.9567   0.9467
Precision        1.0      1.0     0.9655   0.9655   0.9803   0.9799
F1 score         1.0      1.0     0.9655   0.9655   0.9582   0.9481
In Experiment 2, the dataset had a total of 4000 samples. In this experiment, with the enlarged dataset, the CNN slightly outperformed the CFNN during testing in terms of specificity, accuracy, precision, F1 score, and processing speed in FPS. The data distribution and performance measures for this experiment are shown in Figs. 41 and 42. The CNN achieved a testing accuracy of 94.75% and a processing speed of 55.25 FPS, whereas the CFNN had a close testing accuracy of 94.5% and a processing speed of 55.21 FPS. So, in the case of a small dataset, the CFNN might perform better than the CNN, while if there is enough data available for training, the CNN might be sufficient. According to Table 4, comparisons between the CFNN and the CNN have shown that the CFNN can classify images faster than the CNN in terms of computation time when dealing with smaller datasets (<3000 images); for bigger dataset sizes, the CNN performs faster classification.
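The evaluation measures used throughout these comparisons all derive from the confusion matrix. The sketch below computes them from illustrative counts; the counts are made up for the example, not taken from our experiments.

```python
def classification_metrics(tp, fp, fn, tn):
    """Confusion-matrix-based measures reported in this paper."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}

# Illustrative counts only: 90 true positives, 5 false positives,
# 10 false negatives, 95 true negatives.
m = classification_metrics(tp=90, fp=5, fn=10, tn=95)
```

Reporting precision, recall, and specificity alongside accuracy matters here because surveillance classes are often imbalanced, so accuracy alone can mask poor detection of the rare class.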
preservation and anonymity add more value to the smart city surveillance and emergency handling
scenarios.
Figure 41: Detailed report (for the small dataset with 1500 samples)
Figs. 45 and 46 demonstrate the training performance for drone-based deep learning model development using different techniques. Fig. 45 compares the true positive and false positive rates, and Fig. 46 presents different capturing techniques. Finally, it is important to note that no matter how sophisticated a deep learning model is, there are certain limitations it has yet to overcome. One such limitation is the mandatory availability of enough training data. Some emergency phenomena (earthquakes, hurricanes, etc.) or behaviors don't happen enough
times to collect sufficient data for training, validation, and testing to be able to detect their future
occurrences. In addition, some complex programs, when used to train deep learning algorithmic
models, create a highly resource-consuming model with a huge size. On the other hand, some complex
programs are simply unlearnable due to extreme complexity. Hence, despite the promising results
of this research work, its scalability is subject to the complexity of the problem at hand, and the
availability of sufficient training data.
Table 4: Computation time comparison between CFNN and CNN on various sample sizes

Sample size (# of images)    CFNN computation time (s)    CNN computation time (s)
500                          0.02                         0.04
1000                         0.05                         0.07
2000                         0.12                         0.13
3000                         0.18                         0.18
4000                         0.25                         0.23
5000                         0.31                         0.28
6000                         0.36                         0.34
Mean                         0.12925187                   0.14630137
Standard deviation           0.11962527                   0.10161954
Figure 43: Service access delay posed by the introduction of Blockchain-based provenance
Figure 44: Performance of training for edge learning applications through federated learning
Figure 45: Training performance for drone-based deep learning model development
Figure 46: Training performance for deep learning model development; (a) satellite-based, (b) human-based, (c) other objects of interest shown in Figs. 14 to 30, and (d) emergency events shown in Figs. 32 to 38
Acknowledgement: Special thanks to Dr. Ahmed Elhayek for his input in regard to the AI aspect of
this work.
Funding Statement: The authors received no specific funding for this study.
Author Contributions: Mohammed Abdur Rahman established the research idea and its execution. Ftoon H. Kedwan and Mohammed Abdur Rahman shared the writing responsibility equally. All authors reviewed the results and approved the final version of the manuscript.
Availability of Data and Materials: The data and materials used were collected from personal experiments and hence will not be supplied for public access.
Ethics Approval: The accomplished work does not involve any humans, animals, or private and
personal data. Therefore, no ethical approvals were needed.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the
present study.
References
[1] D. Vrontis, M. Christofi, V. Pereira, S. Tarba, A. Makrides and E. Trichina, “Artificial intelligence, robotics,
advanced technologies and human resource management: A systematic review,” Int. J. Hum. Resour.
Manage., vol. 33, no. 6, pp. 1237–1266, May 2022. doi: 10.1080/09585192.2020.1871398.
[2] W. Jiang et al., “Terahertz communications and sensing for 6G and beyond: A comprehensive review,”
IEEE Commun. Surv. Tutorials, Apr. 2024. doi: 10.1109/COMST.2024.3385908.
[3] C. Li, L. Wang, J. Li, and Y. Fei, “CIM: CP-ABE-based identity management framework for
collaborative edge storage,” Peer Peer Netw. Appl., vol. 17, no. 2, pp. 639–655, Mar. 2024. doi:
10.1007/s12083-023-01606-6.
[4] R. Gnanaselvam and M. S. Vasanthi, “Dynamic spectrum access-based augmenting coverage in nar-
row band Internet of Things,” Int. J. Commun. Syst., vol. 37, no. 1, Jan. 2024, Art. no. e5629. doi:
10.1002/dac.5629.
[5] P. Segeč, M. Moravčik, J. Uratmová, J. Papán, and O. Yeremenko, “SD-WAN-architecture, functions and
benefits,” presented at the 18th Int. Conf. Emerg. eLearn. Technol. Appl. (ICETA), Košice, Slovenia, IEEE,
Nov. 2020, pp. 593–599.
[6] F. H. Kedwan and C. Sharma, “Twitter texts’ quality classification using data mining and neural networks,”
Int. J. Comput. Appl., vol. 178, no. 32, pp. 19–27, Jul. 2019.
[7] S. H. Shah and I. Yaqoob, “A survey: Internet of Things (IoT) technologies, applications and challenges,”
IEEE Smart Energy Grid Eng. (SEGE), vol. 17, pp. 381–385, Aug. 2016. doi: 10.1109/SEGE.2016.7589556.
[8] D. C. Nguyen et al., “6G Internet of Things: A comprehensive survey,” IEEE Internet Things J., vol. 9, no.
1, pp. 359–383, Jan. 2022. doi: 10.1109/JIOT.2021.3103320.
[9] S. Dong, P. Wang, and K. Abbas, “A survey on deep learning and its applications,” Comput. Sci. Rev., vol.
40, May 2021, Art. no. 100379. doi: 10.1016/j.cosrev.2021.100379.
[10] C. D’Andrea et al., “6G wireless technologies,” The Road towards 6G: Opportunities, Challenges, and
Applications: A Comprehensive View of the Enabling Technologies, Cham: Springer Nat. Switzerland, vol.
51, no. 114, pp. 1–222, 2024.
[11] R. Chataut, M. Nankya, and R. Akl, “6G networks and the AI revolution—exploring technologies, appli-
cations, and emerging challenges,” Sensors, vol. 24, no. 6, Mar. 2024, Art. no. 1888. doi: 10.3390/s24061888.
[12] A. Samad et al., “6G white paper on machine learning in wireless communication networks,” Apr. 2020,
arXiv:2004.13875.
[13] K. B. Letaief, W. Chen, Y. Shi, J. Zhang, and Y. J. A. Zhang, “The roadmap to 6G: AI empowered wireless
networks,” IEEE Commun. Mag., vol. 57, no. 8, pp. 84–90, Aug. 2019. doi: 10.1109/MCOM.2019.1900271.
[14] K. David and H. Berndt, “6G vision and requirements: Is there any need for beyond 5G?,” IEEE Veh.
Technol. Mag., vol. 13, no. 3, pp. 72–80, Sep. 2018. doi: 10.1109/MVT.2018.2848498.
[15] E. C. Strinati et al., “6G: The next frontier: From holographic messaging to artificial intelligence using
subterahertz and visible light communication,” IEEE Veh. Technol. Mag., vol. 14, no. 3, pp. 42–50, Aug.
2019. doi: 10.1109/MVT.2019.2921162.
[16] F. Tariq, M. R. Khandaker, K. K. Wong, M. A. Imran, M. Bennis and M. Debbah, “A speculative study
on 6G,” IEEE Wirel. Commun., vol. 27, no. 4, pp. 118–125, Aug. 2020. doi: 10.1109/MWC.001.1900488.
[17] Z. Zhang et al., “6G wireless networks: Vision, requirements, architecture, and key technologies,” IEEE
Veh. Technol. Mag., vol. 14, no. 3, pp. 28–41, Sep. 2019. doi: 10.1109/MVT.2019.2921208.
[18] M. H. Alsharif, A. Jahid, R. Kannadasan, and M. K. Kim, “Unleashing the potential of sixth generation
(6G) wireless networks in smart energy grid management: A comprehensive review,” Energy Rep., vol. 11,
pp. 1376–1398, Jun. 2024. doi: 10.1016/j.egyr.2024.01.011.
[19] J. Bae, W. Khalid, A. Lee, H. Lee, S. Noh and H. Yu, “Overview of RIS-enabled secure transmission in 6G
wireless networks,” Digit. Commun. Netw., Mar. 2024. doi: 10.1016/j.dcan.2024.02.005.
[20] W. Abdallah, “A physical layer security scheme for 6G wireless networks using post-quantum cryptogra-
phy,” Comput. Commun., vol. 218, no. 5, pp. 176–187, Mar. 2024. doi: 10.1016/j.comcom.2024.02.019.
[21] A. Alhammadi et al., “Artificial intelligence in 6G wireless networks: Opportunities, applications, and
challenges,” Int. J. Intell. Syst., vol. 2024, no. 1, pp. 1–27, 2024. doi: 10.1155/2024/8845070.
[22] R. Sun, N. Cheng, C. Li, F. Chen, and W. Chen, “Knowledge-driven deep learning paradigms
for wireless network optimization in 6G,” IEEE Netw., vol. 38, no. 2, pp. 70–78, Mar. 2024. doi:
10.1109/MNET.2024.3352257.
[23] D. Verbruggen, H. Salluoha, and S. Pollin, “Distributed deep learning for modulation classification in 6G
Cell-free wireless networks,” Mar. 2024, arXiv:2403.08563.
[24] P. Yang, Y. Xiao, M. Xiao, and S. Li, “6G wireless communications: Vision and potential techniques,”
IEEE Netw., vol. 33, no. 4, pp. 70–75, Jul. 2019. doi: 10.1109/MNET.2019.1800418.
[25] T. S. Rappaport et al., “Wireless communications and applications above 100 GHz: Opportunities and
challenges for 6G and beyond,” IEEE Access, vol. 7, pp. 78729–78757, Jun. 2019. doi: 10.1109/AC-
CESS.2019.2921522.
[26] K. Zhao, Y. Chen, and M. Zhao, “Enabling deep learning on edge devices through filter pruning and
knowledge transfer,” Jan. 2022, arXiv:2201.10947.
[27] A. Raha, D. A. Mathaikutty, S. K. Ghosh, and S. Kundu, “FlexNN: A dataflow-aware flexible deep
learning accelerator for energy-efficient edge devices,” Mar. 2024, arXiv:2403.09026.
[28] M. Zawish, S. Davy, and L. Abraham, “Complexity-driven model compression for resource-constrained
deep learning on edge,” IEEE Trans. Artif. Intell., vol. 5, no. 8, pp. 1–15, Jan. 2024. doi:
10.1109/TAI.2024.3353157.
[29] J. DeGe and S. Sang, “Optimization of news dissemination push mode by intelligent edge com-
puting technology for deep learning,” Sci. Rep., vol. 14, no. 1, Mar. 2024, Art. no. 6671. doi:
10.1038/s41598-024-53859-7.
[30] W. Chen, J. Wilson, S. Tyree, K. Weinberger, and Y. Chen, “Compressing neural networks with the hashing
trick,” presented at the Int. Conf. Mach. Learn., Lille, France, Jul. 2015, vol. 37, pp. 2285–2294.
[31] D. Kadetotad, S. Arunachalam, C. Chakrabarti, and J. -S. Seo, “Efficient memory compression in deep
neural networks using coarse-grain sparsification for speech applications,” presented at the 35th Int. Conf.
Comput.-Aided Des., ACM, Austin, TX, USA, Nov. 2016, pp. 1–8.
[32] Y. LeCun, J. S. Denker, and S. A. Solla, “Optimal brain damage,” Adv. Neural Inf. Process. Syst., vol. 2,
pp. 598–605, 1989.
[33] S. Srinivas and R. V. Babu, “Data-free parameter pruning for deep neural networks,” Jul. 2015,
arXiv:1507.06149.
[34] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning,
trained quantization and huffman coding,” Oct. 2015, arXiv:1510.00149.
[35] J. Ba and R. Caruana, “Do deep nets really need to be deep?,” Adv. Neural Inf. Process. Syst., vol. 27, pp.
2654–2662, 2014.
[36] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” Mar. 2015,
arXiv:1503.02531.
[37] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta and Y. Bengio, "FitNets: Hints for thin deep nets," Dec. 2014, arXiv:1412.6550.
[38] R. Venkatesan and B. Li, “Diving deeper into mentee networks,” Apr. 2016, arXiv:1604.08220.
[39] J. Yim, D. Joo, J. Bae, and J. Kim, “A gift from knowledge distillation: Fast optimization network
minimization and transfer learning,” in 2017 IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR),
Honolulu, HI, USA, 2017.
[40] A. Benito-Santos and R. T. Sanchez, “A data-driven introduction to authors, readings, and techniques in
visualization for the digital humanities,” IEEE Comput. Graph. App., vol. 40, no. 3, pp. 45–57, Feb. 2020.
[41] D. Berardini, L. Migliorelli, A. Galdelli, E. Frontoni, A. Mancini and S. Moccia, “A deep-learning frame-
work running on edge devices for handgun and knife detection from indoor video-surveillance cameras,”
Multimed. Tools Appl., vol. 83, no. 7, pp. 19109–19127, Feb. 2024. doi: 10.1007/s11042-023-16231-x.
[42] N. Rai, Y. Zhang, M. Villamil, K. Howatt, M. Ostlie and X. Sun, “Agricultural weed identification in
images and videos by integrating optimized deep learning architecture on an edge computing technology,”
Comput. Electron. Agric., vol. 216, Jan. 2024, Art. no. 108442. doi: 10.1016/j.compag.2023.108442.
[43] C. Li, W. Guo, S. C. Sun, S. Al-Rubaye, and A. Tsourdos, “Trustworthy deep learning in 6G-enabled mass
autonomy: From concept to quality-of-trust KPIs,” IEEE Veh. Technol. Mag., vol. 15, no. 4, pp. 112–121,
Sep. 2020. doi: 10.1109/MVT.2020.3017181.
[44] X. Wang, Y. Han, V. C. M. Leung, D. Niyato, X. Yan and X. Chen, “Convergence of edge computing and
deep learning: A comprehensive survey,” IEEE Commun. Surv. Tutorials, vol. 22, no. 2, pp. 869–904, Jan.
2020. doi: 10.1109/COMST.2020.2970550.
[45] M. A. Rahman, M. S. Hossain, N. Alrajeh, and F. Alsolami, “Adversarial examples–security threats to
COVID-19 deep learning systems in medical IoT devices,” IEEE Internet Things J., vol. 8, no. 12, pp. 9603–
9610, Aug. 2020. doi: 10.1109/JIOT.2020.3013710.
[46] M. A. Rahman, M. S. Hossain, N. Alrajeh, and N. Guizani, “B5G and explainable deep learning assisted
healthcare vertical at the edge COVID 19 perspective,” IEEE Netw., vol. 34, no. 4, pp. 98–105, Jul. 2020.
doi: 10.1109/MNET.011.2000353.
[47] A. Rahman, M. S. Hossain, M. M. Rashid, S. Barnes, and E. Hassanain, “IoEV-Chain: A 5G-based secure
inter-connected mobility framework for the internet of electric vehicles,” IEEE Netw., vol. 34, no. 5, pp.
190–197, Aug. 2020. doi: 10.1109/MNET.001.1900597.
[48] H. Viswanathan and P. E. Mogensen, “Communications in the 6G Era,” IEEE Access, vol. 8, pp. 57063–
57074, Mar. 2020. doi: 10.1109/ACCESS.2020.2981745.
[49] C. She et al., “A tutorial on ultrareliable and low-latency communications in 6G: Integrating
domain knowledge into deep learning,” Proc. IEEE, vol. 109, no. 3, pp. 204–246, Mar. 2021. doi:
10.1109/JPROC.2021.3053601.
[50] N. Kato, B. Mao, F. Tang, Y. Kawamoto, and J. Liu, “Ten challenges in advancing machine learn-
ing technologies toward 6G,” IEEE Wirel. Commun., vol. 27, no. 3, pp. 96–103, Apr. 2020. doi:
10.1109/MWC.001.1900476.
[51] A. Khan, L. Serafini, L. Bozzato, and B. Lazzerini, “Event detection from video using answer set
programming,” in CEUR Workshop Proc., 2019, vol. 2396, pp. 48–58.
[52] M. Emmi et al., “RAPID: Checking API usage for the cloud in the cloud,” presented at the 29th ACM Joint
Meet. Eur. Soft. Eng. Conf. Symp. Found. Soft. Eng., New York, NY, USA, Aug. 2021, pp. 1416–1426.
[53] A. N. Aprianto, A. S. Girsang, Y. Nugroho, and W. K. Putra, “Performance analysis of RabbitMQ and
Nats streaming for communication in microservice,” Teknologi: Jurnal Ilmiah Sistem Informasi, vol. 14,
no. 1, pp. 37–47, Mar. 2024.
[54] R. Sileika and S. Rytis, “Distributed message processing system,” in Pro Python Syst. Adm., Nov. 2014, pp.
331–347.
[55] S. M. Levin, “Unleashing real-time analytics: A comparative study of in-memory computing vs. traditional
disk-based systems,” Braz. J. Sci., vol. 3, no. 5, pp. 30–39, Apr. 2024. doi: 10.14295/bjs.v3i5.553.
[56] S. Lin, C. Li, and K. Niu, “End-to-end encrypted message distribution system for the Internet of
Things based on conditional proxy re-encryption,” Sensors, vol. 24, no. 2, Jan. 2024, Art. no. 438. doi:
10.3390/s24020438.