0% found this document useful (0 votes)
21 views21 pages

Content Storage Management and Precaching Scheme in Content-Centric Networks-Based Internet of Vehicle

Uploaded by

salman rashid
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
21 views21 pages

Content Storage Management and Precaching Scheme in Content-Centric Networks-Based Internet of Vehicle

Uploaded by

salman rashid
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 21

This article has been accepted for publication in IEEE Internet of Things Journal.

This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 1

Content Storage Management and Precaching


Scheme in Content-Centric Networks-based Internet
of Vehicle
Youngju Nam, Hyeonseok Choi, and Euisin Lee

Abstract—The emergence of smart cars has led to a significant spend more content during their driving journey [1], [2]. This
increase in mobile data traffic on the backhaul links as a increased demand for content, coupled with the increased size
vast amount of contents is generated and consumed in the of content due to higher quality displays and improved content
Internet of Vehicles (IoVs). To cope with this situation, the
precaching research of content-centric networks (CCN) has been quality, results in significant data traffic consumption [3]–[6].
applied as a promising solution for reducing traffic consumption. Ericsson’s Mobility Report predicts that mobile data traffic
However, most precaching schemes in CCN-IoVs only focus on will triple from 160 EB per month in 2023 to 563 EB per
delay-sensitive content and don’t have proper content storage month in 2029, with video traffic constituting a significant
management, limiting their ability to minimize delays. To address portion [7]. In addition, due to the mobility of vehicles as
these issues, we thus propose a novel content storage management
and precaching (CSMP) scheme consisting of three methods. To users in IoV, the increased size of content also means that a
prevent the erases of precached or popular content, we have single base station (BS) or roadside unit (RSU) cannot always
designed a content storage management method based on our provide all pieces of the requested content to the requester
caching priority algorithm. Additionally, we have designed a vehicle within its communication coverage, resulting in access
delay-sensitive content precaching method to improve mobility delays during frequent handovers. Moreover, when multiple
prediction and reduce delays using the recalculation time based
on Gaussian distribution with skewness. Finally, we have designed vehicles request the same content within the coverage area of
a delay-tolerant content precaching method to minimize backhaul a BS or RSU, it creates additional data traffic and delays for
link traffic consumption by using an integer linear programming re-accessing the content server, leading to congestion, similar
approach. The proposed CSMP scheme improved the evaluation to the challenges seen at crowded events, due to the inherent
value considering the success ratio and the backhaul link traffic limitations of communication resources in both the BS and
by 23.21% compared with the existing schemes in our NS3-based
simulations. RSU. These problems reduce the quality of service (QoS) for
vehicle users who spend their time enjoying various content.
Index Terms—Content-Centric Network, Internet of Vehi-
To improve QoS for vehicle users, Content-Centric Net-
cle, Precaching, Caching storage management, Delay toler-
ant/sensitive content, Priority algorithm. works (CCNs [8], [9]) have emerged as a promising solution
in IoV, called CCN-IoV [10], [11]. Different from host-centric
networks, CCNs utilize content names instead of IP addresses
I. I NTRODUCTION
to identify and cache content in network nodes. This approach

T HE Internet of Things (IoT) encompasses many intercon-


nected physical and virtual objects, such as smart devices,
that collaborate to collect, refine, process, and exchange mean-
significantly reduces the need to access content servers by
downloading content using the name of the content that can
be cached in the caching storage of BSs or RSUs during
ingful information via the public Internet. These smart devices, provisioning or forwarding, resulting in reduced access delays
which can be assigned unique identities or IP addresses, have and backhaul link traffic. However, the inherent limitation in
the ability to autonomously send and receive data across net- caching storage capacity within BSs and RSUs prevents the
works. When we focus on vehicles within this IoT ecosystem, caching of all content. Thus, when a vehicle user requests
it evolves into the Internet of Vehicles (IoV). Smart vehicles content that isn’t cached at its current BS or RSU, they must
equipped with sensors, GPS, and onboard units (OBUs) enable access the content server or other RSUs that have cached it,
automated driving, providing users with more leisure time to leading to additional accessing delays. To handle these ac-
Manuscript received April 00, 2024; revised August 00, 2024. This work cessing delays, precaching schemes have been introduced and
was supported by a funding for the academic research program of Chungbuk researched in CCN-IoV [12]–[14]. This scheme proactively
National University in 2023. This research was supported by Basic Science caches requested content at the next predicted RSU along
Research Program through the National Research Foundation of Korea(NRF)
funded by the Ministry of Education(RS-2023-00272850)(Corresponding au- the user vehicle’s route. As a result, the accuracy of this
thor: Euisin Lee.) prediction is paramount to the performance of precaching. In
Euisin Lee is with the School of Information and Communication Engi- numerous studies, the frequent recalculations of predictions
neering, Chungbuk National University, Cheongju, Republic of Korea (e-mail:
[email protected]). have presented a significant computational burden [15], [16].
Youngju Nam and Hyeonseok Choi are with Research Institute for Fortunately, predicting the mobility of a vehicle on roads
Computer and Information Communication, Chungbuk National Univer- where speed limits are enforced is much easier than predicting
sity, Cheongju 28644, Republic of Korea (e-mail: [email protected];
[email protected]). the movement of living entities, such as humans, animals,
Digital Object Identifier 10.0000/JIOT.2024.0000000 etc. Nevertheless, if cached or precached content surpasses

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 2

the storage capacity of an RSU, older content is deleted, requester vehicle within the tolerable delay. Simultaneously,
resulting in additional traffic and access delays when requested to reflect vehicles’ requests for various content, we make a
by other users. In the worst case, this can lead to the wastage content request model based on Poisson distribution, Zipf’s
of over double the required traffic. Additionally, these issues law, and Gaussian distribution. To evaluate the efficiency of
can introduce prolonged delays and buffering for vehicle users our scheme comprehensively, we design a Manhattan-based
while they enjoy their content. For delay-sensitive applications mobility model incorporating speed with Gaussian distribution
such as driver assistance systems, these challenges are directly and skewness. Through simulation results conducted in various
related to user safety, especially at roads of high speeds. network environments, we validate our scheme’s performance
For precaching in CCN-IoV, the design of effective caching enhancement in terms of minimizing traffic consumption on
storage management is one of the crucial and challenging backhaul links, reducing storage costs, and mitigating repeated
issues due to the constrained cache size of RSUs and BSs requests and computation overhead in prediction processes.
[17]–[19]. The main factors for this design involve prior- The remainder of this paper is organized as follows. Related
itizing content, preventing inadvertent content erasure, and work is reviewed in Section II. In Section III, we describe
precaching requested content. Then, efficient methods to solve the network model of the proposed CSMP scheme. Then, we
this issue can improve the hit ratio for content requests of introduce the CSMP scheme and propose our three methods:
vehicle users, resulting in enhanced network performances, the CSM method, the DSCP method, and the DTCP method in
including reduced delay and traffic consumption. Despite its Section IV. We present the performance evaluation and results
importance, existing precaching schemes (e.g. [19], [20]) often analysis in Section V. Finally, Section VI concludes this paper.
focus on content popularity and overlook a critical factor,
particularly the neglect of delay-tolerant content (DTC) in II. R ELATED W ORK
CCN-IoV. This neglect can lead to unnecessary backhaul link In the current IoV paradigm, content download faces sig-
traffic, increased computing resource consumption, and higher nificant challenges as routing is traditionally based on IP
storage costs. Some studies propose schemes that intelligently addresses, and vehicles often operate at high speeds. When
manage node storage by predicting content requests based multiple vehicles within the coverage area of a Roadside
on vehicle mobility and content popularity [1], [21]. These Unit (RSU) request the same content, the conventional IP
schemes divide precaching into proactive caching of popular address-based routing results in each request being individu-
content and proactive caching of requested content based on ally resolved through connections between the content server
the prediction of the next RSU encounter. However, existing and the RSU where the vehicles are located. This method
schemes may lack prioritization, leading to additional traffic incurs more than double the necessary traffic on backhaul
and decreased efficiency. Furthermore, without considering links. Moreover, the frequent handovers driven by the high
delay-tolerant content, RSUs result in wasteful traffic and speeds of vehicles contribute to access delays, as each new
storage costs for reckless precaching. Thus, a novel caching RSU’s coverage area triggers a request to the content server.
storage management approach, considering DTC, is imperative Furthermore, the download of content from the server over
to optimize the network performance of precaching in CCN- long-distance connections not only leads to access delays
IoV. for vehicle users but also contributes to increased traffic
Therefore, to enhance the efficiency of precaching in CCN- consumption on backhaul links.
IoV, we propose a Content Storage Management and Pre-
caching (CSMP) scheme consisting of three methods: the
Content Storage Management (CSM) method to set the priority A. CCN-IoV
of the precached or cached content; the Delay Sensitive Con- In response to challenges within vehicular networks, re-
tent Precaching (DSCP) method that conducts recalculation searchers have delved into the realm of Content-Centric Net-
to improve the prediction accuracy by updating a vehicle’s works (CCNs) [22]–[24]. The integration of CCNs into the
mobility information; the Delay-Tolerant Content Precaching Internet of Vehicles (IoV) presents a paradigm shift, alleviating
(DTCP) method with considering tolerable delay of the DTC. delays and reducing traffic consumption by prioritizing content
First, we design the CSM method to establish cache modes based on its name rather than relying on IP addresses for
and to set caching priority values according to modes in location. Embodying the principles of CCNs, all nodes within
order to prevent the inadvertent removal of the content that is the IoV architecture are furnished with storage devices to
likely to be requested. Subsequently, to account for mobility cache the forwarded or provided content. Therefore, in an
changes in vehicles for prediction to precache delay-sensitive IoV framework integrated with CCNs, every RSU is equipped
content (DSC), we mathematically design a prediction model with a dedicated storage device, fostering an environment that
to calculate the recalculation time in our DSCP method. efficiently harnesses the advantages of CCNs. [22] postulated
Our prediction model for precaching DSC not only relies that adopting a CCN strategy could surpass the traditional
on Gaussian distribution but also incorporates skewness for TCP/IP protocol suite, adeptly managing the dynamic, brief,
enhanced prediction accuracy. Moreover, we design the DTCP and sporadic connections prevalent in vehicular environments.
method to utilize the cached RSUs by considering the tolerable A comprehensive simulation study was undertaken to evaluate
delay of the requested DTC in order to reduce backhaul the performance of the proposed content-centric vehicular
link traffic. This consideration of tolerable delay ensures networking architecture across diverse traffic loads, vehicle
the completion of providing delay-tolerant content to the densities, and content popularity scenarios, assessing both its

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 3

efficacy and efficiency. [23] introduced an innovative content- on a probabilistic data structure. The bloom filter model
centric vehicular network (CCVN) framework, unveiling an optimizes time complexity for content insertion, deletion, and
integrated algorithm for delivering content to vehicles through search operations, turning vehicles into caches and facilitating
content-centric units. These units facilitated content storage cooperative content distribution. Popularity-based Precaching
based on the priorities that are determined by content popular- involves RSUs precaching popular content due to its higher
ity and vehicle density. The incorporation of a content-centric likelihood of being requested. While this strategy showcases
unit in their CCVN enabled the management of exchanged promise, the fundamental challenge arises from the finite
content between vehicles based on naming information, with storage capacity of RSUs. The inherent limitation impedes the
pending interests regularly updated through analysis of trans- comprehensive precaching of all popular content, necessitating
mission ratios and network topology. [24] introduced an IP- careful management and prioritization.
based framework for vehicular content-centric networking, In the realm of Mobility-based Precaching, the proactive
emphasizing content acquisition based on specific positions. caching of content is driven by predictions regarding the
The framework allowed requesters to acquire content in an RSU a requester vehicle will encounter [1], [20], [21], [29]–
address-centric unicast manner, ensuring the return of content [34]. Leveraging trajectory and speed information, the RSU
to the requester without relying on reverse paths. Additionally, predicted to be encountered precaches the requested content in
the framework facilitated content retrieval from the clos- anticipation of the vehicle’s arrival. However, the efficiency of
est provider at a given position, effectively reducing costs this approach hinges on the accuracy of mobility predictions.
associated with content acquisition. However, despite these If inaccuracies occur, leading to unnecessary precaching, it
advancements, a formidable challenge persists in the finite results in wasted backhaul link traffic, diminishing the ef-
storage capacity of each node. This constraint impedes RSUs fectiveness of the strategy. [32] introduced the Left-Right-
from caching all content, leading to complications when a Front (LRF) cache strategy tailored for precaching in IoV.
vehicle requests content that is not cached in the RSU’s The LRF cache strategy proactively caches the requested data
storage. This challenge of limited storage capacity and the at upcoming nodes/RSUs for vehicles within a predefined
associated consequences for content availability and access Information-Centric Networking (ICN) architecture. Notably,
forms a critical issue within the domain of CCN-applied it addresses the challenges posed by the dynamic nature
IoV. It underscores the necessity for innovative solutions to of the network and mobility, characteristic of IoV nodes
optimize storage management, enhance content accessibility, in an ICN environment. This strategy significantly enhances
and elevate the overall efficiency of vehicular networks. cache utilization, hop ratios, and resolved interest ratios. [20]
aimed to design a novel edge-computing-enabled hierarchical
cooperative caching framework. This framework employed a
B. Precaching scheme in IoV predictive approach, utilizing historical vehicle trajectory data
To address the intricacies of CCNs in the context of the and user requests to anticipate vehicle trajectory and content
IoV, the development of the precaching scheme has emerged popularity. The resulting predictions enabled the framework
as a promising solution [25]–[31]. This scheme aims to to achieve an effective hit ratio and minimize average delay.
proactively cache content, considering two primary strategies: [33] proposed a proactive edge computing and caching scheme
popularity-based precaching and mobility-based precaching. to optimize task offloading. The proximal policy optimization
Popularity-based precaching is the precaching scheme that (PPO) was used to predict which RSU would compute and
RSUs precache popular content because it is more likely cache the requested content. This was based on several factors,
to be requested. [25] introduced the multi-metric content including the total time delay of the RSU, the total time delay
replacement policy (M2 CRP) tailored for content stores in of the vehicle, the expected content popularity, the content
Vehicular Ad-hoc Networks (VANETs) driven by named data caching status, and the traffic status of the RSU. The prediction
networking (NDN). For enhanced performance in VANET aimed to optimize both the hit ratio and the total delay, which
applications, M2 CRP incorporates three key metrics, which are in a trade-off relationship. [1] proposed a novel hierarchical
are the freshness of content, its popularity, and the distance proactive caching (i.e. precaching) scheme that factors in both
between content receipt and storage locations in content the anticipated demands of autonomous vehicle users and their
stores, relative to the caching node’s current location. [26] mobility. This innovative scheme employs the non-negative
proposed the Diversity-Improved Caching of Popular Transient matrix factorization (NMF) technique to predict user pref-
Content (DANTE) strategy. DANTE empowers vehicles to erences, subsequently forecasting users’ future demands by
autonomously decide locally cached content based on fac- considering the historical popularity of videos. The calculation
tors like content residual lifetime, popularity, and perceived of video chunk numbers for precaching incorporates the user’s
content availability in the vicinity. DANTE also introduces arrival and departure times at an edge node, contingent on
minor architectural modifications in NDN nodes and packet their current velocity vector. This scheme takes into account
fields to streamline its operations. Vehicles make caching both predicted ratings and the prior popularity of videos to
decisions independently, enabling the discovery of fresh and effectively anticipate users’ future demands. [21] presented a
popular distinct content without overwhelming the network novel video prefetch caching and replacement strategy that
with requests reaching the original source. To enhance content introduces a mobility-aware utility function. This function is
distribution efficiency in CCN-IoV, [27] presented a content based on the user’s moving probability and the popularity
caching scheme utilizing a bloom filter model which is based of video clips. By factoring in network and storage resource

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 4

TABLE I
T HE SUMMARY OF THE RELATED WORKS ON PRECACHING IN CCN-I OV

Prediction Caching
Index Purpose DTC Description
matrix priority
freshness, A scheme for proactively replacing content by presenting metrics in two ways,
Least Recently
[25] hit ratio popularity, X taking into account the freshness and popularity of the content and the distance
Used
distance from the content server.
A novel distributed caching strategy that distributively caches the content based
delay, popularity, popularity, on its popularity, residual lifetime, and the perceived availability of the same
[26] X
traffic availability lifetime content in the neighborhood before being requested, leading to reducing the
network traffic load and content retrieval time.
A Hierarchical Hybrid Content Delivery scheme using Bloom Filter (H2CDBF)
and a learning automata-based cache update policy based on content popularity
[27] traffic popularity popularity X
prediction to alleviate traffic congestion by addressing the challenges of efficient
content sharing and high mobility in vehicular networks.
A new proactive caching strategy for Vehicular Ad-hoc Networks (VANETs)
mobility to provide mobility support by precaching the requested content to the next
[32] mobility popularity X
support RSU, taking into account vehicle mobility in the left, right, and straight ahead
directions at an intersection.
A novel hierarchical proactive caching approach based on the non-negative
delay, popularity, request
[1] X matrix factorization technique, which considers both the user’s future demands
traffic mobility probability
and vehicle mobility to minimize delays and the network load.
A mobility-aware video precaching and replacement strategies that precache
popularity, request and replace chunks based on vehicle mobility and its popularity by formulating
[21] traffic X
mobility probability the optimization problem as an integer linear programming at every time slot
to alleviate the load of the core network.
An adaptive decentralized prefetching mechanism that replaces the outdated
mobility hops,
[35] freshness X content and precaches the popular and low-cost content to provide the mobility
support popularity
support service for ICNs in vehicular scenarios.
A novel message routing scheme by the timeliness-aware trajectory data mining
reliability,
[41] mobility X O algorithm based on the prediction of vehicles’ future RSU entrances in VANETs
traffic
that enable delay-tolerant networks.
A new delay-tolerant data transmission architecture that integrates cloud com-
[42] delay mobility reliability O puting, fog computing, software-defined network, and other technologies to
solve high transmission latency problems in IoV

costs at each time slice, a unified cache replacement strategy thereby improving content accessibility and increasing the
is constructed through prefetching and caching to establish efficiency of vehicular networks.
utility functions. This strategy adeptly quantifies the value of
caching and prefetching video properties, leading to improved
utility and cache hit rates for unit storage. The overall outcome C. Content Storage Management
ensures the stability of video services, particularly during In addressing the challenges associated with erasing con-
user cell handover events. The effectiveness of this strategy tent in the context of CCN-IoV, researchers have undertaken
was thoroughly evaluated in the taxi trajectory-based scenario. innovative content storage management strategies within pre-
Furthermore, [29]–[31] optimized the number of chunks that caching schemes. Commonly, priority assignment based on
should be precached based on requester vehicles’ mobility content popularity has been a prevalent approach, recognizing
while considering various constraints such as link capacity, that popular content is more likely to be requested. These exist-
storage, or buffer time. The challenges extend further to ing schemes demonstrate superiority over traditional methods
the management of precached and cached content. Finite like first-in-first-out (FIFO) or random approaches, notably in
storage capacity forces decisions on erasing content under terms of the hit ratio for vehicle users’ requests. [21], which
certain conditions. Efficiently managing the lifecycle of cached has a caching storage strategy among the existing precaching
content is critical, involving considerations of when to erase scheme, mentioned the trade-off relationship between caching
content to optimize space while ensuring that relevant and and precaching content. They precache and replace chunks
frequently requested content remains accessible. Moreover, the based on vehicle mobility and the requested content’s popu-
frequent recalculations necessary for prediction in Mobility- larity by formulating the optimization problem as an integer
based Precaching present a computational burden, impacting linear programming at every time slot. The replacement is
the system’s overall efficiency and responsiveness. Striking based on the balanced values between normalized caching cost
a balance between recalculation frequency and compute re- and normalized hit ratio using weight value. [35] proposed
source consumption becomes critical to maintaining optimal an adaptive decentralized prefetching mechanism for ICNs in
performance. In essence, the challenges within the precaching vehicular scenarios. When the caching storage is full, outdated
scheme in IoV underscore the need for innovative solutions to and unpopular content is replaced. [36] proposed the mobile
navigate the complexities of storage limitations, prediction ac- content caching/prefetching method in the context of the Mo-
curacy, and resource efficiency. Addressing these challenges is bilityFirst future Internet architecture that naturally facilitates
essential to realizing the full potential of precaching schemes, the network-level mobility prediction by logging the network

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 5

association records. They maintain a recent usage count for within the caching storage of RSUs. Thus, there is a risk
each chunk of content that has been requested through the that important content, although scheduled for delivery, may
AP within a time window. Therefore, when the caching be inadvertently deleted due to inadequate consideration of
storage is filled, they replace the unpopular and lower recent caching storage management.
usage count content. However, there remains a significant To address the above issues, the proposed scheme has the
gap in research regarding the treatment of precached content following contributions.
during storage replacement due to caching storage limitations. • We introduce an innovative content storage manage-
Many studies overlook this scenario, potentially leading to the ment and precaching (CSMP) scheme for the precaching
unintentional removal of precached content when faced with scheme in CCN-IoV, addressing challenges associated
storage limitations. In addition, since the number of contents with content caching. This scheme considers the prior-
is significantly large, the requested contents are usually not ity of precached or cached content, tolerable delay for
cached because an RSU cannot cache most of the contents. delay-tolerant content, and recalculation time for updating
Moreover, the existing schemes do not consider delay-tolerant vehicle mobility information.
content (DTC), resulting in wasted backhaul traffic for pre- • We design the CSM method to establish cache modes and
caching DTC even when it is not immediately needed. The assign caching priority values to enhance the hit ratio for
lack of optimization mechanisms for managing DTC within vehicle user requests and prevent the inadvertent loss of
existing precaching schemes contributes to suboptimal perfor- content likely to be requested.
mance. For these issues, the existing precaching scheme cannot • We design the DSCP method by mathematically formu-
achieve optimized performance. lating the recalculation time based on Gaussian distribu-
tion with skewness to enhance prediction accuracy.
• By effectively utilizing cached RSUs, we design the
D. Precaching for Delay Tolerant Content
DTCP method to minimize backhaul link traffic based
The study of Delay-Tolerant Content (DTC) in the context on the tolerable delay of requested content.
of CCN-IoV has emerged as a nascent field characterized by • We design a content request model based on Poisson
limited research efforts [37]–[42]. Researchers, aware of the distribution, Zipf’s law, and Gaussian distribution to con-
inherent characteristics of DTC that often allow it to tolerate duct vehicles’ requests for contents, contributing to more
significant download delays, have focused their efforts on accurate modeling and prediction of content demands.
improving reliability and minimizing traffic consumption in • We present a Manhattan-based mobility model that incor-
this specific category of content. In [40], a new transmission porates speed with Gaussian distribution and skewness,
approach based on proactive retransmission for bundle pro- providing a comprehensive evaluation framework for the
tocol is proposed for highly efficient data delivery in deep- proposed scheme’s performance.
space communications. The approach addresses challenges • Through graphical representations of results, we validate
such as high data loss rate, long signal propagation delay, and the proposed scheme’s effectiveness in minimizing traf-
highly asymmetric channel rate, even with ample transmission fic consumption on backhaul links, reducing delays for
bandwidth. In deep-space environments, which are extremely vehicle users, and mitigating repeated requests.
delay-tolerant, the focus is mainly on communication re-
liability. [41] proposed a novel timeliness-aware trajectory
data mining algorithm based on the prediction of vehicles’ III. N ETWORK M ODELS
future positions to achieve cost-efficient and reliable routing In this section, we describe the network model employed
in VANETs that enable delay-tolerant networks. In this paper, for our proposed scheme, as shown in Figure 1. The network
they don’t consider the tolerant delay of content because they encompasses the communication infrastructure involving con-
only deliver single messages. [42] proposed a new network tent servers such as YouTube and Netflix. The set of every
architecture that integrates cloud computing, fog computing, RSU is R = {R1 , · · · , RJ }, where J is the total number
the software-defined network, and other relevant technologies, of RSUs in the RSU set and Rj denotes the j-th RSU. As
taking into account the fog network equipment deployed shown in Figure 1, all intersections are equipped with RSUs
in IoV. They identify out the trade-off relationship between in traffic lights and each RSU has its caching priority table
delays and energy consumption and evaluate the reliability of (CPT) to manage its caching storage which is explained in
data provision based on the tolerable delay of single chunks. Section IV-A. The deployment and functionality of RSUs are
However, a critical limitation of existing research is the integral, involving considerations of caching storage capacity
assumption that DTC can tolerate infinite delays or consider and communication modalities, both wired and wireless. Each
only single chunks. In reality, there are tolerable delays that are RSU is interconnected and communicates with each other
determined by the unique characteristics of the content and the and with the backbone via backhaul fiber links that connect
driving time of the requester vehicles. This oversight hinders the RSUs to the content server. To ensure reliable and real-
the comprehensive understanding and optimization of DTC time communication, RSUs are strategically deployed at every
delivery within CCN-IoV. Furthermore, a notable absence in intersection. RSUs exploit wireless WAVE technology to dis-
the discourse relates to caching storage management. Focusing seminate content to vehicles efficiently [43]. Vehicles traverse
primarily on reliability and traffic consumption, these studies along roads guided by their own determined destinations,
do not address the critical aspect of content preservation leading them to encounter RSUs strategically positioned at

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 6

Fig. 1. Overview of the proposed scheme.

intersections. The set of every vehicle is V = {V1 , · · · , VI }, reducing backhaul link traffic. However, the dynamic nature of
where I is the total number of vehicles in the vehicle set vehicle mobility and RSU communication capabilities causes
and Vi denotes the i-th vehicle. The speed of each vehicle uncertainties in prediction accuracy. Consequently, if the next
is dynamically determined by the acceleration that is recal- RSU precaches more chunks than the vehicle can download
culated every 0.1 seconds. This recalculation is influenced by within its coverage, the excess chunks go unused, representing
a Gaussian distribution with skewness, reflecting the average wasted backhaul link traffic. Because of the limitation of
speed characteristics of the respective area where the vehicle is RSUs’ storage capacity, the unused chunks affect the caching
situated. Then, the content requested by a vehicle is classified of content for other requester vehicles. On the other hand,
into two different categories: DSC, which requires fast delivery precaching fewer chunks results in access delays for the
with minimal delay, and DTC, which is characterized by its vehicles to get rest chunks. That emphasizes the critical role
ability to withstand delays due to the vehicle’s determined of prediction accuracy in influencing RSU storage efficiency,
destination and the inherent characteristics of the content. minimizing wasted traffic, and preventing access delays for
The tolerable delay of the delay-tolerant content (such as requester vehicles.
backup services, software updates, 10 minutes for multimedia
file sharing, and 5 minutes for traffic information [44], [45])
A. Transmission Rate Model
depends on the intrinsic properties of the requested application.
This categorization of the DTC, which leverages its delay To reflect real-world scenarios regarding transmission rates,
tolerance, enables a strategic precaching scheme that reduces we adopt the Shannon–Hartley theorem for determining the
backhaul link traffic by making optimal use of cached RSUs. transmission rate. Once an RSU decides to precache requested
Both DSC and DTC are further subdivided into smaller units content from a requester vehicle, it computes the chunks for
called chunks, which are assumed to be uniform in size for precaching within the storage of the next RSU that will be
simplicity. encountered by the requester vehicle based on its trajectory
information. To do this, the RSU must forecast the dwell time,
Three models are designed to capture real-time environ- indicating the duration the requester vehicle will stay within
mental changes within network areas to reflect the practicality the coverage of the next RSU. Utilizing the predicted dwell
of the proposed scheme. When a vehicle wants to download time, the RSU calculates the chunks by taking into account
certain content, it sends a request for the content to an RSU the transmission rate of the next RSU. Then, the transmission
that is encountered by the vehicle. When the RSU cannot rate rj (t) of the RSU Rj at the time t is denoted as follows:
provision all chunks of the requested content to the requester !
vehicle within its communication coverage, it then utilizes the Ptr |h|2
vehicle’s mobility information to predict and precache the next rj (t) = Bj (t)log2 1 + , (1)
Bj (t)N0 dα i,j
requested content at the next RSU. The precaching scheme
diverges based on the content type: for DSC, the primary aim where, Bj (t) denotes the bandwidth at the time t, Ptr denotes
is to mitigate access delays, while for DTC, the focus is on the transmission power of transmitters. h denotes the channel

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 7

is [−5, 5]km/(h×s), which means that the maximum acceler-


ation amax is 5km/(h × s). Therefore, µ, which indicates the
middle value of the range, is 0. Then, δ is denoted as follows:
v
uπ |γ|2/3
u
|δ| = u . (5)
t2
2/3
4 − π 2/3
|γ| + ( )
2
The sign of δ is the same as the sign of the skewness γ that
Fig. 2. Gaussian normal distribution with skewness.
is denoted as follows:

Pn 1.5 
n 2
n X q=1 (a (q) (t))
γ=  , (6)
 
fading coefficient and N0 denotes the white Gaussian noise (n − 1)(n − 2) p=1

a(p) (t)3 n1.5
power. α denotes the path loss exponent and di,j denotes the
distance between Rj and Vi . where n is the number of samples. We denotes the standard
However, due to the temporal mismatch between the predic- deviation σ as follows:
tion time for precaching and the actual time when the requester
vehicle downloads the precached chunks, there might be a amax − µ
σ= , (7)
gap in the transmission rate. Therefore, when performing the k
prediction to precache the content, each RSU Rj uses its where k is 2.576 in 99% confidence interval. Then, to make
managed average transmission rate rj,av (t) by gathering its the acceleration follow the average speed within the area, we
transmission rates over the defined period to take into account set the Mode value M that is the shifted average value by
the vehicle density and communication capabilities as follows: skewness γ as follows:
Ptmax
q=0 rj (t − q) M =ξ + ωmo (α)
rj,av (t) = , (2) 
tmax
!
 v cur − vav
−amax vmax − vav if vcur > vav ,


where tmax denotes the defined period. For accurate predic-

(8)
tion, every RSU updates its average transmission rate every = !
 v cur − vav
time t based on tmax . amax vmin − vav if vcur ≤ vav ,


 
B. Vehicle Speed Model 2π 
−
p 
We design a vehicle speed model to reflect real-world vehi- γ 1 − µ2z sgn(α) |α|

cle speed scenarios and to evaluate the prediction accuracy for mo (α) = µz − − e , (9)
2 2
calculating the chunks that should be precached by predicting p
the dwell time within the coverage of the next RSU. Within where µz is δ( 2/π) [46].
the area where the vehicles are located, their speeds follow an Based on the determined acceleration of Vi according to
average speed, taking into account several factors such as rush P [a = ai (t)], we can denote the speed vi (t) of the vehicle Vi
hour congestion and hot spots. Accordingly, we formulate the after the time t as follows:
probability of the acceleration ai (t) of the vehicle Vi at the t
X
time t based on Gaussian distribution with skewness [46] as vi (t) = vi (0) + ai (q), (10)
follows: q=0

(a − ξ)2 where vi (0) is the current speed of Vi . Through the velocity,


2 −
we can calculate the distance Disti (t) that the vehicle moves
P [a = ai (t)] = √ e 2ω 2
ω 2π  until the time t as follows:

a − ξ (3)
t
Z α
 q2 X
ω 1 − Disti (t) = Disti (0) + vi (q), (11)

× √ e 2 dq, q=0
−∞ 2π
where ω is the scale parameter, ξ is the location parameter, where Disti (0) is the current location of the Vi . With the
and α is the shape parameter. They are denoted as follows: integrated information between the distance traveled until time
s t and the trajectory information from the navigator application,
2 σ δ we can predict the location of the vehicle at the time t.
ξ = µ − ωδ , ω=r , α= √ , (4) In the calculation, neither the current speed of Vi nor the
π 2δ 2 1 − δ2
1− average speed of vehicles in the next RSU reflects the vehicle
π speed at the time when the precached chunks are downloaded
where µ is the mean acceleration. For the comfort of users in a in the next RSU. Therefore, to reflect the road conditions of the
vehicle, we assume that the range of the vehicle’s acceleration next RSU in the calculation, the average speed over a period

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 8

of time is taken into account, and the average speed can be table not only outlines the prioritization, but also includes a
obtained as follows: mode of the cached content. Then, we describe our DSCP
P Ptmax method in the subsection IV-B. In this scheme, we formulate
i∈Dwellj q=0 vi,j (t − q)
vav,j (t) = , (12) the recalculation time using vehicle mobility based on a
N um(Dwellj )tmax Gaussian distribution with skewness. This recalculation time
where Dwellj is the set of the vehicles that dwell within the is critical to improving mobility prediction accuracy, which
coverage of Rj . N um(Dwellj ) is the number of the vehicles has a direct impact on QoS and user safety. Finally, our DTCP
in Dwellj . Therefore, we can use vav,j (t) to reflect conditions method is provided in the subsection IV-C, which makes use of
of the coverage area of Rj . cached RSUs while taking into account the tolerable delay of
the requested content. To optimize the utilization of backhaul
C. Content Request Model link traffic, RSUs make informed decisions to select the most
appropriate RSUs to participate in providing the requested
To reflect a realistic environment in terms of vehicles’ DTC. This optimization process is achieved through Integer
content requests, we design a content request model. The Linear Programming (ILP) in our delay-tolerant content pre-
total content set is C = {C1 , · · · , CC }, where C is the total caching method.
number of the content set and Cc denotes the c-th content.
Based on Zipf’s law [47], we define the popularity P op(Cc )
of each content Cc , where P op(Cc ) is the probability that Cc A. Content Storage Management method
is requested, and is calculated as follows: We present our CSM method to ensure the prevention
!−1 of inadvertently erasing cached contents. For managing the
C
X content caching storage, we use the CPT in RSUs, which is
P op(Cc ) = c−α2 k −α2 , (13)
to determine what content will be erased when the RSU’s
k=1
caching storage is fully filled. Each entry in the CPT includes
where c indicates the popularity rank of Cc and α2 is the Zipf information about the requester vehicle ID, the requested
exponent parameter. content ID, priority mode, and priority value. Our proposed
Then, we define the probability that Cc has its size Size(Cc ) method involves the classification of cached contents by the
based on Gaussian distribution as follows: CPT based on different priority modes, each assigned a priority
 2
1 q − µ3  value according to its mode. The method consists of three
1 2

σ3
 modes:
P rsize [q = Size(Cc )] = √ e , (14) • P rovide mode: The content in this mode is actively
σ3 2π
provided within the RSU’s coverage. To maintain fair-
where σ3 is the standard deviation and µ3 is the mean value. ness, the requested content by a requester vehicle, which
To avoid the situation that the content size is smaller than 0, has already received a significant number of chunks, is
we set the standard deviation according to µ − 3σ = 0. assigned low priority. Consequently, the priority value
Also, we define the request frequency by designing the for this mode is denoted by the number of the received
probability of the next request time treq by a vehicle Vi based chunks.
on the Poisson distribution as follows: • P recached mode: The content in this mode is precached

P rreq [t = treq ] = λe−λt , (15) at the current RSU based on the mobility information of
the requester vehicle by the calculation of the previous
where λ is the rate parameter, representing the average number RSU. To avoid access delays associated with content
of requests per unit of time. retrieval, the priority value for this mode is defined as
Consequently, vehicles, considering content popularity, size, the remaining time to provision, which is specifically the
and request frequency distributions, individually decide their time until the requester vehicle enters the current RSU’s
requested content at specified intervals. This model enables a coverage.
vehicle to request various content items while actively down- • Cached mode: The content in this mode is initially
loading another. Furthermore, for fairness, content is randomly cached based on its popularity or has completed the
categorized as DSC or DTC. In real-world scenarios, this provisioning process. Since the content is not expected to
model allows diverse content requests, ranging from real-time be requested in the immediate future, the priority value
streaming services to background applications like software takes into account the probability of being requested.
updates, enhancing the realism of the simulated environment. Therefore, the priority value of this mode is determined
by the popularity of the content.
IV. C ONTENT S TORAGE M ANAGEMENT AND P RECACHING This comprehensive classification and prioritization strategy
(CSMP) SCHEME ensures efficient content management within the RSU’s stor-
In this section, we describe the details of our caching storage age, minimizing the impact of content elimination on content
management and precaching scheme, taking into account the availability and user experience.
delay tolerance of the requested content. We first present our An RSU’s storage can be empty for various reasons such as
CSM method based on an RSU’s Caching Priority Table (CPT) installing a new RSU or resetting the RSU. In such situations,
for content requests by vehicles in the subsection IV-A. This the caching storage (CS) is precached with popular content

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 9

Fig. 3. The Content Storage Management (CSM) method in the CSMP scheme: a change of Rj ’s caching priority table in a situation where Rj provides
the requested contents to V1 , V2 , and V3 .

that vehicles are likely to request, as shown in the initialized downloaded size to give a chance to the requester vehicle that
CPT in Figure 3. Then, their mode is labeled as Cached starts downloading content. Thus, a priority value in this mode
because they are not cached due to a request and are not is the received content size ni,j,c
down that Cc has already been
immediately delivered to the vehicle. Also, since the likelihood provided by Vi from Rj , as shown in Figure 3. In order to
of cached contents being requested by vehicles is related to resume downloading the content after other vehicles receive
their popularity, we set their priority values and rank them by more chunks of their requested content and the priority of that
popularity based on Zipf’s law. content has been lowered, the RSU erases the content with the
When a vehicle requests content from an RSU, the RSU lowest priority value from the CS but not from the CPT, and
first looks for the requested content in its caching storage. If stops providing the content. Therefore, the RSU will resume
the content is found in the RSU’s caching storage, the RSU downloading the stopped content when the priority values in
changes the content’s Cached mode to P rovide mode and other entries become lower.
puts the downloaded size by the vehicle in its CPT, as shown
in V1 in Figure 3. Otherwise, the RSU takes the requested
content from the content server or another cached RSU via B. Delay Sensitive Content Precaching method
backhaul links based on CCN policy. Then, the taken content’s We design the Delay Sensitive Content Precaching (DSCP)
mode becomes P rovide mode, as shown in V2 in Figure 3. method to ensure the delay minimization for DSC download
However, in that case, if RSU’s caching storage was already in this subsection. For a content request by a vehicle, an
full, the RSU erases the lowest priority Cached mode or RSU employs predictive analytics to estimate the dwell time
P recached mode content in its CPT and caching storage. In of the vehicle, which plays a crucial role in determining the
Figure 3, C6 is erased to cache taken C7 because the CS of necessity of precaching. If the predicted dwell time is short or
Rj is full. Unexpectedly, when the P rovide mode content the requested content is substantial in size, the RSU may be
is re-requested by another requester vehicle, the RSU adds unable to complete the delivery of all chunks of the content
a re-requests entry containing the requester vehicle ID in its within its coverage. In such cases, the responsibility falls to
CPT. Therefore, the CPT of the RSU has entries that have the the next RSU the vehicle will encounter later, which must
duplicated content ID but a different requester vehicle ID, as precache the remaining chunks to minimize access delays.
shown in V3 in Figure 3. To ensure fairness, if the caching However, this way presents a dilemma: if the next RSU
storage is fully filled with contents from the P rovide mode, precaches a large number of chunks, it will consume additional
the RSU will stop providing the content that has the largest backhaul link traffic and risk exceeding its storage capacity,

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 10

(m)
Fig. 4. A concept of tgap , where the initial calculation process in black line graphs, the first recalculation process in red line graphs, and the second
recalculation process in blue line graphs.

potentially resulting in the inadvertent erase of other critical maximum speed limit of the road on which Vi moves. Dj
(0)
content. Conversely, precaching too few chunks can result in represents the communication range of Rj , and xij represents
insufficient content for delivery within the RSU’s coverage, the distance that Vi has moved within Rj ’s coverage up until
leading to increased access delays to get the rest of the chunks. (0)
now. Therefore, Dj − xij indicates the remaining distance
Therefore, the challenge for efficient content precaching until Vi leaves the communication range of Rj .
is to accurately predict the optimal number of chunks to (0)
Using tmin,dwell (i, j), we calculate the number of available
precache, taking into account factors such as backhaul link (0)
chunks Navail (i, j) as follows:
traffic, storage efficiency, and access delays. To achieve this
challenge, RSUs should rely on accurate dwell time predic- (0)
tmin,dwell (i, j) × rj,av (t)
(0)
tions for calculating the optimal number of chunks. Since the Navail (i, j)
= , (18)
s
mobility of vehicles is usually constrained by road structures,
legal speed limits, and interactions among vehicles, a vehicle’s where rj,av (t) represents the average transmission rate of Rj
speed becomes an indicator of the average speed at its current and s represents the size of one chunk. If Rj does not have
location. Consequently, RSUs manage and utilize the average the chunks, it retrieves them from other RSUs or the content
speed information within their coverage areas for conducting server. Rj designates the chunks as being in P rovid mode and
(0)
effective mobility prediction. sets their priority value. However, if Navail (i, j) is less than
When a vehicle Vi requests delay-sensitive content Cc from the total number of chunks, Rj calculates the maximum dwell
(0)
an RSU Rj , the RSU calculates the minimum dwell time time tmax,dwell (i, j) to determine which chunks the next RSU
(0)
tmin,dwell (i, j) required to secure the guaranteed number of Rj+1 should precache as follows:
chunks for Cc for Vi as follows: (
(0) vi − amax t if vi − amax t > vmin,j
vmin,i,j (t) = (19)
(
(0) vi + amax t if vi + amax t < vmax vmin,j else,
vmax,i (t) = (16)
vmax else,
(0)
Z tmax,dwell (i,j)
(0) (0)
Z (0)
tmin,dwell (i,j) Dj − xij = vmin,i,j (t)dt, (20)
(0) (0) 0
Dj − xij = vmax,i (t)dt, (17)
0 (0)
where vmin,i,j (t) represents the minimum speed that Vi can
(0)
where vmax,i (t)is the maximum speed that Vi can have at have at the time t based on amax . vmin,j is the minimum
the time t and (0) means this calculation is not repeated speed that vehicles can have in the communication coverage
for mobility prediction. amax represents the maximum ac- of Rj . Rj updates vmin,j based on the maximum dwell time
celeration for user comfort, while vmax represents the legal of vehicles within Rj ’s coverage to reflect a realistic scenario

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 11

(0) (m)
and prevent the situations where tmax,dwell (i, j) = ∞ and At each m-th recalculation time tmin,dwell (i, j), Rj itera-
(0) (0) (m) (m) (m)
vmin,i,j = 0. Then, the first chunk number Nstart (i, j + 1) tively calculates tmax,dwell (i, j), Navail (i, j) and Nstart (i, j +
that Rj+1 should precache can be formulated as follows: (m) (m)
1). The gap, which is expressed as tgap = tmin,dwell (i, j) −
(m)
(0)
tmax,dwell × rj+1,av (t) tmax,dwell (i, j), is reduced because there is not enough time to
(0)
Nstart (i, j+ 1) = . (21) change the speed of Vi due to the reduced distance. Therefore,
s
the recalculations of Rj continue until the prediction accuracy
Additionally, we can formulate the final chunk number (m) (m)
reaches 99%, which is denotes by 1−(tgap /tmax,dwell (i, j)) =
(0)
Nend (i, j+1) that Rj+1 can guarantee to provide to Vi through (m)
0.99, because the difference between tmin,dwell (i, j) and
precaching by using the following equations (22) and (23): (m)
( tmax,dwell (i, j) is almost the same.
(0) vav,j − amax t if vav,j − amax t > vmax As shown in Figure 4, the iterative recalculation process
vmax,i,j+1 (t) =
vmax else, continues until the prediction accuracy reaches 99%. The
(22) above graphs represent the time-velocity relationships, while
Z t(0)
min,dwell (i,j+1) the below graphs show the time-probability relationships.
(0) (0)
Dj+1 = vmin,i,j+1 (t − tmax,dwell (i, j))dt, The solid line graphs among the below graphs represent the
(0)
tmax,dwell (i,j)
probability distribution of Vi ’s location based on a Gaussian
(23)
(0) distribution with skewness, while the dashed line graphs
(0) tmin,dwell (i, j + 1) × rj+1,av (t) represent the accuracy of the dwell time prediction. This
Nend (i, j + 1) = . (24)
s accuracy can be calculated using 1 − CDF , where CDF
(0) represents the cumulative distribution function of Vi ’s location
If Nstart (i, j +1) is less than the total number of chunks for
probability. By utilizing variables such as vi , vmax , vmin , and
Cc , Rj will not send a P recache packet to Rj+1 to request (0)
precaching. The P recache packet is intended to allow Rj+1 amax , Rj can accurately calculate tmax,dwell (i, j + 1) and
(0)
to precache the chunks of Cc . Since Cc has a limited chunk tmin,dwell (i, j + 1) with a 100% accuracy. As the provisioning
(0) (m)
number, Nend (i, j + 1) becomes the last chunk number of Cc and precaching chunks are recalculated, the gap tgap between
when it is bigger than the total number of Cc ’s chunks. The predicted and actual dwell times decreases, indicating an
(0)
P recache packet includes tmax,dwell (i, j), the ID of Vi , the ID increasing accuracy of the predictive model.
(0) (0) Once Vi departs from Rj , all content pertaining to Vi in Rj ’s
of Cc , Nstart (i, j+1), and Nend (i, j+1). When Rj+1 receives
the P recache packet, it precaches the chunks of Cc that range CPT is transitioned to the Cached mode, and Rj erases Vi ’s
(0) (0)
between Nstart (i, j+1) and Nend (i, j+1), as shown in Figure information. Then, after moving its traveling route, Vi enters
5. Then, it adds the precached chunks as a P recached mode the communication coverage of Rj+1 . Upon recognizing the
(0)
in its CPT and sets their priority value to tmax,dwell (i, j) in arrival of Vi through its requests, Rj+1 updates the entries
order to prevent erasing content that will soon be requested, of all contents related to Vi to P rovide mode in its CPT,
(0)
because tmax,dwell (i, j) denotes the expected time to arrive at and assigns respective priority values to them. Therefore, the
(0) DSC, which depends significantly on prediction accuracy, can
the next RSU. That is, a vehicle with a smaller tmax,dwell (i, j) be precached with high accuracy to minimize delays caused
will enter the coverage area of the next RSU faster and request by recalculation.
the precached content.
(0)
Rj prepares the chunks according to expected Navail (i, j)
(0) C. Delay Tolerant Content Precaching method
that is based on the minimum dwell time tmin,dwell (i, j)
and Rj+1 precaches the chunks according to expected Our DTCP method provides efficient precaching for DTC
(0)
Nstart (i, j + 1) that is based on the maximum dwell time because a vehicle can request DTC according to its content
(0)
tmax,dwell (i, j). Therefore, a temporal gap t(0) gap emerges characteristics of content services and applications. Especially,
(0) (0)
between Navail (i, j) and Nstart (i, j + 1) due to the inherent to optimize the consumption of backhaul link traffic, we can
uncertainty in the mobility prediction. To improve the predic- use cached RSUs (i.e., RSUs that store the requested DTC)
(0) (0)
tion accuracy, Rj recalculates Navail (i, j) and Nstart (i, j +1) deployed along the path of a requester vehicle to provide
(0) (0) DTC, which can withstand delays to up to its tolerable limit.
after tmin,dwell (i, j) because tmin,dwell (i, j) indicates that the
When a vehicle Vi requests content Cc to an RSU Rj , if
vehicle must leave at this time and its accuracy is 100% until
(1) the content is DTC, Rj estimates the delay tolerance of Cc
the time. Thus, the recalculated values are tmax,dwell (i, j),
(1) (1) (1)
based on its tolerable delay ttol,i,c , the number of remaining
tmin,dwell (i, j), Navail (i, j) and Nstart (i, j + 1). Whenever chunks Nremain,i,c , and the dwell time tdwell (i, j). Therefore,
the recalculation is repeated, the iterator increases by 1. Rj Rj calculates the dwell time tdwell (i, j) of Vi to determine
prepares additional cached chunks as P rovide mode based whether it has enough time to withstand delay as follows:
(1)
on Navail (i, j), and sends a P recache packet, which includes
(1) (1)
tmax,dwell (i, j) and Nstart (i, j + 1). Meanwhile, Rj+1 con- Dj − xij
tdwell (i, j) = , (25)
(1) vav,j
ducts precaching of additional chunks based on Nstart (i, j+1)
as P recached mode, and updates their priority value to where xij represents the distance that Vi has moved within the
(1)
tmax,dwell (i, j). communication coverage of Rj up to the present moment. If

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 12

Fig. 5. The Delay-Sensitive Content Precaching (DSCP) method: CPT of Rj+1 in a situation where Rj lets Rj+1 to precache C1 for V2 .

ttol,i,c is less than tdwell (i, j) + tdwell (i, j + 1), Rj classifies When the m-th next RSU Rj+m receives the packet, it adds
Cc for Vi as a DSC because it does not have enough cached its tdwell,i,(j+m) to E[ttravel,i ], and compares the updated
RSUs within ttol,i,c , where xi(j+1) is 0 because Vi is not yet in E[ttravel,i ] with ttol,i,c to determine whether it is the last
Rj+1 . Additionally, Cc for Vi is classified as a DSC if Vi lacks RSU Rj+M . If E[ttravel,i ] is less than ttol,i,c , it indicates
sufficient time to tolerate delays in receiving the remaining that Rj+m is not equal to Rj+M . Thus, Rj+m adds its ID,
chunks of Cc . Rj calculates the remaining time tremain,i,c Navail,i,c (j + m), and caching state sj+m,c to the packet and
that Vi needs to receive the remaining chunks as follows: forwards the updated packet to the next RSU according to
Nremain,i,c
Vi ’s trajectory that is included in the packet, where sj+m,c
X s represents the caching state set of Cc ’s chunks and is denoted
tremain,i,c = . (26)
rj,av as sj+m,c = {sj+m (hc,1 ), · · · , sj+m (hc,Nc )}, as shown in
n=1
Figure 6. If Rj+m has n-th chunk of Cc , sj+m (hc,n ) is 1.
When tremain,i,c is less than ttol,i,c , Rj uses an error handling If Rj+m doesn’t, sj+m (hc,n ) is 0, denoted as sj+m (hc,n ) =
mechanism to address prediction inaccuracies. This mecha- {0, 1}. However, if E[ttravel,i ] becomes bigger than ttol,i,c , it
nism takes into account the available time tdwell (i, j + 1) for means Rj+m is the last RSU Rj+M . Based on the information
utilization by a single RSU. If ttol,i,c is less than tremain,i,c + contained in the packet, Rj+M selects the cached RSUs to
tdwell (i, j + 1), Rj identifies Cc as a DSC. This is because provide Cc to Vi without using a precaching process that
there is not enough time to deliver Cc to Vi within ttol,i,c , consumes additional backhaul link traffic. This is because Cc ,
even when considering the error handling time. Otherwise, Cc which is a DTC, does not require immediate provision. To
is handled as a DTC for precaching. optimize the number of provided chunks, Rj+M optimally
To determine if there are sufficient cached RSUs to provide selects the cached RSUs using an ILP approach as follows:
Cc to Vi within ttol,i,c , Rj sends a T ravel packet to the
last RSU Rj+M that Vi can reach within ttol,i,c , where M Nc
X M
X
represents the number of RSUs that can be encountered by max sj+m (hc,n )×ξcached (j+m, hc,n )×(1+ωm,c )
Vi within ttol,i,c . Rj sends the packet to the next RSU Rj+1 n=Navail,i,c (j) m=1
based on the trajectory of Vi . On receiving the packet, Rj+1 (27)
Nc
also sends it to its next RSU Rj+2 . This process continues X
that the packet is eventually forwarded to the last RSU Rj+M s.t. ξcached (j + m, hc,n ) ≤ Navail,i,c (j + m)
n=Navail,i,c (j)
based on Vi ’s trajectory
(28)
Then, the T ravel packet contains ttol,i,c , Vi ’s trajectory and M
ID, expected travel time E[ttravel,i ], and Navail,i,c (j), where X
ξcached (j + m, hc,n ) ≤ 1 (29)
E[ttravel,i ] is initialized as tdwell,i,j , as shown in Figure 6.
m=1

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 13

ξcached (j + m, hc,n ) ≤ sj+m (hc,n ) (30) where ξprec (j + m, hc,n ) is the value for letting Rj+m to
precache the chunk hc,n when it is 1. If Rj+m does not
sj+m (hc,n ) = {0, 1} (31)
provide hc,n to Vi , ξprec (j + m, hc,n ) is 0. After the op-
ξcached (j + m, hc,n ) = {0, 1} (32) timization, the last RSU Rj+M makes a P recache packet
containing ξcached (j + m, hc,n ), ξprec (j + m, hc,n ), Vi ’s ID,
where ξcached (j + m, hc,n ) is the value for the optimal and trajectory, as shown in Figure 6. The packet is then
selection. Therefore, if Rj+m prepares n-th chunk of Cc , forwarded from Rj+M to its previous RSU Rj+M −1 based
ξcached (j + m, hc,n ) is 1. Otherwise, it is 0. Equation (28) on the trajectory of Vi . Based on the forwarded order in the
restricts the number of preparation chunks to the number of T ravel packet, the P recache packet is forwarded in reverse
available chunks for each RSU Rj+m , because the RSU has up to Rj+1 . Upon receiving the P recache packet, Rj+m
limited time to provide the chunks. Equation (29) ensures verifies ξcached (j + m, hc,n ) and ξprec (j + m, hc,n ) in the
that other RSUs do not precache hc,n , which has already packet. ξcached (j + m, hc,n ) and ξprec (j + m, hc,n ) can each
been assigned to be provided by Rj+m . Equation (30) means have a value of either 0 or 1. If ξcached (j + m, hc,n ) is 1,
that Rj+m cannot be ready for providing hc,n if it does not then ξprec (j + m, hc,n ) must be 0. If ξcached (j + m, hc,n ) is
store hc,n in its caching storage. The equation (31) and the 0, then ξprec (j + m, hc,n ) can be either 0 or 1. As a result,
equation (32) denote the characteristics of sj+m (hc,n ) and if either ξcached (j + m, hc,n ) or ξprec (j + m, hc,n ) is 1, then
ξcached (j + m, hc,n ), respectively. The weight value ωm,c is the sum of ξcached (j + m, hc,n ) and ξprec (j + m, hc,n ) is 1.
used to select an RSU when multiple RSUs have hc,n in This indicates that the RSU must precache or be prepared
their caching storage. To avoid making ωm,c the primary to provide hc,n in its caching storage. If Rj+m has 0 of
consideration, the sum of all ωm,c must be less than 1. This ξcached (j + m, hc,n ) + ξprec (j + m, hc,n ) in all of the chunks,
is because the primary variable ξcached (j + m, hc,n ) can be 1. it will add a P recached mode element that includes Vi ’s ID,
For this reason, we denote ωm,c as follows: Cc ’s ID, and the chunk number as 0 in its CPT, as shown
m in Rj+3 in Figure 6. After adding, the packet is forwarded
ωm,c = . (33) to the previous RSU up to Rj+1 . As the chunk number of
Nc × M
content starts with 1, a value of 0 indicates that this RSU will
Based on ωm,c , the RSU closest to Vi is more likely to be not provide Cc to Vi to save the backhaul link traffic until
selected to reduce caching burden and delays. To determine the vehicle leaves the coverage area of the RSU. If Rj+m
whether the cached RSUs are sufficient to provide all of the has 1 of ξcached (j + m, hc,n ) + ξprec (j + m, hc,n ) in all the
chunks of Cc , Rj+M compares Nc to the total number of chunks, it prepares hc,n in its caching storage and CPT. If
selected chunks Ncached,i,c , denoted as follows: Rj+m does not have hc,n in its caching storage, it brings
Nc
X M
X the chunk from other RSUs or the content server. Then, it
Ncached,i,c = ξcached (j + m, hc,n ). (34) updates its CPT with Vi ’s ID and prepared chunk numbers,
n=Navail,i,c (j) m=1 and changes its mode to P recached. Also, the priority value
is added, which represents the predicted entrance time of the
If Ncached,i,c is greater than Nc , it indicates that there are
vehicle. This value is calculated using the vehicle’s mobility
sufficient cached RSUs to provide Cc to Vi within ttol,i,c
information as follows:
without any additional backhaul link traffic. However, if Nc
is greater than Ncached,i,c , it indicates that the number of m−1
the cached RSUs within ttol,i,c is insufficient. Therefore, in tent,i,j+m =
X
tdwell,i,q . (41)
that case, Rj+M must select other RSUs to precache Cc . q=0
Prior to making the selection, Rj+M calculates the number
of remaining chunks using Nremain,i,c = Nc − Ncached,i,c . Generally, a vehicle that requests a DTC to the RSU has a
Then, Rj+M optimally chooses additional RSUs using an ILP lower priority than another vehicle that requests a DSC, due
approach as follows: to the distance between the two. That’s because the vehicle that
XNc M
X requests a DSC is usually in the previous RSU’s coverage, but
max (1−sj+m (hc,n ))×ξprec (j+m, hc,n )×(1+ωthem,c )
vehicle that requests a DTC is farther from this RSU due
n=Navail,i,c (j) m=1 to its tolerable delay.
(35)
Nc Taking Figure 6 as an example, if a vehicle requests one
X
s.t. ξprec (j +m, hc,n ) ≤ Navail,i,c (j +m) (36) DTC consisting of 100 chunks, and Rj can only provide
n=Navail,i,c (j) 10 chunks within its coverage area, then Navail,i,c (j) is 10.
Based on the optimization, cached RSUs are pre-assigned to
M
X provide the remaining chunks after 11. The requested DTC is
ξprec (j + m, hc,n ) ≤ 1 (37) prepared by the cached RSUs, specifically R , R , and
j+1 j+4
m=1
Rj+5 , which are selected by the optimization. However, due
ξprec (j + m, hc,n ) ≤ 1 − sj+m (hc,n ) (38) to the dwell time within each RSU coverage area, they cannot
provide all the chunks within ti,c
tol . Therefore, Rj+2 precaches
sj+m (hc,n ) = {0, 1} (39)
the remaining chunks, even though it is not a cached RSU.
ξcached (j + m, hc,n ) = {0, 1} (40) In this scenario, since Rj+3 is not selected to provide the

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 14

Fig. 6. The Delay-Tolerant Content Precaching (DTCP) method: The forwarded T ravel packet and P recached packet when the requested DTC consists of
100 chunks and K is 5.

requested DTC, it has zero value for both Nstart,i,c (j + 3) TABLE II


and Nend,i,c (j + 3). S IMULATION PARAMETERS .
Whenever the requester vehicle enters a new RSU, the Parameters Value
RSU assesses the tolerance of the requested DTC from two Simulation time 7200 s
perspectives. As the first perspective, it compares ttol,i,c with Network size 10 km × 10 km
Distance between RSUs 1 km
tdwell (i, j + m) + tdwell (i, j + m + 1) to determine whether Vehicle mobility model Manhattan case
there are enough RSUs available for utilization. When ttol,i,c Backhaul link latency 10 ms
is less than tdwell (i, j +m)+tdwell (i, j +m+1), there are only Backhaul link rate 10 Gbps
Maximum rj (t) 54 Mbps
two RSUs available for utilization: the current RSU and the An exponent of content popularity 0.75
next RSU. Then, as the next perspective, it compares ttol,i,c Chunk size 25 kbytes
to tremain,i,c + tdwell (i, j + m + 1). If ttol,i,c is less than RSU transmission range 1 km
Vehicle’s CS 128 GB
tremain,i,c + tdwell (i, j + m + 1), the remaining chunks must RSU’s CS 1 TB
be provided by using all but one of the RSUs within ttol,i,c . Vehicle density [2, 20] per km2
In these situations, the RSU will reclassify the requested Vehicles’ average speed [20, 60] km/h
Tolerable delay time [50, 500] s
DTC as a DSC. The content that is reclassified as a DSC Content request mean λ [5,50] s
follows the subsection IV-B to be downloaded within its Size of requested content [300,3000] MB
tolerable delay. By utilizing the ILP-based optimizations and
considering the delay tolerance characteristics of requested
content, we minimize the backhaul link traffic consumption proposed scheme and compare it to other schemes through
and ensure content provision within the tolerable delay. simulation results conducted in various environments.

V. P ERFORMANCE E VALUATION A. Simulation Environment


In this section, we evaluate the performance of the proposed To assess the effectiveness of the proposed scheme, we
scheme through simulations conducted in NS3. To evaluate conducted simulations employing the enhanced Manhattan
the scheme’s performance, we consider request hit ratio, con- mobility model within NS3 [48]. The simulation parameters
tent download delay, backhaul link traffic consumption, and are detailed in Table II. Over a network area of 10 km × 10
success ratio for DTC provision. We describe the simulation km, 100 RSUs are strategically deployed at intersections, each
environment and metrics for comparison in subsection V-A. spaced 1 km apart, with a communication range represented by
Then, in subsection V-B, we evaluate the performances of the a 1km-radius circle. Interconnecting these RSUs are backhaul

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 15

links boasting a transmission rate of 10 Gbps and a latency provision because it reduces backhaul link traffic consumption
of 10 ms. These links facilitate communication between the and caching storage of RSUs.
content server and other RSUs. Furthermore, RSUs operate To compare the performance of the proposed scheme with
with a maximum wireless transmission rate of 54 Mbps those of the other two comparison schemes, three metrics are
when providing the requested content to a vehicle, a standard measured as follows:
feature within NS3 based on WAVE. The RSUs maintain their • Request hit ratio: This is the ratio of the number of
own caching storage and precache requested and provided hits to the number of requests for all chunks of all
content. The caching storage has a size of 1 TB. Vehicles requested content by vehicles. This hit indicates that a
are randomly distributed across a bidirectional 4-lane road requested chunk is already cached or precached in an
at each intersection and traverse the network’s roads using RSU’s caching storage. Therefore, the hit is subject to
the enhanced Manhattan mobility model, tailored to mimic prediction accuracy and unexpected erasures. If an RSU
a city scenario [49]. Each speed of a vehicle is determined experiences a traffic storm due to the large number of
based on a Gaussian distribution with the skewness. Vehicles requests and precached chunks, it will erase cached and
can store content up to 128 GB, initiating content download precached content. In addition, if there is no proper
requests to an RSU via WAVE when the need arises. Each caching policy in place, the erasures will get worse. Thus,
vehicle, on average, makes content-related decisions every 10 the hit ratio indicates the prediction accuracy for pre-
seconds, following a Poisson distribution with λ set to 15 caching, the saved traffic consumption, and the efficiency
seconds. The requested content is determined based on Zipf’s of a caching policy.
law, employing an exponent of 0.75 and considering a pool • Content download delay: This is the average time it takes
of 1,000,000 contents to reflect their popularity. Notably, half for a vehicle to download all chunks of the requested
of the content is categorized as DSC, while the other half content from the request time, for all vehicles and all
is DTC. When a vehicle designates a requested content as content. If a request for a chunk is not hit, the time
DTC, it indicates an average tolerable delay of 300 seconds, is delayed to get it from other RSUs or the content
accounting for the driving time of the vehicle to its destination server to an RSU’s caching storage. Also, if the RSU
or characteristics of the requested content. Each received is experiencing traffic congestion, it may not be able
chunk of content from an RSU has a size of 25 kbytes. The to deliver some chunks due to limited resources on the
simulations are conducted with 10,000 iterations, each with a backhaul links, resulting in delayed services. Therefore,
simulation time of 7,2000 seconds. the content download delay indicates the number of re-
To evaluate the performance of the proposed scheme, we requests due to missed hits and saved traffic consumption
compare it with two other schemes: the existing Delay- on backhaul links. We measure this metric because it
Sensitive Precaching (DSP) and Delay-Tolerant Precaching is very important to vehicle users in consuming various
(DTP) schemes. The existing DSP schemes classified all content.
content as a DSC [1] and didn’t have any caching priority • Efficiency value: This metric is used to measure the
policy [32]. The only purpose of the existing DSP schemes is efficiency of improving the DTC download success ratio
to minimize delays in delivering requested content to requester to save backhaul link traffic consumption. Since a DTC
vehicles. Therefore, reflecting these characteristics, we im- must be provided within its tolerable delay, we measure
plement the comparison DSP that unconditionally precached the success ratio of the DTC download. Also, since RSUs
all content, even if it is a DTC. It leads to heavy traffic can prepare the DTC by consuming huge backhaul link
consumption on backhaul links. RSU’s caching storage can traffic to maximize the success ratio, we measure the
become full, resulting in the erasure of precached and cached backhaul link traffic consumption. Then, we formulate
content. Thus, this can paradoxically increase delays due to re- the efficiency value based on the measured success ratio
requests caused by the erasures. Additionally, the prediction and backhaul link traffic consumption as follows:
for precaching only occurs when the vehicle requests the !(1+T raf f )
intended content, which makes the situation worse. rsuccess
The existing DTP schemes are the precaching scheme for Ef f = , (42)
2
DTCs in delay-tolerant networks [41], [42]. They utilized
cached RSUs on the trajectory of a requester vehicle by where T raf f indicates the backhaul link traffic con-
not conducting precaching because they didn’t consider the sumption, and rsuccess indicates the success ratio of
tolerable delay of the requested DTC. Also, they didn’t have DTC download. rsuccess is a value between 0 and 1,
any caching priority policy. To ensure a fair comparison, we and T raf f is measured in GBs greater than 0. To
implement the comparison DTP scheme that precaches the prevent rsuccess from going to 1 and making the traffic
requested DSC. The scheme does not precache the requested meaningless, we multiply rsuccess by 0.5 to make it less
DTC, which can save traffic consumption on the backhaul than 1. Also, to prevent the traffic from becoming less
links. However, this approach has a low success ratio in than 1 and changing its meaning, we add 1 to T raf f .
providing the requested DTC to the requester vehicle within Thus, if rsuccess is 0, the saved traffic consumption
its tolerable delay because it does not precache the requested becomes meaningless. Ef f depends on T raf f when
DTC or consider its tolerable delay. Although there are rsuccess becomes greater than 0. Since rsuccess is a
disadvantages to DTC provision, it has advantages for DSC positive number less than 1, Ef f squared by T raf f

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 16

will be smaller the larger T raf f is. If the requested DTC requests cause more content download delay. Since DSP has
doesn’t have enough tolerable delay, it will be classified the lowest hit ratio, it has the largest content download delay.
as a DSC in our proposed scheme. Therefore, the content Since the proposed scheme has a better hit ratio than DTP, it
download delay is directly related to the success ratio. has a minimum content download delay.
Also, since the hit ratio is related to re-requests and Figure 7(c) shows the efficiency value according to the
RSUs precache content based on the precaching policy, average request iteration time. Erased chunks from cached RUs
the backhaul link traffic consumption is related to the affect the success ratio for DTC download because DTC is
hit ratio and the precaching policy. Thus, we measure provided from cached RSUs. Also, delays within cached RSUs
the efficiency value to evaluate the backhaul link traffic can cause the time to complete content download to exceed the
consumption and the success ratio of the DTC download. tolerable delay. Thus, frequent requests cause erased chunks
and delays, reducing the success ratio. Especially, DTP has
a very low success ratio because it doesn’t precache the
B. Simulation Results DTC chunks because it doesn’t consider the tolerable delay.
In this subsection, we will show the performances of Although it consumes very little traffic, it has the lowest
the proposed scheme by comparing it with the other two efficiency value because its success ratio is too low. Con-
schemes in terms of three metrics, which are the request hit versely, DSP has a very high ratio because it precaches all
ratio, content download delay, and efficiency value. Then, for the chunks of DTC by not considering the content tolerance
performance comparison, we implement five environmental characteristics. However, since it consumes too much traffic
parameters: average request iteration time, vehicle density, to precache all the chunks that are not cached, it has a lower
average vehicle speed, average content size, and average efficiency value than the proposed scheme.
content tolerable delay. 2) Vehicle density: Vehicle density is an evaluation param-
1) Average Request Iteration Time: This average request eter that characterizes the concentration of vehicles within a
iteration time is an evaluation parameter that signifies the network area, typically expressed as the number of vehicles
average time interval between successive content requests from per 1km of one lane. When evaluating content delivery per-
vehicles. Each vehicle requests content at a time that is de- formance, vehicle density provides critical insight into traffic
termined based on a Poisson distribution with this average. A and interaction patterns within the network. High vehicle
shorter iteration time indicates more frequent content requests, density indicates a more congested environment, which can
which means dynamic user interaction or a high demand for lead to increased contention for network resources, such as
fresh content. This metric is relevant for understanding user RSU radio resources or backhaul links. Higher densities of
engagement and system load, and provides insight into the this parameter can affect the efficiency of content delivery,
temporal dynamics of content requests. It can affect caching which requires more robust caching and delivery mechanisms
strategies, with shorter iteration times leading to more content to handle concurrent requests. Therefore, vehicle density is
being precached or cached. Therefore, to evaluate its perfor- an important consideration in evaluating its performance in
mance regarding adaptation to temporal patterns in content, adapting to spatial content patterns.
the average request iteration time is important. Figure 8(a) shows the request hit ratio according to the
Figure 7(a) shows the request hit ratio according to the vehicle density. As vehicle density increases, the frequency
average request iteration time. Frequent requests for various of content requests also increases, potentially leading to a
content cause the RSUs’ caching storage to cache and precache faster turnover of cached and precached content in the RSU’s
more chunks. Therefore, cached or precached chunks are more caching storage. With limited storage capacity, RSUs may
likely to be erased because of the RSU’s storage limit. Thus, struggle to retain all requested content, resulting in a lower
requests for the erased chunks are considered not to have hit ratio. Frequent and varied requests can cause older or
been hit. Therefore, frequent requests will decrease the request less popular content to be continually replaced, making it less
hit ratio for requests. Because DSP does reckless precaching, likely that a new request will match existing cached content
more chunks are likely to be erased, resulting in the lowest by providing and precaching more requested content. Because
request hit ratio. Furthermore, as DTP does not precache the DSP caches more chunks through reckless precaching, this
requested DTC, it has a higher request hit ratio compared problem is worse, resulting in the lowest hit ratio. Since DTP
to DSP by reducing erased chunks. The proposed scheme can reduce precaching chunks, but the proposed scheme has
improves prediction accuracy for precaching DSC by the better prediction accuracy for precaching DSC, the proposed
recalculation and reduces erased chunks by utilizing cached scheme has the best hit ratio.
RSUs for DTC provision, leading to the best request hit ratio. Figure 8(b) shows the content download delay according
Figure 7(b) shows the content download delay according to to the vehicle density. The higher density of vehicles causes
the average request iteration time. As previously explained in a higher turnover of cached and precached content in RSUs,
the graph, frequent requests result in a decrease in the request which can cause unexpected erases. When important content
hit ratio. Un-hit requests cause backhaul link traffic. Due to is erased, the request for the content can cause repeated traffic
limited resources on the backhaul links, heavy traffic can cause consumption on backhaul links. In addition, as vehicle density
scheduling delays. Also, the un-hit requests may be resolved increases, the number of content requests increases, resulting
by bringing the requested chunks from the other RSUs or the in more traffic on the backhaul links. The increased traffic can
content server, resulting in access delay. Therefore, frequent cause scheduling delays due to resource competition within

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 17

(a) (b) (c)

Fig. 7. According to the average request iteration time: (a) request hit ratio; (b) content download delay; (c) efficiency value.

(a) (b) (c)

Fig. 8. According to the vehicle density: (a) request hit ratio; (b) content download delay; (c) efficiency value.

(a) (b) (c)

Fig. 9. According to the average vehicle speed: (a) request hit ratio; (b) content download delay; (c) efficiency value.

RSUs and on backhaul links. Since DSP consumes more Therefore, even though DTP doesn’t consume any backhaul
traffic due to reckless precaching, it has the highest content link traffic for precaching DTC, it has the lowest efficiency
download delay. In the proposed scheme, by improving the value because its success rate is too low. Conversely, although
prediction accuracy and preventing the erases of important DSP has the highest success ratio and the lowest download
content, the content download delay is the lowest then the delay due to the reckless precaching of DTC, it has a lower
other two schemes. efficiency value than the proposed scheme because it consumes
Figure 8(c) shows the efficiency value according to the ve- too much backhaul link traffic.
hicle density. As the number of vehicles on the road increases, 3) Average Vehicle Speed: Average vehicle speed is a
the number of content requests also increases. This increased critical evaluation parameter in assessing the performance of
demand can lead to more frequent requests for content, which vehicular content delivery systems. Since the acceleration of
in turn can strain available resources such as RSU caching each vehicle is independently determined based on Gaussian
storage and backhaul links. The increased competition for distribution with a skewness, this parameter affects vehicles’
resources among vehicles could result in more content being movement speed within the network. Faster vehicular mobility,
erased from the caching storage, which is directly related to resulting from a higher average speed, affects the duration
the success rate of DTC download within its tolerable delay. of interactions between vehicles and RSUs. Faster-moving

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 18

(a) (b) (c)

Fig. 10. According to the average content size: (a) request hit ratio; (b) content download delay; (c) efficiency value.

(a) (b) (c)

Fig. 11. According to the average content tolerable delay: (a) request hit ratio; (b) content download delay; (c) efficiency value.

vehicles may have shorter dwell times within an RSU’s Figure 9(b) shows the content download delay according to
coverage area, affecting the feasibility of content precaching the average vehicle speed. A shorter dwell time due to faster
and the time available for content provision. Vehicles traveling vehicle speeds results in less time for an RSU to predict,
at higher speeds may spend less time within the coverage area cache, and provide chunks, as well as for the next RSU to
of an RSU, which can affect the ability to precache content and precache chunks. This leads to spending more time to cache
the time available for providing it. Additionally, the frequency and precache the requested chunks and to predict vehicle
and timing of content requests are affected by the average mobility. The speed of the vehicles has a significant impact on
vehicle speed because it contributes to the overall dynamics the turnover rate of requests in the RSU. This results in a large
of vehicular traffic. A precise understanding of the average amount of content being cached, precached, and erased from
vehicle speed is essential for optimizing content provision the caching storage. As a result, when vehicles are traveling at
schemes, especially in situations where vehicular mobility high speeds, the time spent on caching and precaching chunks,
patterns vary, affecting the system’s ability to predict and meet as well as the time required to bring erased chunks to the
content delivery demands efficiently. RSU, causes more delays. In DSP, reckless precaching leads
Figure 9(a) shows the request hit ratio according to the to even more delays due to the need to bring the erased chunks,
average vehicle speed. Fast-moving vehicles result in less so DSP has the highest delay. The proposed scheme has the
time spent within the communication range of an RSU. This lowest delay because it prevents reckless precaching in DTC
reduced dwell time reduces the likelihood that the RSU will and the erasure of important content.
be able to fulfill content requests, especially for DTC. This Figure 9(c) shows the efficiency value according to the aver-
reduced dwell time makes it difficult for the RSU to provide age vehicle speed. Higher vehicle speeds result in reduced time
cached and precached chunks to vehicles before they leave the spent within an RSU’s coverage, which in turn decreases the
RSU’s coverage area. Therefore, faster vehicle speeds create a time available for handling prediction errors for precaching,
scenario where vehicles have less time to interact with RSUs, leading to a lower success ratio. Additionally, prediction errors
making it more difficult for RSUs to successfully provide can cause an increase in backhaul link traffic to bring the
content requests within the limited time, which ultimately requested chunks from other RSUs or the content server. The
has a negative impact on the request hit ratio. The proposed efficiency value decreases due to faster vehicle speeds, caused
scheme has a higher hit ratio than the other two schemes due by prediction errors and erased chunks in the caching storage
to more accurate precaching prediction. The DSP scheme has of RSUs. Specifically, since DTP does not precache the chunks
the lowest hit ratio due to its caching and precaching of more of the requested DTC, the impact of prediction errors increases
chunks. in terms of success ratio, resulting in the lowest efficiency

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 19

value. The proposed scheme has the highest efficiency value it does not precache DTC chunks, as it ignores their tolerable
due to its higher prediction accuracy through recalculation delay, resulting in the lowest efficiency value. The proposed
compared to DSP. scheme is more efficient than DSP because DSP consumes a
4) Average Content Size: Average content size is an impor- significant amount of traffic on backhaul links to precache all
tant parameter in evaluating vehicular content delivery systems chunks of DTC and DSC.
as it provides insights into the characteristics of transmitted 5) Average Content Tolerable Delay: Average content tol-
data. This metric represents the average size of requested con- erable delay is a critical evaluation parameter in vehicular
tent from vehicles, reflecting the variety of multimedia files, content delivery systems. It represents the average duration
software updates, or other digital assets within the network. that a vehicle can endure while waiting for a requested content
The size of each content is independently determined based on item. This metric reflects the balance between the timeliness of
Gaussian distribution with this average. Larger average content content provision and the tolerable delay perceived by users.
sizes can impact both the required bandwidth and the storage A shorter average tolerable delay enhances QoS and user
capacity of RSUs, affecting the efficiency of data transmission satisfaction, especially for delay-sensitive content. Conversely,
and storage. Moreover, variations in content size can impact a higher average tolerable delay may be acceptable for delay-
the duration of content delivery and retrieval, resulting in al- tolerant content, considering the diverse nature of applications
terations to network traffic. Understanding the average content in vehicular scenarios. Striking the right balance in the Aver-
size is essential for developing schemes related to caching, age Content Tolerable Delay is crucial for optimizing system
precaching, and overall content management, as it directly performance, minimizing access delays, and aligning with the
affects the resource utilization and responsiveness of vehicular expectations of users who engage with both delay-sensitive
content delivery systems. and delay-tolerant content in dynamic vehicular environments.
Figure 10(a) shows the request hit ratio according to the Figure 11(a) shows the request hit ratio according to the
average size of all the content. Larger content sizes result in average tolerable delay of content. The graph shows a hor-
more erased content due to storage limitations, which can lead izontal trend, indicating that the hit ratio is insensitive to
to the erasure of popular cached and precached content. As changes in the tolerable delay of the content. Regardless of
a result, requests for such content may not be hit, leading the tolerable delay parameter, the hit ratio remains relatively
to a reduction in the request hit ratio. In DSP, excessive constant, suggesting that variations in content tolerances do not
precaching causes RSUs to precache more chunks, resulting in significantly influence the hit ratio. In this scenario, DSP has
the lowest hit ratio. The proposed scheme has a higher hit ratio the lowest hit ratio due to lower prediction accuracy and erased
for precaching than DTP, indicating its superior prediction chunks resulting from reckless precaching and the absence of
accuracy. a proper caching policy. When comparing the two schemes
Figure 10(b) shows the content download delay according that do not use reckless precaching, the proposed scheme has
to the average size of all the content. The time required the highest hit ratio due to a proper precaching algorithm for
to provide content increases with its size due to the larger DTC and improved prediction accuracy through recalculation.
number of chunks. As a result, there is a linear relationship However, rushing to precache DTC with a shorter tolerable
between the size of the content and the content download delay increases backhaul link traffic, causing erased chunks
delay. Furthermore, when the storage limit of RSUs’ caching and decreasing the hit ratio in the proposed scheme.
storage is exceeded, erased chunks due to the increased size Figure 11(b) shows the content download delay according
of the content cause further delays in bringing them back. to the average tolerable delay of content. The graph shows
Reckless precaching in DSP leads to more erased chunks, a horizontal trend, indicating insensitivity to changes in the
resulting in additional delays. Therefore, DSP has the highest tolerable delay. This suggests that the impact of tolerable delay
content download delay. The DTP and proposed scheme aim is minimal, resulting in consistent performance across different
to reduce backhaul link traffic and caching storage of RSUs by tolerable delay scenarios. The delay in downloading content is
considering the delay tolerance characteristics of content. This affected by the hit ratio, as the requested chunks are provided
approach can effectively reduce the number of erased chunks. by other RSUs or the content server when they are not hit. The
Additionally, the proposed scheme minimizes prediction errors two comparison schemes do not take into account the tolerance
for precaching, resulting in the shortest content download characteristics of the content, but the proposed scheme does.
delay possible. When the requested content has a small tolerable delay, the
Figure 10(c) shows the efficiency value according to the proposed scheme precaches almost all chunks of the content.
average size of all the content. When a large DTC size is re- However, due to its proper caching policy, it has a lower
quested, more RSUs are required for successful downloading. content download delay than DSP and DTP.
However, due to the tolerable delay of the DTC, the success Figure 11(c) shows the efficiency value according to the
ratio decreases as more RSUs are needed. Additionally, an average tolerable delay of content. If the requested content has
increased size of content leads to erased chunks and a low hit a shorter tolerable delay, it may not be downloaded within its
ratio, resulting in a further decrease in the success ratio. The tolerable delay. Therefore, DTP, which only uses the cached
size of the content affects the required traffic consumption on RSUs without precaching DTC, has the worst performance in
the backhaul links, and the erased chunks can increase the terms of the efficiency value, even though it hardly consumes
traffic. As a result, larger content sizes reduce the efficiency backhaul link traffic for DTC due to no precaching. Although
of precaching schemes. DTP has a low success rate because DSP has the highest success ratio, its efficiency value is lower

Authorized licensed use limited to: Bahria University. Downloaded on January 30,2025 at 09:21:52 UTC from IEEE Xplore. Restrictions apply.
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See https://www.ieee.org/publications/rights/index.html for more information.
This article has been accepted for publication in IEEE Internet of Things Journal. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/JIOT.2024.3522322

IEEE INTERNET OF THINGS JOURNAL, VOL. 00, NO. 0, AUGUST 2024 20

than that of the proposed scheme due to reckless precaching. [9] L. Zhang, A. Afanasyev, J. Burke, V. Jacobson, K. Claffy, P. Crowley,
The proposed scheme has the highest efficiency value because C. Papadopoulos, L. Wang and B. Zhang, “Named data networking,”
ACM SIGCOMM Comput. Commun. Rev., vol. 44, issue. 3 pp. 66-–73,
of its high success ratio and the saved traffic consumption Jul. 2014.
on backhaul links due to the proper precaching algorithm [10] Z. Li, Y. Chen, D. Liu and X. Li, “Performance analysis for
for DTC. When the tolerable delay of DTC is reduced, the an enhanced architecture of IoV via Content–Centric Networking,”
EURASIP J. Wireless Commun. Netw., vol. 2017, no. 1, pp. 1–7, Dec.
proposed scheme uses more traffic to precache DTC chunks, 2017.
increasing the success ratio but decreasing the efficiency value. [11] Y. Hichri, S. Dahi and H. Fathallah, “Candidate architectures for
emerging IoV: a survey and comparative study,” Des. Autom. Embed.
VI. C ONCLUSION Syst., vol. 25, pp. 237-–263, Aug. 2021.
[12] H. Ding, Y. Ma, C. Zhang, X. Li, B. Lin, Y. Fang, and S. Chen, “Proba-
In this paper, we proposed a comprehensive Content Storage bilistic Data Prefetching for Data Transportation in Smart Cities,” IEEE
Management and Precaching (CSMP) scheme to prevent the Internet Things J., vol. 9, no. 3, pp. 1655–1666, Feb. 2022.
[13] Y. Wu, X. Fang, C. Luo and G. Min, “Intelligent Content Precaching
inadvertent erasure of content and alleviate high backhaul link Scheme for Platoon–Based Edge Vehicular Networks,” IEEE Internet
traffic delays in CCN-IoV. Initially, the CSMP scheme includes Things J., vol. 9, no. 20, pp. 20503–20518, Oct. 2022.
the Content Storage Management (CSM) method, which pri- [14] Y. AlNagar, R. H. Gohary, S. Hosny and A. A. El-Sherif, “Mobility–
Aware Edge Caching for Minimizing Latency in Vehicular Networks,”
oritizes cached content to prevent inadvertent content erasures IEEE Open J. Veh. Technol., vol. 3, pp. 68–84, Feb. 2022.
caused by storage limitations, followed by the implementation [15] S. Yu, E. Sheng, Y. Zhang, Y. Li, H. Chen and Y. Hao, “Efficient Non-
of the Delay-Tolerant Content Precaching (DTCP) method to alleviate backhaul link traffic. Additionally, we proposed the Delay-Sensitive Content Precaching (DSCP) method to improve mobility prediction accuracy and to minimize delays in provisioning delay-sensitive content. Moreover, to reflect various realistic scenarios, we designed a vehicle mobility model based on a skew-normal (Gaussian with skewness) distribution and a content request model based on Zipf's law, the Gaussian distribution, and the Poisson distribution. Simulation results show that the evaluation value, which jointly considers the success ratio and the backhaul link traffic, improves by 23.21% compared with the delay-sensitive precaching scheme used for comparison, thereby reducing the content download delay and enhancing the request hit ratio. Therefore, with this approach, autonomous vehicles in CCN-IoV can conserve backhaul link resources without sacrificing the provision of delay-tolerant content, while content caching is prioritized efficiently so that delay-sensitive content is delivered with minimal delay.
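To make the simulation inputs above concrete, the short Python sketch below illustrates one way to sample the three distributions named in this section: a skew-normal (Gaussian with skewness) vehicle speed, Zipf-distributed content popularity, and Poisson-distributed request arrivals. The function names and all numerical parameters are illustrative assumptions and are not taken from the paper's experimental setup; the sketch reproduces only the input models, not the CSMP scheme itself.

import numpy as np

# Illustrative sketch (not the authors' code): all parameter values below are assumptions.
rng = np.random.default_rng(seed=7)

def skew_normal(alpha, loc, scale, size):
    # Azzalini's construction: Z = delta*|U| + sqrt(1 - delta^2)*V, with U, V ~ N(0, 1) i.i.d.
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    u = np.abs(rng.standard_normal(size))
    v = rng.standard_normal(size)
    return loc + scale * (delta * u + np.sqrt(1.0 - delta ** 2) * v)

def zipf_popularity(num_contents, s):
    # Zipf's law: P(rank k) proportional to 1 / k^s, normalized over the catalogue.
    ranks = np.arange(1, num_contents + 1)
    weights = 1.0 / ranks ** s
    return weights / weights.sum()

NUM_CONTENTS, ZIPF_S, REQ_RATE = 100, 0.8, 3.0                   # assumed catalogue size, Zipf exponent, requests per slot
speeds = skew_normal(alpha=4.0, loc=14.0, scale=3.0, size=50)    # right-skewed vehicle speeds (m/s)
popularity = zipf_popularity(NUM_CONTENTS, ZIPF_S)

requests_per_slot = rng.poisson(lam=REQ_RATE, size=60)           # Poisson request arrivals per time slot
requested_ids = [rng.choice(NUM_CONTENTS, size=n, p=popularity)  # popularity rank of each requested content
                 for n in requests_per_slot]

Sampling the skew-normal through Azzalini's two-normal construction keeps the sketch dependent on NumPy only; with these inputs, an evaluator could then count successful deliveries and backhaul transfers per slot to form a combined metric of the kind summarized above, whose exact weighting is not restated here.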
Youngju Nam received the B.S., M.S., and Ph.D. degrees from the School of Information and Communication Engineering, Chungbuk National University, Cheongju, South Korea, in 2017, 2019, and 2023, respectively. He is currently researching with the Research Institute for Computer and Information Communication, Chungbuk National University. His research interests include computer communication and networking, wireless sensor networks, content-centric vehicular networks, content precaching, optimization, and social networks.

Hyunseok Choi received the B.S. and Ph.D. degrees from the School of Information and Communication Engineering, Chungbuk National University, Cheongju, South Korea, in 2017 and 2023. He is currently researching with the Research Institute for Computer and Information Communication, Chungbuk National University. His research interests include computer communication and networking, wireless sensor networks, vehicular ad-hoc networks, vehicular clustering, and cloud computing.

Euisin Lee received the B.S., M.S., and Ph.D. degrees in computer engineering from Chungnam National University, Daejeon, South Korea, in 2005, 2007, and 2012, respectively. He studied as a Postdoctoral Researcher with the Department of Computer Science, University of California at Los Angeles, from 2012 to 2014. He joined the School of Information and Communication Engineering, Chungbuk National University, in 2014, where he is currently working as a Professor. His research interests include computer communication and networking, routing, multicasting, mobility management, location service, QoS (real-time and reliability) in mobile ad hoc networks (MANETs), wireless sensor networks (WSNs), vehicular ad hoc networks (VANETs), Internet of Things (IoT), and Information-Centric Networking (ICN).
