Interference-Aware Scheduling Algorithms for Terahertz Wireless Networks
Abstract
Terahertz (THz) technology has received significant attention in recent years due to its potential
to realize multi-gigabit wireless communications. Effective management of communications in
Terahertz wireless networks is challenging due to the limited bandwidth and high path loss
characteristics. One key aspect for improving the network performance is through efficient
scheduling algorithms that allocate resources to users based on their bandwidth requirements and
channel conditions. The interference characteristics, which differ significantly from those in
traditional microwave and millimeter-wave networks, need to be carefully considered in the
scheduling process to avoid data collisions and improve overall network capacity. In this context,
this paper presents a novel and efficient approach for interference-aware scheduling in THz
wireless networks. The proposed algorithms are designed to consider the characteristics of THz
signals, including their directional nature and high propagation loss. Our approach aims to
improve network performance by adapting the re-scheduling and power allocation techniques to
suit the dynamic THz channel conditions. The main objective is to ensure that users' quality-of-service (QoS) requirements are met while maximizing throughput and minimizing the
interference level among users. The proposed algorithms consist of two key
components. First, a channel-aware scheduling (CAS) algorithm is introduced to assign
appropriate time-frequency resources to users with different quality of service demands. Next, a
Game-Theoretic Power Allocation (GTPA) algorithm is put forward to optimize the transmission
power level of each user.
Keywords: Multi-Gigabit, Millimeter-Wave, Interference-Aware, Re-Scheduling, Channel-
Aware, Game-Theoretic
1. Introduction
Terahertz (THz) wireless networks refer to a type of wireless communication system that
operates in the terahertz frequency range, which is typically defined as 0.1 to 10 THz [1]. This
frequency range is much higher than the radio frequency range (up to 300 GHz) currently used
for most wireless communication systems [2]. Terahertz wireless networks have the potential to
provide much higher data rates than existing wireless networks, with theoretical speeds up to 100
Gbps [3]. One of the critical features of terahertz wireless networks is the use of terahertz waves,
also known as T-rays, for communication [4]. These waves have much smaller wavelengths than
radio waves, allowing for more efficient transmission and higher data rates. They also can
penetrate through some materials, such as clothing or packaging, which can be beneficial in
specific applications. There are also technical challenges associated with the use of terahertz
waves for wireless communication [5]. The higher frequency means that these waves are more
susceptible to interference and attenuation, which can result in reduced signal strength and range
[6]. Terahertz waves are easily absorbed by water molecules, which can limit their ability to
travel through the atmosphere and affect their range [7]. To overcome these challenges, terahertz
wireless networks use advanced technologies such as beamforming, where the direction of the
wireless signal is adjusted to avoid interference and improve signal strength [8]. MIMO
(Multiple Input Multiple Output) technology can also be used, which utilizes multiple antennas
to improve signal strength and data transmission [9]. Terahertz (THz) wireless networks are next-
generation wireless communication systems that use frequencies ranging from 0.1 to 10 THz
[10]. These networks have the potential to provide higher data rates, lower latency, and increased
capacity compared to existing wireless networks [11]. However, several technical challenges need to be
addressed in order to realize the full potential of Terahertz wireless networks. One of the main
issues with Terahertz wireless networks is the limited range of the THz waves [12]. These high-
frequency waves have short wavelengths, which means they are easily absorbed by obstacles
such as walls, buildings, and even rain. It leads to a shorter range for communication compared
to lower frequency wireless networks [13]. The penetration depth of THz waves is limited, which
means they cannot penetrate through solid objects, making them unsuitable for applications that
require deep coverage, such as indoor and underground communication [14]. Another technical
issue is the high path loss in Terahertz wireless networks. The high frequency of THz waves
results in high attenuation when traveling through the air, leading to a significant decrease in
signal strength. It leads to a shorter coverage area and requires more infrastructure to ensure
reliable communication [15]. Another challenge is the need for efficient transceiver technology
for Terahertz wireless networks. The main contributions of this research are as follows:
• Advancement in communication technology: Terahertz wireless networks offer a
significant contribution to the field of communication technology. Traditionally,
wireless networks operate in the microwave frequency range, but terahertz
technology provides much higher data rates and larger bandwidths, which can
significantly improve the overall performance of wireless networks. It can result
in faster and more efficient data transfer, enabling new applications and services.
• Exploration of new applications: The use of terahertz frequency in wireless
networks opens up an entire spectrum of new applications that were not
previously possible with lower frequencies. For example, terahertz waves have
better penetration capabilities, allowing for non-destructive imaging and sensing
in different materials. It can have significant applications in fields such as medical
imaging, security screening, and industrial processing.
• Research in signal processing and network architecture: The development of
terahertz wireless networks requires extensive research in signal processing and
network architecture. The unique characteristics of terahertz signals, such as high
attenuation and sensitivity to blockages, present challenges that need to be
overcome in order to achieve reliable and high-performance wireless networks.
As a result, research in this area can lead to improvements in signal processing
and network design techniques, which can benefit future wireless communication
systems.
The remainder of this paper is organized as follows. Section 2 reviews recent work related to this
research. Section 3 describes the proposed model, and Section 4 presents the comparative
analysis. Finally, Section 5 shows the results, and Section 6 presents the conclusion and future
scope of the research.
2. Related Works
Tezergil, B., et al. [16] have discussed wireless backhaul in 5G and beyond, which refers to the
transmission of data and communication signals between the core network and the base stations
using wireless technology. This presents a number of challenges, such as scalability, reliability,
and network coverage. However, it also offers opportunities for high-speed, low-latency
connections and cost-effective networks. Akbar, M. S., et al. [17] have discussed the
6G survey aims to identify the challenges, requirements, applications, key enabling technologies,
use cases, AI integration issues, and security aspects for the development of 6G technology. It
provides insights into the potential improvements of 6G over 5G and outlines the areas that need
to be addressed for successful 6G deployment. Peng, Y., et al. [18] have discussed intelligent
recommendation-based user plane handover with enhanced TCP throughput in ultra-dense
cellular networks. This technique uses intelligent algorithms and user preferences to make
efficient handover decisions for mobile devices, resulting in improved TCP throughput and better
user experience in highly populated cellular networks. Qadir, Z., et al. [19] have discussed how
the upcoming 6G technology will revolutionize the Internet of Things (IoT) with faster speeds,
massive connectivity, and low latency. Recent developments such as edge computing, AI, and
blockchain have enabled new use cases in areas such as smart cities, healthcare, and
transportation. However, there are still challenges to be addressed, including security, privacy,
and interoperability. Zhan, C. et al. [20] have discussed the tradeoff between the age of
information and operation time for UAV sensing over multi-cell cellular networks. This tradeoff
refers to the decision to prioritize either the timely delivery of information or the efficiency of
the operation. This tradeoff must be carefully considered in order to maximize the benefits of
UAV sensing in a multi-cell network setting. Bargavi, M., et al. [21] have discussed cross-layer
design as a technique for optimizing communication protocols in multi-hop wireless networks by
allowing information exchange between layers. This approach enables better coordination
between layers and improves overall network performance by considering interactions between
layers rather than treating them independently. It minimizes delays, improves throughput, and
reduces resource usage. Moorthy, S. K. et al. [22] have discussed the enhancement of automation
in resource orchestration for software-defined broadband flying networks. This allows for better
management and allocation of network resources, resulting in improved efficiency and
performance. It also enables faster and more agile deployment of new services and applications,
enhancing the overall user experience. Gera, B., et al. [23] have discussed how the application of
Artificial Intelligence (AI) and 6G technology in the Internet of Things (IoT) can promote
sustainable development in smart cities. By using advanced technologies, such as machine
learning and data analytics, cities can improve efficiency, reduce energy consumption, and
optimize resource allocation, ultimately creating more eco-friendly and livable urban
environments. Wu, K., et al. [24] have discussed the simultaneous beam and user selection
technique in beamspace mmWave/THz massive MIMO downlink. This technique allows for
efficient and dynamic resource allocation by selecting optimal beams and users to transmit data
on different subcarriers. It enables faster and more reliable communication in high-frequency
networks with a large number of users. Sharif, S., et al. [25] have discussed space-aerial-ground-sea
integrated networks, which involve the integration of various communication platforms such
as satellites, drones, ground base stations, and undersea fiber optic cables to deliver high-speed
and cost-efficient connectivity for 6G technology. Resource optimization is critical in managing
the diverse network components, while challenges such as interoperability and security must be
addressed. Du, Q., et al. [26] have discussed the principles of 6G wireless networks, which are
based on higher frequency bands, massive MIMO technology, enhanced spatial and spectral
efficiency, ultra-low latency, extreme reliability, and improved energy efficiency. 6G is also
expected to incorporate artificial intelligence and quantum computing to support a wide range of
applications, such as the Internet of Things, virtual and augmented reality, and intelligent
transportation. Chen, W., et al. [27] have discussed 5G-advanced, the current generation of cellular
networks, which delivers faster data speeds and low latency. 6G is the next generation, still in its
early stages. It promises even faster speeds, lower latency, and enhanced connectivity for the
Internet of Things. Research and development efforts are ongoing to make 6G a reality in the
future. Hassan, S. S., et al. [28] have discussed SpaceRIS as a novel approach to maximizing
coverage of Low-Earth Orbit (LEO) satellites in 6G Sub-THz networks. It combines the use of
MAPPO Deep Reinforcement Learning and Whale Optimization algorithms to efficiently
allocate and coordinate the placement of satellites in LEO orbits, resulting in improved network
performance and coverage. Safdar, G. A., et al. [29] have discussed UE admittance in HetNet
cognitive femtocells refers to the process of allowing user equipment (UE) to access the network.
Contention-free resource allocation is a method for allocating network resources in a way that
minimizes interference between femtocells, improving overall network performance and
reducing interference-related issues. These advances in electrical engineering, electronics, and
energy aim to optimize network efficiency and mitigate interference in HetNet cognitive
femtocells. Aboueleneen, N. et al. [30] have discussed Deep reinforcement learning (DRL) as a
promising approach for optimizing decision-making in the Internet of Drones networks (IoDN).
However, several challenges exist in its application to IoDN, such as scalability, network
dynamics, and coordination among drones. Further research is needed to address these issues and
improve DRL's performance in IoDN.
Table 1: Comprehensive Analysis

Tezergil, B., et al. [16], 2022. Advantage: improved network flexibility and scalability due to the use of advanced wireless technologies and higher frequencies. Limitation: limited bandwidth and capacity of wireless backhaul in 5G and beyond, potentially leading to network congestion and lower data speeds.

Akbar, M. S., et al. [17], 2022. Advantage: a comprehensive understanding of the entire ecosystem for developing and implementing more advanced and secure wireless communication networks for various use cases. Limitation: the survey lacks real-world case studies and practical implementation examples.

Peng, Y., et al. [18], 2021. Advantage: improves overall network performance and user experience by dynamically selecting optimal base stations for handover. Limitation: the algorithm may not take into account the individual needs and preferences of each user.

Qadir, Z., et al. [19], 2023. Advantage: better network coverage and reliability to support a wider range of interconnected devices for improved efficiency and productivity. Limitation: does not address the potential security risks and privacy concerns associated with a highly connected IoT ecosystem.

Zhan, C., et al. [20], 2023. Advantage: allows more efficient use of resources and better adaptation to changing environmental conditions. Limitation: increased operation time for collecting information may result in outdated data, reducing the accuracy of UAV sensing.

Bargavi, M., et al. [21], 2024. Advantage: better performance and stability due to coordinated optimization of different layers working together. Limitation: the complexity of managing interdependencies across layers can become overwhelming, making the design challenging to implement and maintain.

Moorthy, S. K., et al. [22], 2023. Advantage: improved efficiency and scalability, allowing more dynamic allocation of resources based on network demands and reducing manual intervention. Limitation: the complexity and speed of network changes may limit effective automation of resource orchestration in software-defined broadband flying networks.

Gera, B., et al. [23], 2023. Advantage: efficient and effective management of resources and infrastructure, leading to a reduced carbon footprint and increased sustainability. Limitation: privacy concerns and potential bias in AI decision-making could hinder effective and equitable use of this technology in creating sustainable smart cities.

Wu, K., et al. [24], 2023. Advantage: ability to adapt to the changing environment and varying channel conditions to provide more reliable and efficient signal transmission. Limitation: difficulty obtaining accurate channel information from multiple users due to limited feedback and delay in channel estimation.

Sharif, S., et al. [25], 2023. Advantage: improved resource utilization across multiple networks and minimized interference, leading to higher efficiency and a better user experience for 6G technology. Limitation: limited bandwidth and signal strength when connecting in remote or underwater locations.

Du, Q., et al. [26], 2023. Advantage: improved data rates and bandwidth capabilities for faster and more efficient communication. Limitation: lack of standardization and global agreement on key technologies, which can lead to fragmentation and compatibility issues among different networks.

Chen, W., et al. [27], 2023. Advantage: 5G-advanced toward 6G can support even faster network speeds, improving overall connectivity and user experience. Limitation: reliance on existing infrastructure, which may not be able to support future developments and demand for 6G.

Hassan, S. S., et al. [28], 2024. Advantage: SpaceRIS maximizes coverage in low-Earth-orbit satellite networks using MAPPO DRL and Whale Optimization techniques in 6G sub-THz networks. Limitation: reliance on a particular optimization algorithm, which may not be effective in all scenarios.

Safdar, G. A., et al. [29], 2023. Advantage: minimizes interference between femtocells, leading to improved overall network performance and a better user experience. Limitation: potential for reduced efficiency and wasted resources due to the static nature of resource allocation in a dynamic environment.

Aboueleneen, N., et al. [30], 2023. Advantage: deep reinforcement learning can adapt to dynamic and unpredictable environments, allowing efficient decision making in complex and constantly changing drone networks. Limitation: limited generalizability due to the constrained environment and task-specific applicability, making it less effective in dynamic real-world scenarios.
• Terahertz wireless networks have emerged as a promising solution for high-speed
wireless communication due to their potential to provide significantly larger bandwidth
compared to traditional wireless technologies. However, several technical challenges exist in the
implementation of terahertz wireless networks.
• One of the main issues with terahertz wireless networks is the high level of atmospheric
absorption. Terahertz waves are susceptible to absorption by atmospheric gases, which
can significantly reduce the range and reliability of these networks. Another critical
challenge is the limited transmission range of terahertz waves. Due to their high
frequency, they have shorter wavelengths and, therefore, experience higher path loss
compared to lower-frequency waves. It limits the range of terahertz wireless networks
and requires the installation of a large number of access points, making it expensive and
difficult to scale.
• Terahertz waves are highly directional, making it necessary for the transmitter and
receiver to have a clear line of sight. It poses a challenge in urban and indoor
environments, where obstacles and reflections can cause signal blockage and
interference. The lack of standardized equipment and protocols for terahertz wireless
networks is a major technical issue. It limits interoperability and hinders the development
of a fully functional infrastructure.
Interference-aware scheduling algorithms are a new and vital technical innovation in the field of
wireless communication. They aim to increase the efficiency and reliability of wireless networks
by taking into account the interference caused by the simultaneous transmission of multiple
devices. It is achieved through advanced algorithms that dynamically allocate resources, such as
time slots and frequencies, to different devices based on the level of interference they cause to
each other. By managing interference, these algorithms can significantly improve the overall
network performance, leading to higher data transfer rates, lower latency, and improved
reliability. It is particularly crucial in dense and highly congested wireless environments, where
traditional scheduling algorithms may be prone to interference and result in performance
degradation. Thus, interference-aware scheduling algorithms represent a significant technical
breakthrough in enabling better and more robust wireless communication systems.
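To make the idea concrete, the following Python sketch (an illustration only; the link names and conflict pairs are assumptions, not the paper's actual algorithm) assigns time slots greedily so that no two mutually interfering links share a slot, which amounts to greedy coloring of an interference graph:

```python
# Greedy interference-aware slot assignment: links that interfere with
# each other (edges of the interference graph) must get different slots.
def assign_slots(links, interferes):
    """links: ordered link ids; interferes: set of frozenset pairs."""
    slots = {}
    for link in links:
        # Slots already taken by links that conflict with this one.
        taken = {slots[other] for other in slots
                 if frozenset((link, other)) in interferes}
        slot = 0
        while slot in taken:  # pick the lowest interference-free slot
            slot += 1
        slots[link] = slot
    return slots

# A conflicts with B, B conflicts with C; A and C may share a slot.
interferes = {frozenset(p) for p in [("A", "B"), ("B", "C")]}
print(assign_slots(["A", "B", "C"], interferes))  # {'A': 0, 'B': 1, 'C': 0}
```

Non-conflicting links reuse the same slot, which is exactly the spatial-reuse gain an interference-aware scheduler exploits over a naive one-slot-per-link schedule.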
3. Proposed System
A. Construction diagram
• Resource usage prediction using modified DES
Resource usage prediction is essential to resource management in various systems, such as
manufacturing, healthcare, and transportation. It involves forecasting the future demand for
resources, such as machines, equipment, or personnel, to optimize their utilization and ensure
efficient operations. In recent years, discrete event simulation (DES) has been commonly used
for resource usage prediction due to its ability to model complex systems accurately and simulate
their behaviour. Traditional DES models use fixed input parameters, such as arrival rates and
processing times, to simulate the system under a specific scenario. However, these parameters
constantly change in real-world systems due to various factors, such as machine breakdowns,
unexpected events, and changing demand patterns. It can lead to inaccurate resource usage
predictions and affect system performance. A modified DES approach has been proposed to
overcome this limitation, which incorporates dynamic input parameters into the simulation
model. This approach considers changes in input parameters during the simulation and adjusts
the system's behaviour accordingly. It uses prediction algorithms, such as time series analysis
and machine learning, to forecast the future values of input parameters based on historical data.
These predicted values are then used as inputs in the simulation model, allowing for more
accurate resource usage predictions. One of the critical challenges in implementing this approach
is determining the appropriate prediction algorithms for each input parameter.
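A minimal sketch of the prediction step (the function name, smoothing constant, and sample rates are illustrative assumptions): forecast the next-period arrival rate from history with simple exponential smoothing, then feed that forecast into the simulation step in place of a fixed parameter.

```python
def exp_smooth_forecast(history, alpha=0.3):
    """Simple exponential smoothing; returns the next-period forecast."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Historical arrival rates (requests per slot) drive the next DES step
# instead of a fixed input parameter, so the model tracks demand shifts.
observed_rates = [10.0, 12.0, 11.0, 15.0]
predicted_rate = exp_smooth_forecast(observed_rates)
print(round(predicted_rate, 3))  # 12.004
```

In the modified DES loop, this forecast would replace the static arrival rate before each simulation epoch; any other predictor (e.g., a learned model) can be swapped in behind the same interface.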
• Interference aware scheduling with admission control
Interference-aware scheduling with admission control is a technique used in wireless networks to
manage and optimize the allocation of radio resources for communication between users. It
combines two essential concepts – interference awareness and admission control – to improve
the network's overall performance. Interference awareness refers to the ability of a scheduling
algorithm to detect and mitigate interference between different users. In a wireless network,
interference occurs when multiple users are trying to communicate simultaneously on the same
frequency or time slot. It leads to signal degradation and reduced network capacity. Interference-
aware scheduling considers users' spatial and temporal distribution to minimize interference.
Admission control, on the other hand, is a mechanism that determines whether a new user can be
admitted into the network or not. This decision is based on the available network resources and
the user's quality-of-service (QoS) requirements. By controlling the number of users
in the network, admission control prevents overloading and ensures that the network can meet
the QoS requirements of all users. The combined operation of interference-aware
scheduling with admission control involves several steps. First, the scheduling algorithm
analyses the network to identify the users causing interference to other users. It can be done by
measuring the received signal strength or using advanced signal processing techniques. Once
these users are identified, the scheduling algorithm decides on the appropriate resource allocation
strategy to minimize interference.
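The admission decision described above can be sketched as follows (the additive capacity model and demand values are simplifying assumptions, not the paper's exact criterion): a new user is admitted only if the resources left after serving the already-admitted users still cover its QoS demand.

```python
def admit(new_demand, admitted_demands, capacity):
    """Admit the new user only if total demand stays within capacity."""
    return sum(admitted_demands) + new_demand <= capacity

# Capacity of 100 resource units; two users already admitted (40 + 35).
current = [40.0, 35.0]
print(admit(20.0, current, 100.0))  # True:  95 <= 100
print(admit(30.0, current, 100.0))  # False: 105 > 100
```

A real scheduler would refine `capacity` per slot and per beam and fold interference margins into each demand, but the gatekeeping structure is the same.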
• Interference prediction module
Interference prediction is a crucial aspect of wireless communication systems, especially in
dense networks with various transmitting devices. It involves predicting the interference between
different wireless signals in a given environment and taking measures to mitigate its effects. The
interference prediction module is a critical component of wireless networks that enables this
process. The primary function of the interference prediction module is to analyse the propagation
of wireless signals in a given environment and identify potential sources of interference. The
construction diagram is shown in Fig. 1.
Fig 1: Construction diagram
It considers several factors, like the transmitting signals' frequency, power level, and direction.
The module uses this information to predict the likelihood of interference between different
wireless signals. One critical technique in interference prediction is the path loss model. This
model uses mathematical equations to estimate the decrease in signal strength as it travels
through space. The interference prediction module can accurately predict the path loss between
different transmitters by accounting for factors like distance, obstacles, and signal characteristics.
This information is then used to determine the probability of interference between them. Another
essential aspect of the interference prediction module is its ability to analyse the spatial
distribution of wireless devices. It is necessary in dense networks where multiple devices operate
close to each other. By understanding the location and orientation of these devices, the module
can identify potential areas of interference and provide recommendations for channel allocation
to minimize its effects.
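As a numerical illustration of why path loss dominates the prediction at THz frequencies (free-space model only; a real THz link adds molecular absorption on top of this), the Friis free-space path loss in dB is 20·log10(4πdf/c):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# The same 10 m link at 3 GHz versus 0.3 THz: moving up two decades in
# frequency costs exactly 40 dB before absorption is even counted.
print(round(fspl_db(10, 3e9), 1))    # about 62.0 dB
print(round(fspl_db(10, 300e9), 1))  # about 102.0 dB
```

The interference prediction module would evaluate such a model (with absorption and antenna-gain terms added) between every interferer-victim pair to rank likely interference sources.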
• VM
A virtual machine (VM) is a software program that emulates a physical computer system and
enables multiple operating systems (OS) to run on a single physical machine. It creates an
isolated environment within a host machine, where each VM can run its own OS, applications,
and processes independently. To achieve this, the VM software utilizes a hypervisor, a layer that
manages the host machine's resources and allocates them to the virtual machines. The hypervisor
creates a virtual representation of hardware components such as the CPU, memory, storage, and
network adapters for each VM, called virtual hardware. When a VM is created, the hypervisor
allocates specific physical resources from the host machine to the virtual hardware. It includes a
dedicated CPU core, memory, and storage space. Once this is done, the VM will boot up and run
the chosen OS. The VM's BIOS (Basic Input Output System) is executed during boot-up, just
as on a physical machine. This BIOS is responsible for initializing the virtual hardware
components and loading the OS. Once the OS is loaded, the VM runs independently on the host
machine. The VM can interact with the host machine and other VMs through the hypervisor.
B. Functional Working Model
• Global Memory
Global Memory is a region of memory accessible to all threads within a GPU (Graphics Processing
Unit). It is a type of memory found in modern GPU architectures, specifically in
CUDA (Compute Unified Device Architecture) and OpenCL (Open Computing Language). Global
Memory facilitates communication between threads and allows efficient data sharing within a
GPU. One of the critical features of Global Memory is its high bandwidth and low latency
access. It means that data can be accessed quickly and efficiently by multiple threads
simultaneously without any significant performance impact. It is achieved by physically placing
the Global Memory near the processing units, reducing the time to retrieve data. Additionally,
using memory banks allows for parallel access to different chunks of Memory, further improving
the overall data transfer rate. The operations of Global Memory can be broken down into three
main categories: allocation, access, and deallocation. First, memory is allocated by the GPU
during program execution. It occurs when the application running on the GPU requests a specific
amount of Memory. The allocated Memory is then divided into different memory banks, each
with its unique address. These memory banks are then further divided into smaller units, known
as memory segments, which can be accessed individually by different threads.
• Constant Memory
Constant memory is a specialized type of memory storage used in computer systems designed to
hold data that will not change during program execution. It is typically a small amount of read-
only storage located on a system's graphics processing unit (GPU). Constant memory is often
used to store data such as lookup tables, filter coefficients, or other constants used frequently by
a program. One of the main benefits of constant memory is its ability to be accessed quickly.
Since the data is stored in a read-only format, it does not need to be retrieved from the main
memory each time it is required. Instead, it is stored in a cache on the GPU, allowing for fast and
efficient access. This is particularly useful for programs that require frequent access to the same
data, such as graphics rendering applications. The operations of constant memory are closely tied
to the underlying architecture of the GPU. When a program is loaded onto the GPU, the data
stored in constant memory is also loaded. This data is then accessible to all kernels (GPU
program functions) running on the GPU. When a kernel needs to access the data in constant
memory, it uses a specialized instruction that reads the data directly from the cache. One
important aspect of constant memory is that it is limited in size.
• Shared Memory
Shared Memory is a form of interprocess communication that allows multiple processes to share
the same memory space. It means that multiple processes can access and modify the same
memory block, known as the shared memory region. Shared Memory is commonly used in
operating systems and programming languages to improve efficiency and facilitate
communication between processes. Utilizing shared Memory involves several vital operations,
including creating and attaching to a shared memory segment, reading and writing data to and
from the shared memory region, and detaching and destroying the shared memory segment. To
begin, a shared memory segment is created by one process and made available for other
processes to attach to it. The functional block diagram is shown in Fig. 2.
Fig 2: Functional block diagram
It is done through a system call in the operating system, which allocates a specific amount of
Memory from its memory resources. The process that creates the shared memory segment
becomes the owner of the segment and is responsible for managing it. Once the shared memory
segment is created, other processes can attach to it by calling the appropriate system function. It
allows the process to access the same memory space as the creator process and share data with it.
To access the shared memory region, a process must know the unique identifier, or key, of the
shared memory segment. Once attached, processes can read and write data to and from the
shared memory region.
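The create/attach/read-write/detach lifecycle described above can be demonstrated with Python's standard multiprocessing.shared_memory module (a user-space analogue of the OS mechanism; shown with two handles in one process for brevity):

```python
from multiprocessing import shared_memory

# The owner creates the segment; its auto-generated name acts as the
# key that other processes use to attach.
owner = shared_memory.SharedMemory(create=True, size=16)
owner.buf[:5] = b"hello"  # write into the shared region

# A second handle attaches by name and sees the same bytes, with no
# copy through a pipe or socket.
reader = shared_memory.SharedMemory(name=owner.name)
print(bytes(reader.buf[:5]))  # b'hello'

# Detach both handles, then the owner destroys (unlinks) the segment.
reader.close()
owner.close()
owner.unlink()
```

In a real multi-process setup the segment name would be passed to the second process, which attaches exactly as `reader` does here; only the owner should call `unlink()`.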
• DMA
Direct Memory Access (DMA) is a computer hardware feature that allows data transfer between
external devices and the computer's memory without involving the central processing unit
(CPU). It lets the external device, such as a network card or hard drive, directly access the
computer's memory and transfer data without the CPU intervening. The DMA process involves
three main components: the DMA controller, the memory, and the external device. The DMA
controller is an independent chip or part of the computer's motherboard that manages the DMA
operations. It is responsible for initiating and controlling the data transfers between the memory
and the external device. The first step in a DMA operation is for the CPU to inform the DMA
controller about the data transfer. The CPU will provide the DMA controller with the memory
address where the data needs to be stored or retrieved from and the size of the data to be
transferred. The DMA controller will then request access to the system bus from the CPU. Once
the DMA controller gains access to the system bus, it will send a read or write command to the
external device, instructing it to transfer data to or from the specified memory address. The DMA
controller also sets up a DMA channel, a dedicated path for data transfer between the external
device and the memory.
• Host Memory
Host Memory, also known as System Memory or Random Access Memory (RAM), is a type of
computer hardware that serves as the primary storage location for data and instructions required
by the computer's central processing unit (CPU). It is a volatile memory, which means that its
contents are lost when the power is turned off. The operation of Host Memory is crucial for the
overall performance of a computer system. When a program is executed, its data and instructions
are loaded into the Host Memory for the CPU to access and process. It allows for quick access to
data and instructions, as opposed to retrieving it from a slower secondary storage device such as
a hard drive or SSD. The operations of Host Memory can be divided into three main functions:
writing, reading, and refreshing. When data is written into Host Memory, it is stored in the form
of binary digits (bits) in individual cells that are arranged in a grid-like structure. These cells are
organized into larger units called bytes, which are then grouped to form larger units of storage,
such as kilobytes (KB), megabytes (MB), and so on. Reading from Host Memory is the process of
retrieving data and instructions from the memory cells. This is done by the CPU, which sends a
request to the memory controller to access a specific memory address.
• SM
SM (Social Media) is a platform that allows individuals or organizations to create and share
content, ideas, and information with many people online. It has become an essential part of our
daily lives, enabling us to connect with others, stay informed, and share our thoughts and
experiences. This paragraph will provide an in-depth technical explanation of how SM operates.
The first step in understanding SM is recognizing that it runs on the internet. The internet is a
network of interconnected devices that allows communication and data exchange between them.
Social media platforms like Facebook, Twitter, and Instagram are web-based applications, meaning they
operate through a web browser and require an internet connection. When a user logs into a social
media platform, the first thing that happens is the authentication process. This process verifies
the user's identity by requesting login credentials such as a username and password. Once
authenticated, the platform provides users with a personalized experience, showing them content
based on their interests, friends, and preferences. SM platforms use a database to store and
manage user-generated content, such as posts, photos, and videos. This database is hosted on a
server, which is a powerful computer that stores and manages large amounts of data. The server
also handles the communication between the social media platform and the user's device.
C. Operating principle
• Transmission of data
Transmission of data refers to the process of sending and receiving information over a
communication channel between two devices. It is a vital aspect of any data communication
system and is responsible for the successful transfer of data from one point to another. The
process of transmitting data can be broadly categorized into three stages – data encoding, data
transmission, and data decoding. These stages involve complex operations that ensure the
accurate and efficient transfer of data. The first stage, data encoding, consists of converting the
information to be transmitted into a digital signal. This is necessary because data is often
originally in analogue form and cannot be directly transmitted over a communication channel. The encoding
process involves breaking the data into smaller units known as packets. Each packet is assigned a
specific address (header) that contains information about the sender, receiver, and the sequence
of the packet. The next stage is data transmission, which is the physical transfer of data over the
communication channel. Various transmission media such as cables, fibre optic wires, or wireless
signals facilitate this process. The data is transmitted in the form of electrical or light signals that
travel through the medium and reach the receiving device. At the receiving end, the data is
decoded through the process of data decoding. Here, the packets are reassembled in the correct
order based on the information in the header, and the data is then converted back into its original form.
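The packetize/transmit/reassemble pipeline described above can be sketched as follows; the 3-byte header used here (a sequence number plus payload length) is a simplified stand-in for a real protocol header, not the format of any particular standard:

```python
import struct

def encode(message: bytes, payload_size: int = 4):
    # Split the message into packets; each header carries the sequence
    # number (2 bytes) and the payload length (1 byte).
    packets = []
    for seq, start in enumerate(range(0, len(message), payload_size)):
        payload = message[start:start + payload_size]
        packets.append(struct.pack("!HB", seq, len(payload)) + payload)
    return packets

def decode(packets):
    # Reassemble in sequence order using the header, then strip the headers.
    ordered = sorted(packets, key=lambda p: struct.unpack("!HB", p[:3])[0])
    return b"".join(p[3:] for p in ordered)

pkts = encode(b"hello world")
pkts.reverse()                      # simulate out-of-order arrival
result = decode(pkts)
```

Because each header records the sequence number, the receiver recovers the original message even when packets arrive out of order.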
• Priority classifier
A priority classifier is a machine learning algorithm that assigns priority levels to various tasks or
items based on predefined criteria. It operates by analysing data and deciding which items should
receive higher priority. The classifier uses a set of rules and algorithms to identify patterns and
relationships in the data and assign appropriate priorities. The first step in executing a priority
classifier is to gather and pre-process the data. It involves collecting data from various sources
and formatting it in a way the algorithm can use. This process may include cleaning the data to
remove any irrelevant or redundant information and standardizing the data to make it compatible
with the algorithm. Once the data is pre-processed, the priority classifier uses various techniques
to identify patterns and relationships. The operational flow diagram is shown in Fig. 3.
Fig 3: Operational flow diagram
It can include statistical methods such as regression analysis or decision trees and more complex
algorithms like deep learning or neural networks. These methods extract meaningful features
from the data and create a model that can accurately predict priorities. One of the critical aspects
of a priority classifier is using a predefined set of criteria or rules. These criteria are guidelines
for the algorithm to decide which items should receive higher priority. The classifier uses these
rules to compare the features of each item and assign a priority level accordingly.
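The paper describes the classifier's criteria abstractly. As a minimal rule-based stand-in for a trained model, the idea can be sketched as follows; the feature names (packet size, real-time flag, deadline) and thresholds are hypothetical, chosen only to illustrate how predefined criteria map items to priority levels:

```python
# Hypothetical feature vector: (size_bytes, is_realtime, deadline_ms)
def classify_priority(packet):
    size_bytes, is_realtime, deadline_ms = packet
    # Predefined criteria: real-time traffic or very tight deadlines rank highest.
    if is_realtime or deadline_ms < 10:
        return "high"
    if deadline_ms < 100:
        return "mid"
    return "low"

examples = [
    (200, True, 500),    # voice frame -> high
    (1500, False, 50),   # interactive transfer -> mid
    (1500, False, 5000), # bulk backup -> low
]
labels = [classify_priority(p) for p in examples]
```

In practice the thresholds would be learned from data (e.g., by a decision tree), but the mapping from features to a priority label has the same shape.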
• High, mid and low priority packets
High, mid and low priority packets refer to the different levels of urgency assigned to data packets in
a computer network. These priority levels determine the order in which packets are transmitted
and processed by network devices, such as routers and switches. This prioritization aims to
ensure that essential data is sent and received promptly. In contrast, less critical data can be
delayed without causing a significant impact. At the highest priority level, high-priority packets
are assigned to essential data, such as voice or video packets, that require real-time transmission.
These packets are marked with a higher priority value and are given preferential treatment by
network devices. It means that they are processed and transmitted before packets with lower
priority levels, minimizing the potential for delay or loss of data. Mid-priority packets are
assigned to essential data but are less time-sensitive than high-priority packets. These packets are
given a lower priority value than high-priority packets but still higher than low-priority packets.
Typically, mid-priority packets include data such as emails or file transfers, which can tolerate a
slight delay without affecting the network's overall performance. Low-priority packets are
assigned to data that is not time-sensitive and can be delayed without severe consequences.
These packets are marked with the lowest priority value and are transmitted after high and mid-
priority packets.
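The three-level service order described above can be illustrated with a priority queue in which high-priority packets always leave first; the level names and payloads are illustrative:

```python
import heapq
from itertools import count

PRIORITY = {"high": 0, "mid": 1, "low": 2}   # lower value = served first
_tick = count()                               # tie-breaker keeps FIFO within a level

queue = []

def enqueue(level, payload):
    heapq.heappush(queue, (PRIORITY[level], next(_tick), payload))

def dequeue():
    return heapq.heappop(queue)[2]

enqueue("low", "file-chunk")     # e.g., a background file transfer
enqueue("high", "voice-frame")   # real-time voice packet
enqueue("mid", "email")          # tolerates slight delay
order = [dequeue() for _ in range(3)]
```

Even though the low-priority packet arrived first, the voice frame is transmitted first, matching the preferential treatment described above.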
• Critically High priority packet
A Critically High-priority packet refers to a specific type of data packet given the highest priority
level in a network. It means that it is treated as the most important and needs to be processed and
delivered as quickly as possible. It is typically used for urgent or time-sensitive data such as
emergency services communications, financial transactions, or critical system updates. When a
Critically High-priority packet is transmitted, it is marked with a unique code or tag that
identifies it as such. It is essential because it allows network devices to identify and prioritize
these packets over others. This marking is usually done at the network layer of the OSI model,
where protocols such as IP (Internet Protocol) or MPLS (Multiprotocol Label Switching) can
assign the appropriate priority level. Once marked, network devices such as routers, switches,
and firewalls give the Critically High Priority packet special treatment. These devices have built-
in algorithms that determine the priority of packets and handle them accordingly. These devices
will ensure that a critical high-priority packet is processed and delivered with minimal delay.
One way this is achieved is through Quality of Service (QoS) mechanisms. QoS allows for the
prioritization and control of network traffic based on specific criteria,
such as priority levels.
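The network-layer marking described above can be sketched with a standard socket option; this example sets the DSCP "Expedited Forwarding" code point (46), which is conventionally used for critically delay-sensitive traffic, on a UDP socket. Whether routers along the path honour the marking depends on their QoS configuration:

```python
import socket

# DSCP EF (46) occupies the upper six bits of the former IPv4 TOS byte,
# hence the 2-bit left shift when writing the TOS field.
DSCP_EF = 46
tos = DSCP_EF << 2               # 184, carried in each outgoing IP header

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
readback = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

All datagrams sent on this socket now carry the EF marking, which QoS-aware devices use to place them in their highest-priority queue.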
• Weighted Class based Fair Queue
Weighted Class-based Fair Queue (WCFQ) is a queuing algorithm used in network routers to
prioritize and fairly distribute network traffic among different classes. This algorithm considers
multiple factors such as packet size, priority, and bandwidth to manage network traffic
efficiently. WCFQ is an improvement over the traditional round-robin or first-in-first-out
queuing methods, as it provides better Quality of Service (QoS) to different network
classes. The first step in the WCFQ algorithm is to classify incoming packets into various classes
based on criteria such as the source, destination, and type of traffic. This classification is done by
examining the packet header and assigning it to a specific class. Each class is then assigned a
weight, which determines the relative importance of the class in terms of bandwidth allocation.
Once the packets are classified and weighted, the WCFQ algorithm queues them. The packets are
stored in separate queues for each class, waiting to be transmitted onto the network. The weight
of each class comes into play at this stage, as it determines how much bandwidth will be
allocated to each queue. For example, a class with a weight of 2 will receive twice the amount of
bandwidth allocated to a class with a weight of 1.
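The weighted sharing described above can be sketched as a weighted round-robin service loop, which is one simple way to realize WCFQ-style bandwidth division; the class names, weights and packet labels are illustrative:

```python
from collections import deque

# Two traffic classes with weights 2:1; each class has its own queue.
queues = {"A": deque(f"A{i}" for i in range(6)),
          "B": deque(f"B{i}" for i in range(6))}
weights = {"A": 2, "B": 1}

def serve_round():
    # In one scheduling round, each class transmits up to `weight` packets,
    # so class A receives twice the bandwidth of class B over time.
    sent = []
    for cls, w in weights.items():
        for _ in range(w):
            if queues[cls]:
                sent.append(queues[cls].popleft())
    return sent

round1 = serve_round()
```

Per-round quotas proportional to the weights are a coarse approximation; production routers refine this with per-byte accounting (e.g., deficit counters) so that packet sizes do not skew the shares.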
• Scheduler and Egress Port
The Scheduler is an essential component in a computer system that manages and prioritizes the
tasks and processes running on a CPU. It acts as a traffic controller that schedules the execution
of tasks efficiently and effectively. The Scheduler also ensures that each process receives a fair
share of the CPU's processing time and that the system's overall performance is optimized. The
Scheduler performs its operations by maintaining a queue of waiting processes. When a process
is ready to run, it is added to the queue, and the Scheduler then assigns the processor to the first
process in the queue. The Scheduler continuously monitors each process's status and adjusts the
queue's order based on their priorities and the available system resources. One primary
responsibility of the Scheduler is to allocate resources such as memory and CPU time to each
process. It uses different scheduling algorithms to determine which process should run next and
for how long. Some standard scheduling algorithms include First-Come-First-Serve, Round-
Robin, and Priority-Based scheduling. Another essential operation of the Scheduler is pre-
emption, which is the ability to pause a process and allocate the CPU to a higher-priority process.
It ensures that critical processes are prioritized and completed promptly. Pre-emption is
especially crucial in real-time operating systems, where tasks have strict deadlines.
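The queue-based, pre-emptive behaviour described above can be sketched as follows; the process names, burst times and quantum are illustrative, and a lower priority number means higher priority:

```python
import heapq

def run(processes, quantum=1):
    # Each entry is (priority, name, remaining_time); the heap keeps the
    # highest-priority ready process at the front.
    ready = list(processes)
    heapq.heapify(ready)
    timeline = []
    while ready:
        prio, name, remaining = heapq.heappop(ready)
        timeline.append(name)                 # run for one quantum
        remaining -= quantum
        if remaining > 0:                     # pre-empt and re-queue
            heapq.heappush(ready, (prio, name, remaining))
    return timeline

# audio (priority 1) pre-empts everything; backup (priority 3) runs last.
trace = run([(2, "editor", 2), (1, "audio", 2), (3, "backup", 1)])
```

The real-time "audio" process monopolizes the CPU until it finishes, exactly the behaviour pre-emption is meant to guarantee for deadline-critical tasks.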
4. Result and Discussion
The performance of the proposed MAIA (Multi-Access Interference-Aware scheduling) method has
been compared with LETA (Lattice-based Evolutionary Tournament Algorithm), PCA (Proportional
Cycle Allocation) and ITIA (Iterative Terahertz Interference-Aware).
4.1 Signal-to-Interference-plus-Noise Ratio (SINR)
This measures the quality of the received signal by taking into account both the desired signal
and the noise and interference present in the network. The higher the SINR, the better the overall
signal quality and the more effective the filtering framework. Table 2 shows the comparison of
Signal-to-Interference-plus-Noise Ratio between the existing and proposed models.
Table 2: Comparison of Signal-to-Interference-plus-Noise Ratio (in %)
No. of Images LETA PCA ITIA MAIA
100
200
300
400
500
Fig.N: Comparison of Signal-to-Interference-plus-Noise Ratio
Fig. N shows the comparison of Signal-to-Interference-plus-Noise Ratio. In a computation
cycle, the existing LETA obtained 0000 %, PCA obtained 0000 %, and ITIA reached 0000 %
Signal-to-Interference-plus-Noise Ratio. The proposed MAIA obtained 00000 % Signal-to-
Interference-plus-Noise Ratio.
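SINR is conventionally defined as the desired signal power divided by the sum of the interference powers plus the noise power, usually reported in dB. A small helper makes the metric concrete; the power values in the example are illustrative, not measurements from the paper:

```python
import math

def sinr_db(signal_mw, interference_mw, noise_mw):
    # SINR = S / (sum(I) + N), converted to decibels.
    return 10 * math.log10(signal_mw / (sum(interference_mw) + noise_mw))

# Example: 1 mW desired signal, two interferers, a small noise floor.
value = sinr_db(1.0, [0.01, 0.02], 0.001)   # about 15.1 dB
```

A scheduler that suppresses either interferer raises this value directly, which is why SINR is the natural figure of merit for interference-aware scheduling.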
4.2 Filtering Accuracy:
This parameter measures the ability of the framework to accurately identify and remove
interference in the network without affecting the desired signals. A high filtering accuracy is
crucial for effectively reducing interference and improving network performance. Table 3 shows
the comparison of Filtering Accuracy between the existing and proposed models.
Table 3: Comparison of Filtering Accuracy (in %)
No. of Images LETA PCA ITIA MAIA
100
200
300
400
500
Fig.N: Comparison of Filtering Accuracy
Fig. N shows the comparison of Filtering Accuracy. In a computation cycle, the existing LETA
obtained 0000 %, PCA obtained 0000 %, and ITIA reached 0000 % Filtering Accuracy. The
proposed MAIA obtained 00000 % Filtering Accuracy.
4.3 Computational Complexity:
As 6G networks are expected to handle massive amounts of data and connected devices, the
filtering framework must have low computational complexity in order to process and filter out
interference efficiently in real time. Table 4 shows the comparison of Computational Complexity
between the existing and proposed models.
Table 4: Comparison of Computational Complexity (in %)
No. of Images LETA PCA ITIA MAIA
100
200
300
400
500
Fig.N: Comparison of Computational Complexity
Fig. N shows the comparison of Computational Complexity. In a computation cycle, the existing
LETA obtained 0000 %, PCA obtained 0000 %, and ITIA reached 0000 % Computational
Complexity. The proposed MAIA obtained 00000 % Computational Complexity.
4.4 Interference Suppression Efficiency:
This parameter measures the effectiveness of the framework in reducing interference levels in
the network. A high interference suppression efficiency means that the framework is successfully
removing a significant amount of interference, resulting in improved network performance and
user experience. Table 5 shows the comparison of Interference Suppression Efficiency between
the existing and proposed models.
Table 5: Comparison of Interference Suppression Efficiency (in %)
No. of Images LETA PCA ITIA MAIA
100
200
300
400
500
Fig.N: Comparison of Interference Suppression Efficiency
Fig. N shows the comparison of Interference Suppression Efficiency. In a computation cycle, the
existing LETA obtained 0000 %, PCA obtained 0000 %, and ITIA reached 0000 % Interference
Suppression Efficiency. The proposed MAIA obtained 00000 % Interference Suppression
Efficiency.
5. Conclusion
In conclusion, the interference-aware scheduling algorithms proposed for terahertz wireless
networks have shown promising results in increasing network capacity and improving spectral
efficiency. By taking into account the unique challenges of terahertz frequencies, these
algorithms effectively manage interference and optimize resource allocation. Further research
and development in this area can greatly contribute to the advancement of terahertz wireless
networks and bring us closer to realizing their potential for high-speed and high-density
communication applications.
References
• Mura, S., Linsalata, F., Mizmizi, M., Magarini, M., Khormuji, M. N., Wang, P., ... &
Spagnolini, U. (2021). Spatial-interference aware cooperative resource allocation for 5G
NR Sidelink communications. arXiv preprint arXiv:2111.07814.
• Zhang, Y., Zhang, H., & Long, K. (2021). Energy efficient resource allocation in cache
based terahertz vehicular networks: A mean-field game approach. IEEE Transactions on
Vehicular Technology, 70(6), 5275-5285.
• Yuksekkaya, B., Demir, U., & Bulu, G. (2024). Interference aware two-tier fair user-cell
association. AEU-International Journal of Electronics and Communications, 155194.
• Thomas, C. K., Chaccour, C., Saad, W., Debbah, M., & Hong, C. S. (2024). Causal
reasoning: Charting a revolutionary course for next-generation ai-native wireless
networks. IEEE Vehicular Technology Magazine.
• Cinalioğlu, S. Ü., & Girici, T. (2023, July). Low Complexity Scheduling and Phase Shift
Optimization in RIS-Aided mmWave Downlink Transmission. In 2023 International
Conference on Smart Applications, Communications and Networking (SmartNets) (pp. 1-
6). IEEE.
• Zhang, Y., & Alkhateeb, A. (2024). Decentralized Interference-Aware Codebook
Learning in Millimeter Wave MIMO Systems. arXiv preprint arXiv:2401.07479.
• Wang, X. (2021). Performance Analysis and Learning Algorithms in Advanced Wireless
Networks (Doctoral dissertation, Syracuse University).
• Bindle, A., Gulati, T., & Kumar, N. (2022). Exploring the alternatives to the conventional
interference mitigation schemes for 5G wireless cellular communication network.
International Journal of Communication Systems, 35(4), e5059.
• Zhang, Y., Osman, T., & Alkhateeb, A. (2023, May). A digital twin assisted framework
for interference nulling in millimeter wave MIMO systems. In 2023 IEEE International
Conference on Communications Workshops (ICC Workshops) (pp. 248-253). IEEE.
• Phung, C. V., Drummond, A., & Jukan, A. (2024). Maximizing Throughput with Routing
Interference Avoidance in RIS-Assisted Relay Mesh Networks. arXiv preprint
arXiv:2402.08825.
• Zhang, Y., Osman, T., & Alkhateeb, A. (2023). Online beam learning with interference
nulling for millimeter wave MIMO systems. IEEE Transactions on Wireless
Communications.
• Trabelsi, N., Fourati, L. C., & Chen, C. S. (2024). Interference Management in 5G and
Beyond Networks. arXiv preprint arXiv:2401.01608.
• Raykar, N., Khedkar, G., Kaur, M., & Viriyasitavat, W. (2023). A novel traffic load
balancing approach for scheduling of optical transparent antennas (OTAs) on mobile
terminals. Optical and Quantum Electronics, 55(11), 962.
• Zeng, M., Bedeer, E., Li, X., Pham, Q. V., Dobre, O. A., Fortier, P., & Rusch, L. A.
(2021). IRS-empowered wireless communications: State-of-the-art, key techniques, and
open issues. In 6G Wireless (pp. 15-38). CRC Press.
• Burhanuddin, L. A. B., Liu, X., Deng, Y., Elkashlan, M., & Nallanathan, A. (2023). Inter-
Cell Interference Mitigation for Cellular-Connected UAVs Using MOSDS-DQN. IEEE
Transactions on Cognitive Communications and Networking.
• Tezergil, B., & Onur, E. (2022). Wireless backhaul in 5G and beyond: Issues, challenges
and opportunities. IEEE Communications Surveys & Tutorials, 24(4), 2579-2632.
• Akbar, M. S., Hussain, Z., Sheng, Q. Z., & Mukhopadhyay, S. (2022). 6G survey on
challenges, requirements, applications, key enabling technologies, use cases, AI
integration issues and security aspects. arXiv preprint arXiv:2206.00868.
• Peng, Y., Zhou, Y., Liu, L., Li, J., Pan, Z., & Sun, G. (2021). Intelligent recommendation-
based user plane handover with enhanced TCP throughput in ultra-dense cellular
networks. IEEE Transactions on Vehicular Technology, 71(1), 595-610.
• Qadir, Z., Le, K. N., Saeed, N., & Munawar, H. S. (2023). Towards 6G Internet of
Things: Recent advances, use cases, and open challenges. ICT express, 9(3), 296-312.
• Zhan, C., Hu, H., Wang, J., Liu, Z., & Mao, S. (2023). Tradeoff between age of
information and operation time for uav sensing over multi-cell cellular networks. IEEE
Transactions on Mobile Computing.
• Bargavi, M., Sahoo, G. S., & Sharma, K. (2024, January). Optimizing Cross-Layer
Design for Multi-Hop Wireless Networks. In 2024 International Conference on
Optimization Computing and Wireless Communication (ICOCWC) (pp. 1-5). IEEE.
• Moorthy, S. K. (2023). Enhancing the Automation for Resource Orchestration in
Software-Defined Broadband Flying Networks (Doctoral dissertation, State University of
New York at Buffalo).
• Gera, B., Raghuvanshi, Y. S., Rawlley, O., Gupta, S., Dua, A., & Sharma, P. (2023).
Leveraging AI‐enabled 6G‐driven IoT for sustainable smart cities. International Journal
of Communication Systems, 36(16), e5588.
• Wu, K., Zhang, J. A., Huang, X., Guo, Y. J., & Hanzo, L. (2023). Simultaneous beam and
user selection for the beamspace mmWave/THz massive MIMO downlink. IEEE
Transactions on Communications, 71(3), 1785-1797.
• Sharif, S., Zeadally, S., & Ejaz, W. (2023). Space-aerial-ground-sea integrated networks:
Resource optimization and challenges in 6G. Journal of Network and Computer
Applications, 103647.
• Du, Q., Song, H., Tang, X., Zhao, Z., Xu, C., Xiao, Y., & Ma, Z. (2023). Principles of 6G
Wireless Networks. In The Road towards 6G: Opportunities, Challenges, and
Applications: A Comprehensive View of the Enabling Technologies (pp. 27-49). Cham:
Springer Nature Switzerland.
• Chen, W., Lin, X., Lee, J., Toskala, A., Sun, S., Chiasserini, C. F., & Liu, L. (2023). 5G-
advanced toward 6G: Past, present, and future. IEEE Journal on Selected Areas in
Communications, 41(6), 1592-1619.
• Hassan, S. S., Park, Y. M., Tun, Y. K., Saad, W., Han, Z., & Hong, C. S. (2024).
SpaceRIS: LEO Satellite Coverage Maximization in 6G Sub-THz Networks by MAPPO
DRL and Whale Optimization. IEEE Journal on Selected Areas in Communications.
• Safdar, G. A. (2023). UE admittance and contention free resource allocation for
interference mitigation in HetNet cognitive femtocells. e-Prime-Advances in Electrical
Engineering, Electronics and Energy, 6, 100329.
• Aboueleneen, N., Alwarafy, A., & Abdallah, M. (2023). Deep reinforcement learning for
internet of drones networks: issues and research directions. IEEE Open Journal of the
Communications Society.