
Optimization Algorithms for Efficient Workflow Scheduling in IaaS Cloud


2022 2nd International Conference on Emerging Frontiers in Electrical and Electronic Technologies (ICEFEET) | 978-1-6654-8875-4/22/$31.00 ©2022 IEEE | DOI: 10.1109/ICEFEET51821.2022.9848291

Md Khalid Jamal
Department of Computer Application
Integral University
Lucknow, India
[email protected]

Mohd Muqeem
Department of Computer Application
Integral University
Lucknow, India
[email protected]

Abstract—Cloud computing is a novel way of delivering resources over the Internet. Although it is a newer computing paradigm, it has many predecessors, and it has become popular enough that it hardly needs a taxonomy to define it. Our early study looked at the origins of cloud computing and their significance to cloud services. Many applications in research and industry have made workflow scheduling in cloud computing popular. Several articles have proposed heuristic methods that account for energy savings, cost, and makespan. However, most heuristic or hybrid methodologies do not provide near-optimal solutions. Therefore, we examine the limitations and potential advancements of cloud workflow scheduling technologies. The objective is to develop time- and cost-efficient heuristic and hybrid workflow scheduling methods. Finally, we discuss and propose strategies to overcome impediments to the growth, application, and development of cloud computing.

Index Terms—Optimization, Workflow, Scheduling, Heuristic, IaaS

I. INTRODUCTION

Cloud computing is a paradigm that makes use of the internet and centralised remote servers to deliver scalable services to its customers, and it has become more popular every day over the previous decade. It provides countless services to its users, each with a unique quality of service (QoS) requirement, and it does so by utilising an enormous amount of heterogeneous distributed resources. Cloud computing platforms such as Amazon EC2, GoGrid, Google App Engine, Microsoft Azure, and Aneka are just a few of the many options available [1]. A cloud can be divided into several categories, the most common of which are public clouds, private clouds, community clouds, hybrid clouds, and cloud federations. A public cloud can be accessed by anyone, whereas private clouds and their facilities are owned and accessed by specific companies or organisations. Community clouds are shared among several organisations and can be maintained by them or by third-party service providers. Hybrid clouds manage resources that are both public and private in nature. In addition, as a result of the limited capacity of single clouds, a trend toward multi-cloud computing has emerged that focuses on the coalition of various cloud resources. Cloud-based services can also be classified by delivery model: software as a service (SaaS), platform as a service (PaaS), or infrastructure as a service (IaaS). Customers can lease enterprise software as a service from SaaS providers, PaaS providers give access to the components needed to develop applications over the internet from anywhere in the world, and IaaS clouds provide infrastructure resources such as processing, storage, and networks, among other services. In cloud computing, virtualization is one of the most important enabling technologies, because it allows several virtual machines (VMs) to coexist on a single physical machine. A virtual machine is a software system that emulates a specific computer system and performs the tasks assigned to it by the user. Subscribers can run their implementations on resources with varying levels of performance and cost by instantiating VMs. The VMs on each physical machine or server are managed by a software layer known as a hypervisor, or VM monitor, which facilitates the creation of virtual machines and their isolated execution.

Workflows have a broad range of applications in business as well as in scientific fields such as astronomy, weather forecasting, medicine, and bioinformatics. In general, these workflows are massive in size because they consist of a large number of independent and/or dependent tasks, and as a result they need substantial amounts of computing, communication, and storage resources [3]. Cloud computing infrastructures enable workflows to be executed on virtualized resources that are provided on demand. The allocation of resources, as well as the sequence in which the activities of a particular workflow are executed, are nonetheless very important considerations; this is referred to as the workflow scheduling problem. Finding an optimal solution by brute force is computationally very expensive when the numbers of tasks and available resources are large. As a result, a meta-heuristic approach to this problem can be extremely effective. However, each meta-heuristic algorithm has its own set of advantages and disadvantages. Hybridization of such approaches has been shown to produce better results in the past, and this has become a recent trend in cloud computing research.
Fig. 1. Different types of VM placement methods based on the literature [2]: static, dynamic, hybrid, heuristics, and machine learning.

Workflow scheduling is one of the most challenging aspects of cloud computing, as it tries to map workflow tasks to virtual machines in accordance with different functional and non-functional requirements. A workflow is made up of a series of interdependent tasks that are bound together by data or functional dependencies, and it is important to take these dependencies into account when scheduling. Workflow scheduling in cloud computing is an NP-hard optimization problem, which makes it difficult to achieve an optimal schedule. Because there are numerous VMs in a cloud and numerous user tasks to be scheduled, a variety of scheduling objectives and factors must be considered. The common goal of workflow scheduling techniques is to reduce the time that tasks take to complete by allocating them to the appropriate virtual resources [4]. For example, a scheduling scheme may attempt to meet the SLAs that have been promised, as well as user-specified deadlines and cost constraints. As part of the scheduling decisions, scheduling solutions may also take into account resource utilisation and load balancing, as well as the availability of cloud resources and services.
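To make the mapping problem concrete, the sketch below (not taken from this paper; all task names, runtimes, and VM speeds are hypothetical) represents a small workflow as a DAG and assigns each task to the VM that gives it the earliest finish time, one simple heuristic baseline of the kind discussed above.

# Illustrative only: a toy workflow DAG scheduled onto heterogeneous VMs
# with a greedy earliest-finish-time rule. Names and numbers are made up.
from collections import defaultdict

# task -> (reference runtime in seconds, list of predecessor tasks)
workflow = {
    "t1": (10.0, []),
    "t2": (6.0, ["t1"]),
    "t3": (8.0, ["t1"]),
    "t4": (4.0, ["t2", "t3"]),
}
vm_speed = {"vm1": 1.0, "vm2": 2.0}   # relative VM speed factors (assumed)

def topological_order(dag):
    indeg = {t: len(preds) for t, (_, preds) in dag.items()}
    order, ready = [], [t for t, d in indeg.items() if d == 0]
    while ready:
        t = ready.pop()
        order.append(t)
        for u, (_, preds) in dag.items():
            if t in preds:
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    return order

def greedy_schedule(dag, speeds):
    vm_free = defaultdict(float)   # time at which each VM becomes idle
    finish = {}                    # task -> finish time
    placement = {}
    for t in topological_order(dag):
        runtime, preds = dag[t]
        ready_time = max((finish[p] for p in preds), default=0.0)
        # pick the VM that lets this task finish earliest
        best_vm = min(speeds, key=lambda v: max(vm_free[v], ready_time) + runtime / speeds[v])
        start = max(vm_free[best_vm], ready_time)
        finish[t] = start + runtime / speeds[best_vm]
        vm_free[best_vm] = finish[t]
        placement[t] = best_vm
    return placement, max(finish.values())   # task-to-VM mapping and makespan

placement, makespan = greedy_schedule(workflow, vm_speed)
print(placement, makespan)

A brute-force search over all task-to-VM assignments would have to examine an exponential number of possibilities, which is why the heuristic and meta-heuristic approaches surveyed next are used instead.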
II. RELATED WORK

Cloud technology is a rapidly developing field of advanced computing that allows for more effective use of computer resources, and it is expected to grow in importance in the future. In the case of infrastructure as a service (IaaS), it seeks to share a pool of virtual resources, such as virtual machines. Consumers pay only for what they use, and the service can offer high-performance computation and storage. Real-world applications commonly perform resource-intensive activities; examples include investment risk analysis, aviation simulation, and molecular docking. Customers that use the cloud may access these resources according to their needs without worrying about the physical location of the resources. However, while outsourcing their duties to specific cloud service providers, cloud customers may face limits depending on their budget and available service time, among other factors. In addition, the cloud customer may have specific requirements for the level of service quality and security that the cloud provider must maintain. These criteria create difficulties for researchers, who must try to meet as many of the demands of cloud applications as possible. The issues include determining how to assign resources to tasks, workflows of tasks, or even ensembles of workflows, while also determining the proper sequence in which the system must execute these activities. The existence of dependencies across jobs, as well as overheads such as VM provisioning, termination, or switching between tasks, increases the difficulty of the problem.

An optimization technique [5] for diverse IaaS federated multi-cloud systems has been developed and demonstrated. It is well suited for handling the autonomic feature inside clouds as well as the diversification feature among virtual machines. The authors suggested two online dynamic algorithms for resource allocation and job scheduling, both of which were implemented in practice, and they also took resource conflicts into account while planning the job schedule.

There have been several attempts to construct algorithms for creating such a schedule using particular heuristic and meta-heuristic approaches for a variety of computing settings. In [6], an architecture was proposed for mapping virtual machines to servers that uses an alternate version of the Best Fit Decreasing heuristic. The strategy is effective in situations of moderate complexity; heuristics, on the other hand, may produce answers that are far from optimal, and the underlying model seems to be lacking in detail.

The configuration of the Aneka Enterprise Cloud infrastructure at the Department of Computer Science and Software Engineering at the University of Melbourne [7] has been shown, in several publications, to be capable of presenting a unified resource to users or brokers by harnessing computing resources that are physically separated across different laboratories.

A resource allocation problem for an IaaS cloud has been solved using an integer programming approach [8]; the authors solve it periodically using a control loop. In particular, they concentrate on a heterogeneous cluster capable of supporting DVFS, and they suggest a set of constraints to reduce energy consumption while still allowing task migration.

A self-adaptive learning particle swarm optimization (PSO) based technique [9] has been developed to solve this issue, and it outperforms the classic PSO approach. The authors also claimed to offer a solution that ensures a high quality of service at the user level, as well as increased credibility and economic advantage for IaaS providers. In addition, that work has been expanded to include research into resource allocation and job scheduling for high-performance computing systems.
A DAG scheduling method [10] with a look-ahead variation of the HEFT algorithm has been devised and tested. The authors offered several methods for sorting the servers and virtual machines in preparation for the mapping phase, including multiple types of best-fit and first-fit algorithms as well as an ad hoc technique derived from vector packing. A number of issues plague the algorithm, however, including its excessive complexity and its reliance on a single, unrealistic user model. An energy-efficient scheduling system in the cloud with deadline constraints is offered in [11]. Its heuristics were initially created for bin-packing and produced solutions that were far from ideal, particularly in the face of heterogeneity; the approach also does not account for the variability of the offloading links. In [12], a resource optimization mechanism for heterogeneous IaaS federated multi-cloud systems is presented, which allows for preemptable task scheduling in these systems. This mechanism is appropriate for the autonomic feature of clouds as well as the diversity feature of virtual machines. The authors suggested two online dynamic algorithms for resource allocation and job scheduling, both of which were implemented in practice, and they also took resource conflicts into account while planning the job schedule. Semi-Elastic Cloud (SEC) [2] is a cloud-based execution model for high-performance clusters; within an organisation, it manages variable-size clusters that can be shared by a large number of users at the same time. By incorporating resource provisioning and management problems into parallel scheduling, this model expands the capabilities of existing batch job scheduling algorithms, although it does not deal with the situation where a request for compute resources exceeds the available capacity. A multi-objective approach to workflow scheduling [13] uses bio-inspired heuristics to find the most energy-efficient hosts and has a very high rate of convergence. In [14], the system model is presented in a fluid and effective way; however, if the rate of task arrivals increases, this can result in a significant increase in management overhead. Furthermore, the review presented there points toward recent advancements in infrastructure as a service, resource allocation, multi-agent systems, and service provisioning.

For workflow scheduling in cloud computing [15], a large number of heuristic and meta-heuristic based algorithms have been proposed, some of which are relevant to our proposed scheme. HEFT is a well-known heuristic that was originally developed for task scheduling in heterogeneous multiprocessor systems and is still in use today. HEFT is known to outperform many other task-scheduling heuristics in many situations; however, it only takes the minimization of makespan into account.
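HEFT's core idea is to rank tasks by "upward rank" (a task's average execution cost plus the longest downstream path of communication and execution costs to the exit task) and then schedule tasks in decreasing rank order onto the processor that minimizes each task's earliest finish time. The sketch below is a minimal illustration of the rank computation only; the DAG and cost tables are hypothetical and are not taken from this paper.

# Minimal sketch of HEFT's task-prioritization step (upward rank).
# The DAG, execution-cost, and communication-cost values are illustrative only.
from functools import lru_cache

succ = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"], "t4": []}
# average execution cost of each task across the heterogeneous VMs (assumed)
avg_cost = {"t1": 10.0, "t2": 6.0, "t3": 8.0, "t4": 4.0}
# average communication cost on each dependency edge (assumed)
avg_comm = {("t1", "t2"): 2.0, ("t1", "t3"): 3.0, ("t2", "t4"): 1.0, ("t3", "t4"): 1.0}

@lru_cache(maxsize=None)
def upward_rank(task):
    # rank_u(t) = avg_cost(t) + max over successors s of (comm(t, s) + rank_u(s))
    tail = max((avg_comm[(task, s)] + upward_rank(s) for s in succ[task]), default=0.0)
    return avg_cost[task] + tail

# Tasks are scheduled in decreasing order of upward rank.
priority = sorted(succ, key=upward_rank, reverse=True)
print(priority)   # ['t1', 't3', 't2', 't4'] for the values above

Because the rank reflects only execution and communication time, a schedule built from it optimizes makespan alone, which is exactly the limitation noted above.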
An energy-efficient scheduling method [16] with a deadline constraint in a heterogeneous cloud environment has also been proposed. In that work, a new virtual machine scheduler is developed, and it is demonstrated that it can reduce energy consumption when workflows are executed; the authors claimed up to a 20% decrease in energy consumption while simultaneously increasing processing capacity by 8 percent.

In [17], a strategy using auto-scaling resources is proposed for cloud workflow applications that allows for cost savings while still meeting application deadlines. To discover the optimum approach for turning nodes on and off, it relies on a Markov decision process, which enables it to identify the optimal trade-off between quality and power usage. However, dealing with real offloading rates remains highly difficult for this approach.

In [18], methods for both off-line and on-line planning have been documented; they are based on the uncertainties and latencies in task implementations, as well as cost and schedule limitations, among other factors. This kind of forecasting makes use of process structure information, such as critical paths and levels in a process, as well as estimates of task length, to make its predictions.

In [19], multi-objective list scheduling (MOLS) is an additional approach that provides a basic foundation for the multi-objective static scheduling problem using a list of goals. It assists in the attainment of four goals: on-time delivery, low cost, reliability, and energy conservation. On the basis of the objectives that have been provided, it develops an execution plan.

III. PROPOSED WORK

The development of efficient scheduling algorithms that use particular heuristic and meta-heuristic techniques for a range of computing settings has been the focus of various researchers in recent years. It has been observed in our research that a standard assumption is applied to a very broad variety of scenarios: the tasks these works consider are homogeneous, which minimises the importance of reliability and cost while raising the importance of speed and efficiency. The cloud, however, is a complex system with various shared resources that are sensitive to unforeseen demands and are impacted by external events beyond its control, as is the case with any large-scale system. As a result of the intricate rules and judgements involved, cloud resource management is required in order to accomplish multi-objective optimization. Cloud resource management is very challenging due to the complexity of the system, which makes it impossible to obtain exact global status information, as well as due to unforeseen interactions with the surrounding environment, among other issues [20].
In this algorithm, a task is represented as a five-tuple, task_x = (L_x, d_x, M_x, I_x, B_x), where L_x is the length of the x-th task, expressed in millions of instructions (MI); d_x is the deadline of the x-th task, expressed in seconds; M_x is the main-memory requirement of the x-th task, expressed in megabytes (MB); I_x captures the x-th task's input/output needs; and B_x is the bandwidth required by the x-th task, also expressed in megabytes.
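As a sketch of how this task model might be encoded (the field names, example values, and runtime relation below are ours, not the paper's), the five-tuple maps naturally onto a small record type:

# Illustrative encoding of the five-tuple task model described above.
from dataclasses import dataclass

@dataclass
class Task:
    length_mi: float      # L_x: task length in millions of instructions (MI)
    deadline_s: float     # d_x: deadline in seconds
    memory_mb: float      # M_x: main-memory requirement in MB
    io_demand: float      # I_x: input/output requirement (units assumed)
    bandwidth_mb: float   # B_x: bandwidth requirement in MB

# Hypothetical example task
t1 = Task(length_mi=12_000, deadline_s=30.0, memory_mb=512, io_demand=2.5, bandwidth_mb=64)

def estimated_runtime(task: Task, vm_mips: float) -> float:
    # Assumed relation: runtime in seconds = task length (MI) / VM rating (MIPS).
    return task.length_mi / vm_mips

print(estimated_runtime(t1, vm_mips=4_000))   # 3.0 seconds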
The resource management tactics connected with the different cloud delivery models vary from one another in this regard. In all of these scenarios, cloud resources are confronted with enormous, variable loads, which calls the notion of cloud elasticity into question. It is possible to provision resources in advance if a spike can be foreseen, for example when Web services are prone to seasonal spikes; however, this is not always the case, and the issue is significantly more difficult for an unexpected surge. Auto-scaling may be used to handle unexpected spikes in traffic, provided that the following conditions are met (a minimal sketch of such a control loop is given after the list):

• There is a pool of resources that can be released or allocated on demand.
• There is a monitoring system that allows a control loop to decide in real time to reallocate resources.
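The following sketch shows one possible threshold-based control loop; the pool size, utilisation metric, thresholds, and polling interval are all hypothetical choices and are not taken from the paper.

# Illustrative auto-scaling control loop: allocate or release VMs from a
# fixed pool based on an observed utilisation metric. All numbers are assumed.
import random
import time

POOL_CAPACITY = 20           # maximum VMs that can ever be allocated
SCALE_UP_THRESHOLD = 0.80    # average utilisation above which a VM is added
SCALE_DOWN_THRESHOLD = 0.30  # average utilisation below which a VM is released

def observed_utilisation(active_vms: int) -> float:
    # Stand-in for a real monitoring system (e.g. average CPU load over the VMs);
    # here it simply returns a random value and ignores active_vms.
    return random.uniform(0.0, 1.0)

def control_loop(iterations: int = 10, interval_s: float = 0.5) -> None:
    active_vms = 1
    for _ in range(iterations):
        load = observed_utilisation(active_vms)
        if load > SCALE_UP_THRESHOLD and active_vms < POOL_CAPACITY:
            active_vms += 1          # allocate one more VM from the pool
        elif load < SCALE_DOWN_THRESHOLD and active_vms > 1:
            active_vms -= 1          # release an idle VM back to the pool
        print(f"load={load:.2f} active_vms={active_vms}")
        time.sleep(interval_s)

if __name__ == "__main__":
    control_loop()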
It has been contended for some time that centralized control is unlikely to give uninterrupted quality and service assurances in the cloud, since changes are rapid and unexpected. Indeed, centralised control is incapable of providing sufficient answers to the plethora of cloud management policies that must be applied in the cloud environment. Because of the scale, the enormous number of support requests, the large number of users, and the unpredictable nature of the load, autonomous policies are of considerable relevance. The difference between the mean and peak resource requirements can be significant. Therefore, the solution should have a deterministic time frame for VM assignment or scheduling so that it can serve an extensive load. The load of request handling should be compatible with, and based on, the requirements of the users and the service provider. Still, the latency of the assignment directly affects energy, reliability, and cost, for which standard practices are often lacking.

We need to develop hybrid optimization algorithms for the efficient scheduling of workflow(s) in an IaaS cloud while ensuring the optimality of the objectives, which include several QoS parameters, in the given scenario. The procedure is presented in Algorithm 1.

Algorithm 1: Energy-efficient virtual machine placement algorithm
Data: tasks as input
Result: schedule of virtual machines
begin
    // a and c are constants, a ≠ c
    x1, x2, ..., xn as tasks
    y1, y2, ..., yn as hosts
    Initialize the requests and hosts
    // Deployment of virtual machines
    for i = 1 to n do
        X[i] = (a * X[i-1] + c) mod m
        Y[i] = (a * Y[i-1] + a) mod m
    // Build the placement graph
    for i = 1 to n do
        G.add_node(Point(X[i], Y[i]))
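Read literally, the deployment step of Algorithm 1 generates placement coordinates with a linear-congruential-style recurrence and inserts the resulting points as graph nodes. The Python transcription below is our own reading of the pseudocode; the seeds X[0] and Y[0], the modulus m, and the constants a and c are not specified in the paper and are chosen arbitrarily here.

# Sketch of Algorithm 1 as written: LCG-style generation of placement points,
# followed by insertion of the points into a placement graph.
# The values of a, c, m and the seeds are assumed; the paper does not fix them.

def vm_placement_points(n, a=5, c=3, m=16, x0=1, y0=2):
    assert a != c, "Algorithm 1 requires the constants a and c to differ"
    X, Y = [x0], [y0]
    for i in range(1, n + 1):
        X.append((a * X[i - 1] + c) % m)   # X[i] = (a*X[i-1] + c) mod m
        Y.append((a * Y[i - 1] + a) % m)   # Y[i] = (a*Y[i-1] + a) mod m
    return list(zip(X[1:], Y[1:]))         # one (X[i], Y[i]) point per request

def build_placement_graph(points):
    # Minimal stand-in for G.add_node(Point(X[i], Y[i])): a set of point nodes.
    graph_nodes = set()
    for p in points:
        graph_nodes.add(p)
    return graph_nodes

points = vm_placement_points(n=8)
G = build_placement_graph(points)
print(points)
print(G)

How the resulting graph is then matched against hosts is not spelled out in the pseudocode, so the sketch stops at node creation.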
IV. RESULTS AND DISCUSSION

In terms of energy cost, the following sections compare and contrast the performance of our proposed algorithm with that of the existing algorithms. The MCR, the SLR, and the normalised fitness are the performance measures used for this comparison study. Reduced SLR and MCR values are preferred, since they represent a shorter makespan and reduced expense, leading to a more favourable financial scenario. In [21], [22], the classification suggested that WSNs depend mainly on three critical parameters: energy efficiency, delay, and QoS. The MAC protocol plays a vital role in energy consumption because it controls the duty cycle of the radio; therefore, we also compare our result with these results. A greater standardised fitness score, on the other hand, is preferable in this situation. This section presents the conclusions obtained using the same machine configuration, constraints, and collection of workflow applications (of various sizes and types) as in the previous experiment. The bar charts in Figs. 3–4 indicate the mean difference between two groups (MCR), the standard deviation between two groups (SLR), and the normalised fitness value (NFV), which are used to compare the existing methods with one another.
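The paper does not give closed-form definitions for these metrics, so the sketch below only illustrates one common way such quantities are computed in the scheduling literature: SLR as the makespan normalised by a critical-path lower bound, and a normalised fitness as a weighted combination of normalised makespan and cost. The formulas, weights, and numbers are assumptions, not the paper's definitions.

# Illustrative metric computations (assumed definitions, not from the paper).

def schedule_length_ratio(makespan: float, critical_path_lower_bound: float) -> float:
    # SLR is often defined as makespan / (sum of minimum task costs on the critical path).
    return makespan / critical_path_lower_bound

def normalised_fitness(makespan: float, cost: float,
                       best_makespan: float, best_cost: float,
                       w_time: float = 0.5, w_cost: float = 0.5) -> float:
    # Higher is better here: each term divides the best observed value by the achieved one.
    return w_time * (best_makespan / makespan) + w_cost * (best_cost / cost)

# Hypothetical numbers for one scheduler on one workflow
print(schedule_length_ratio(makespan=120.0, critical_path_lower_bound=80.0))   # 1.5
print(normalised_fitness(makespan=120.0, cost=40.0, best_makespan=100.0, best_cost=35.0))
# 0.5*(100/120) + 0.5*(35/40) = 0.854...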

Fig. 2. Proposed method overview along with the participating entities in the scheduling: the task queue and task manager, the proposed mechanism, the VMs (VM-1 ... VM-n), and the hosts (Host-1 ... Host-m) under the host manager.

Fig. 3 reveals that the MCR of the recommended HGSA is better than that of the existing methods for all workflow categories, including small, medium, and large, demonstrating that the suggested HGSA is superior to the others. When comparing the proposed HGSA to the alternatives, it is obvious that it exceeds the competition in terms of MCR for all of the workflow categories considered. It is also worth noting that the SLR generated by the recommended approach is much higher than the SLR produced by either the GSA or the HGA; however, it is a significant reduction from the value produced by round robin. A contributing factor to this outcome is that it is a scheduling approach that only considers makespan. Fig. 2 shows the two input parameters that were utilised in the calculation of the normalised fitness value, and the normalised fitness value reflects the overall quality of the result in accordance with the needs of the consumer. According to Fig. 1, the suggested algorithm beats the existing algorithms in terms of performance. When the SLR is worse than that of HEFT, we still acquire better results by applying our algorithm, since the difference in cost is adequate to compensate for the difference in makespan between the two methods of calculation.

Fig. 3. Energy consumption comparison.

We next compare and contrast our proposed algorithm's performance with the other algorithms' performance in terms of makespan. The MCR, SLR, and normalised fitness are the performance metrics designed for this comparison. Reduced SLR and MCR values are desirable, reflecting a shorter makespan and lower expenditure and resulting in a better financial situation, while a higher standardised fitness score is preferred. The same machine setup, limitations, and workflow applications (of various sizes and types) are used as in the prior experiment. In Figs. 3–4, the bar charts show the mean difference between two groups (MCR), the standard deviation between groups (SLR), and the normalised fitness value (NFV). Fig. 3 shows that the proposed HGSA has a better MCR than the existing methods for all workflow categories, small, medium, and big, proving that it is superior. When compared to the alternatives, the suggested HGSA clearly outperforms them in terms of MCR.

Fig. 4. Makespan comparison.

Notably, the suggested approach's SLR is substantially greater than the GSA's or HGA's, but it is a big cut from round robin's. The fact that it is a scheduling method that only considers makespan has contributed to this conclusion. The normalised fitness value reflects the result's overall quality in relation to the consumer's demands. As seen in Fig. 2, our technique outperforms the current algorithms. When the SLR is inferior to HEFT's, our approach still produces superior results, since the cost difference is sufficient to compensate for the difference in computation time.
Alternatively, in this situation, a higher score on the standardised fitness test is preferable to a lower score. As a follow-up to the previous experiment, these results are based on the same machine configuration, constraint settings, and collection of workflow applications (of various sizes and types) as before. In the bar charts of Figs. 4–6, the mean difference between four groups (MCR), the standard deviation between four groups (SLR), and the normalised fitness value (NFV) are depicted, all of which are used to compare the existing methods with one another. Fig. 4 shows that the recommended HGSA's MCR outperforms the existing methods in all workflow categories, including small, medium, and large, which demonstrates that the proposed HGSA is preferable to the alternatives in this respect. According to the results of the preceding review, it is clear that the proposed HGSA outperforms its competitors in terms of MCR for all of the alternatives listed therein. The SLR produced by the proposed technique is also significantly greater than that produced by the GSA or the HGA, which is an important point to consider when developing the technique. However, compared to the value produced by round robin, it constitutes a significant reduction. In this case, one of the contributing factors was found to be that it is a scheduling approach that solely takes makespan into account. Fig. 4 also depicts the two input parameters that were taken into account in the derivation of the normalised fitness value. Moreover, the normalised fitness value reflects how well the complete result complies with the requirements of the client. The suggested approach beats the already known techniques in terms of overall performance, as seen in Fig. 4. Even in cases where the SLR is inadequate in contrast to HEFT, we may gain better results by using our algorithm, since the cost difference between the two techniques is sufficient to compensate for the time difference between the two methods of calculation.

V. CONCLUSION

In this paper, we proposed heuristic time-driven workflow scheduling for cloud computing. The virtual machine placement has a task queue to handle the user requests, and the task manager is responsible for handling the resources. Our machine learning algorithm then performs the classification of the type of request from the user, and the requests are subsequently sorted in such a way that the host machine is able to provide service in less time than the existing methods. Our simulation results show that our proposal is able to outperform the existing proposals on several parameters.

ACKNOWLEDGMENT

This work is acknowledged under Integral University Manuscript Number: MCN NO: IU/R&D/2022 - MCN 0001546.

REFERENCES

[1] M. Döhler, S. Kunis, and D. Potts, "Nonequispaced hyperbolic cross fast work scheduling," SIAM Journal on Numerical Analysis, vol. 47, no. 6, pp. 4415–4428, 2010.
[2] D. K. Sah and T. Amgoth, "Parametric survey on cross-layer designs for wireless sensor networks," Computer Science Review, vol. 27, pp. 112–134, 2018.
[3] S. Bubeck et al., "Convex optimization: Algorithms and complexity," Foundations and Trends in Machine Learning, vol. 8, no. 3-4, pp. 231–357, 2015.
[4] H. L. Van Trees and K. L. Bell, "Bayesian bounds for parameter estimation and nonlinear filtering/tracking," AMC, vol. 10, no. 12, pp. 10–1109, 2007.
[5] M. K. Gupta, S. Shrivastava, A. Raghuvanshi, and S. Tiwari, "Channel estimation for wavelet based OFDM system," in 2011 International Conference on Devices and Communications (ICDeCom). IEEE, 2011, pp. 1–4.
[6] R. K. Lenka, A. K. Rath, Z. Tan, S. Sharma, D. Puthal, N. Simha, M. Prasad, R. Raja, and S. S. Tripathi, "Building scalable cyber-physical-social networking infrastructure using cloud computing," IEEE Access, vol. 6, pp. 30162–30173, 2018.
[7] A. Vosoughi and A. Scaglione, "On the effect of receiver estimation error upon channel mutual information," IEEE Transactions on Signal Processing, vol. 54, no. 2, pp. 459–472, 2006.
[8] J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. J. Goldsmith, "Power scheduling of universal decentralized estimation in workflow scheduling in cloud computing," IEEE Transactions on Signal Processing, vol. 54, no. 2, pp. 413–422, 2006.
[9] A. Dutt and V. Rokhlin, "Fast Fourier transforms for workflow scheduling in cloud computing," SIAM Journal on Scientific Computing, vol. 14, no. 6, pp. 1368–1393, 1993.
[10] F. S. Hillier, "Linear and nonlinear programming," 2008.
[11] S. Boyd, S. P. Boyd, and L. Vandenberghe, Convex Optimization for Workflow Scheduling in Cloud Computing. Cambridge University Press, 2004.
[12] N. O'Donoughue and J. M. Moura, "On the product of independent complex Gaussians," IEEE Transactions on Signal Processing, vol. 60, no. 3, pp. 1050–1063, 2011.
[13] S. Kunis, Nonequispaced FFT: Generalisation and Inversion. Shaker, 2007.
[14] P. Joshi and A. S. Raghuvanshi, "A dual synchronization prediction-based data aggregation model IaaS workflow scheduling in cloud computing," Journal of Intelligent & Fuzzy Systems, no. Preprint, pp. 1–20, 2022.
[15] M. Shirazi and A. Vosoughi, "On distributed estimation in hierarchical power constrained workflow scheduling in cloud computing," IEEE Transactions on Signal and Information Processing over Networks, vol. 6, pp. 442–459, 2020.
[16] D. Sah, C. Shivalingagowda, and D. P. Kumar, "Optimization problems in wireless sensors networks," in Soft Computing in Wireless Sensor Networks. Chapman and Hall/CRC, 2018, pp. 29–50.
[17] T. Canli, A. Gupta, and A. Khokhar, "Power efficient algorithms for computing fast Fourier transform over workflow scheduling in cloud computing," in IEEE International Conference on Computer Systems and Applications, 2006. IEEE Computer Society, 2006, pp. 549–556.
[18] D. K. Sah, T. N. Nguyen, K. Cengiz, B. Dumba, and V. Kumar, "Load-balance scheduling for intelligent sensors deployment in industrial internet of things," Cluster Computing, pp. 1–13, 2021.
[19] D. K. Sah, D. P. Kumar, C. Shivalingagowda, and P. Jayasree, "5G applications and architectures," in 5G Enabled Secure Wireless Networks. Springer, 2019, pp. 45–68.
[20] C. Shivalingagowda, P. Jayasree, and D. K. Sah, "Efficient energy and position aware routing protocol for wireless sensor networks," KSII Transactions on Internet and Information Systems (TIIS), vol. 14, no. 5, pp. 1929–1950, 2020.
[21] D. K. Sah, K. Cengiz, P. K. Donta, V. N. Inukollu, and T. Amgoth, "EDGF: Empirical dataset generation framework for wireless sensor networks," Computer Communications, 2021.
[22] D. K. Sah, T. N. Nguyen, M. Kandulna, K. Cengiz, and T. Amgoth, "3D localization and error minimization in underwater sensor networks," ACM Transactions on Sensor Networks (TOSN), 2022.
