
Renewable Energy Management in Smart Home Environment via Forecast Embedded Scheduling based on Recurrent Trend Predictive Neural Network

Mert Nakıp∗,a, Onur Çopurb, Emrah Biyikc, Cüneyt Güzelişd

a Institute of Theoretical and Applied Informatics, Polish Academy of Sciences (PAN), 44–100 Gliwice, Poland
b Prime Vision, 2600 JA, Delft, Netherlands
c Department of Energy Systems Engineering, Yaşar University, 35100, Izmir, Turkey
d Department of Electrical and Electronics Engineering, Yaşar University, 35100, Izmir, Turkey

arXiv:2307.01622v2 [cs.LG] 6 Jul 2023

Abstract
Smart home energy management systems help the distribution grid operate more efficiently and reliably,
and enable effective penetration of distributed renewable energy sources. These systems rely on robust
forecasting, optimization, and control/scheduling algorithms that can handle the uncertain nature of demand
and renewable generation. This paper proposes an advanced ML algorithm, called Recurrent Trend Predictive
Neural Network based Forecast Embedded Scheduling (rTPNN-FES), to provide efficient residential demand
control. rTPNN-FES is a novel neural network architecture that simultaneously forecasts renewable energy
generation and schedules household appliances. By its embedded structure, rTPNN-FES eliminates the
utilization of separate algorithms for forecasting and scheduling and generates a schedule that is robust against
forecasting errors. This paper also evaluates the performance of the proposed algorithm for an IoT-enabled
smart home. The evaluation results reveal that rTPNN-FES provides near-optimal scheduling 37.5 times
faster than the optimization while outperforming state-of-the-art forecasting techniques.
Keywords: energy management, forecasting, scheduling, neural networks, recurrent trend predictive neural
network

1. Introduction

Residential loads account for a significant portion of the demand on the power system. Therefore, intelligent control and scheduling of these loads enable a more flexible, robust, and economical power system operation. Moreover, the distributed nature of the local residential load controllers increases system scalability. On the distribution level, the smart grid benefits from the increased adoption of residential demand and generation control systems, because they improve system flexibility, help to achieve a better demand-supply balance, and enable increased penetration of renewable energy sources. Increasing flexibility of the building energy demand depends on multiple developments, including accurate forecasting and effective scheduling of the loads, incorporation of renewable energy sources such as solar and wind power, and integration of suitable energy storage technologies (e.g. batteries and/or electric vehicle charging) into the building energy management system. Advanced control, optimization and forecasting approaches are necessary to operate these complex systems seamlessly.

∗Corresponding author. Email addresses: [email protected] (Mert Nakıp), [email protected] (Onur Çopur), [email protected] (Emrah Biyik), [email protected] (Cüneyt Güzeliş)
The final version of this preprint is published at Applied Energy: https://doi.org/10.1016/j.apenergy.2023.121014
Preprint submitted to Applied Energy, July 7, 2023
In this paper, in order to address this problem, we propose a novel embedded neural network architecture, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES), which simultaneously forecasts the renewable energy generation and schedules the household appliances (loads). rTPNN-FES is a unique neural network architecture that enables both accurate forecasting and heuristic scheduling in a single neural network. This architecture is comprised of two main layers: 1) the Forecasting Layer, which consists of replicated Recurrent Trend Predictive Neural Networks (rTPNN) with weight-sharing properties, and 2) the Scheduling Layer, which contains parallel softmax layers with customized inputs, each of which is assigned to a single load. In this paper, we also develop a 2-Stage Training algorithm that trains rTPNN-FES to learn the optimal scheduling along with the forecasting. However, the proposed rTPNN-FES architecture does not depend on the particular training algorithm, and the main contributions and advantages are provided by the architectural design. Note that the rTPNN model was originally proposed by Nakıp et al. [1] for multivariate time series prediction, and its superior performance compared to other ML models was demonstrated when making predictions based on multiple time series features in the case of multi-sensor fire detection. On the other hand, rTPNN has not yet been used in an energy management system or for forecasting renewable energy generation.

Furthermore, the advantages of using rTPNN-FES instead of a separate forecaster and scheduler are three-fold:

1. rTPNN-FES learns how to construct a schedule adapted to forecast energy generation by emulating (mimicking) optimal scheduling. Thus, the scheduling via rTPNN-FES is highly robust against forecasting errors.

2. The requirements of rTPNN-FES for memory space and computation time are significantly lower compared to the combination of a forecaster and an optimal scheduler.

3. rTPNN-FES offers considerably high scalability for systems in which the set of loads varies over time, e.g. when new devices are added to a smart home Internet of Things (IoT) network.

We numerically evaluate the performance of the proposed rTPNN-FES architecture against 7 different well-known ML algorithms combined with optimal scheduling. To this end, publicly available datasets [2, 3] are utilized for a smart home environment with 12 distinct appliances. Our results reveal that the proposed rTPNN-FES architecture achieves significantly high forecasting accuracy while generating a close-to-optimal schedule over a period of one year. It also outperforms existing techniques in both forecasting and scheduling tasks.

The remainder of this paper is organized as follows: Section 2 reviews the differences between this paper and the state of the art. Section 3 presents the system setup and formulates the optimization problem. Section 4 presents the rTPNN-FES architecture and the 2-Stage Training algorithm, which is used to learn and emulate the optimal scheduling. Section 5 presents the performance evaluation and comparison. Finally, Section 6 summarizes the main contributions of this paper.

2. Related Works

In this section, we compare this paper with the state-of-the-art works in three categories: 1) works in the first category develop an optimization-based energy management system without interacting with ML; 2) works in the second category focus on forecasting renewable energy generation using either statistical or deep learning techniques; 3) works in the last category develop energy management systems using ML algorithms.

2.1. Optimization-based Energy Management Systems

We first review the recent works which developed optimization-based energy management
systems. In [4], Shareef et al. gave a comprehensive summary of heuristic optimization techniques used for home energy management systems. In [5], Nezhad et al. presented a model predictive controller for a home energy management system with loads, photovoltaic (PV) and battery electric storage. They formulated the MPC as a mixed-integer programming problem and evaluated its economic performance under different energy pricing schemes. In [6], Albogamy et al. utilized Lyapunov-based optimization to regulate HVAC loads in a home with battery energy storage and renewable generation. In [7], S. Ali et al. considered heuristic optimization techniques to develop a demand response scheduler for smart homes with renewable energy sources, energy storage, and electric and thermal loads. In [8], G. Belli et al. resorted to mixed integer linear programming for optimal scheduling of thermal and electrical appliances in homes within a demand response framework. They utilized a cloud service provider to compute and share aggregate data in a distributed fashion. In [9], variants of several heuristic optimization methods (optimal stopping rule, particle swarm optimization, and grey wolf optimization) were applied to the scheduling of home appliances under a virtual power plant framework for the distribution grid. Then, their performance was compared for three types of homes with different demand levels and profiles.

There is a wealth of research on optimization and model predictive controller-based scheduling of residential loads. In this literature, prediction of the load demand and generation (if available) is usually pursued independently from the scheduling algorithm and merely used as a constraint parameter in the optimization problem. The discrepancy between predicted and observed demand and generation may lead to poor performance and robustness issues. The proposed rTPNN-FES in this paper handles forecasting and scheduling in a unified way and, therefore, provides robustness in the presence of forecasting errors.

2.2. Forecasting of Renewable Energy Generation

We now briefly review the related works on forecasting renewable energy generation, which have also been reviewed in more detail in the literature, i.e. [10, 11].

The earlier research in this category forecasts energy generation using statistical methods. For example, in [12], Kushwaha et al. use the well-known seasonal autoregressive integrated moving average technique to forecast the PV generation in 20-minute intervals. In [13], Rogier et al. evaluated the performance of a nonlinear autoregressive neural network on forecasting the PV generation data collected through a LoRa-based IoT network. In [14], Fentis et al. used a Feed Forward Neural Network and Least Square Support Vector Regression with exogenous inputs to perform short-term forecasting of PV generation. In [15], the authors analyzed the performances of Autoregressive Integrated Moving Average (ARIMA) and Artificial Neural Network (ANN) models for forecasting the PV energy generation. In [16], Atique et al. used ARIMA with parameter selection based on the Akaike information criterion and the sum of the squared estimate to forecast PV generation. In [17], Erdem and Shi analyzed the performance of autoregressive moving averages to forecast wind speed and direction in four different approaches, such as decomposing the lateral and longitudinal components of the speed. In [18], Cadenas et al. performed a comparative study between ARIMA and a nonlinear autoregressive exogenous artificial neural network on forecasting wind speed.

The recent trend of research focuses on the development of ML and (neural network-based) deep learning techniques. In [19], Pawar et al. combined ANN and a Support Vector Regressor (SVR) to predict renewable energy generated via PV. In [20], Corizzo et al. forecast renewable energy using a regression tree with an adopted Tucker tensor decomposition. In [21], the authors forecast the PV generation based on the historical data of features such as irradiance, temperature and relative humidity. In [22], Shi et al. proposed a pooling-based deep recurrent neural network technique to prevent overfitting for household load forecasting. In [23], Zheng et al. developed an adaptive neuro-fuzzy system that forecasts the generation of wind turbines in conjunction with the forecast of weather features such as wind speed. In [24], Vandeventer et al. used
a genetic algorithm to select the parameters of an SVM to forecast residential PV generation. In [25], van der Meer et al. performed probabilistic forecasting of solar power using quantile regression and a dynamic Gaussian process. In [26], He and Li combined quantile regression with kernel density estimation to predict wind power density. In [27], Alessandrini et al. used an analogue ensemble method to probabilistically forecast wind power. In [28], Cervone et al. combined ANN with the analogue ensemble method to forecast the PV generation in both deterministic and probabilistic ways. Recently, in [29], Guo et al. proposed a combined load forecasting method for Multi Energy Systems (MES) based on Bi-directional Long Short-Term Memory (BiLSTM). The combined load forecasting framework is trained with a multi-tasking approach for sharing the coupling information among the loads.

Although there is a significantly large number of studies that forecast renewable energy generation and/or other factors related to generation, this paper differs sharply from the existing literature as it proposes an embedded neural network architecture called rTPNN-FES that performs both forecasting and scheduling simultaneously.

2.3. Machine Learning Enabled Energy Management Systems

In this category, we review the recent studies that aim to develop energy management systems enabled by ML, especially for residential buildings.

The first group of works in this category used scheduling (based on either optimization or heuristics) with forecasts provided by an ML algorithm. In [30], Elkazaz et al. developed a heuristic energy management algorithm for hybrid systems using an autoregressive ML model for forecasting and optimization for parameter settings. In [31], Zaouali et al. developed auto-configurable middleware using Long Short-Term Memory (LSTM) based forecasting of renewable energy generated via PV. In [32], Shakir et al. developed a home energy management system using LSTM for forecasting and a Genetic Algorithm for optimization. In [33], Manue et al. used LSTM to forecast the load for battery utilization in a solar system in a smart home system. In [34], the authors developed a hybrid system of renewable and grid-supplied energy via exponential weighted moving average-based forecasting and a heuristic load control algorithm. In [35], Aurangzeb et al. developed an energy management system which uses a convolutional neural network to forecast renewable energy generation. Finally, in [36], in order to distribute the load and decrease the costs, Sarker et al. developed a home energy management system based on heuristic scheduling.

The second group of works in this category developed energy management systems based on reinforcement learning. In [37], Ren et al. developed a model-free Dueling double deep Q-learning neural network for home energy management systems. In [38], Lissa et al. used ANN-based deep reinforcement learning to minimize energy consumption by adjusting the hot water temperature in a PV-enabled home energy management system. In [39], Yu et al. developed an energy management system using a deep deterministic policy gradient algorithm. In [40], Wan et al. used a deep reinforcement learning algorithm to learn the energy management strategy for a residential building. In [41], Mathew et al. developed a reinforcement learning-based energy management system to reduce both the peak load and the electricity cost. In [42], Liu et al. developed a home energy management system using deep and double deep Q-learning techniques for scheduling home appliances. In [43], Lu et al. developed an energy management system with hybrid CNN-LSTM based forecasting and rolling horizon scheduling. In [44], Ji et al. developed a microgrid energy management system using the Markov decision process for modelling and ANN-based deep reinforcement learning for determining actions.

Deep learning-based control systems are also very popular for off-grid scenarios, as off-grid energy management systems are gaining increasing attention to provide sustainable and reliable energy services. In References [45] and [46], the authors developed algorithms based on deep reinforcement learning to deal with the uncertain and stochastic nature of renewable energy sources.

All of these works have used ML techniques, especially deep learning and reinforcement learning, to
Figure 1: The illustration of the system considered by rTPNN-FES

build energy management systems. Moreover, in a recent work [47], Nakıp et al. mimicked the scheduling via an ANN and developed an energy management system using this ANN-based scheduling. However, in contrast with the rTPNN-FES proposed in this paper, none of them has used an ANN to generate scheduling or combined forecasting and scheduling in a single neural network architecture.

3. System Setup and Optimization Problem

In this section, we present the assumptions, mathematical definitions and the optimization problem related to the system setup, which is used for embedded forecasting and scheduling via rTPNN-FES and shown in Figure 1. Throughout this paper, rTPNN-FES is assumed to perform at the beginning of a scheduling window that consists of S equal-length slots and has a total duration of H in actual time (i.e. the horizon length). In addition, the length of each slot s equals H/S, and the actual time instance at which slot s starts is denoted by m_s. Then, we let g^{m_s} denote the power generation by the renewable energy source within slot s. Also, ĝ^{m_s} denotes the forecast of g^{m_s}.

We let N be the set of devices that need to be scheduled until H (in other words, until the end of slot S), and N denote the total number of devices, i.e. |N| = N. Each device n ∈ N has a constant power consumption per slot, denoted by E_n. In addition, n should be active uninterruptedly for a_n successive slots. That is, when n is started, it consumes a_n E_n until it stops. Moreover, we assume that the considered renewable energy system contains a battery with a capacity of B_max, where the stored energy in this battery is used via an inverter with a supply limit of Θ. We assume that there is enough energy in total (the sum of the stored energy in the battery and the total generation) to supply all devices within [0, H].

At the beginning of the scheduling window, we forecast the renewable energy generation and schedule the devices accordingly. To this end, as the main contribution of this paper, we combine the forecaster and scheduler in a single neural network architecture, called rTPNN-FES, which is presented in Section 4.

Optimization Problem: We now define the optimization problem for the non-preemptive scheduling of the starting slots of devices to minimize user dissatisfaction. In other words, this optimization problem aims to distribute the energy consumption over slots prioritizing "user satisfaction", assuming that the operation of each device is uninterruptible. In this article, we consider a completely off-grid system, which utilizes only renewable energy sources, where it is crucial to achieve near-optimal scheduling
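As a concrete illustration of the notation above, the slot timing and per-device consumption can be computed as follows. The numbers are hypothetical and chosen only for readability; the paper's actual evaluation setup is described in Section 5.

```python
# Hypothetical scheduling window: these values are illustrative, not the paper's.
H = 24.0          # horizon length in hours
S = 24            # number of equal-length slots
slot_len = H / S  # each slot lasts H / S
m = [s * slot_len for s in range(S)]  # m_s: start time of slot s (0-indexed)

# A device n is described by its per-slot consumption E_n and its
# uninterrupted duration a_n; once started, it consumes a_n * E_n in total.
E_n, a_n = 0.5, 3                 # e.g. 0.5 kWh per slot for 3 successive slots
total_consumption = a_n * E_n     # energy drawn over the device's whole run
```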
to use limited available resources. Recall that this optimization problem is re-solved at the beginning of each scheduling window for the available set of devices N using the forecast generation ĝ^{m_s} over the scheduling window in Figure 1.

Moreover, for each n ∈ N, there is a predefined cost of user dissatisfaction, denoted by c_{(n,s)}, for scheduling the start of n at slot s. This cost can take a value in the range [0, +∞), and c_{(n,s)} is set to +∞ if the user does not want slot s to be reserved for device n. As we shall explain in more detail in Section 5, we determine the user dissatisfaction cost c_{(n,s)} as an increasing function of the distance between s and the desired start time of the considered device n. We should note that the definition of the user dissatisfaction cost only affects the numerical results, since the proposed rTPNN-FES methodology does not depend on its definition.

Then, we let x_{(n,s)} denote a binary schedule for the start of the activity of device n at slot s. That is, x_{(n,s)} = 1 if device n is scheduled to start at the beginning of slot s, and x_{(n,s)} = 0 otherwise. In addition, in our optimization program, we let x*_{(n,s)} be a binary decision variable and denote the optimal value of x_{(n,s)}. Accordingly, we define the optimization problem as follows:

    min Σ_{n∈N} Σ_{s=1}^{S} x_{(n,s)} c_{(n,s)}    (1)

subject to

    Σ_{s=1}^{S−(a_n−1)} x_{(n,s)} = 1,  ∀n ∈ N    (2)

    Σ_{n∈N} E_n Σ_{s′=[s−(a_n−1)]^+}^{s} x_{(n,s′)} ≤ Θ,  ∀s ∈ {1, …, S}    (3)

    Σ_{n∈N} E_n Σ_{s′=[s−(a_n−1)]^+}^{s} x_{(n,s′)} ≤ ĝ^{m_s} + B_max,  ∀s ∈ {1, …, S}    (4)

    Σ_{n∈N} E_n Σ_{s′=1}^{s} Σ_{s″=[s′−(a_n−1)]^+}^{s′} x_{(n,s″)} ≤ B + Σ_{s′=1}^{s} ĝ^{m_{s′}},  ∀s ∈ {1, …, S}    (5)

where [Ξ]^+ = Ξ if Ξ ≥ 1; otherwise, [Ξ]^+ = 1.

The objective function (1) minimizes the total user dissatisfaction cost over all devices, Σ_{n∈N} Σ_{s=1}^{S} x_{(n,s)} c_{(n,s)}. While minimizing user dissatisfaction, the optimization problem also considers the following constraints:

• The Uniqueness and Operation constraint in (2) ensures that each device n is scheduled to start at exactly one slot between the 1st and the [S − (a_n − 1)]-th slot. The upper limit for the start of the operation of device n is set to [S − (a_n − 1)] because n must operate for a_n successive slots before the end of the last slot S.

• The Inverter Limitation constraint in (3) limits the total power consumption at each slot s to the maximum power Θ that can be provided by the inverter. Note that the term Σ_{s′=s−(a_n−1)}^{s} x*_{(n,s′)} is a convolution which equals 1 if device n is scheduled to be active at slot s (i.e. n is scheduled to start between s − (a_n − 1) and s).

• The Maximum Storage constraint in (4) ensures that the scheduled consumption at each slot s does not exceed the sum of the predicted generation (ĝ^{m_s}) at this slot and the maximum energy (B_max) that can be stored in the battery.

• The Total Consumption constraint in (5) ensures that the scheduled total power consumption up to each slot s is not greater than the sum of the stored energy B at the beginning of the scheduling window and the total generation until s. This constraint is used as we are considering a completely off-grid system.

4. Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES)

In this section, we present our rTPNN-FES neural network architecture. Figure 2 displays the architectural design of rTPNN-FES, which aims to generate scheduling for the considered window while forecasting the power generation through this window automatically and simultaneously. To this
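For small instances, the optimization problem (1)-(5) can be solved by exhaustive search over start slots, which also makes the constraints concrete. The following is a minimal sketch, not the paper's solver (the paper uses an optimization technique which rTPNN-FES then learns to emulate); all parameter values in the usage are hypothetical.

```python
import itertools

def optimal_schedule(costs, E, a, g_hat, B, B_max, theta):
    """Exhaustive search for the non-preemptive schedule minimizing total
    user dissatisfaction, eq. (1), subject to constraints (2)-(5).
    costs[n][s]: dissatisfaction cost c_(n,s) for starting device n at slot s
    (0-indexed); E[n]: per-slot consumption; a[n]: number of successive
    active slots; g_hat[s]: forecast generation per slot; B: stored energy at
    the start of the window; B_max: battery capacity; theta: inverter limit.
    Only practical for small N and S."""
    N, S = len(E), len(g_hat)
    best_cost, best = float("inf"), None
    # Constraint (2): device n may start only in slots 0 .. S - a[n].
    for starts in itertools.product(*[range(S - a[n] + 1) for n in range(N)]):
        active = [[starts[n] <= s < starts[n] + a[n] for s in range(S)]
                  for n in range(N)]
        load = [sum(E[n] for n in range(N) if active[n][s]) for s in range(S)]
        # Inverter limit (3) and per-slot storage limit (4).
        if any(load[s] > theta or load[s] > g_hat[s] + B_max for s in range(S)):
            continue
        # Cumulative off-grid energy balance (5).
        if any(sum(load[:s + 1]) > B + sum(g_hat[:s + 1]) for s in range(S)):
            continue
        cost = sum(costs[n][starts[n]] for n in range(N))
        if cost < best_cost:
            best_cost, best = cost, starts
    return best, best_cost
```

For example, with two devices over four slots, where device 0 prefers slot 0 and device 1 prefers slot 2, the search returns those starts whenever the energy constraints allow them.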
Figure 2: Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES)

end, rTPNN-FES is comprised of two main layers, the "Forecasting Layer" and the "Scheduling Layer", and it is trained using the "2-Stage Training Procedure".

We let F be the set of features, with F ≡ {1, …, F}. In addition, z_f^{m_s} denotes the value of input feature f in slot s, which starts at m_s, where this feature can be any external data, such as weather predictions, that is directly or indirectly related to the power generation g^{m_s}. We also let τ_f be a duration of time over which the system developer has observed that feature f has periodicity; τ_0 represents the periodicity duration for g^{m_s}. Note that we do not assume that the features have a periodic nature. If there is no observed periodicity, τ_f can be set to H.

As shown in Figure 2, the inputs of rTPNN-FES are {g^{m_s−2τ_0}, g^{m_s−τ_0}} and {z_f^{m_s−2τ_f}, z_f^{m_s−τ_f}} for f ∈ F and s ∈ {1, …, S}, and its output is {x_{(n,s)}} for n ∈ {1, …, N} and s ∈ {1, …, S}.
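The two-period lagged inputs above can be assembled as follows. This is a sketch under the assumption that measurements are stored per sample and τ_f is expressed in samples; the function name is our own.

```python
def lagged_pair(series, t, tau):
    """Return the two-period lagged inputs (x[t - 2*tau], x[t - tau]) used by
    rTPNN-FES for a feature with observed periodicity tau (measured here in
    samples; if no periodicity is observed, tau can be set to the horizon H).
    `series` is a list of past measurements indexed by sample time."""
    return series[t - 2 * tau], series[t - tau]

# For a daily-periodic signal sampled hourly (tau = 24), the inputs for
# sample t are the values one day and two days earlier at the same hour.
g_history = list(range(100))  # placeholder generation history
pair = lagged_pair(g_history, 72, 24)
```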
4.1. Forecasting Layer

The Forecasting Layer is responsible for forecasting the power generation within the architecture of rTPNN-FES. For each slot s in the scheduling window, rTPNN-FES forecasts the renewable energy generation ĝ^{m_s} based on the collection of the past feature values for two periods, {z_f^{m_s−2τ_f}, z_f^{m_s−τ_f}}_{f∈F}, as well as the past generation for two periods, {g^{m_s−2τ_0}, g^{m_s−τ_0}}. To this end, this layer consists of S parallel rTPNN models that share the same parameter set (connection weights and biases). That is, in this layer, there are S replicas of a single trained rTPNN; in other words, one may say that a single rTPNN is used with different inputs to forecast the power generation for each slot s. Therefore, all but one of the Trained rTPNN blocks are shown as transparent in Figure 2.

The weight sharing among rTPNN models (i.e. using replicated rTPNNs) has the following advantages:

• The number of parameters in the Forecasting Layer decreases by a factor of S, thus reducing both time and space complexity.

• By avoiding rTPNN training repeated S times, the training time is also reduced by a factor of S.

• Because a single rTPNN is trained on the data collected over S different slots, the rTPNN can capture recurrent trends and relationships with higher generalization ability.

4.1.1. Structure of rTPNN

We now briefly explain the structure of rTPNN, originally proposed in [1], for our rTPNN-FES neural network architecture. As shown in Figure 3, which displays the structure of rTPNN, for any s, the inputs of rTPNN are {g^{m_s−2τ_0}, g^{m_s−τ_0}} and {z_f^{m_s−2τ_f}, z_f^{m_s−τ_f}} for f ∈ F, and the output is ĝ^{m_s}. In addition, the rTPNN architecture consists of (F + 1) Data Processing (DP) units and L fully connected layers, including the output layer.

4.1.2. DP units

In the architecture of rTPNN, there is one DP unit either for the past values of energy generation, denoted by DP_0, or for each time series feature f, denoted by DP_f. That is, DP_f for any feature f (including f = 0) has the same structure, but its corresponding input is different for each f. For example, the input of DP_f is {z_f^{m_s−2τ_f}, z_f^{m_s−τ_f}} corresponding to any time series feature f ∈ {1, …, F}, while the input of DP_0 is the past values of energy generation {g^{m_s−2τ_0}, g^{m_s−τ_0}}. Thus, one may notice that DP_0 is the only unit with a special input.

During the explanation of the DP unit, we focus on a particular instance DP_f, which is also shown in detail in Figure 3. Using the input pair {z_f^{m_s−2τ_f}, z_f^{m_s−τ_f}}, DP_f aims to learn the relationship between this pair and each of the predicted trend t_f^s and the predicted level l_f^s. To this end, DP_f consists of Trend Predictor and Level Predictor sub-units, each of which is a linear recurrent neuron.

As shown in Figure 3, the Trend Predictor of DP_f computes the weighted sum of the change in the value of feature f from m_s − 2τ_f to m_s − τ_f and the previous value of the predicted trend. That is, DP_f calculates the sum of the difference (z_f^{m_s−τ_f} − z_f^{m_s−2τ_f}), with connection weight α_f^1, and the previous value of the predicted trend t_f^{s−1}, with connection weight α_f^2, as

    t_f^s = α_f^1 (z_f^{m_s−τ_f} − z_f^{m_s−2τ_f}) + α_f^2 t_f^{s−1}    (6)

By calculating the trend of a feature and learning the parameters in (6), rTPNN is able to capture behavioural changes over time, particularly those related to the forecasting of ĝ^{m_s}.

The Level Predictor sub-unit of DP_f predicts the level of the feature value, which is a smoothed version of the value of feature f, using only z_f^{m_s−τ_f} and the previous state of the predicted level l_f^{s−1}. To this end, it computes the sum of z_f^{m_s−τ_f} and l_f^{s−1} with weights β_f^1 and β_f^2 respectively as

    l_f^s = β_f^1 z_f^{m_s−τ_f} + β_f^2 l_f^{s−1}    (7)

By predicting the level, we can reduce the effects on the forecasting of any anomalous instantaneous changes in the measurement of any feature f.

Note that the parameters α_f^1, α_f^2, β_f^1 and β_f^2 of the Trend Predictor and Level Predictor sub-units are learned
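Equations (6) and (7) amount to two scalar linear recurrences per DP unit; one step can be sketched as below. The weight values used in the example are illustrative only — in rTPNN they are learned during training.

```python
def dp_unit(z_prev2, z_prev1, t_prev, l_prev, alpha1, alpha2, beta1, beta2):
    """One step of a Data Processing unit DP_f: the Trend Predictor, eq. (6),
    and the Level Predictor, eq. (7). z_prev2 and z_prev1 are the feature
    values two periods and one period back (z_f^{m_s-2*tau_f}, z_f^{m_s-tau_f});
    t_prev and l_prev are the previous trend and level states."""
    t = alpha1 * (z_prev1 - z_prev2) + alpha2 * t_prev  # eq. (6): trend
    l = beta1 * z_prev1 + beta2 * l_prev                # eq. (7): smoothed level
    return t, l
```

With z_prev2 = 2, z_prev1 = 5, t_prev = 1, l_prev = 4 and weights (0.5, 0.1, 0.7, 0.3), the trend is 0.5·3 + 0.1·1 = 1.6 and the level is 0.7·5 + 0.3·4 = 4.7.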
Figure 3: The structure of rTPNN used in rTPNN-FES

during the rTPNN training, like all other parameters (i.e. connection weights).

4.1.3. Feed-forward of rTPNN

We now describe the calculations performed during the execution of the rTPNN, that is, when making a prediction via rTPNN. To this end, first, let W_l denote the connection weight matrix for the inputs of hidden layer l, and b_l denote the vector of biases of l. Thus, for each s, the forward pass of rTPNN is as follows:

1. Trend Predictors of DP_0–DP_F:

    t_0^s = α_0^1 (g^{m_s−τ_0} − g^{m_s−2τ_0}) + α_0^2 t_0^{s−1},
    t_f^s = α_f^1 (z_f^{m_s−τ_f} − z_f^{m_s−2τ_f}) + α_f^2 t_f^{s−1},  ∀f ∈ F    (8)

2. Level Predictors of DP_0–DP_F:

    l_0^s = β_0^1 g^{m_s−τ_0} + β_0^2 l_0^{s−1},
    l_f^s = β_f^1 z_f^{m_s−τ_f} + β_f^2 l_f^{s−1},  ∀f ∈ F    (9)

3. Concatenation of the outputs of DP_0–DP_F to feed to the hidden layers:

    z^s = [t_0^s, l_0^s, g^{m_s−τ_0}, …, t_F^s, l_F^s, z_F^{m_s−τ_F}]    (10)

4. Hidden layers from l = 1 to l = L:

    O_1^s = Ψ(W_1 (z^s)^T + b_1),    (11)
    O_l^s = Ψ(W_l O_{l−1}^s + b_l),  ∀l ∈ {2, …, L − 1}    (12)
    ĝ^{m_s} = Ψ(W_L O_{L−1}^s + b_L),    (13)

where (z^s)^T is the transpose of the input vector z^s,
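The feed-forward pass (8)-(13) can be sketched end-to-end as follows. The activation Ψ is not fixed in this excerpt, so tanh is used as a stand-in, and all function and variable names are our own; the weights would come from training.

```python
import numpy as np

def rtpnn_forward(pairs, states, weights, layers):
    """Forward pass of rTPNN for one slot, following eqs. (8)-(13).
    pairs:   list of two-period lagged inputs (v_prev2, v_prev1), one per DP
             unit (generation for DP_0, then each feature f).
    states:  list of (t_prev, l_prev) recurrent states per DP unit.
    weights: list of (a1, a2, b1, b2) per DP unit.
    layers:  list of (W, b) for the fully connected layers; tanh stands in
             for the element-wise activation Psi."""
    z, new_states = [], []
    for (v2, v1), (t_prev, l_prev), (a1, a2, b1, b2) in zip(pairs, states, weights):
        t = a1 * (v1 - v2) + a2 * t_prev   # Trend Predictor, eq. (8)
        l = b1 * v1 + b2 * l_prev          # Level Predictor, eq. (9)
        z += [t, l, v1]                    # concatenation, eq. (10)
        new_states.append((t, l))
    o = np.array(z)
    for W, b in layers:                    # hidden and output layers, (11)-(13)
        o = np.tanh(W @ o + b)
    return float(o[0]), new_states        # scalar forecast and updated states
```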
O_l^s is the output vector of hidden layer l, and Ψ(·) denotes the activation function as an element-wise operator.

4.2. Scheduling Layer

The Scheduling Layer consists of N parallel softmax layers, each responsible for generating a schedule for a single device's start time. A single softmax layer for device n is shown in Figure 4. Since this layer is cascaded behind the Forecasting Layer, each device n is scheduled to be started at each slot s based on the output of the Forecasting Layer ĝ^{m_s} as well as the system parameters c_(n,s), E_n, B, B_max and Θ for this device n and this slot s.

Figure 4: The structure of the Scheduling Layer

In Figure 4, each arrow represents a connection weight. Accordingly, for device n at slot s in a softmax layer of the Scheduling Layer, a neuron first calculates the weighted sum of the inputs as

   α_(n,s) = w^g_(n,s) ĝ^{m_s} + w^B_(n,s) B − w^c_(n,s) c_(n,s) − w^E_(n,s) E_n − w^Θ_(n,s) Θ − w^{B_max}_(n,s) B_max    (14)

where all of the connection weights w^g_(n,s), w^B_(n,s), w^c_(n,s), w^E_(n,s), w^Θ_(n,s), and w^{B_max}_(n,s) are strictly positive. In addition, the signs of the terms are determined considering the intuitive effect of each parameter on the schedule decision for device n at slot s. For example, a higher ĝ^{m_s} makes slot s a better candidate for scheduling n, while a higher user dissatisfaction cost c_(n,s) makes slot s a worse candidate. In addition, a softmax activation is applied at the output of this neuron:

   x_(n,s) = Φ(α_(n,s)) = e^{α_(n,s)} / Σ_{s′=1}^{S} e^{α_(n,s′)}    (15)

4.3. 2-Stage Training Procedure

We train our rTPNN-FES architecture to learn the optimal scheduling of devices as well as the forecasting of energy generation in a single neural network. To this end, we first assume that there is a collected dataset comprised of the actual values of g^{m_s} and {z_f^{m_s}}_{f∈F} for s ∈ {1, . . . , S} for multiple scheduling windows. Note that rTPNN-FES does not depend on the developed 2-stage training procedure, so it can be used with any training algorithm. For each window in this dataset, the 2-stage procedure works as follows:

4.3.1. Stage 1 - Training of rTPNN Separately for Forecasting

In this first stage of training, in order to create a forecaster, the rTPNN model (Figure 3) is trained separately from the rTPNN-FES architecture (Figure 2). To this end, the deviation of ĝ^{m_s} from g^{m_s} for s ∈ {1, . . . , S}, i.e. the forecasting error of rTPNN, is measured via the Mean Squared Error as

   MSE_forecast ≡ (1/S) Σ_{s=1}^{S} (g^{m_s} − ĝ^{m_s})²    (16)

We update the parameters (connection weights and biases) of rTPNN via back-propagation with gradient descent, in particular the Adam algorithm, to minimize MSE_forecast, where the initial parameters are set to the parameters found in previous training. We repeat updating the parameters for as many epochs as required without over-fitting to the training samples.

When Stage 1 is completed, the parameters of "Trained rTPNN" in Figure 2 are replaced by the resulting parameters found in this stage. Then, the parameters of Trained rTPNN are frozen to continue further training of rTPNN-FES in Stage 2. That is, the parameters of Trained rTPNN are not updated in Stage 2.

4.3.2. Stage 2 - Training of rTPNN-FES for Scheduling

In Stage 2 of training, in order to create a scheduler emulating optimization, the rTPNN-FES architecture (Figure 2) is trained following the steps shown in Figure 5.
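As a concrete illustration of the Scheduling Layer neuron of Eqs. (14)-(15), the following NumPy sketch computes the weighted sum and the softmax over the slots of one scheduling window for a single device. All numeric values (weights, costs, forecasts, battery and energy parameters) are hypothetical placeholders for illustration, not values learned or used in the paper.

```python
import numpy as np

# Sketch of one Scheduling Layer softmax unit (Eqs. (14)-(15)) for a single
# device n over S slots. Every numeric value below is a hypothetical
# placeholder, not a weight learned in the paper.
S = 24
rng = np.random.default_rng(0)

g_hat = rng.uniform(0.0, 5.0, S)   # forecast generation per slot, from Eq. (13)
c = rng.uniform(0.0, 1.0, S)       # user dissatisfaction cost c_(n,s)
B, B_max, E_n, theta = 10.0, 13.5, 2.3, 1.0

# Strictly positive connection weights (shared across slots here for brevity;
# the paper keeps one set per (n, s) pair).
w_g, w_B, w_c, w_E, w_theta, w_Bmax = 1.0, 0.1, 2.0, 0.1, 0.1, 0.05

# Eq. (14): weighted sum; the signs encode each input's intuitive effect.
alpha = (w_g * g_hat + w_B * B - w_c * c
         - w_E * E_n - w_theta * theta - w_Bmax * B_max)

# Eq. (15): softmax over slots yields a probability-like schedule x_(n,s).
x = np.exp(alpha - alpha.max())    # max-shift for numerical stability
x /= x.sum()
```

The slot with the largest α_(n,s), i.e. high forecast generation and low dissatisfaction cost, receives the largest share of the softmax output.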
Figure 5: The steps in Stage 2 training of rTPNN-FES to learn to schedule

The steps in Stage 2 shown in Figure 5 are as follows:

1. The optimal schedule, {x*_(n,s)}_{n∈{1,...,N}}^{s∈{1,...,S}}, is computed by solving the optimization problem given in Section 3 in (1)-(5).

2. The feed-forward output of rTPNN-FES, {x_(n,s)}_{n∈{1,...,N}}^{s∈{1,...,S}}, which is the estimation of the scheduling, is computed through (6)-(15) using the architecture in Figure 2.

3. The performance of rTPNN-FES for scheduling, i.e. the total estimation error of rTPNN-FES, is measured via the Categorical Cross-Entropy as

   CCE_schedule ≡ − Σ_{n=1}^{N} Σ_{s=1}^{S} x*_(n,s) log(x_(n,s))    (17)

4. The parameters (connection weights and biases) in the "Scheduling Layers" of rTPNN-FES are updated via back-propagation with gradient descent (using the Adam optimization algorithm) to minimize CCE_schedule.

As soon as this training procedure is completed, i.e. during real-time operation, rTPNN-FES generates both forecasts of the renewable energy generations, {ĝ^{m_s}}_{s∈{1,...,S}}, and a schedule {x_(n,s)}_{n∈{1,...,N}}^{s∈{1,...,S}} that emulates the optimization.

5. Results

In this section, we aim to evaluate the performance of our rTPNN-FES. To this end, we first present the considered datasets and hyper-parameter settings. We also perform a brief time-series data analysis aiming to determine the most important features for the forecasting of PV energy generation. Then, we numerically evaluate the performance of our technique and compare it with some existing techniques.

5.1. Methodology of Experiments

5.1.1. Datasets

For the performance evaluation of the proposed rTPNN-FES, we combine two publicly available datasets [2] and [3]. The first dataset [2] consists of the hourly solar power generation (kW) of various residential buildings in Konstanz, Germany between 22-05-2015 and 12-03-2017. Within this dataset, we consider only the residential building called "freq DE KN residential1 pv", which corresponds to 15864 samples in total. The second dataset contains weather-related information which is scraped with the World Weather Online (WWO) API [3]. This API provides 19 features related to temperature, precipitation, illumination and wind.

5.1.2. Experimental Set-up

Considering the limitations of the available dataset, we perform our experiments on a virtual
Table 1: Household Appliances in the Smart Home Environment

Appliance Name               | Power Consumption (kW) | Active Duration (h) | Desired Start Time
Washing Machine (warm wash)  | 2.3                    | 2                   | 14
Dryer (avg. load)            | 3                      | 2                   | 16 (earliest 15)
Robot Vacuum Cleaner         | 0.007                  | 2                   | 15
Iron                         | 1.08                   | 2                   | 8
TV                           | 0.15                   | 3                   | 20
*Refrigerator                | 0.083                  | 24                  | non-stop
Oven                         | 2.3                    | 1                   | 18
Dishwasher                   | 2                      | 2                   | 21
Electric Water Heater        | 0.7                    | 1                   | 6, 17
Central AC                   | 3                      | 2                   | 6, 18
Pool Filter Pump             | 1.12                   | 8                   | 10
Electric Vehicle Charger     | 7.7                    | 8                   | 21 (earliest 18, latest 23)
residential building which is, each year, actively used between May and September. It is assumed that there are 12 different smart home appliances in the active months. These appliances are shown in Table 1, where each appliance should operate at least once a day. Note that the Electric Water Heater and Central AC operate twice a day, where the desired start times are 6:00 and 17:00 for the heater, and 6:00 and 18:00 for the AC. In order to produce sufficient energy for the operation of these appliances, this building has its own PV system which consists of the following elements: 1) PV panels, for which the generations are taken from the dataset [2] explained above, 2) three batteries with 13.5 kWh capacity each, and 3) an inverter with a power rate of 10 kW.

Furthermore, during our experimental work, we set H = 24 h, and we define the user dissatisfaction cost c_(n,s) for each device n at each slot s based on the "Desired Start Time" given in Table 1, as

   c_(n,s) = 1 − (1/(σ_n √(2π))) exp(−(1/2) ((s − µ_n)/σ_n)²)    (18)

where µ_n is the desired start time of n, and σ_n is the acceptable variance for the start of n. The value of σ_n is 1 for the Iron and Electric Water Heater, 2 for the TV, Oven, Dishwasher and AC, 3 for the Washing Machine and Dryer, and 5 for the Robot Vacuum Cleaner. Also, the value of c_(n,s) is set to infinity for s lower than the earliest start time and for s greater than the latest start time.

Recall that the Water Heater and AC, which are activated twice a day, are modelled as two separate devices.

5.1.3. Implementation and Hyper-Parameter Settings for rTPNN-FES

We implemented rTPNN-FES using the Keras API on Python 3.7.13. The experiments are executed on the Google Colab platform with an operating system of Linux 5.4.144 and a 2.2 GHz processor with 13 GB RAM.

The Forecasting Layer is trained on this platform via the Adam optimizer for 40 epochs with a 10⁻³ initial learning rate. In order to exploit the PV generation trend on a daily basis, the batch size is fixed at 24. Moreover, an L2 regularization term is injected into the Trend and Level Predictors in the rTPNN layer in order to avoid gradient vanishing. Finally, we used fully connected layers of rTPNN which are respectively comprised of F+1 and ⌈(F+1)/2⌉ neurons with sigmoid activation. The Scheduling Layer of each device is trained on the same platform, also using the Adam optimizer, for 20 epochs with a batch size of 1 and an initial learning rate of 10⁻³. Note that setting the batch size to 1 is due to the particular implementation of rTPNN-FES, which uses the Keras library. In addition, the infinity values of c_(n,s) are set to 100 at the inputs of the scheduling layer in order to be able to calculate the neuron activation. We also set the periodicity τ_0 of g^{m_s} as 24 h.
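The dissatisfaction cost of Eq. (18), together with the infinity-to-100 substitution described above, can be sketched as follows. The function name and the earliest/latest handling are illustrative; µ_n and σ_n in the example are taken from Table 1 and the text (Iron: µ_n = 8, σ_n = 1).

```python
import math

def dissatisfaction_cost(s, mu_n, sigma_n, earliest=None, latest=None):
    """User dissatisfaction cost c_(n,s) of Eq. (18) for device n at slot s.

    Slots outside [earliest, latest] receive the stand-in value 100 that the
    paper substitutes for infinity at the scheduling-layer inputs.
    """
    if (earliest is not None and s < earliest) or (latest is not None and s > latest):
        return 100.0
    gauss = math.exp(-0.5 * ((s - mu_n) / sigma_n) ** 2) / (sigma_n * math.sqrt(2.0 * math.pi))
    return 1.0 - gauss

# Example: the Iron (desired start mu_n = 8, sigma_n = 1 per Table 1 and the text).
costs = [dissatisfaction_cost(s, mu_n=8, sigma_n=1) for s in range(24)]
```

The cost is minimized exactly at the desired start time (s = 8 here) and approaches 1 for slots far from it, which is what pushes the softmax of Eq. (15) toward the user's preferred slot.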
Furthermore, the source codes of rTPNN-FES and the experiments in this paper are shared in [48], in addition to the repository of the original rTPNN.

5.1.4. Genetic Algorithm-based Scheduling for Comparison

Genetic algorithms (GAs) have been widely used in scheduling tasks due to their ability to effectively solve complex optimization problems. GAs are able to incorporate various constraints and prior knowledge into the optimization process, making them well-suited for scheduling tasks with many constraints. GAs are also able to efficiently search through a vast search space to find near-optimal solutions, even for problems with a large number of variables [49]. These characteristics make GAs powerful tools for finding high-quality solutions in our experimental setup and good candidates to compare against rTPNN-FES.

The experiments are executed on the Google Colab platform with the same hardware configuration as rTPNN-FES. In this experimental setting, a chromosome is a daily schedule matrix. The cross-over is made by swapping device schedules at a random cross-over point out of the total number of devices, and mutation is introduced by changing the scheduled time of a single device randomly with probability 0.1. The GA application starts by sampling feasible solutions out of 5000 random solutions as an initial population. After that, 1000 new generations are simulated while the population size is fixed to 200 by making selections in an elitist style.

5.2. Forecasting Performance of rTPNN-FES

We now compare the forecasting performance of rTPNN with the performances of LSTM, MLP, Linear Regression, Lasso, Ridge, ElasticNet and Random Forest, as well as the 1-Day Naive Forecast.¹ Recall that in recent literature, References [31, 32, 33] used LSTM, and References [14, 15, 19, 38] used MLP.

¹ The 1-Day Naive Forecast equals the original time series with a 1-day lag.

During our experimental work, the dataset is partitioned into training and test sets with the first 300 days (corresponding to 7200 samples) and the remaining 361 days (corresponding to 8664 samples), respectively.

First, Table 2 presents the performances of all models on both training and test sets with respect to the Mean Squared Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Symmetric Mean Absolute Percentage Error (SMAPE) metrics, which are calculated as

   MSE = (1/S) Σ_{s=1}^{S} (g^{m_s} − ĝ^{m_s})²    (19)
   MAE = (1/S) Σ_{s=1}^{S} |g^{m_s} − ĝ^{m_s}|    (20)
   MAPE = (100%/S) Σ_{s=1}^{S} |g^{m_s} − ĝ^{m_s}| / |g^{m_s}|    (21)
   SMAPE = (100%/S) Σ_{s=1}^{S} |g^{m_s} − ĝ^{m_s}| / ((|g^{m_s}| + |ĝ^{m_s}|)/2)    (22)

In Table 2, the results on the test set show that rTPNN outperforms all of the other forecasters for the majority of the error metrics, while some forecasters may perform better on individual metrics. However, observations on an individual error metric (without considering the other metrics) may be misleading due to its properties. For example, the MAPE of Ridge Regression is significantly low, but its MSE, MAE and SMAPE are high. The reason is that Ridge is more accurate in forecasting samples with high energy generation than in forecasting those with low generations. Moreover, rTPNN is shown to have high generalization ability since it performs well on both training and test sets with regard to all metrics. Also, only rTPNN and LSTM are able to achieve better performances than the benchmark performance of the 1-Day Naive Forecast with respect to MSE, MAE and SMAPE.

We also see that SMAPE yields significantly larger values than the other metrics (including MAPE) because SMAPE takes values in [0, 200] and has a scaling effect as a result of the denominator in (22). In particular, the absolute deviation of the forecast values from the actual values is divided by the sum of those. Therefore, under- and over-forecasting have different effects on SMAPE,
Table 2: Comparison of the forecasting performance of rTPNN with that of state-of-the-art forecasters with respect to MSE, MAE, MAPE, and SMAPE, excluding nights

                        |        Training Set        |          Test Set
Forecasting Methods     | MSE   MAE   MAPE   SMAPE   | MSE   MAE   MAPE   SMAPE
rTPNN                   | 2.23  1.13  3.72   51.84   | 2.58  1.21  10.67  54.42
LSTM                    | 2.18  1.18  4.95   54.83   | 2.56  1.26  13.59  57.98
MLP                     | 2.77  1.35  6.33   60.57   | 3.09  1.42  14.25  63.06
Linear Regression       | 2.78  1.28  4.92   57.71   | 3.16  1.35  6.08   60.38
Lasso Regression        | 8.61  2.12  4.06   88.68   | 8.7   2.14  11.16  90.68
Ridge Regression        | 2.78  1.29  4.93   57.74   | 3.16  1.36  6.11   60.41
ElasticNet Regression   | 8.61  2.12  4.06   88.68   | 8.7   2.14  11.16  90.69
RandomForestRegressor   | 0.3   0.41  1.5    24.75   | 3.18  1.36  6.82   60
1-Day Naive Forecast    | 3.68  1.25  2.76   56.63   | 4.25  1.37  1.26   58.29

where under-forecasting results in higher SMAPE. 5000

4000
4
Actual
rTPNN 3000
LSTM
3
MLP
2000

2
1000

1 0
-6 -4 -2 0 2 4

0
0 10 20 30 40 50 Figure 7: Histogram of the forecasting error in kW measured as
(ĝms − gms ) for each m s in the test set
Figure 6: Forecasting results of the three most competitive
models (rTPNN, LSTM and MLP) with respect to results in
samples (around 5000 out of 8664 samples). We
Table 2 for the time between fifth and seventh days in the test
set also see that the absolute error is smaller than 2 for
93% of the samples. We also see that the overall
Next, in Figure 6, we present the actual energy forecasting error is lower for rTPNN than both
generation between the fifth and the seventh days of LSTM and MLP.
the test set as well as those forecast by the best three
5.3. Scheduling Performance of rTPNN-FES
techniques (rTPNN, LSTM and MLP). Our results
show that the predictions of rTPNN are the closest to We now evaluate the scheduling performance of
the actual generation within the predictions of these rTPNN-FES for the considered smart home energy
three techniques. In addition, we see that rTPNN can management system. To this end, we compare the
successfully capture both increases and decreases in schedule generated by rTPNN-FES with that by opti-
energy generation while LSTM and MLP struggle to mization (solving (1)-(5)) using actual energy gener-
predict sharp increases and decreases. ations as well as the GA-based scheduling (presented
Finally, Figure 7 displays the histogram of the in Section 5.1.4). Note that although the schedule
forecasting error that is realized by each of rTPNN, generated by the optimization using actual genera-
LSTM, and MLP on the test set. Our results in this tions is the best achievable schedule, it is practically
figure show that the forecasting error of rTPNN is not available due to the lack of future information
around zero for the significantly large number of about the actual generations.

14
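For reference, the four forecasting error metrics of Eqs. (19)-(22) used in the comparisons above can be computed in a few lines of NumPy. The sketch below uses illustrative values rather than the paper's dataset and, like the paper's evaluation, assumes g^{m_s} ≠ 0 where MAPE is taken (nights are excluded).

```python
import numpy as np

def forecast_metrics(g, g_hat):
    """MSE, MAE, MAPE and SMAPE of Eqs. (19)-(22) for actual values g and
    forecasts g_hat; MAPE and SMAPE are returned as percentages."""
    g, g_hat = np.asarray(g, dtype=float), np.asarray(g_hat, dtype=float)
    err = np.abs(g - g_hat)
    mse = np.mean((g - g_hat) ** 2)                                   # Eq. (19)
    mae = np.mean(err)                                                # Eq. (20)
    mape = 100.0 * np.mean(err / np.abs(g))                           # Eq. (21)
    smape = 100.0 * np.mean(err / ((np.abs(g) + np.abs(g_hat)) / 2))  # Eq. (22)
    return mse, mae, mape, smape

# Illustrative values (not from the paper's dataset):
g = [2.0, 3.0, 4.0]
g_hat = [2.5, 3.0, 3.0]
mse, mae, mape, smape = forecast_metrics(g, g_hat)
```

Note how the symmetric denominator of Eq. (22) scales each term differently for under- and over-forecasts of the same magnitude.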
Figure 8: Comparison of rTPNN-FES against the optimal scheduling and GA-based scheduling with respect to the scheduling cost (top) for the days of the test set and (bottom) as the boxplot of the cost difference

Figure 8 (top) displays the comparison of rTPNN-FES against the optimal scheduling and the GA-based scheduling regarding the cost value for the days of the test set. In this figure, we see that rTPNN-FES significantly outperforms GA-based scheduling, achieving close-to-optimal cost. In other words, the user dissatisfaction cost – which is defined in (1) – of rTPNN-FES is significantly lower than the cost of GA-based scheduling, and it is only slightly higher than that of optimal scheduling. The average cost difference between rTPNN-FES and optimal scheduling is 1.3%, and the maximum difference is about 3.48%.

Furthermore, Figure 8 (bottom) displays the summary statistics for the cost difference between rTPNN-FES and the optimal scheduling, as well as the difference between GA-based and optimal scheduling, as a boxplot. In Figure 8 (bottom), we first see that the cost difference is significantly lower for rTPNN-FES, where even the upper quartile of rTPNN-FES is smaller than the lower quartile of GA-based scheduling. We also
see that the median of the cost difference between rTPNN-FES and optimal scheduling is 0.13 and the upper quartile of that difference is about 0.146. That is, the cost difference is less than 0.146 for 75% of the days in the test set. In addition, we see that there are only 7 outlier days for which the cost difference is between 0.19 and 0.3. According to the results presented in Figure 8, rTPNN-FES can be considered a successful heuristic with a low increase in cost.

5.4. Evaluation of the Computation Time

In Table 3, we present measurements of the training and execution times of each forecasting model. Our results first show that the execution time of rTPNN (0.17 ms) is comparable with the execution time of LSTM and highly acceptable for real-time applications. On the other hand, the training time measurements show that the training of rTPNN takes longer than that of the other forecasting models. Accordingly, one may say that there is a trade-off between training time and the forecasting performance of rTPNN.

Table 3: Training and Execution Times for Forecasting

Forecasting Methods      | Training Time (seconds) | Execution Time (milliseconds)
rTPNN                    | 210                     | 0.17
LSTM                     | 70                      | 0.14
MLP                      | 47                      | 0.08
Random Forest            | 11.8                    | 0.12
Linear Regression        | 0.004                   | 0.0025
Lasso Regression         | 0.005                   | 0.0012
Ridge Regression         | 0.004                   | 0.0012
Elastic Net Regression   | 0.007                   | 0.0012

Figure 9 displays the computation time of rTPNN-FES and that of optimization combined with LSTM (the second-best forecaster after rTPNN) in seconds. Note that we do not present the computation time of GA-based scheduling in this figure since it takes 4.61 seconds on average – which is approximately 3 orders of magnitude higher than the computation time of rTPNN-FES and 1 order of magnitude higher than that of optimization – to find a schedule for a single window. Our results in this figure show that rTPNN-FES requires significantly lower computation time than optimization to generate a daily schedule of household appliances. The average computation time of rTPNN-FES is about 4 ms, while that of optimization with LSTM is 150 ms. That is, rTPNN-FES is 37.5 times faster than optimization with LSTM at simultaneously forecasting and scheduling. Although the absolute computation time difference seems insignificant for a small use case (as in this paper), it would have important effects on the operation of large renewable energy networks with a high number of sources and devices.

Figure 9: Computation time (in seconds) comparison between rTPNN-FES and optimal scheduling under LSTM forecaster

6. Conclusion

We have proposed a novel neural network architecture, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (namely rTPNN-FES), for smart home energy management systems. The rTPNN-FES architecture forecasts renewable energy generation and schedules household appliances to use renewable energy efficiently and to minimize user dissatisfaction. As the main contribution of rTPNN-FES, it performs both forecasting and scheduling in a single architecture. Thus, it 1)
provides a schedule that is robust against forecasting and measurement errors, 2) requires significantly lower computation time and memory space by eliminating the use of two separate algorithms for forecasting and scheduling, and 3) offers high scalability to grow the load set (i.e. adding devices) over time.

We have evaluated the performance of rTPNN-FES for both forecasting renewable energy generation and scheduling household appliances using two publicly available datasets. During the performance evaluation, rTPNN-FES is compared against 8 different techniques for forecasting and against optimization and a genetic algorithm for scheduling. Our experimental results have drawn the following conclusions:

• The forecasting layer of rTPNN-FES outperforms all of the other forecasters for the majority of the MSE, MAE, MAPE, and SMAPE metrics.

• rTPNN-FES achieves a highly successful schedule which is very close to the optimal schedule, with only a 1.3% cost difference.

• rTPNN-FES requires a much shorter time than both optimal and GA-based scheduling to generate embedded forecasts and schedules, although the forecasting time alone is slightly higher than that of the other forecasters.

Future work shall improve the training of rTPNN-FES by directly minimizing the cost of user dissatisfaction (or other scheduling costs) to eliminate the collection of optimal schedules for training. In addition, the integration of a predictive dynamic thermal model into the rTPNN-FES framework shall be pursued in future studies. (Such integration is required to utilize more advanced HVAC scheduling/control system designs.) It would also be interesting to observe the performance of rTPNN-FES for large-scale renewable energy networks. Furthermore, since the architecture of rTPNN-FES is not dependent on the particular optimization problem formulated in this paper, rTPNN-FES shall be applied to other forecasting/scheduling problems such as optimal dispatch in microgrids, flow control in networks, and smart energy distribution in future work.

References

[1] M. Nakip, C. Güzeliş, O. Yildiz, Recurrent trend predictive neural network for multi-sensor fire detection, IEEE Access 9 (2021) 84204–84216. doi:10.1109/ACCESS.2021.3087736.
[2] A. Amato, R. Aversa, B. D. Martino, M. Scialdone, S. Venticinque, A simulation approach for the optimization of solar powered smart micro-grids, in: Conference on Complex, Intelligent, and Software Intensive Systems, Springer, 2017, pp. 844–853.
[3] Weather api: Json: World weather online. URL https://www.worldweatheronline.com/developer/
[4] H. Shareef, M. S. Ahmed, A. Mohamed, E. Al Hassan, Review on home energy management system considering demand responses, smart technologies, and intelligent controllers, IEEE Access 6 (2018) 24498–24509.
[5] A. E. Nezhad, A. Rahimnejad, P. H. Nardelli, S. A. Gadsden, S. Sahoo, F. Ghanavati, A shrinking horizon model predictive controller for daily scheduling of home energy management systems, IEEE Access 10 (2022) 29716–29730.
[6] F. R. Albogamy, M. Y. I. Paracha, G. Hafeez, I. Khan, S. Murawwat, G. Rukh, S. Khan, M. U. A. Khan, Real-time scheduling for optimal energy optimization in smart grid integrated with renewable energy sources, IEEE Access 10 (2022) 35498–35520.
[7] S. Ali, A. U. Rehman, Z. Wadud, I. Khan, S. Murawwat, G. Hafeez, F. R. Albogamy, S. Khan, O. Samuel, Demand response program for efficient demand-side management in smart grid considering renewable energy sources, IEEE Access (2022).
[8] G. Belli, A. Giordano, C. Mastroianni, D. Menniti, A. Pinnarelli, L. Scarcello, N. Sorrentino, M. Stillo, A unified model for the optimal management of electrical and thermal equipment of a prosumer in a dr environment, IEEE Transactions on Smart Grid 10 (2) (2017) 1791–1800.
[9] J. U. A. B. W. Ali, S. A. A. Kazmi, A. Altamimi, Z. A. Khan, O. Alrumayh, M. M. Malik, Smart energy management in virtual power plant paradigm with a new improved multilevel optimization based approach, IEEE Access 10 (2022) 50062–50077.
[10] A. Ahmed, M. Khalid, A review on the selected applications of forecasting models in renewable power systems, Renewable and Sustainable Energy Reviews 100 (2019) 9–21.
[11] H. Wang, Z. Lei, X. Zhang, B. Zhou, J. Peng, A review of deep learning for renewable energy forecasting, Energy Conversion and Management 198 (2019) 111799.
[12] V. Kushwaha, N. M. Pindoriya, Very short-term solar pv generation forecast using sarima model:
A case study, in: 2017 7th International Conference on Power Systems (ICPS), 2017, pp. 430–435. doi:10.1109/ICPES.2017.8387332.
[13] J. K. Rogier, N. Mohamudally, Forecasting photovoltaic power generation via an iot network using nonlinear autoregressive neural network, Procedia Computer Science 151 (2019) 643–650.
[14] A. Fentis, L. Bahatti, M. Tabaa, M. Mestari, Short-term nonlinear autoregressive photovoltaic power forecasting using statistical learning approaches and in-situ observations, International Journal of Energy and Environmental Engineering 10 (2) (2019) 189–206.
[15] L. Fara, A. Diaconu, D. Craciunescu, S. Fara, Forecasting of energy production for photovoltaic systems based on arima and ann advanced models, International Journal of Photoenergy 2021 (2021).
[16] S. Atique, S. Noureen, V. Roy, V. Subburaj, S. Bayne, J. Macfie, Forecasting of total daily solar energy generation using arima: A case study, in: 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), 2019, pp. 0114–0119. doi:10.1109/CCWC.2019.8666481.
[17] E. Erdem, J. Shi, Arma based approaches for forecasting the tuple of wind speed and direction, Applied Energy 88 (4) (2011) 1405–1414.
[18] E. Cadenas, W. Rivera, R. Campos-Amezcua, C. Heard, Wind speed prediction using a univariate arima model and a multivariate narx model, Energies 9 (2) (2016) 109.
[19] P. Pawar, M. TarunKumar, et al., An iot based intelligent smart energy management system with accurate forecasting and load strategy for renewable generation, Measurement 152 (2020) 107187.
[20] R. Corizzo, M. Ceci, H. Fanaee-T, J. Gama, Multi-aspect renewable energy forecasting, Information Sciences 546 (2021) 701–722.
[21] I. Parvez, A. Sarwat, A. Debnath, T. Olowu, M. G. Dastgir, H. Riggs, Multi-layer perceptron based photovoltaic forecasting for rooftop pv applications in smart grid, in: 2020 SoutheastCon, 2020, pp. 1–6. doi:10.1109/SoutheastCon44009.2020.9249681.
[22] H. Shi, M. Xu, R. Li, Deep learning for household load forecasting—a novel pooling deep rnn, IEEE Transactions on Smart Grid 9 (5) (2018) 5271–5280. doi:10.1109/TSG.2017.2686012.
[23] D. Zheng, A. T. Eseye, J. Zhang, H. Li, Short-term wind power forecasting using a double-stage hierarchical anfis approach for energy management in microgrids, Protection and Control of Modern Power Systems 2 (1) (2017) 1–10.
[24] W. VanDeventer, E. Jamei, G. S. Thirunavukkarasu, M. Seyedmahmoudian, T. K. Soon, B. Horan, S. Mekhilef, A. Stojcevski, Short-term pv power forecasting using hybrid gasvm technique, Renewable Energy 140 (2019) 367–379.
[25] D. W. Van der Meer, J. Munkhammar, J. Widén, Probabilistic forecasting of solar power, electricity consumption and net load: Investigating the effect of seasons, aggregation and penetration on prediction intervals, Solar Energy 171 (2018) 397–413.
[26] Y. He, H. Li, Probability density forecasting of wind power using quantile regression neural network and kernel density estimation, Energy Conversion and Management 164 (2018) 374–384.
[27] S. Alessandrini, L. Delle Monache, S. Sperati, J. Nissen, A novel application of an analog ensemble for short-term wind power forecasting, Renewable Energy 76 (2015) 768–781.
[28] G. Cervone, L. Clemente-Harding, S. Alessandrini, L. Delle Monache, Short-term photovoltaic power forecasting using artificial neural networks and an analog ensemble, Renewable Energy 108 (2017) 274–286.
[29] Y. Guo, Y. Li, X. Qiao, Z. Zhang, W. Zhou, Y. Mei, J. Lin, Y. Zhou, Y. Nakanishi, Bilstm multitask learning-based combined load forecasting considering the loads coupling relationship for multienergy system, IEEE Transactions on Smart Grid 13 (5) (2022) 3481–3492. doi:10.1109/TSG.2022.3173964.
[30] M. Elkazaz, M. Sumner, R. Davies, S. Pholboon, D. Thomas, Optimization based real-time home energy management in the presence of renewable energy and battery energy storage, in: 2019 International Conference on Smart Energy Systems and Technologies (SEST), 2019, pp. 1–6. doi:10.1109/SEST.2019.8849105.
[31] K. Zaouali, R. Rekik, R. Bouallegue, Deep learning forecasting based on auto-lstm model for home solar power systems, in: 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), 2018, pp. 235–242. doi:10.1109/HPCC/SmartCity/DSS.2018.00062.
[32] M. Shakir, Y. Biletskiy, Forecasting and optimisation for microgrid in home energy management systems, IET Generation, Transmission & Distribution 14 (17) (2020) 3458–3468.
[33] A. Manur, M. Marathe, A. Manur, A. Ramachandra, S. Subbarao, G. Venkataramanan, Smart solar home system with solar forecasting, in: 2020 IEEE International Conference on Power Electronics, Smart Grid and Renewable Energy (PESGRE2020), 2020, pp. 1–6. doi:10.1109/PESGRE45664.2020.9070340.
[34] Y. Ma, B. Li, Hybridized intelligent home renewable energy management system for smart grids, Sustainability 12 (5) (2020) 2117.
[35] K. Aurangzeb, S. Aslam, S. I. Haider, S. M. Mohsin, S. u. Islam, H. A. Khattak, S. Shah, Energy forecasting using multiheaded convolutional neural networks in efficient renewable energy resources equipped with energy storage system, Transactions on Emerging Telecommunications Technologies 33 (2) (2022) e3837.
[36] E. Sarker, M. Seyedmahmoudian, E. Jamei, B. Horan, A. Stojcevski, Optimal management of home loads with
renewable energy integration and demand response strategy, Energy 210 (2020) 118602.
[37] M. Ren, X. Liu, Z. Yang, J. Zhang, Y. Guo, Y. Jia, A novel forecasting based scheduling method for household energy management system based on deep reinforcement learning, Sustainable Cities and Society 76 (2022) 103207.
[38] P. Lissa, C. Deane, M. Schukat, F. Seri, M. Keane, E. Barrett, Deep reinforcement learning for home energy management system control, Energy and AI 3 (2021) 100043.
[39] L. Yu, W. Xie, D. Xie, Y. Zou, D. Zhang, Z. Sun, L. Zhang, Y. Zhang, T. Jiang, Deep reinforcement learning for smart home energy management, IEEE Internet of Things Journal 7 (4) (2020) 2751–2762. doi:10.1109/JIOT.2019.2957289.
[40] Z. Wan, H. Li, H. He, Residential energy management with deep reinforcement learning, in: 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1–7. doi:10.1109/IJCNN.2018.8489210.
[41] A. Mathew, A. Roy, J. Mathew, Intelligent residential energy management system using deep reinforcement learning, IEEE Systems Journal 14 (4) (2020) 5362–5372. doi:10.1109/JSYST.2020.2996547.
[42] Y. Liu, D. Zhang, H. B. Gooi, Optimization strategy based on deep reinforcement learning for home energy management, CSEE Journal of Power and Energy Systems 6 (3) (2020) 572–582. doi:10.17775/CSEEJPES.2019.02890.
[43] R. Lu, R. Bai, Y. Ding, M. Wei, J. Jiang, M. Sun, F. Xiao, H.-T. Zhang, A hybrid deep learning-based online energy management scheme for industrial microgrid, Applied Energy 304 (2021) 117857.
[44] Y. Ji, J. Wang, J. Xu, X. Fang, H. Zhang, Real-time energy management of a microgrid using deep reinforcement learning, Energies 12 (12) (2019) 2291.
[45] S. Totaro, I. Boukas, A. Jonsson, B. Cornélusse, Lifelong control of off-grid microgrid with model-based reinforcement learning, Energy 232 (2021) 121035.
[46] Y. Gao, Y. Matsunami, S. Miyata, Y. Akashi, Operational optimization for off-grid renewable building energy system using deep reinforcement learning, Applied Energy 325 (2022) 119783.
[47] M. Nakip, A. Asut, C. Kocabıyık, C. Güzeliş, A smart home demand response system based on artificial neural networks augmented with constraint satisfaction heuristic, in: 2021 13th International Conference on Electrical and Electronics Engineering (ELECO), IEEE, 2021, pp. 580–584.
[48] M. Nakıp, Recurrent Trend Predictive Neural Network - Keras Implementation (5 2022). doi:10.5281/zenodo.6560245. URL https://github.com/mertnakip/Recurrent-Trend-Predictive-Neural-Network
[49] S. Katoch, S. S. Chauhan, V. Kumar, A review on genetic algorithm: past, present, and future, Multimedia Tools and Applications 80 (5) (2021) 8091–8126.
