Journal Pone 0278071
RESEARCH ARTICLE
1 Graduate School of International Studies, Yonsei University, Seoul, South Korea, 2 Graduate School of Information, Yonsei University, Seoul, South Korea

* [email protected]

OPEN ACCESS

Citation: Chung J, Jang B (2022) Accurate prediction of electricity consumption using a hybrid CNN-LSTM model based on multivariable data. PLoS ONE 17(11): e0278071. https://doi.org/10.1371/journal.pone.0278071

Editor: Yogendra Arya, J.C. Bose University of Science and Technology, YMCA, India

Received: June 13, 2022
Accepted: November 8, 2022
Published: November 23, 2022

Copyright: © 2022 Chung, Jang. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

The stress placed on global power supply systems by the growing demand for electricity has been increasing steadily in recent years. Thus, accurate forecasting of energy demand and consumption is essential to sustainably maintain the lifestyle and economic standards of nations. However, multiple factors, including climate change, affect the energy demands of local, national, and global power grids. Therefore, effective analysis of multivariable data is required for the accurate estimation of energy demand and consumption. In this context, some studies have suggested that LSTM and CNN models can be used to model electricity demand accurately. However, existing works have been trained on either electricity loads and weather observations or national metrics, e.g., gross domestic product, imports, and exports. This binary segregation has degraded forecasting performance. To resolve this shortcoming, we propose a CNN-LSTM model based on a multivariable augmentation approach. Based on previous studies, we adopt 1D convolution and pooling to extract undiscovered features from temporal sequences. LSTM outperforms RNN on vanishing gradient problems while retaining its benefits regarding time-series variables. The proposed model exhibits near-perfect forecasting of electricity consumption, outperforming existing models. Further, state-level analysis and training are performed, demonstrating the utility of the proposed methodology in forecasting regional energy consumption. The proposed model outperforms other models in most areas.
snowfall increase the chances of electrical failure. Therefore, the accurate forecasting of elec-
tricity consumption has become more important to ensure a stable electricity supply.
Several studies have been conducted to predict energy consumption trends. Long short-
term memory (LSTM) models combined with convolutional neural networks (CNNs) have
been proposed for the hourly and daily forecast of energy demands, which outperform multi-
layer perceptron (MLP) and recurrent neural network (RNN) models [5–7]. Electricity load
forecasting (LF) studies [5, 7, 8] have utilized historical electricity load and climate data
uploaded in real time. Most energy consumption forecasting studies [9–12] have been con-
ducted considering factors such as population, import/export values, and gross domestic prod-
uct (GDP). The nomenclature utilized in this study is presented in Table 1.
Most existing studies on electrical energy consumption (EEC) and total energy consump-
tion (TEC) forecasting methods focus on macroeconomic features, including population,
GDP, and import/export values. On the other hand, existing studies on short-term load fore-
casting (STLF) and mid-term load forecasting (MTLF) consider only historical load and
weather/time information [13]. However, this binary separation of components constrains the
forecasting accuracy of energy demand and consumption—the components must be inte-
grated to improve prediction performance.
Table 2 presents an overview of existing load and energy consumption forecasting, includ-
ing their input and target variables, forecasting time intervals, and forecasting areas. In terms
of forecasting time intervals, energy forecasting can be categorized into two groups: 1)
monthly, seasonal, and annual forecasting [1, 10–12, 14–16] and 2) per-minute, hourly, and
daily forecasting [5–8, 17]. The former type considers socio-economic input variables such as GDP, population, sales index, production index, import/export, demographic, personality,
and Google Trends data. The latter type involves observations of the actual electric load, volt-
age, submetering, and weather. Further, the target areas of the two types are different. Methods
of the first group predict energy consumption at state [9, 14, 15], national [11, 12, 14, 16], and
global [10] levels whereas those in the second group [6–8, 17] predict the energy consumption
of relatively smaller areas. In [9], a neural network ensemble approach based on a novel sparse
adaboost framework and an echo state network was used to improve generalization ability and
construct nonlinear relations between electricity demand and other factors. The proposed
model was validated by using it to predict the industrial electricity consumption of Hubei
Province of China. However, further real-life applications in different regions of China are
required to establish its generality. In [10], Google Trends data, including the search history of
certain keywords, were used as the input dataset and an online big data-driven oil consump-
tion forecasting approach was constructed for application alongside statistical and machine
learning (ML) techniques. It was the first attempt to use Google search trends to predict uncertain yet essential oil consumption. It also proved the predictive power of Google
search trends through relationship investigation. However, the proposed prediction improve-
ment techniques did not explore deep learning (DL) modules. In [14], the forecasting speed of
residential electricity consumption was prioritized, contrary to most residential electricity con-
sumption studies that focus on forecasting accuracy. The authors proposed a hybrid model combining an improved whale optimization algorithm with an optimized grey seasonal variation index, which exhibited both high accuracy and fast convergence. Consideration of forecasting speed is practical
because real-life implementation requires fast forecasting. The aforementioned model also
exhibited “excellent” forecasting accuracy. In [8], the periodic part of the household residual
load forecasting was modeled based on the behavioral patterns of overestimated/underesti-
mated residual components. The model exhibited a significant improvement in the prediction
accuracy of periodic residual demand. When combined with climatic data, it also improved
total power consumption prediction. Nevertheless, as the real dataset of the experiment per-
tained to a single household, its application in broader residential households remains to be
established. To improve the performance of RNN on electric load forecasting at a specific
time, a recurrent inception convolution neural network combining RNN and 1D CNN was
proposed in [6]. To this end, a 1D CNN inception model was used to balance prediction time
and hidden state vector values. Its performance was verified based on power usage data
obtained from three distribution complexes in South Korea. The model outperformed the
benchmarks of MLP, RNN, and 1D CNN. However, because there are multitudinous power
distribution complexes in South Korea, its application to the entire country requires further
verification.
Contribution
As mentioned in the Literature Review and the Research Gap and Motivation sections, previ-
ous approaches to the prediction of electricity consumption and demands have not considered
the two types of input factors simultaneously. However, greater integration of inputs is necessary to improve prediction accuracy. Moreover, monthly state-level predictions are required to account for the variable energy consumption patterns of different states. Finally, we utilize an interpolation technique to generate intermediate values between monthly data points. While the time units of LF datasets are as fine-grained as seconds and milliseconds, the finest unit of time in the case of EEC/TEC is a month. Our interpolation bridges the gaps between data points of different frequencies and
ensures better prediction scores. The various contributions of this paper are as follows:
• We present a novel EEC forecasting method based on small datasets using data
augmentation.
• The proposed model integrates input components of EEC/TEC forecasting and LF to
improve prediction performance.
• To the best of our knowledge, this is the first study to conduct the state-level monthly EEC
forecasting for South Korea.
Paper organization
The rest of this paper is structured as follows. In Materials and Methods, the proposed deep
learning model is introduced along with the requisite background information. Experimental
validation of the proposed method is presented in the Results section. In the Discussion sec-
tion, the motivation of the study is re-stated and the results obtained using the proposed
method are analyzed. Finally, the study is concluded in the Conclusions section.
Data augmentation
Certain input variables cannot be collected at the frequency of a target variable. Specifically,
macroeconomic factors, such as import and export values, are usually collected monthly, quar-
terly, and annually, while empirical observations of temperature, rain, and electric load are
measured every second, minute, or hour. To preserve the characteristics of both domains, a
new dataset of daily frequency can be used as a middle ground.
Fig 2 illustrates the workflow of the proposed method. Once the rows of new daily values
are generated, interpolation is used to generate the simulated values. In power forecasting, as
mentioned in [23], piecewise cubic polynomial interpolation represents a reasonable compro-
mise between computational cost and flexibility. In [24] as well, piecewise cubic interpolation
was preferred over other spline methods to smooth the down-sampled data, suggesting a data-
driven load-forecasting method. However, in the aforementioned study, no significant performance improvement was reported for the cubic spline compared to the quadratic spline; instead, piecewise cubic interpolation required greater computational time during the training process than the quadratic spline. Therefore, piecewise quadratic-spline
interpolation is used for data augmentation in this study.
The function, S(x), interpolates each local data point piecewise to restrict Runge’s phenom-
enon. Here,
S_i(x) = a_i + b_i(x − x_i) + c_i(x − x_i)²,    (1)
where i ∈ [0, 1, . . ., n] and x ∈ ℝ. The parameters a_i, b_i, and c_i denote quadratic polynomial
coefficients. A multi-dimensional array of size 6180 × lag × features defines a new daily poly-
nomial coefficient structure. For each region, the length of the time series is taken to be 6180,
lag denotes the length of the shifted time-frequency for single-step forecasting, and features
denote multivariable factors from the given datasets containing data regarding electricity con-
sumption, weather information, and import/export indexes.
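As a concrete sketch of this augmentation step (hypothetical monthly figures; pandas with SciPy assumed, since pandas delegates `method="quadratic"` to SciPy), the monthly-to-daily upsampling and spline fill might look like:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly series (e.g., electricity consumption for one region).
monthly = pd.Series(
    [100.0, 120.0, 90.0, 110.0],
    index=pd.to_datetime(["2020-01-31", "2020-02-29", "2020-03-31", "2020-04-30"]),
)

# Upsample to daily frequency: the newly created daily rows start out as NaN.
daily = monthly.resample("D").asfreq()

# Piecewise quadratic-spline interpolation fills in the simulated daily values.
daily = daily.interpolate(method="quadratic")
```

Because the original month-end dates are knots of the spline, evaluating the fitted curve at those dates returns the original monthly values, so the monthly characteristics are preserved in the augmented daily series.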
Despite its nonlinearity, MLP experiences overfitting and vanishing gradient problems. Over-
fitted models fail to predict test data correctly, thereby diminishing their feasibility. In addi-
tion, in MLPs containing hundreds of hidden layers, the backpropagation process can
diminish the gradient to 0, or close to 0, impeding further training.
2. Recurrent neural networks. RNNs and RNN-based models exhibit superior perfor-
mance on sequence data, such as text and time series data [25]. While feedforward neural net-
works remember only the current input, RNNs store information regarding the temporal
order and consider previous inputs as well as the current state during decision-making. As in
the case of a feedforward network, an RNN first calculates the loss function on a batch (for-
ward propagation). Then, it updates the gradients based on the current state by calculating the
state memory of earlier instances (backpropagation through time).
a⟨t⟩ = tanh(W_ax x⟨t⟩ + W_aa a⟨t−1⟩ + b_a)    (3)
Eq (3) presents a basic RNN cell, where x⟨t⟩ denotes the current input, and a⟨t−1⟩ denotes the previous hidden state carrying previous information. A single RNN unit takes x⟨t⟩ and a⟨t−1⟩ as inputs and outputs a⟨t⟩ to the following RNN cell. Output a⟨t⟩ is also used to predict y⟨t⟩. However, RNNs are also prone to the vanishing gradient problem, as their cells have short-term memory.
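The recurrence in Eq (3) can be sketched in a few lines of NumPy (weight shapes and the random toy sequence are illustrative, not from the paper):

```python
import numpy as np

def rnn_cell(x_t, a_prev, Wax, Waa, ba):
    """One step of a vanilla RNN cell: a<t> = tanh(Wax x<t> + Waa a<t-1> + ba)."""
    return np.tanh(Wax @ x_t + Waa @ a_prev + ba)

rng = np.random.default_rng(0)
n_x, n_a = 3, 4                       # input size and hidden-state size
Wax = rng.standard_normal((n_a, n_x))
Waa = rng.standard_normal((n_a, n_a))
ba = np.zeros(n_a)

# Unroll over a short sequence: each step feeds a<t> into the next cell.
a = np.zeros(n_a)
for x_t in rng.standard_normal((5, n_x)):
    a = rnn_cell(x_t, a, Waa=Waa, Wax=Wax, ba=ba)
```

The tanh nonlinearity keeps every hidden-state component in (−1, 1); repeatedly multiplying by Waa during backpropagation through time is what shrinks gradients over long sequences.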
3. Long short-term memory. To counteract the vanishing gradient problem of RNNs, three gates that forget, update, and output the hidden states of current and previous instances are used in LSTM [26]. In Eq (5), the forget gate decides whether to delete or retain stored memory values of the input state. If one of the values of G_f⟨t⟩ is 0, or close to 0, the LSTM cell removes information from the corresponding component of c⟨t−1⟩. If one of the values is 1, the LSTM cell retains the information. Similarly, the update gate reflects the information from the forget gate, whereas the output gate determines the outputs to be used. All these decisions are then added to the cell state, and the memory variable is subsequently delivered to the subsequent LSTM cell with a hidden state. Thus, each LSTM cell tracks and updates a memory variable c⟨t⟩ (i.e., a cell state) at each time step, which can be different from a⟨t⟩.
G_f⟨t⟩ = σ(W_f [a⟨t−1⟩; x⟨t⟩] + b_f)    (5)

G_u⟨t⟩ = σ(W_u [a⟨t−1⟩; x⟨t⟩] + b_u)    (6)

ĉ⟨t⟩ = tanh(W_c [a⟨t−1⟩; x⟨t⟩] + b_c)    (7)

c⟨t⟩ = G_f⟨t⟩ ⊙ c⟨t−1⟩ + G_u⟨t⟩ ⊙ ĉ⟨t⟩    (8)

G_o⟨t⟩ = σ(W_o [a⟨t−1⟩; x⟨t⟩] + b_o)    (9)
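One step of the gated cell described above might be sketched as follows (a minimal NumPy illustration of the forget/update/output gates; the candidate-memory term and all weight shapes follow standard LSTM conventions and are not values from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, a_prev, c_prev, p):
    """One LSTM step over the concatenated vector [a<t-1>; x<t>]."""
    concat = np.concatenate([a_prev, x_t])
    G_f = sigmoid(p["Wf"] @ concat + p["bf"])     # forget gate
    G_u = sigmoid(p["Wu"] @ concat + p["bu"])     # update gate
    c_cand = np.tanh(p["Wc"] @ concat + p["bc"])  # candidate memory
    c_t = G_f * c_prev + G_u * c_cand             # new cell state
    G_o = sigmoid(p["Wo"] @ concat + p["bo"])     # output gate
    a_t = G_o * np.tanh(c_t)                      # new hidden state
    return a_t, c_t

rng = np.random.default_rng(1)
n_x, n_a = 3, 4  # illustrative input and hidden-state sizes
p = {k: rng.standard_normal((n_a, n_a + n_x)) for k in ("Wf", "Wu", "Wc", "Wo")}
p.update({k: np.zeros(n_a) for k in ("bf", "bu", "bc", "bo")})
a, c = np.zeros(n_a), np.zeros(n_a)
a, c = lstm_cell(rng.standard_normal(n_x), a, c, p)
```

Note how the cell state c⟨t⟩ is updated additively: this additive path is what lets gradients flow across many time steps without vanishing.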
series sequences. Then, the CNN layer transmits the results to the LSTM layer, and the dense
layer forecasts the future electricity consumption of each region.
In Eq (11), h¹_ij denotes the output vector of the first convolution layer, where i ∈ [0, 1, . . ., n], x⁰_i = x_1, x_2, . . ., x_n denotes the input energy consumption vector, n denotes the length of the time-series input sequence, j denotes the index value of the feature map corresponding to each lag, M denotes the number of filters, and W denotes the weight vector of the kernel. h^l_ij denotes the ith value of the output of the lth convolution layer. To reduce the network computation costs and the number of trainable parameters, the CNN layer uses a pooling layer to reduce the spatial size of the representation. The pooling layer p^l_ij integrates a neuron filter on a previous layer into a scalar in the subsequent layer, where T denotes the stride size and R denotes the pooling size. Max-pooling, which selects the maximum value from each filter during feature extraction, is used in this paper.
h¹_ij = σ( Σ_{m=1}^{M} W¹_{m,j} x⁰_{i+m−1,j} + b¹_j )    (11)

h^l_ij = σ( Σ_{m=1}^{M} W^l_{m,j} x^{l−1}_{i+m−1,j} + b^l_j )    (12)
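For a single filter and channel, the convolution and max-pooling operations described above can be sketched as follows (toy input values; R = T = 2 matches the pooling description):

```python
import numpy as np

def conv1d(x, W, b):
    """Valid 1D convolution: h_i = relu(sum_m W_m * x_{i+m-1} + b)."""
    M = len(W)
    h = np.array([np.dot(W, x[i:i + M]) + b for i in range(len(x) - M + 1)])
    return np.maximum(h, 0.0)  # ReLU as the activation

def max_pool1d(h, R=2, T=2):
    """Max pooling with pooling size R and stride T."""
    return np.array([h[i:i + R].max() for i in range(0, len(h) - R + 1, T)])

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0])
h = conv1d(x, W=np.array([0.5, 0.5]), b=0.0)  # -> [2.0, 2.5, 3.5, 4.5, 5.0]
p = max_pool1d(h)                              # -> [2.5, 4.5]
```

The feature map shrinks from six input values to two pooled values, illustrating how pooling cuts the spatial size of the representation and, with it, the downstream computation.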
The kernel of the 1D CNN moves vertically across a feature map; hence, it receives a single
integer as the filter height. In contrast, a 2D CNN has a two-dimensional kernel, e.g., a 2 × 1 fil-
ter [7]. A 1D CNN is selected in this study instead of a 2D CNN because our input data con-
sists of a one-dimensional time series sequence. In addition, 1D CNN is more competitive in
terms of computational cost than 2D CNN [28].
5. Evaluation metrics. The proposed model was compared with existing ones in terms of
RMSE and MAPE. RMSE calculates prediction errors and indicates how far the residuals are
from the line of best fit. RMSE is useful to configure models for a specific variable and compare
prediction errors between different models. RMSE is defined as follows:
RMSE = √( (1/n) Σ_{t=1}^{n} (C_t − A_t)² )    (14)
where n denotes the number of observations, Ct denotes the forecasted values of consumption,
and At denotes the observed actual values at time stamp t.
MAPE is used to compare the forecasting accuracy of different time-series models. For each forecasted point in period t, the prediction error is given by e_t = A_t − C_t, and the absolute value of the percentage error, |e_t/A_t| = |(A_t − C_t)/A_t|, is summed and divided by the number of fitted points, n. MAPE over n periods is given by:
MAPE = (1/n) Σ_{t=1}^{n} |(A_t − C_t)/A_t| × 100    (15)
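Both metrics are straightforward to compute; a minimal NumPy sketch with toy values (not from the paper's experiments):

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error of forecasts C against observations A, Eq (14)."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.sqrt(np.mean((forecast - actual) ** 2))

def mape(actual, forecast):
    """Mean absolute percentage error, Eq (15); undefined when any actual is 0."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

A = [100.0, 200.0]  # observed consumption
C = [110.0, 190.0]  # forecasted consumption
# rmse(A, C) == 10.0 and mape(A, C) ≈ 7.5
```

RMSE is expressed in the units of the target variable and penalizes large residuals, whereas MAPE is scale-free, which is why it is suited to comparing models across regions with different consumption levels.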
Results
Data pre-processing
For data pre-processing, the multivariable time-series data are rescaled from monthly to daily
periods. Each feature contains 204 rows of monthly values collected between January 2004 and
December 2020 (204 months). These 204 months are upsampled into 6180 days for each of the 16 South Korean regions. The upsampled dates contain NaN values for all three features: historical electricity usage, export value, and import value. Historical electricity consumption is defined as the sum
of residential, industrial, educational, and other electricity usages. Export and import values
denote the total price of exported and imported merchandise, respectively. As mentioned in
Extraction of Important Features, because the climate variable has little correlation with histor-
ical electricity usage, climatic data is excluded from experiments. The empty values are esti-
mated using spline interpolation. The total length of the preprocessed data is taken to be
6180 × 16 × 4, without any missing values. Following previous energy forecasting studies [6],
MLP is adopted as our baseline model. The baseline method is taken to be univariate single-
step forecasting based on a previous consumption time series. To confirm the effect of aug-
mentation, multivariable forecasting is also included in this experiment. The unaugmented
univariate and multivariable methods are used to predict the monthly consumption based on
the data of the previous two months. The proposed multivariable augmented model uses the
data of the interpolated past 30 days to predict the consumption figure for each day. To com-
pare the daily forecasts with the monthly forecasts, we extract consumption predictions for
specific dates corresponding to the time intervals before interpolation.
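The final step, extracting predictions at the pre-interpolation timestamps for comparison against monthly figures, can be sketched as follows (hypothetical daily forecasts; pandas assumed):

```python
import numpy as np
import pandas as pd

# Hypothetical daily forecast series produced by the augmented model.
daily_pred = pd.Series(
    np.arange(60, dtype=float),
    index=pd.date_range("2020-01-01", periods=60, freq="D"),
)

# Keep only the predictions at dates that existed before interpolation
# (here, the month-end observation dates).
original_dates = pd.to_datetime(["2020-01-31", "2020-02-29"])
monthly_pred = daily_pred.loc[original_dates]
```

This selection makes the daily forecasts directly comparable with the monthly forecasts of the unaugmented baselines.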
Model hyperparameters
To focus on the impact of data augmentation, the architecture of the DL models is simplified.
The MLP consists of a single dense layer with one node and a linear activation function. Both the RNN and LSTM comprise 128 units; a ReLU (Rectified Linear Unit) activation function and a dropout layer follow the single hidden layer. The CNN-LSTM contains two 1D CNN layers with 128 filters, a kernel size of 1, a stride of 1, a pooling size of 2, and a ReLU activation function, followed by an LSTM layer with hyperparameters identical to the
LSTM-only model. All models accept a 2D array of (length of dataset, number of features) as
the input and use the Adam optimizer for compilation. The final dense layer of the augmented model forecasts the power consumption for the subsequent day, and those of the other models predict the consumption for the subsequent month. All models split the training and test datasets following a 70:30 ratio. The training is repeated for each model, region, and method to ensure a
fair comparison.
regions using different data processing methods. The proposed augmented multivariable model yields the best results in all states, exhibiting the lowest RMSE and MAPE scores.
Ablation studies
To establish the superiority of the proposed model, two experiments are conducted as ablation
studies. First, following the 1D CNN kernel visualization technique presented in [7, 31], Fig 6
illustrates the noise-reducing power of each 1D CNN layer. The intermediate output obtained
from the second convolution network is observed to be less spiky than the first kernel. This is
because each CNN filter reduces the noise of the input dataset. This selectivity over input
information also allows cost-effective computation and is, hence, useful for mid-term time
series forecasting. The illustration aids the analysis of the number of layers that benefit from
smoothing the noise of raw data and selectively extracting important information. Moreover,
the loss landscapes of LSTM and the proposed model are compared in Fig 7. In [32], the loss
landscape was introduced as a method to represent loss convergence of a model throughout
the training process. From the figure, it is evident that the loss of the proposed CNN-LSTM hybrid model converges satisfactorily to the minimum, and the surface is smooth and bowl-shaped, indicating facile training. On the other hand, the LSTM-only model exhibits a rugged, non-convex loss surface in which the minimum is not visible, indicating that the model is more difficult to train. This demonstrates that placing 1D CNN layers before the LSTM layer simplifies training.
Discussion
Electricity is the backbone of modern society. As such, the accurate prediction of electricity
demand and consumption is more significant now than ever before. However, many factors
complicate the forecasting accuracy of energy consumption, necessitating the development of
advanced forecasting models. In this paper, we presented a new hybrid CNN-LSTM multivariable EEC forecasting technique that integrates the advantages of CNNs and LSTMs. To incorporate economic insights into energy usage, the proposed technique was expanded using
import/export values. The proposed approach enables the prediction of regional monthly elec-
tricity consumption. We tested the proposed techniques (components and algorithm) against baseline techniques trained on univariable historical energy consumption data and compared them to conventional deep learning models. Comprehensive experimental results proved that the proposed technique extracts useful relationships between international trade values and electricity usage, thereby improving the accuracy of national/regional predictions. We also expanded the proposed technique to incorporate data interpolation; the proposed mechanism can be further improved using data augmentation.
Conclusions
In this study, we investigated the effects of CNN-LSTM on augmented multivariable time
series datasets. We concluded that dimension reduction using the pooling layer of 1D CNN
reduces noise and thereby reduces the RMSE and MAPE scores. The LSTM layer was also
observed to be well suited to process time series data as it receives inputs for each time step.
Extensive experiments and ablation studies were performed, establishing the benefits afforded
by the proposed CNN-LSTM architecture paired with multivariable augmentation to provin-
cial time series forecasting for EEC.
Author Contributions
Conceptualization: Jaewon Chung, Beakcheol Jang.
Data curation: Jaewon Chung.
Formal analysis: Jaewon Chung.
Funding acquisition: Beakcheol Jang.
Supervision: Beakcheol Jang.
Visualization: Jaewon Chung.
Writing – original draft: Jaewon Chung.
Writing – review & editing: Jaewon Chung.
References
1. Wang Q, Su M, Li R, Ponce P. The effects of energy prices, urbanization and economic growth on
energy consumption per capita in 186 countries. Journal of cleaner production. 2019; 225:1017–1032.
https://doi.org/10.1016/j.jclepro.2019.04.008
2. Arora V, Lieskovsky J. Electricity use as an indicator of US economic activity; 2016.
3. Haes Alhelou H, Hamedani-Golshan ME, Njenda TC, Siano P. A survey on power system blackout and
cascading events: Research motivations and challenges. Energies. 2019; 12(4):682. https://doi.org/10.
3390/en12040682
4. Cissokho L, Seck A. Electric power outages and the productivity of small and medium enterprises in
Senegal. Investment climate and business environment research fund Report. 2013; 77:13.
5. Mujeeb S, Javaid N, Ilahi M, Wadud Z, Ishmanov F, Afzal MK. Deep long short-term memory: A new
price and load forecasting scheme for big data in smart cities. Sustainability. 2019; 11(4):987. https://
doi.org/10.3390/su11040987
6. Kim J, Moon J, Hwang E, Kang P. Recurrent inception convolution neural network for multi short-term
load forecasting. Energy and buildings. 2019; 194:328–341. https://doi.org/10.1016/j.enbuild.2019.04.
034
7. Kim TY, Cho SB. Predicting residential energy consumption using CNN-LSTM neural networks.
Energy. 2019; 182:72–81. https://doi.org/10.1016/j.energy.2019.05.230
8. Amara F, Agbossou K, Dubé Y, Kelouwani S, Cardenas A, Hosseini SS. A residual load modeling
approach for household short-term load forecasting application. Energy and Buildings. 2019; 187:132–
143. https://doi.org/10.1016/j.enbuild.2019.01.009
9. Wang L, Lv SX, Zeng YR. Effective sparse adaboost method with ESN and FOA for industrial electricity
consumption forecasting in China. Energy. 2018; 155:1013–1031. https://doi.org/10.1016/j.energy.
2018.04.175
10. Yu L, Zhao Y, Tang L, Yang Z. Online big data-driven oil consumption forecasting with Google trends.
International Journal of Forecasting. 2019; 35(1):213–223. https://doi.org/10.1016/j.ijforecast.2017.11.
005
11. Hu H, Wang L, Peng L, Zeng YR. Effective energy consumption forecasting using enhanced bagged
echo state network. Energy. 2020; 193:116778. https://doi.org/10.1016/j.energy.2019.116778
12. Kaytez F. A hybrid approach based on autoregressive integrated moving average and least-square sup-
port vector machine for long-term forecasting of net electricity consumption. Energy. 2020; 197:117200.
https://doi.org/10.1016/j.energy.2020.117200
13. Aslam S, Herodotou H, Mohsin SM, Javaid N, Ashraf N, Aslam S. A survey on deep learning methods
for power load and renewable energy forecasting in smart microgrids. Renewable and Sustainable
Energy Reviews. 2021; 144:110992. https://doi.org/10.1016/j.rser.2021.110992
14. Xiong X, Hu X, Guo H. A hybrid optimized grey seasonal variation index model improved by whale optimization algorithm for forecasting the residential electricity consumption. Energy. 2021. https://doi.org/10.1016/j.energy.2021.121127
15. Hadjout D, Torres JF, Troncoso A, Sebaa A, Martínez-Álvarez F. Electricity consumption forecasting based on ensemble deep learning with application to the Algerian market. Energy. 2022. https://doi.org/10.1016/j.energy.2021.123060
16. Jana RK, Ghosh I, Sanyal MK. A granular deep learning approach for predicting energy consumption.
Applied Soft Computing. 2020; 89:106091. https://doi.org/10.1016/j.asoc.2020.106091
17. Liu C, Sun B, Zhang C, Li F. A hybrid prediction model for residential electricity consumption using holt-
winters and extreme learning machine. Applied energy. 2020; 275:115383. https://doi.org/10.1016/j.
apenergy.2020.115383
18. K Indicator. Exports and Imports Ratio (GDP); 2022.
19. KOSIS. Electric Power Consumption by Use; 2022.
20. OPEN DATA PORTAL. KEPCO Electricity Usage Status by Region Application; 2022.
21. Open MET Data Portal. Statistic Division; 2022.
22. Korea Customs Service. Trade Statistics for Import/Export by Region; 2022.
23. Wang G, Jia L. Short-term wind power forecasting based on BOMLS K-means similar hours Clustering
method. In: 2019 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC). IEEE;
2019. p. 1–5.
24. Williams S, Short M. Electricity demand forecasting for decentralised energy management. Energy and
Built Environment. 2020; 1(2):178–186. https://doi.org/10.1016/j.enbenv.2020.01.001
25. Lipton ZC, Berkowitz J, Elkan C. A critical review of recurrent neural networks for sequence learning.
arXiv preprint arXiv:150600019. 2015.
26. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997; 9(8):1735–1780.
https://doi.org/10.1162/neco.1997.9.8.1735 PMID: 9377276
27. Livieris IE, Pintelas E, Pintelas P. A CNN–LSTM model for gold price time-series forecasting. Neural
computing and applications. 2020; 32(23):17351–17360. https://doi.org/10.1007/s00521-020-04867-x
28. Kiranyaz S, Avci O, Abdeljaber O, Ince T, Gabbouj M, Inman DJ. 1D convolutional neural networks and
applications: A survey. Mechanical systems and signal processing. 2021; 151:107398. https://doi.org/
10.1016/j.ymssp.2020.107398
29. Xiao D, Huang Y, Wang H, Shi H, Liu C. Health assessment for piston pump using LSTM neural net-
work. In: 2018 International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC).
IEEE; 2018. p. 131–137.