
Volume 956

Lecture Notes in Electrical Engineering

Series Editors
Leopoldo Angrisani
Department of Electrical and Information Technologies Engineering,
University of Napoli Federico II, Naples, Italy

Marco Arteaga
Departament de Control y Robótica, Universidad Nacional Autónoma de
México, Coyoacán, Mexico

Bijaya Ketan Panigrahi


Electrical Engineering, Indian Institute of Technology Delhi, New Delhi,
Delhi, India

Samarjit Chakraborty
Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich,
Germany

Jiming Chen
Zhejiang University, Hangzhou, Zhejiang, China

Shanben Chen
Materials Science and Engineering, Shanghai Jiao Tong University,
Shanghai, China

Tan Kay Chen


Department of Electrical and Computer Engineering, National University
of Singapore, Singapore, Singapore

Rüdiger Dillmann
Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute for
Technology, Karlsruhe, Germany
Haibin Duan
Beijing University of Aeronautics and Astronautics, Beijing, China

Gianluigi Ferrari
Università di Parma, Parma, Italy

Manuel Ferre
Centre for Automation and Robotics CAR (UPM-CSIC), Universidad
Politécnica de Madrid, Madrid, Spain

Sandra Hirche
Department of Electrical Engineering and Information Science, Technische
Universität München, Munich, Germany

Faryar Jabbari
Department of Mechanical and Aerospace Engineering, University of
California, Irvine, CA, USA

Limin Jia
State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong
University, Beijing, China

Janusz Kacprzyk
Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Alaa Khamis
German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt

Torsten Kroeger
Stanford University, Stanford, CA, USA

Yong Li
Hunan University, Changsha, Hunan, China

Qilian Liang
Department of Electrical Engineering, University of Texas at Arlington,
Arlington, TX, USA
Ferran Martín
Departament d’Enginyeria Electrònica, Universitat Autònoma de
Barcelona, Bellaterra, Barcelona, Spain

Tan Cher Ming


College of Engineering, Nanyang Technological University, Singapore,
Singapore

Wolfgang Minker
Institute of Information Technology, University of Ulm, Ulm, Germany

Pradeep Misra
Department of Electrical Engineering, Wright State University, Dayton,
OH, USA

Sebastian Möller
Quality and Usability Laboratory, TU Berlin, Berlin, Germany

Subhas Mukhopadhyay
School of Engineering and Advanced Technology, Massey University,
Palmerston North, Manawatu-Wanganui, New Zealand

Cun-Zheng Ning
Department of Electrical Engineering, Arizona State University, Tempe, AZ,
USA

Toyoaki Nishida
Graduate School of Informatics, Kyoto University, Kyoto, Japan

Luca Oneto
Department of Informatics, Bioengineering, Robotics and Systems
Engineering, University of Genova, Genova, Genova, Italy

Federica Pascucci
Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome,
Italy

Yong Qin
State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong
University, Beijing, China

Gan Woon Seng


School of Electrical and Electronic Engineering, Nanyang Technological
University, Singapore, Singapore

Joachim Speidel
Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany

Germano Veiga
Campus da FEUP, INESC Porto, Porto, Portugal

Haitao Wu
Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China

Walter Zamboni
DIEM—Università degli studi di Salerno, Fisciano, Salerno, Italy

Junjie James Zhang


Charlotte, NC, USA

The book series Lecture Notes in Electrical Engineering (LNEE) publishes
the latest developments in Electrical Engineering—quickly, informally and
in high quality. While original research reported in proceedings and
monographs has traditionally formed the core of LNEE, we also encourage
authors to submit books devoted to supporting student education and
professional training in the various fields and applications areas of electrical
engineering. The series covers classical and emerging topics concerning:
Communication Engineering, Information Theory and Networks
Electronics Engineering and Microelectronics
Signal, Image and Speech Processing
Wireless and Mobile Communication
Circuits and Systems
Energy Systems, Power Electronics and Electrical Machines
Electro-optical Engineering
Instrumentation Engineering
Avionics Engineering
Control Systems
Internet-of-Things and Cybersecurity
Biomedical Devices, MEMS and NEMS
For general information about this book series, comments or
suggestions, please contact [email protected].
To submit a proposal or request further information, please contact the
Publishing Editor in your country:

China
Jasmine Dou, Editor ([email protected])

India, Japan, Rest of Asia


Swati Meherishi, Editorial Director ([email protected])

Southeast Asia, Australia, New Zealand


Ramesh Nath Premnath, Editor ([email protected])

USA, Canada
Michael Luby, Senior Editor ([email protected])

All other Countries


Leontina Di Cecco, Senior Editor ([email protected])

** This series is indexed by EI Compendex and Scopus databases. **


Editors
Anuradha Tomar, Prerna Gaur and Xiaolong Jin

Prediction Techniques for Renewable Energy Generation and Load Demand Forecasting
Editors
Anuradha Tomar
Department of Instrumentation and Control Engineering, Netaji Subhas
University of Technology, New Delhi, Delhi, India

Prerna Gaur
Director, West Campus, Netaji Subhas University of Technology, New
Delhi, India

Xiaolong Jin
Department of Electrical Engineering, Technical University of Denmark,
Lyngby, Denmark

ISSN 1876-1100 e-ISSN 1876-1119


Lecture Notes in Electrical Engineering
ISBN 978-981-19-6489-3 e-ISBN 978-981-19-6490-9
https://doi.org/10.1007/978-981-19-6490-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Singapore Pte Ltd. 2023

This work is subject to copyright. All rights are solely and exclusively
licensed by the Publisher, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any
other physical way, and transmission or information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service
marks, etc. in this publication does not imply, even in the absence of a
specific statement, that such names are exempt from the relevant protective
laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice
and information in this book are believed to be true and accurate at the date
of publication. Neither the publisher nor the authors or the editors give a
warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The
publisher remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.

This Springer imprint is published by the registered company Springer
Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway
East, Singapore 189721, Singapore
Preface
The intermittent nature of renewable energy generation acts as a barrier to
renewable energy implementation; therefore, renewable energy generation and
load prediction have become a very active area of research. This book gathers
a wide range of research on techniques for renewable energy generation and
load forecasting. It not only covers generation forecasting techniques but
also has a separate section for load forecasting, and it includes a systematic
elaboration of the concept of intelligent techniques for renewable energy and
load forecasting. The book reflects the state of the art in prediction
techniques along with a worldwide perspective and future trends in
forecasting. It covers theory, algorithms, simulations, and error and
uncertainty analysis. It offers a valuable resource for students and
researchers working in the fields of sustainable energy generation, electrical
distribution networks, and prediction techniques. State-of-the-art techniques
in areas such as hybrid methods, machine learning, and artificial intelligence
are included in an effort to present recent innovations in prediction
techniques for renewable energy generation and load forecasting. The research
shared here supports researchers working in renewable energy, load
forecasting, generation forecasting, power engineering, and prediction
techniques, and helps them carry out technical analysis in these areas.
The book covers two sections: renewable energy generation forecasting and
load forecasting. The first chapter, “Introduction to Renewable Energy
Prediction Methods”, deals with the introduction to renewable energy
generation prediction. It discusses the renewable energy status across the
world, possible ways to achieve zero-carbon energy systems, and intelligent
techniques to achieve efficient generation forecasting. The second chapter,
“Solar Power Forecasting in Photovoltaic Modules Using Machine Learning”,
covers solar power forecasting using ML techniques and presents different
models for solar power forecasting. The third chapter, “Hybrid Techniques for
Renewable Energy Prediction”, covers hybrid techniques for renewable energy
prediction, including different hybrid methods for hydropower, wind power,
and solar power prediction. A deep learning technique for renewable energy
prediction is discussed in the fourth chapter, “A Deep Learning-Based
Islanding Detection Approach by Considering the Load Demand of DGs Under
Different Grid Conditions”, which presents a deep learning-based islanding
detection technique. A comparison of PV power estimation methods is discussed
in the fifth chapter, “Comparison of PV Power Production Estimation Methods
Under Non-homogeneous Temperature Distribution for CPVT Systems”. The sixth
chapter, “Renewable Energy Predictions: Worldwide Research Trends and Future
Perspective”, covers worldwide research trends and future perspectives for
renewable energy generation. The seventh chapter, “Models of Load
Forecasting”, provides an overview and elaborates on the concept of load
forecasting, different models, and state-of-the-art techniques for load
forecasting. It also discusses identified benefits and the
challenges/barriers to their further development, including the operational
issues and key challenges related to load forecasting integrated with the
local grid. In the eighth chapter, “Load Forecasting Using Different
Techniques”, the future load is predicted with the help of artificial
intelligence techniques, namely fuzzy logic, ANN, and ANFIS; all three
methods are applied to the data set considered, and the results are analyzed.
The ninth chapter, “Time Load Forecasting: A Smarter Expertise Through Modern
Methods”, discusses time load forecasting and provides an extensive review of
classical methods as well as modern techniques for load forecasting. The
tenth chapter, “Deep Learning Techniques for Load Forecasting”, explains deep
learning techniques for load forecasting from a range of perspectives and
includes load forecasting solutions that can address the key challenges. The
work shared here helps readers improve their knowledge of power engineering
and state-of-the-art forecasting techniques and supports their technical
analysis. Each chapter provides a comprehensive review and concludes with a
case study for better understanding. By following the methods and
applications laid out in this book, one can develop the necessary skills and
expertise for a rewarding career as a researcher.
Anuradha Tomar
Prerna Gaur
Xiaolong Jin
New Delhi, India
New Delhi, India
Lyngby, Denmark
Contents
Introduction to Renewable Energy Prediction Methods
Saqib Yousuf, Junaid Hussain Lanker, Insha, Zarka Mirza,
Neeraj Gupta, Ravi Bhushan and Anuradha Tomar
Solar Power Forecasting in Photovoltaic Modules Using Machine
Learning
Bhavya Dhingra, Anuradha Tomar and Neeraj Gupta
Hybrid Techniques for Renewable Energy Prediction
Guilherme Santos Martins and Mateus Giesbrecht
A Deep Learning-Based Islanding Detection Approach by Considering
the Load Demand of DGs Under Different Grid Conditions
Gökay Bayrak and Alper Yılmaz
Comparison of PV Power Production Estimation Methods Under Non-
homogeneous Temperature Distribution for CPVT Systems
Cihan Demircan, Maria Vicidomini, Francesco Calise,
Hilmi Cenk Bayrakçı and Ali Keçebaş
Renewable Energy Predictions: Worldwide Research Trends and
Future Perspective
Esther Salmerón-Manzano, Alfredo Alcayde and Francisco Manzano-
Agugliaro
Models of Load Forecasting
Sunil Yadav, Bhavesh Tondwal and Anuradha Tomar
Load Forecasting Using Different Techniques
Arshi Khan and M. Rizwan
Time Load Forecasting: A Smarter Expertise Through Modern
Methods
Trina Som
Deep Learning Techniques for Load Forecasting
Neeraj, Pankaj Gupta and Anuradha Tomar
Editors and Contributors
About the Editors
Dr. Anuradha Tomar has more than 12 years of experience in research and
academics. She is currently working as Assistant Professor in the
Instrumentation and Control Engineering Department of Netaji Subhas
University of Technology, Delhi, India. Dr. Tomar completed her postdoctoral
research in the Electrical Energy Systems Group at Eindhoven University of
Technology (TU/e), the Netherlands, and successfully completed the European
Commission's Horizon 2020 UNITED GRID and UNICORN TKI Urban research projects
as a member. She received her B.E. Degree in Electronics Instrumentation and
Control with Honours in 2007 from the University of Rajasthan, India. In
2009, she completed her M.Tech. Degree with Honours in Power Systems from the
National Institute of Technology Hamirpur. She received her Ph.D. in
Electrical Engineering from the Indian Institute of Technology Delhi (IITD).
Dr. Anuradha Tomar has committed her research efforts towards the development
of sustainable, energy-efficient solutions for the empowerment of society and
humankind. Her areas of research interest are operation and control of
microgrids, photovoltaic systems, renewable energy-based rural
electrification, congestion management in LV distribution systems, artificial
intelligence and machine learning applications in power systems, energy
conservation, and automation. She has authored or co-authored 69
research/review papers in reputed international and national journals and
conferences. She is Editor for books with international publishers like
Springer and Elsevier. She has also filed seven Indian patents in her name.
Dr. Tomar is a Senior Member of IEEE and a Life Member of ISTE, IETE, IEI and
IAENG.

Prof. Prerna Gaur completed her B.Tech. in Electrical Engineering (1988),
M.Tech. (1996) and Ph.D. (2009). She is presently Director, West Campus,
NSUT, and Professor and founder Head of the Instrumentation and Control and
Electrical Engineering Departments at NSUT. She has six years of industry
experience and 28 years of teaching experience, with an h-index of 19 and an
i10-index of 42. She is Director and Member Secretary of the Business
Incubator of NSUT and NBA Coordinator of NSUT. She has organized the IEEE
international conferences DELCON 2022, INDICON 2020 and IICPE-2010 at NSUT.
She is actively associated with IEEE (Senior Member), ISTE (Life Member),
IETE (Fellow) and IE (Fellow). She has been Treasurer, IEEE India Council,
since January 2021 and was Chair, IEEE Delhi Section, 2019-20. Her
recognitions include the Outstanding Branch Counsellor and Advisor Award
2021 from IEEE Member and Geographic Activities; the Outstanding Volunteer
Award from IEEE India Council, 2019; Women of the Decade in Academia, 2018;
the Maulana Abul Kalam Azad Excellence Award in Education, 2015; the IEEE PES
Outstanding Chapter Engineer Award 2015 from IEEE Delhi Section; the
Outstanding Chapter Award from IEEE PELS, NJ, USA, 2013; and the Outstanding
Branch Counselor Award from Region 10 (Asia Pacific Region) in 2012 and from
IEEE USA in 2009.

Xiaolong Jin received the B.S., M.S. and Ph.D. degrees in electrical
engineering from Tianjin University, China, in 2012, 2015 and 2018,
respectively. He is currently a Postdoctoral Researcher with the Technical
University of Denmark. From 2017 to 2019, he was a joint Ph.D. student with
the School of Engineering, Cardiff University, Cardiff, UK. His research
interests include energy management of multi-energy buildings and their
integration with integrated energy systems and with energy and flexibility
market solutions.

Contributors
Alfredo Alcayde
Department of Engineering, Escuela Superior de Ingeniería, University of
Almeria, Almeria, Spain

Gökay Bayrak
Department of Electrical and Electronics Engineering, Bursa Technical
University, Bursa, Turkey
Hilmi Cenk Bayrakçı
Department of Mechatronics Engineering, Faculty of Technology, Isparta
University of Applied Sciences, Isparta, Turkey

Ravi Bhushan
Department of Electrical Engineering, National Institute of Technology
Srinagar, Srinagar, J & K, India

Francesco Calise
Department of Industrial Engineering, University of Naples Federico II,
Naples, Italy

Cihan Demircan
Department of Energy Systems Engineering, Graduate School of Natural
and Applied Sciences, Süleyman Demirel University, Isparta, Turkey

Bhavya Dhingra
Netaji Subhas University of Technology, Delhi-78, India

Mateus Giesbrecht
Department of Electronics and Biomedical Engineering, School of
Electrical and Computer Engineering, University of Campinas, Campinas,
Brazil

Neeraj Gupta
Department of Electrical Engineering, National Institute of Technology
Srinagar, Srinagar, J & K, India

Pankaj Gupta
Department of Electronics and Communication Engineering, Indira Gandhi
Delhi Technical University for Women, New Delhi, Delhi, India

Insha
Department of Electrical Engineering, National Institute of Technology
Srinagar, Srinagar, J & K, India

Ali Keçebaş
Department of Energy Systems Engineering, Faculty of Technology, Muğla
Sıtkı Koçman University, Muğla, Turkey

Arshi Khan
Department of Electrical Engineering, Delhi Technological University,
Delhi, India

Junaid Hussain Lanker


Department of Electrical Engineering, National Institute of Technology
Srinagar, Srinagar, J & K, India

Francisco Manzano-Agugliaro
Department of Engineering, Escuela Superior de Ingeniería, University of
Almeria, Almeria, Spain

Guilherme Santos Martins


Department of Electronics and Biomedical Engineering, School of
Electrical and Computer Engineering, University of Campinas, Campinas,
Brazil

Zarka Mirza
Department of Electrical Engineering, National Institute of Technology
Srinagar, Srinagar, J & K, India

Neeraj
Department of Electronics and Communication Engineering, Indira Gandhi
Delhi Technical University for Women, New Delhi, Delhi, India

M. Rizwan
Department of Electrical Engineering, Delhi Technological University,
Delhi, India

Esther Salmerón-Manzano
Faculty of Law, Universidad Internacional de La Rioja (UNIR), Logroño,
Spain

Trina Som
Dr. Akhilesh Das Gupta Institute of Technology and Management, Delhi,
New Delhi, India

Anuradha Tomar
Department of Instrumentation and Control Engineering, Netaji Subhas
University of Technology, New Delhi, Delhi, India

Bhavesh Tondwal
Netaji Subhas University of Technology, Dwarka, Delhi, India

Maria Vicidomini
Department of Industrial Engineering, University of Naples Federico II,
Naples, Italy

Sunil Yadav
Netaji Subhas University of Technology, Dwarka, Delhi, India

Saqib Yousuf
Department of Electrical Engineering, National Institute of Technology
Srinagar, Srinagar, J & K, India

Alper Yılmaz
Department of Electrical and Electronics Engineering, Bursa Technical
University, Bursa, Turkey

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_1

Introduction to Renewable Energy Prediction Methods
Saqib Yousuf1, Junaid Hussain Lanker1, Insha1, Zarka Mirza1,
Neeraj Gupta1 , Ravi Bhushan1 and Anuradha Tomar2
(1) Department of Electrical Engineering, National Institute of Technology
Srinagar, Srinagar, J & K, 190006, India
(2) Instrumentation and Control Engineering, Netaji Subhas University of
Technology, Dwarka Sector-3, New Delhi, 110078, India

Neeraj Gupta
Email: [email protected]

Abstract
Renewable energy prediction began in the early years of the twenty-first
century. As there is so much uncertainty in forecasting the renewable
energy, several different approaches have been developed. The forecasting
methodologies are very difficult to label because each model predicts a
different set of installed and generation capabilities, cost of production,
demand and supply, etc. There are several techniques used to predict
renewable energy, including assessing the current situation or projecting the
future while concentrating on a particular target of interest. Prediction
techniques for renewable energy provide valuable information about the
potential changes in the energy that will be generated in the near future.
This chapter presents the various artificial intelligence techniques used for
more accurate prediction of renewable energy and discusses their
applications. This includes AI for wind prediction methods, AI for solar
prediction methods, and other energy prediction models such as time series
models, unit root test and co-integration models, ANN models, and expert
systems. These techniques are area and time dependent, based on the idea that
weather variables like wind direction and speed, temperature, relative
humidity, solar irradiance, etc., tend to exhibit strong correlation among
areas close to one another. Accurate prediction promotes the adoption of
renewable energy by improving its reliability and economic feasibility.

1 Introduction
1.1 Renewable Energy Status of the World
Renewable energy has the potential to make a significant contribution to
global energy security and carbon reduction. Renewable energy can help to
reduce energy imports and usage of fossil fuels. Since fossil fuels are
constantly depleting, they are becoming increasingly expensive depending
on market prices. To overcome these concerns, renewable energy resources
will be used to replace traditional energy sources. Renewable energy comes
from natural resources that can be replenished within a human lifetime
without depleting the planet's resources. Sunlight, wind, rain, tides, waves,
biomass, and thermal energy stored in the Earth's crust are examples of
resources that are abundantly available worldwide and cannot be exhausted.
Furthermore, their impact on the climate or the environment is negligible
[1]. Over the last decade, renewable energy penetration into the power grid
has greatly expanded; however, in 2020, a minor decline was observed due to
the COVID-19 pandemic. The global demand for power increased by nearly 6% and
4% in 2021 and 2022, respectively. In absolute terms, the 2021 increase was
the highest annual rise ever (over 1,500 TWh). A quick economic recovery,
along with more intense weather conditions than in 2020, raised electricity
demand worldwide [2].
Fig. 1 Global electricity energy contribution
The global electricity energy contribution is depicted in Fig. 1. The figure
shows that low-carbon sources (36.7%) contribute slightly more than one-third
of global electricity, with the rest (63.3%) provided by fossil fuels. Under
current climate initiatives, decarbonization of the electrical industry is a
key component. As per the Paris Agreement, one of the essential indicators of
climate policy is each country's nationally determined contribution (NDC).
Emissions of carbon dioxide from coal-fired power plants reached a new high
of 10.5 gigatonnes (Gt) in 2021, a rise of 0.8 Gt compared with 2020. Despite
the surge in coal use, renewable energy and nuclear power together generated
more global electricity in 2021 than coal. By 2050, renewables are expected
to account for 80–90% of worldwide electricity generation, and renewable
energy's share of the power mix is expected to double in the next 15 years.
Due to falling prices, solar and onshore wind are forecast to account for the
majority of renewable energy resources (RES) growth to 2050, accounting for
43% and 26% of total generation, respectively [3, 4].
Fig. 2 Share of installed mini grids by technology
Figure 2 indicates the share of installed mini grids by technology worldwide.
It can be observed that renewable-based mini grids are increasingly being
recognized as a significant booster of energy access. In March 2020, 87% of
the 5,544 mini grids operating in energy access set-ups (with a total
capacity of 2.37 GW) were based on renewable sources. Solar PV has become the
fastest-growing mini grid technology, accounting for 55% of the mini grids
installed in 2019, up from only 10% in 2009 [3].

1.2 Artificial Intelligence in Power System


A power system is a complex network of generation, distribution, and
transmission lines that are all interrelated. The primary purpose of power
system operation and control is to provide customers with high-quality
electricity at an affordable price, while also ensuring the system’s stability
and reliability. As the electricity system continues to expand and
incorporate new technology, it has evolved into a complicated unit. There
are uncertainties in real power flow due to continual load variation and
increased penetration of renewable resources. Frequency fluctuations in the
power system are caused by any imbalance between generated power and
load demand, or by a mismatch in scheduled power interchange between
areas. Artificial intelligence (AI) is defined as the intelligence demonstrated
by machines and software, such as robots and computer programs. The term
refers to a project aimed at building systems with human-like cognitive
processes and traits, such as the ability to think, reason, find meaning,
generalize, differentiate, learn from past experience, and correct mistakes.
Artificial intelligence is the intelligence of a computer that can successfully
complete any intellectual task that a human being can. Artificial intelligence
helps in the mitigation of frequency deviations. Power systems are
complicated and nonlinear, with variable loading and system characteristics
that are dependent on operating points [5]. Different controlling strategies,
such as conventional controllers, have been developed; however, due to the
presence of nonlinear components, they do not produce satisfactory results.
To address the problem of nonlinearity, a few artificial intelligence
techniques as shown in Fig. 3 such as particle swarm optimization (PSO),
grey wolf optimizer (GWO), genetic algorithm (GA), artificial neural
network (ANN), and fuzzy logic controller (FLC) have been used to
determine proportional, integral, and derivative values [6]. These strategies
can be used to optimize nonlinear PID controller parameters, resulting in
enhanced system performance in terms of settling time, overshoot, and
undershoot [7].
Fig. 3 Artificial intelligence in power system
The scale of the power system will continue to grow in the future, as will
its complexity, bringing more difficult factors to deal with, and each
artificial intelligence technique currently has its own set of advantages,
disadvantages, and limitations. Artificial intelligence will improve in terms
of maturity and ease of use, allowing it to better tackle problems in power
systems. In a nutshell, combining a number of technologies with artificial
intelligence will be a prominent trend in future development.

1.3 AI for Wind Energy Prediction


With the increase in the consumption of energy and due to depletion of
available conventional energy resources, it has become imperative to
harness the renewable sources of energy, one among which is wind energy.
As per the precursory statistics published by World Wind Energy
Association (WWEA), the capacity of wind turbines has reached a record of
975 GW in 2021 in the world market. One of the challenges in integrating wind
energy into the grid is its uncertainty, i.e. generation is intermittent and
uncontrollable. Therefore, predicting future generation from wind is
important in order to meet demand as generation varies. The factors to be
considered for a desired output from wind energy include climate change, wind
reduction, fluctuating weather events, wake turbulence, etc. The evolution of
global cumulative and annual installed wind power capacity (GW) during
2001–2021 is depicted in Fig. 4.

Fig. 4 Development of global cumulative and annual installed wind power capacity during 2001–
2021
Table 1 shows the development of cumulative and annual installed wind power
capacity in India over the years. The cumulative installed wind power
capacity increased from 1.46 GW in 2001 to 40.07 GW in 2021. The annual
installed wind power capacity also generally increased from 2001 onwards,
with a minor decline in 2020 because of the COVID-19 pandemic.
Table 1 Development of cumulative and annual installed wind power capacity (GW) in India during
2001–2021

Year    India cumulative (GW)    India annual (GW)
2001    1.46                     0.289
2002    1.7                      0.246
2003    2.13                     0.423
2004    3                        0.875
2005    4.43                     1.43
2006    6.27                     1.84
2007    7.85                     1.575
2008    9.66                     1.81
2009    10.93                    1.271
2010    13.07                    2.139
2011    16.08                    3.019
2012    18.42                    2.337
2013    20.15                    1.729
2014    22.47                    2.315
2015    25.09                    2.623
2016    28.7                     3.612
2017    32.85                    4.15
2018    35.13                    2.28
2019    37.51                    2.38
2020    38.63                    1.12
2021    40.07                    1.44
Technologies like artificial intelligence and machine learning are turning
out to be an effective way to predict wind energy and can predict wind speed
over short horizons.
Artificial intelligence is a branch of computer science in which intelligent
devices or artefacts are created and trained to behave like humans by
following particular instructions in computer programming systems. It handles
huge amounts of input data and can build effective representations. AI-based
forecasting models speed up decision-making and handle data mining and
clustering challenges. Furthermore, they can perform difficult tasks in a
reasonable amount of time without being explicitly coded [8]. Depending on
the geographical conditions, viz. wind speed, wind direction, air pressure,
etc., a basic AI-based wind prediction model is developed as shown in Fig. 5.
Fig. 5 AI-based wind prediction model
Time series analysis examines data collected over an interval of time and
uses historical information to produce a mathematical model, estimate values,
and validate simulation results. A set of observations is taken at
predetermined times, preferably at equal intervals. Future values are based
on previously observed values, and the data is analysed using artificial
intelligence. Such models may, however, fail to offer adequate prediction
results, particularly when the time series is non-stationary [9].
A deep learning program works as a function based on a multilayer perceptron
(MLP) training algorithm. Data is collected on an hourly basis over a period
of 24 h, taking factors like wind direction, air pressure, and wind speed
into consideration. To produce the expected output, the data is fed to the
algorithm for training [10].
Artificial neural network-based AI plays a vital role in wind farm
optimization. An ANN works like a human brain, consisting of a large network
of interconnected neurons; this structure is replicated to obtain brain-like
results. An ANN can be trained to work in turbulent conditions, and the
network can then be used to produce accurate results. A wind farm can be
considered as a number of clusters, with each group of turbines having
identical behaviour under specific prevailing weather regimes. In [11],
conjugate gradient descent has been used to optimize the artificial neural
network model to develop the conjugate gradient neural network (CGNN). In
experiments on data sets of various scales, it has been observed that the
performance of the CGNN increases significantly, with average iterations
dropping by over 90% without compromising prediction accuracy. Long-term and
mid-term predictions of wind power output are both well served by the CGNN.
The CGNN uses significantly less training time than the steepest gradient
neural network (SGNN), radial basis function (RBF) network, and extreme
learning machine (ELM).
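To make the ANN idea concrete, the following minimal Python sketch trains a
small multilayer-perceptron regressor on hourly wind records; it is only an
illustration of the general approach (not the CGNN of [11]), and the file
name and column names (wind_speed, wind_direction, pressure, power) are
hypothetical.

# Minimal sketch of an ANN-based wind power predictor (illustrative only).
# Assumes a hypothetical CSV with hourly columns: wind_speed, wind_direction, pressure, power.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("wind_farm_hourly.csv")
X = df[["wind_speed", "wind_direction", "pressure"]].values
y = df["power"].values

# Keep the chronological order of the series when splitting.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

scaler = StandardScaler().fit(X_train)
ann = MLPRegressor(hidden_layer_sizes=(20, 10), activation="tanh",
                   max_iter=2000, random_state=0)
ann.fit(scaler.transform(X_train), y_train)

pred = ann.predict(scaler.transform(X_test))
print("Test MAE:", mean_absolute_error(y_test, pred))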
Hybrid methods: Various methods of AI can be combined for improving
overall efficiency and accuracy. Multiple algorithms are used to develop
diverse predictive models [9]. When compared to single forecasting
modelling methodologies, hybrid forecasting, such as autoregressive
integrated moving average (ARIMA) and artificial neural network, is seen
to be a potentially beneficial alternative [12].
In order to enhance wind power forecasting, a two-stage technique is
also used. After decomposing wind time series with wavelet decomposition,
an adaptive wavelet neural network (AWNN) technique is utilized to predict
wind speed by regressing each decomposed signal hours ahead of time. A
feed-forward neural network is then used to construct a mapping between
wind speed and wind power output. The latter permits expected wind speed
to be converted into predicted wind power. The AWNN technique provides
the best approximation and training capacity when compared to a feed-
forward neural network.
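A rough sketch of the decomposition stage of such a two-stage scheme is given
below; it assumes the PyWavelets package and a synthetic wind speed series,
and the forecasting of each decomposed component is only indicated in
comments rather than implemented with an AWNN.

# Sketch of the wavelet decomposition stage of the two-stage scheme
# (assumes the PyWavelets package; the wind speed series is synthetic).
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(512)
wind_speed = 8 + 2 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.5, t.size)

# Decompose the wind speed series into approximation and detail coefficients.
coeffs = pywt.wavedec(wind_speed, wavelet="db4", level=3)  # [cA3, cD3, cD2, cD1]
for name, c in zip(["cA3", "cD3", "cD2", "cD1"], coeffs):
    print(name, "length:", len(c))

# In the actual method, each decomposed series is regressed hours ahead by its
# own (adaptive wavelet) neural network, the predicted coefficients are
# recombined with pywt.waverec, and a feed-forward network then maps the
# predicted wind speed to wind power.
reconstructed = pywt.waverec(coeffs, wavelet="db4")[: wind_speed.size]
print("Max round-trip reconstruction error:", np.abs(reconstructed - wind_speed).max())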
In [10], Bayesian optimization (BO) is used to fine-tune hyper-
parameters of Gaussian process regression (GPR), support vector regression
(SVR) with multiple kernels, and ensemble learning (ES) models (i.e.
boosted trees and bagged trees) to improve predicting performance. In
addition to this, in order to improve the forecasting performance of the
analysed models, dynamic information has been added into their
development.
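As an illustration of hyper-parameter tuning of this kind, the following
sketch uses scikit-optimize's BayesSearchCV to tune an SVR on a synthetic
wind speed-to-power data set; the data, search ranges, and model choice are
assumptions made for the example and do not reproduce the cited study.

# Sketch: Bayesian optimization of SVR hyper-parameters on a synthetic
# wind speed-to-power data set (assumes the scikit-optimize package).
import numpy as np
from sklearn.svm import SVR
from skopt import BayesSearchCV
from skopt.space import Categorical, Real

rng = np.random.default_rng(0)
speed = rng.uniform(0, 25, size=(500, 1))                      # wind speed in m/s
power = np.clip(speed.ravel() ** 3 / 2500, 0, 1) + rng.normal(0, 0.02, 500)

search = BayesSearchCV(
    SVR(),
    {
        "kernel": Categorical(["rbf", "poly", "linear"]),
        "C": Real(1e-2, 1e2, prior="log-uniform"),
        "gamma": Real(1e-3, 1e1, prior="log-uniform"),
        "epsilon": Real(1e-3, 1e-1, prior="log-uniform"),
    },
    n_iter=25, cv=3, random_state=0,
)
search.fit(speed, power)
print("Best parameters:", search.best_params_)
print("Best CV score:", search.best_score_)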
The study uses wind speed data as input and wind power data as output, both
obtained at ten-minute intervals over an eleven-month period. In order to
predict the wind power, artificial intelligence approaches such as artificial
neural networks and genetic algorithms are applied. In the artificial neural
networks, the genetic algorithm (GA) and back propagation algorithm (BPA) are
utilized as learning algorithms. In order to obtain the best architecture,
several parameters such as the learning rate, momentum coefficient, and
number of epochs are varied in the back propagation algorithm. Likewise, in
the genetic algorithm learning approach, the crossover proportion and elite
count are varied together with various other variables to pick the optimal
model.

1.4 Energy Prediction Models


Techniques for forecasting can be used to augment decision-makers' management
and common-sense abilities. The most effective forecasters combine
quantitative forecasting approaches with sound judgement, avoiding total
reliance on one or the other. Energy is delivered by a capital-intensive
industry with significant lead times. The goal is to develop, and make
accessible to government agencies, simple prediction models that capture the
most important characteristics of data patterns and that can be easily
comprehended and applied while offering significant predictive value.
The various energy prediction models are as follows:
Time series models
Unit root test and co-integration models
Regression models
Genetic algorithm/fuzzy logic
Econometric models
Grey prediction
Decomposition models
Input–output (IPO) models
ARIMA models
ANN models and expert systems.

1.4.1 Time Series Models


The time series models use trend analysis of time series to extrapolate
energy demands for the future. Time series means an ordered series of
values for a variable at equal intervals of time. Traditional energy demand
forecasting methods define correlations between observable factors and the
desired parameter. Average temperature, total number of customers, days with
high temperatures, total residential units, fuel prices, population, per
capita income, manufacturing value added, indices such as the cost of labour
in commercial activities, the average price of electricity, and the per cent
rural population were all factored into these models.
India’s electricity demand is projected using time series models [13]. Time
series analysis was utilized by Himanshu and Lester [14] to forecast
electricity demand in Sri Lanka.
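As a minimal illustration of trend-based time series extrapolation, the
sketch below fits a linear trend to hypothetical annual demand figures and
extrapolates it a few years ahead; real studies use richer models and the
explanatory variables listed above.

# Minimal trend extrapolation for annual electricity demand
# (the demand figures below are hypothetical).
import numpy as np

years = np.arange(2010, 2022)
demand_twh = np.array([830, 861, 896, 938, 1001, 1030, 1066, 1110, 1149, 1158, 1115, 1190.0])

# Fit a linear trend to the historical series and extrapolate five years ahead.
slope, intercept = np.polyfit(years, demand_twh, deg=1)
future_years = np.arange(2022, 2027)
forecast = slope * future_years + intercept
for year, value in zip(future_years, forecast):
    print(year, round(float(value), 1))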

1.4.2 Unit Root Test and Co-integration Models


Vector error correction method (VECM) and co-integration approach are
frequently used as the key research tools to explore the long-term relation
between macroeconomic factors like refined petroleum, crude oil, liquefied
petroleum gas, etc., the majority of which do not remain stationary. These
two strategies were chosen for two reasons, for starters the traditional
econometric techniques are plagued by false regression issues and the other
reason being that the majority of economic variables utilized in the equation
for energy import demand like industrial output, price, etc., are likely to be
endogenous, hence predicting energy demand with a single equation may
result in simultaneous bias leading to inaccurate conclusions. With the help
of the VECM, both difficulties can be solved. China’s energy imports are
quickly increasing due to its large energy consumption, and the energy
import demands of China have been forecasted using co-integration and
vector error correction (VEC) models [15].

1.4.3 Regression Models


Energy prediction is crucial in the development of energy and
environmental policies. Both short-term and long-term electric load
forecasts are accomplished using regression models. In these models, the
relation between the average value of one variable (e.g. output) and the
corresponding values of other variables (e.g. cost and time) is used.
Economic aspects have been studied in relation to annual power
consumption in North Cyprus using regression [16].
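A minimal regression sketch of this kind is shown below; the CSV file and
column names (gdp, population, avg_temperature, consumption) are hypothetical
placeholders for the kinds of variables mentioned above.

# Sketch: regression model for annual electricity consumption
# (hypothetical CSV with columns gdp, population, avg_temperature, consumption).
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("annual_consumption.csv")
X = df[["gdp", "population", "avg_temperature"]]
y = df["consumption"]

model = LinearRegression().fit(X, y)
print("Coefficients:", dict(zip(X.columns, model.coef_)))
print("Intercept:", model.intercept_)

# Forecast consumption for projected values of the explanatory variables.
projection = pd.DataFrame({"gdp": [3.9e12], "population": [1.45e9], "avg_temperature": [25.1]})
print("Projected consumption:", model.predict(projection)[0])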

1.4.4 Genetic Algorithm/Fuzzy Logic


In recent years, soft computing technologies have been applied in energy
demand forecasting. GA is the method of optimization inspired by
Darwinian natural selection and evolutionary genetics that uses repeated
search procedures. Fuzzy logic is a kind of variable processing that allows
many true values to be processed utilizing the same variable. Fuzzy logic
was utilized to forecast short-term electric power demand. The fuzzy logic
methodology was used to predict Turkey’s short-term annual power demand
[17].

1.4.5 Econometric Models


Energy consumption is linked to other macroeconomic issues in
econometric models. Econometrics is the quantitative application of
mathematical and statistical models to data in order to develop theories or
test hypotheses in economics and to forecast future trends based on
historical data. Total energy demand for the province of Quebec was
computed as a function of previous year’s energy price, real income, energy
consumption, and heating day [18].

1.4.6 Grey Prediction


Due to its ease of use and capacity to identify unknown systems with only a
few data points, grey prediction has gained popularity in recent years.
Energy demand forecasting is a grey system problem as a few
characteristics like population, GDP, and income have an impact on energy
demand, although the exact nature of that impact is unknown. Grey
prediction is based on a theoretical examination of the original data and the
production of grey models of the data in order to uncover and regulate the
development laws of the system of interest so that scientific quantitative
predictions about the system’s future can be made. A grey prediction model
with a genetic algorithm was used to forecast China’s energy consumption
[19].
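For illustration, a basic GM(1,1) grey model can be implemented in a few
lines; the sketch below uses a hypothetical annual energy demand series and
omits the genetic algorithm enhancement used in [19].

# Minimal GM(1,1) grey prediction sketch (the input series is hypothetical).
import numpy as np

def gm11_forecast(x0, steps):
    """Fit a GM(1,1) model to the series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack((-z1, np.ones_like(z1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # developing coefficient and grey input
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # accumulated prediction
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[n:]

annual_energy_demand = [102, 110, 119, 131, 140, 152]  # hypothetical values
print(gm11_forecast(annual_energy_demand, steps=3))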

1.4.7 Decomposition Models


Two typical methodologies for decomposition are energy intensity (EI) and
energy consumption (EC). Structural change in production, change in
sectoral energy intensities, and change in aggregate production level are the
main defined effects that impact the EC approach, but only the first two
effects are covered in the energy intensity approach. Research in this area
covers period-wise versus time series decomposition, the importance of
different levels of sector disaggregation, result interpretation, and method
selection. In 15 European Union nations, the decomposition approach has been
utilized to estimate aggregate energy usage [20].

1.4.8 Input–Output (IPO) Models


The IPO framework is a functional graph that represents the processing tasks,
outputs, and inputs required to convert inputs into outputs. During
the procedure, any storage that takes place is occasionally included in the
model. The inputs indicate the flow of materials and data entering the
process from outside sources. An input–output model was integrated with a
growth model to explore the effects of economic expansion on energy usage
in Brazil [21].

1.4.9 ARIMA Models


In energy demand forecasting, autoregressive integrated moving average
(ARIMA) models are often utilized. Autoregression basically refers to a
statistical model that predicts future values based on past values. It is a form
of statistical analysis that employs time series data to better comprehend the
data or forecast future trends. One study employed three models to estimate
power demand: a regression model, ARIMA, and seasonal ARIMA; among these,
regression models that used seasonal latent variables generated the best
results [22].
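A minimal ARIMA sketch using the statsmodels package is shown below; the
monthly demand series is synthetic and the order (1, 1, 1) is chosen purely
for illustration.

# Sketch: ARIMA forecast of a synthetic monthly electricity demand series
# (uses the statsmodels package; the order is chosen only for illustration).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
index = pd.date_range("2015-01-01", periods=84, freq="MS")
demand = pd.Series(100 + 0.5 * np.arange(84)
                   + 8 * np.sin(2 * np.pi * np.arange(84) / 12)
                   + rng.normal(0, 2, 84), index=index)

model = ARIMA(demand, order=(1, 1, 1)).fit()
print(model.forecast(steps=12))  # demand forecast for the next 12 months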

1.4.10 ANN Models and Expert Systems


Previously, neural networks and expert systems were extensively utilized to
forecast electrical load. They have also been used to anticipate long-term
energy demands using macroeconomic data in recent years. An expert
system is AI software that solves problems that would ordinarily need a
human expert utilizing knowledge stored in a knowledge base preserving
the expertise of a human expert. An artificial neural network (ANN) is
essentially a computer simulation based on the structures and functions of
biological neural networks. The behaviour of the network depends on its
inputs and outputs because the structure of the ANN is modified by the flow
of information through it [23].

1.5 AI for Solar Prediction


Carbon emissions from economic activity are continuing to rise, with India
now ranking third among individual countries in terms of carbon emissions.
Renewable energy is the way forward, and policy and technical solutions
should be used to eliminate the barriers to its harnessing. The fundamental
issue with most renewable energy supplies is that they are subject to the
whims and vagaries of nature, making them a volatile and unpredictable source
of energy. The system's operation is defined and determined by predicting the
power from these variable power sources. This chapter presents a PV
generation forecast model based on ANN and ANFIS.
Energy is essential to a country’s monetary success and human
prosperity since it allows living things to evolve, expand, and exist. Energy
has evolved into a critical product, and any uncertainty about its source can
stymie economic activities, particularly in emerging countries. In this
regard, energy security is critical for India’s economic success as well as its
social progress goals of poverty reduction, job creation, and achieving the
Millennium Development Goals [24]. Due to its present level of energy
consumption, India is increasingly shifting its focus to sources of renewable
energy. The Jawaharlal Nehru National Solar Mission (JNNSM), India’s
solar enterprise, was inaugurated to much fanfare. Applicants who submit
requests and show interest in the programme will be eligible for a number of
incentives from the government. The solar photovoltaic market in India grew
by 75% in 2010 and by 50% in 2011. India might become a major player in the
global solar market with the correct policy support from the Indian
government. Among the mission's main goals is to make India the world leader
in solar energy generation by 2022, with a deployment target of 20 GW.
Solar power is abundant in India; its average annual temperature varies from
25 to 27.5 °C because of its placement between the Tropic of Cancer and the
Equator. As a tropical country, India has a huge potential for PV power
generation. India has an average annual solar radiation intensity of about
200 MW/km² and 250–300 bright days [25]. India receives 5000 trillion kWh per
year, according to government estimates, with 4–7 kWh per square metre every
day for the majority of the country. The International Energy Agency
estimates that India would require 327 GW of energy generation capacity by
2020. Energy departments need to be able to predict the production of these
renewable sources since it allows them to change dispatching arrangements in
real time, boost reliability, and minimize the spinning reserve capacity of
the generation system. Solar power forecasting has therefore received a lot
of attention. Physical and statistical methods are the two types of
short-term power forecasting methods for solar power plants. One of the
physical strategies is to develop a physical equation for calculating solar
power production from system attributes and expected meteorological data.
Statistical approaches aim to summarize intrinsic laws in order to forecast
solar power using historical data. Although each of the above approaches has
its own set of advantages, the non-stationary characteristics of solar power
output have a significant impact on their accuracy and convergence [26]. Due
to the Earth's rotation and revolution, solar irradiance at a given place on
the Earth's surface, and hence solar plant output power, exhibits a one-day
periodicity with non-stationary features: the output power typically
increases before noon and decreases afterwards.
If an appropriate solution to minimize non-stationary state features of
solar output power is not applied, traditional solar power prediction
methods cannot ensure the accuracy of projected outcomes or even the
system’s convergence. Artificial intelligence algorithms have been lauded
as a viable method of forecasting solar energy production. Artificial
intelligence-based systems are adaptive by nature and can handle
nonlinearity. They do not require any prior modelling knowledge, and the
working algorithms automatically classify the input data and match it to the
proper output values. They are ‘black box’ gadgets that do not always agree
on how to retain data regarding model constituents’ physical relationships
[26].

1.5.1 Determination of Input Variables for the Power Forecasting Model
Accurate data on solar irradiance is usually included in a computation to
predict expected output power. Weather estimates are intrinsically linked to
the forecasting of renewable energy generation. A variety of environmental
factors must be taken into account in order to predict the amount of solar
irradiance or power generated, including solar irradiance, cloud cover,
atmospheric pressure, and temperature, as well as PV panel conversion
efficiency, installation angles, dust on a PV panel, and other random factors.
All of these factors have an impact on the output of a PV system. As a
result, while choosing input variables for a prediction model, deterministic
elements that are significantly related to power generation should be
considered. Furthermore, because time series data on PV power generation
is substantially autocorrelated, this historical data should be employed as an
input to the forecasting model [27].
In order to build a precise and consistent output power forecast model, it is
necessary to analyse the variables affecting solar power plant output. The
global solar irradiation measured on the ground has a direct impact on the
voltage output of solar cells. The direction and strength of a
non-deterministic relationship can be determined using the Pearson
product-moment correlation coefficient (PPMCC). Its estimate ranges from −1
to +1, with +1 denoting a perfect positive relationship, 0 denoting no
correlation, and −1 denoting a perfect negative relationship. Table 2 shows
the Pearson product-moment correlation coefficient between PV production and
environmental variables under different weather circumstances.
Solar irradiance and solar power output have a correlation coefficient
greater than 0.8, indicating that the two variables are highly correlated,
whereas solar power output and temperature have a correlation coefficient
greater than 0.3, indicating that the two variables are positively but weakly
correlated. The humidity correlation coefficient indicates a weak negative
association, and the link between solar power production and wind speed is
weak [28].
Table 2 Pearson product-moment correlation coefficients

Weather condition    Irradiance    Temperature    Humidity    Wind speed
Clear                0.966         0.322          0.527       0.229
Cloudy               0.891         0.441          0.511       0.025
Overcast             0.987         0.409          0.478       0.125
Rainy                0.923         0.410          0.039       0.178
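Correlation coefficients of this kind can be computed directly from measured
plant data; the short sketch below assumes a hypothetical CSV of PV plant
measurements with the listed column names.

# Sketch: Pearson correlation between PV output and weather variables
# (the CSV file and column names are hypothetical).
import pandas as pd

df = pd.read_csv("pv_plant_measurements.csv")
for var in ["irradiance", "temperature", "humidity", "wind_speed"]:
    r = df["pv_power"].corr(df[var])  # Pearson correlation by default
    print(f"{var}: r = {r:.3f}")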

1.5.2 Description of the Proposed Forecasting System


ANN and ANFIS forecast models are used to anticipate the power of a solar
power plant based on past data. Inputs and outputs, a network topology, and
weighted node connections make up an artificial neural network. The
properties of the problem are precisely reproduced by the input features.
Network topology selection is another important aspect of ANN design, and it
determines the number of hidden layers and nodes available for forecasting
and training. The variables that were investigated were global horizontal
irradiance, ambient temperature, global diffuse irradiance, wind speed,
precipitation, sunshine duration, air pressure,
relative humidity, and surface temperature. To establish a new network, the
global horizontal irradiance, ambient temperature, global diffuse irradiance,
and surface temperature are all used. The neural network is fed using the
forward back propagation (FBP) technique. TRAINLM and LEARNGDM
are functions that are used to train and tune the neural network.
The mean square error is used to determine the performance measure.
The first layer of the neural network includes nine neurons and calculates
the output using the TANSIG transfer function. The adaptive neuro-fuzzy
inference system (ANFIS) combines the best features of an ANN with the
flexibility of a fuzzy system. The parameters of a Sugeno-type fuzzy
inference system are identified using a hybrid learning technique. The least
squares approach and the back propagation gradient descent method are
used to learn FIS membership function parameters to mimic a given
training data set. An ANFIS identifies and tunes the parameters and
structure of a fuzzy inference system (FIS) using neural learning rules.
The ANFIS possesses a number of characteristics that enable it to excel
in a wide range of scientific applications. Ease of use, rapid and accurate
learning, excellent generalization abilities, superior explanation capabilities
via fuzzy rules, and the ability to solve issues using both verbal and
mathematical information are all appealing features of an ANFIS. The
neuro-fuzzy method proposes using a neural network to build the fuzzy
system, with the goal of defining, adapting, and refining the topology and
parameters of the linked neuro-fuzzy network to establish the structure and
parameters of the fuzzy rule base. The network can be viewed as both a
linguistically meaningful connectionist architecture and an adaptive fuzzy
inference system that can learn fuzzy rules from input [29].
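TRAINLM, LEARNGDM, and TANSIG are MATLAB Neural Network Toolbox functions; a
rough Python analogue of the described ANN (nine tanh neurons in the hidden
layer, mean square error as the performance measure) is sketched below, with
an L-BFGS solver standing in for Levenberg-Marquardt training and
hypothetical file and column names.

# Rough Python analogue of the described ANN setup; the original uses MATLAB's
# TRAINLM/LEARNGDM/TANSIG, so tanh activation and an L-BFGS solver are used
# here as stand-ins. File and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

df = pd.read_csv("solar_plant_data.csv")
features = ["global_horizontal_irradiance", "ambient_temperature",
            "global_diffuse_irradiance", "surface_temperature"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["pv_power"], test_size=0.2, shuffle=False)

ann = MLPRegressor(hidden_layer_sizes=(9,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
ann.fit(X_train, y_train)
print("Test MSE:", mean_squared_error(y_test, ann.predict(X_test)))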

1.5.3 AI for Other Renewable Sources of Prediction


Biomass Energy
Organic biomass is derived from substances obtained from living organisms.
Wood, garbage, and plants are the most frequently used biomass substances for
deriving energy. The energy in these materials can be converted to usable
energy in two ways, i.e. direct and indirect: biomass is either burned
directly to provide heat, which is then turned into electricity, or converted
indirectly into biofuel.
In biochemical conversion technology for bio-energy production, adaptive
neuro-fuzzy inference systems and ANNs are used to estimate transmembrane
pressure when biohydrogen is being manufactured in an anaerobic membrane
bioreactor. Dielectric spectroscopy is used to determine the growth rate and
substrate consumption in the fermentation process. Chromatography and the
Internet of Things (IoT) are used to monitor the composition of biogas. An
ANN-GA-based combined strategy is used for controlling and analysing the
influence of fermentation time, and ANN-GA is also used to evaluate the
effect of the associated fermentative variables on the production of
bioethanol. ANN-MLR is used to assess the most favourable design variables of
MFCs for improving performance. In thermochemical conversion technology for
bio-energy production, the Taguchi method is used to search for the highest
possible yield of sludge pyrolytic oil. Raman spectroscopy with deep learning
is used for rapid differentiation of porous biocarbon. The bio-oil heating
value and product distribution of rapid biomass pyrolysis are forecasted
using ANN and support vector machine (SVM) models. The syngas constitution
for downdraft biomass gasification is forecasted using SVM and multi-class
random forests. Multi-gene genetic programming is used to forecast the lower
heating value and syngas composition for municipal solid waste gasification.
ASPEN Plus simulation is used for optimization of the process variables and
the economic evaluation of butanol manufacturing. Fuzzy-model particle swarm
optimization is used to improve the gasification rate and conversion in
biomass gasification.
In the strategic decision-making process, it is critical to determine the
biomass resources that could be utilized successfully in bio-energy
production and to assess the potential energy that could be derived from
their wastes. For such planning to be productive, it is necessary to take
advantage of past data. With this viewpoint in mind, quantitative information
regarding the land used for agricultural production, the amount of
agricultural production, the number of poultry, and the agricultural
production yield has been used to address the problem.
The aggregate of animal and agricultural wastes that could be acquired in the coming years has been anticipated using an AI-based prediction method called support vector regression (SVR), which takes into consideration the rate of increase in agricultural production yield to support long-term decisions. SVR is a supervised learning approach for forecasting and modelling built on the support vector machines algorithm, proposed by Vapnik and his co-workers in 1996. The approach arose from the need to distinguish between different kinds of data. The method identifies the end points of two or more data sets, known as support vectors, and a regression line that runs through the middle of these vectors to represent the data sets. Data sets are not always linearly separable; in that case, the nonlinear problem is cast into a high-dimensional space, where the problem's optimal function is determined and stated linearly using kernel functions. Typical kernel functions are radial basis, linear, quadratic, and cubic [30].
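As a rough illustration of how such an SVR with a kernel function can be fitted to yearly production data, the sketch below uses scikit-learn's SVR with a radial basis function kernel; the synthetic yield series and all parameter values are assumptions made for illustration, not the data or settings of the cited study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical yearly agricultural production yield (one value per year).
rng = np.random.default_rng(0)
years = np.arange(2000, 2020).reshape(-1, 1)
yield_t = 1000 + 25 * (years.ravel() - 2000) + rng.normal(0, 30, size=20)

# The RBF kernel casts the nonlinear problem into a high-dimensional space,
# as described above; scaling the inputs is important for SVR.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(years, yield_t)

# Extrapolate the yield for future years to support long-term decisions.
future_years = np.arange(2020, 2026).reshape(-1, 1)
print(model.predict(future_years))
```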
A Geographic Information System (GIS) was used to determine the spatial distribution of biomass energy resources and cultivable lands. Under different scenarios, the energy that could be acquired from agricultural wastes, as a result of different agricultural items being planted on idle but cultivable areas, has been assessed. A GIS allows users to acquire, manage, and analyse spatial and geographical data, and to examine and integrate disparate data spatially by displaying it in a layered structure. It also allows distinct raster and vector data to be represented on the same plane, enabling varied analyses. With the progress in computer technology, spatial analysis has grown increasingly crucial in the design and management of biomass supply networks. GIS has been utilized to assess transportation network accessibility, biomass raw material availability, distribution, and population. It is also commonly used in bio-energy supply chains to display results in order to decide plant locations, distribute sources, and design transportation systems.
By using a scenario approach, the suggested method incorporates the uncertainties that are present in the decision-making process. The biomass potential was forecasted using GIS, artificial intelligence, and statistical data in a novel integrated manner that had not been put forward before. The study, which is distinctive in this regard, could be adapted to many areas and countries and used as a decision support system in various decision-making processes. First, yearly crop output and poultry figures were predicted using the SVR technique for animal and plant raw material resources that are commonly available or grown in the area and have a significant extractable bio-energy potential in their waste. The amounts of waste and the bio-energy potentials were determined in the next stage. Then, using GIS, multiple layers were created, the region's arable lands were assessed, and their extent was determined. Finally, the bio-energy potentials that may be obtained by adopting various agricultural scenarios were identified.

1.5.4 Summary
This chapter discusses recent applications of artificial intelligence in renewable energy systems, such as artificial neural networks, genetic algorithms, particle swarm optimization, expert systems, and fuzzy theory. These applications have the potential to considerably increase power system efficiency, reduce human and material resource input, and play a key role in power system security. The share of renewable energy in the power system will continue to grow, as will its complexity, bringing more difficult factors to deal with. Each artificial intelligence technique currently has its own set of advantages, disadvantages, and limitations, and the power system still lacks an effective hybrid intelligent approach; a more suitable way to apply artificial intelligence to power system problems would combine the advantages of different AI techniques. It is believed that, as research advances, AI will become more mature and easier to use, allowing it to better handle the operation of power systems. In a nutshell, combining a number of technologies with artificial intelligence will be a prominent trend in future development.

References
1. Gielen D, Boshell F, Saygin D, Bazilian M, Wagner N, Gorini R (2019) The role of renewable
energy in the global energy transformation. Energy Strategy Rev 24:38–50. https://​doi.​org/​10.​
1016/​j.​esr.​2019.​01.​006
[Crossref]
2.
World energy transitions outlook (2022) https://​irena.​org/​-/​media/​Files/​IRENA/​Agency/​
Publication/​2022/​Mar/​IRENA_​World_​Energy_​Transitions_​Outlook_​2022.​pdf
3.
Global energy review (2021) https://​iea.​blob.​core.​windows.​net/​assets/​d0031107-401d-4a2f-
a48b-9eed19457335/​GlobalEnergyRevi​ew2021.​pdf
4.
Owusu P, Asumadu Sarkodie S (2016) A review of renewable energy sources, sustainability
issues and climate change mitigation. Cogent Eng 3:1167990. https://​doi.​org/​10.​1080/​23311916.​
2016.​1167990
[Crossref]
5.
Pasupathi Nath R, Nishanth Balaji V, Artificial intelligence in power systems. IOSR J Comput
Eng (IOSR-JCE)
6.
Sharifi A, Sabahi K, Shoorehdeli MA, Nekoui MA, Teshnehlab M (2008) Load frequency
control in interconnected power system using multi-objective PID controller. In: 2008 IEEE
conference on soft computing in industrial applications, pp 217–221. https://​doi.​org/​10.​1109/​
SMCIA.​2008.​5045963
7.
Basavarajappa SR, Nagaraj MS (2021) Load frequency control of three area interconnected
power system using conventional PID, fuzzy logic and ANFIS controllers. In: 2021 2nd
International conference for emerging technology (INCET), pp 1–6. https://​doi.​org/​10.​1109/​
INCET51464.​2021.​9456120
8.
Alkabbani H, Ahmadian A, Zhu Q, Elkamel A (2021) Machine learning and metaheuristic
methods for renewable power forecasting: a recent review. Front Chem Eng 3:665415. https://​
doi.​org/​10.​3389/​fceng.​2021.​665415
[Crossref]
9.
Hanifi S, Liu X, Lin Z, Lotfian S (2020) A critical review of wind power forecasting methods-
past, present and future. Energies 13. https://​doi.​org/​10.​3390/​en13153764
10.
Alkesaiberi A, Harrou F, Sun Y (2022) Efficient wind power prediction using machine learning
methods: a comparative study. Energies 15(7)
11.
Li T, Li Y, Liao M, Wang W, Zeng C (2016) A new wind power forecasting approach based on
conjugated gradient neural network. Math Prob Eng 2016. https://​doi.​org/​10.​1155/​2016/​8141790
12.
Chang GW, Lu HJ, Hsu LY, Chen YY (2016) A hybrid model for forecasting wind speed and
wind power generation. In: 2016 IEEE power and energy society general meeting (PESGM), pp
1–5. https://​doi.​org/​10.​1109/​PESGM.​2016.​7742039
13.
Tripathy SC (1997) Demand forecasting in a power system. Energy Conv Manage 38(14):1475–
1481. https://​doi.​org/​10.​1016/​S0196-8904(96)00101-X
[Crossref]
14.
Amarawickrama H, Hunt L (2007) Electricity demand for Sri Lanka: a time series analysis.
Energy 33:724–739. https://​doi.​org/​10.​1016/​j.​energy.​2007.​12.​008
[Crossref]
15.
Zhao X, Wu Y (2007) Determinants of china’s energy imports: an empirical analysis. Energy
Policy 35:4235–4246. https://​doi.​org/​10.​1016/​j.​enpol.​2007.​02.​034
[Crossref]
16.
Egelioglu F, Mohamad AA, Guven H (2001) Economic variables and electricity consumption in
northern Cyprus. Energy 26(4):355–362. https://​doi.​org/​10.​1016/​S0360-5442(01)00008-1
[Crossref]
17.
Kucukali S, Baris K (2010) Turkey’s short-term gross annual electricity demand forecast by
fuzzy logic approach. Energy Policy 38:2438–2445. https://​doi.​org/​10.​1016/​j.​enpol.​2009.​12.​037
[Crossref]
18.
Arsenault E, Bernard J-T, Carr CW, Genest-Laplante E (1995) A total energy demand model of
Québec: forecasting properties. Energy Econ 17(2):163–171. https://​doi.​org/​10.​1016/​0140-
9883(94)00003-Y
[Crossref]
19.
Lee Y-S, Tong L-I (2011) Forecasting energy consumption using a grey model improved by
incorporating genetic programming. Energy Conv Manage 52:147–152. https://​doi.​org/​10.​1016/​j.​
enconman.​2010.​06.​053
[Crossref]
20.
Sun JW (2001) Energy demand in the fifteen European union countries by 2010: a forecasting
model based on the decomposition approach. Energy 26:549–560
[Crossref]
21.
Arbex M, Perobelli F (2010) Solow meets Leontief: economic growth and energy consumption.
Energy Econ 32:43–53. https://​doi.​org/​10.​1016/​j.​eneco.​2009.​05.​004
[Crossref]
22.
Sumer KK, Goktas O, Hepsag A (2009) The application of seasonal latent variable in forecasting
electricity demand as an alternative method. Energy Policy
23.
Koksal M, Ugursal V, Fung A (2002) Modeling of the appliance, lighting, and space-cooling
energy consumptions in the residential sector using neural networks. Appl Energy 71:87–110.
https://​doi.​org/​10.​1016/​S0306-2619(01)00049-6
[Crossref]
24.
Annual report of year 2013 by Central Electricity Authority of India, Govt. of India. https://​cea.​
nic.​in/​wp-content/​uploads/​2020/​03/​annual_​report-2013.​pdf
25.
A report on economic survey of India, 2014–15. https://​cea.​nic.​in/​wp-content/​uploads/​2020/​03/​
lgbr-2014.​pdf
26.
A report on “Load generation balance report (2014–15)”, Ministry of Power, Central Electricity
Authority of India, Govt. of India. https://​www.​ibef.​org/​economy/​economic-survey-2014-15.​
aspx
27.
Mandal P, Madhira STS, Haque AU, Meng J, Pineda RL (2012) Forecasting power output of
solar photovoltaic system using wavelet transform and artificial intelligence techniques. In:
Complex adaptive systems
28.
Ogliari E, Grimaccia F, Leva S, Mussetta M (2013) Hybrid predictive models for accurate
forecasting in PV systems. Energies 6:1918–1929. https://​doi.​org/​10.​3390/​en6041918
[Crossref][zbMATH]
29.
Lorenz E, Scheidsteger T, Hurka J, Heinemann D, Kurz C (2011) Regional PV power prediction
for improved grid integration. Prog Photovolt: Res Appl 19:757–771. https://​doi.​org/​10.​1002/​pip.​
1033
[Crossref]
30.
Adeyemo J, Enitan-Folami A (2011) Optimization of fermentation processes using evolutionary
algorithms–a review. Sci Res Essays 6:1464–1472

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_2

Solar Power Forecasting in Photovoltaic Modules Using Machine Learning
Bhavya Dhingra1, Anuradha Tomar1 and Neeraj Gupta2
(1) Netaji Subhas University of Technology, Delhi-78, India
(2) National Institute of Technology, Srinagar, J&K, India

Bhavya Dhingra (Corresponding author)


Email: [email protected]

Anuradha Tomar
Email: [email protected]

Abstract
As fossil fuels become increasingly scarce, the globe seeks a dependable, clean, and pollution-free energy source, and solar power is therefore gaining traction. This makes analysing the solar power to be generated highly important. This chapter analyses various time series methods, namely seasonal auto-regressive integrated moving average with exogenous factors (SARIMAX), auto-regressive integrated moving average (ARIMA), Holt-Winters and auto-regression (AR), to forecast the solar power generated by household solar panels, in order to determine which method can estimate the value of photovoltaic power accurately. After applying various pre-processing techniques, it is determined that the Holt-Winters method for time series forecasting in additive mode predicts values closest to the actual solar power, with a root mean squared error (RMSE) score of 5.3949.

Keywords Solar power forecasting – Photovoltaic modules – Machine learning – Time-series analysis
1 Introduction
Solar power is a highly efficient, pollution-free, reliable and dependable source of energy that is available in abundance in nature. All these characteristics make solar power an ideal source of power for both domestic and industrial applications. It is essential to forecast the solar energy expected from photovoltaic (PV) modules for effective management of the generated power, and this process of predicting solar power is known as solar power forecasting [1]. The easiest and fastest way to forecast solar power is to use machine learning to learn from past data and generate new values based on previous trends. A number of research efforts are ongoing in this sector to find the most efficient learning algorithm that can forecast accurate values of solar power under given conditions. One
such study is given by Wan et al. [2] which evaluates the performance of
various statistical techniques for the purpose of solar power forecasting in
smart grid. Huang et al. proposed a dendritic neural model (DNM)-based
ultra-short-term hybrid PV power forecasting approach. This study used
improved biogeography-based optimization (IBBO) to train the model,
which is a strategy that integrates a domestication operation to improve the
performance of traditional biogeography-based optimization (BBO) [3].
Panamtash et al. used quantile regression on top of time series models to
provide probabilistic forecasts. A reconciliation was done using a copula-
based bottom-up technique or a proportion-based top-down method, taking
into account the coherency among numerous PV sites [4].
This chapter aims to contribute to the ever-growing field of solar power
forecasting in PV cells by using various pre-processing techniques like
exponentially weighed moving average, exponential smoothing, etc., and a
number of time series techniques like AR, ARIMA, SARIMAX and Holt-
Winters to forecast solar power efficiently.

2 Methodology
This section provides a brief description of the dataset used to train and
evaluate the model, the pre-processing techniques used to improve the
model’s learning abilities and finally a number of time series models to
forecast solar power produced per day.
Fig. 1 Model architecture
Figure 1 represents the architecture used for this chapter, highlighting
various time series models used to forecast the solar power produced.
Initially, the raw data is converted into processed form by applying a
number of pre-processing techniques. This processed dataset is later on split
into two parts, for training the models and testing their performance, and
finally, models like AR, ARIMA, SARIMAX and Holt-Winters are applied
to determine which one is most efficient for this task.

2.1 Dataset
This chapter is based on the "Daily Power Production of Solar Panels" dataset, which is an open-source dataset available on the Kaggle website. In this data, 24 photovoltaic (PV) panels, each with a rated power of 210 W, are placed at an inclination of 45°. These panels are made of polycrystalline silicon. The data consists of 3304 rows and four features, containing the date, cumulative solar power, daily power consumption and gas used per day. Among the 3304 rows, 2600 rows were used to train the models, whereas the rest were used to test the models' efficiency. This dataset does not contain any null values, so data cleaning is not required.

2.2 Data Pre-processing


In this section, various pre-processing techniques are discussed which are
used in order to minimize the error obtained by the forecasting models.
Fig. 2 Pre-processing pipeline
Figure 2 shows the various pre-processing techniques like feature
engineering, seasonal decomposition, exponential moving average and
various types of exponential smoothing used before applying the time series
models for the purpose of enhancing the efficiency of the models as well as
generating insights from the data. These techniques are used before
experimenting with the models as they add new features inside the data
which are essential for the time series models to learn the trends properly.

2.2.1 Feature Engineering


Firstly, three features named day, month and year are created from the date to determine the frequency and the time period of the data. After this, the cumulative values of electricity consumption and gas consumption are calculated and added as features. The cumulative sum is estimated as follows:

$C_i = \sum_{j=1}^{i} x_j$    (1)

In Eq. (1), $x_i$ represents the i-th row of the feature whose cumulative sum $C_i$ is to be calculated. Since only cumulative values of the solar energy produced were given in the data, the solar energy produced per day is also calculated and used as the target variable for the predictions.
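A minimal pandas sketch of this feature-engineering step is shown below; the file name and the column names (`date`, `elec`, `gas`, `cum_solar_power`) are hypothetical placeholders and may differ from the actual Kaggle file.

```python
import pandas as pd

# Hypothetical file and column names; adjust to the actual dataset.
df = pd.read_csv("solar_daily.csv", parse_dates=["date"])

# Calendar features used to determine the frequency and time period of the data.
df["day"] = df["date"].dt.day
df["month"] = df["date"].dt.month
df["year"] = df["date"].dt.year

# Cumulative sums of electricity and gas consumption, following Eq. (1).
df["cum_elec"] = df["elec"].cumsum()
df["cum_gas"] = df["gas"].cumsum()

# Daily solar energy produced (the target) recovered from the cumulative column.
df["solar_per_day"] = df["cum_solar_power"].diff().fillna(df["cum_solar_power"])
```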

2.2.2 Seasonal Decomposition


A time series can be considered as a combination of level, trend, seasonality and noise components. In this PV energy data, the series follows an additive model, which is defined as follows:

$y(t) = x(t) + g(t) + s(t) + e(t)$    (2)

Equation (2) represents a linear function given as y(t). The variables x(t), g(t) and s(t) represent the level, trend and seasonality of the time series, respectively, and e(t) is the noise present in the time series.

Fig. 3 Seasonal decomposition of cumulative solar power


Figure 3 represents the seasonal decomposition of cumulative solar
power. Since cumulative solar power is the sum of solar power produced
per day, the seasonal decomposition was performed in additive mode. From
this figure, it can be interpreted that cumulative solar power follows a
uniform trend for seasonality and has some noise in the form of residue.
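One possible way to reproduce this additive decomposition, continuing from the data frame sketched above, is the statsmodels call below; the yearly period of 365 days is an assumption.

```python
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Additive decomposition of the cumulative solar power series into
# trend, seasonal and residual (noise) components.
decomposition = seasonal_decompose(df.set_index("date")["cum_solar_power"],
                                   model="additive", period=365)
decomposition.plot()
plt.show()
```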

2.2.3 Exponentially Weighted Moving Average


Exponentially weighted moving average (EWMA) is a statistical measure used to analyse the data points of a time series by weighting them exponentially, i.e. the weight of older data points falls exponentially [6]. Mathematically, this can be described as follows:

$\mathrm{EWMA}_t = \alpha x_t + (1 - \alpha)\,\mathrm{EWMA}_{t-1}$    (3)

Equation (3) defines a recursive function EWMA, where $\alpha$ is the weight used to decay the older values and $x_t$ is the current value of the time series. The effect of EWMA is seen in Fig. 4.

Fig. 4 Exponentially weighted moving average of solar power
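In pandas, the recursion of Eq. (3) can be evaluated with the `ewm` accessor, as in the short sketch below; the smoothing value is an illustrative choice rather than the one used by the authors.

```python
# Exponentially weighted moving average of the daily solar power series;
# alpha controls how quickly the weight of older observations decays.
df["solar_ewma"] = df["solar_per_day"].ewm(alpha=0.3, adjust=False).mean()
```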

2.2.4 Exponential Smoothing


Exponential smoothing is a method used in univariate time series forecasting to support seasonality and to handle straightforward trends in the data. EWMA and exponential smoothing are similar in some regard, except that this model employs exponentially diminishing weights for prior observations and calculates a weighted sum of past observations [7]. Exponential smoothing is of three types:
1. Simple exponential smoothing (SES)

2. Double exponential smoothing (DES)

3. Triple exponential smoothing (TES)

SES uses a single parameter known as the smoothing factor ($\alpha$), which measures how quickly the effect of previous observations decays exponentially. SES is mathematically represented as follows:

$l_t = \alpha x_t + (1 - \alpha) l_{t-1}$    (4)

In (4), $l_t$ denotes the level of the series at time t and the terms $x_t$ are the data points of the time series. $\alpha$ usually ranges between 0 and 1. Values of $\alpha$ closer to 1 indicate that the model focuses on the most recent historical observations, while values of $\alpha$ closer to 0 indicate that the model considers more of the history when generating the values of PV power.
An extra smoothing factor ($\beta$) is used in DES to manage the decay of the impact of the trend shift. This provides support for trends in the time series. Based upon the type of trend, exponential smoothing can be classified as follows:
1. DES with linear trend (additive trend)

2. DES with exponential trend (multiplicative trend)

The trend equation can be represented as follows:

$b_t = \beta (l_t - l_{t-1}) + (1 - \beta) b_{t-1}$    (5)

In (5), $b_t$ is the trend equation and $l_t$ is the level equation at time t.
The Holt-Winters seasonality method, also known as TES, adds another smoothing factor ($\gamma$) to manage the impact of seasonality in the data. The triple exponential smoothing (seasonal) equation can be represented as follows:

$s_t = \gamma (x_t - l_t) + (1 - \gamma) s_{t-m}$    (6)

In (6), $s_t$ represents the current seasonal index, and the seasonal equation finds a weighted average of the seasonal index from m periods ago and the current seasonal index.
Since the data used in this chapter has trend as well as seasonality, triple exponential smoothing gives the closest values to the data, as shown in Fig. 5.
Fig. 5 Exponential smoothing for solar power
Figure 5 shows the effect of EWMA, SES, DES and TES on the data, from which it can be inferred that TES produces values very close to the actual values.
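The three smoothing variants can be fitted, for instance, with statsmodels as sketched below, continuing from the series built earlier; the smoothing parameters and the seasonal period of 365 days are illustrative assumptions, not the chapter's tuned values.

```python
from statsmodels.tsa.api import ExponentialSmoothing, Holt, SimpleExpSmoothing

y = df.set_index("date")["solar_per_day"]

# SES: level only, Eq. (4).
ses_fit = SimpleExpSmoothing(y).fit(smoothing_level=0.3, optimized=False)

# DES: level plus additive trend, Eqs. (4)-(5).
des_fit = Holt(y).fit(smoothing_level=0.3, smoothing_trend=0.1, optimized=False)

# TES (Holt-Winters): level, trend and additive seasonality, Eqs. (4)-(6).
tes_fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                               seasonal_periods=365).fit()
```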

2.3 Holt-Winters Method


The Holt-Winters approach is a time series forecasting method that employs all of the level, trend, and seasonality equations. This approach has two versions depending on the nature of the seasonality. When the seasonality in the time series is constant, the additive technique is used; the multiplicative technique is employed when the seasonality changes proportionally to the level of the time series. The Holt-Winters additive method is built from (4), (5) and (6) and is denoted as follows:

$\hat{y}_{t+h} = l_t + h b_t + s_{t+h-m(k+1)}$    (7)

The Holt-Winters multiplicative method is expressed mathematically as follows:

$\hat{y}_{t+h} = (l_t + h b_t)\, s_{t+h-m(k+1)}$    (8)

In (7) and (8), k is the integer part of $(h-1)/m$. The seasonality of this dataset is constant; therefore, the Holt-Winters additive method is used to forecast the solar energy produced per day.
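A hedged sketch of producing out-of-sample forecasts with the additive Holt-Winters model, using the 2600-row training split mentioned in the dataset description, is given below; the seasonal period is again an assumption.

```python
from statsmodels.tsa.holtwinters import ExponentialSmoothing

train, test = y[:2600], y[2600:]

# Additive Holt-Winters, since the seasonality of this dataset is constant.
hw_fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                              seasonal_periods=365).fit()

# Forecast as many steps ahead as there are test samples.
hw_forecast = hw_fit.forecast(len(test))
```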

2.4 Auto-Regression Method


The AR method is another way to make time series-based predictions. It uses previous values of the solar power produced to predict newer ones. Since it is a regressive model, it tries to fit the data in a linear manner. The general equation for this method is given as follows:

$X(t) = c + \sum_{i=1}^{p} \varphi_i X(t-i) + \varepsilon(t)$    (9)

In (9), X(t) is the value of the solar power at time t, $\varphi_i$ are the model coefficients, p is the lag order, c is a constant, and $\varepsilon(t)$ is a noise term. The AR method assumes that the previous values of the data are correlated with the future values; thus, if the values are not correlated, the AR method will not be able to produce good results.
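A minimal AutoReg sketch on the same training split is shown below; the lag order of 7 is an illustrative choice rather than the one used in the chapter.

```python
from statsmodels.tsa.ar_model import AutoReg

# AR model with an illustrative lag order of 7 (one week of daily values).
ar_fit = AutoReg(train, lags=7).fit()
ar_forecast = ar_fit.predict(start=len(train), end=len(train) + len(test) - 1)
```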

2.5 ARIMA
Another model for time series forecasting used for PV power prediction is ARIMA, which forms a regression analysis using the autocorrelation in the data [5]. Auto-regressive (AR), integrated (I) and moving average (MA) are the three components, with corresponding hyperparameters, used to handle trend in the ARIMA model. The ARIMA's auto-regressive component is identical to the AR model described in the preceding section. The moving average component of the model's output is similar to the EWMA described previously, in that it is linearly dependent on the present and several previous observations of a stochastic factor. Finally, the differencing step used to construct stationary time series data, i.e. to eliminate the seasonal and trend components, is referred to as integrated. The ARIMA model is useful if the data is non-stationary and is often represented by (p, d, q), where p refers to the lag order of the AR part, d refers to the integration order or number of differences, and q is the number of MA lags.
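A short ARIMA sketch follows; the (p, d, q) order is an illustrative assumption rather than the order selected in the chapter.

```python
from statsmodels.tsa.arima.model import ARIMA

# ARIMA(p, d, q): d = 1 differences the series once to remove the trend.
arima_fit = ARIMA(train, order=(2, 1, 2)).fit()
arima_forecast = arima_fit.forecast(steps=len(test))
```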

2.6 SARIMAX
SARIMAX is an extension of the ARIMA model used to handle the seasonality of the data by adding seasonal parameters to the ARIMA model, and it can handle external effects as well. The model is useful if the data is non-stationary and affected by seasonality, since some time series are influenced by seasonal effects, and this model can handle such data with ease. In addition, exogenous regressors are present in this model; these variables are not affected by any other variable present, i.e. they have zero correlation with the other variables.
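A hedged SARIMAX sketch is given below; the choice of daily gas usage as the exogenous regressor, and the (seasonal) orders, are assumptions made for illustration only.

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical exogenous regressor aligned with the train/test split.
gas = df.set_index("date")["gas"]
exog_train, exog_test = gas[:2600], gas[2600:]

# (p, d, q) x (P, D, Q, s): the seasonal part and s = 7 (weekly) are illustrative.
sarimax_fit = SARIMAX(train, exog=exog_train, order=(1, 1, 1),
                      seasonal_order=(1, 1, 1, 7)).fit(disp=False)
sarimax_forecast = sarimax_fit.forecast(steps=len(test), exog=exog_test)
```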

3 Results
In order to forecast the solar power generated from these PV modules, the AR, ARIMA, SARIMAX and Holt-Winters models were trained on about 78% of the data, and the remaining 22% of the data was used for evaluating the models' performance. R squared, mean absolute error (MAE), mean squared error (MSE) and RMSE are the metrics used to evaluate these models.
R squared, or the coefficient of determination, gives a statistical measure of how close the forecasted values of solar power are to the actual values of the solar power generated, by estimating the sum of squared residuals (SSR) and the total sum of squares (SST). SSR is given as follows:

$SSR = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$    (10)

and SST is given as follows:

$SST = \sum_{i=1}^{N} (y_i - \bar{y})^2$    (11)

From (10) and (11), R squared can be mathematically represented as follows:

$R^2 = 1 - \frac{SSR}{SST}$    (12)

The MAE is the average of the absolute error between the predicted and actual solar power values. Mathematically, it is defined as follows:

$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|$    (13)

The MSE is the average of the squared differences between the projected and actual solar power levels. MSE is defined mathematically as follows:

$MSE = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$    (14)

RMSE is simply the square root of (14) and is represented as follows:

$RMSE = \sqrt{MSE}$    (15)

In (12), (13), (14) and (15), N represents the number of rows in the dataset, $y_i$ represents the actual values of the solar power, $\bar{y}$ is the mean of the actual values of the solar power, and $\hat{y}_i$ represents the forecasted values of the solar power.
Table 1 Time series models performance comparison

Model          R squared   MAE      MSE       RMSE
Holt-Winters   0.606       4.126    29.105    5.394
AR model       0.049       7.194    77.590    8.808
ARIMA          1.791       11.471   206.442   14.368
SARIMAX        1.601       10.907   192.361   13.869
Table 1 presents the R squared, MAE, MSE and RMSE values of the proposed models. From this, it can be inferred that the Holt-Winters method in additive mode performs best, which is possibly due to the addition of the exponential smoothing features within the data.
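A small helper to compute these four metrics for any of the fitted models, continuing from the sketches above, could look as follows; scikit-learn's metric functions implement Eqs. (12)-(14) directly.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def report(y_true, y_pred, name):
    """Print R squared, MAE, MSE and RMSE for a forecast, per Eqs. (12)-(15)."""
    mae = mean_absolute_error(y_true, y_pred)
    mse = mean_squared_error(y_true, y_pred)
    rmse = np.sqrt(mse)
    r2 = r2_score(y_true, y_pred)
    print(f"{name}: R2={r2:.3f} MAE={mae:.3f} MSE={mse:.3f} RMSE={rmse:.3f}")

report(test, hw_forecast, "Holt-Winters")
report(test, ar_forecast, "AR model")
```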

4 Conclusion
Power forecasting in PV cells is the process of estimating how much energy can be generated from solar radiation. This chapter supports this task of solar PV power forecasting by applying various pre-processing techniques and machine learning models to a daily solar power production dataset, to find which model performs best when pre-processing techniques like feature engineering, EWMA and exponential smoothing are applied to the dataset. This chapter concludes that the Holt-Winters method for time series forecasting produces the most efficient results, with an MAE score of 4.126, an MSE score of 29.105 and an R squared score of 0.606.

References
1. Larson DP, Nonnenmacher L, Coimbra CF (2016) Day-ahead forecasting of solar power output
from photovoltaic plants in the American Southwest. Renewab Energy 91:11–20
[Crossref]
2.
Wan C, Zhao J, Song Y, Xu Z, Lin J, Hu Z (2015) Photovoltaic and solar power forecasting for
smart grid energy management. CSEE J Power Energy Syst 1(4):38–46
[Crossref]
3.
Goh HH, Luo Q, Zhang D, Liu H, Dai W, Lim CS, Kurniawan TA, Goh KC (2022) A hybrid SDS
and WPT-IBBO-DNM based model for ultra-short term photovoltaic prediction. CSEE J Power
Energy Syst
4.
Panamtash H, Zhou Q (2018, June) Coherent probabilistic solar power forecasting. In: 2018 IEEE
international conference on probabilistic methods applied to power systems (PMAPS). IEEE, New
York, pp 1–6
5.
Atique S, Noureen S, Roy V, Subburaj V, Bayne S, Macfie J (2019, January) Forecasting of total
daily solar energy generation using ARIMA: a case study. In: 2019 IEEE 9th Annual computing
and communication workshop and conference (CCWC). IEEE, New York, pp 0114–0119
6.
Alevizakos V, Chatterjee K, Koukouvinos C (2022) The quadruple exponentially weighted
moving average control chart. Qual Technol Quant Manage 19(1):50–73
[Crossref]
7.
Zhao HM, He HD, Lu KF, Han XL, Ding Y, Peng ZR (2022) Measuring the impact of an
exogenous factor: an exponential smoothing model of the response of shipping to COVID-19.
Transp Policy 118:91–100
[Crossref]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_3

Hybrid Techniques for Renewable Energy Prediction
Guilherme Santos Martins1 and Mateus Giesbrecht1
(1) Department of Electronics and Biomedical Engineering, School of
Electrical and Computer Engineering, University of Campinas, Ave.
Albert Einstein - 400, Campinas, Brazil

Guilherme Santos Martins


Email: [email protected]

Mateus Giesbrecht (Corresponding author)


Email: [email protected]

Abstract
Due to the urgent climate change challenge and the increase in electric
energy demand caused by the electrification of the transport system,
renewable sources of power, such as hydro, wind, and solar, are becoming
more important each day. Those sources are intermittent, and it is necessary to predict their future generation capacity to guarantee effective planning of the power system operation. The generation prediction is a time series
forecasting problem, which can be solved using classical statistical methods
or machine learning (ML) algorithms. Each technique presents its strengths
and limitations. One can be more advantageous than the other, depending
on the problem characteristics, such as the prediction horizon, the necessity
to estimate the confidence level of each prediction, etc. Recently, many
hybrid techniques, mixing different tools from statistical methods and ML
have been developed, benefiting from the main strengths of each field to
perform renewable power generation prediction. This chapter will present a
detailed bibliographic review of these techniques, highlighting the recent
advances in this field. An overview of hybrid techniques applied to predict
time series will be presented, highlighting the most recent methods
published in the literature. The following three sections will be devoted to
detail the hybrid techniques already applied to predict the hydropower
generation, which can be considered as one of the first renewable power
sources used massively, the wind and solar power. Finally, a section
containing the conclusions about the state of the art in renewable energy
prediction and the future perspectives will be presented.

Keywords Renewable energy – Hydropower generation – Wind power generation – Solar power generation – Generation forecasting – Time series forecasting

1 An Overview About Hybrid Techniques for Time Series Prediction
Power generation prediction is an application of the most general problem
known as time series prediction or time series forecasting. To handle this
problem, there are many approaches, which are generally classified in three
major categories: statistical, ML and hybrid methods.
The methods from the first category are based on statistical principles,
such as random variables, cumulative probability functions, statistical
densities, Bayes rule, among others. Roughly speaking, the derivation of
time series models based on statistical methods start from a parametric
model, that has its unknown parameters estimated from data with an
optimization process, which is either a minimization of error or a
maximization of likelihood function. The parametric model structures
include auto-regressive (AR), auto-regressive with moving average
(ARMA) and auto-regressive integrated moving average (ARIMA) models,
where the time series is predicted based on its past values. In some cases, exogenous variables can be considered, resulting in AR with exogenous variables (ARX), ARMA with exogenous variables (ARMAX) and ARIMA with exogenous variables (ARIMAX) models. Other structures commonly applied include seasonal effects, resulting in the seasonal ARIMA (SARIMA) and seasonal ARIMAX (SARIMAX) models.
The main advantages of statistical methods are that the models are
relatively easy to interpret and the computational burden to estimate its
parameters is relatively small. On the other hand, since the methods are
based on statistical concepts, many of them arise from assumptions such as
linearity, stationarity, ergodicity, Gaussian nature of data, etc., which in
many cases are not perfectly valid. Some relevant textbooks describing
those methods are [1–3], and more recent techniques involving state space
models for time series analysis can be found in [4, 5].
The ML time series prediction techniques are based on methods such as
multilayer perceptron (MLP) neural networks (NN), radial base function
(RBF) NN, support vector machines (SVM), artificial neural fuzzy
inference systems (ANFIS), decision trees (DT), random forests (RF),
heuristic optimization methods, k-nearest neighbours (k-NN), among
others. Due to the recurrent nature of the problem, it also is common to find
methods based on recurrent NNs (RNN) like Elman recurrent neural
networks (ERNN), Jordan recurrent neural networks (JRNN), and networks
based gated recurrent units (GRU) or long short term memory units
(LSTM). Reference [6] provides an excellent introduction to these methods,
while in [7] a more advanced discussion is provided. Many of the ML
methods were initially developed for classification, but the regression
problem can also be addressed adapting those tools.
The main advantage of ML techniques is a natural capacity to deal with
nonlinear and nonstationary data. On the other hand, the computational
burden and the interpretability of the models resulting from those methods are issues that are still being discussed by the forecasting community.
Furthermore, for practical cases, it was proven that in many situations those
methods are not as accurate as statistical methods [8].
Hybrid techniques are interesting alternatives to pure statistical or ML
time series prediction methods. Those approaches combine the advantages
of statistical and ML algorithms to deal with cases where hypothesis such as
linearity and stationarity are not present, and in many cases result in
interpretable models with a relatively small computational effort if
compared to pure ML methods. The idea of mixing different methods is
present since the first forecasting competitions, such as M1, M2 and M3,
where combinations of statistical techniques presented more accurate
results than pure methods, indicating that the forecasting performance can
be improved if more than one method is considered [9]. In M4 competition,
ended on May 2018, a hybrid approach based on statistical and ML
methods produced the most accurate forecasts [10]. In the most recent M
competition, ML methods presented better results than hybrid methods [11,
12], but the second are still well accepted by the renewable energy
prediction community.
Given the advantages of hybrid techniques, many methods were already
proposed and applied to the most diverse time series prediction problems.
Most of the hybrid techniques can be categorized in one of the following
classes:
1. Model ensemble
The most natural hybrid approach to forecast a time series is based
on a combination of results of different methods [13]. There are two
subclasses of model ensembles. The first one is the parallel model
ensemble, where the input variables are given independently to
different models and the final result is the combination of the outputs of
each model. This combination varies from a simple mean to weighted
means, with weights calibrated by optimization algorithms.
The second class is known as serial model ensemble. In this
category, input data is given to the first algorithm and the residuals are
calculated by subtracting the results of this algorithm from the real
data. Then, the following model is fitted using the residuals as inputs.
This is repeated for all models considered in the ensemble. Generally, this kind of ensemble has two models: the first one is commonly a statistical method that predicts the linear behaviour of the time series, and the second one is a ML method used to predict the nonlinear components of the time series (a minimal code sketch of this serial scheme is given after this list).

2. Parameters determination based on meta-heuristic methods


Another class of hybrid methods uses metaheuristic optimization
algorithms to estimate parameters of parametric models [14, 15]. In
the system identification problem, which is related to time series forecasting, a similar procedure can be classified as grey box
identification, since it is between the white box identification, where
the system structure and the parameters are known, and the black box
identification, where neither system structure nor the parameters are
known [16]. The hybrid aspect in those methods is due to the mixture
of a known structure, given by classical time series analysis, for
example, with a heuristic optimization algorithm used to calibrate the
parameters. Moreover, besides the determination of parameters in
classical models, heuristic methods can be used to estimate a set of
parameters for more complex structures, such as NNs, adaptive ANFIS,
among others.
3. Time series decomposition
A third approach consists of decomposing the original time series into simpler components using either structural time series theory [3] or decomposition methods such as the wavelet transform (WT) [17],
empirical mode decomposition (EMD) [18], singular spectrum analysis
(SSA) [19], among others. After the decomposition, there are two
possible approaches. In the first one, a single model with multiple
inputs is trained with the different time series components. In the
second one, each component of the time series is predicted using a
different model, of the same nature or not, and the final prediction is
the combination of the forecasts for each component [20–23].
There are many works where meta-heuristic methods are used to
estimate the parameters of models obtained from different time series
components. In this chapter, those references will be considered in this
class, because the authors understand that the most relevant
characteristic of these methods is the decomposition algorithm.

4. Other hybrid methods


Besides the three main categories identified above, there are other
manners to hybridize forecasting algorithms. An example is the
combination of different NNs. Another example is based on the mixture
between forecasting methods and similarity-based ML methods, such
as the k-NN or clustering algorithms. In those cases, the similarity methods are used to find a past moment similar to the one just before the instants to be predicted [24, 25], and then the prediction algorithm is applied based only on that part of the time series. Other works are based on the combination of genetic programming (GP) and other
techniques, such as variable selection methods. Since these categories
are not present in all power sources analysed in this paper, these works
will be categorized as other hybrid methods.
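As a rough illustration of the serial ensemble scheme described in the first category above, the sketch below fits an ARIMA model to a series and then trains a random forest on the ARIMA residuals; the model orders, lag count, and the use of a random forest as the nonlinear learner are all assumptions made only for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from statsmodels.tsa.arima.model import ARIMA

def serial_ensemble_forecast(y: pd.Series, steps: int, n_lags: int = 7):
    """Serial ensemble: ARIMA captures the linear part of the series and a
    random forest is trained on the ARIMA residuals for the nonlinear part."""
    arima_fit = ARIMA(y, order=(2, 1, 2)).fit()
    residuals = y - arima_fit.predict(start=0, end=len(y) - 1)

    # Lagged residuals are used as features for the residual (ML) model.
    X = np.column_stack([residuals.shift(i) for i in range(1, n_lags + 1)])[n_lags:]
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X, residuals[n_lags:])

    # Final forecast = linear forecast + residual correction. As a
    # simplification, the most recent residual lags are reused for every step.
    linear_part = arima_fit.forecast(steps=steps).to_numpy()
    last_lags = residuals.iloc[-n_lags:].to_numpy()[::-1].reshape(1, -1)
    nonlinear_part = rf.predict(np.repeat(last_lags, steps, axis=0))
    return linear_part + nonlinear_part
```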
A recent review about the application of hybrid approaches for
renewable power prediction can be found in [26]. In this chapter, a
complementary bibliographic review will be provided, focusing on more
recent articles about hydro, wind and solar power prediction and classifying
the references in the categories listed above.

2 Hybrid Techniques for Hydropower Prediction


Hydropower generation capacity is related to river flow. This relates the
hydropower prediction algorithms to hydrology models, which have been developed since the nineteenth century. The first hydrology models were developed
based on relations between rainfall and run-off, as detailed in a recent
review [27]. Then, other models were developed to forecast precipitation,
stream-flow, sediment, groundwater, among other variables [28].
With advances in computational capacity, data-driven methods arose.
The most simple are the statistical ones, but in the current century, ML
methods were deeply studied [29]. More recently, hybrid methods gained
attention in hydrology forecasting community. In this section, recent hybrid
techniques for hydropower prediction will be discussed, following the
categories introduced in Sect. 1.

2.1 Model Ensemble


An extensive study about hydrological time series was presented in [30]. In
that article, the authors performed one-step ahead predictions for a massive
set of 90-year-long river flow time series from stations in North America
and Europe, resulting in 599 time series. The forecasting was performed
using five base methods, which were the Naive, the simple exponential
smoothing, chosen for its good performance in M3-competition, the
complex exponential smoothing, which was part of a competitive ensemble
in M4-competition, the automatic autoregressive fractionally integrated
moving average (ARFIMA) and the Facebook’s Prophet. The model
ensembles were all 26 possible combinations (per two, per three, per four or
per five) of the base methods, and the forecast was calculated as the median
of the involved methods. As a result, the authors observed that the model
ensemble improved the one-step ahead performance more than any other
method alone. It must be noticed that the methods used for comparison
were really competitive in other scenarios, demonstrating that the model
ensemble is a good strategy to deal with hydrological time-series.
Another recent work that dealt with model ensemble to predict
hydrological time series was [31]. Differently from [30], where the model
combination was based on the median of the results from different methods
to estimate one-step ahead predictions, the authors of [31] proposed a three-
phase methodology to combine the ARIMA and the Bidirectional LSTM
(bi-LSTM) for long term predictions. In phase I, the authors performed a
seasonal trend decomposition using loess (STL) [32] and calculated the
forecast using a hybrid method based on an ensemble of ARIMA and bi-
LSTM. In phase II the authors split the data to create different models for
each season, decomposed the data from each season using the STL
decomposition and then used again ARIMA and bi-LSTM to forecast the
data for each season. In phase III an average of the two first phases was
made and a final ensemble model was obtained. The authors tested the
methodology both for hydro and wind power prediction data and the
conclusions were that the model improved the accuracy, the uniformity and
the diversity of the solutions.
In [33], different kinds of unorganized machines were used to predict
streamflow from hydro power plants in Brazil. The methods were the
extreme learning machine (ELM), which is a NN with random weights in
its single hidden layer and output layer weights calculated with the least
squares method, and echo state network, which is a NN that resembles a
state space model, with a dynamic reservoir layer that represents the non-
linear state transition and an output layer that consists of a linear
combination of the states. Besides the networks, ensembles combining
these models and simpler ones, such as AR and ARMA, were tested.
Differently from the ensembles in the former references, different
combiners were tested: average, median, MLP and RBF. The methods were
tested to predict the streamflow for five different power plants and the
ensembles were the most accurate models for one-step ahead predictions.
For longer horizons, the ELM was the most accurate method.

2.2 Parameters Determination Based on Meta-Heuristic


Methods
The forecasting methods described in this section to predict hydropower
related variables are based on models with unknown parameters, which are
determined by meta-heuristic methods. In Sect. 2.3, some of the works
described also use meta-heuristic methods to calibrate model parameters.
The main difference between the papers described in these sections is that
the following section covers articles where some kind of time series
decomposition method is applied before using the model, while the methods
discussed in this section do not use any kind of decomposition.
In [34], a conceptual rainfall-run-off model, based on the physical
relations between hydraulic phenomena such as precipitation, evaporation,
run-off and streamflow, was used to predict the streamflow. The model had
16 parameters and its determination was considered a challenging task, due
to the high dimensionality of the search space. To solve the problem, the
authors used the multi-objective particle swarm optimization (MOPSO) to
find a Pareto front of possible optimal solutions in the parameters space.
The results were a well spread set of solutions, with greater diversity if
compared to other calibration methods.
In [35], the cooperation search algorithm (CSA), which is a heuristic
optimization method, was used to calculate the connection weights and
biases for neurons in hidden and output layers of an ANN trained to forecast
river flow time series. The reason for applying the heuristic optimization
algorithm, instead of most usual algorithms such as the back-propagation
(BP) or gradient-based learning, was to avoid problems such as local
convergence and slow learning rate. Many ANN structures with different
combinations of inputs were tested and compared to other methods such as
ELM, SVM and ANN trained with classical methods. The conclusions are
that the ANN trained with the CSA algorithm outperformed the other
methods with a smaller root-mean-square error (RMSE).
The Grey Wolf Optimization (GWO) was used in [36] to calibrate the
parameters of membership functions in an ANFIS model to forecast the
hydropower generation in a dam in Iran. The model inputs were the
precipitation, the inflow and the hydropower generation in former months.
The strategy was compared to the classical ANFIS. As a result, the classical
ANFIS failed to produce accurate forecasts for some combinations of input
parameters while the ANFIS trained with GWO was successful in all cases
studied.
A similar idea was used in [37], where three heuristic methods—the
particle swarm optimization (PSO), the genetic algorithm (GA) and the
differential evolution (DE) were used to tune the parameters of an ANFIS to
forecast the rainfall time series, which is related to flows and, consequently,
to the hydro power generation capacity. The heuristic strategies were
chosen due to the fact that classical parameters optimization methods may
be stuck in local minima. The ANFIS with parameters calibrated with the
heuristic algorithms presented better performance indicators than the
classical ANFIS for models with different combinations of regressors.
Following the same idea presented in former references, [38] proposed
to calculate the weights of a classical ANN with recent physics-inspired
meta-heuristics, which were the Equilibrium Optimization (EO), Henry
Gases Solubility Optimization (HGSO) and Nuclear Reaction Optimization
(NRO). The NNs were trained to predict the streamflow in Nile river. The
accuracies of the resultant models were compared to accuracies obtained
with ANN trained with classical algorithms and hybrid NNs trained with
other well known meta-heuristics. As a result, the NNs trained with physics-
inspired meta-heuristics outperformed the results obtained with the other
methods tested.

2.3 Time Series Decomposition


Many hydrological time series prediction methods are based on
decomposition methods. Generally, the first step consists of decomposing
the time series into its components, then using a forecasting method to
predict the future steps of the time series. As pointed out in the introduction
either a single model with multiple inputs can be used to forecast the series
or a different model can be used to forecast each component, and then, the
results are combined to produce the final forecast. Both strategies are
discussed here, with the ones using single models being discussed firstly.
A common method involves the time series decomposition using WT
and the forecasting using some kind of ANN. The basic idea is to train the
ANN with the sub-components as inputs and future samples of the time
series as outputs. The idea was applied to groundwater level forecasting in
[39] and to forecast hydrological time series in many posterior references,
such as [40], where the classical ANN was used.
The ELM was combined with WT in [41]. In that work, the river flow
time series was decomposed into a finite number of components using the
WT and the past data from each component was used to train the ELM. The
results were compared to the direct application of the ELM on the original
time series and, as a conclusion, the hybrid method proposed reduced
drastically the RMSE and the mean absolute error (MAE). The same
principles were used in [42], with results better than the ones obtained with
the original time series data.
Another variation associated WT with ANFIS to train rainfall-runoff
models [43]. In fact, the association between the WT and ML models, such
as the ones discussed above, gained attention from the hydro-climatology
community since 2004, when one of the first papers combining WT and ML
was published [44]. After that, many hybrid approaches following this
paradigm were proposed to describe precipitation, flow, rainfall-runoff and
sediment models, as pointed out in the review [45].
An idea similar to the one proposed in previous references was also
explored in [46], where instead of WT, STL and SSA were used to
decompose the streamflow time series. After each decomposition, three
different NNs were used to forecast the time series using the components
and other related series as inputs. The NNs were the convolutional neural
network (CNN), the LSTM and the classical NN. As a result, six hybrid
methods were developed, based on the combination of each decomposition
method with each NN. Besides the streamflow time series components, the
authors also used other series such as precipitation, relative humidity and
temperature as potential model inputs and, in order to decide which inputs
should be used in the model, the Gini index method was applied, resulting
in other six hybrid methods, similar to the first ones, but with this additional
feature selection step. The results of each hybrid method were compared to
the NNs alone and the conclusion was that the data decomposition
increased the accuracy of the forecasts. From the methods proposed, the
best performance was obtained with the combination of SSA and the ANN.
For hydro forecasting, few authors proposed hybrid methods including
decomposition techniques in which a model is trained for each component.
Some examples of that approach are discussed below.
In [47], the decomposition method was the relatively recent variational
mode decomposition (VMD) [48]. Then each sub-series was predicted
using an ensemble of four ML methods, which were combined using
weights calibrated by solving a multi-objective optimization problem with
the multi-objective grey wolf optimizer (MOGWO). Finally, the
components were combined to produce the final forecast.
In [49], the runoff data series from two stations in China was
decomposed using the EMD. Then, each component was predicted using
the least-squares SVM (LSSVM), which is a variation of the SVM to
decrease the computational burden. To optimize the LSSVM
metaparameters, a swarm intelligence method known as gravitational search
algorithm (GSA) was used. Then, the results of the prediction of each
component were combined and the final forecast was created. The method
proposed by the authors was compared to the SVM and the ANN alone and
the results showed that the hybrid method presented a substantial
improvement in the root-mean squared error, demonstrating the advantages
of decomposing the time series before applying the forecasting method.

2.4 Other Hybrid Methods


In [50], a hybrid method composed of three stages was proposed to forecast
streamflow. The first stage was the input selection, using the least absolute
shrinkage and selection operator (LASSO). The candidate inputs considered
included many climatological indexes related to global atmospheric
oscillations, sea surface temperature and rainfall. The second stage was the
classification of the samples in three flow regimes: low, medium and high.
The motivation for this stage was to separate data that follow three distinct
patterns in order to simplify the modelling stage. Two different approaches
were adopted in this stage, which were a single-variable one, in which the
classes were defined based on a rainfall threshold, and a multi-variable
fuzzy C-means (FCM) approach. Then, in the third stage, a different model
was trained for each class. Two models were tested: a traditional ANN and a
deep belief network (DBN). For the majority of the cases studied, the
combination between FCM and DBN was the one that resulted in the best
accuracy. The results were also better than the observed for forecasting
without input selection or classification, demonstrating that the hybrid
approach was relevant to enhance the models' performances.
In [51], a combination between the multi-stage genetic program
(MSGP) and the LASSO was used to produce relatively simple and
accurate models to forecast the one-step ahead streamflow based on its past
values. The idea was to use the MSGP to estimate functions between the
variable to be predicted and its past values and then, to use LASSO to select
the functions that were most relevant to produce accurate results. Two
variants of the method were proposed. In the first one, only the functions
obtained with the MSGP are considered as candidates in the LASSO
procedure. In the second one, functions and some past values of the time
series were considered. To compare the models both in accuracy and
complexity, the Akaike Information Criterion (AIC) was used as a
performance metric. The methods proposed were compared to classic GP
and many SARIMA models. The results showed that the hybrid algorithm
performed better considering both RMSE and AIC.

3 Hybrid Techniques for Wind Power Prediction


Wind power is a fundamental source to achieve the net zero emissions
target by 2050. For this reason, the installed capacity is growing each year,
with 93 GW installed in 2020 and 88 GW in 2021. Despite the tremendous increase in the penetration of this source in electric power generation, the installed capacity needs to grow by at least 180 GW per year for the world to achieve the emissions target by 2050 [52].
The main drawback of the wind power generation is the intermittence of
wind. Differently from the hydropower, where the potential energy of water
is stored in dams, there is no technical method to store the wind power
directly, and indirect methods must be used, such as reversible power
plants, batteries or other advanced energy storage techniques. For this
reason, it is crucial to predict the winds in wind farms to plan the power
dispatch.
This section describes the hybrid techniques for time series related to
wind power prediction, such as the wind power itself, wind speed, wind
direction, among others. As mentioned previously, the authors identified
several classes of hybrid methods. In this context, the main current
techniques are discussed.
Some reviews were made about wind power prediction. In [53], the
status of hybrid methods for wind forecasting was described. The authors
classified the hybrid methods into four fields: data preprocessing-based
approaches, where some algorithm is used to decompose the time series
into components easier to predict, parameter optimization-based
approaches, where the parameters of a given model are optimized using
some optimization algorithm, and post processing-based approaches, where
the residuals of a first method are analysed using another method. In the
literature review presented, several models were shown, including
statistical, ML and hybrid models. The classes used by the authors of [53]
are similar to the ones identified in this text, unless for parallel model
ensembles, which were not explicitly classified in that work.

3.1 Model Ensemble


As for hydro power prediction, many authors explored model ensemble
methods to predict wind power generation. From the many works existent
in literature, some of the most recent are discussed in this section.
In [54], a meta learning-based hybrid ensemble approach for short-term
wind speed forecasting was proposed. The ensemble prediction model was
divided into two parts: meta-learning and individual predictor. The first part
was based on a NN and the second one consists of three pre-trained
individual predictors which are BP NN, LSTM and GRU respectively. The
proposed model achieved better accuracy, stability and data correlation results when compared to other models such as SVM. The approach also
outperformed the LSTM and NN used alone, demonstrating the advantages
of model ensembles.
The authors in [55] introduced a hybrid neuro-fuzzy bootstrap
prediction system for wind power generation. The bootstrap bagging
technique was used to create smaller datasets from the original dataset.
Each one of the smaller datasets has statistical properties similar to the ones
observed in the original set. Then, a neuro-fuzzy model was trained for each
smaller dataset. To forecast wind power generation, the outputs of each
neuro-fuzzy model are combined by calculating the average of the results,
in a parallel ensemble. The method was compared to a single neuro-fuzzy
model trained with the whole original dataset and the results showed that
the proposed hybrid neuro-fuzzy bootstrap method presented smaller
percentage and average errors.
The authors in [56] proposed a novel ensemble model for long-term
forecasting of wind and hydro power generation. The proposed model was
composed of three phases. In the first phase, a hybrid model combining
ARIMA and Bi-LSTM predictions was developed. The inputs to this model
were the seasonal and trend components of the time series obtained using
STL. The second phase was an ARIMA model with inputs defined by a Diligent Search Algorithm (DSA), used to identify hidden seasonalities of the time series. In the third phase, the outputs of the first two phases were merged to build the final ensemble model. The method presented more accurate results than other ML and statistical methods for both hydro and wind power prediction.
In [57], hybrid serial model ensembles were developed to forecast electric power generation in a small wind turbine. The first model used in the ensemble, referred to as the physical model, outputs the energy production using
as inputs wind speed forecasts generated with a Numerical Weather
Prediction (NWP) model. The second model used as inputs the outputs from
the first one and other exogenous correlated variables. Many strategies were
used to determine the best structure for the second model, involving naive,
naive smoothing, multiple linear regression (LR), k-NN, SVM and MLP.
Parallel ensembles involving those models were also considered, using
different methods to combine each one of the models, such as average,
weighted average, average without extreme forecasts, where the minimum
and the maximum results are ignored, and ANN. As a conclusion, the most
accurate method was a parallel ensemble of three methods, combined using
the average without extreme forecasts. The results of this work corroborate the conclusion that a well-chosen model ensemble can be more accurate than any of its methods used alone.
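The "average without extreme forecasts" combiner used in the best scheme of [57] is easy to state explicitly; in the minimal sketch below, the layout of the forecast matrix (one row per model, one column per time step) is an assumption made for illustration.

```python
# Combine forecasts by discarding the per-step minimum and maximum and
# averaging the remaining ones ("average without extreme forecasts").
import numpy as np

def average_without_extremes(forecasts: np.ndarray) -> np.ndarray:
    """forecasts: array of shape (n_models, n_steps), with n_models >= 3."""
    sorted_forecasts = np.sort(forecasts, axis=0)
    return sorted_forecasts[1:-1].mean(axis=0)  # drop min and max at each step

# Three hypothetical model outputs for the same three-step horizon:
f = np.array([[10.0, 12.0, 9.5],
              [11.0, 15.0, 9.0],
              [10.5, 11.0, 14.0]])
print(average_without_extremes(f))  # -> [10.5 12.  9.5]
```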
In [58], a serial ensemble hybrid model composed of linear and
nonlinear parts was proposed to forecast wind speed. The ensemble EMD
(EEMD) decomposition technique was used to eliminate noise and
reconstruct the series. Then, the ARIMA model captured the linear patterns
hidden in the time series, while the BPNN model, optimized by the Cuckoo
Search Optimization (CSO) algorithm, was used to forecast the residuals.
The proposed model outperformed other tested methods, including ARMA and BPNN used alone, among others.
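The serial structure of [58] (a linear model for the main pattern, a neural network for its residuals, and the sum of the two as the final forecast) can be sketched as below. The EEMD pre-processing and the Cuckoo Search tuning of the original work are omitted, and the ARIMA order, network size and synthetic series are assumptions.

```python
# Serial residual-ensemble sketch: ARIMA captures the linear part, an MLP
# forecasts the ARIMA residuals, and the hybrid forecast is the sum of both.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
t = np.arange(500)
series = 10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, 500)
train, horizon = series[:480], 20

arima_res = ARIMA(train, order=(2, 0, 1)).fit()
residuals = train - arima_res.fittedvalues          # in-sample residuals

# The MLP forecasts the residual from its own recent lags.
lags = 6
Xr = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
yr = residuals[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(Xr, yr)

# Hybrid forecast: linear forecast plus a recursive residual forecast.
linear_fc = np.asarray(arima_res.forecast(steps=horizon))
window = list(residuals[-lags:])
residual_fc = []
for _ in range(horizon):
    nxt = mlp.predict(np.array(window[-lags:]).reshape(1, -1))[0]
    residual_fc.append(nxt)
    window.append(nxt)
hybrid_fc = linear_fc + np.asarray(residual_fc)
print(hybrid_fc[:5])
```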
Eight different hybrid schemes were proposed in [59] to forecast wind
speed. The first step to build the ensembles consisted of input variables
selection, which was made either by auto-correlation analysis or Phase
Space Reconstruction (PSR). Then, the selected inputs were given to a GP
or a SVM algorithm. In some of the schemes, the outputs of these
algorithms were the final results. In other schemes, the residuals from the
first algorithm were fed into a second model, which could be either a GP or
a SVM, and then the final result was the sum of the results of the first and
the second algorithms. These schemes are serial model ensembles. All four
possible combinations between SVM and GP were tested as first and
second models, and the other schemes were obtained by using SVM and GP
alone, with each one of the two input selection algorithms. The most accurate results were provided by the combination of the PSR input selection method, an SVM to model the main series and a GP to model the residuals.
In [60], a hybrid ensemble approach, including statistical and ML
methods, and combining series and parallel ensembles, was proposed. The
first step was to pre-process data with the Kalman filter, in order to obtain a
trend and a residual. The trend and the original data were given as inputs to
an ARIMAX and a MLP model. The residual was treated by fuzzy-
ARIMAX (FARIMAX) and fuzzy MLP. Then, the results of the two
methods used to treat the trend, the two methods used to treat the residual
and the Kalman filter outputs were combined to produce the final forecast.
The hybrid approach outperformed the other models tested by the authors, which basically consisted of parts of the whole structure used alone. In this way, the authors demonstrated the validity of the hybrid approach.

3.2 Parameters Determination Based on Meta-Heuristic Methods


Many works were developed to forecast wind power or related variables
using meta-heuristic methods to determine parameters of a given parametric
model. Recently, the review [61] was published, covering the application of
meta-heuristic algorithms to estimate the optimal parameters of wind power
prediction models. The authors identified three layers. The first one, named auxiliary, is responsible for decomposing the dataset into stationary subseries. The second layer, named forecasting base, consists of the actual forecasting model, which can be either a ML algorithm or a NN, in many of its possible configurations. The third layer, named core, is the meta-heuristic algorithm used to calibrate the parameters of the forecasting model. This framework was identified in
many of the 2195 publications about wind forecasting collected by the
review authors from 2011 to 2020.
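A minimal sketch of the "core" layer of this framework is given below, with SciPy's differential evolution standing in for the wind-specific meta-heuristics surveyed in [61] and an SVR playing the forecasting base; the search bounds, lag structure and synthetic series are assumptions.

```python
# Sketch of the core layer of the framework in [61]: a meta-heuristic (here
# differential evolution) searches the hyper-parameters of the forecasting base
# (here an SVR) by minimizing a cross-validated forecast error.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVR
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
t = np.arange(400)
series = 7 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.4, 400)
lags = 8
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]

def cv_error(params):
    """Cross-validated MSE of an SVR for a candidate (log10 C, log10 gamma)."""
    C, gamma = 10 ** params[0], 10 ** params[1]
    errors = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
        model = SVR(C=C, gamma=gamma).fit(X[train_idx], y[train_idx])
        errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(errors))

result = differential_evolution(cv_error, bounds=[(-1, 3), (-4, 0)], maxiter=10, seed=0)
print("best log10(C), log10(gamma):", result.x, "cv MSE:", result.fun)
```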
In fact, the vast majority of the works related to wind power prediction
that use meta-heuristic methods also use time series decomposition
techniques as a pre-processing step. To maintain the classification adopted
for the other renewable sources discussed in this work, all methods that
include time series decomposition techniques will be discussed in Sect. 3.3.
The only reference found that uses meta-heuristic methods without decomposition for wind speed forecasting is discussed in the sequel.
In [62], some wind speed forecasting techniques were proposed. In that
paper, three hybrid methods were presented. The first combined Wavelet
Neural Network (WNN) with Improved Clonal Selection Algorithm
(ICSA), the second was a combination of WNN and PSO and lastly, the
third model tested was an ELM. Unlike in the other papers discussed, the series was not decomposed. The WNN-ICSA hybrid method obtained the best results in terms of accuracy.

3.3 Time Series Decomposition


One of the classes found in time series forecasting literature is based on
time series decomposition. In this case, the series are decomposed into
components and then, generally, two different approaches are adopted: in the first one, the components are used as different inputs of a single model; in the second one, each component is predicted by an individual model and the final result is a combination of the prediction results for each component. In this section, the works following the first approach are discussed first, followed by the references that adopt the second approach. In many cases, heuristic algorithms are used to calibrate the parameters of the models used. Since the authors understand that the main
feature of those works is the time series decomposition, they are discussed
in this section, and not in the previous one.
In [63], the authors implemented a hybrid method using VMD, Multi-
Kernel Regularized Pseudo Inverse NN (MKRPINN) and a meta-heuristic
algorithm named vaporization and precipitation water cycle algorithm
(VAPWCA). The VMD was used to decompose the non-linear and non-stationary time series into components, which were used as inputs to a single MKRPINN. The MKRPINN parameters were optimized using the VAPWCA. The results outperformed the other models tested, which used EMD instead of VMD and other NNs instead of the MKRPINN.
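The first decomposition-based approach, used for instance in [63], can be sketched with generic pieces: here STL stands in for the VMD of that reference and a ridge regression stands in for the MKRPINN, so the example shows the structure (the components of the series become the inputs of one model) rather than the original algorithms.

```python
# Sketch of the first approach: decompose the series and feed the components
# of each time step to a single forecasting model.
import numpy as np
from statsmodels.tsa.seasonal import STL
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
t = np.arange(600)
wind = 6 + 2 * np.sin(2 * np.pi * t / 24) + 0.01 * t + rng.normal(0, 0.5, 600)

decomposition = STL(wind, period=24).fit()
components = np.column_stack([decomposition.trend,
                              decomposition.seasonal,
                              decomposition.resid])

# One-step-ahead targets: the components at time t predict the series at t+1.
X, y = components[:-1], wind[1:]
model = Ridge().fit(X[:500], y[:500])
rmse = np.sqrt(np.mean((model.predict(X[500:]) - y[500:]) ** 2))
print("test RMSE:", rmse)
```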
The authors of [64] presented wind power forecasting using a new and robust hybrid metaheuristic approach, in a case study covering multiple locations. The paper combined Radial Motion Optimization (RMO) and PSO models. The proposed hybrid model was compared with other existing models in the literature, and the results showed that the hybrid design was more accurate than the other tested models.
In [65], the decomposition method used to split the subseries was the
EEMD, the forecasting model was the LSTM Enhanced Forget Gate
network (LSTM-EFG) and the meta-heuristic algorithm used to calibrate
the parameters was the CSO. The proposed model showed better results in terms of accuracy than statistical, ML and hybrid methods such as ARMA, LSTM, BPNN, EEMD-CSO-SVM and others.
In [66], a combination of WT and LSTM was proposed to forecast wind power. The WT decomposes the non-stationary time series into stationary components, which were then used as inputs to an LSTM model. The hybrid method outperformed traditional methods found in the literature, such as SVR, LSTM, WD-SVR and others.
The authors in [67] proposed a hybrid deep learning architecture for
wind power prediction using as inputs the wind power and the wind speed.
The data pre-processing stage was done using the EEMD. Then, the
components of wind power and wind speed series, and other information
related to wind direction, were processed by a bi-attention mechanism, to
enhance the weights of the most significant inputs. The inputs were used to
train a residual GRU, consisting of the serial association of a residual network and a GRU. Initially, the prediction model was trained using the Adam optimizer, but then a crisscross optimization algorithm (CCSO) was used to retrain the model in order to obtain more accurate results. The proposed hybrid model outperformed, in terms of accuracy and forecast stability, other existing models in the literature, such as the persistence model, EMD-CNN-LSTM, VMD-LSTM-ELM, and others.
The authors in [68] proposed a hybrid model based on maximal wavelet
decomposition (MWD), FCM, LSSVM and Non-dominated Sorting
Genetic Algorithm II (NSGA-II) for short-term wind power forecasting.
The MWD was used to separate the different components of the series.
Then, the components were classified into three groups of similar signals using FCM. Each group was used to train a LSSVM with the
NSGA-II as optimization algorithm. The results for the proposed hybrid
model outperformed other hybrid models tested such as EMD-LSSVM and
WD-LSSVM, which were simple combinations of decomposition methods
and the LSSVM.
In [69], a wind speed multistep forecasting model using a hybrid
decomposition technique to split the time series under study into its
components was proposed. Then, a deep NN (DNN) was trained using the
selfish herd optimizer. The results outperformed the other models tested,
which were combinations of different decomposition methods and DNNs
tuned with other meta-heuristic optimization algorithms. In addition, the
proposed model was shown to be suitable for multi-step wind speed prediction.
The authors in [70] implemented a short-term wind power forecasting
method using the Improved Variational Mode Decomposition (IVMD) to
decompose the time series and Correntropy LSTM as forecasting model.
The proposed model was able to decompose the original series data,
reconstruct the subseries and make wind power prediction. Differently from
other methods, the LSTM parameters were optimized using non-linear
analytic optimization techniques to minimize a criterion based on
Correntropy loss, and not on MSE. This gives proper treatment to outliers, which are usually present in wind speed time series. The results outperformed
other traditional hybrid methods found in the literature.
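The Correntropy criterion mentioned above bounds the influence of large errors through a Gaussian kernel, which is what makes it robust to outliers; a minimal illustration follows, where the kernel width sigma is an arbitrary assumption.

```python
# The maximum-correntropy criterion replaces the squared error by a Gaussian
# kernel of the error, so a gross outlier contributes a bounded amount.
import numpy as np

def correntropy_loss(errors, sigma=1.0):
    return float(np.mean(1.0 - np.exp(-errors ** 2 / (2 * sigma ** 2))))

errors = np.array([0.1, -0.2, 0.15, 8.0])             # one gross outlier
print("MSE:", float(np.mean(errors ** 2)))            # dominated by the outlier
print("correntropy loss:", correntropy_loss(errors))  # outlier influence is bounded
```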
The authors of [71] presented a hybrid model composed of complete
EEMD with adaptive noise (CEEMDAN), Local Mean Decomposition
(LMD), Hurst and BP NN. The hybrid model can decompose the wind
speed time series through the CEEMDAN technique. The components obtained were then submitted to Hurst analysis in order to be grouped into micro-, meso- and macro-scale series. Finally, these series were given to the BP NN prediction algorithm. The results obtained showed that the proposed model
had better accuracy compared to other hybrid forecasting methods such as
EEMD-Empirical WT (EWT)-BP, CEEMDAN-BP, CEEMDAN-LMD-BP
and others.
The authors in [72] proposed a model based on multivariate data
secondary decomposition and deep learning algorithm with an attention
mechanism. The SSA technique was applied in order to reduce the noise of
the original multivariate series. The multivariate EMD (MEMD) was
applied in order to decompose the series without noise. The proposed
hybrid model combined CNN and Bi-LSTM to extract spatiotemporal
correlation features from the subseries resulting from the MEMD. The results showed that the proposed model outperformed other models in precision and effectiveness.
In [73], a model to forecast wind power output using a hybrid
neuroevolutionary method was proposed. The proposed method consists of
three steps. In step one, the k-means model and an autoencoder are used for
noise detection and filtering. In the second step, the VMD model and two
heuristics called Nelder-Mead greedy search algorithm (GNM) and adaptive
random local search (ARLS) are used to decompose the time series data. In
the third step, a self-adaptive differential evolution (SaDE) algorithm was
used to tune the parameters of a LSTM. The prediction results for the
proposed hybrid model outperformed other hybrid models found in the
literature, such as Bi-LSTM, DE-LSTM and others.
The authors in [74] proposed a method for one-day ahead wind speed
forecasting. In that paper, a hybrid model for wind speed prediction was
presented, consisting of an Adaptive GWO (AGWO), SSA and the hybrid
Encoder-Decoder-Convolutional-Neural-Network-GRU Model (ED-
CNNGRU). The GWO was used to tune the metaparameters of the SSA to
split the series into its different components. Then, the components were normalized and passed through the encoder-decoder network, which gave the final results after a denormalization step. The proposed model
outperformed other tested models, which were a simple CNN, a simple
GRU and the CNNGRU, without the ED part.
Differently from the works discussed above, in some cases a different
model is trained for each component of the time series and then the results
are combined. The papers where this kind of framework is adopted are
discussed below:
In [75], a multi-step wind speed forecasting method based on a hybrid decomposition technique and an improved BP NN was introduced. In that paper, the hybrid model relied on a decomposition combining CEEMDAN and EWT. Then, each component was predicted by a BP-NN
with parameters determined with the Flower-Pollination Algorithm. The
results obtained outperformed individual ML methods and other hybrid
methods existing in the literature such as ELM, EEMD-GA–BP and others.
In [76], a wind speed forecasting method based on WT and a Recurrent WNN (RWNN) was proposed. In that paper, the proposed hybrid model was
developed in two phases: in the first phase, the WT technique was used to
decompose the wind speed data, and in the second phase, a RWNN was
trained for each one of the subseries resultant from the first phase. The
proposed model outperformed the conventional RNN model in accuracy.
A decomposition algorithm is one of the key features of [77]. In that
paper, a new hybrid model for wind speed forecasting combining LSTM,
decomposition methods and GWO was proposed. The dataset used in that
work presented some missing data due to sensor malfunctions. For this reason, the first step of the algorithm was to fill the missing data using the
Weighted Moving Average method. Then, the same technique was used to
smooth the data, which was normalized considering its mean and standard
deviation. In the following stage, the time series was decomposed using the
Improved Complementary Ensemble Empirical Mode Decomposition with
Adaptive Noise (ICEEMDAN) method. Each component was fed to a
different LSTM and the results were combined using a moving average
with weights determined by GWO. The method outperformed in accuracy
other individual and hybrid methods.
In [78], several decomposition techniques such as WT, EMD, Empirical
Set Mode Decomposition (ESMD) and EWT were used in order to
decompose the time series into high and low frequency signals and also for
noise reduction. A LSTM model was used to predict each component of the
series and then the results of each LSTM were summed to reconstruct the
forecast for the original time series. The proposed hybrid model with skip
connections showed better accuracy and stability compared to other
individual and hybrid models tested.
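The second approach, shared by the works discussed in this group, can be sketched as below: decompose, fit one model per component, and sum the one-step-ahead component forecasts. STL and small MLPs are stand-ins for the ICEEMDAN/EWT decompositions and per-component LSTMs of the cited works, and the synthetic series is an assumption.

```python
# Sketch of the second approach: one model per component, forecasts summed.
import numpy as np
from statsmodels.tsa.seasonal import STL
from sklearn.neural_network import MLPRegressor

def lagged(series, lags=6):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

rng = np.random.default_rng(5)
t = np.arange(600)
wind = 6 + 2 * np.sin(2 * np.pi * t / 24) + 0.01 * t + rng.normal(0, 0.5, 600)

history = wind[:-1]                          # keep the last point to check the forecast
decomposition = STL(history, period=24).fit()
components = (np.asarray(decomposition.trend),
              np.asarray(decomposition.seasonal),
              np.asarray(decomposition.resid))

forecast = 0.0
for comp in components:
    Xc, yc = lagged(comp)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(Xc, yc)
    forecast += model.predict(comp[-6:].reshape(1, -1))[0]   # one step ahead per component

print("hybrid one-step forecast:", forecast, "actual:", wind[-1])
```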
In [79], a hybrid model was introduced, which was based on combining
the discrete WT (DWT) with ANN for wind speed prediction. The DWT
filter was used to pre-process the time series data in order to improve the
prediction accuracy. Then, an ANN was trained to predict each component
of the series, and the final forecast was the combination of the ones
obtained for each component. A comparison was made with popular state-of-the-art wavelet-based algorithms and it was demonstrated that the proposed model yields better prediction results.
The authors in [80] also proposed a hybrid ML model for short-term
wind speed prediction using a similar framework. The first step was to
decompose the wind speed time series into several subseries using the fast
EEMD (FEEMD) and PSR. For each one of the subseries, an improved
whale optimization algorithm (WOA) was used in order to calibrate the
parameters of an ELM. Finally, the overall prediction was obtained by combining the predictions for each subseries. The proposed model presented the
advantage of capturing nonlinear characteristics of the time series and
outperformed in terms of accuracy other hybrid methods found in the
literature.
Following the same framework, a parametric model and an optimization
algorithm were adopted in [81] for short-term wind speed forecasting. In
that paper, the time series decomposition was done using VMD, and each
component was an input to a Kernel ELM (KELM), which had its weights
calibrated using an improved Seagull Optimization Algorithm (SOA).
The authors in [82] proposed a method for short-term wind power
forecasting. The prediction consists of three steps: wind direction
prediction, wind speed prediction and wind power prediction. For each one
of the steps, the algorithm detects outliers, decomposes the time series using
the WT technique, normalizes the time series components and predicts the
decomposed time series using the MLP algorithm. The inputs used in MLP
algorithms include the time series components and other variables obtained
from NWP models. Then, to reduce the number of inputs into the MLP, the
NSGA-II was used to select the most relevant features. The proposed
method outperformed other hybrid models tested.
Although the vast majority of wind speed or power forecasting works are based on NN models, a few references use other prediction models, such as ANFIS or statistical methods, combined with time series decomposition. Those methods are discussed in the sequel.
In [83], a wind speed forecasting method based on SSA and ANFIS was
presented. A hybrid model named SSA-ANFIS- FCM was proposed for
wind speed prediction. The SSA was used to decompose the time series into
periodic subseries. Then, the ANFIS model was used for wind speed
prediction. The results showed that the proposed hybrid model obtained significantly reduced forecast errors compared to other models for one-step-ahead prediction of 10-min wind speed series.
The same first author of the previous work and other colleagues
presented in [84] a new approach combining two decomposition techniques
for wind speed time series decomposition: the VMD and the SSA. The decomposition techniques were combined with ARIMA models, trained
to forecast each component of the series. The proposed hybrid model was
compared to pure ARIMA and presented better accuracy, precision, and
stability results.
The authors in [85] introduced a hybrid approach based on DWT to
forecast wind speed. This paper used physical, statistical, and artificial
intelligence models for wind power prediction. The hybrid models proposed
in that paper combined the time series decomposition technique, using the
DWT, with statistical models, such as the ARIMA and Generalized Autoregressive Score (GAS) models. The proposed hybrid model provided better results in accuracy and complexity, outperforming existing statistical models in most cases.
In [86], a novel hybrid model based on a Bernstein polynomial with a mixture of Gaussians was proposed for wind power forecasting. First, the EMD technique was used to decompose the time series and then the hybrid Bernstein polynomial with Gaussian mixture model was constructed. In
order to optimize the parameters of the hybrid model, a multi-objective
state transition algorithm was used. The results for the proposed hybrid
model outperformed other tested hybrid models in accuracy and stability.

3.4 Other Hybrid Methods


Other hybrid methods include the combinations of statistical and ML
techniques. Differently from the references discussed in other sections,
neither decomposition techniques nor meta-heuristic methods were used in
those papers to forecast variables related to wind power generation.
The authors in [87] proposed an approach to forecast wind power using deep learning with the TensorFlow framework and PCA. The proposed model was designed to capture hidden patterns in the wind data, enhancing the wind power prediction performance. The PCA was used to extract and select the most significant features for the model. For wind power prediction, a deep learning model implemented in the TensorFlow framework was trained using the most significant input data. The proposed model outperformed other traditional methods found in the literature, such as BPNN, SVM, CNN and others. Neither time series decomposition nor training with meta-heuristic algorithms was adopted.
In [88], a hybrid nonlinear forecasting method was proposed for short-term wind speed prediction. The method combined a Gaussian process and an unscented Kalman Filter (UKF). The Gaussian process model was considered as a
nonlinear transition function of a state-space model that had its states
estimated using the UKF. The proposed hybrid model outperformed other
tested models, such as persistence model, AR model, Gaussian process
alone and some combinations of those methods, demonstrating the
advantages of the hybridization adopted.
The authors in [89] implemented a short-term wind power prediction of
wind farms based on the LSTM-NARX neural network. The LSTM was
used to predict the wind speed based on past meteorological information.
Then, the wind speed prediction was given as input to a NARX model to
forecast the wind power. The results showed that the proposed hybrid model outperformed the other tested methods, such as NARX and WAVELET-BP.
In [90], an adaptive deep learning scheme was proposed to forecast
wind speed. The idea was to use a serial scheme to first model the data and then model the residual. To model the data, a linear approach was adopted and a search method was proposed to determine the optimal set of inputs. To model the residual, the same idea was used with a non-linear method: the LSTM. Results showed that the proposed model outperformed
other tested models such as statistical, ML and hybrid models.
In [91], a comparison was performed between a physical model, a NN
and a hybrid model including both methods to forecast the wind power. The
physical model adopted consists of the turbine power curve, which receives
as input the wind speed predicted by a NWP model. The NN used many
meteorological data as inputs and its outputs were the predicted wind
power. The hybrid method consisted of a NN with the same inputs as the
first one, plus the power predicted by the physical method. Comparisons
were also made with other simpler methods, such as persistence and naive
ones. As a result, the hybrid method presented better performance in almost
all metrics used to perform the comparison.

4 Hybrid Techniques for Solar Power Prediction


Solar power is a fundamental source to achieve net-zero emissions. For this reason, the global installed capacity is growing each year, with 621 GW installed in 2019 and about 760 GW in 2020 [92].
Forecasting the electric power production from this renewable source is a challenging task, because the associated atmospheric phenomena give solar power generation a probabilistic nature. In this context, research on solar radiation forecasting currently attracts many scholars and managers, and several models are being developed to solve this problem. These models can be classified as physical, statistical and artificial intelligence (AI) models. More recently, hybrid methods have gained attention in the solar forecasting community.
One of the first reviews about hybrid techniques to forecast solar
radiation was presented recently in [93]. The authors identified six categories of hybrid methods: the first is similar to the parallel model ensemble category described in this work, the second is based on similarity, the following two are based on decomposition, the fifth is based on evolutionary algorithms and the last is based on residual learning, which is understood in this work as a serial ensemble method. Although the review is relatively recent, many papers have been published since it was made, and in this section recent hybrid techniques for solar prediction are discussed, following the categories introduced in Sect. 1.

4.1 Model Ensemble


Few authors explored model ensemble methods to predict solar power
generation. This is possibly due to the fact that solar power prediction is a more recent problem, and more advanced techniques were already available when these studies started.
The only recent reference found that can be classified as a model
ensemble method for solar power prediction is [94]. In that reference, a hybrid model combining SARIMA and LSTM through a stacking technique was implemented. Thus, it was possible to create a prediction model combining
the advantages of different prediction models. Furthermore, in that paper,
numerical text data were combined using time series and satellite images as
exogenous variables in order to extract the spatial and temporal features of
solar power generation. Results showed that the proposed model
outperformed in terms of accuracy and precision single models such as
LSTM, RF and SVR, demonstrating that ensemble models can achieve
better performance than individual models.

4.2 Parameters Determination Based on Meta-Heuristic Methods


Meta-heuristic optimization methods were also applied to forecast solar
power or related variables. For example, in a recent review, almost one
hundred references were found regarding the use of meta-heuristic methods
to optimize SVM model parameters to predict solar radiation [95]. Other recent applications, involving both SVM and other classes of models, are presented in this section.
In [96], a hybrid method was proposed for short-term photovoltaic
power forecasting. The method combined GA and SVM and consisted of
two techniques: classification and optimization. The SVM classified
historical weather data. Then, GA was used to optimize the SVM.
Moreover, the GA was used again to define the weight/cost matrix, which allowed a more accurate fit of the validation data. Results showed that the proposed hybrid model outperformed a simple SVM model.
In [97], a method combining Salp Swarm Algorithm (SSWA), RNN and
LSTM was proposed to forecast solar power. SSWA was used to optimize
the LSTM model. The input variables considered in the work were solar
radiation, ambient temperature, module temperature and wind speed,
whereas the model output was the power of each photovoltaic (PV) system.
The proposed hybrid model outperformed other hybrid models such as
PSO-RNN-LSTM, RNN-LSTM and GA-RNN-LSTM in terms of accuracy
and robustness.
The authors in [98] implemented a short-term global solar radiation
prediction based on LSTM and GP. GP was used to perform post-processing
combining the outputs of the LSTM model to find the best prediction of
global solar radiation. The performance of the proposed approach was
compared to the stacking technique. Results showed that the proposed
model outperformed, in terms of performance and consistency, other hybrid
methods using the stacking technique, such as LSTM-KNN, LSTM-SVR,
LSTM-MLP and LSTM-RF, demonstrating the advantages of the meta-
heuristic applied.
In [99], a deep learning scheme was proposed for short-term solar
irradiance prediction. The idea was to use GA to optimize the LSTM, GRU
and RNN models. Moreover, GA was used to find the most suitable meta-
parameters, such as window size and number of neurons in each hidden
layer. In order to pre-process the input data, the normalization technique
was used. Finally, the performance of solar irradiation prediction was compared among the three NNs mentioned above. Results showed that the GRU-GA combination resulted in the most accurate model.
In [100], a hybrid model combining SOM, SVM and PSO was
implemented for solar irradiance prediction. First, SOM was used to divide
the input space into several disjoint regions. Then, the SVR was applied to model each disjoint region in order to identify its characteristic correlation. Finally, the PSO was used to perform the selection of parameters in the SVR modeling. Results showed that the proposed approach outperformed other models used alone, such as ARIMA, LES, SES and RW, demonstrating that parameter optimization using meta-heuristics results in more accurate models.
The authors in [101] implemented a solar radiation prediction based on
RF and PSO. The inputs were several meteorological factors, such as
temperature, humidity, wind speed and others. In order to obtain the optimal
performance of the RF model, it was necessary to determine the optimal parameter values, and to achieve this, the PSO technique was used. Results
showed that the proposed method outperformed other methods alone, such
as RF, MLP and DT.
In [102], three hybrid models combining PSO, GA and DE with ANFIS
were implemented for monthly global solar radiation prediction. The
sunshine duration, temperature and clearness index were considered as
input variables. Results showed that the hybrid model combining ANFIS and PSO presented greater accuracy and reliability compared to other hybrid models, such as ANFIS-GA, ANFIS-DE and SVR-RBF. Moreover, it
also outperformed SVR, ANFIS and KELM models alone, demonstrating
once again the advantages of hybrid algorithms.
The authors in [103] implemented a daily global solar radiation
prediction based on Coral Reefs Optimization (CRO) and ELM. A
combination of CRO and ELM was used for feature selection. Then another
ELM was trained as a prediction mechanism. In other words, the CRO-ELM was applied to select the best feature set for daily global solar radiation prediction, whereas the second ELM was trained to produce the final prediction. Results showed that the proposed method outperformed
other possible hybrid models following the same framework, such as CRO-
ELM-MLR and CRO-ELM-SVR. The proposed approach also
outperformed the multivariate adaptive regression splines (MARS),
multiple linear regression (MLR) and SVR models alone.

4.3 Time Series Decomposition


The frameworks for solar power forecasting involving time series
decomposition techniques are discussed in this section. In a similar way to what has been done for the other sources, references where the different components are fed to a single model are discussed first, followed by the references where each component is predicted by an individual model and the final result is obtained by combining the component forecasts.
In [104], the authors implemented a hybrid model using WT, PSO and
SVM to forecast PV power based on meteorological information, such as
solar radiation, atmospheric pressure, humidity and wind speed. In order to
decompose the meteorological variables into a set of subseries, the WT
technique was used. Then, subseries of the input variables were used to
train the SVM. The PSO was applied to optimize SVM parameters in order
to predict each solar power subseries. Finally, the inverse WT was applied
to reconstruct the solar power prediction. Results showed that the proposed
method was more accurate than other hybrid models such as PSO-NN, GA-
SVM and PSO-SVM. The proposed approach also outperformed SVM
alone, demonstrating the advantages of using hybrid methods based on
decomposition.
In [105], the authors implemented a hybrid method using CEEMDAN,
CNN and LSTM. The CEEMDAN was used to decompose the original time
series into components. The CNN-LSTM framework was capable of
extracting spatial and temporal features respectively. Moreover, it was used
to predict solar radiation one hour ahead. The proposed hybrid model
outperformed other hybrid models such as CEEMDAN-LSTM,
CEEMDAN-SVM, CEEMDAN-BPNN and CEEMDAN-ARIMA in terms
of accuracy. The approach also outperformed LSTM, ARIMA, SVM and
BP used alone, demonstrating the advantages of hybrid models.
In [106], a hybrid method was proposed to forecast hourly solar
irradiance. The model combined WPD, CNN, LSTM and MLP. First, WPD
was used to decompose the original time series. The decomposed time
series was processed by the CNN model. The outputs of the CNN models
were inserted as input to the LSTM model. The LSTM outputs were
concatenated into a fully connected layer. The weather variables, along with
the LSTM outputs, were used as input to the MLP model. The final
prediction value was the output of the MLP model. The proposed model was advantageous in terms of accuracy when compared to other hybrid methods
such as RNN-MLP, BP-MLP, LSTM-MLP, CNN-LSTM-MLP and WPD-
CNN-LSTM. The proposed approach also outperformed BPNN, SVM,
RNN and LSTM models alone.
The authors in [107] implemented a solar irradiance prediction along a
navigation route based on EEMD and Self Organizing Map-Back
Propagation (SOM-BP). First, the EEMD technique was used to decompose
the original time series into several subsequences with various frequency
bands and also to extract the data characteristics. In order to train different
networks, the subsequences obtained were used as input to the SOM model
and their outputs were used as input to the BP model. The final solar
radiation prediction was the sum of the outputs of all sub SOM-BP
networks. The proposed model outperformed, in terms of accuracy, other individual methods such as RBF and BP.
In [108], a hybrid model combining PCA, Discrete Fourier Transform
(DFT) and ERNN was proposed for day-ahead solar prediction. DFT was
used to extract frequency features from historical solar irradiance data. The
PCA was used to identify the most relevant frequency features to be considered in the NN model that carries out the solar radiation forecasting. The
proposed method outperformed other hybrid models such as DFT-PCA-BP
and PCA-BP. The proposed approach also outperformed the ARIMA and
Persistence models alone.
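The frequency-feature idea of [108] can be sketched generically: compute DFT magnitudes over a sliding window of past irradiance, compress them with PCA and feed them to a regressor. In the sketch below, the MLP, the window length and the synthetic irradiance series are assumptions standing in for the Elman RNN and the real data of the original work.

```python
# DFT magnitudes of a sliding window as features, compressed by PCA,
# then fed to a single regressor for one-step-ahead irradiance prediction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
t = np.arange(2000)
irradiance = np.clip(600 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 30, 2000), 0, None)

window = 96                                      # assumed one day of 15-min samples
X = np.array([np.abs(np.fft.rfft(irradiance[i:i + window]))
              for i in range(len(irradiance) - window)])
y = irradiance[window:]

pca = PCA(n_components=10).fit(X[:1800])         # fit the reduction on the training part only
X_reduced = pca.transform(X)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_reduced[:1800], y[:1800])
rmse = np.sqrt(np.mean((model.predict(X_reduced[1800:]) - y[1800:]) ** 2))
print("test RMSE:", rmse)
```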
In [109], a hybrid method was proposed for short-term PV power
prediction. The model combined Bayesian Ridge Regression (BRR),
Continuous Wavelet Transform (CWT) and Gradient boosting DT with
categorical features (Catboost). BRR was used to select the most important
features. CWT was applied to convert the chosen features into a time-
frequency domain. Catboost was used for day-ahead PV prediction. Inverse
CWT was applied to obtain the final prediction values. The hybrid model presented a level of reliability that supports network energy compensation and preventive maintenance planning.
The authors in [110] proposed an approach for short-term solar
generation prediction using WTP, Generative Adversarial Networks (GAN)
and Dragonfly Algorithm (DA). WTP was used to decompose the series
into subharmonics. GAN was applied to predict solar power generation. DA
was used to train the GAN model in order to improve the prediction. The
proposed method outperformed other individual methods such as ARMA,
ANN, CNN, GAN, SVR, RNN and Fuzzy.
The authors in [111] implemented an hour-ahead PV power prediction
based on the component extraction method, GRU and scenario generation
algorithm. The component extraction method was used to identify PV
power time series patterns. GRU was trained based on the detection of the
daily fluctuating patterns of the PV power generation. The scenario
algorithm was applied to predict the linear trend data for each GRU. Linear
and non-linear parts of the data were inserted into the GRU for PV power
generation prediction. The proposed method was more accurate than
multiple and single GRU models.
Differently from the approaches discussed before, where a single model
is trained considering each sub-series as an input, other authors proposed
methods where each sub-series is modelled by an individual model. The
references following this strategy are discussed in the following paragraphs.
The authors in [112] implemented four hybrid models combining
Wavelet Multiresolution Analysis (WMA)-MLP, WMA-ANFIS, WMA-
NARX and WMA-GRNN for modelling solar radiation. The DWT
technique was applied to decompose the weather signals. The decomposed series were then modelled by ANN methods and reconstructed to estimate the original signal. In order to model the Global Horizontal Irradiance (GHI), four meteorological variables were considered: temperature, humidity, wind speed and sunshine duration. Results showed that the hybrid model combining WMA and GRNN outperformed, in terms of accuracy, the other hybrid models mentioned above. Moreover, this approach also outperformed the ANFIS, NARX, MLP and GRNN models used alone.
In [113], a hybrid prediction method was proposed for short-term PV
power. The model combined SARIMA, Random Vector Functional Link
(RVFL) and Maximum Overlap Discrete Wavelet Transform (MODWT).
The MODWT was used to decompose the time series. The SARIMA and
RVFL models were used to predict each component of the original time
series. The results from SARIMA and RVFL were linearly combined using
the convex combination method in order to improve the prediction for each
component, in a kind of model ensemble technique. The final forecasting
was the sum of the decomposed forecasts. Results showed that the proposed hybrid model outperformed hybrid algorithms where only one of the forecasting methods was used to predict each component (MODWT-SARIMA and MODWT-RVFL) and other simpler algorithms, such as
Persistence, SARIMA, RVFL and SVR models alone, demonstrating the
advantages of hybrid models combined with time series decomposition.
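The convex combination used in [113] amounts to a single weight per component, chosen so that the weighted forecast minimizes the error on held-out data; a minimal sketch follows, where the grid search over the weight and the hypothetical validation forecasts are assumptions.

```python
# Convex combination of two component forecasts: pick the weight w in [0, 1]
# that minimizes validation error, then apply the same weight out of sample.
import numpy as np

def best_convex_weight(f1_val, f2_val, y_val):
    grid = np.linspace(0.0, 1.0, 101)
    errors = [np.mean((w * f1_val + (1 - w) * f2_val - y_val) ** 2) for w in grid]
    return grid[int(np.argmin(errors))]

# Hypothetical validation forecasts of the two models and the observed values:
f1_val = np.array([5.0, 6.1, 7.2, 6.8])
f2_val = np.array([5.4, 5.9, 7.6, 7.1])
y_val = np.array([5.2, 6.0, 7.4, 7.0])
w = best_convex_weight(f1_val, f2_val, y_val)
print("weight on the first model:", w)
```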
The authors in [114] proposed an approach for short-term PV power
using Wavelet Packet Decomposition (WPD) and LSTM. The WPD
technique was used to decompose the PV power time series. A linear weighting method was applied to the decomposed series in order to improve the prediction results. Then, to predict each one of the components, a LSTM was trained considering weather data as inputs. The final forecasting value was obtained by combining the results from each LSTM. The method was
more accurate than LSTM, GRU, RNN and MLP models alone. The
proposed approach also outperformed other hybrid methods.
In [115] a hybrid model combining MODWT and LSTM was proposed
for PV power prediction. First, MODWT was used to decompose the
original historical data into components. LSTM was applied to forecast
each component of the PV power time series. The final value of the
prediction was the weighted contribution of each LSTM. Results showed
that the proposed hybrid model presented more accurate results than a
DWT-LSTM hybrid model, which is an algorithm composed of a time series decomposition using DWT followed by a prediction step using LSTM. The proposed approach also outperformed LSTM, RNN, GRU and
neuro-fuzzy models alone, demonstrating that the decomposition was useful
to improve the prediction results.
Differently from the works above, where the same model is used for
each component, the authors in [116] implemented a hybrid model using
VMD, Deep Belief Network (DBN) and ARMA for solar power prediction.
First, VMD was used to decompose the original historical data into
components with different frequencies. DBN was applied to predict high-
frequency components, whereas ARMA was used to predict low-frequency
components. The proposed method outperformed other hybrid methods
using the same structure such as EMD-ARMA-DBN, EEMD-ARMA-DBN,
DWT-RNN-LSTM. The proposed approach also outperformed ARMA,
DBN and RNN models alone.

4.4 Other Hybrid Methods


Other hybrid methods include the combinations of deterministic, statistical,
ML and clustering techniques for solar irradiation or solar power
forecasting. The frameworks proposed in those methods are significantly
different from the other methods studied, and for this reason, are not
described in previous sections.
The authors in [117] proposed an approach for solar and wind power
prediction using post-processing techniques and principal component
analysis (PCA). The basic data consists of solar irradiance and wind power
measured by arrays of sensors scattered along large areas and predicted
using NWP models. To reduce the amount of input data, the PCA technique
was used. Then, the NN and Analog Ensemble (AnEn) were used as post-
processing techniques. The first one provides deterministic forecasts and,
the second, probabilistic predictions. The results obtained showed that combining PCA with the post-processing techniques outperformed applying NN and AnEn directly to all prediction data, that is, without dimensionality reduction using PCA. Moreover, the proposed method reduces the computational cost and the prediction error, demonstrating the advantages of using data reduction techniques on large datasets.
Another paper where signals from multiple solar plants were considered
was [118]. In that work, a hybrid model combining residual network
(Resnet) and LSTM was implemented for short-term solar irradiance prediction for twelve neighbouring solar plants in the same US state. The
method was compared to another hybrid method developed with a
combination of Resnet and MLP and presented superior accuracy. The
method also outperformed CNN, LSTM, and Resnet models alone,
demonstrating the advantages of the hybrid approach.
Another combination of NNs to forecast solar power was presented in [119]. In that paper, a hybrid model combining attention-based
long-term and short-term temporal neural network prediction model
(ALSM), CNN, LSTM and multiple relevant and target variables prediction
pattern (MRTPP) was proposed to predict hourly PV power. Results showed
that the proposed method outperformed a CNN-LSTM hybrid method and
simpler methods such as ARMA, ARIMA and LSTM alone.
The authors in [120] implemented a solar irradiance prediction based on
satellite image analysis and a hybrid model combining exponential
smoothing state space (ESSS) and ANN. The self-organized maps (SOM)
technique was used to classify and detect the cloud cover index and ESSS
was applied to predict the cloud cover index. Finally, the MLP model was
used for solar irradiation prediction. Results showed that the proposed method outperformed, in terms of accuracy, other individual methods such as ARIMA, Linear Exponential Smoothing (LES),
Simple Exponential Smoothing (SES) and Random Walk (RW).
The authors in [121] proposed a hybrid model using ARIMA and ANN
for daily global solar radiation prediction. ARIMA was used to evaluate
linear aspects. ANN was applied to model the residuals of the ARIMA
model. Results showed superior accuracy when compared with ANN and
ARIMA models alone, as expected. The approach was similar to the one
used in [122] years before.
The authors in [123] proposed a hybrid model combining LSTM and
Gaussian Process Regression (GPR) for short-term solar power prediction.
LSTM was used for point solar prediction whereas the GPR method was
used to estimate the confidence levels of the estimates. This was one of the
few articles identified where the authors propose probabilistic forecasting for solar power.
Differently from what was found for hydro and wind power prediction, many similarity-based methods were developed for solar power prediction. Some examples are detailed in the sequel.
The authors in [124] implemented an hourly solar radiation prediction
based on the Mycielski and Markov model. The Mycielski based model
groups the solar radiation data in a matrix and then finds the submatrix
patterns most similar to the last recorded value. Then, a Markov model was applied in order to reflect the probabilistic relationships of the data. Results showed that the proposed model outperformed other models such as ARIMA, ANN and a hybrid method combining the cloud cover index and ARIMA.
In [125], a hybrid method was proposed for short-term power
prediction. The model combined K-means, Gray Relational Analysis (GRA)
and ERNN. In that work, weather variables and historical datasets were
considered. K-means was used to group the similar meteorological factors.
GRA was used to obtain the past day most similar to the day to be
forecasted. Then, ERNN was applied to make the predictions within each group of days. The proposed method outperformed other hybrid methods such as GRA-BPNN, GRA-RBFNN, GRA-LSSVM and GRA-ERNN. None of those methods used the clustering strategy adopted in the proposed hybrid method, demonstrating the advantages of similarity techniques.
The approach also outperformed the LSSVM, RBFNN, BPNN and ERNN
models alone.
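The cluster-then-forecast structure of [125] can be sketched with generic pieces: group the days by their weather features, train one regressor per group, and forecast a new day with the model of its group. K-means and a linear regression are stand-ins for the GRA similar-day selection and the ERNN of the original work, and the synthetic data are assumptions.

```python
# Cluster-then-forecast sketch: K-means groups similar days, one regressor is
# trained per cluster, and a new day is predicted by the model of its cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
weather = rng.uniform(0, 1, size=(300, 3))            # e.g. temperature, humidity, cloud index
power = 5 * weather[:, 0] - 3 * weather[:, 2] + rng.normal(0, 0.2, 300)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(weather)
models = {
    c: LinearRegression().fit(weather[kmeans.labels_ == c], power[kmeans.labels_ == c])
    for c in range(3)
}

new_day = rng.uniform(0, 1, size=(1, 3))
cluster = int(kmeans.predict(new_day)[0])
print("forecast for the new day:", models[cluster].predict(new_day)[0])
```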
The authors in [126] implemented an hourly solar radiation prediction
based on Deep Time-series Clustering (DTC) and Feature Attention based
Deep Forecasting (FADF). DTC was used to group time series into similar
patterns in the same cluster. FADF was applied to predict hourly solar
radiation for each cluster formed by the DTC model. Results showed that the proposed method was more accurate than other hybrid
models such as FADF–k-means, FADF–FCM and FADF–Gaussian Mixture
Model (GMM). The proposed approach also outperformed the FADF model
alone.

5 Conclusion and Future Perspectives


The many references about renewable energy forecasting in the last few
years demonstrate that this topic is important and relevant. With the
increase of penetration of intermittent power sources in national grids, and
the increase in electrical power demand due to current trends, such as
electric cars, it is possible that the number of researchers dedicated to this theme will keep growing each year.
Despite the recent advances made by many researchers, this review, covering three different renewable power sources, demonstrates that some trends observed in forecasting one power source are not commonly used for the other ones. For example, although some hybrid methods containing
similarity techniques are found for solar power, almost no reference for
hydro or wind power can be found using this kind of technique. Another
interesting observation is that most of the references for wind power
prediction use decomposition methods, whereas it is not a common practice
for solar or hydropower prediction. Thus, it seems that the different
renewable sources forecasting communities would benefit from knowledge
exchange.
Furthermore, some questions remain open. Generally, the authors that
propose hybrid methods compare the results of their proposals to parts of
the hybrid scheme developed. Sometimes, the complexity of hybrid
methods results in marginal accuracy gains, and the authors claim that the
proposed hybrid method is better, considering just the final result and not
the method complexity. Just as the AIC and BIC criteria exist for statistical methods, a generalized complexity-penalizing criterion could be proposed for hybrid methods.
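For reference, the classical criteria penalize complexity explicitly: AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L, where k is the number of estimated parameters, n the sample size and L the maximized likelihood. An analogous criterion for hybrid schemes would also have to account for the number of trained sub-models and tuned hyper-parameters.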
Another open question is the prevalence of point prediction instead of
probabilistic predictions. While most of the techniques provide only the average of the predicted series, it is essential to know the confidence level of the predictions to solve many planning problems. State-space statistical models can provide probabilistic predictions, but this ability is generally lost in ML
methods. An important research direction is to answer how ML methods
can make probabilistic predictions.
In conclusion, even with the vast quantity of works related to renewable energy prediction, there is still much work that the community can develop on this theme.

References
1. Box GEP, Jenkins GM, Reinsel GC (2008) Time series analysis forecasting and control, 4th
edn. Wiley
2.
Brockwell P, Davis R (2016) Introduction to time series and forecasting. Springer texts in
statistics. Springer International Publishing
3.
Kendall M, Ord JK (1990) Time series, 3rd ed. Edward Arnold
4.
Durbin J, Koopman S (2012) Time series analysis by state space methods, 2nd edn. Oxford
University Press, Oxford Statistical Science Series
5.
Harvey AC (2009) Forecasting. Structural time series models & the Kalman filter. Cambridge
University Press
6.
Kubat M (2015) An introduction to machine learning. Springer
7.
Hastie T (2009) The elements of statistical learning: data mining, inference, and prediction.
Springer, New York
8.
Makridakis S, Spiliotis E, Assimakopoulos V (2018) Statistical and machine learning
forecasting methods: concerns and ways forward. PLoS One 13(3):e0194889
9.
Makridakis S, Spiliotis E, Assimakopoulos V (2018) Statistical and machine learning
forecasting methods: concerns and ways forward. PLoS ONE 13(3):e0194889
10.
Makridakis S, Spiliotis E, Assimakopoulos V (2018) The M4 competition: results, findings,
conclusion and way forward. Int J Forecast 34(4):802–808
11.
Makridakis S, Spiliotis E, Assimakopoulos V (2021) The M5 accuracy competition: results,
findings and conclusions
12.
Makridakis S, Spiliotis E, Assimakopoulos V, Chen Z, Gaba A, Tsetlin I, Winkler RL (2021)
The M5 uncertainty competition: results, findings and conclusions. Int J Forecast
13.
Chen W, Xu H, Chen Z, Jiang M (2021) A novel method for time series prediction based on
error decomposition and nonlinear combination of forecasters. Neurocomputing 426:85–103
14.
Giesbrecht M, Bottura CP (2011) Immuno inspired approaches to model discrete time series at
state space. In: The fourth international workshop on advanced computational intelligence,
pp 750–756
15.
Kuranga C, Pillay N (2022) A comparative study of nonlinear regression and autoregressive
techniques in hybrid with particle swarm optimization for time-series forecasting. Expert Syst
Appl 190:116163
16.
Ljung L (1999) System identification - theory for the user, 2nd edn. Prentice Hall
17.
Meyer Y (2003) Wavelets and operators. Cambridge University Press
18.
Huang NE, Shen Z, Long SR, Wu MC, Shih HH, Zheng Q, Yen N-C, Tung CC, Liu HH (1998)
The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary
time series analysis. In: Proceedings of the royal society of London. Series A: mathematical,
physical and engineering sciences, vol 454, 1971, pp 903–995
19.
Golyandina N (2013) Singular spectrum analysis for time series. Springer, Berlin New York
20.
Chevallier J, Zhu B, Zhang L (2020) Forecasting inflection points: hybrid methods with
multiscale machine learning algorithms. Comput Econ 57(2):537–575
21.
Hu W, He Y, Liu Z, Tan J, Yang M, Chen J (2020) Toward a digital twin: time series prediction
based on a hybrid ensemble empirical mode decomposition and BO-LSTM neural networks. J
Mech Des 143(5):051705
22.
Jamal A, Hameed Ashour MA, Abbas Helmi RA, Fong SL (2021) A wavelet-neural networks
model for time series. In: 2021 IEEE 11th IEEE symposium on computer applications industrial
electronics (ISCAIE), pp 325–330
23.
Silvestre GD, dos Santos MR, de Carvalho AC (2021) Seasonal-trend decomposition based on
loess + machine learning: hybrid forecasting for monthly univariate time series. In: 2021
international joint conference on neural networks (IJCNN), pp 1–7
24.
Dudek G, Pełka P (2021) Pattern similarity-based machine learning methods for mid-term load
forecasting: a comparative study. Appl Soft Comput 104:107223
25.
Martins GS, Giesbrecht M (2021) Clearness index forecasting: a comparative study between a
stochastic realization method and a machine learning algorithm. Renew Energy 180:787–805
26.
Hossain Lipu M, Miah MS, Ansari S, Hannan M, Hasan K, Sarker MR, Mahmud MS, Hussain
A, Mansor M (2021) Data-driven hybrid approaches for renewable power prediction toward
grid decarbonization: applications, issues and suggestions. J Clean Prod 328:129476
27.
Peel MC, McMahon TA (2020) Historical development of rainfall-runoff modeling. WIREs
Water 7(5):e1471
28.
ASCE (2000) Artificial neural networks in hydrology. ii: hydrologic applications. J Hydrol Eng
5(2):124–137
29.
Yaseen ZM, El-shafie A, Jaafar O, Afan HA, Sayl KN (2015) Artificial intelligence based
models for stream-flow forecasting: 2000–2015. J Hydrol 530:829–844
30.
Papacharalampous G, Tyralis H (2020) Hydrological time series forecasting using simple
combinations: big data testing and investigations on one-year ahead river flow predictability. J
Hydrol 590:125205
31.
Malhan P, Mittal M (2022) A novel ensemble model for long-term forecasting of wind and
hydro power generation. Energy Conv Manage 251:114983
32.
Cleveland RB, Cleveland WS, McRae JE, Terpenning I (1990) Stl: a seasonal-trend
decomposition procedure based on loess. J Official Stat 6(1):3–33
33.
Belotti J, Siqueira H, Araujo L, Stevan SL, de Mattos Neto PS, Marinho MHN, de Oliveira
JFL, Usberti F, Leone Filho MdA, Converti A, Sarubbo LA (2020) Neural-based ensembles and
unorganized machines to predict streamflow series from hydroelectric plants. Energies 13:18
34.
Gill MK, Kaheil YH, Khalil A, McKee M, Bastidas L (2006) Multiobjective particle swarm
optimization for parameter estimation in hydrology. Water Resources Res 42:7
35.
Feng Z, Niu W (2021) Hybrid artificial neural network and cooperation search algorithm for
nonlinear river flow time series forecasting in humid and semi-humid regions. Knowl-Based
Syst 211:106580
36.
Dehghani M, Riahi-Madvar H, Hooshyaripor F, Mosavi A, Shamshirband S, Zavadskas EK,
Chau K-W (2019) Prediction of hydropower generation using grey wolf optimization adaptive
neuro-fuzzy inference system. Energies 12:2
37.
Yaseen ZM, Ebtehaj I, Kim S, Sanikhani H, Asadi H, Ghareb MI, Bonakdari H, Wan Mohtar
WHM, Al-Ansari N, Shahid S (2019) Novel hybrid data-intelligence model for forecasting
monthly rainfall with uncertainty analysis. Water 11:3
38.
Ahmed AN, Van Lam T, Hung ND, Van Thieu N, Kisi O, El-Shafie A (2021) A comprehensive
comparison of recent developed meta-heuristic algorithms for streamflow time series
forecasting problem. Appl Soft Comput 105:107282
39.
Adamowski J, Chan HF (2011) A wavelet neural network conjunction model for groundwater
level forecasting. J Hydrol 407(1):28–40
40.
Wei S, Yang H, Song J, Abbaspour K, Xu Z (2013) A wavelet-neural network hybrid modelling
approach for estimating and predicting river monthly flows. Hydrol Sci J 58(2):374–389
41.
Yaseen ZM, Awadh SM, Sharafati A, Shahid S (2018) Complementary data-intelligence model
for river flow simulation. J Hydrol 567:180–190
42.
Nourani V, Andalib G, Sadikoglu F (2017) Multi-station streamflow forecasting using wavelet
denoising and artificial intelligence models. Proc Comput Sci 120:617–624 (9th international
conference on theory and application of soft computing, computing with words and perception,
ICSCCW, 2017 22–23 August 2017. Budapest, Hungary)
43.
Abda Z, Chettih M, Zerouali B (2021) Assessment of neuro-fuzzy approach based different
wavelet families for daily flow rates forecasting. Model Earth Syst Environ
44.
Labat D, Goddéris Y, Probst JL, Guyot JL (2004) Evidence for global runoff increase related to
climate warming. Adv Water Resources 27(6):631–642
45.
Nourani V, Hosseini Baghanam A, Adamowski J, Kisi O (2014) Applications of hybrid wavelet-
artificial intelligence models in hydrology: a review. J Hydrol 514:358–377
46.
Apaydin H, Taghi Sattari M, Falsafian K, Prasad R (2021) Artificial intelligence modelling
integrated with singular spectral analysis and seasonal-trend decomposition using loess
approaches for streamflow predictions. J Hydrol 600:126506
47.
Guo Y, Xu Y-P, Xie J, Chen H, Si Y, Liu J (2021) A weights combined model for middle and
long-term streamflow forecasts and its value to hydropower maximization. J Hydrol
602:126794
48.
Dragomiretskiy K, Zosso D (2014) Variational mode decomposition. IEEE Trans Signal
Process 62(3):531–544
49.
Niu W, Feng Z, Xu Y, Feng B, Min Y (2021) Improving prediction accuracy of hydrologic time
series by least-squares support vector machine using decomposition reconstruction and swarm
intelligence. J Hydrol Eng 26(9):04021030
50.
Chu H, Wei J, Wu W (2020) Streamflow prediction using lasso-fcm-dbn approach based on
hydro-meteorological condition classification. J Hydrol 580:124253
51.
Mehr AD, Gandomi AH (2021) Msgp-lasso: an improved multi-stage genetic programming
model for streamflow prediction. Inform Sci 561:181–195
52.
Global wind report (2021) Tech. rep., Global Wind Energy Council, 2021
53.
Ahmadi M, Khashei M (2021) Current status of hybrid structures in wind forecasting. Eng Appl
Artif Intel 99:104133
54.
Ma Z, Guo S, Xu G, Aziz S (2020) Meta learning-based hybrid ensemble approach for short-
term wind speed forecasting. IEEE Access 8:172859–172868
55.
Abdullah AA, Hassan TM (2021) A hybrid neuro-fuzzy & bootstrap prediction system for wind
power generation. Technol Econ Smart Grids Sustain Energy 6(1):1–14
56.
Malhan P, Mittal M (2022) A novel ensemble model for long-term forecasting of wind and
hydro power generation. Energy Conv Manage 251:114983
57.
Piotrowski P, Kopyt M, Baczyński D, Robak S, Gulczyński T (2021) Hybrid and ensemble
methods of two days ahead forecasts of electric energy production in a small wind turbine.
Energies 14(5):1225
58.
Huang X, Wang J, Huang B (2021) Two novel hybrid linear and nonlinear models for wind
speed forecasting. Energy Conv Manage 238:114162
59.
Dong Y, Niu J, Liu Q, Sivakumar B, Du T (2021) A hybrid prediction model for wind speed
using support vector machine and genetic programming in conjunction with error
compensation. Stochastic Environ Res Risk Assess, 1–14
60.
Ahmadi M, Khashei M (2021) A fuzzy series-parallel preprocessing (fspp) based hybrid model
for wind forecasting. Transmission & Distribution, IET Generation
61.
Lu P, Ye L, Zhao Y, Dai B, Pei M, Tang Y (2021) Review of meta-heuristic algorithms for wind
power prediction: methodologies, applications and challenges. Appl Energy 301:117446
62.
Abbasipour M, Igder MA, Liang X (2021) Data-driven wind speed forecasting techniques using
hybrid neural network methods. In: 2021 IEEE Canadian conference on electrical and computer
engineering (CCECE). IEEE, New York, pp 1–6
63.
Naik J, Dash S, Dash P, Bisoi R (2018) Short term wind power forecasting using hybrid
variational mode decomposition and multi-kernel regularized pseudo inverse neural network.
Renew Energy 118:180–212
64.
Kerem A, Saygin A, Rahmani R (2019) Wind power forecasting using a new and robust hybrid
metaheuristic approach: a case study of multiple locations. In: 2019 19th international
symposium on electromagnetic fields in mechatronics, electrical and electronic engineering
(ISEF). IEEE, New York, pp 1–2
65.
Devi AS, Maragatham G, Boopathi K, Rangaraj A (2020) Hourly day-ahead wind power
forecasting with the eemd-cso-lstm-efg deep learning technique. Soft Comput 24(16):12391–
12411
66.
Liu B, Zhao S, Yu X, Zhang L, Wang Q (2020) A novel deep learning approach for wind power
forecasting based on wd-lstm model. Energies 13(18):4964
67.
Meng A, Chen S, Ou Z, Ding W, Zhou H, Fan J, Yin H (2022) A hybrid deep learning
architecture for wind power prediction based on bi-attention mechanism and crisscross
optimization. Energy 238:121795
68.
Ding M, Zhou H, Xie H, Wu M, Liu K-Z, Nakanishi Y, Yokoyama R (2021) A time series
model based on hybrid-kernel least-squares support vector machine for short-term wind power
forecasting. ISA Trans 108:58–68
69.
Vidya S, Janani ESV (2021) Wind speed multistep forecasting model using a hybrid
decomposition technique and a selfish herd optimizer-based deep neural network. Soft Comput
25(8):6237–6270
70.
Duan J, Wang P, Ma W, Tian X, Fang S, Cheng Y, Chang Y, Liu H (2021) Short-term wind
power forecasting using the hybrid model of improved variational mode decomposition and
correntropy long short-term memory neural network. Energy 214:118980
71.
Emeksiz C, Tan M (2022) Multi-step wind speed forecasting and hurst analysis using novel
hybrid secondary decomposition approach. Energy 238:121764
72.
Zhang S, Chen Y, Xiao J, Zhang W, Feng R (2021) Hybrid wind speed forecasting model based
on multivariate data secondary decomposition approach and deep learning algorithm with
attention mechanism. Renew Energy 174:688–704
73.
Neshat M, Nezhad MM, Abbasnejad E, Mirjalili S, Groppi D, Heydari A, Tjernberg LB, Garcia
DA, Alexander B, Shi Q et al (2021) Wind turbine power output prediction using a new hybrid
neuro-evolutionary method. Energy 229:120617
74.
Zouaidia K, Ghanemi S, Rais MS, Bougueroua L, Katarzyna W-W (2021) Hybrid intelligent
framework for one-day ahead wind speed forecasting. Neural Comput Appl 33(23):16591–
16608
75.
Qu Z, Mao W, Zhang K, Zhang W, Li Z (2019) Multi-step wind speed forecasting based on a
hybrid decomposition technique and an improved back-propagation neural network. Renew
Energy 133:919–929
76.
Pradhan PP, Subudhi B (2020) Wind speed forecasting based on wavelet transformation and
recurrent neural network. Int J Numer Model Electron Netw Dev Fields 33(1):e2670
77.
Altan A, Karasu S, Zio E (2021) A new hybrid model for wind speed forecasting combining
long short-term memory neural network, decomposition methods and grey wolf optimizer. Appl
Soft Comput 100:106996
78.
Jaseena K, Kovoor BC (2021) Decomposition-based hybrid wind speed forecasting model
using deep bidirectional lstm networks. Energy Conv Manage 234:113944
79.
Khelil K, Berrezzek F, Bouadjila T (2021) Ga-based design of optimal discrete wavelet filters
for efficient wind speed forecasting. Neural Comput Appl 33(9):4373–4386
80.
Lin B, Zhang C (2021) A novel hybrid machine learning model for short-term wind speed
prediction in inner Mongolia, China. Renew Energy 179:1565–1577
81.
Chen X, Li Y, Zhang Y, Ye X, Xiong X, Zhang F (2021) A novel hybrid model based on an
improved seagull optimization algorithm for short-term wind speed forecasting. Processes
9(2):387
82.
Khazaei S, Ehsan M, Soleymani S, Mohammadnezhad-Shourkaei H (2022) A high-accuracy
hybrid method for short-term wind power forecasting. Energy 238:122020
83.
Moreno SR, dos Santos Coelho L (2018) Wind speed forecasting approach based on singular
spectrum analysis and adaptive neuro fuzzy inference system. Renew energy 126:736–754
84.
Moreno SR, Mariani VC, dos Santos Coelho L (2021) Hybrid multi-stage decomposition with
parametric model applied to wind speed forecasting in Brazilian northeast. Renew Energy
164:1508–1526
85.
Kushwah AK, Wadhvani R (2021) Discrete wavelet transforms based hybrid approach to
forecast windspeed time series. Wind Eng 0309524X21998263
86.
Dong Y, Zhang H, Wang C, Zhou X (2021) A novel hybrid model based on bernstein
polynomial with mixture of gaussians for wind power forecasting. Applied Energy 286:116545
87.
Khan M, Liu T, Ullah F (2019) A new hybrid approach to forecast wind power for large scale
wind turbine data using deep learning with tensorflow framework and principal component
analysis. Energies 12(12):2229
88.
Zhao X, Wei H, Li C, Zhang K (2020) A hybrid nonlinear forecasting strategy for short-term
wind speed. Energies 13(7):1596
89.
Xu Z, Zhang X (2021) Short-term wind power prediction of wind farms based on lstm+ narx
neural network. In: 2021 international conference on computer engineering and application
(ICCEA). IEEE, pp 137–141
90.
de Mattos Neto PS, de Oliveira JF, Domingos SdO, Siqueira HV, Marinho MH, Madeiro F
(2021)An adaptive hybrid system using deep learning for wind speed forecasting. Inform Sci
581:495–514
91.
Ogliari E, Guilizzoni M, Giglio A, Pretto S (2021) Wind power 24-h ahead forecast by an
artificial neural network and an hybrid model: comparison of the predictive performance.
Renew Energy 178:1466–1474
92.
Renewables 2021 global status report. Tech. rep., REN21 RENEWABLES NOW (2021)
93.
Guermoui M, Melgani F, Gairaa K, Mekhalfi ML (2020) A comprehensive review of hybrid
models for solar radiation forecasting. J Clean Prod 258:120357
94.
Kim B, Suh D, Otto M-O, Huh J-S (2021) A novel hybrid spatio-temporal forecasting of
multisite solar photovoltaic generation. Remote Sens 13(13):2605
95.
Álvarez-Alvarado JM, Ríos-Moreno JG, Obregón-Biosca SA, Ronquillo-Lomelí G, Ventura-
Ramos E, Trejo-Perea M (2021) Hybrid techniques to predict solar radiation using support
vector machine and search optimization algorithms: a review. Appl Sci 11(3):1044
96.
VanDeventer W, Jamei E, Thirunavukkarasu GS, Seyedmahmoudian M, Soon TK, Horan B,
Mekhilef S, Stojcevski A (2019) Short-term pv power forecasting using hybrid gasvm
technique. Renew Energy 140:367–379
97.
Akhter MN, Mekhilef S, Mokhlis H, Ali R, Usama M, Muhammad MA, Khairuddin ASM
(2021) A hybrid deep learning method for an hour ahead power output forecasting of three
different photovoltaic systems. Appl Energy 118185
98.
Al-Hajj R, Assi A, Fouad M, Mabrouk E (2021) A hybrid lstm-based genetic programming
approach for short-term prediction of global solar radiation using weather data. Processes
9(7):1187
99.
Bendali W, Saber I, Bourachdi B, Boussetta M, Mourad Y (2020) Deep learning using genetic
algorithm optimization for short term solar irradiance forecasting. In: 2020 fourth international
conference on intelligent computing in data sciences (ICDS). IEEE, pp 1–8
100.
Dong Z, Yang D, Reindl T, Walsh WM (2015) A novel hybrid approach based on self-
organizing maps, support vector regression and particle swarm optimization to forecast solar
irradiance. Energy 82:570–577
101.
Gupta S, Katta AR, Baldaniya Y, Kumar R (2020) Hybrid random forest and particle swarm
optimization algorithm for solar radiation prediction. In: 2020 IEEE 5th international
conference on computing communication and automation (ICCCA). IEEE, pp 302–307
102.
Halabi LM, Mekhilef S, Hossain M (2018) Performance evaluation of hybrid adaptive neuro-
fuzzy inference system models for predicting monthly global solar radiation. Appl Energy
213:247–261
103.
Salcedo-Sanz S, Deo RC, Cornejo-Bueno L, Camacho-Gómez C, Ghimire S (2018) An
efficient neuro-evolutionary hybrid modelling mechanism for the estimation of daily global
solar radiation in the sunshine state of australia. Appl Energy 209:79–94
104.
Eseye AT, Zhang J, Zheng D (2018) Short-term photovoltaic solar power forecasting using a
hybrid wavelet-pso-svm model based on scada and meteorological information. Renew Energy
118:357–367
105.
Gao B, Huang X, Shi J, Tai Y, Zhang J (2020) Hourly forecasting of solar irradiance based on
ceemdan and multi-strategy cnn-lstm neural networks. Renew Energy 162:1665–1683
106.
Huang X, Li Q, Tai Y, Chen Z, Zhang J, Shi J, Gao B, Liu W (2021) Hybrid deep neural model
for hourly solar irradiance forecasting. Renew Energy 171:1041–1060
107.
Lan H, Yin H, Hong Y-Y, Wen S, David CY, Cheng P (2018) Day-ahead spatio-temporal
forecasting of solar irradiation along a navigation route. Appl Energy 211:15–27
108.
Lan H, Zhang C, Hong Y-Y, He Y, Wen S (2019) Day-ahead spatiotemporal solar irradiation
forecasting using frequency-based hybrid principal component analysis and neural network.
Appl Energy 247:389–402
109.
Massaoudi M, Refaat SS, Abu-Rub H, Chihi I, Wesleti FS (2020) A hybrid Bayesian ridge
regression-cwt-catboost model for pv power forecasting. In: 2020 IEEE Kansas power and
energy conference (KPEC). IEEE, pp 1–5
110.
Meng F, Zou Q, Zhang Z, Wang B, Ma H, Abdullah HM, Almalaq A, Mohamed MA (2021) An
intelligent hybrid wavelet-adversarial deep model for accurate prediction of solar power
generation. Energy Reports 7:2155–2164
111.
Qu Y, Xu J, Sun Y, Liu D (2021) A temporal distributed hybrid deep learning model for day-
ahead distributed pv power forecasting. Appl Energy 304:117704
112.
Hussain S, AlAlili A (2017) A hybrid solar radiation modeling approach using wavelet
multiresolution analysis and artificial neural networks. Appl Energy 208:540–550
113.
Kushwaha V, Pindoriya NM (2019) A sarima-rvfl hybrid model assisted by wavelet
decomposition for very short-term solar pv power generation forecast. Renew Energy 140:124–
139
114.
Li P, Zhou K, Lu X, Yang S (2020) A hybrid deep learning model for short-term pv power
forecasting. Appl Energy 259:114216
115.
Sharma N, Mangla M, Yadav S, Goyal N, Singh A, Verma S, Saber T (2021) A sequential
ensemble model for photovoltaic power forecasting. Comput Electrical Eng 96:107484
116.
Xie T, Zhang G, Liu H, Liu F, Du P (2018) A hybrid forecasting method for solar output power
based on variational mode decomposition, deep belief networks and auto-regressive moving
average. Appl Sci 8(10):1901
117.
Davò F, Alessandrini S, Sperati S, Delle Monache L, Airoldi D, Vespucci MT (2016) Post-
processing techniques and principal component analysis for regional wind power and solar
irradiance forecasting. Solar Energy 134:327–338
118.
Ziyabari S, Du L, Biswas S (2020) A spatio-temporal hybrid deep learning architecture for
short-term solar irradiance forecasting. In: 2020 47th IEEE photovoltaic specialists conference
(PVSC). IEEE, pp 0833–0838
119.
Qu J, Qian Z, Pei Y (2021) Day-ahead hourly photovoltaic power forecasting using attention-
based cnn-lstm neural network embedded with multiple relevant and target variables prediction
pattern. Energy 232:120996
120.
Dong Z, Yang D, Reindl T, Walsh WM (2014) Satellite image analysis and a hybrid esss/ann
model to forecast solar irradiance in the tropics. Energy Conv Manage 79:66–73
121.
Belmahdi B, Louzazni M, El Bouardi A (2020) A hybrid arima-ann method to forecast daily
global solar radiation in three different cities in morocco. Eur Phys J Plus 135(11):1–23
122.
Bouzerdoum M, Mellit A, Pavan AM (2013) A hybrid model (sarima-svm) for short-term
power forecasting of a small-scale grid-connected photovoltaic plant. Solar Energy 98:226–235
123.
Wang Y, Feng B, Hua Q-S, Sun L (2021) Short-term solar power forecasting: a combined long
short-term memory and gaussian process regression method. Sustainability 13(7):3665
124.
Hocaoglu FO, Serttas F (2017) A novel hybrid (Mycielski-Markov) model for hourly solar
radiation forecasting. Renew Energy 108:635–643
125.
Lin P, Peng Z, Lai Y, Cheng S, Chen Z, Wu L (2018) Short-term power prediction for
photovoltaic power plants using a hybrid improved kmeans-Gra-Elman model based on
multivariate meteorological factors and historical power datasets. Energy Conv Manage
177:704–717
126.
Lai CS, Zhong C, Pan K, Ng WW, Lai LL (2021) A deep learning based hybrid method for
hourly solar radiation forecasting. Expert Syst Appl 177:114941

OceanofPDF.com
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_4

A Deep Learning-Based Islanding Detection Approach by Considering the Load Demand of DGs Under Different Grid Conditions
Gökay Bayrak1 and Alper Yılmaz1
(1) Department of Electrical and Electronics Engineering, Bursa Technical
University, 16300 Bursa, Turkey

Gökay Bayrak (Corresponding author)


Email: [email protected]

Alper Yılmaz
Email: [email protected]

Abstract
Islanding detection is a very important issue in the integration of renewable
energy systems with the grid. In recent years, especially artificial
intelligence and deep learning-based islanding detection methods have
come to the fore in terms of providing reliable power quality. In this study, a deep learning-based islanding detection approach that considers power quality and load demand problems is proposed. The aim is to effectively detect the islanding condition that occurs as a result of the unintentional disconnection of distributed generation (DG) systems from the grid. In the proposed approach, a deep learning-based islanding detection method is developed that takes into account faults and power quality events occurring on the load side, such as asynchronous motor startup and capacitor switching, conditions that are not easy to detect with conventional islanding detection methods. With the developed method, it is seen that the islanding event can be distinguished from the power quality events that occur on the grid, even under noisy signal conditions. In this way, the power quality of the grid is increased and the performance of the DG under dynamic load behavior is improved.

Keywords Deep learning – Islanding detection – Distributed generation – Artificial intelligence – Load demand

1 Introduction
Today, limited fossil fuel sustainability, environmental concerns, and
increasing energy demand are universal issues that are widely addressed to
find appropriate solutions. Renewable energy (RE)-based distributed generators (DGs), such as photovoltaic (PV), wind, and hydrogen-based DGs, together with electric vehicles (EVs) that provide vehicle-to-grid (V2G) support, stand out in solving these problems [1, 2]. Figure 1 shows the positive and negative effects of these rapidly increasing systems on the grid. Here, connecting the RE-based DGs and EV charging stations (EVCS) with V2G technology to the grid in compliance with the grid code requirements is an important criterion [3, 4].
Fig. 1 Positive and negative effects of distributed generators on the main grid
Sustainable power flow is required for both the consumers and the grid side to provide reliable grid integration. One of these criteria, and the most important, is islanding detection. Islanding is defined by the IEEE 929-2000 standard [3] as the condition in which a distribution system becomes electrically isolated from the grid yet continues to be energized by one or more local DGs through the associated point of common coupling (PCC). An islanding condition in a microgrid causes serious damage to both the DG and the operator. Thus, the timely detection of islanding is an essential issue for a DG system.
If any voltage or frequency value in the grid exceeds
the acceptable limits, the DG system should be physically isolated from the
grid as soon as possible and continue to feed the local loads. From this point
of view, there is a need for methods that can detect the islanding condition
of the DG system within the periods specified in the standards from the
moment it occurs [3]. Besides, distinguishing islanding conditions from non-islanding events is also of great importance. In microgrids, switching
of different load/capacitor groups, different DG operating conditions, and
short circuit faults cause minor disturbances called power quality events
(PQE) that are not islanding events [5]. Here, evaluation of DG load
demand in different grid conditions and performing tests in all possible load
case scenarios are essential for system reliability and sustainability [6]. The
frequency and voltage ranges allowed in the standards and the islanding detection time are given in Table 1.
Table 1 Islanding condition detection standards

Parameters | IEEE Std. 929-2000 | IEEE Std. 1547-2003 | IEC 62116
Frequency range | 59.3 Hz ≤ f ≤ 60.5 Hz | 59.3 Hz ≤ f ≤ 60.5 Hz | f0 − 1.5 Hz ≤ f ≤ f0 + 1.5 Hz
Voltage range | 0.88 ≤ V ≤ 1.10 | 0.88 ≤ V ≤ 1.10 | 0.85 ≤ V ≤ 1.15
Quality factor | 2.5 | 1 | 1
Detection time | t < 2 s | t < 2 s | t < 2 s
Islanding detection methods can be classified as remote, local (passive
and active), and intelligent methods. However, passive and active methods
have several drawbacks, including difficulty in determining a threshold
value, uncertainty due to operating conditions, and susceptibility to noise
[7]. Also, they contain a large non-detection zone (NDZ) [1]. The NDZ
indicates the area where islanding occurred but could not be detected.
Islanding detection should be possible even in the worst case where the
active and reactive power generated in the microgrid is completely
consumed by the loads. For good islanding detection, the NDZ should be as
low as possible. NDZ is low in remote methods, but the cost is quite high
[8]. To overcome the limitations of traditional methodologies, intelligent
methods using signal processing and classifiers are presented [6]. Intelligent
islanding detection approaches are including three steps in the literature:
signal processing [9], feature selection [10], and classification [11]. In some
studies in which signal processing-based techniques are applied, very high
accuracy is obtained in noise-free conditions, while this accuracy decreases under high-noise conditions [6]. In classifiers, features must be correctly defined by users. Besides, feature selection takes a long time. Deep learning (DL) methods have automatic feature extraction capability, eliminate human involvement, and provide closed-loop feedback. These methods can automatically extract features without the
need for any conventional signal analysis method.
When the literature studies are examined, an effective islanding
detection method should have the following features [9]:
– It should be applicable to distributed generation systems with different characteristics.
– Islanding conditions and non-islanding PQE events should be extensively tested considering the DG load demand.
– Minimum measurement parameters should be used.
– It must be validated with a large-scale dataset.
– Cost should be reduced by using a limited number of measuring devices.
In this study, a DL-based islanding detection method using long short-
term memory (LSTM) and convolutional neural network (CNN) is
proposed for the classification of islanding and PQEs such as sags, swells,
and frequency deviations in a DG-based microgrid considering the DG load demand. The NDZ is almost zero, and the detection time is within the limits of the IEEE Std. 929-2000, IEEE Std. 1547-2003, and IEC 62116 standards. In Sect. 2,
information about the mathematical and simulative data generated for
islanding and non-islanding events (PQEs) is given, and the test system and
data acquisition hardware are discussed in detail. In the next section, signal
analysis and machine learning (ML)-based methods are discussed in detail,
after briefly mentioning the conventional passive, active, and remote
methods. Besides, in Sect. 3, deep learning methods used in fault detection,
especially in microgrids, which are gaining popularity, are discussed.
Section 4 covers the theoretical background of the proposed method, its
application, and the results obtained from the method. The proposed
methodology for the classification of islanding/non-islanding events is
investigated by considering the DG load demand under different grid
conditions. Discussion and conclusion are presented in Sect. 5.

2 Data Generation and Test System


2.1 Data Generation Using Mathematical Models and Simulation Models
Islanding events and PQEs are generated using mathematical models,
simulation studies, and real data acquisition systems with an experimental
setup. In this study, mathematical PQE data is generated using the integral-
based method [12], following IEEE 1159 standards, with the software
created in the LabVIEW environment. Researchers have the option to configure parameters such as the number of samples, the sampling frequency, the fundamental frequency, and the nominal amplitude of the signals. Figure 2 shows the LabVIEW interface of the software using the integral-based method. The PQE data parameters are selected as in Refs. [8, 12], and 1000 samples are generated for each event.

Fig. 2 LabVIEW interface of the software using the integral-based method
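As an illustration of this kind of parametric event generation, the sketch below (a hypothetical minimal Python example, not the authors' LabVIEW implementation; the sampling frequency, event depths, and durations are assumed values chosen within the IEEE 1159 ranges) builds labeled per-unit sag and swell waveforms of the type used as training samples.

import numpy as np

def pqe_waveform(event="sag", fs=3200, f0=50.0, cycles=10, depth=0.3,
                 t_start=0.04, t_end=0.12):
    """Generate one per-unit voltage waveform containing a sag or swell.

    fs             : sampling frequency [Hz] (assumed value)
    f0             : fundamental frequency [Hz]
    depth          : magnitude change in p.u. during the event
    t_start, t_end : start and end times of the disturbance [s]
    """
    t = np.arange(0, cycles / f0, 1 / fs)
    window = ((t >= t_start) & (t <= t_end)).astype(float)  # 1 only while the event is active
    if event == "sag":
        amplitude = 1.0 - depth * window
    elif event == "swell":
        amplitude = 1.0 + depth * window
    else:                                                    # undisturbed signal
        amplitude = np.ones_like(t)
    return t, amplitude * np.sin(2 * np.pi * f0 * t)

# Two labeled examples: a 0.4 p.u. sag and a 0.3 p.u. swell
t, v_sag = pqe_waveform("sag", depth=0.4)
t, v_swell = pqe_waveform("swell", depth=0.3)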


Simulation models provide the ability to create hundreds of different operating conditions in the computer environment that have not occurred in the real system but can be realized in simulation. Islanding condition signals and PQEs are generated according to the references [9, 12] using the MATLAB/SIMULINK model. Simulations are generated for all scenarios that can occur depending on the situation on the demand side. The dataset scenarios used in islanding detection are shown in Fig. 3. The active/reactive power mismatch at the PCC should be set to zero as specified in the IEEE 1547 standard. In this case, the variation between the voltage and current values drops to almost zero before and after the islanding [1]. The method to be proposed for islanding detection should also be able to accurately detect the islanding condition even in these cases. In this study, data is generated by considering various power values with low NDZ between production and consumption for islanding conditions.

Fig. 3 Dataset scenarios used in islanding detection

2.2 Islanding Test System


The islanding detection test system following the IEEE 929-2000 standard is shown in Fig. 4, and the DG system, operated at unity power factor, supplies a parallel RLC load tuned to the resonance frequency (f_r = 1/(2π√(LC))) [13]. In the applied methods, it is aimed that the detection time is below the IEEE 929-2000 limits and that the NDZ is almost zero. The test system is operated under a parallel load of R = 50 Ω, L = 63 mH, and C = 0.16 mF. The quality factor (Q_f = R√(C/L)) for the demand-side load is 2.5.
Fig. 4 Islanding detection test system
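As a quick sanity check on these load parameters (a minimal sketch using the standard parallel RLC relations; it is not code from the chapter), the resonance frequency and quality factor can be computed directly:

import math

R = 50.0      # ohm
L = 63e-3     # henry
C = 0.16e-3   # farad

f_r = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # resonance frequency of the parallel RLC load
Q_f = R * math.sqrt(C / L)                      # quality factor of the parallel RLC load

print(f"f_r = {f_r:.1f} Hz, Qf = {Q_f:.2f}")    # about 50.1 Hz and 2.52, consistent with Qf = 2.5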

3 Islanding Detection Techniques


The main purpose of the islanding condition detection methods is to
determine whether the islanding has occurred by monitoring some electrical
parameters and load demand on the grid and DG sides. Islanding detection
methods can be divided into 5 categories: active, conventional passive,
remote, hybrid, and improved passive (signal analysis and ML-based)
methods. Figure 5 shows the islanding detection methods, and they are detailed in the following subsections.
Fig. 5 Islanding detection methods
3.1 Conventional (Local and Remote) Techniques
Local methods (passive: over/under frequency and voltage (OFP/UFP, OVP/UVP), rate of change of frequency (RoCoF), phase jump detection, etc.; active: Sandia frequency shift, active power variation, reactive power variation, harmonic signal injection, impedance measurement, etc.) have several
drawbacks, including difficulty in determining a threshold value,
uncertainty due to operating conditions, and susceptibility to noise. Also,
they contain a large NDZ. In the active method, an additional disturbance
signal is given to the system from the outside and the islanding condition is
detected by monitoring the changes. Although NDZ is relatively low
compared to conventional passive methods, signals injected from the
outside into the system can cause PQ degradation. In addition, detection
times are slower than passive methods. NDZ is low in remote methods
(transfer trip, phasor measurement unit (PMU)-based scheme,
programmable logic controller (PLC)-based scheme, etc.), but the cost is
quite high [1, 6].
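For illustration, a passive RoCoF relay reduces to estimating df/dt from successive frequency measurements and tripping when it exceeds a preset threshold. The sketch below is a hypothetical minimal example (the 0.5 Hz/s threshold and 20 ms measurement window are assumed values); it also hints at why threshold selection is the weak point of such methods.

def rocof_trip(freq_samples, dt=0.02, threshold=0.5):
    """Return True if |df/dt| exceeds the threshold for any adjacent sample pair.

    freq_samples : measured frequencies [Hz], one estimate every dt seconds
    dt           : time between frequency estimates [s] (assumed 20 ms)
    threshold    : trip threshold [Hz/s] (assumed; difficult to choose in practice)
    """
    for f_prev, f_next in zip(freq_samples, freq_samples[1:]):
        if abs(f_next - f_prev) / dt > threshold:
            return True
    return False

# A genuine islanding event and a severe load-switching transient can both cross a
# poorly chosen threshold, which is the NDZ/false-trip trade-off described above.
print(rocof_trip([50.00, 49.99, 49.95, 49.80]))   # True: the last step is 7.5 Hz/s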

3.2 Signal Analysis-Based Methods


Signal analysis methods have a much more flexible structure as they offer
the chance to observe both the time and frequency domain properties of the
signals. Many signal analysis methods such as the Fourier transform (FT), short-time Fourier
transform (STFT), Hilbert–Huang transform (HHT), wavelet transform
(WT), s-transform (ST), TT transform, curvelet transform (CT), empirical
mode decomposition (EMD), principal component analysis (PCA), etc., are
used in the literature [1, 6]. There are some disadvantages to signal analysis
approaches. STFT has a fixed time–frequency window resolution and
cannot provide appropriate information for all event signals. WT is heavily
influenced by noise. Spectral leakage influences the performance. ST is not
good at real-time applications. Also, ST may cause a false estimation of
harmonics. HHT applies only to narrowband signals. TT transform has high
complexity. Furthermore, all methods have a computational burden and are
not robust to noise [14].
Figure 6 provides a flowchart for the signal analysis methods. As seen,
in these methods, a domain transform is first applied to the event signal and its coefficients/features are extracted. Threshold values are determined with empirical tests afterward, and islanding/non-islanding events are detected according to the feature parameters. Applications based on threshold values are prone to false detections and missed detections due to the difficulty of choosing suitable values. This causes problems especially for signals under high-noise conditions, when the nominal event and the fault state are very close.

Fig. 6 Flowchart of signal processing-based method [9]

3.3 Machine Learning-Based Techniques Using Signal Analysis, Feature Selection, and Classifier Methods
To overcome the limitations of traditional methods, intelligent methods
using signal processing, feature selection, and classifier-based three-stage
ML techniques are presented [14–17]. A flowchart of the ML approaches
using signal analysis, feature selection, and classifier is shown in Fig. 7. In
these methods, in the first step, the signal analysis method is applied to the
event signal and feature extraction is performed using statistics. The second
step is the selection of features that affect the classifier’s performance.
However, the feature filtering optimization process is time-consuming and
tedious. Besides, it’s worth noting that the original feature set is still
manually selected. In the last stage, event classification is performed using
the selected features. The ability of conventional three-stage ML methods to
reveal attributes in raw data is limited compared to DL and requires an
expert in the training process. In DL, new features emerge spontaneously by
training the data, while in ML, the features must be defined correctly by the
users. As a result, it is seen that while the traditional ML approach achieves
very high accuracy in noise-free conditions, this accuracy decreases under
high noise conditions.
Fig. 7 Flowchart of signal analysis, feature selection, and classifier-based method [11]
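A toy version of this three-stage pipeline (a hedged sketch; the statistical features, the random forest classifier, and the placeholder data are illustrative assumptions rather than the approach of any specific cited reference) could look as follows:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """Stages 1-2: hand-crafted statistical features of one measured voltage window."""
    return [np.mean(window), np.std(window), np.max(np.abs(window)),
            np.mean(np.abs(np.diff(window)))]            # crude rate-of-change feature

# Placeholder data standing in for event windows: 0 = non-islanding, 1 = islanding
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 640))
y = rng.integers(0, 2, size=200)

X = np.array([extract_features(w) for w in X_raw])        # manually selected feature table
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)   # Stage 3
print(clf.predict(X[:5]))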

3.4 Deep Learning-Based Techniques


DL techniques have automatic feature extraction capability, eliminate human involvement, and provide closed-loop feedback. These methods can automatically extract features without the need for a signal processing step. Unlike traditional neural network structures, DL algorithms extract features from complex data using multiple layers. When DL is used to solve islanding identification difficulties, it can streamline procedures, enhance accuracy, and reduce the need for human intervention. The differences between traditional
ML methods and DL are given below:
– DL algorithms require more data for training than ML classifiers.
– While new features emerge spontaneously by training on the data in DL, ML classifiers need the features to be defined correctly by the users.
– Deep neural networks (DNNs), which involve many more mathematical operations than classical ML, give higher accuracy on high-dimensional data thanks to these features.
– In ML, problems are divided into small parts and the results are combined into a single result. In DL, on the other hand, a step-by-step problem-solving approach is followed.
DL architectures can be examined in four different groups according to
the training algorithms used. Examples of commonly used DL architectures
are convolutional neural network (CNN), deep belief network (DBN),
stacked auto-encoder (SAE), and long short-term memory (LSTM).
Convolutional neural network (CNN)-based algorithms [18]: CNN, one
of the widely used DL algorithms, is a DNN equipped with one or more
convolutional layers followed by one or more feedforward layers.
Classical CNN architectures are formed by cascading convolutional
layers, pooling layers, and fully connected (FC) layer structures. Besides,
dropout and batch normalization are often used to standardize inputs and
prevent overfitting. The data to be used in the input layer is given to the relevant network in raw form. The size of the data directly affects the accuracy of the network; as the data capacity grows, more memory and training time are needed. The convolution operation on the input data is performed with filters. The activation function transforms the linear output of the convolution layer into a nonlinear form. The pooling layer is used after the activation function and reduces the input size for the next convolution layer; in this way, memorization of the data is prevented and the computational cost is reduced. The fully connected layer comes after these layers and is connected to all regions of the convolutional layer before it. The dropout layer, on the other hand, can be used in these architectures in some cases to prevent the network from overfitting. Finally, classification is performed in the output layer, where the Softmax function is generally preferred because of its success; it produces a number of outputs according to the classification type of the network.
Long short-term memory (LSTM)-based algorithms: Recurrent neural networks (RNN) are a type of artificial neural network formed by connecting several units that contain directional loops. This structure gives the network the potential to predict the next data point using the previous data. The most widely used structure among RNNs is the LSTM. Consisting of memory cells and gates, the LSTM was developed as a solution to the vanishing gradient problem and to handle complex time series. The LSTM architecture has three gates (input, forget, and output), a constant error carousel, an output activation function, and peephole connections. Memory cells store information under the control of the gates; the input, output, and forget gates control the flow of information into and out of the memory cell. LSTM is effectively used in sequential modeling tasks such as text classification and time series modeling. The application principle is the same as that of artificial neural networks: the input vector is multiplied by the weight matrix, summed with the bias vector, and passed through an activation function (the sketch after this list illustrates these gate computations). Models using multiple LSTM layers are called deep LSTM (DLSTM).
Stacked autoencoder (SAE)-based algorithms: Autoencoders (AE) are neural networks that obtain a generally lower-dimensional representation of the data and use that representation to reproduce the same data at the output. With this feature, the training of autoencoders, which are an example of representation learning, takes place through unsupervised learning. An AE has a feedforward structure; this neural network may have one or more hidden layers. The main difference between an AE and conventional artificial neural networks is the size of the output layer: in an AE, the size of the output layer and the size of the input layer are the same. Islanding and PQE signals have a complex relationship, so using just one AE is not enough; because a single AE cannot sufficiently reduce the dimensionality of the input features, the SAE stands out in classification problems. An SAE consists of multiple stacked encoders.
Deep belief network (DBN)-based algorithms: In ML, DBN is a class of
DNN that consists of multiple layers of hidden nodes, with connections between layers but not between nodes within the same layer. DBNs can be viewed as a
combination of simple, unsupervised networks such as restricted
Boltzmann machines (RBM) or AEs. Each RBM layer is connected with
both previous and subsequent layers. However, the nodes of any layer do
not communicate with each other horizontally. DBNs can classify or
cluster for unsupervised learning with a Softmax layer as the last layer.
DBN architectures are applied to image recognition and generation.
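As referenced in the LSTM item above, the following sketch (a minimal NumPy illustration of the standard LSTM gate equations without peephole connections; the dimensions and random weights are placeholders) shows the computations of a single time step:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b stack the input/forget/candidate/output parameters."""
    z = W @ x_t + U @ h_prev + b        # input vector times weights, plus bias
    n = len(b) // 4
    i = sigmoid(z[0 * n:1 * n])         # input gate
    f = sigmoid(z[1 * n:2 * n])         # forget gate
    g = np.tanh(z[2 * n:3 * n])         # candidate cell content
    o = sigmoid(z[3 * n:4 * n])         # output gate
    c_t = f * c_prev + i * g            # memory cell update
    h_t = o * np.tanh(c_t)              # hidden state through the output activation
    return h_t, c_t

# Placeholder dimensions: 8 input features, 4 hidden units
rng = np.random.default_rng(0)
n_in, n_hid = 8, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)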

4 Proposed Hybrid Model Using CNN and LSTM


ML-based islanding detection approaches include three steps: signal processing, feature filtering, and classification. However, the ability of conventional three-stage ML methods to reveal attributes is limited compared to DL and requires an expert in the training process. Also, three-stage ML methods are not robust to noise. The signal processing and feature selection stages cause an extra computational burden, and human intervention at these stages makes the closed-loop control seen in DL structures impossible. To solve these problems, multiple novel DL-based models are suggested in this study. In Fig. 8, the general structure of the DCNN, DLSTM, and proposed hybrid CNN-LSTM-based DL method is shown. In the study, the best of the outputs of the existing deep learning algorithms is sought; choosing the best result aims to increase the accuracy and reliability of the system. This approach reliably detects islanding conditions, unlike conventional methods, which cause false detections due to load demands and noise in the signal. The method is discussed in detail in this section.

Fig. 8 a DLSTM algorithm, b DCNN algorithm, and c proposed DCNN-DLSTM-based method


The number of layers, layer order, and parameter selection in the DCNN
model may differ based on the model designer. The data for CNN models is
passed through the network layers, and the weights are updated and
transferred to the next layer. The overall error value is computed by
subtracting the DCNN response from the targeted result. A backpropagation
technique is used to distribute the obtained error to all weights in the
network. The influence of each weight on the total error is determined using stochastic gradient descent-based optimization approaches. To get the best network performance, each iteration aims to lower the overall error. Several models are formed in this study by changing various parameters such as the ordering of layers in the DCNN structures, kernel dimensions, activation functions, optimizer functions, and filter dimensions. Figure 8a shows the model parameters that provide the highest accuracy for the DCNN algorithm. ReLU is used as the activation function in the CNN part, and the sigmoid activation function is used in the FC layer. Dropout and batch normalization are used to standardize inputs and prevent overfitting.
Figure 8b shows the model parameters of the most suitable model, i.e., the one providing the highest accuracy for the DLSTM algorithm used for islanding detection. This model is determined for the best-performing case by modifying the hyper-parameters of the network and using optimization methods. After choosing the most suitable LSTM model, the effects on its accuracy are investigated by varying parameters such as the input parameters and the training/test ratios in order to obtain a faster islanding detection.
In this study, a hybrid network called DCNN-DLSTM combining
DCNN and DLSTM algorithms is proposed for islanding/non-islanding
event classification. DCNN and DLSTM models are created separately in
the previous stage and combined to achieve better performance. The
important information of the input samples is revealed in the first stage with
CNN. The LSTM neural network, on the other hand, is designed to train
and classify islanding and non-islanding conditions in the second step. The
motivation to use LSTM in this model is to extract the dependencies
between each feature row from the CNN network. The model parameters
selected by considering the accuracy and loss factor consist of the number
of convolutional layers, kernel size, maximum pooling, and the number of
neurons in the fully connected layer. Model parameters are optimized by
training with different options to achieve maximum performance. ReLU is used as the activation function in the CNN part, and the sigmoid activation function is used in the FC layer. Figure 8c shows the model parameters of the most suitable model, which provides the highest accuracy for the DCNN-DLSTM algorithm.
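The exact layer counts, kernel sizes, and other hyper-parameters of the proposed model are given in Fig. 8 rather than in the text, so the following Keras sketch is only a hypothetical illustration of the DCNN-DLSTM idea (1-D convolution, batch normalization, and pooling for automatic feature extraction, followed by stacked LSTM layers and a sigmoid output for the binary islanding/non-islanding decision); the window length and all layer dimensions are assumptions.

from tensorflow.keras import layers, models

def build_cnn_lstm(n_points=640):
    """Hypothetical CNN-LSTM binary classifier for single-channel event windows."""
    model = models.Sequential([
        layers.Input(shape=(n_points, 1)),            # raw voltage/current window
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Dropout(0.2),
        layers.LSTM(64, return_sequences=True),       # dependencies between feature rows
        layers.LSTM(32),
        layers.Dense(1, activation="sigmoid"),        # C1 = islanding, C2 = non-islanding
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()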

4.1 Results
The training set reached 7000 samples for the DCNN, DLSTM, and proposed hybrid models. The generated test samples for the islanding and non-
islanding classes are given in Table 2. Class C1 shows islanding events,
while class C2 covers non-islanding events such as PQEs.
Table 2 Generated test samples for islanding and non-islanding classes

Class | Events | Number of test samples
C1 | Islanding in different power mismatches (at the PCC of the hydrogen energy-based DG) | 300
C2 | Voltage sag (switching on loads and switching off capacitor banks) | 100
C2 | Voltage swell (switching off loads and switching on capacitor banks) | 100
C2 | Induction motor starting | 100
C2 | Presence of drive systems | 100
C2 | PV disconnection | 100
C2 | Line-to-ground (LG), two line-to-ground (LLG), and three line-to-ground (LLLG) faults | 200

The classification test results of the proposed method are given in Table 3. Accuracy performance is very high for test data containing noise at
different signal-to-noise ratio (SNR) values and covering all scenarios. The
proposed approach classifies islanding conditions caused by CB opening
and minor disturbances caused by switching and operating conditions in a
microgrid.
Table 3 The classification test results of the proposed method

Class | Number of test samples | Correct classifications | Accuracy rate (%)
C1 | 300 | 295 | 98.33
C2 | 700 | 688 | 98.29
The performance comparison of DLSTM and DCNN and the proposed
DCNN-DLSTM-based hybrid method at different noise levels is given in
Table 4. The hybrid model provides better performance than DCNN and
DLSTM in accuracy and noise immunity. The LSTM is used to extract the dependencies between each row of features. With this advantage, the
proposed model shows performance superiority in automatic feature
extraction.
Table 4 Comparison

Accuracy rate (%)
Model | No noise | Low-level noise (SNR: 40 dB) | High-level noise (SNR: 30 dB)
DCNN | 98.43 | 97.13 | 96.46
DLSTM | 98.25 | 97.56 | 96.79
Proposed hybrid model | 98.85 | 98.13 | 98.01

The detection time is within the IEEE standards for the hybrid DCNN-
DLSTM with a binary classifier. The NDZ is almost zero for the proposed method, and its comparison with the UVP/OVP and UFP/OFP methods is shown in Fig. 9.

Fig. 9 NDZ region comparison with UVP/OVP and UFP/OFP method for different
5 Discussion and Conclusion
Previous ML-based islanding detection approaches include three steps: signal processing, feature filtering, and classification. In the signal analysis step, all methods are affected by noise. The feature filtering and feature optimization processes are tedious and time-consuming. Besides, signal processing and feature selection cause a computational burden. To solve these problems, multiple novel DL-based models are suggested. A
DL-based islanding classification method using LSTM and CNN is
proposed for the classification of islanding and non-islanding PQEs such as
sags, swells, and frequency deviations in a DG-based microgrid considering
the DG load demand. The important information of the input samples is
revealed in the first step with CNN in the proposed method. The LSTM is
designed to train and classify events in the second step.
A DCNN-DLSTM-based method is analyzed under different scenarios
that will occur depending on the situation on the demand side. The accuracy
results are also compared with the DCNN and DLSTM models. The
proposed method has 98.85% accuracy under no-noise conditions and 98.01% under high-level noise conditions. The detection time is within the IEEE standards, and
NDZ is almost zero for DCNN-DLSTM with a binary classifier.

References
1. Khan MA, Haque A, Kurukuru VB, Saad M (2022) Islanding detection techniques for grid-connected photovoltaic systems—a review. Renew Sustainable Energy Rev 154:111854
2. Shobana S, Praghash K, Ramya G, Rajakumar BR, Binu D (2022) Integrating renewable energy in electric V2G: improved optimization assisting dispatch model. Int J Energy Res 46(6):7917–7934
3. IEEE Std. 929-2000 IEEE recommended practice for utility interface of photovoltaic (PV) systems, Institute of Electrical and Electronics Engineers, Inc., New York
4. Interconnecting distributed resources with electric power systems, IEEE Standard 1547-2003 (2003)
5. Bayrak G, Yılmaz A (2021) Detection and classification of power quality disturbances in smart grids using artificial intelligence methods. In: Artificial intelligence (AI). CRC Press, pp 149–170
6. Tshenyego O, Samikannu R, Mtengi B (2021) Wide area monitoring, protection, and control application in islanding detection for grid integrated distributed generation: a review. Meas Control 54(5–6):585–617
7. Bayrak G (2018) Wavelet transform-based fault detection method for hydrogen energy-based distributed generators. Int J Hydrogen Energy 43(44):20293–20308
8. Bayrak G (2015) A remote islanding detection and control strategy for photovoltaic-based distributed generation systems. Energy Convers Manage 96:228–241
9. Yılmaz A, Bayrak G (2022) A new signal processing-based islanding detection method using the pyramidal algorithm with undecimated wavelet transform for distributed generators of hydrogen energy. Int J Hydrogen Energy 47(45):19821–19836
10. Hussain A, Kim CH, Admasie S (2021) An intelligent islanding detection of distribution networks with synchronous machine DG using ensemble learning and canonical methods. IET Gener Transm Distrib 15(23):3242–3255
11. Yılmaz A, Küçüker A, Bayrak G (2022) Automated classification of power quality disturbances in a SOFC&PV-based distributed generator using a hybrid machine learning method with high noise immunity. Int J Hydrogen Energy 47(45):19797–19809
12. Yılmaz A, Küçüker A, Bayrak G, Ertekin D, Shafie-Khah M, Guerrero JM (2022) An improved automated PQD classification method for distributed generators with hybrid SVM-based approach using un-decimated wavelet transform. Int J Electr Power Energy Syst 136:107763
13. Yılmaz A, Bayrak G (2019) A real-time UWT-based intelligent fault detection method for PV-based microgrids. Electric Power Syst Res 177:105984
14. Panigrahi BK, Bhuyan A, Shukla J, Ray PK, Pati S (2021) A comprehensive review on intelligent islanding detection techniques for renewable energy integrated power system. Int J Energy Res 45(10):14085–14116
15. Mishra S, Mallick RK, Gadanayak DA, Nayak P (2021) A novel hybrid downsampling and optimized random forest approach for islanding detection and non-islanding power quality events classification in distributed generation integrated system. IET Renew Power Gener 15(8):1662–1677
16. Ezzat A, Elnaghi BE, Abdelsalam AA (2021) Microgrids islanding detection using Fourier transform and machine learning algorithm. Electric Power Syst Res 196:107224
17. Sawas AM, Woon WL, Pandi VR, Shaaban MF, Zeineldin HH (2021) A multistage passive islanding detection method for synchronous-based distributed generation. IEEE Trans Industr Inf 18(3):2078–2088
18. Bayrak G, Yılmaz A (2020) Signal processing-based automated fault detection methods for smart grids. In: Smart technologies for smart cities, EAI/Springer innovations in communication and computing. Springer, Cham, pp 57–85. https://doi.org/10.1007/978-3-030-39986-3_4

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_5

Comparison of PV Power Production Estimation Methods Under Non-homogeneous Temperature Distribution for CPVT Systems
Cihan Demircan1 , Maria Vicidomini2, Francesco Calise2,
Hilmi Cenk Bayrakçı3 and Ali Keçebaş4
(1) Department of Energy Systems Engineering, Graduate School of
Natural and Applied Sciences, Süleyman Demirel University,
32260 Isparta, Turkey
(2) Department of Industrial Engineering, University of Naples Federico
II, P.le Tecchio 80, Naples, Italy
(3) Department of Mechatronics Engineering, Faculty of Technology,
Isparta University of Applied Sciences, 32260 Isparta, Turkey
(4) Department of Energy Systems Engineering, Faculty of Technology,
Muğla Sıtkı Koçman University, 48000 Muğla, Turkey

Cihan Demircan
Email: [email protected]

Abstract
The way to increase energy generation in a standard photovoltaic (PV) or
photovoltaic/thermal (PV/T) system is the tracking of the sun and/or
concentrating to increase the solar energy coming into the field. As the
radiation is increased in both concentrated PV and PV/T systems, both PV
power output and PV module temperature increase. If the PV module temperature rises and exceeds a reasonable level, the life of the solar cells is reduced and the cells are permanently damaged. The way to prevent this is to cool the PV modules; in other words, thermal energy is absorbed by integrating a thermal system. Thus, both electrical and thermal energy needs are met easily, and a concentrating photovoltaic thermal (CPVT) system produces both electricity and thermal energy from the sun. Electrical and thermal behavior analyses of CPVT systems are important for robust and accurate decisions on electrical and thermal power production. In a previous study, finite volume methods were applied for the thermal analysis of the CPVT system, and the temperature distribution of the PV modules and CPVT surfaces was obtained. In that numerical analysis, a power/temperature coefficient-based method was used for electrical power estimation. In this chapter, the power/temperature coefficient-based and five-parameter models of PV modules are presented and discussed for forecasting electrical power production. The choice of PV module temperature in the power/temperature coefficient model and the application of the temperature distribution in the diode model are discussed. Power/temperature coefficient-based power estimation methods depend on the first, middle, and end PV module temperatures. In addition, different case studies for CPVT electrical power production forecasting methods were investigated.

Keywords Solar energy – Photovoltaics – Concentrating photovoltaic thermal – Electrical modeling – Uncertainties

1 Introduction
Current traditional energy use, together with the increase in energy demand in recent years, causes global climate change. Therefore, the creation of an energy structure that poses no or little risk to the environment gains importance in terms of sustainable development and climate change [1]. Limited energy resources and the ever-increasing demand for energy encourage more efficient use of energy sources. Renewable energy sources (RES) are environmentally friendly, inexhaustible, but intermittent types of energy. The ever-increasing energy demand and the inability to store energy encourage more efficient use of the energy produced from RES. Recently, due to the decrease in initial investment costs, the use of electricity produced from the sun as a RES by society, industry, institutions, and organizations has increased [2].
A solar photovoltaic (PV) cell consists of semiconductor materials and converts sunlight into electricity. The quantity of solar radiation on the PV surface is raised in concentrating PV (CPV) cells. However, the PV module temperature increases because of the increased solar radiation in CPV cells. Therefore, excessive temperatures can permanently damage the solar cells. To prevent this damage, PV cells are actively or passively cooled. A channel through which a fluid flows is used on the PV back surface for active cooling. Thus, solar energy is converted into heat energy together with electricity. Systems operated in this way are referred to as concentrated PVT (CPVT) systems.
In CPVT systems, the solar radiation for each parabolic-trough node has a uniform value, and the PV modules are successively cooled by the fluid as it flows through the parabolic trough. On the other hand, as the fluid flows toward the end of the parabolic trough, both the cell temperatures and the fluid temperature rise, so each PV module temperature is higher than that of the previous node. For this reason, an inhomogeneous temperature distribution occurs in the PV modules, and the current and voltage values of the PV modules vary. These mismatches create power losses in the PV. In this chapter, the uncertainty of the amount of electricity from the CPVT system is evaluated. Uncertainty analyses in output power estimation caused by the nonlinear behavior of solar cells and environmental factors in the literature are presented below. In addition, literature studies about CPVT systems are mentioned.
The performance of PV systems varies according to material properties,
operating conditions, and environmental conditions (temperature, solar
radiation, wind speed, etc.). Mallick and Eames [3] evaluated the electrical
performance analysis of the low-concentrated PV system. Current–voltage
(I–V) and power-voltage (P–V) curves were used for electrical performance
analysis. According to the findings, weak optical coating between the unit
concentrator and the PV module causes more than one MPP. Maka and
O’Donovan [4] performed dynamic performance analysis with thermal and
electrical models for a triple junction solar cell-based CPV module. It has
been observed that the annual change in cell temperature above 80 ℃
covers 13% in the summer season. In addition, it has been emphasized that
one of the causes of current mismatching in the triple junction solar cells is
spectral variation. Durusoy et al. [5] determined the correction factor as
0.33 for the calculation of solar radiation incident on the back surface of the
bifacial PV module. The annual PV efficiency calculation error was 1.4%
after the correlation methods. Metlek et al. [6] estimated the effect of
temperature on electrical power in the natural zeolite PVT system using a
long short-term memory algorithm. It has been seen that the proposed algorithm achieved accurate predictions with very small errors.
Navabi et al. [7] presented work for accurate estimation of output energy
using PV module equivalent circuit models. The proposed modeling in the
study for the planning studies of PV systems was compared with the system
supervisor model and RETScreen software. According to the monthly
analyses, the average error of the developed model was below 5%, while
the other techniques were found to be above 10%. Carullo and Vallan [8]
analyzed the long-term performance of PV power plants. According to the
power plant data and their calculations, the largest uncertainty is seen in the average PV efficiency, at 1.3%, while the plant with CIGS thin-film PV modules is more efficient than the others. Makrides et al. [9] presented a
study on the errors and uncertainties in estimating the annual energy yield
of different types of PV modules. They reported that the accuracy is increased by the correlation of the temperature coefficient in the single-point efficiency model. In addition, it was observed in the study that the results of the single diode model better match the real data. Dubard et al. [10]
investigated the uncertainties in PV performance measurement traceability.
According to the obtained results, the uncertainty varies between 2.5 and 10% in the PV production line; in practice it is between 3% and 5%, while it is as low as 2% for the crystalline silicon reference modules. They emphasized that the developed uncertainty is a key factor for the PV market
and has a significant impact on the economy and the environment.
Dirnberger et al. [11] presented a study on the performance ratios and
uncertainties of eight different PV module types. In the analysis with STC
power uncertainty, it was observed that the uncertainties changed between
1.8 and 3.0%. Roberts et al. [12] presented system models and analyses of
the process from global solar radiation to alternating current output for PV
performance evaluation. In another study [13], literature reviews on the
correct estimation of the maximum power point for quality assurance of
large-scale PV plants were evaluated and compared with experimental data.
In the study evaluation for four different PV technologies, it was stated that
CdTe and CIGS thin film technologies are similar to the technologies with
crystalline silicon. In terms of amorphous/microcrystalline PV technologies,
it is declared that the seasonal variation is 3.5% of the STC power. It has
been emphasized that the uncertainties about the models for such a situation
are great. Zhang et al. [14] performed the analysis of parameter uncertainty
on the reliability and performance of PV cells with the quasi-Monte Carlo
method. In the study based on the single diode model, small series internal
resistance and large parallel internal resistance were reported to increase the amount of power generation. Bharadwaj and John [15] proposed a sub-
cell model for PV modules with hotspots. In the proposed model, the
shading cross-section and PV equivalent circuit parameters are correlated.
According to the tests performed in the shading conditions, the proposed
model has proven to be useful with an output prediction accuracy of 93%.
Chin and Salam [16] developed the three-point approach technique for PV
parameter extraction. According to the obtained results, the standard
deviation of the proposed method is lower, and it is superior to other
methods in parameter extraction. Li et al. [17] reported that I–V
characteristic curve-based methods are used more frequently in PV
parameter extraction and maximum power point estimation. In this chapter,
power/temperature coefficient-based and five-parameter models of PV
modules were presented and discussed for forecasting of electrical power
production. Decided to PV module temperature in power/temperature
coefficient model and temperature distribution applications on diode model
were discussed. Power/temperature-based power estimation methods are
depending on first, medium, and end PV module temperature. In the
literature studies about CPVT systems were presented as follows.
Ben Youssef et al. [18] developed a two-dimensional numerical model for electrical and thermal performance analysis of a triple-junction CPVT system. Steady-state and transient thermal modeling and electrical performance evaluations, based on current–voltage curves, of a CPVT system with north–south solar tracking were carried out by Wang et al. [19]. According to the comparison with experimental studies, the thermal model in the transient analysis was seen to give more realistic results. Bernardo et al. [20] implemented and simulated a parabolic-trough CPVT system with a triangular receiver. Various other studies have been carried out on CPVT and PVT systems, including zero-dimensional energy balance equations by Calise and Vanoli [21], finite volume methods [22], a high-temperature solar tri-generation system [23], a desiccant-based air handling unit [24], CPVT-assisted heating and cooling [25], thermodynamic performance evaluation [26], thermal modeling and parametric analysis [27], optical modeling and optimization [28], CPVT-based air heating and thermal energy storage [29], the effect of different absorbers on performance [30], absorption-thermoelectric cooling [31], optical design [32], and electro-thermal analysis [33]. Demircan et al. [2] investigated the electrical connection of PV strings in CPVT systems. Afzali Gorouh et al. [34] designed a low-concentration CPVT system with an A-shaped PV array; the system was evaluated with zero-dimensional thermal modeling, optical analysis using Monte Carlo ray-tracing software, and experimental studies.
In summary, this chapter presents and discusses power/temperature coefficient-based and five-parameter models of PV modules for forecasting electrical power production, together with the choice of the PV module temperature in the power/temperature coefficient model and the application of the temperature distribution to the diode model. In addition, different case studies of CPVT electrical power production estimation methods are investigated.

2 System, Modeling, and Evaluation


In this section, the two-string, triangular-receiver CPVT system is described.
The finite volume method for the analysis of the temperature distribution of
the triangular receiver CPVT system is briefly summarized. Furthermore,
mathematical modeling for PV modules in CPVT systems is introduced.
Information on the comparison of power generation forecasts is presented.

2.1 Definition of the CPVT System


The schematic diagram of the parabolic-trough CPVT system is shown in
Fig. 1. As seen in the figure, the system consists of a parabolic-trough
concentrator, a triangular receiver (trough) placed at the focus of the
concentrator, a fluid channel within the receiver, and PV modules positioned
on the receiver. The mirror of the concentrator reflects the incoming
sunlight onto the triangular prismatic receiver. One surface of the receiver
faces perpendicular to the incoming sunlight, while the other two surfaces
correspond to the two reflection zones of the concentrator; sunlight from
these zones is reflected onto the PV modules mounted on them. No PV module
is added to the receiver surface perpendicular to the sun when that surface
is intended for thermal use. As electricity is produced in the PV modules,
both the PV module and receiver temperatures increase, and a fluid channel
is therefore placed inside the triangular receiver to reduce the temperature.

Fig. 1 Parabolic-trough-based CPVT system diagram

2.2 Mathematical Modeling of PV


Mathematical modeling is necessary to describe the behavior of PV modules
under given operating conditions. Single diode, double diode, triple diode,
and multiple diode models have been proposed to represent the nonlinear
behavior of solar PV modules. The most frequently used and preferred among
these is the single diode model (SDM). This model is formulated with four or
five parameters and takes the series and parallel resistances into account.
Compared with the two-diode model, the SDM is simpler. The equivalent
circuit of the PV module based on five parameters is given in Fig. 2.
Fig. 2 PV equivalent circuit for five parameters
In this section, the five-parameter single diode model of the PV modules of
the CPV system is discussed. For the four- and five-parameter mathematical
models of such a PV module, the relationship between current and voltage is
given by Eqs. (1) and (2), respectively [35].

(1)

(2)

where Io, Ipv, and Id denote the load, PV, and diode currents, respectively.
In addition, q, k, and n are the electron charge, the Boltzmann constant,
and the diode ideality factor, respectively. Rs and Rp represent the series
and parallel resistances.
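As an illustration, the five-parameter single diode relation can be evaluated numerically. The following Python sketch assumes the standard implicit form I = Ipv - Io[exp((V + I*Rs)/Vt) - 1] - (V + I*Rs)/Rp and solves it by fixed-point iteration; the ideality factor, resistances, and number of series cells are illustrative assumptions, and this is not the MATLAB/Simulink model used later in the chapter.

import numpy as np

# Illustrative five-parameter single diode sketch (assumed values, not a fitted TPS105S-5W model)
q, k = 1.602e-19, 1.381e-23      # electron charge (C) and Boltzmann constant (J/K)
n, n_cells = 1.3, 36             # diode ideality factor and number of series cells (assumed)
T = 298.15                       # module temperature (K), i.e. 25 degC
Vt = n * n_cells * k * T / q     # modified thermal voltage of the whole module (V)
Isc, Voc = 0.32, 21.5            # datasheet short-circuit current (A) and open-circuit voltage (V)
Rs, Rp = 1.0, 800.0              # series and parallel resistances (ohm), assumed
Ipv = Isc                        # photo-generated current approximated by Isc
Io = (Ipv - Voc / Rp) / (np.exp(Voc / Vt) - 1.0)   # saturation current chosen to reproduce Voc

def module_current(V, iters=100):
    """Fixed-point solution of I = Ipv - Io*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rp."""
    I = 0.0
    for _ in range(iters):
        I = Ipv - Io * (np.exp((V + I * Rs) / Vt) - 1.0) - (V + I * Rs) / Rp
    return max(I, 0.0)

V = np.linspace(0.0, Voc, 200)                    # voltage sweep
P = np.array([v * module_current(v) for v in V])  # power along the I-V curve
print(f"Estimated maximum power on the sweep: {P.max():.2f} W")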
Under short-circuit and open-circuit operating conditions, the
current–voltage relations can be written as follows [35]:

(3)

(4)

where Isc and Voc denote the short-circuit current and open-circuit voltage,
respectively. The estimated Isc and Voc values according to the PV module
temperature are as follows, respectively:
(5)

(6)
where λ, β, and Tpv denote the current–temperature coefficient, the voltage-
temperature coefficient, and the PV module temperature, respectively.
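As a small worked example, the linear temperature corrections represented by Eqs. (5) and (6) can be evaluated with the Table 1 coefficients. The Python sketch below assumes the usual datasheet convention of a linear dependence on (Tpv - 25 °C) and is only illustrative.

# Linear temperature correction of Isc and Voc using the Table 1 coefficients (assumed linear form)
Isc_stc, Voc_stc = 0.32, 21.5            # short-circuit current (A) and open-circuit voltage (V) at STC
Imp_stc, Vmp_stc = 0.29, 17.5            # maximum power point current (A) and voltage (V) at STC
lam, beta = 0.05 / 100.0, -0.32 / 100.0  # current and voltage temperature coefficients (1/K)

def isc_voc(T_pv, T_ref=25.0):
    """Estimated Isc and Voc at a PV module temperature T_pv (degC)."""
    dT = T_pv - T_ref
    return Isc_stc * (1.0 + lam * dT), Voc_stc * (1.0 + beta * dT)

print(f"STC maximum power: {Imp_stc * Vmp_stc:.2f} W")    # about 5.07 W, consistent with Fig. 3
for T in (25.0, 50.0, 75.0):
    isc, voc = isc_voc(T)
    print(f"Tpv = {T:4.1f} degC -> Isc = {isc:.3f} A, Voc = {voc:.2f} V")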
In the CPVT system, TPS105S-5W PV modules are considered. The parameters of
the PV modules used are given in Table 1 [2, 33, 37]. Their mathematical
modeling is performed in the MATLAB/Simulink program [36] using the
five-parameter equivalent circuit of the PV modules.
Table 1 The datasheet values of the TPS105S-5W PV module [37]

Parameter | Value | Parameter | Value
Isc (A) | 0.32 | Voc (V) | 21.5
Imp (A) | 0.29 | Vmp (V) | 17.5
λ (%/K) | 0.05 | β (%/K) | -0.32
Sizes (cm) | 19.3 × 23.3 | Weight (kg) | 0.54
One of the important factors affecting the energy performance of
photovoltaic energy conversion systems is the environmental conditions. The
main environmental parameters affecting performance are solar radiation,
ambient temperature, and wind speed. The efficiency of PV modules under
given environmental conditions is expressed as follows:
(7)
where ηref is the reference efficiency, calculated from the power value under
standard test conditions (1000 W/m2, 25 °C), and γ is the power/temperature
coefficient of the PV module. This coefficient is taken as 0.45%/K for
crystalline PV modules and 0.25%/K for amorphous PV modules [38]. TPV denotes
the PV module temperature. The power that the PV module can produce at a
given concentration ratio (C), solar radiation (G), and module temperature
TPV is then given by
(8)
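A minimal sketch of this power/temperature coefficient estimate is given below. The linear efficiency form and the product with C, G, and the module area A are assumptions consistent with the description above; the module area is taken from the Table 1 dimensions and the reference efficiency from the STC maximum power, so the numbers are only illustrative.

# Power/temperature coefficient power estimate (assumed form eta_ref*(1 - gamma*(Tpv - 25))*C*G*A)
A_module = 0.193 * 0.233               # module area from the Table 1 dimensions (m^2)
P_stc, G_stc = 5.07, 1000.0            # STC maximum power (W) and STC irradiance (W/m^2)
eta_ref = P_stc / (G_stc * A_module)   # reference efficiency at 25 degC
gamma = 0.45 / 100.0                   # power/temperature coefficient of crystalline modules [38] (1/K)

def module_power(G, C, T_pv, T_ref=25.0):
    """Electrical power of one module for irradiance G, concentration ratio C and temperature T_pv."""
    eta = eta_ref * (1.0 - gamma * (T_pv - T_ref))
    return eta * C * G * A_module

# Illustrative evaluation with operating conditions quoted later in the chapter (800 W/m^2, C = 2.61)
for T in (40.0, 60.0, 80.0):
    print(f"Tpv = {T:4.1f} degC -> P = {module_power(800.0, 2.61, T):.2f} W per module")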
In this chapter, the temperature coefficient-based power production per PV
module in the CPVT system and the total power production of the CPVT system
obtained with the FVM are compared with the five-parameter single diode model
(SDM) based estimation of PV power production. For the SDM power estimation,
the temperature distribution is applied to the mathematical model of the PV
strings. In addition, the electrical power uncertainty of the CPVT system is
evaluated.
In the FVM, the length of the CPVT system is divided into nodes. When the
fluid outlet temperature of a node is calculated, it is taken as the outlet
temperature of that node and as the inlet temperature of the next node. At
the next node, the energy balance equations are solved again using the
temperatures obtained from the previous nodes. Thus, the numerical analysis
of the CPVT system is built up from the solutions obtained at each node. Five
energy balance equations of the CPVT system are taken into account, written
respectively for the upper surface–PVT, the fluid channel–metallic surface,
the PVT–metallic surface, the upper surface of the triangular
receiver–substrate, and the parabolic-trough concentrator.
The finite volume method is based on the energy balance equations written
between the triangular receiver and the parabolic-trough concentrator
[21, 22, 33], and the least squares method is used to solve these energy
balance equations following the node-marching procedure described above. The
numerical modeling is carried out in the MATLAB program [36], with COOLPROP
[39] used for the thermo-physical properties of the refrigerant fluid and
air; in this way, the thermal analysis of the CPVT system is performed.
Regarding the environmental operating conditions of the CPVT system, the
ambient temperature is 25 °C, the wind speed is 2 m/s, the direct radiation
is 800 W/m2, and the total radiation is 1000 W/m2. The concentration ratio of
the system is about 2.61. In this chapter, R134a is considered as the working
fluid in order to examine the effects of the non-homogeneous temperature
distribution on electrical power production. The obtained results are
presented and discussed in the next section, where the electrical power
production uncertainties are also evaluated.
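The node-marching logic described above can be outlined in a few lines. The Python sketch below is a minimal illustration in which a single placeholder balance stands in for the five coupled energy balance equations solved (by least squares) in the actual model; the number of nodes, mass flow rate, specific heat, and absorbed heat per node are assumed values.

# Minimal node-marching sketch of the FVM loop (placeholder balance, not the full CPVT model)
N_NODES = 20                  # number of nodes along the receiver length (assumed)
M_DOT, CP = 0.02, 1300.0      # refrigerant mass flow rate (kg/s) and specific heat (J/(kg K)), assumed
Q_NODE = 60.0                 # heat absorbed by the fluid in one node (W), placeholder value

def solve_node(T_in):
    """Placeholder for the coupled node energy balances; returns (fluid outlet T, PV surface T)."""
    T_out = T_in + Q_NODE / (M_DOT * CP)   # fluid heating across the node
    T_pv = T_out + 25.0                    # crude offset standing in for the PV surface balance
    return T_out, T_pv

T_fluid = 50.0                             # fluid inlet temperature of the first node (degC)
for node in range(N_NODES):
    T_fluid, T_pv = solve_node(T_fluid)    # outlet of this node is the inlet of the next one
    print(f"node {node:2d}: T_fluid = {T_fluid:6.2f} degC, T_pv = {T_pv:6.2f} degC")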

3 Results and Discussion


In the CPVT system, two PV strings are used for electricity production. A
fluid channel with a refrigerant fluid is used to recover the thermal energy
generated in the PV modules, so that both the electrical and the thermal
parts of the incoming solar energy are utilized. The parabolic concentrator
reflects the incoming solar radiation onto the PV cells, increasing the
radiation they receive; thus, both the electricity and the thermal energy
produced are increased.
The datasheet values of the PV module (TPS105S-5W) used in the PV arrays on
the trough of the CPVT system are listed in Table 1. The mathematical
modeling results based on the diode model under standard test conditions
(STC) (1000 W/m2, 25 °C) and under different radiation conditions are shown
in Fig. 3. As seen in Fig. 3, the output power (5.07 W) reaches its maximum
value when the module voltage is 17.5 V for 1 kW/m2; beyond this point, the
output power starts to decrease.

Fig. 3 Voltage, current, and power curves of PV module for various solar radiation

The I–V and P–V curves of the PV module for different module temperatures at
1000 W/m2 are plotted in Fig. 4. As observed in Fig. 4, the output power
drops rapidly as the open-circuit voltage is approached, due to the
non-linear behavior. An increase in module temperature slightly increases the
Isc value while causing a larger decrease in the Voc voltage, so PV power
generation is also reduced [2].

Fig. 4 Voltage, current, and power curves of PV module for different module temperature
As stated above, the temperature coefficient-based power production per PV
module and the total power production of the CPVT system obtained with the
FVM are compared with the five-parameter SDM-based estimation of PV power
production, where the temperature distribution is applied to the mathematical
model of the PV strings, and the electrical power uncertainty of the CPVT
system is evaluated. The obtained results are presented as follows.
The thermal analysis results of the CPVT system obtained with the FVM for
different fluid inlet temperatures are given in Fig. 5. As can be seen in
this figure, the PV temperature (Tcpvt) at each node increases as the fluid
temperature increases. Moreover, a non-homogeneous temperature distribution
is characteristic of the CPVT system.

Fig. 5 Temperature distribution of CPVT system


The power production of the PV module at each node is given in Fig. 6. Power
production decreases from node to node due to the rising fluid temperature.
For a fluid inlet temperature of 50 °C, the power production per module is
8.35 W at the first node and decreases to 6.52 W at the end of the CPVT
system. Moreover, Tcpvt exceeds 110 °C when the fluid inlet temperature
increases, and the power production decreases compared with the lower inlet
temperatures.

Fig. 6 Electric power production per module at each node in CPVT system
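As a simple arithmetic illustration (not part of the original analysis), the relative drop of the per-module power along the receiver for the 50 °C inlet case can be computed from the two values quoted above:

# Relative drop of per-module power along the receiver for the 50 degC inlet case (values from the text)
p_first, p_last = 8.35, 6.52                       # W at the first and last nodes
drop = 100.0 * (p_first - p_last) / p_first
print(f"Per-module power drops by {p_first - p_last:.2f} W ({drop:.1f} %) along the receiver")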
The main goal of this chapter is the power uncertainty of the CPVT system.
The non-homogeneous temperature distribution described above is caused by the
fluid circulation in the fluid channel of the triangular receiver of the
parabolic-trough CPVT system. The single diode model results under this
non-homogeneous temperature distribution and the numerical results of the
system are given below, and the power uncertainty of the CPVT system under
the non-homogeneous temperature distribution is evaluated.
In the finite volume method, the temperature coefficient-based efficiency and
power estimation of Eq. (8) is used. Power estimation results for the FVM and
SDM methods are presented in Table 2 for different fluid inlet temperatures
(Tin). Moreover, the current–voltage and power–voltage variations under
non-homogeneous temperature gradients are presented in Fig. 7. As can be seen
from the results of the two different methods, the power estimates are close
to each other in terms of the maximum power point of the PV modules. When Tin
is 30 °C, the power difference of the CPVT system is approximately 14 W, and
it decreases to 4.4 W for the other Tin values. On the other hand, the
operating voltage at the maximum power point decreases as the fluid
temperature increases. As a result, high fluid inlet temperatures decrease
the power production and can damage the PV cells. Therefore, a low inlet
temperature for the R134a fluid should be preferred for higher electricity
production and for the physical protection of the PV cells. In this way,
solar energy can be used efficiently over a long lifetime.
Table 2 Estimated total electricity production of the system for FVM and SDM methods

Pel / Tin | 30 °C | 40 °C | 50 °C
Pel,FVM (W) | 1486.72 | 1411.47 | 1336.24
Pel,SDM (W) | 1470.71 | 1405.89 | 1340.75


Fig. 7 Voltage, current, and power characteristics of the CPVT system under non-homogeneous
temperature distribution
As a result, the results obtained with the two estimation methods are in good
agreement with each other, and the power differences are very small. The
power uncertainty of the CPVT system can be considered negligible according
to the two different methods when the PV modules operate at the maximum power
point. Both methods are therefore reliable for the performance analysis of
the CPVT system.
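This conclusion can also be checked from the totals in Table 2: the relative deviation between the two estimates remains around 1% or below. The short post-processing sketch below (not part of the original analysis) computes it:

# Relative deviation between the FVM and SDM total power estimates (values taken from Table 2)
table2 = {30: (1486.72, 1470.71), 40: (1411.47, 1405.89), 50: (1336.24, 1340.75)}
for t_in, (p_fvm, p_sdm) in sorted(table2.items()):
    dev = 100.0 * abs(p_fvm - p_sdm) / p_fvm
    print(f"Tin = {t_in} degC: |Pel,FVM - Pel,SDM| = {abs(p_fvm - p_sdm):6.2f} W ({dev:.2f} %)")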

4 Conclusions
In this chapter, the temperature coefficient-based power estimation method
and the maximum power point obtained from the single diode modeling of the
CPVT strings are compared for power uncertainty analysis. The maximum power
difference, about 14 W, was obtained for the 30 °C fluid inlet temperature;
it is smaller than 5 W for the other fluid temperatures when maximum power
point tracking is used in the CPVT system. As a result, the power uncertainty
of the CPVT system can be considered negligible according to the FVM and SDM
methods. On the other hand, when the fluid inlet temperature increases, the
PV module temperatures reach up to 110 °C. This operating temperature should
be chosen and designed carefully when R134a refrigerant is used. In addition,
operating the CPVT strings at the maximum power point is useful for efficient
solar energy utilization.

References
1. Soyhan HS (2009) Sustainable energy production and consumption in Turkey: a review. Renew
Sustain Energy Rev 13:1350–1360. https://​doi.​org/​10.​1016/​j.​rser.​2008.​09.​002
2.
Demircan C, Keçebaş A, Bayrakçı HC (2020) Artificial bee colony-based GMPPT for non-
homogeneous operating conditions in a bifacial CPVT system. In: Eltamaly A, Abdelaziz A (eds)
Modern maximum power point tracking techniques for photovoltaic energy systems. Green
Energy and Technology, Springer, pp 331–353. https://​doi.​org/​10.​1007/​978-3-030-05578-3_​12
3.
Mallick TK, Eames PC (2008) Electrical performance evaluation of low-concentrating non-
imaging photovoltaic concentrator. Prog Photovoltaics Res Appl 16:389–398. https://​doi.​org/​10.​
1002/​pip.​819
4.
Maka AOM, O’Donovan TS (2021) Dynamic performance analysis of solar concentrating
photovoltaic receiver by coupling of weather data with the thermal-electrical model. Thermal Sci
Eng Progress 24:100923. https://​doi.​org/​10.​1016/​j.​tsep.​2021.​100923
5.
Durusoy B, Ozden T, Akinoglu BG (2020) Solar irradiation on the rear surface of bifacial solar
modules: a modeling approach. Sci Rep 10:13300. https://​doi.​org/​10.​1038/​s41598-020-70235-3
6.
Metlek S, Kandilli C, Kayaalp K (2022) Prediction of the effect of temperature on electric power
in photovoltaic thermal systems based on natural zeolite plates. Int J Energy Res 46:6370–6382.
https://​doi.​org/​10.​1002/​er.​7575
7.
Navabi R, Abedi S, Hosseinian SH, Pal R (2015) On the fast convergence modeling and accurate
calculation of PV output energy for operation and planning studies. Energy Convers Manage
89:497–506. https://​doi.​org/​10.​1016/​j.​enconman.​2014.​09.​070
8.
Carullo A, Vallan A (2012) Outdoor experimental laboratory for long-term estimation of
photovoltaic-plant performance. IEEE Trans Instrum Meas 61:1307–1314. https://​doi.​org/​10.​
1109/​TIM.​2011.​2180972
9.
Makrides G, Zinsser B, Schubert M, Georghiou GE (2013) Energy yield prediction errors and
uncertainties of different photovoltaic models. Prog Photovoltaics Res Appl 21:500–516. https://​
doi.​org/​10.​1002/​pip.​1218
10.
Dubard J, Filtz J-R, Cassagne V, Legrain P (2014) Photovoltaic module performance
measurements traceability: uncertainties survey. Measurement 51:451–456. https://​doi.​org/​10.​
1016/​j.​measurement.​2014.​02.​025
11.
Dirnberger D, Müller B, Reise C (2015) PV module energy rating: opportunities and limitations.
Prog Photovoltaics Res Appl 23:1754–1770. https://​doi.​org/​10.​1002/​pip.​2618
12.
Roberts JJ, Mendiburu Zevallos AA, Cassula AM (2017) Assessment of photovoltaic
performance models for system simulation. Renew Sustain Energy Rev 72:1104–1123. https://​
doi.​org/​10.​1016/​j.​rser.​2016.​10.​022
13.
de la Parra I, Munoz M, Lorenzo E, Garcia M, Marcos J, Martinez-Moreno F (2017) PV
performance modelling: a review in the light of quality assurance for large PV plants. Renew
Sustain Energy Rev 78:780–797. https://​doi.​org/​10.​1016/​j.​rser.​2017.​04.​080
14.
Zhang F, Wu M, Hou X, Han C, Wang X, Liu Z (2021) The analysis of parameter uncertainty on
performance and reliability of photovoltaic cells. J Power Sour 507(202):230265. https://​doi.​org/​
10.​1016/​j.​jpowsour.​2021.​230265
15.
Bharadwaj P, John V (2019) Subcell modeling of partially shaded photovoltaic modules. IEEE
Trans Ind Appl 55:3046–3054. https://​doi.​org/​10.​1109/​TIA.​2019.​2899813
16.
Chin VJ, Salam Z (2019) A new three-point-based approach for the parameter extraction of
photovoltaic cells. Appl Energy 237:519–533. https://​doi.​org/​10.​1016/​j.​apenergy.​2019.​01.​009
17.
Li S, Gong W, Gu Q (2021) A comprehensive survey on meta-heuristic algorithms for parameter
extraction of photovoltaic models. Renew Sustain Energy Rev 141:110828. https://​doi.​org/​10.​
1016/​j.​rser.​2021.​110828
18.
Ben Youssef W, Maatallah T, Menezo C, Ben Nasrallah S (2018) Modeling and optimization of a
solar system based on concentrating photovoltaic/thermal collector, Solar Energy, 170 (2018)
301–313, https://​doi.​org/​10.​1016/​j.​solener.​2018.​05.​057.
19.
Wang Z, Wei J, Zhang G, Xie H, Khalid M (2019) Design and performance study on a large-
scale hybrid CPV/T system based on unsteady-state thermal model. Sol Energy 17:427–439.
https://​doi.​org/​10.​1016/​j.​solener.​2018.​11.​043
20.
Bernardo LR, Perers B, Hakansson H, Karlsson B (2011) Performance evaluation of low
concentrating photovoltaic/thermal systems: a case study from Sweden. Sol Energy 85:1499–
1510. https://​doi.​org/​10.​1016/​j.​solener.​2011.​04.​006
21.
Calise F, Vanoli L (2012) Parabolic trough photovoltaic/thermal collectors: design and
simulation model. Energies 5:4186–4208. https://​doi.​org/​10.​3390/​en5104186
22.
Calise F, Palombo A, Vanoli L (2012) A finite-volume model of a parabolic trough
photovoltaic/thermal collector: energetic and exergetic analyses. Energy 46:283–294. https://​doi.​
org/​10.​1016/​j.​energy.​2012.​08.​021
23.
Calise F, Dentice d’Accadia M, Palombo A, Vanoli L (2013) Dynamic simulation of a novel
high-temperature solar trigeneration system based on concentrating photovoltaic/thermal
collectors. Energy 61:72–86. https://​doi.​org/​10.​1016/​j.​energy.​2012.​10.​008
24.
Calise F, Dentice d’Accadia M, Roselli C, Sasso M, Tarielli F (2014) Desiccant-based AHU
interacting with a CPVT collector: simulation of energy and environmental performance. Sol
Energy 103:574–594. https://​doi.​org/​10.​1016/​j.​solener.​2013.​11.​001
25.
Buonomano A, Calise F, Palombo A (2018) Solar heating and cooling systems by absorption and
adsorption chillers driven by stationary and concentrating photovoltaic/thermal solar collectors:
modelling and simulation. Renew Sustain Energy Rev 82:1874–1908. https://​doi.​org/​10.​1016/​j.​
rser.​2017.​10.​059
26.
Valizadeh M, Sarhaddi F, Adeli M (2019) Exergy performance assessment of a linear parabolic
trough photovoltaic thermal collector. Renew Energy 138:1028–1041. https://​doi.​org/​10.​1016/​j.​
renene.​2019.​02.​039
27.
Herez A, El Hage H, Lemenand T, Ramadan M, Khaled M (2021) Parabolic trough
photovoltaic/thermal hybrid system: thermal modeling and parametric analysis. Renew Energy
165:224–236. https://​doi.​org/​10.​1016/​j.​renene.​2020.​11.​009
28.
Alayi R, Kasaeian A, Atabi F (2020) Optical modeling and optimization of parabolic trough
concentration photovoltaic/thermal system. Environ Prog Sustain Energy 39:e13303. https://​doi.​
org/​10.​1002/​ep.​13303
29.
Ceylan I, Gürel AE, Ergün A, Ali IHG, Ağbulut Ü, Yıldız G (2021) A detailed analysis of CPV/T
solar air heater system with thermal energy storage: a novel winter season application. J Build
Eng 42:103097. https://​doi.​org/​10.​1016/​j.​jobe.​2021.​103097
30.
Dağ Hİ, Koçar G (2021) Experimental investigation on performance parameters affecting the
efficiency of water type PV/thermal collectors with modified absorber configurations. J Polytech
24:915–931.https://​doi.​org/​10.​2339/​politeknik.​724033
31.
Al-Nimr MA, Mugdadi B (2020) A hybrid absorption/thermo-electric cooling system driven by a
concentrated photovoltaic/thermal unit. Sustain Energy Technol Assess 40:100769. https://​doi.​
org/​10.​1016/​j.​seta.​2020.​100769
32.
Liang S, Zheng H, Liu S, Ma X (2022) Optical design and validation of a solar concentrating
photovoltaic thermal (CPV-T) module for building louvers. Energy 239:122256. https://​doi.​org/​
10.​1016/​j.​energy.​2021.​122256
33.
Demircan C (2022) Performance investigation of photovoltaic-thermoelectric (PV-TE) hybrid
power generation systems. Department of Energy Systems Engineering, Graduate School of
Natural and Applied Sciences, Süleyman Demirel University, Ph.D. Thesis (In Turkish), Isparta,
Turkey, p 95
34.
Afzali Gorouh H, Salmanzadeh M, Nasseriyan P, Hayati A, Cabral D, Gomes J, Karlsson B
(2022) Thermal modelling and experimental evaluation of a novel concentrating photovoltaic
thermal collector (CPVT) with parabolic concentrator. Renew Energy 181:535–553. https://​doi.​
org/​10.​1016/​j.​renene.​2021.​09.​042
35.
de Soto W, Klein SA, Beckman WA (2006) Improvement and validation of a model for
photovoltaic array performance. Sol Energy 80:78–88. https://​doi.​org/​10.​1016/​j.​solener.​2005.​06.​
010
36.
Mathwork, Matlab. https://​www.​mathworks.​com/​. Accessed 11 Aug 2018
37.
TOPRAY SOLAR. www.​topraysolar.​com. Accessed 11 Aug 2018
38.
Kalogirou S, Tripanagnostopoulos Y (2007) Hybrid PV/T solar systems for domestic hot water
and electricity production. Energy Convers Manage 47:3368–3382. https://​doi.​org/​10.​1016/​j.​
enconman.​2006.​01.​012
39.
Bell IH, Wronski J, Quoilin S, Lemort V (2014) Pure and pseudo-pure fluid thermophysical
property evaluation and the open-source thermophysical property library coolprop. Ind Eng
Chem Res 53:2498–2508. https://​doi.​org/​10.​1021/​ie4033999

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_6

Renewable Energy Predictions: Worldwide Research Trends and Future Perspective
Esther Salmerón-Manzano1 , Alfredo Alcayde2 and
Francisco Manzano-Agugliaro2
(1) Faculty of Law, Universidad Internacional de La Rioja (UNIR),
26006 Logroño, Spain
(2) Department of Engineering, Escuela Superior de Ingeniería, University
of Almeria, 04120 Almeria, Spain

Esther Salmerón-Manzano
Email: [email protected]

Alfredo Alcayde
Email: [email protected]

Francisco Manzano-Agugliaro (Corresponding author)


Email: [email protected]

Abstract
The objective of this chapter is to have a global perspective of the research
related to renewable energy predictions and thus determine the worldwide
research trends in this field. For this purpose, all the publications indexed in
the Scopus database with these terms in the title, abstract or keywords were
studied, obtaining more than 10,000 records. The subject categories were
analyzed, and the most important ones were engineering and energy.
Regarding the trend in the number of publications, two periods have been
detected, from 1976 to 2007 and from 2008 to the present with a growing
interest. Regarding countries, it has been observed that this ranking is led
by the United States, followed by China, and in third place by the United
Kingdom. The main institutions with more than 100 publications were:
North China Electric Power University (China), National Renewable
Energy Laboratory (USA), Ministry of Education (China), Technical
University of Denmark (Denmark), and Tsinghua University (China). The
study of key words made it possible to detect the main clusters that have
been considered significant and are the ones that set the research trends in
this field. Three clearly differentiated clusters have been found. The first
one focused on the search for alternative renewable energies, in its
beginnings mainly with the use of biomass. The second one is more focused
on electric power transmission network, and the third one is focused on
wind energy and its forecasting, where modern computational and
mathematical techniques are being used.

Keywords Renewable energy resources – Renewable energies – Renewable energy – Forecasting – Wind power – Renewable energy source – Solar energy – Electric power transmission networks

1 Introduction
From the beginnings of mankind, its development has been characterized by the
use of alternative forms of energy according to its needs and availability
[32]. Energy resources were always based on renewable energies in the form of
biomass, wind, water, or sun. They were used as a source of fuel, or as
mechanical energy for the beginnings of industry, such as water mills or
windmills [33].
In this sense, a process is renewable if it does not alter the thermal
balance of the planet, does not generate irrecoverable waste, and has a rate
of consumption that does not exceed the rate of recovery of the energy source
and of the raw materials used [4].
Nowadays, the main renewable energy resources are solar energy, wind energy,
hydropower, biomass, biogas, ocean energy, and geothermal energy. The search
for a balance between supply and demand is particularly relevant because it
is not feasible to store energy [23]. In addition, in the energy sector this
information becomes even more important due to the degree of uncertainty
involved in predicting highly volatile factors, such as natural or
meteorological phenomena, as well as other variables that have a major impact
on markets, such as legislative or socioeconomic changes [2, 3].
Electricity demand is continuously increasing, and so is the need to use
renewable resources to supply this demand [24]. Renewable energy sources add
complexity to the planning and management of energy demand, given the high
volatility of their generation from non-storable resources such as wind or
solar radiation.
Thus, knowing the availability of the renewable resource is nowadays
essential to maintain a given energy level at the lowest possible cost [18].
Generation forecasting for renewable energy sources therefore provides energy
managers with a high-performance tool that helps them manage their renewable
resources optimally [22], and it further enables energy pricing [60].
Consequently, predicting demand or the behavior of the electricity price in
order to make the best decision on selling or buying energy is currently a
need for any company in the sector, whether it is a producer, distributor,
marketer, or system operator.
In this chapter, to identify the current directions and trends of research in
the field of renewable energy prediction, a bibliometric study of the
publications indexed in the Scopus scientific database is conducted, an
approach that has proven useful for similar studies [39].

2 Data
The data in this chapter were extracted from the Scopus database; the exact
search query was: TITLE-ABS-KEY (“Renewable Energ*”) AND (TITLE-
ABS-KEY (prediction*) OR TITLE-ABS-KEY (forecast*)).
For this search, more than 10,000 results were obtained, spanning from 1976
to 2021, the last year considered in order to ensure the completeness of the
data. Of these scientific documents, 58% were articles in scientific journals
(of which 4% were review papers), 39% were conference papers, and only 3%
were books or book chapters. The low percentage of books and book chapters
indicates that it is a scientific topic that is still rising and making
scientific progress [55, 56]. The newest technologies start in scientific
meetings and, when they reach a certain maturity, they are published as
journal articles and later in specialized books.
Figure 1 shows the evolution of these publications, in which two periods can
be distinguished. The first one runs from 1976 to 2007, with around 100
publications per year. The second period starts in 2008 with almost 250
publications and grows exponentially until the last year studied, 2021, with
more than 1500 publications in that year.

Fig. 1 Periods of the worldwide scientific production in renewable energy predictions

3 Subjects from Worldwide Publications


One of the most important aspects of a bibliometric analysis of this nature
is to examine the distribution of publications by scientific category, to
determine where the publications fall [57]. In this sense, as shown in
Fig. 2, research is led by the engineering category with 27% of the total,
followed closely by the energy category (26%). Together, these two main
categories account for more than 50% of the total scientific production on
renewable energy prediction. The most cited publication in the engineering
category is related to wind energy volatility [62], while for the energy
category it is a review paper on wind speed and power generation forecasting
[28].
Fig. 2 Distribution and evolution of Scopus categories on renewable energy predictions
The computer sciences category is very relevant with 12%, showing the
prominence of computational techniques in this scientific field, which are
supported by the field of mathematics (in fifth position with 7% of the
total). The most cited work for computer science is a research on short-term
residential load forecasting based on LSTM recurrent neural network [26].
The most cited paper for mathematics is a review paper of neural networks
applied for wind speed prediction [58].
The fourth scientific field is environmental sciences, with 8%. This was to
be expected for renewable energies, not only because of the resource itself
but also because of the effort to achieve climate neutrality through, for
example, the decarbonization of the electricity sector, which links
environmental conservation with the large-scale deployment of wind and
photovoltaic energy. The most cited work in this scientific field deals with
the production of bioethanol from rice straw [7].
In addition, Fig. 2 shows the evolution of scientific production in each of
these five main categories. It can be observed that the energy category
dominated until 2019, when the engineering category surpassed it and has
since maintained its position as the leader of research in renewable energy
prediction. The computer science category has held third place since 2011 and
clearly maintains this position. The mathematics category has held fourth
position since 2017, the year in which environmental sciences moved to fifth
position.

4 Countries, Affiliations, and Their Main Topics


Another important factor to consider when studying the state of the art in
this field is the geographical distribution. Therefore, Fig. 3 shows all the
countries in the world that have publications on renewable energy prediction;
as the legend of the figure suggests, the more intense the color, the greater
the number of publications. In this sense, research is led by China, followed
by the USA. The most cited work from China is the aforementioned review on
wind speed and power generation forecasting [28], and the most cited work
from the USA is the aforementioned paper on wind power volatility [62]. At a
greater distance is India in third place, followed by the UK and Germany, in
fifth and sixth place, respectively. The most cited paper from India is the
one on bioethanol production from rice straw [7]. The most cited paper from
the UK is an engineering paper related to the estimation of spinning reserve
requirements in systems with a significant penetration of wind generation
[43], and the one from Germany is on dynamic life cycle assessment of
renewable energy technologies [44]. To complete this group of countries with
more than 300 publications, in order, there are: Italy, Spain, Australia,
Japan, France, and South Korea.

Fig. 3 Worldwide geographical distribution of the scientific production on renewable energy


predictions
On the other hand, if we analyze the publications of the main affiliations
with the highest scientific production, we obtain Table 1. In Fig. 4, the
affiliations with at least 50 publications are listed. Twenty-six
affiliations were found, of which thirteen are from China, followed by
Portugal with three affiliations, Denmark with two, and the USA, Italy,
Bangladesh, Switzerland, France, Singapore, Australia, and Japan with one
each.
Table 1 Main affiliations and their main keywords

Affiliation | Country | N | Main keywords (1–4)
North China Electric Power University | China | 160 | Renewable energy resources; Wind power; Renewable energies; Forecasting
National Renewable Energy Laboratory | US | 148 | Renewable energy resources; Wind power; National renewable energy laboratory; Forecasting
Ministry of Education China | China | 144 | Forecasting; Wind power; Renewable energy resources; Electric power transmission networks
Technical University of Denmark | Denmark | 124 | Wind power; Renewable energy resources; Forecasting; Renewable energies
Tsinghua University | China | 124 | Renewable energy resources; Forecasting; Renewable energies; Wind power
Politecnico di Milano | Italy | 84 | Renewable energy resources; Renewable energy source; Forecasting; Neural networks
Shanghai Jiao Tong University | China | 76 | Renewable energy resources; Renewable energies; Electric power transmission networks; Forecasting
Chinese Academy of Sciences | China | 75 | Forecasting; Renewable energies; China; Wind power
State Grid Corporation of China | China | 68 | Renewable energies; Renewable energy resources; Electric power transmission networks; Forecasting
Aalborg University | Denmark | 67 | Renewable energy resources; Renewable energy source; Energy management; Forecasting
Zhejiang University | China | 66 | Renewable energy resources; Forecasting; Renewable energies; Renewable energy source
China Electric Power Research Institute | China | 66 | Electric power transmission networks; Wind power; Renewable energies; Renewable energy resources
Universidade do Porto | Portugal | 65 | Forecasting; Renewable energy resources; Wind power; Renewable energies
Southeast University | Bangladesh | 62 | Forecasting; Renewable energy resources; Wind power; National renewable energy laboratory
Institute for Systems and Computer Engineering, Technology and Science | Portugal | 62 | Renewable energy resources; Forecasting; Wind power; Renewable energies
ETH Zürich | Switzerland | 61 | Renewable energy resources; Renewable energy source; Wind power; Forecasting
Tianjin University | China | 60 | Forecasting; Renewable energy resources; Wind power; Renewable energies
CNRS Centre National de la Recherche Scientifique | France | 57 | Renewable energy resources; Renewable energies; Forecasting; Optimization
North China Electric Power University Baoding | China | 56 | Wind power; Forecasting; Renewable energies; Renewable energy resources
Shandong University | China | 55 | Renewable energies; Renewable energy resources; Forecasting; Wind power
Universidade de Lisboa | Portugal | 53 | Renewable energy resources; Forecasting; Renewable energies; Wind power
National University of Singapore | Singapore | 51 | Renewable energy resources; Forecasting; Renewable energies; Smart power grids
UNSW Sydney | Australia | 51 | Renewable energy resources; Forecasting; Electric power transmission networks; Renewable energies
Xi'an Jiaotong University | China | 50 | Wind power; Renewable energy resources; Renewable energies; Forecasting
The University of Tokyo | Japan | 50 | Renewable energy resources; Forecasting; Renewable energies; Renewable energy
Huazhong University of Science and Technology | China | 50 | Renewable energy resources; Wind power; Forecasting; Renewable energies

Fig. 4 Main affiliations on renewable energy predictions (more than 50 publications)


Looking at the main keywords of these institutions (Table 1), there are no
major differences in the fields of specialization. The top five affiliations
(North China Electric Power University, National Renewable Energy
Laboratory, Ministry of Education China, Technical University of Denmark,
and Tsinghua University) have wind power among their main keywords.
The keyword Electric Power Transmission Networks also appears in the top
institutions such as Ministry of Education China, Shanghai Jiao Tong
University, or State Grid Corporation of China. The keyword Neural
Networks appears only in Politecnico di Milano, although it is one of the
top 20 keywords as will be seen below.
In summary, it can be said that all these institutions have similar
objectives as their main keywords coincide in almost all of them and can be
summarized in these five: renewable energy resources, renewable energies,
wind power, forecasting, and electric power transmission networks.

5 Keywords from Worldwide Publications


Keywords make it possible to classify the entries of a particular manuscript
or subject area in the indexing and information retrieval systems of the
databases [54]. Keywords thus become an essential two-way tool, i.e., for
those who write and for those who search for information on related
manuscripts or subject areas. In general, the number of keywords in most
scientific journals ranges between 3 and 10. Consequently, their importance
should not be undervalued or underestimated, since an inadequate use of
keywords could make it difficult to disseminate a manuscript and even prevent
detecting its relationship with other similar ones.
Therefore, to narrow down the topics on which all these publications focus,
it is necessary to analyze the keywords used in these research papers. As can
be seen in Table 2, apart from the obvious search terms (renewable energy
resources, forecasting, or renewable energies), fourth place is held by wind
power, which is the renewable energy that attracts the most scientific effort
in trying to determine its periodicity and, therefore, its forecasting as an
energy resource. It is striking that electric power transmission networks
rank above solar energy. This may be due to the importance of the renewable
energy resource for the design of energy transmission networks, or to the
fact that solar energy is already well characterized in this sense, i.e., the
energy-effective solar hours in a specific area of the planet are well known.
To compare the relative importance of these keywords, they have been
represented as a word cloud in Fig. 5.
Table 2 Top 20 keywords related to renewable energy predictions

Keyword | N
Renewable energy resources | 3703
Forecasting | 3076
Renewable energies | 2579
Wind power | 2148
Renewable energy | 1858
Renewable energy source | 1678
Electric power transmission networks | 1304
Solar energy | 1215
Weather forecasting | 982
Optimization | 931
Neural networks | 791
Energy policy | 730
Solar power generation | 729
Photovoltaic cells | 717
Energy management | 713
Alternative energy | 709
Wind | 692
Smart power grids | 690
Renewable resource | 685
Energy utilization | 673
Energy efficiency | 627
Costs | 616
Electric utilities | 616
Scheduling | 609
Energy storage | 578
Stochastic systems | 573
Electric power generation | 571
Prediction | 548
Machine learning | 534
Solar radiation | 524
Commerce | 508
Fig. 5 Cloud of keywords from the scientific production of the renewable energy predictions

6 Worldwide Research Trends: Cluster Analysis


The analysis of the relationships between keywords makes it possible to
obtain the scientific communities or clusters into which these publications
are grouped [55, 56]. For the analysis in this section, the freely available
VOSviewer software (https://www.vosviewer.com/) has been used, which has
proven to be useful for this type of analysis in many scientific fields.
Figure 6 shows the representation of the three clusters retrieved from the
total number of publications analyzed. Table 3 shows the main keywords of
these clusters, and in the last column a name has been proposed for their
identification.
Fig. 6 Relationship between renewable energy predictions

Table 3 Main clusters (Fig. 6), weight, and names

Color | Weight (%) | Main keywords | Cluster name
Red | 40 | Renewable energies, alternative energy, energy policy, prediction, greenhouse gases, carbon dioxide, biomass, biofuels, biological materials, energy efficiency, energy utilization, electricity generation, turbines, energy market | Renewable energies
Green | 34 | Renewable energy resources, electric power transmission network, electrical load flow, distributed generation, optimization, energy storage, electricity market, smart grid, risk assessment, uncertainty, predictive control systems | Renewable energy resources/electric power transmission network
Blue | 26 | Forecasting, wind power, electric power generation, solar radiation, photovoltaic cells, big data, deep learning, regression analysis, learning systems | Wind power/forecasting
The first cluster can be considered to start in 2001, with the objective of
optimizing the overall performance of isolated and weakly interconnected
systems in liberalized market environments while increasing the share of wind
and other renewable forms of energy [14]. This cluster has had a strong
component of bioenergy potential studies since 2002, including studies of its
temporality in very different fields [59]: biogas [13], horticultural waste
[9] for electricity production [1], grassland [53], woody biomass [48], and
vegetable residues such as those of tropical fruits like avocado [45], mango
[47], date [12], or loquat [49]. More recently, the production of hydrogen
from plant residues, such as those from the wine industry, has been
introduced [40].
Another major line of study in this cluster is the energy market and its
implications for greenhouse gas emissions [21], both in high energy-consuming
countries such as the United Arab Emirates [24], the US [63], or China [35],
and in medium energy-consuming countries such as Spain [36] or Italy [10].
The second cluster, focused on renewable energy resources, started in 1991
with the study of geothermal power in Iceland [27], and is closely related to
optimization techniques, both in terms of the use of different methods [6]
and the optimization of small installations [5]. In this cluster, energy
storage plays an important role, and there is therefore research related to
the improvement of batteries [20], especially those based on lithium [61],
and their incorporation into microgrids based on renewable energies [30, 41].
Within this cluster, the electric power transmission network is of great
relevance. For this purpose, forecasting models for photovoltaic energy [51]
and wind energy [28] are under study. Solar energy prediction studies range
from high-quality direct irradiance data [31], the possible shading of large
installations at a specific latitude [11, 42] or of isolated rooftop
installations [2, 46], to cloudiness prediction models [34]. The other major
energy source that influences the power grid is wind energy, and therefore
its forecasting is fundamental [28], both for the possible available energy
and for the selection of the type of turbine for a wind farm [38]. Some
authors also point out the possible power quality disturbances due to the
incorporation of renewable energy into the energy system [37].
The third cluster is mainly focused on wind energy and its estimation. Once
estimates of the availability of this resource have been made at the local
[16], regional [19], or country level [15], it is necessary to identify the
periodicity of the resource [17], based on complete data series that
sometimes need to be revised or completed with modern mathematical techniques
such as wavelets [64]. Short-term forecasting employs approaches ranging from
simple statistical methods [8] to very diverse algorithms such as the
security-constrained unit commitment (SCUC) algorithm [62] or f-ARIMA models
for day-ahead wind speed forecasting [25]. More recently, neural networks
have been used for short-term wind power and load forecasting [50], where the
use of particle swarm optimization (PSO) stands out [52].

7 Evolution of the Research and Future Perspective
The evolution of research trends has also been analyzed through the keywords.
Figure 7 shows the evolution of the keywords of all the analyzed
publications, and Table 4 shows the main keywords for each period. The
beginning of the period studied is the year 2012, when differences begin to
emerge; in that year, publications related to biofuels and biomass are
reflected. A little later, in 2014, the impact on the electricity market also
begins to be studied.
Fig. 7 Trend of renewable energy predictions

Table 4 Main clusters (Fig. 7)

Color | Years | Main keywords | Cluster name
Blue | 2012–2014 | Biomass, biological materials, electric load forecasting, biofuels, electricity market, ocean current | Bioenergy
Cyan–green | 2015–2016 | Energy policy, costs, investments, emission control, wind power | Energy policy
Yellow | 2017 | Neural network, forecasting, electric power transmission network, energy storage | Transmission network
Orange–red | 2018–2020 | Big data, learning systems, deep learning, LSTM | Computer sciences
In the following years, studies on the economic viability of the sector
(costs vs. investments) and on energy policy were included. Among renewable
energy sources, a boost is given to wind power and, on the other hand, the
advantages for the environment, such as the reduction of emissions, begin to
be highlighted. In 2017, studies on the electricity grid as a whole (electric
power transmission network, energy storage) came to the fore, together with
its study using neural networks and with how forecasting can affect the
network. In parallel, forms of energy storage are being studied.
In the last period analyzed in Table 4, from 2018 to 2020, computer
techniques applied to the study of energy appear with great strength, such as
big data, learning systems, deep learning, and LSTM (Long Short-Term Memory).
The latter, LSTM, is a neural network algorithm that differs from standard
ones in that it has feedback connections [26]. Deep learning techniques are
booming, especially for PV power prediction models [29].
The global demand for renewable energy will increase significantly in the
very near future, given the global energy and geopolitical situation and the
need to avoid dependence on other countries. The two renewable energies with
the greatest projection are wind and solar, and it is in the forecasting of
wind power production where the greatest scientific and technological efforts
will continue to be made.

References
1. Agugliaro FM (2007) Gasification of greenhouse residues for obtaining electrical energy in the
south of Spain: localization by GIS. Interciencia 32(2):131–136
2.
Albatayneh A, Albadaineh R, Juaidi A, Abdallah R, Montoya MDG, Manzano-Agugliaro F
(2022a) Rooftop photovoltaic system as a shading device for uninsulated buildings. Energy Rep
8:4223–4232
3.
Albatayneh A, Juaidi A, Abdallah R, Peña-Fernández A, Manzano-Agugliaro F (2022b) Effect of
the subsidised electrical energy tariff on the residential energy consumption in Jordan. Energy
Rep 8:893–903
4.
Alcayde A, Montoya FG, Baños R, Perea-Moreno AJ, Manzano-Agugliaro F (2018) Analysis of
research topics and scientific collaborations in renewable energy using community
detection. Sustainability 10(12):4510
5.
AlFaris F, Juaidi A, Manzano-Agugliaro F (2017) Intelligent homes’ technologies to optimize the
energy performance for the net zero energy home. Energy Buildings 153:262–274
6.
Banos R, Manzano-Agugliaro F, Montoya FG, Gil C, Alcayde A, Gómez J (2011) Optimization
methods applied to renewable and sustainable energy: a review. Renew Sustain Energy Rev
15(4):1753–1766
7.
Binod P, Sindhu R, Singhania RR, Vikram S, Devi L, Nagalakshmi S, Kurien N, Sukumaran RK,
Pandey A(2010) Bioethanol production from rice straw: an overview. Bioresour Technol
101(13):4767–4774
8.
Bludszuweit H, Domínguez-Navarro JA, Llombart A (2008) Statistical analysis of wind power
forecast error. IEEE Trans Power Syst 23(3):983–991
9.
Callejón-Ferre AJ, Velázquez-Martí B, López-Martínez JA, Manzano-Agugliaro F (2011)
Greenhouse crop residues: energy potential and models for the prediction of their higher heating
value. Renew Sustain Energy Rev 15(2):948–955
10.
Cannemi M, García-Melón M, Aragonés-Beltrán P, Gómez-Navarro T (2014) Modeling decision
making as a support tool for policy making on renewable energy development. Energy Policy
67:127–137
11.
Castellano NN, Parra JAG, Valls-Guirado J, Manzano-Agugliaro F (2015) Optimal displacement
of photovoltaic array’s rows using a novel shading model. Appl Energy 144:1–9
12.
de la Cruz-Lovera C, Manzano-Agugliaro F, Salmerón-Manzano E, de la Cruz-Fernández JL,
Perea-Moreno AJ (2019) Date seeds (Phoenix dactylifera L.) valorization for boilers in the
Mediterranean climate. Sustainability 11(3):711
13.
El-Mashad HM, Zhang R (2010) Biogas production from co-digestion of dairy manure and food
waste. Biores Technol 101(11):4021–4028
14.
Hatziargyriou N, Contaxis G, Matos M, Lopes JP, Vasconcelos MH, Kariniotakis G, Mayer D,
Halliday J, Dutton G, Dokopoulos P, Bakirtzis A (2001) Preliminary results from the more
advanced control advice project for secure operation of isolated power systems with increased
renewable energy penetration and storage. In: 2001 IEEE Porto power tech proceedings (cat. no.
01EX502), vol 4. IEEE, p 6
15.
Hernández-Escobedo Q, Manzano-Agugliaro F, Zapata-Sierra A (2010) The wind power of
Mexico. Renew Sustain Energy Rev 14(9):2830–2840
16.
Hernández-Escobedo Q, Manzano-Agugliaro F, Zapata-Sierra A (2009) Caracterización de la
intensidad del viento en la provincia de Almería. DYNA-Ingeniería e Industria 84(8)
17.
Hernandez-Escobedo Q, Manzano-Agugliaro F, Gazquez-Parra JA, Zapata-Sierra A (2011) Is the
wind a periodical phenomenon? The case of Mexico. Renew Sustain Energy Rev 15(1):721–728
18.
Hernández-Escobedo Q, Perea-Moreno AJ, Manzano-Agugliaro F (2018) Wind energy research
in Mexico. Renew Energy 123:719–729
19.
Hernandez-Escobedo Q, Saldaña-Flores R, Rodríguez-García ER, Manzano-Agugliaro F (2014)
Wind energy resource in Northern Mexico. Renew Sustain Energy Rev 32:890–914
20.
Hu X, Xu L, Lin X, Pecht M (2020) Battery lifetime prognostics. Joule 4(2):310–346
21.
Jaber JO, Mohsen MS, Probert SD, Alees M (2001) Future electricity-demands and greenhouse-
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_7

Models of Load Forecasting


Sunil Yadav1, Bhavesh Tondwal1 and Anuradha Tomar1
(1) Netaji Subhas University of Technology, Sector-3, Dwarka, Delhi,
110078, India

Anuradha Tomar (Corresponding author)


Email: [email protected]

Abstract
The world is growing in many aspects. Economic growth, population growth, technological growth, and similar trends all lead to a common outcome: an ever-increasing demand for energy. Fossil fuels cannot meet this demand indefinitely, and renewable energy still requires considerable advancement before it can cover it. Load forecasting (LF) therefore plays a key role: by predicting the future load, energy can be generated more efficiently, more economically, and with less harm to the environment. LF is performed using models such as long short-term memory (LSTM), artificial neural networks (ANN), and support vector machines (SVM), which predict the future load from historical data. This chapter discusses LF, its types, and the factors affecting it, and presents a comparative review of recently developed techniques and models against benchmark models used for LF.

Keywords Load forecasting – Load forecasting models – Machine learning – Deep learning – Artificial intelligence

Abbreviations
ANN Artificial Neural Network
ANFIS Adaptive Network-Based Fuzzy Inference System
ARIMA Autoregressive Integrated Moving Average
ARMSE Average Root Mean Squared Error
BDLSTM Bayesian Deep Long Short-Term Memory
BPNN Backpropagation Neural Network
CCRN Correlation Based Convolution Recurrent Network
CNN Convolutional Neural Network
DRNN Deep Recurrent Neural Network
DNN Deep Neural Network
DT Decision Tree
ELM Extreme Learning Machine
ES Expert Systems
ETS Exponential Smoothing
GA Genetic Algorithm
GBRT Gradient Boosted Regression Trees
KNN K-Nearest Neighbor
LR Linear Regression
LSTM Long Short-Term Memory
LTLF Long-Term Load Forecasting
MAE Mean Absolute Error
MLP Multilayer Perceptron
MLR Multiple Linear Regression
MAPE Mean Absolute Percentage Error
MTLF Medium-Term Load Forecasting
NN Neural Network
OP-ELM Optimally Pruned Extreme Learning Machine
PDRNN Pooling-based Deep Recurrent Neural Network
PLCNet Parallel Long Short-Term Memory-Convolutional Neural
Network
QLSTM Pinball Loss Guided Long Short-Term Memory
RF Random Forest
RFR Random Forest Regression
RMSE Root Mean Squared Error
RNN Recurrent Neural Network
SDTRM Spark Decision Tree Regression Model
SGBT Spark Gradient-Boosted Trees
SRFRM Spark Random Forest Regression Model
STLF Short-Term Load Forecasting
SVM Support Vector Machine
SVR Support Vector Regression
SVRL Support Vector Machine with Linear Kernel
SVRP Support Vector Machine with Polynomial Kernel
SVRR Support Vector Machine with Radial Kernel
WNN Wavelet Neural Network
XGB Extreme Gradient Boosting

1 Introduction
Most industries depend on electrical energy; therefore, its availability is of economic importance throughout the world. A continuous, affordable, and reliable source of electricity is of great importance, and to achieve all the objectives mentioned below we apply 'Electrical Load Forecasting' to the power grid [1].
‘Electrical Load Forecasting’ is a computational method by means of
which we predict the future load demand with the help of past and present
data of load demand. It acts as an important factor during power system
planning, operation, and control [2].
By performing 'Electrical Load Forecasting' for residential and commercial loads, electricity generation and distribution companies can plan operations ahead of time and promote energy conservation among users [3].
The objectives of ‘Electrical Load Forecasting’ are power system:
Planning
Operation
Finance
Development
Maintenance.
The load prediction can be calculated for about 2–4 h for operative
purposes or as much as about 30 years for planning purposes [2].

2 Types of Load Forecasting


'Electrical Load Forecasting' can be broadly categorized into three types:
(1) Short-Term Load Forecasting (STLF)

(2) Medium-Term Load Forecasting (MTLF)

(3) Long-Term Load Forecasting (LTLF) [4].


In STLF, the load is predicted from a few hours to a week ahead in order to curtail running and transmission costs. Methods commonly used for this forecasting are LSTM, neural networks (NN), random forest regression (RFR), and SVM [5]. It is used in load flow studies and for taking decisions that prevent overloading. Its applications are allocation of spinning reserve, unit commitment calculation, maintaining proper fuel stock, maximizing utility revenue, development of small generation schemes, etc.
In MTLF, the load is predicted over a range of a few weeks to 10 years ahead so that systematic planning can be maintained [6]. The multilayer perceptron (MLP) and SVM are some of the methods used for this forecasting. Its applications include deciding rate structures for different consumers, calculating the capital cost of different generation options, annual planning and budget allocation for fuel requirements, and other operational purposes.
In LTLF, the load is predicted over a range of a decade to 50 years ahead to ease development planning. Methods used include neural networks (NN), genetic algorithms (GA), fuzzy rules, SVM, wavelet neural networks, and expert systems. Its applications are national grid expansion, demand-side management, selection of substation capacity, development of a new power plant, 'fuel mix' decisions, etc. [7].
3 Factors Affecting Load Forecasting
3.1 Meteorological Factors: Further Divided into Two Subparts, 'Climate' and 'Weather'
Climate is the mean weather over a finite period in an area. A change in climate consequently influences load consumption. It is a major factor in long-term load forecasting.
Weather is an atmospheric condition that mostly exists for a temporary period of time in an area. It is reasonable and important to take the weather factor into consideration for STLF. It affects the load demand of domestic and agricultural customers. Alterations in weather alter the utilization of appliances in accordance with the comfort level of consumers. It is a major factor to be considered in short-term forecasting.
‘Weather’ further incorporates four parts, i.e., temperature, cloud cover,
wind speed, and humidity.

3.2 Temporal and Calendar Factors


The impact of the calendar difference of the same month between different years is known as the calendar factor.
Load consumption varies between seasons due to the dissimilar beginning and ending times of day and night and the differing number of daylight hours, as well as the increased residential load at weekends compared to weekdays and the times of the year leading to festivals or big events.

3.3 Economy Factors


Economic factors play a significant role in load forecasting, such as the type of customers, per capita income, demographic conditions, gross domestic product (GDP) growth, industrial development, and the cost of electricity. The daily load curves of developing and developed countries are distinct, as maximum loading occurs at different time periods: for developing countries it is in the evening between 06:00 pm and 09:00 pm, whereas for developed countries the peak load timings are from 11:00 am to 04:00 pm [8].
Spot market prices and short-term futures contracts are a crucial factor for STLF, whereas LTLF is not much affected by these factors. Some countries use these factors to reduce load during peak hours by keeping a difference in electricity prices between peak and off-peak hours for residential households. If the price of electricity is increased, domestic consumption will reduce, because the price of electric power and the consumer's financial condition both influence load consumption [9].

3.4 Random Factors


Huge industrial loads in a power system sometimes cause sudden
imbalance in load consumption; these sudden imbalances are known as
random factors.
Special events such as festivals or regional happenings are also considered important factors for load forecasting.
Other than these, situations such as the shutdown of an industry, or a big event such as a sports competition, the wedding season, or a lockdown, are random factors affecting load forecasting.

3.5 Customer Factors


Consumers vary, e.g., residential and commercial, so the load curve may differ from consumer to consumer. The consumer factors of electricity consumption are the specifications or ratings of the customer's electrical equipment, which also vary from consumer to consumer.

3.6 Factors Based on Time Horizon


Short-Term Influence Factors: These factors appear frequently within a specific forecasting span but hardly carry the characteristics of that time span.
Medium-Term Influence Factors: These factors last for several forecasting spans and have specific characteristics of those time spans.
Long-Term Influence Factors: These factors are experienced over many forecasting periods and especially carry the characteristics of those time spans [8].
3.7 Other Factors
The load curve can differ according to geographical area; i.e., the load curve of less populated areas will differ from that of highly populated ones.
These factors can have a greater or lesser effect on a machine learning (ML) model, and they can even have a destructive effect on a model. As the load varies, the effect of the various factors changes accordingly.

4 Comparative Review of Popular Load Forecasting Techniques
4.1 Techniques Based on Machine Learning
4.1.1 In Table 1 [10–14], Different Individual Models Enhanced with the Help of Multi-processing Are Compared; It is Noticed that SVR Had the Most Significant Performance and RF Had the Lowest RMSE
See Table 1.
Table 1 Utilization of multi-processing to improve the overall performance of forecasting models
(Data used: real-time energy consumption dataset collected at distribution transformers; all models perform STLF; errors in %)
Decision tree (DT): RMSE = 5.67, MAPE = 10.91. Significance/remarks: improvement in short-term forecasting accuracy and speed. Applicable to: distribution transformers.
Linear regression (LR): RMSE = 5.91, MAPE = 10.0. Significance/remarks: benchmark performance. Applicable to: non-real-time forecasting.
Neural network (NN): RMSE = 14.78, MAPE = 553.29. Applicable to: peak load reduction.
Support vector regression (SVR): RMSE = 4.87, MAPE = 5.42. Significance/remarks: can predict the price and load more accurately. Applicable to: economic load dispatch.
Gradient boosted regression trees (GBRT): RMSE = 4.26, MAPE = 11.99. Applicable to: decisions related to increment/decrement of loads.
Random forest (RF): RMSE = 4.02, MAPE = 10.64. Applicable to: its hybrids can be used for MTLF.
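To make this comparison concrete, the following minimal sketch (not the code of the cited studies) trains two of the individual models from Table 1, RF and SVR, on a generic hourly load series and reports RMSE and MAPE. The file name, feature set, and thresholds are illustrative assumptions; the n_jobs setting mirrors the multi-processing idea.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

df = pd.read_csv("hourly_load.csv", parse_dates=["timestamp"])  # assumed file and columns
df["hour"] = df["timestamp"].dt.hour
df["dayofweek"] = df["timestamp"].dt.dayofweek
for lag in (1, 24, 168):                       # previous hour, day, and week
    df[f"lag_{lag}"] = df["load"].shift(lag)
df = df.dropna()

X = df[["hour", "dayofweek", "lag_1", "lag_24", "lag_168"]]
y = df["load"]
split = int(0.8 * len(df))                     # keep the chronological order
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {
    "RF": RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0),  # n_jobs=-1: all cores
    "SVR": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    mape = 100 * mean_absolute_percentage_error(y_te, pred)
    print(f"{name}: RMSE = {rmse:.2f}, MAPE = {mape:.2f}%")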

4.1.2 In Table 2 [15, 16], a Novel ML Technique Based on Distributed Trees with Apache Spark is Applied to Some Models and Compared Using the Standard Error ARMSE
Distributed Tree-Based Machine Learning with Apache Spark
See Table 2.
Table 2 Distributed tree-based ML with Apache Spark
(Data used: real-world distribution transformer dataset from Spain; all models perform STLF; errors in kWh)
Spark decision tree regression model (SDTRM): ARMSE = 10.8071. Significance/remarks: two times faster execution time with the use of a thread pool and fair scheduler. Applicable to: distribution transformers.
Spark random forest regression model (SRFRM): ARMSE = 10.6005. Significance/remarks: high accuracy but requires more training time.
Spark gradient-boosted trees (SGBT): ARMSE = 11.8855.

4.1.3 In Table 3 [17–21], the Performance of a Hybrid Model Combining Two Individual Models, LSTM and CNN, is Compared with Various Models, and It Is Noticed That This New Model is Very Accurate
See Table 3.
Table 3 Hybrid model which combines LSTM and CNN
(Data used: real-world hourly load consumption dataset; errors in %)
Parallel LSTM-CNN (PLCNet): STLF, MTLF; RMSE = 0.031, MAPE = 2.08. Significance/remarks: PLCNet outperforms every other machine learning model demonstrated. Applicable to: large to small loads.
Autoregressive integrated moving average (ARIMA): STLF; RMSE = 0.102, MAPE = 3.56. Significance/remarks: ARIMA–ANN hybrid models show incredible performance for linear and non-linear problems. Applicable to: most suitable for very-short- and short-term forecasting.
Exponential smoothing (ETS): STLF; RMSE = 0.36, MAPE = 8.81.
LR: STLF; RMSE = 0.092, MAPE = 2.335. Significance/remarks: benchmark performance model. Applicable to: very-short- and short-term forecasting.
SVR: STLF; RMSE = 0.272, MAPE = 7.63.
Deep neural network (DNN): STLF; RMSE = 0.128, MAPE = 3.62.
LSTM: STLF; RMSE = 0.097, MAPE = 3.11. Significance/remarks: performs better than most benchmark models. Applicable to: reduced artificial debugging for STLF.
LSTM-CNN: STLF, MTLF; RMSE = 0.053, MAPE = 2.43. Significance/remarks: high accuracy for short-term and medium-term load forecasting. Applicable to: maintenance scheduling.
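The parallel structure behind a model such as PLCNet can be illustrated with the short Keras sketch below. It is not the authors' implementation; the window length, layer sizes, and training settings are assumptions, and the random data is only there to show the expected shapes.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW, HORIZON = 168, 24          # one week in, one day ahead (assumed)

inp = layers.Input(shape=(WINDOW, 1))
# Branch 1: convolutional feature extractor
c = layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu")(inp)
c = layers.MaxPooling1D(2)(c)
c = layers.Flatten()(c)
# Branch 2: recurrent branch working on the same window
r = layers.LSTM(64)(inp)
# Merge both branches and map to the forecast horizon
merged = layers.Concatenate()([c, r])
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(HORIZON)(hidden)

model = Model(inp, out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

X = np.random.rand(256, WINDOW, 1).astype("float32")   # toy data
y = np.random.rand(256, HORIZON).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)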

4.1.4 In Table 4 [22–26], a Novel Algorithm to Select the Least-Cost Electric Load Forecasting Model is Used and Compared Using Correlated Meteorological Parameters
See Table 4.
Table 4 Novel algorithm to select the least-cost electric load forecasting model using correlated meteorological parameters
(Data used: time series data of hourly electricity consumption; MAPE in %, RMSE in kWh)
Deep neural network (MLR): STLF; MAPE = 9.23, RMSE = 4677.23. Significance/remarks: economically better model for electric load forecasting. Applicable to: can be used where cost is a major constraint.
K-nearest neighbor (KNN): STLF; MAPE = 6.07, RMSE = 2948.49. Significance/remarks: accuracy can be increased with increased training time.
Support vector machine with linear kernel (SVRL): STLF; MAPE = 10.26, RMSE = 5120.05. Significance/remarks: the linear kernel model is less complex compared to other kernels.
Support vector machine with radial kernel (SVRR): STLF, MTLF; MAPE = 5.62, RMSE = 2746.58. Significance/remarks: the radial kernel is more accurate compared to other kernels. Applicable to: unit commitment, transactions, and other power system operations.
Support vector machine with polynomial kernel (SVRP): STLF; MAPE = 8.39, RMSE = 3791.53. Significance/remarks: the polynomial kernel exhibits higher efficiency for non-linear complications. Applicable to: can be used for time series prediction.
RF: STLF; MAPE = 5.71, RMSE = 2870.55. Significance/remarks: gives more accurate results with selective features.
AdaBoost: STLF; MAPE = 8.43, RMSE = 3913.34. Applicable to: electricity market clearing.
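The idea of ranking correlated meteorological parameters before fitting a least-cost model can be illustrated as follows. The column names, file name, and correlation threshold are assumptions for illustration only, not the setup of [22].

import pandas as pd

df = pd.read_csv("load_weather.csv")          # assumed file with hourly records
weather = ["temperature", "humidity", "wind_speed", "cloud_cover"]

# Pearson correlation of each candidate feature with the load
corr = df[weather + ["load"]].corr()["load"].drop("load").abs()
selected = corr[corr > 0.3].sort_values(ascending=False)  # 0.3 is an assumed cut-off
print("Features kept for the forecasting model:")
print(selected)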
4.2 Techniques Based on Deep Learning (DL)
4.2.1 In Table 5 [27, 28], the Bayesian Deep Learning Technique is Used to Improve the Performance of LSTM, and the Results Are Compared with the Pinball Loss Guided LSTM (QLSTM)
See Table 5.
Table 5 Bayesian deep learning technique
(Data used: smart meter data from the Australian grid; both models perform STLF; errors in %)
QLSTM: MAPE = 0.1155, MAE = 17.7268, RMSE = 21.9014; residential and commercial loads forecasting.
Bayesian deep long short-term memory (BDLSTM): MAPE = 0.0892, MAE = 13.8607, RMSE = 17.1698; high speed and reduction in error.
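The pinball (quantile) loss that guides QLSTM can be written generically as below; this is the standard definition of the loss, not the cited paper's code. For quantile q, under-forecasts and over-forecasts are penalized asymmetrically.

import numpy as np

def pinball_loss(y_true, y_pred, q=0.5):
    """Mean pinball loss for a single quantile q in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y_true = np.array([10.0, 12.0, 9.0])
y_pred = np.array([11.0, 10.5, 9.5])
print(pinball_loss(y_true, y_pred, q=0.9))   # q = 0.9 penalizes under-forecasting more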

4.2.2 In Table 6 [29–32], Clustering is Used to Enhance CNN and then Compared with Other Similar Models
See Table 6.
Table 6 CNN based on clustering
(Data used: power load data from the SGSC project; all models perform STLF; errors as MAPE in %)
Pyramid-CNN: MAPE = 39. Significance/remarks: (1) efficient approach for grouping similar-profile energy customers; (2) eliminates the need to develop and train models for individual households. Applicable to: similar-profile energy customers based on clustering.
Extreme learning machine (ELM): MAPE = 122. Significance/remarks: very poor accuracy.
KNN: MAPE = 71.
Backpropagation neural network (BPNN): MAPE = 49. Significance/remarks: a large amount of training data is required for training of the model.
ANN: MAPE = 47. Significance/remarks: by tuning the parameters, performance can be improved significantly.
LSTM: MAPE = 44.
Application noted across the KNN, BPNN, and ANN rows: (1) effective for very-short-, short-, and medium-term forecasting; (2) a local approach instead of a global approach can improve accuracy.

4.2.3 In Table 7 [33, 34], a Hybrid Model of Deep Recurrent Neural Network (DRNN) Based on Pooling is Compared with Similar Models
See Table 7.
Table 7 Pooling-based deep recurrent neural network
(Data used: smart metered data from Ireland; errors in kWh)
Pooling-based deep recurrent neural network (PDRNN): STLF; RMSE = 0.4505, NRMSE = 0.0912, MAE = 0.2510. Significance/remarks: improved efficiency and reduced error. Applicable to: commercial load forecasting.
ARIMA: STLF; RMSE = 0.5593, NRMSE = 0.11, MAE = 0.2998. Significance/remarks: not suitable for nonlinear problems. Applicable to: very effective for low model orders.
RNN: STLF; RMSE = 0.5280, NRMSE = 0.1076, MAE = 0.2913. Significance/remarks: large training data is required.
SVR: STLF; RMSE = 0.5180, NRMSE = 0.1048, MAE = 0.2855. Significance/remarks: efficiency depends on tuning of parameters.
Deep recurrent neural network (DRNN): STLF, MTLF; RMSE = 0.4815, NRMSE = 0.0974, MAE = 0.2698. Significance/remarks: better performance with entropy-based training. Applicable to: can be used for smart cities forecasting.

4.2.4 In Table 8 [35], Four Hybrid Models Based on Clustering and Deep Learning Are Compared Using a Real-Life Dataset Collected from Commission of Energy Regulation
See Table 8.
Table 8 Combination of clustering and NN
(Data used: real-life dataset from the Commission of Energy Regulation; all models perform STLF; errors as MAPE in %)
K-shape clustering + DNN: MAPE = 2.15. Significance/remarks: very high accuracy is achieved when clustering methods are used in combination with DNN. Applicable to: residential loads and small to medium enterprises.
K-means clustering + DNN: MAPE = 2.55.
K-shape clustering + NN: MAPE = 2.98.
K-means clustering + NN: MAPE = 3.33.
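The clustering-plus-network idea of Table 8 can be sketched as follows: customers are first grouped by their load shape with k-means, and one forecaster is then trained per cluster. The profile shapes, cluster count, and regressor choice are illustrative assumptions, not the cited implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
profiles = rng.random((500, 96))          # 500 customers x 96 quarter-hour values (toy data)

# Step 1: group customers with similar daily load shapes
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)

# Step 2: train one small network per cluster
models = {}
for k in range(4):
    members = profiles[kmeans.labels_ == k]
    X = members[:, :-1]                   # first 95 intervals as input
    y = members[:, -1]                    # last interval as target (toy setup)
    models[k] = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)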

4.2.5 In Table 9 [36], a Hybrid Model Combination of Deep Learning and k-means Clustering is Compared with Another Hybrid Model, PDRNN, Based on Standard Errors MAE and RMSE
See Table 9.
Table 9 Combination of deep learning and k-means clustering
(Data used: real-life Irish residential load dataset; both models perform STLF; errors in %)
K-means clustering + LSTM: MAE = 0.3791, RMSE = 0.6022. Significance/remarks: improved prediction accuracy. Applicable to: residential load forecasting.
PDRNN: MAE = 0.3959, RMSE = 0.6202. Significance/remarks: comparatively better performance than benchmark models.
4.2.6 In Table 10 [37–40], a Hybrid DNN Model Based on Two
Different Techniques Transfer Learning and Meta learning is
Compared to Various Models Based on Their Performance in a
Residential Dataset
See Table 10.
Table 10 DNN model based on transfer learning and meta learning
(Data used: residential dataset; RMSE in kWh, SMAPE in %)
Extreme gradient boosting (XGB): STLF; RMSE = 0.261, SMAPE = 32.04. Significance/remarks: significant performance of meta learning for limited data availability. Applicable to: medium–small loads.
MLP: STLF, MTLF; RMSE = 0.261, SMAPE = 32.23. Significance/remarks: MLP and its hybrid models can be used for MTLF. Applicable to: regulatory actions.
LSTM: STLF; RMSE = 0.295, SMAPE = 38.26.
Sequence to sequence: STLF; RMSE = 0.309, SMAPE = 36.39. Significance/remarks: benchmark performance.
ResNet/LSTM: STLF; RMSE = 0.263, SMAPE = 32.74. Significance/remarks: more accurate and faster than the individual LSTM.
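The transfer-learning idea behind Table 10 can be sketched as below: a network trained on a large aggregate dataset is copied and only its output head is fine-tuned on a small individual household's data. Layer sizes, data shapes, and the training details are assumptions; this is not the cited ResNet/LSTM model.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_base():
    return models.Sequential([
        layers.Input(shape=(168, 1)),
        layers.LSTM(64),
        layers.Dense(32, activation="relu", name="feature"),
        layers.Dense(24, name="head"),
    ])

source = build_base()
source.compile(optimizer="adam", loss="mse")
# ... assume `source` has been fitted on a large aggregate dataset here ...

target = build_base()                     # same architecture
target.set_weights(source.get_weights())  # copy the pre-trained weights
for layer in target.layers[:-1]:          # freeze everything except the head
    layer.trainable = False
target.compile(optimizer="adam", loss="mse")

X_small = np.random.rand(64, 168, 1).astype("float32")   # limited household data (toy)
y_small = np.random.rand(64, 24).astype("float32")
target.fit(X_small, y_small, epochs=5, verbose=0)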
4.2.7 In Table 11 [41, 42], Multiple Hybrid Models Based on
DNN Are Compared Using a Dataset of Independent System
Operator of New England (ISO-NE)
See Table 11.
Table 11 Convolution of deep neural networks
(Data used: ISO-NE-based dataset; MAE in kWh, MAPE in %)
Convolutional-LSTM: STLF; MAE = 118, MAPE = 0.201. Significance/remarks: outperforms the other models used for comparison. Applicable to: highly efficient for short- and very-short-term forecasting.
Bidirectional LSTM: STLF; MAE = 130, MAPE = 0.443. Significance/remarks: an increase in training time may increase accuracy.
LSTM: STLF; MAE = 119, MAPE = 1.049.
Convolutional neural network (CNN): STLF; MAE = 133, MAPE = 0.814. Significance/remarks: better performance than time series models. Applicable to: can be used for non-linear problems.
Correlation-based convolution recurrent network (CCRN): STLF; MAE = 178, MAPE = 0.634. Significance/remarks: error reduction and requires less training time. Applicable to: can be used for electric commercial load.

4.3 Techniques Based on Artificial Intelligence


4.3.1 In Table 12 [43], Two Hybrid AI-Based Techniques, Optimally Pruned Extreme Learning Machine (OP-ELM) and Adaptive Network-Based Fuzzy Inference System (ANFIS), Are Used for LF and Compared
See Table 12.
Table 12 Hybrid AI techniques: ANFIS and OP-ELM
(Data used: real dataset of a large power-consuming substation; both models perform STLF and MTLF; errors in kWh)
OP-ELM: MAPE = 0.090344, MAE = 0.057076, RMSE = 0.077942. Significance/remarks: the OP-ELM model outperforms the ANFIS model. Applicable to: distribution stations.
ANFIS: MAPE = 0.088012, MAE = 0.060583, RMSE = 0.073518. Significance/remarks: better performance than benchmark models. Applicable to: turning on/off electric power plants.

5 Conclusion
In this chapter, a review of load forecasting models has been presented, beginning with a brief introduction to electrical load forecasting and its importance in current global power systems. The many uncertainties faced during load forecasting are discussed as factors affecting load forecasting. A comparative review of recent studies focusing on different techniques and models to obtain efficient and fast results has then been carried out using different parameters. The parameters used to compare these models are the type of data used, the type of load forecasting, the standard errors (RMSE, NRMSE, MAE), significance/remarks, and their applications. From this review, it is concluded that it is feasible to use individual models such as LSTM, LR, RF, etc., for very-short- and short-term load forecasting. To forecast load for longer periods of time, models can be combined into hybrid models according to their performance. Further studies can focus on hybrid models combining more than two models; with proper tuning of their parameters, these can be used even for long-term forecasting.

References
1. Soliman SA-H, Al-Kandari AM (2010) Electrical load forecasting: modeling and model
construction. Elsevier
2.
Load forecasting—purpose, classification and procedure (2016)
3.
Nti IK et al (2020) Electricity load forecasting: a systematic review. J Electr Syst Inf Technol
7(1):1–19
4.
Electric load forecasting—classification, procedure and approach (2017)
5.
Guo W et al (2021) Machine-learning based methods in short-term load forecasting. Electr J
34(1):106884
[Crossref]
6.
Lera Figal PD (2016) Medium-term electricity load forecasting
7.
Ladan G, Kalantar M (2011) Different methods of longterm electric load demand forecasting; a
comprehensive review. Iranian J Electr Electron Eng 7(4):249–259
8.
Khatoon S, Singh AK (2014) Effects of various factors on electric load forecasting: an overview.
In: 2014 6th IEEE power India international conference (PIICON). IEEE
9.
Admin (2016) 10 factors affecting the energy markets
10.
Zainab A et al (2021) A multiprocessing-based sensitivity analysis of machine learning
algorithms for load forecasting of electric power distribution system. IEEE Access 9:31684–
31694
[Crossref]
11.
Sultana T et al (2019) Data analytics for load and price forecasting via enhanced support vector
regression. In: Advances in internet, data and web technologies. Springer International
Publishing, Cham
12.
Son M et al (2019) A short-term load forecasting scheme based on auto-encoder and random
forest. In: Applied physics, system science and computers III. Springer International Publishing,
Cham
13.
Ganguly A et al (2019) Short-term load forecasting for peak load reduction using artificial neural
network technique. In: Advances in computer, communication and control. Springer, Singapore
14.
Xu W et al (2019) A hybrid modelling method for time series forecasting based on a linear
regression model and deep learning. Appl Intell 49(8):3002–3015
[Crossref]
15.
Zainab A et al (2021) Distributed tree-based machine learning for short-term load forecasting
with apache spark. IEEE Access 9:57372–57384
[Crossref]
16.
Syed D, Refaat SS, Abu-Rub H (2020) Performance evaluation of distributed machine learning
for load forecasting in smart grids. In: 2020 cybernetics & informatics (K&I). Piscataway
17.
Farsi B et al (2021) On short-term load forecasting using machine learning techniques and a
novel parallel deep LSTM-CNN approach. IEEE Access 9:31191–31212
[Crossref]
18.
Cho H et al (2015) Modelling and forecasting daily electricity load via curve linear regression.
In: Modeling and stochastic learning for forecasting in high dimensions. Springer International
Publishing, Cham
19.
Izudin NEM et al (2021) Forecasting electricity consumption in Malaysia by hybrid ARIMA-
ANN. In: Proceedings of the 6th international conference on fundamental and applied sciences.
Springer Nature, Singapore
20.
Zhang W et al (2020) Short-term power load forecasting using integrated methods based on long
short-term memory. Sci China Technol Sci 63(4):614–624
[Crossref]
21.
Dudek G, Pełka P, Smyl S (2021) A hybrid residual dilated LSTM and exponential smoothing
model for midterm electric load forecasting. IEEE Trans Neural Netw Learn Syst
22.
Jawad M et al (2020) Machine learning based cost effective electricity load forecasting model
using correlated meteorological parameters. IEEE Access 8:146847–146864
[Crossref]
23.
Aimal S et al (2019) An efficient CNN and KNN data analytics for electricity load forecasting in
the smart grid. In: Web, artificial intelligence and network applications. Springer International
Publishing, Cham
24.
Subbiah SS, Chinnappan J (2022) Short-term load forecasting using random forest with entropy-
based feature selection. In: Artificial intelligence and technologies. Springer, Singapore
25.
Dudek G (2020) Multilayer perceptron for short-term load forecasting: from global to local
approach. Neural Comput Appl 32(8):3695–3707
[Crossref]
26.
Masood J et al (2020) An optimized linear-Kernel support vector machine for electricity load and
price forecasting in smart grids. In: 2019 international conference on advances in the emerging
computing technologies (AECT). IEEE.
27.
Sun M et al (2019) Using Bayesian deep learning to capture uncertainty for residential net load
forecasting. IEEE Trans Power Syst 35(1):188–201
[Crossref]
28.
Xuan A, Tian S (2021) A regional integrated energy system load prediction method based on
Bayesian optimized long-short term memory neural network. In: 2021 IEEE PES innovative
smart grid technologies-Asia (ISGT Asia). IEEE
29.
Aurangzeb K et al (2021) A pyramid-CNN based deep learning model for power load forecasting
of similar-profile energy customers based on clustering. IEEE Access 9:14992–15003
[Crossref]
30.
Upadhaya D, Thakur R, Singh NK (2019) PSO-optimized ANN for short-term load forecasting:
an Indian scenario. In: Applications of computing, automation and wireless systems in electrical
engineering. Springer, Singapore
31.
Bisoi R, Dash PK, Das PP (2020) Short-term electricity price forecasting and classification in
smart grids using optimized multikernel extreme learning machine. Neural Comput Appl
32(5):1457–1480
[Crossref]
32.
Yi P et al (2019) An electricity load forecasting approach combining DBN-based deep neural
network and NAR model for the integrated energy systems. In: 2019 IEEE international
conference on big data and smart computing (BigComp). IEEE
33.
Shi H, Xu M, Li R (2017) Deep learning for household load forecasting—a novel pooling deep
RNN. IEEE Trans Smart Grid 9(5):5271–5280
[Crossref]
34.
Hossen T et al (2018) Residential load forecasting using deep neural networks (DNN). In: 2018
North American power symposium (NAPS). IEEE
35.
Fahiman F et al (2017) Improving load forecasting based on deep learning and K-shape
clustering. In: 2017 international joint conference on neural networks (IJCNN). IEEE
36.
Han F et al (2020) Short-term forecasting of individual residential load based on deep learning
and K-means clustering. CSEE J Power Energy Syst 7(2):261–269
37.
Lee E, Rhee W (2021) Individualized short-term electric load forecasting with deep neural
network based transfer learning and meta learning. IEEE Access 9:15413–15425
[Crossref]
38.
Cui C et al (2020) Research on power load forecasting method based on LSTM model. In: 2020
IEEE 5th information technology and mechatronics engineering conference (ITOEC). IEEE
39.
Imani M, Ghassemian H (2018) Electrical load forecasting using customers clustering and smart
meters in Internet of Things. In: 2018 9th international symposium on telecommunications (IST).
IEEE
40.
Xu C, Chen G, Zhou X (2020) Temporal pattern attention-based sequence to sequence model for
multistep individual load forecasting. In: IECON 2020 the 46th annual conference of the IEEE
industrial electronics society. IEEE
41.
Eskandari H, Imani M, Moghadam MP (2020) Correlation based convolutional recurrent network
for load forecasting. In: 2020 28th Iranian conference on electrical engineering (ICEE). IEEE
42.
Ren C, Jia L, Wang Z (2021) A CNN-LSTM hybrid model based short-term power load
forecasting. In: 2021 power system and green energy conference (PSGEC). IEEE
43.
Motepe S et al (2019) South African power distribution network load forecasting using hybrid AI
techniques: ANFIS and OP-ELM. In: 2019 international Aegean conference on electrical
machines and power electronics (ACEMP) & 2019 international conference on optimization of
electrical and electronic equipment (OPTIM). IEEE

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_8

Load Forecasting Using Different Techniques
Arshi Khan1 and M. Rizwan1
(1) Department of Electrical Engineering, Delhi Technological University,
Delhi, 110042, India

Arshi Khan (Corresponding author)


Email: [email protected]

M. Rizwan
Email: [email protected]

Abstract
Load forecasting uses previous data from the electrical system to predict
future electric load. For the planning and operation of the utility, precise
models for forecasting the electric power load are required. Load
forecasting can also be used to support an electric utility’s future system
operations, such as load switching, demand-side management, and
identifying and forecasting energy consumption patterns. Electric load
prediction is critical in the electric power system because it determines
when and how much generation, transmission, and distribution capacity
must be arranged to match the predicted load without supply interruptions.
As a result, the higher the quality of the forecast, the more accurate,
dependable, and timely the results are. In this chapter, various
methodologies used for load forecasting are discussed. With the help of
artificial intelligence techniques, namely fuzzy logic, ANN, and ANFIS, the
future load is predicted. All three methods are used for the data set
considered, and the results are analyzed. The results of all three
methodologies are studied and compared.

Keywords Short-term load forecasting – Artificial intelligence – Fuzzy logic

1 Introduction
With the rise in people’s living standards, the share of cooling load, such as
air conditioning, is increasing in summer load, posing a threat to the power
system's safe operation and economic dispatch. Accurate daily load
forecasting can provide a strong scientific basis for optimal unit
combination, economic dispatch, electricity market transaction, and demand
response in the implementation of national energy saving and emission
reduction policies. The demand for power in India is steadily increasing.
India's total installed capacity is 3,95,075 MW as of January 31, 2022. The
reason for the rise in electricity usage is urbanization and population
growth. It may be stated that this need will continue to rise in the future.
Electricity is produced in response to demand [1].
Demand forecasting is a significant aspect in the development of any model for power planning, particularly in today's evolving power system structure. The form of the forecast depends on the kind of planning and the precision required. Depending upon the time horizon of the planning strategy, load forecasting can be classified into the following three types:
Short-term load forecasting (STLF): In this method, the time period generally ranges from an hour to a week. It guides approximate load flow studies and supports decisions that prevent overloading. Short-term forecasting is used to provide the information needed for managing day-to-day system operations and unit commitment.
Medium-term load forecasting (MTLF): In this method, the time range is from a week to a year. Forecasts over these horizons are important for various tasks within a utility organization. Medium-term forecasting is used to plan fuel supplies and unit management.
Long-term load forecasting (LTLF): In this method, the time range is more than a year. It is used to supply the electric service organization with precise expectations of future requirements for expansion, equipment purchases, or staff hiring.
In [2], the importance of load forecasting and the issues surrounding it are highlighted. Various artificial intelligence methodologies that can be used in forecasting, such as fuzzy, ANN, statistical, and spatial approaches, are explained. It underlines the importance of these intelligent system approaches and helps in recognizing various aspects of research into these methods. In [3], a priority vector-based technique for load forecasting is used.
Records of almost two years of load at every hour and weather are
extracted, and the relation between them is drawn and categorized based on
that. It is an adaptive technique as it generates relationship coefficient
between weather parameters and load continuously. As these relations
change from time to time, it automatically updates the changed coefficient
between these two parameters. It is used to predict forecast of load of one
week. In [4], knowledge-based expert system is used for short-term load
forecasting (STLF). The expert system developed in this method is written
using 5 years of historical data in prolog. Distinct load shapes and their load
calculations are done. Various categories of load usage according to the
observation are set like low level of load during Chinese New Year or at the
time of typhoon. With the help of these observations, new rules or
information are made or set for the purpose of short-term load forecasting.
In [5], a linear regression-based model used for STLF is described. The model covers several areas: an innovative model-building approach in which parameters are estimated by weighted least squares within the linear regression technique; a reverse errors-in-variables technique with which the effect of potential errors on the load forecasts can be relieved; and a way to differentiate the daily time-independent peak load forecast and the maximum hourly peak load forecast from negative bias.
In this chapter, three models are developed for short-term load
forecasting using fuzzy logic, ANN, and ANFIS.

2 Fuzzy Logic-Based Forecasting


The fuzzy logic concept was introduced by Professor Lotfi A. Zadeh. Truth is not an absolute concept, and fuzzy logic provides an approach to represent degrees of certainty. It is a technique for reasoning that resembles human thinking. It is a problem-solving tool that falls somewhere between classical logic's precision and the real world's inherent imprecision. Several fuzzy
logic-based algorithms have been established in recent years to interpret
picture data with vagueness and ambiguity due to the acquisition phase, as
well as imprecise and ill-defined knowledge about the image contents.
Fuzzy sets, which are the main parts of fuzzy logic, can be used to handle
the imprecision in an image stored in the pixels. Vague ideas such as sharp
boundaries, excellent contrast, high saturation, bright red, and so on can be
recognized qualitatively by human reasoning and articulated in a formal
way using fuzzy logic, allowing a machine to emulate human reasoning.

2.1 Architecture of Fuzzy Logic


Figure 1 shows the block diagram of fuzzy logic. The FL methodology imitates human decision-making, which includes all intermediate possibilities between the digital values YES and NO. The four main parts can be explained as follows:

Fig. 1 Fuzzy logic block diagram

(1) Fuzzifier: The method of fuzzification involves converting crisp inputs


into fuzzy sets defined on the input space. The component of the
system that performs this procedure is known as a fuzzifier. In this
stage, a fuzzification function is used to express the measurement
uncertainty for each input variable. The fuzzification function's
objective is to interpret measurements of input variables, each of
which is expressed as a real number, as more realistic fuzzy
approximations of those real numbers.
(2) Fuzzy rule base: It contains the set of rules and the IF–THEN conditions given by the experts to govern the decision-making system, based on linguistic information. Recent advances in fuzzy theory offer several effective strategies for the design and tuning of fuzzy controllers. Most of these advances reduce the number of fuzzy rules.

(3) Fuzzy inference system: It determines the matching degree of the current fuzzy input with respect to each rule and decides which rules are to be fired by the input field. The fired rules are then combined to form the control actions.

(4) Defuzzifier: A crisp value is frequently required as the output of a


fuzzy rule-based system, which is a necessity in many engineering
challenges, such as fuzzy control applications. A defuzzification stage
is required in these circumstances to achieve a crisp output from the
fuzzy output generated by rule inference.

Membership functions allow us to graphically represent a fuzzy set. In a membership function, the x axis represents the universe of discourse, whereas the y axis represents the degree of membership in the [0, 1] interval.
Membership functions can be classified into two groups: those made up of straight lines, the "linear" ones, and the "curved" or "nonlinear" ones. Some of the most common membership functions are listed as follows (a short sketch of the triangular case is given after the list):
(1) Triangular function.

(2) Trapezoidal function.

(3) Gaussian function.
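As an illustration, a triangular membership function of the kind used later in this chapter's fuzzy model (Fig. 4) can be evaluated as below; the breakpoints a, b, and c and the sample value are illustrative, not taken from the model.

import numpy as np

def trimf(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set with breakpoints (a, b, c)."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b != a else np.ones_like(x)
    right = (c - x) / (c - b) if c != b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Membership of a normalized load value of 0.45 in a set centred at 0.5
print(trimf(0.45, 0.3, 0.5, 0.7))   # prints 0.75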

2.2 Fuzzy Logic Model


A generalized flowchart for the fuzzy approach is shown in Fig. 2. The load consumed at a location is recorded every minute, and the average is calculated every 15 minutes. The input and output data are normalized values scaled down to the range 0.1–0.9; this is done to avoid convergence problems. The normalized values of the data can be seen in Table 1. Fuzzy logic is basically a generalization of the Boolean logic used in the design of digital circuits: Boolean logic takes only two values, i.e., false (0) or true (1), whereas here the input can also take values between 0 and 1. It works on the degrees of possible outcomes of an input to attain the definite output. The actual data is scaled down using the equation below:

Ln = Ln,min + (Ln,max − Ln,min) × (L − Lmin)/(Lmax − Lmin)    (1)
Fig. 2 Flowchart for fuzzy logic

Table 1 Normalized values (Day 1–Day 6 are the inputs; the last column is the output)

Time Day 1 Day 2 Day 3 Day 4 Day 5 Day 6 Output
00:15 0.3 0.34 0.44 0.43 0.48 0.50 0.54
00:30 0.27 0.30 0.41 0.4 0.48 0.47 0.54
00:45 0.27 0.30 0.41 0.34 0.47 0.39 0.54
01:00 0.26 0.27 0.39 0.34 0.40 0.49 0.54
01:15 0.26 0.27 0.37 0.34 0.40 0.49 0.54
01:30 0.25 0.27 0.37 0.34 0.40 0.49 0.51
01:45 0.25 0.27 0.38 0.34 0.40 0.49 0.39
02:00 0.26 0.27 0.37 0.34 0.40 0.43 0.39
02:15 0.25 0.27 0.37 0.34 0.40 0.38 0.39
02:30 0.26 0.27 0.37 0.34 0.40 0.38 0.39
02:45 0.25 0.27 0.37 0.34 0.40 0.38 0.39
03:00 0.26 0.27 0.37 0.34 0.39 0.38 0.39
03:15 0.25 0.27 0.37 0.34 0.35 0.38 0.39
03:30 0.25 0.24 0.37 0.34 0.35 0.38 0.39
03:45 0.26 0.25 0.37 0.34 0.35 0.38 0.39
04:00 0.26 0.26 0.33 0.34 0.35 0.38 0.39
04:15 0.26 0.24 0.33 0.32 0.35 0.38 0.39
04:30 0.25 0.24 0.33 0.30 0.35 0.37 0.39
04:45 0.25 0.25 0.33 0.30 0.35 0.35 0.39
05:00 0.25 0.25 0.33 0.31 0.35 0.35 0.39
05:15 0.26 0.27 0.33 0.31 0.38 0.35 0.39
05:30 0.27 0.27 0.33 0.31 0.38 0.35 0.39
05:45 0.27 0.27 0.33 0.31 0.35 0.35 0.39
06:00 0.27 0.27 0.33 0.31 0.35 0.35 0.39
06:15 0.26 0.27 0.30 0.31 0.35 0.35 0.36
06:30 0.25 0.27 0.34 0.31 0.32 0.32 0.35
06:45 0.25 0.33 0.33 0.27 0.33 0.36 0.35
07:00 0.26 0.36 0.33 0.33 0.41 0.45 0.44
07:15 0.26 0.35 0.33 0.38 0.41 0.45 0.45
07:30 0.27 0.35 0.34 0.37 0.43 0.46 0.46
07:45 0.26 0.35 0.40 0.38 0.45 0.46 0.48
08:00 0.26 0.35 0.42 0.38 0.48 0.47 0.52
08:15 0.28 0.35 0.42 0.41 0.51 0.50 0.53
08:30 0.28 0.34 0.46 0.42 0.53 0.53 0.56
08:45 0.29 0.31 0.48 0.46 0.55 0.55 0.61
09:00 0.30 0.31 0.54 0.48 0.57 0.57 0.63
09:15 0.31 0.31 0.51 0.46 0.61 0.61 0.56
09:30 0.35 0.29 0.62 0.43 0.47 0.70 0.64
09:45 0.30 0.38 0.75 0.61 0.78 0.73 0.68
10:00 0.40 0.45 0.77 0.66 0.8 0.73 0.79
10:15 0.42 0.46 0.77 0.69 0.8 0.66 0.84
10:30 0.43 0.45 0.77 0.73 0.84 0.69 0.87
10:45 0.43 0.46 0.81 0.77 0.84 0.69 0.88
11:00 0.44 0.33 0.81 0.79 0.87 0.72 0.88
11:15 0.44 0.40 0.81 0.80 0.89 0.82 0.88
11:30 0.44 0.40 0.81 0.80 0.88 0.89 0.88
11:45 0.44 0.39 0.81 0.78 0.9 0.88 0.88
12:00 0.45 0.36 0.83 0.75 0.88 0.86 0.87
12:15 0.44 0.38 0.83 0.73 0.85 0.84 0.86
12:30 0.43 0.37 0.81 0.73 0.87 0.83 0.84
12:45 0.43 0.37 0.79 0.78 0.87 0.82 0.8
13:00 0.43 0.37 0.79 0.79 0.87 0.82 0.78
13:15 0.43 0.35 0.77 0.76 0.81 0.82 0.75
13:30 0.40 0.35 0.77 0.67 0.67 0.78 0.74
13:45 0.39 0.35 0.79 0.61 0.77 0.78 0.75
14:00 0.38 0.36 0.78 0.59 0.79 0.79 0.74
14:15 0.38 0.39 0.78 0.56 0.79 0.79 0.77
14:30 0.38 0.35 0.76 0.60 0.79 0.81 0.85
14:45 0.38 0.43 0.75 0.73 0.75 0.83 0.85
15:00 0.38 0.46 0.69 0.73 0.74 0.83 0.83
15:15 0.38 0.25 0.62 0.76 0.74 0.81 0.80
15:30 0.38 0.39 0.48 0.75 0.73 0.78 0.78
15:45 0.35 0.35 0.44 0.76 0.70 0.76 0.73
16:00 0.32 0.38 0.44 0.66 0.83 0.69 0.67
16:15 0.31 0.38 0.44 0.60 0.55 0.54 0.51
16:30 0.31 0.1 0.44 0.47 0.54 0.49 0.44
16:45 0.36 0.28 0.44 0.42 0.55 0.45 0.40
17:00 0.02 0.26 0.43 0.43 0.57 0.45 0.40
17:15 0.36 0.25 0.40 0.40 0.42 0.47 0.43
17:30 0.36 0.27 0.37 0.38 0.42 0.45 0.43
17:45 0.36 0.24 0.37 0.36 0.40 0.41 0.44
18:00 0.36 0.24 0.37 0.36 0.39 0.41 0.44
18:15 0.29 0.25 0.35 0.35 0.33 0.39 0.41
18:30 0.26 0.27 0.37 0.32 0.33 0.38 0.40
18:45 0.26 0.20 0.45 0.32 0.33 0.38 0.37
19:00 0.3 0.26 0.48 0.32 0.33 0.41 0.37
19:15 0.31 0.25 0.48 0.32 0.33 0.49 0.38
19:30 0.31 0.26 0.48 0.33 0.35 0.46 0.38
19:45 0.28 0.34 0.48 0.36 0.37 0.47 0.40
20:00 0.27 0.36 0.49 0.43 0.37 0.46 0.42
20:15 0.28 0.39 0.52 0.43 0.37 0.46 0.41
20:30 0.28 0.38 0.52 0.43 0.37 0.47 0.38
20:45 0.29 0.35 0.49 0.43 0.37 0.50 0.40
21:00 0.29 0.36 0.49 0.45 0.38 0.53 0.50
21:15 0.29 0.36 0.49 0.47 0.41 0.53 0.53
21:30 0.29 0.35 0.51 0.47 0.41 0.53 0.56
21:45 0.30 0.35 0.52 0.47 0.41 0.53 0.56
22:00 0.30 0.29 0.52 0.47 0.41 0.53 0.57
22:15 0.30 0.28 0.52 0.47 0.41 0.53 0.56
22:30 0.30 0.31 0.51 0.47 0.41 0.53 0.56
22:45 0.30 0.38 0.49 0.47 0.41 0.53 0.53
23:00 0.30 0.38 0.49 0.46 0.41 0.53 0.52
23:15 0.30 0.38 0.43 0.46 0.43 0.53 0.52
23:30 0.30 0.38 0.40 0.44 0.54 0.53 0.52
23:45 0.30 0.38 0.40 0.43 0.48 0.53 0.51
00:00 0.27 0.38 0.40 0.4 0.48 0.53 0.49

where
Ln,max is 0.9;
Ln,min is 0.1;
Lmax is the maximum load value;
Lmin is the minimum load value;
L is the load to be converted;
Ln is the normalized value.
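A minimal sketch of Eq. (1) in Python is shown below (this is not part of the original model code); the sample load values are illustrative.

import numpy as np

def normalize(load, l_min, l_max, lo=0.1, hi=0.9):
    """Min-max scaling of a load value into the range [lo, hi], as in Eq. (1)."""
    return lo + (hi - lo) * (load - l_min) / (l_max - l_min)

load = np.array([120.0, 250.0, 380.0])            # kW, illustrative values
print(normalize(load, load.min(), load.max()))    # [0.1, 0.5, 0.9]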
The fuzzy methodology put forward can be utilized as a guide for forecasting loads with various time series. An accurate fuzzy system can be made by dividing the span into various intervals. The basic fuzzy logic model used for STLF on this data can be seen in Fig. 3. The spans of the input as well as the output are divided into thirteen triangular membership functions, as presented in Fig. 4.
Fig. 3 Model of fuzzy logic for STLF

Fig. 4 Input and output triangular membership function


Triangular membership functions are utilized, where the support of each membership function is settled based on the gathered data. The formation of the production rules depends on basic linguistic knowledge and is the essential premise of the forecast model. The output of the model relies exclusively upon this; hence, after preliminary investigation of the data set, the following production rules are utilized, although these may be different for another set of data. Together, the triangular membership functions and the fuzzy rules are intended to give a simpler technique by which we can implement intuition and experience directly into a computer program.
After the crisp input is applied with logical reasoning or through the
fuzzy inference system, an output is obtained to which the defuzzification
process can be applied and the crisp output can be obtained. The output
obtained from the model is then compared to the actual load which can be
seen in Fig. 5. One of the outputs for the 28th rule is seen in Fig. 6.

Fig. 5 Actual load versus forecasted load using FL


Fig. 6 Rule viewer for 28th rule
Then, the absolute relative error (ARE) is calculated with the help of the formula given below:

ARE (%) = |(Ltarget − Lforecast)/Ltarget| × 100    (2)

where Ltarget is the target load and Lforecast is the forecast load obtained through the fuzzy logic model for STLF. The error obtained is observed in Fig. 7.
Fig. 7 Error obtained in fuzzy
The comparison is done to check the accuracy of the fuzzy logic model
developed for STLF. It can be observed that the forecasted load is close to the actual demand data. It was observed that the minimum ARE is
0.052% and maximum ARE is 8.514%. The average absolute relative error
calculated is 2.376%. It can be concluded that the error is low. Hence, the
fuzzy model developed for the purpose of STLF for this data is accurate.

3 Artificial Neural Network


It is also known as a neural network (NN). It is a machine that acts like the human brain, with learning capacity and generalization as its attributes. ANNs have learning capabilities that enable them to deliver better outcomes as more data becomes available. They are a foundation of AI and tackle problems that would prove impossible or difficult by human or statistical standards. They are basically nonlinear mathematical processing networks. They are being used in fields like image recognition, load
forecasting, speech recognition, energy consumption prediction, data
retrieval, and mine dam water level prediction and monitoring. Because of
their ability to work on complicated and nonlinear systems, artificial
intelligence methods for predicting complex and ambiguous models have
grown popular. Artificial neural networks (ANNs) are based on the
operation of biological neural networks and can learn in a similar way to
humans. It has three layers: an input layer that accepts data, a hidden layer
that processes data between the input and output levels, and an output layer
that outputs the data (which sends computed data). Each layer is made up of
neurons that process the input parameters and produce an output, with a
weight factor applied to the connections between layers.
In [6], ANN is studied in context of its strength in field of power system
and its application. Also, its application in various problems of power
system is briefly discussed. An overview of ANN-based models for STLF is
presented in [7]. Papers published between 1991 and 1999 are reviewed; these papers concern applications of NN for STLF. Each paper is critically reviewed to properly understand the use of NN in forecasting. A further developed NN approach for STLF is produced in [8]. An approach befitting the selection of training cases for the NN is suggested. This approach has the benefit of circumventing the issue of holidays and sudden changes in weather patterns, which make network training difficult. Additionally, an improved algorithm for the neural network is presented. In [9], the practicality of utilizing a simple NN
for STLF is researched. The combination of nonlinear and linear neural
network is created. The estimates are computed utilizing weights that are re-
estimated using recent observations.

3.1 Architecture of Artificial Neural Network


ANNs are made up of different nodes, which mimic the natural neurons of the human brain. The neurons are associated through connections, and they communicate with one another. The nodes can take input data and perform straightforward operations on it. The outcome of these operations is passed to other neurons. The output at every node is called its activation or node value. An ANN consists of three layers:
(i) Input layer: It is the first layer. It enters the external input data in the
network.

(ii) Hidden layer: It is the second layer. It is the layer between output and
input. All sorts of calculation are performed in this to determine any
pattern or hidden feature.

(iii) Output layer: It is the final layer. After going through some
transformation series in hidden layer, it provides an output which is
conveyed by this layer.
The basic structure is seen in Fig. 8.

Fig. 8 Basic structure of ANN

3.2 ANN Method for Load Forecasting


Using the algorithm discussed through flowchart in Fig. 9, the forecasted
load data for the seventh day for the location is generated through ANN.
With the help of “nftool” in MATLAB, ANN model is developed. Feed
forward network type of ANN is used here. Training of network is done by
using “Levenberg–Marquardt backpropagation algorithm”.
Fig. 9 Flowchart of ANN algorithm
The input layer includes information that the network must use during the learning process, like target data that the network must imitate. The weights are modified during the training process to provide the best outcomes. The network's input and target vectors are divided into three groups at random as follows:
60% will be used for training.
20% for validating that the network is generalizing and terminating the
training process before overfitting or terminating training when
generalization has reached its limit.
The 20% was utilized as a completely independent network
generalization test. This has no bearing on training; instead, it serves as
an independent indicator of network performance during and after
training (Fig. 10).

Fig. 10 ANN model in MATLAB
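The chapter builds this model with MATLAB's nftool and Levenberg–Marquardt training. Purely as an illustration, a comparable workflow is sketched below in Python: scikit-learn's MLPRegressor (which does not offer Levenberg–Marquardt, so its default Adam solver stands in) is trained on six days of 15-min load values to predict the seventh day, with the same 60/20/20 split. The file name, network size, and data layout are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

data = np.loadtxt("normalized_load.csv", delimiter=",")  # assumed: 96 rows x 7 columns
X, y = data[:, :6], data[:, 6]                            # days 1-6 as inputs, day 7 as target

n = len(X)
idx = np.random.permutation(n)
tr = idx[: int(0.6 * n)]                                  # 60% training
va = idx[int(0.6 * n): int(0.8 * n)]                      # 20% validation
te = idx[int(0.8 * n):]                                   # 20% independent test

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X[tr], y[tr])
for name, part in (("validation", va), ("test", te)):
    err = 100 * mean_absolute_percentage_error(y[part], net.predict(X[part]))
    print(f"{name} MAPE: {err:.2f}%")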

Figure 11 represents the regression plot obtained during ANN model


training and testing. Figure 12 compares the actual load and the forecasted load from the ANN model. Figure 13 is the error plot obtained by ANN.
Fig. 11 Regression plot while implementing ANN model
Fig. 12 Actual versus forecasted load through ANN

Fig. 13 Error obtained using ANN


The average absolute relative error calculated is 2.913%. It is slightly more than the fuzzy model's error but accurate enough for the purpose of STLF.

4 Adaptive Neuro-Fuzzy Inference System


ANFIS can successfully address all sorts of nonlinear and complex problems by combining the benefits of ANN and fuzzy logic. It merges mathematical and linguistic information by using fuzzy methods, and it additionally utilizes the ANN's capacity for data classification and pattern identification. Also, ANFIS produces a smaller retention error and is more transparent to the user in comparison with ANN. It is a combination of both ANN and fuzzy logic (FL); hence, it has the advantages of both methods while overcoming their flaws. FL cannot gain any information from the data, while ANN lacks knowledge representability and logic.
In [10], adaptive neuro-fuzzy inference system (ANFIS) is used for the
purpose of studying STLF design. In this paper, consumed load is
forecasted with the help of a multi-ANFIS. The inputs of the presented multi-ANFIS model include the maximum and minimum temperature, the date, the climate condition, and the consumed load of the previous day, and its output is the forecast of power load consumption. In
[11], ANFIS model is developed for short-term load forecasting purpose. It
is the combination of both fuzzy and ANN. Factors like data types and
weather, etc., are used in this model. The training of the model is done by
historical load data. ANFIS-based approach of load forecasting is used for
small regions with low consumption in [12].

4.1 Architecture of ANFIS


ANFIS consists of five layers of neurons, as shown in Fig. 14. Every layer has its own behavior. Layers 2, 3, and 5 have fixed behavior, whereas layers 1 and 4 have varying parameters; it is in these layers that modifications are made during training. These five layers are as follows:

Fig. 14 Basic structure of ANFIS


(1) Layer 1—Fuzzification

In this layer, the process known as fuzzification is carried out. The degree to which every input belongs to a fuzzy set is assigned a value between 0 and 1. Every node in this layer is an adaptive node. The input-output relation of node i, with membership function μAi applied to input x, can be given as follows:

O1,i = μAi(x)    (3)

(2) Layer 2—Fuzzy rule

Every node in this layer is fixed and corresponds to a rule. Each node multiplies the incoming signals, which represent the degrees to which the inputs satisfy the membership functions, and the product represents the firing strength of the rule. The output of node i can be defined as follows:

O2,i = wi = μAi(x) · μBi(y)    (4)

(3) Layer 3—Normalization

In this layer, the fixed nodes are labeled N. The output of this layer is the normalized firing strength, i.e., the ratio of each rule's firing strength to the sum of the firing strengths of all rules:

O3,i = w̄i = wi / (w1 + w2)    (5)

(4) Layer 4—Defuzzification

Every node computes the weighted consequent value of its rule, which represents the contribution of that rule to the overall output. These nodes are adaptive, like the nodes of the fuzzification layer. Each node calculates the output of its rule from the consequent parameters (pi, qi, ri) as follows:

O4,i = w̄i fi = w̄i (pi x + qi y + ri)    (6)
(5) Layer 5—Output

It is the final layer. By summing all the incoming signals, it provides the overall output as below:

O5 = Σi w̄i fi    (7)
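As an illustration of how these five layers fit together, the following hedged Python sketch evaluates the forward pass of a minimal two-input, two-rule first-order Sugeno system of the kind ANFIS trains; the Gaussian membership parameters and consequent coefficients are arbitrary placeholders, not values from the chapter.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Layer 1: Gaussian membership function (fuzzification)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_forward(x, y):
    # Layer 1: membership degrees for each input (placeholder parameters).
    mu_A = [gaussian_mf(x, 0.3, 0.2), gaussian_mf(x, 0.7, 0.2)]
    mu_B = [gaussian_mf(y, 0.3, 0.2), gaussian_mf(y, 0.7, 0.2)]

    # Layer 2: firing strength of each rule (product of memberships), Eq. (4).
    w = np.array([mu_A[0] * mu_B[0], mu_A[1] * mu_B[1]])

    # Layer 3: normalized firing strengths, Eq. (5).
    w_bar = w / w.sum()

    # Layer 4: rule consequents f_i = p_i*x + q_i*y + r_i (placeholder p, q, r).
    p, q, r = np.array([1.0, 0.5]), np.array([0.2, 0.8]), np.array([0.1, 0.3])
    f = p * x + q * y + r

    # Layer 5: weighted sum of rule outputs, Eq. (7).
    return float(np.sum(w_bar * f))

print(anfis_forward(0.4, 0.6))
```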

4.2 ANFIS Model for Load Forecasting


The ANFIS model is developed in MATLAB 2018 using the "anfisedit" tool. The training part of the methodology relies on a system that collects data from the plant's database at regular intervals so that the data can be analyzed and potential energy consumption patterns identified. Approximately 70% of the data is used for training and 30% for testing. Using the algorithm described by the flowchart in Fig. 15, the forecasted load data for the seventh day at the location is generated through ANFIS.
Fig. 15 Flowchart for ANFIS algorithm
Figure 16 shows the comparison of forecasted load from ANFIS model
and actual load recorded. Figure 17 is the error plot obtained by ANFIS.

Fig. 16 Actual versus forecasted load through ANFIS

Fig. 17 Error obtained using ANFIS

The average absolute relative error calculated is 1.953%, which is lower than that of the fuzzy and ANN models. It can therefore be said that the developed model is accurate for load forecasting.
5 Conclusion
The importance of short-term load forecasting is growing with the increasing utilization of electricity. Machine learning techniques are proving to be very useful for electricity load forecasting and are frequently employed as forward-looking approaches in electricity generation, market planning activities, and planning the development of the distribution network.
For the fuzzy model, it is observed that at 12:30 pm the accuracy of the developed model is 100%, and the remaining forecasts lie close to the actual load. The average absolute relative error calculated is 2.376%, so it can be concluded that the model developed for STLF is quite accurate (Fig. 18).

Fig. 18 Comparison of actual load with all the techniques used

References
1. Hu L, Zhang L, Wang T, Li K (2020) Short-term load forecasting based on support vector regression considering cooling load in summer. Chin Contr Dec Conf (CCDC) 2020:5495–5498
2. Campbell PRJ, Adamson K (2006) Methodologies for load forecasting. In: 3rd international IEEE conference intelligent systems, pp 800–806
3. Rahman S, Shrestha G (1991) A priority vector based technique for load forecasting. IEEE Trans Power Syst 6(4):1459–1465
4. Ho K-L et al (1990) Short term load forecasting of Taiwan power system using a knowledge-based expert system. IEEE Trans Power Syst 5(4):1214–1221
5. Papalexopoulos AD, Hesterberg TC (1989) A regression-based approach to short-term system load forecasting. In: Conference papers power industry computer application conference, pp 414–423
6. Bansal RC (2006) Overview and literature survey of artificial neural networks applications to power systems (1992–2004). IE (I) J EL 86:282–296
7. Hippert HS, Pedreira CE, Souza RC (2001) Neural networks for short-term load forecasting: a review and evaluation. IEEE Trans Power Syst 16(1):44–55
8. Peng TM, Hubele NF, Karady GG (1992) Advancement in the application of neural networks for short-term load forecasting. IEEE Trans Power Syst 7(1):250–257
9. Peng TM, Hubele NF, Karady GG (1990) Conceptual approach to the application of neural network for short-term load forecasting. IEEE Int Sympos Circuits Syst 4:2942–2945
10. Souzanchi KZ, Fanaee TH, Yaghoubi M, Akbarzadeh TM (2010) A multi adaptive neuro fuzzy inference system for short term load forecasting by using previous day features. In: International conference on electronics and information engineering, pp V2-54–V2-57
11. Peng J, Gao S, Ding A (2017) Study of the short-term electric load forecast based on ANFIS. In: 32nd Youth Academic annual conference of Chinese Association of Automation (YAC), pp 832–836
12. Akarslan E, Hocaoglu FO (2018) A novel short-term load forecasting approach using adaptive neuro-fuzzy inference system. In: 6th International Istanbul Smart Grids and Cities Congress and Fair (ICSG), pp 160–163

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_9

Time Load Forecasting: A Smarter Expertise Through Modern Methods
Trina Som1
(1) Dr. Akhilesh Das Gupta Institute of Technology and Management,
Delhi, FC-26, Shastri Park, New Delhi, 110053, India

Trina Som
Email: [email protected]

Abstract
Electricity is a necessary aspect of modern life and benefits us in a variety of ways. It is part of the daily life of the human race, covering basic lighting, cooling, heating, cooking, and refrigeration, as well as the operation of electronic appliances, online systems, transportation, and medical services. With growing awareness of efficient and green energy production, accurate forecasting of load demand has become one of the most vital tasks in today's power sector. Energy suppliers and others involved in the generation, distribution, transmission, and marketing of electric energy rely heavily on demand estimates. Electricity demand estimates are used to guide investment decisions in power generation, transmission, distribution, and markets, as well as in network infrastructure. Forecasts are also important for development
experts as well as power utilities, energy policymakers, and private
investors. Forecasting of electric power demand is regarded as one of the
most important aspects of economic operation of power systems, which
serves as a significant cost-cutting potential for power utilities or
companies. Many studies have achieved maximum savings when control operations, fuel allocation, economic dispatch, and unit commitment are carried out on the basis of proper load forecasting. Hence, the development of exact models for projecting electricity demand is critical for the functioning and planning of utility companies. Calendar and seasonal information, wind speed, air temperature, historical knowledge of the load pattern, geographical information, and economic events are all aspects that influence load prediction. On a time basis, load forecasting is mainly classified into short-term, medium-term, and long-term forecasts. Different models, with different modes and constraining parameters, need appropriate controlling methods. These methods are generally known as traditional forecasting techniques, modified traditional techniques, and modern techniques. Although the conventional methods are able to account for the above-mentioned aspects in time series forecasting, they often take longer and follow complicated paths to predict the desired value. Unlike conventional methods, hybrid models are capable of adapting to fluctuations in the raw electric load data, resulting in better performance and higher forecasting accuracy. By fusing the best of statistics and machine learning techniques, hybrid methods promise to advance time series forecasting. The basic concept of hybrid forecasting techniques lies in compensating the weakness of one method with the strength of another. Research in this field remains essential to establish the statistical significance of the available data, analyze existing methods, pose generalized research questions, and further explore areas of possible improvement.

Keywords Load forecasting – Short-term – Long-term – Optimization – Grasshopper optimization method

1 Introduction
Electricity forecasting is an important part of power grid operation and has attracted considerable academic interest. Forecasting allows well-informed and efficient responses to electricity demand. However, many forecasting models are available, which makes it difficult for inexperienced researchers to choose the right one.
Forecasting models are frequently employed in a variety of fields, including the prediction of stock market movements and stock market indexes [1]. In business, they are used to schedule employees, manage inventories, and forecast demand; in meteorology, they are used to forecast the weather [1]; and they support many economic estimations in social and professional contexts.
Power plant control relies heavily on forecasts and the exchange of
electrical power in linked systems [2]. Proper forecast aids the planners in
comprehending the impact of several factors influencing energy
consumption, thereby providing better results with good decisions [3].
Prediction of electrical load demand is an important part of the power industry's planning and of the functioning of electrical power grids [3]. Electrical load demand projections are closely linked to economic growth, national security, and society's day-to-day operations [4].
As a result, accurate electric load forecasting and scheduling of power generation are crucial for power system management. On a temporal scale, forecasts include short-term predictions, such as those used to keep a balance between electricity demand and power generation, and long-term predictions, such as those used for capacity expansion, return-on-investment evaluations, and revenue studies [5]. Various forecasting models and frameworks have been explored and tested to ease the acquisition of results in business and marketing [6]. Although several forecasting models and methodologies have been developed to compute reliable load forecasts, it is difficult to select an acceptable one and to develop a forecasting model for a given energy network, and none of them can be used for all demand patterns [7]. Hence, the emergence of new and modified forecasting methods is a prime need of society at the present time.

2 Types of Electrical Load Forecasting


For proper planning and functioning of a power utility firm, a good model
for projecting electric power consumption is required. Accurate prediction
of load is critical in assisting an electric utility to make key decisions about power, load switching, voltage control, network reconfiguration, and infrastructure development [8]. Among the various forecasting models, the most common classification is based on the time horizon: long-, medium-, and short-term forecasting models belong to this category.
2.1 Long Term
This form of forecasting is usually done over a horizon of several years (approximately twenty years). It is crucial for long-term planning, the construction of new generation, and the development of the power supply and distribution system. Long-term power demand may increase in annual terms, and the contour of the temporal load profile could also change. Although precise long-term load forecasts are needed for grid growth and operation [9], the topic has received comparatively little attention in other respects. Scaling up the current electrical load has conventionally been used to forecast the hourly profile of local as well as national electricity consumption [10]. Compared with today's circumstances, greater end-user freedom and further electrification will change the hourly profile of load demand. Power-system planners must account for such changes in their modeling concepts and analysis frameworks in order to propose cost-effective and practical solutions. Peak electricity demand is a major concern since it dictates the size and generating capacity of the electrical infrastructure at all times. Understanding how current and future developments, such as building-integrated PV systems, heat pumps, electric vehicles, energy storage devices, and variations in demand response, affect peak electrical load is critical. Long-term models that include the power sector must incorporate better representations of the construction and transportation sectors to enable more accurate load forecasting.

2.2 Medium Term


Medium-term forecasting is useful for maintenance planning, fuel procurement, energy transactions, and utility revenue review, and normally covers a week to a year. This information helps in planning proper power system operation and offers considerable advantages for businesses in the energy market, whether regulated or unregulated [11].
Taipower, a state-owned company combining power generation, distribution, and transmission, is an example of a regulated business that might benefit from MTLF [12]. For many such businesses, MTLF data can serve as a barometer of energy consumption and growth at the regional and national levels, as well as assisting with medium- and long-term energy planning [13]. Load projections over the medium term can also be utilized to plan and manage network repairs, and the use of intermittent resources is maximized when fuel purchases for power generation are effectively negotiated. Major choices about long-term power system development, such as building a power plant that takes two or more years to complete, typically necessitate longer-term estimates. Medium-term forecasting, on the other hand, can provide useful information, e.g., for improving the transmission grid and guiding the development of other infrastructure that must be completed within a shorter timeframe.
For distribution systems, grid congestion is a major issue, and it can have a considerable influence on consumer energy prices and on the efficiency of the overall system [14]. In a regulated sector, MTLF can be utilized to increase overall system reliability by optimizing energy generation and transmission. Most of the advantages realized through precise MTLF in a regulated energy market can also be achieved in a deregulated one. Transmission congestion affects all energy delivery systems regardless of legislation, and transmission and distribution companies in a deregulated energy economy are affected in the same way. These deregulated enterprises can successfully use MTLF data to direct the upgrading of their transmission networks in order to provide better service to their clients.

2.3 Short Term


In recent decades, short-term load forecasting has been one of the most significant fields in the electrical industry, enabling electricity systems to run efficiently and reliably [15]. It is particularly important for load flow studies, planning and monitoring, power unit scheduling, and contingency analysis. This type of forecasting generally covers an interval of one hour to one week. It is critical for a utility's day-to-day operations, including the proper scheduling of transmission of the generated electricity. Another type of forecasting, ultra- or very short-term load forecasting (VSTLF), is used for real-time control and looks between a few minutes and an hour ahead. Several forecasting methods and models have been created to produce accurate forecasts for the hour-to-week interval, which are critical for the utility's day-to-day scheduling of power transmission and generation. Short-term load forecasting (STLF) has been a popular research area in recent decades.
STLF offers precise input into day-ahead scheduling, load and power flow analysis, planning and maintenance, and contingency analysis of power systems [16], so as to achieve improved reliability and effectiveness in power system operation and to help reduce the cost of operation. Standard statistical models used for this purpose include ARIMA, ARMAX, SARIMA, exponential smoothing, multi-variable regression, and Kalman filter based methods, while AI-based models such as knowledge-based expert systems, artificial neural networks (ANNs), evolutionary computation models, fuzzy theory and fuzzy inference systems, and support vector regression also enhance the efficiency of the system.
To obtain a sufficiently precise level of forecasting, many advanced hybrids of these AI-based models have lately been developed, driven by the rapid advancement of evolutionary algorithms (EAs) and novel computing ideas such as chaotic mapping functions and quantum and cloud computing concepts. By incorporating such superior methods, existing models such as ARIMA become capable of handling seasonal problems. STLF's research trends and progress have revealed a wealth of potential, deserving further investigation into this vital topic.
Considering these types of forecasting, several models of load
forecasting have been developed and adapted for improvement in electricity
generation and distribution.

3 Existing Models of Load Forecasting


The existing models for load forecasting are mainly developed on the basis
of certain parameters which are realized as time series-based analysis [17]
and estimates, qualitative techniques, and causal models. Generally, the
models were compared in terms of the timeframe they are supposed to
forecast. The most important issue becomes selecting the appropriate
forecasting method depending on the qualities of the time series data. On
the basis of different parameters, regression, bottom up, time series
analysis, ANN, and SVM are the five most prevalent models being
compared. Further, considering only the time series analysis, three different
types of series models are constructed, as exponential smoothing model,
moving average model, and ARIMA [18]. Many of the parameters under consideration are closely interrelated, for example through their dependence on the sampling frequency. Furthermore, a time series can represent a yearly budget or quarterly expenses, as well as monthly air traffic, weekly sales volume, daily weather conditions, hourly stock prices, minute-wise inbound calls in a call center, and even second-wise web traffic. In view of so many parameters, three main types of time series models came into use, namely, moving average, exponential smoothing, and ARIMA.
Two ways of forecasting can be distinguished: when the prediction of a future value is made on the basis of previous values of the series itself, it is called univariate time series forecasting, while predictions made on the basis of factors other than the series data are known as multi-variate time series forecasting. Auto Regressive Integrated Moving Average (ARIMA) is a type of model that describes a time series using its own previous values [19]; the series' own lags and the lagged forecast errors aid in forecasting future values. Any 'non-seasonal' time series that is not random white noise but exhibits a pattern can be modeled using ARIMA. If a time series has seasonal trends, the seasonal variant is used, named SARIMA, i.e., Seasonal ARIMA. The three terms that characterize an ARIMA model are p, d, and q, where p and q are the orders of the AR and MA terms respectively, and d is the number of differencing steps required to make the time series stationary. However, the nonlinearity of the influencing components makes electricity load forecasting challenging, and ARIMA alone fails to address this problem.
In this regard, support vector machines (SVMs) have been used to address time series problems and nonlinear regression satisfactorily. SVMs use the structural risk minimization (SRM) principle, which outperforms empirical risk minimization (ERM) [20]. The most fundamental concept in SRM is that, rather than minimizing the training error, an upper bound on the generalization error is minimized, which allows the SVM to achieve an optimal network structure. In addition, SVM regression transforms the original data x nonlinearly into a higher-dimensional space, and training amounts to solving a linearly constrained quadratic programming problem, guaranteeing that the SVM solution is unique and globally optimal.
On the basis of quantitative forecasting, which uses past data in numerical and continuous form, many other load forecasting models have also been developed, among them econometric modeling, judgmental forecasting, time series modeling, and the Delphi method. These methods create a forecasting logic by identifying the components that influence the forecast and constructing a functional form of the relationship between the identified factors. Short-term load prediction gives the most suitable results using these models.
The load forecasting models require proper controlling techniques on
the basis of criteria and parameters to be considered.

4 Controlling Method in Load Forecasting


Despite the fact that several forecasting techniques and models have been created to compute reliable load forecasts, it is difficult to select an acceptable one and to develop a forecasting model for a given energy network with a varying demand pattern. This has given rise to many research questions relating the chosen criteria to the platform on which the forecasting algorithms are run. Keywords such as "electricity demand models," "electricity prediction models," "electricity forecasting models," "online database," "advanced search tool," and so on are included.
Among the controlling methods, the cross-sectional or multi-factor forecasting approach focuses on the search for causal linkages between various inducing factors and the forecasted values, whereas time series-based forecasting methods rely more heavily on past data. When comparing these methodologies, researchers have found time series forecasting to be much easier and faster, as it avoids the numerous subjective aspects that could sway the accuracy of a multivariable forecasting model. Three types of time series-
based forecasting models can be found [21, 22], namely,
Models based on statistical data.
Models based on machine learning.
Models based on hybrid technology.

4.1 Classical Methods in Load Forecasting


In fundamental approaches, both qualitative and quantitative methodologies
are used to anticipate outcomes, with the most appropriate kind being
selected according to the data available. In subjective or qualitative forecasting methods, the future load is estimated subjectively using expert opinions; nonetheless, these are not simply guesses, as organized procedures have been developed for creating effective forecasts without the use of historical data [23]. Such tactics are useful when historical data is inaccessible or scarce. These approaches include the Delphi method, subjective curve fitting, and the technical comparison method. Quantitative or objective forecasting methods, on the other hand, operate through mathematical and statistical formulas. They are implemented when two criteria are satisfied: historical data is accessible in numerical form, and it is acceptable to assume that some features of past patterns will persist into the future. Quantitative forecasting comprises a diverse set of techniques, each with its own features, precision, and cost to consider when selecting a method for a given goal. Decomposition methods, regression analysis, the Box-Jenkins methodology, and exponential smoothing are examples of quantitative methods [24]. Most quantitative prediction problems involve either data collected over a period of time at regular intervals or cross-sectional data acquired at a specific point in time. The load forecasting methods can be summarized on the basis of their structure as follows.

4.1.1 Statistical Models


A statistical model is a mathematical model that contains a collection of assumptions about how the sample data are generated; it can also be viewed as a highly idealized representation of the data-gathering process. A statistical model provides a mathematical relationship between one or more random variables and non-random variables. Several statistical models for forecasting and prediction have been developed based on specific criteria of optimal fit. Methods developed and implemented in this regard include the basic Box-Jenkins models such as AR, MA, ARMA, ARIMA, and ARIMAX [25], state-space Kalman filtering algorithms [26], grey models [27], and exponential smoothing.

4.1.2 Autoregressive (AR) Model


Autoregressive models work on the principle that the series' most recent value Yt can be described as a linear mixture of prior loads. Mathematically, the autoregressive (AR) model can predict future load values; a pth order autoregression is given by the expression shown below,

Yt = ϕ1 Yt−1 + ϕ2 Yt−2 + ⋯ + ϕp Yt−p + εt    (1)

where εt is the random noise and ϕ1, ϕ2, ϕ3, …, ϕp are the unknown AR coefficients. The model's order specifies the number of lagged preceding
values. As a result, the mentioned model can forecast future behavior on the
basis of previous actions. This method considers the random noise along
with the present and past values. Many industries, including finances,
electrical load demand prediction, and digital signal processing units, have
used autoregressive models for decades [28].

4.1.3 Model Based on Moving Average


The moving average-based model imitates the behavior of the process as a moving average: it is a regression model that linearly regresses the current value against one or more preceding white-noise terms. In a moving average model, the time series is treated as an unevenly weighted sum of a random shock series (εt). Thus, the qth order moving average model can be represented as:

Yt = εt + θ1 εt−1 + θ2 εt−2 + ⋯ + θq εt−q    (2)

where θ1, …, θq are the MA coefficients. Once load observations are available, the noise series can be represented by the model residuals or forecast errors. This gives the technique a "duality," or invertibility, property: the MA model can be inverted or rebuilt as an autoregressive form of infinite order, which is what distinguishes the MA from the AR process. This is only possible if the MA parameters meet certain criteria; otherwise, the model will fail to meet the Box-Jenkins conditions for stationarity, invertibility, and stability [29].

4.1.4 Autoregressive Moving Average (ARMA) Model


George Box and Gwilym Jenkins popularized the autoregressive moving average model in 1970. Because of their relative simplicity and effectiveness, ARMA models have become increasingly prevalent and have been extensively studied in load forecasting [30]. In ARMA models, the present value Yt is linearly expressed in terms of its prior values and of the current and preceding noise terms. The ARMA (p, q) model is a combination of the AR (p) and MA (q) models and can be expressed mathematically as below;

Yt = ϕ1 Yt−1 + ⋯ + ϕp Yt−p + εt + θ1 εt−1 + ⋯ + θq εt−q    (3)

4.1.5 Autoregressive Integrated Moving Average (ARIMA) Model
Because many time series, such as those connected to business and socioeconomics, show non-stationary behavior in practice, approaches that can deal with fluctuations in parameters and behavior are required. The AR, MA, and ARMA models are unable to adequately characterize non-stationary time series because they can only deal with stationary data. Box and Jenkins therefore presented the ARIMA models in 1976 with the goal of handling non-stationarity as well. The autoregressive parameters (1, 2, …, p), the number of differencing operations d applied through (1 − B), with B the lag operator, and the moving average parameters (1, …, q) are the three types of parameters in the Box–Jenkins ARIMA models.
The lag polynomials are used to write the mathematical expression for the ARIMA (p, d, q) model, as illustrated below in Eq. (4)

ϕp(B) (1 − B)^d Yt = θq(B) εt    (4)

where ϕp(B) and θq(B) are the AR and MA lag polynomials. Seasonal ARIMA (SARIMA) models are seasonal versions of the ARIMA model, written ARIMA (p, d, q)(P, D, Q)s, where s is the number of periods per season and P, D, and Q are the seasonal counterparts of p, d, and q, respectively. The autoregressive fractionally integrated moving average (ARFIMA) model is a useful generalization of ARIMA that allows non-integer values of the differencing parameter d; ARFIMA has applications in modeling time series with long memory. For electric load forecasting, the ARIMA models and their derivatives have had a lot of success [19].
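As a small illustration of fitting such a model, the following hedged Python sketch uses statsmodels' SARIMAX on a hypothetical hourly load series; the (p, d, q)(P, D, Q)s orders and the synthetic data are placeholders chosen for illustration, not values taken from the chapter.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical hourly load with a daily (24-hour) seasonal pattern.
idx = pd.date_range("2022-01-01", periods=24 * 28, freq="H")
load = 100 + 10 * np.sin(2 * np.pi * idx.hour / 24) + np.random.normal(0, 2, len(idx))
series = pd.Series(load, index=idx)

# Seasonal ARIMA (p, d, q)(P, D, Q)s with s = 24; orders chosen only for illustration.
model = SARIMAX(series, order=(2, 0, 1), seasonal_order=(1, 1, 1, 24))
result = model.fit(disp=False)

# Forecast the next day (24 hours ahead).
forecast = result.forecast(steps=24)
print(forecast.head())
```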

4.1.6 ARMAX and ARIMAX Models


Apart from the random noise that disrupts the process, only time and load
are required as input data for the ARMA and ARIMA models. Exogenous
factors can occasionally be incorporated in the ARMAX and ARIMAX
models because loads are influenced by the meteorological conditions and
the time period of day [31].
In the autoregressive moving average model with exogenous inputs, the present value of the time series, yt, is linearly expressed in terms of its preceding values, the present and historical noise terms, and the current and previous levels of the exogenous variable(s).
The ARMAX (p, q, r1, …, rk) model can be expressed as,

yt = ϕ1 yt−1 + ⋯ + ϕp yt−p + εt + θ1 εt−1 + ⋯ + θq εt−q + ψ1(B) vt1 + ⋯ + ψk(B) vtk    (5)

where the ri represent the orders of the exogenous factors (variables) vti and the ψi(B) are the corresponding coefficient polynomials.
The ARIMAX model can be expressed in the same way as the ARMAX model, except that the integrated part must also be taken into account. This can be done with the help of the differencing operator [32].
However, despite their many advantages, the conventional methods still fail to address all the issues and factors of effective load forecasting. Because of the strong reliance on socioeconomic factors, long-term forecasting has a high level of uncertainty; as a result, an error of up to 10% is tolerated. The Kalman filtering algorithm can be used to reduce the mean squared error of the model and thus account for the uncertainties.

4.1.7 Kalman Filtering Algorithm


The Kalman filter is named after Rudolph E. Kalman, who presented his seminal paper on a recursive solution to the discrete-data linear filtering problem in 1960. The Kalman filter (KF) is a set of state-space equations that can be used to estimate the state of an observable process in a computationally efficient (recursive) manner [33]. It can estimate past, present, and future states, even when the precise nature of the modeled system is unknown. A Kalman filter can also be implemented to control noisy systems, such as electric power systems.
According to many researchers [34], the main elements that influence electric load behavior are weather, random disturbances, the economy, customer factors, and time. Weather factors such as wind speed, humidity, precipitation, and temperature drive adjustments in consumer habits, for example the use of heaters and coolers. Load curve impulses caused when massive loads, such as steel mills or wind tunnels, are shut down or restarted are treated as random disturbances; other irregular events that are known in advance but have an unknown influence on the load are likewise classified as random disturbances. The type of facility (residential complex, commercial building, agricultural unit, or industry), the size of the building, and the number of employees and electricity users are all customer factors. The time factors include the effect of loads during weekdays, weekends, holidays, and seasons.
The KF mechanism operates in two steps, the predictor step (PS) and the corrector step (CS). The PS estimates the present load state from its previous state, together with its covariance uncertainty. After the new SMD measurement is taken, a weighted average is used to update the predicted state vector, with the estimate of greatest certainty given the greater weight.
The KF is usually expressed as a discrete-time linear dynamic system as
state-space vector, shown below in Eq. (6)
(6)
where k represents the discrete time instant. Further, the smart meter device (SMD) readings can be regarded as the observation vector y(k). Finally, the delayed estimator calculates the output y(k/k − 1) from the (k − 1)th output.
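To make the predictor/corrector cycle concrete, the following hedged Python sketch runs a scalar linear Kalman filter over a noisy load measurement stream; the state-transition coefficient, noise variances, and measurement values are illustrative assumptions rather than figures from the chapter.

```python
import numpy as np

# Scalar linear Kalman filter: x(k+1) = a*x(k) + w, y(k) = x(k) + v.
a = 1.0          # assumed state-transition coefficient (random-walk load model)
q = 0.5          # assumed process-noise variance
r = 2.0          # assumed measurement-noise variance (smart meter noise)

x_est, p_est = 100.0, 1.0            # initial state estimate and its variance
measurements = [101.2, 99.8, 102.5, 103.1, 104.0]   # hypothetical SMD readings

for y in measurements:
    # Predictor step: project the state and its uncertainty forward.
    x_pred = a * x_est
    p_pred = a * p_est * a + q

    # Corrector step: weight prediction and measurement by their certainty.
    k_gain = p_pred / (p_pred + r)            # Kalman gain
    x_est = x_pred + k_gain * (y - x_pred)    # updated state estimate
    p_est = (1 - k_gain) * p_pred             # updated uncertainty

    print(f"measurement={y:6.1f}  filtered estimate={x_est:6.2f}")
```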
The Kalman filter has been commonly utilized for tracking in interactive computer vision, for motion prediction, and for multi-sensor (inertial-acoustic) fusion, and it is effective in several additional areas.
Because the linear KF frequently fails to meet strict forecasting-accuracy requirements in the presence of significant nonlinearities, numerous nonlinear versions have been created. To handle the problem's hidden nonlinearities, the unscented Kalman filter (UKF) and the extended Kalman filter (EKF) are occasionally utilized.

4.1.8 Gray System Theory (GST)


Deng was the first to introduce this approach, in 1982. Gray models require only a small amount of data to predict the behavior of an unknown system [35]. The fundamental goal of GST is to derive plausible governing laws for the observed system from the available data, regardless of how complicated or chaotic it is. One of the most often utilized models is the gray model, which is capable of projecting future primitive data points as well as coping with observed systems whose parameters are partially unknown.
The gray model's differential equation is crucial since it allows the power load to be forecasted: once the DE is solved, the system's n-step-ahead predicted value can be found. The GM (1, 1) is a model for forecasting time series with time-dependent coefficients and a differential equation (DE). It is feasible to moderate the system's uncertainty and hence lower its intensity; this is accomplished through accumulated generation (AG), because the models can absorb random changes, reflected in the gray quantity, within a particular interval. These gray system theory-based models are commonly utilized in networks. A model that can predict future load can be brought to market after it has been successfully tested for acceptable dependability, stability, and accuracy. All three forms of load forecasting can benefit from gray models. One of the main benefits of GMs is that they may be created without considering the load distribution or load trend variations. Their shortcoming, however, is that they are only effective for problems with approximately exponential growth tendencies.
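As an illustration of the GM (1, 1) mechanics described above, the following hedged Python sketch builds the accumulated series, estimates the two gray parameters by least squares, and produces a short forecast; the input numbers are arbitrary placeholders, not data from the chapter.

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Minimal GM(1,1): fit on the raw series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    # Least-squares estimate of the development coefficient a and gray input b.
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]
    # Time-response function of the whitened differential equation.
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])  # inverse accumulation
    x0_hat[0] = x0[0]
    return x0_hat[n:]                            # only the future points

# Placeholder monthly load values (not from the chapter).
history = [120, 126, 133, 141, 150, 158]
print(gm11_forecast(history, steps=3))
```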

4.1.9 Exponential Smoothing (ES)


The ES models are among the most widely used statistical forecasting techniques owing to their precision, simplicity, robustness, and low cost [36]. They are also essential for power system load predictions. The smoothing coefficients of an ES model have a significant impact on its accuracy, and research has shown how to find the best values of these coefficients. Exponential smoothing is a practical forecasting method that uses an exponentially weighted average of previous observations to make a prediction: the most recent observation is given the highest weight, followed by the measurement preceding it, and so on. Single exponential smoothing (SES) based on Brown's approach, double exponential smoothing (DES) using Holt's method, and triple exponential smoothing (TES) are the three forms of exponential smoothing; TES is further based on the Holt-Winters method.
The SES model is used when the data pattern shows no seasonal or periodic change and no trend in the earlier data. DES models, on the other hand, are frequently used in economic applications and allow the forecasted values to follow a trend. The TES model, based on the Holt-Winters concept, can be computed in two different ways: additive and multiplicative. If the original data show stable seasonal fluctuations, the additive model is applied; when the original data exhibit large changes in seasonal fluctuations, the multiplicative model is applied. According to empirical evidence, the basic Holt-Winters method tends to yield over- or under-forecasts, especially over longer forecasting horizons.
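A hedged sketch of triple (Holt-Winters) exponential smoothing with statsmodels follows; the additive/multiplicative choice, the seasonal period, and the synthetic data are assumptions made only for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical daily-average load with a weekly (7-day) seasonal cycle.
idx = pd.date_range("2022-01-01", periods=10 * 7, freq="D")
load = 200 + 15 * np.sin(2 * np.pi * idx.dayofweek / 7) + np.random.normal(0, 3, len(idx))
series = pd.Series(load, index=idx)

# Additive trend and seasonality (suitable when seasonal swings are stable).
model = ExponentialSmoothing(series, trend="add", seasonal="add", seasonal_periods=7)
fit = model.fit()        # smoothing coefficients chosen by optimization

print(fit.forecast(7))   # forecast the next week
```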
Regression analysis, weighted iteration, and exponential smoothing as
well as other enhanced algorithms like adaptive prediction and stochastic
time series, have all been utilized for electric load forecasting.
Traditional statistical models have flaws and can sometimes produce unfavorable outcomes, because the large number of computational options leads to long solution times and because some nonlinear data patterns are difficult to capture. As a result, machine learning and artificial intelligence techniques offer a viable and enticing alternative.

4.2 Modern Techniques in Load Forecasting


Forecasting and classification are two applications where ANNs have proven to be useful. The use of artificial neural networks (ANNs) as a technique for predicting electric load has been extensively studied in recent decades and has garnered enormous popularity.

4.2.1 Artificial Neural Network (ANN)


The artificial neural network (ANN) approach, now used as an alternative for time series forecasting, traces back to the neuron model proposed by Warren McCulloch and Walter Pitts in 1943 [38]. ANNs try to spot patterns and regularities in the data, learn from their mistakes, and then deliver generalized results based on their previously acquired knowledge. Input, hidden, and output layers make up the most basic form of an artificial neural network model. The functions of the hidden layers, the associated weights, and the outputs all come into play when the input values are transformed at the hidden nodes. Training is an iterative procedure in which the weights of the ANN are adjusted over time. Some of the most widely used ANN variants for electric load forecasting include feed-forward (FF) networks, back-propagation (BP) networks, radial basis function (RBF) networks, NARX (nonlinear autoregressive with exogenous inputs) networks, random neural networks, recurrent neural networks, and self-organizing competitive networks. In order to explore better options, wavelet neural networks have also been applied to load forecasting problems: WNNs, which approximate arbitrary nonlinear functions using wavelet transform theory, are suggested as a substitute for feedforward neural networks.

4.2.2 Wavelet Neural Networks (WNNs)


Grossmann and Morlet introduced wavelet theory in the 1980s. Scholars later proposed the wavelet neural network (WNN) to combine wavelet functions with the extensively used neural network. By computing the inner product of the signal vector with the wavelet basis, a WNN can perform pattern-recognition-inspired feature abstraction of the signal in feature space. As a result, the network can efficiently learn the system's input-output properties without much prior knowledge. A WNN transmits the signal forward while propagating the error backward, resulting in a more accurate predicted value of the signal. WNNs have a strong capability for approximating nonlinear functions and are robust.
Another form of controlling approach in the load forecasting field is the extreme learning machine. Extreme learning machines are a subset of feed-forward ANNs; clustering, regression, feature learning, and sparse approximation are some of their applications.

4.2.3 The Extreme Learning Machines (ELM)


Extreme learning machines were introduced by Huang, Zhu, and Siew in 2004. They deal mainly with a feed-forward neural network having a single hidden layer. In the ELM, the weights of the hidden-layer nodes are chosen at random, and the output weights can be determined analytically by a least-squares solution. This means that the hidden nodes' parameters, including the weights that connect the inputs to the hidden nodes, do not need to be tuned; they can be assigned at random and never updated. ELM networks offer good generalization performance and can learn thousands of times faster than backpropagation networks. Furthermore, the hidden nodes' output weights are typically obtained in a single step, significantly reducing the learning time of the algorithm. For both regression and classification problems, the literature suggests that ELM models outperform support vector machines. For ELF forecasting, Chen [40] developed a unique recurrent ELM technique, while Rafie [39] combined numerous ELMs linearly to improve the prediction and demonstrated the performance on three engineering challenges.
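The single-step least-squares idea is easy to see in code. The hedged Python sketch below trains a minimal ELM regressor with random hidden weights and a pseudo-inverse output layer; the network size and the synthetic data are assumptions used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task standing in for a load-forecasting feature matrix.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.05, 200)

n_hidden = 50

# Hidden-layer weights and biases are drawn at random and never updated.
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                       # hidden-layer output matrix

# Output weights solved analytically in one step via the pseudo-inverse.
beta = np.linalg.pinv(H) @ y

# Prediction on new data reuses the fixed random hidden layer.
X_new = rng.uniform(-1, 1, size=(5, 3))
y_pred = np.tanh(X_new @ W + b) @ beta
print(y_pred)
```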

4.2.4 Support Vector Machines (SVMs)


Vapnik introduced support vector machines (SVMs) as a regression and classification approach in 1992. SVMs were first created to cope with pattern classification problems, but their use has since expanded to regression techniques such as support vector regression (SVR). The fundamental goal of SVMs is to create a unique decision rule with acceptable generalization ability by selecting a subset of the training data known as support vectors. Training an SVM model amounts to solving a quadratic programming problem with linear constraints, so, in contrast to the training of other networks, SVM solutions are always globally optimal and unique. Instead of minimizing the empirical error, the principle of structural risk minimization is applied when dealing with SVM models [41]. SVMs have gained popularity over the last two decades, not just for pattern identification but also for regression analysis, forecasting, and time series-based prediction. However, the fundamental shortcoming of SVMs is that they require a large number of computations, which dramatically increases the time complexity of the solutions.
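A hedged scikit-learn sketch of SVR applied to a toy load series follows; the RBF kernel, hyperparameters, and synthetic data are illustrative assumptions, not settings from the cited studies.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Build lagged features from a hypothetical hourly load series:
# predict the load at hour t from the loads at t-1, t-2, t-3.
t = np.arange(500)
load = 100 + 10 * np.sin(2 * np.pi * t / 24) + np.random.normal(0, 1, t.size)
X = np.column_stack([load[2:-1], load[1:-2], load[:-3]])
y = load[3:]

# Scaling matters for the RBF kernel; epsilon sets the insensitive tube width.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:-24], y[:-24])              # hold out the last day for testing

mae = np.mean(np.abs(model.predict(X[-24:]) - y[-24:]))
print(f"one-day-ahead MAE: {mae:.2f}")
```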

4.2.5 Fuzzy Logic-Based Forecasting


Modeling and prediction have been prioritized in the field of electric load
forecasting, with a focus on computational and artificial intelligence
techniques, such as models based on fuzzy logic. Several researchers have
used fuzzy logic models to forecast long-term load [42] and short-term ELF
[43]. Furthermore, Jamaluddin et al. [44] used fuzzy logic to anticipate the very short-term peak load time, whereas the authors of [45] created a short-term load forecasting model for a 220 kV transmission line based on FL. Laouafi et al. [46] developed a daily load curve prediction system based on an adaptive neuro-fuzzy inference system, and Yao and a few other researchers used an interval Type-2 FL system for short-term load forecasting.
Fuzzy approaches are extremely beneficial for dealing with uncertainties and for capturing the knowledge of human specialists. To produce good prediction outcomes, fuzzy theory is frequently integrated with other methodologies.

4.2.6 Genetic Algorithm


Genetic algorithms (GAs) have been used frequently in the realm of electric load forecasting and have become one of the most widely utilized evolutionary computation approaches. GAs are a family of optimization and search techniques based on the principles of genetics and natural selection, and they are frequently well suited to nonlinear systems. They perform optimization through the natural selection of the most effective solutions drawn from a population of candidate forecasting models.
GA-based optimization is widely applied during model selection, when the most appropriate forecasting model parameters must be discovered; for instance, GAs have been used to find the best p, d, and q parameters of an ARIMA model [47]. Singh et al. [48] used GAs to construct a neural network-based load forecasting model for ELF. For effective ELF forecasting, Semero et al. [49] applied a hybrid back-propagation-GA method. Khan et al. [50] described how recurrent neural networks developed through Cartesian Genetic Programming were used to forecast very short-term load. Several additional works, such as [51], have also discussed and implemented GA-based ELF forecasting. A sketch of GA-based order selection for an ARIMA model is given below.
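The following hedged Python sketch illustrates, under simplifying assumptions, how a small GA could search ARIMA (p, d, q) orders by using the AIC of a statsmodels fit as the fitness; the population size, mutation rate, and synthetic series are placeholders, and this is not the procedure of [47].

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
series = 100 + np.cumsum(rng.normal(0, 1, 300))    # placeholder load series

def fitness(order):
    """Lower AIC = better; failed fits are penalized heavily."""
    try:
        return ARIMA(series, order=order).fit().aic
    except Exception:
        return float("inf")

def random_order():
    return (int(rng.integers(0, 4)), int(rng.integers(0, 2)), int(rng.integers(0, 4)))

def crossover(a, b):
    # Exchange components of two parent (p, d, q) tuples.
    return (a[0], b[1], a[2] if rng.random() < 0.5 else b[2])

def mutate(order, rate=0.3):
    # Randomly perturb each component with probability `rate`.
    limits = (3, 1, 3)
    return tuple(
        int(np.clip(v + rng.choice([-1, 1]), 0, lim)) if rng.random() < rate else v
        for v, lim in zip(order, limits)
    )

population = [random_order() for _ in range(8)]
for generation in range(5):
    ranked = sorted(population, key=fitness)
    parents = ranked[:4]                                     # selection
    children = [mutate(crossover(parents[i], parents[(i + 1) % 4]))
                for i in range(4)]                           # crossover + mutation
    population = parents + children

print("best (p, d, q):", min(population, key=fitness))
```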
With increasing automation, the demand for remote monitoring and control has reached its peak. When the knowledge of an expert is easy to codify into software rules, the expert system becomes a convenient and desirable option for forecasting load data a priori.
4.2.7 Expert Systems
Expert systems are built from the rules and procedures employed by human experts; in particular, the experts must elucidate their decision-making process to the programmers [52]. According to researchers, an expert system is a computer program that can explain and reason about its conclusions and whose knowledge base is expanded as new data become accessible. It encodes a set of relationships between changes in the system load and changes in the external factors that influence it. In these processes, some rules do not alter over time, while others may need to be modified on a regular basis. Several studies on load forecasting have been conducted by developing various expert systems [40, 53, 54].
However, despite their many attractive features, the conventional hard computing and soft computing techniques leave scope for improvement in load forecasting. By integrating the best of statistical and machine learning methodologies, hybrid methods promise to improve time series forecasting.

5 Hybrid Method and a Classification System for Load Forecasting Models
Hybrid or combination models combine the benefits of multiple separate
forecasting models. These methods can outperform single models in terms
of prediction accuracy, and are thus widely employed in many forecasting
domains. In this regard, a variety of forecasting methodologies, data
processing techniques and optimization methods are available for
constructing various hybrid models [55–57]. As a result, recent research has
shifted its primary focus to the construction of successful hybrid models in
the hopes of boosting prediction performance [58, 59]. However, there are
no openly available guidelines on how to choose among alternative
strategies when creating a hybrid model. In view of the many modern techniques implemented in various newly developed models, a case study of short-term load forecasting is presented in the next section of the chapter.

6 Case Study
A case study of short-term load forecasting has been conducted using the load data of a specific region, namely Kolkata, located at 22.5726° N latitude and 88.3639° E longitude. Temperature is commonly recognized as the single most important climatic component impacting load demand.

6.1 Problem Formulation


The short-term load forecasting problem has been proposed in terms of the
objective function mentioned below [60] in Eq. (7)
(7)
where Jβ is the total cost function based on the load forecasting error, taking into account the hourly variation of temperature and humidity; et is the difference between the actual and forecasted values, defined as the LF error; and Ct+ and Ct− are the electricity rates corresponding to positive and negative errors, respectively.
The load forecasting error is calculated [61] as given below in Eq. (8)
(8)
where ytEN is the real value obtained at time t, calculated using the well-established Euclidean norm (EN); (xt−l) is the independent variable at time (t − l) used for forecasting yt; l is the lead time; and the function fβ depends on an optimal parameter β.
Further, while computing the forecasted value, the actual data ytEN follow the Euclidean norm (EN), including the weight factors and climatic factors of the specific place.
(9)

(10)

(11)
where Tt and Ht are the forecasted temperature and humidity on hourly
basis, and Tp and Hp are the past data for temperature and humidity on
hourly basis. W1 and W2 are the weight factors.

6.2 Input Data


The study has been performed by considering the influence of climatic factors on the load data. The load data were collected from a distribution center [62] in south Kolkata, India. Figures 1 and 2 show the monthly load variation against temperature and humidity, respectively.

Fig. 1 Variation of average load demand versus temperature on monthly basis


Fig. 2 Variation of average load demand versus humidity on monthly basis
The average temperature rises in March, April, and May, but the average load demand does not rise in proportion because of the large decline in average humidity. On the other hand, although the average temperature in July, August, and September does not rise much above that of March to May, the humidity rises enormously, and hence the demand for power increases. Electricity rates in Kolkata are Rs 4.8/unit, Rs 5/unit, Rs 6/unit, Rs 7/unit, Rs 7/unit, and Rs 9/unit for the first 25 units of consumption, the next 35 units, the further 40 units, the next 50 units, the extra 150 units, and consumption above 300 units, respectively. The rates Ct+ and Ct−, corresponding to positive and negative errors, have been taken as Rs 6.41 per unit and Rs 7.33 per unit respectively, based on the average tariff of Kolkata.

6.3 Result
GOA was chosen to address this type of forecasting challenge because AI-based soft computing approaches are tolerant of imprecision and ambiguity, can generate and interpret "linguistic variables," and are capable of deriving approximate solutions to problems. Furthermore, unlike other approaches, GOA is well suited to account for climate fluctuation at a specific location. Forecasting models usually include a data pre-processing unit (DPPU), which identifies the problem and computes the desired outcome. The grasshopper optimization algorithm (GOA) was used in the present study as the controlling methodology.

6.3.1 Controlling Method and Implementation


The grasshopper optimization algorithm (GOA) is a relatively modern optimization method that was first applied to structural optimization problems by Saremi et al. [63]. GOA imitates grasshopper swarming behavior, which involves nymphs (without wings) and adults (with wings). The adults investigate the entire search space and identify superior food-source regions (exploration), whilst the nymphs exploit a specific region or neighborhood of a certain place (exploitation). Exploration and exploitation are seamlessly balanced in this strategy, which is embedded in a relatively simple algorithmic structure.
The current study implemented the GOA algorithm through the following steps:
Step 1: A population of size Nij is generated first, where i is the number of solutions and j denotes the number of hours, i.e., 24 h.
Step 2: The best position, corresponding to the best forecasted data, is found according to the fitness function shown below in Eq. (12)

(12)

Here, eij is the fitness function corresponding to the load forecasting error, as given in Eq. (12), which needs to be minimized.
The initial set of positions is generated by the following equation,
(13)
Step 3: GOA has a parameter called ‘c’ that varies depending on the
number of iterations in order to strike a balance between exploration and
exploitation.
This parameter ‘c’ can be calculated by,

(14)

where M1 is the maximum number of cycles.


Step 4: A new set of positions, corresponding to a new set of solutions, is calculated through the equations given below in (15) and (16)

(15)

(16)
where Td is the best-found solution, f is the intensity of attraction and the
length of attraction is given as l.
Changes to c in Eq. (14) cause earlier iterations to focus on exploration while later iterations focus on exploitation. This balancing approach improves the algorithm's overall performance.
Step 5: The iterations are carried out, and the best solution from every iteration is stored.
Step 6: The iteration count proceeds until it reaches the maximum cycle number, i.e., M1.
Step 7: The cost function is calculated following Eq. (7), using the best solution, i.e., the minimum error function.
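To make the above steps concrete, the following hedged Python sketch implements a minimal grasshopper optimization loop following Saremi et al.'s general formulation; the social-force constants, bounds, population size, and the toy error function standing in for Eq. (12) are all assumptions, not the chapter's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast_error(x):
    # Placeholder fitness standing in for the hourly load-forecasting error of Eq. (12).
    return np.sum((x - 0.7) ** 2)

def s(r, f=0.5, l=1.5):
    """Social interaction force between two grasshoppers (Saremi et al.)."""
    return f * np.exp(-r / l) - np.exp(-r)

dim, n_agents, max_iter = 24, 20, 50           # 24 hourly values (Step 1)
lb, ub = 0.0, 1.0
pos = rng.uniform(lb, ub, size=(n_agents, dim))
best = min(pos, key=forecast_error).copy()      # Step 2: initial best position

c_max, c_min = 1.0, 1e-4
for it in range(1, max_iter + 1):
    c = c_max - it * (c_max - c_min) / max_iter           # Step 3: parameter c
    new_pos = np.empty_like(pos)
    for i in range(n_agents):
        social = np.zeros(dim)
        for j in range(n_agents):                          # Step 4: every agent
            if i == j:                                     # contributes to the update
                continue
            dist = np.linalg.norm(pos[j] - pos[i]) + 1e-12
            unit = (pos[j] - pos[i]) / dist
            social += c * (ub - lb) / 2 * s(dist) * unit
        new_pos[i] = np.clip(c * social + best, lb, ub)
    pos = new_pos
    candidate = min(pos, key=forecast_error)               # Step 5: keep the best
    if forecast_error(candidate) < forecast_error(best):
        best = candidate.copy()

print("minimum error found:", forecast_error(best))        # Steps 6-7
```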
Case 1 considers the load demand of the summer months, while Case 2 considers the winter months. As the specific location is Kolkata, the humidity factor is quite low in Case 1 and quite high in Case 2.
Table 1 shows the load forecasting error and the minimum cost for both Case 1 and Case 2 obtained by implementing the GOA algorithm.
Table 1 Load forecasting and cost as evaluated by GOA

         Load forecasting error (et)                          Total cost (Jβ)
         Positive error   Negative error   Mean error
Case 1   4.74             −0.07            2.405              Rs (4724 + 3921) = Rs 8645 per unit per month
Case 2   5.26             −0.04            2.61               Rs (3912.6 + 2814) = Rs 6726 per unit per month

It has been noticed that the monthly cost obtained for Case 1 is about 22% higher than that obtained for Case 2. With the lower load demand of the winter months represented by Case 2, the computed monthly price and the negative error obtained are both lower than those attained in Case 1. This highlights the importance of considering region-specific climatic factors; both a region-specific model and the climatic variations are much needed in all types of load forecasting. The hourly load deviation computed by the grasshopper optimization method turned out to be very small. Moreover, the proposed method is a single-stage algorithm with several unique characteristics, such as all search agents participating in updating each search agent's position. This feature helps load forecasting meet all regional climatic needs.

7 Conclusion
As a result of the current recession, many utilities have experienced a paradigm shift in how customers use power and how much they use. Consequently, despite all the studies carried out by several researchers, the door is still wide open for the use and adaptation of a variety of unique integrated models for energy and power prediction. In this regard, a case study has been performed using the grasshopper optimization algorithm for a short-term forecast, which produced a cost-effective schedule for a region-specific load. Furthermore, special focus should be paid to studying very short-term and mid-term load forecasting in order to fill the identified vacuum in the field.
However, the foundation for utility planning and a basic commercial
concern in the utility industry has always been load forecasting.
The rapid growth of stored information in the demand forecasting,
associated with data analysis provoked an utmost need for generating a
powerful tool which must be capable of extracting hidden and vital
knowledge of load forecasting from available vast data sets. It is critical for
utilities to have accurate load forecasts, especially given the extraordinary
risks that the electric utility industry faces due to a potentially significant
change in the resource mix as a result of environmental regulation, aging
infrastructure, projected low natural gas prices, and decreasing costs of
renewable technologies. Moreover, the load forecasting is also required for
rate cases, resource planning, financial planning, designing rate structures,
and so forth. Forecasting load is not a one-dimensional procedure. Instead, utilities and policymakers should always be looking for ways to advance the state of the art of the process, the databases, and the forecasting tools. A thorough
load forecasting procedure includes complex data needs, dependable
software packages, powerful statistical methodologies, and good
documentation to build plausible narratives that describe customers’
probable future energy use. Almost every state in every country has varied
degrees of jurisdiction to promote database, forecasting tool, and
forecasting process advancements. To establish procurement policies for construction capital, energy forecasts of future fuel requirements are required, and such data can only come from an advanced and accurate load forecast.
Thus, a good forecast, reflecting the present and future trends of load
demand, is the key to all planning. With many emerging models and control
techniques, it is essential that utilities dedicate significant time and
resources to developing reliable load projections.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
A. Tomar et al. (eds.), Prediction Techniques for Renewable Energy Generation and Load Demand
Forecasting, Lecture Notes in Electrical Engineering 956
https://doi.org/10.1007/978-981-19-6490-9_10

Deep Learning Techniques for Load Forecasting
Neeraj1 , Pankaj Gupta1 and Anuradha Tomar2
(1) Department of Electronics and Communication Engineering, Indira
Gandhi Delhi Technical University for Women, New Delhi, Delhi,
India
(2) Department of Instrumentation and Control Engineering, Netaji
Subhas University of Technology, New Delhi, Delhi, India

Neeraj (Corresponding author)


Email: [email protected]

Abstract
Electricity load dominates energy consumption and greenhouse gas
emissions. There are increasing concerns about climate change and the need
to minimize energy consumption and enhance energy performance. Energy
management, optimization, and planning all depend on forecasting load
energy consumption. Data-driven approaches are the most popular approaches to energy forecasting. Deep learning techniques are a new category of data-driven models that have emerged in recent years. They offer improved capabilities in managing big data, built-in feature extraction, and a better ability to model nonlinear phenomena. This paper examines the effectiveness and potential of deep learning-based approaches for load energy forecasting. It begins with a literature survey, followed by an outline of deep learning-based concepts, methodologies, and examples. Following that, current trends in published research are examined, along with how deep learning-based approaches may be utilized for forecasting and feature extraction. The study finishes with an analysis of current problems and recommendations for further research.

Keywords Load forecasting – Deep learning – Data-driven approaches – Energy consumption

P. Gupta and A. Tomar: These authors contributed equally to this work.

1 Introduction
1.1 Motivation
In 2018, buildings and construction accounted for nearly one-third of global energy consumption and almost 40% of global CO2 emissions, and these shares are expected to keep rising in the coming years. It is critical to minimize energy consumption and enhance energy efficiency in buildings and facilities to maintain sustainability. Many strategies and approaches to energy planning, management, and optimization rely on forecasting and predicting energy loads. These applications include model predictive control, load demand management, demand response, and optimization. Short- and long-term forecasts support scheduled maintenance, renovations, and planning. Data-driven and physics-based models are the models commonly used for load forecasting; nowadays, data-driven models are the most widely used energy models, and they can be further classified as black-box or gray-box representations. Physics-based models, by contrast, can describe the system and its components in detail.
However, such models require many measured parameters to be developed
and calibrated. It can be challenging to obtain the parameters needed in
many cases.
On the other hand, data-driven models use mathematical models derived from measured data. These models do not require a large number of
parameters nor detailed knowledge about the building/plant or system’s
internal components. Many buildings/plants have smart meters and
automation systems, making data access easier. These data are easily
accessible and can be used to forecast the load energy.

1.2 Compilation of Published Papers on Data-Driven Approaches for Load Forecasting
The popularity of data-driven approaches has increased in recent years, and several literature reviews have been published, each focusing on a different component of energy models. This section summarizes the main points of each paper. The selection reflects the most recent advances in artificial intelligence, especially deep learning-based techniques, whose popularity has been growing since 2015–2016.
The author in [1] compared artificial intelligence (AI), statistical, and physical models for estimating energy consumption. The paper suggested future research directions, including developing models with better accuracy, integrating these models into building energy management systems, and collecting data for future research. The capabilities and predictions of artificial neural networks were examined in [2, 3], which looked at ANN, support vector machines (SVM), and hybrid models for forecasting energy usage. Ahmad et al. [4] looked into how energy models interact with building controls and operations. According to [5], the existing procedures are still not sufficiently practical, and future research should reduce computing cost and memory requirements while retaining accuracy. Wang and Srinivasan [6] reviewed AI-based energy prediction, with a particular interest in ensemble-based and single-point models. AI-based and traditional ways of predicting electricity were also examined, and [7, 8] examined time series-based forecasting strategies for estimating energy usage, highlighting popular approaches and mixed methodologies. A full study of machine learning (ML) techniques for building energy prediction may be found in [9]; the authors offered a few suggestions for future investigation and advised that deep learning algorithms be studied more because they are currently understudied. Furthermore, Ahmad et al. [10] reviewed data-driven methods for the organization and estimation of building energy, covering the estimation, mapping, benchmarking, and description of building energy models, with a focus on how these methods have been used for large-scale and building-level applications [11]. Data-driven models for forecasting building energy consumption were studied in [12], which also included a breakdown of trends.
Furthermore, Runge and Zmeureanu [13] provided a thorough study of artificial neural network applications for temperature prediction and recommended that further research be done on deep learning-based approaches. Reference [14] focuses on how ANN models can forecast power consumption; its authors also noted that future research should concentrate on DL-based models. Aslam et al. [15] published a review of data-driven models for energy prediction, focusing on feature engineering and data-driven algorithms. To the best of the present authors' knowledge, there has been no literature review paper focused on DL models for forecasting energy loads, although some published papers recommend that forthcoming research address these techniques.

1.3 The Aim of the Literature Review


Although earlier literature reviews have helped describe the current state of load forecasting models and their various applications, many gaps remain; this review aims to summarize the main points. The review paper [16] noted that there are not many review papers that emphasize new methods for load forecasting, and another review observes that deep learning models are among the most rapidly emerging methods for load energy forecasting [17]. That author states that no current paper focuses on load forecasting using deep learning approaches, so researchers may find it difficult to build on previous work because there are no review papers. The review paper [18] states that a future direction of research should be to establish a roadmap for machine learning-based load forecasting models. This paper focuses on establishing such a roadmap for deep learning-based approaches and on contributing directions for further research [19]. It reviews how deep learning-based approaches can predict load energy consumption and addresses the gaps identified by the literature analysis.

1.4 Objectives and Contributions


Deep learning approaches can be used to predict the load energy. The range of applications of such methods is extensive, covering energy generation, smart grid networks, electricity price forecasting, and many others [20]. These models can also be used in other areas, such as air pollution [21], sales forecasting [22], health care, and business. Because of this wide range of applications, this work discusses only the techniques used to forecast load energy consumption. This literature review does not
include integrating fuel cells, absorption, or adsorption systems. This paper
will review several publications that use DL techniques to forecast load
energy. This paper is organized as follows. The second section introduces
deep learning and its many categories. The third section summarizes the
current research trends. Section 4 examines the research that has employed
deep learning-based feature extraction approaches in their research.
Section 5 looks at papers that employed deep learning-based forecasting
models. The future work, results and problems are discussed in Sect. 6.
Section 7 brings the evaluation to a close.

2 Deep Learning Techniques


This part describes the fundamental definitions, classifications, and approaches of deep learning used in this area of research. Autoencoders, recurrent neural networks (RNNs), and deep neural networks (DNNs) are the most commonly used deep learning approaches, while some others, such as convolutional neural networks (CNNs) and Boltzmann networks, are used in fewer cases. This section summarizes
the most popular deep learning approaches that have been used for load
forecasting.

2.1 History, Categorization, and a General Description


Deep learning approaches are popular for load forecasting due to their ability to deal with large amounts of data and their feature extraction capabilities, which improve model accuracy. This paper gives an overview of several deep learning techniques and approaches. Intelligence is the ability to process information, take in a body of information, and make some informed future decision or prediction; the field of artificial intelligence is thus the ability of computers to take in large amounts of information and use it to inform future situations or decision making. Deep learning is a subset of machine learning focused specifically on neural networks, which extract useful features and patterns from the raw data; those patterns or features then inform the learning tasks.
Traditional machine learning algorithms typically operate by hand-defining a set of rules or features in the data. The key idea of deep learning is that these features are learned directly from the data itself, in a hierarchical manner; the ability to learn and extract such hierarchical features, and to perform machine learning on them, is what distinguishes deep learning from classical machine learning. Today, we live in a world of big data, with more data available than ever before. Neural networks are massively parallelizable, and they have benefited tremendously from modern advances in computing architecture. Open-source toolboxes such as TensorFlow can build and deploy these algorithms, and such models have become extremely streamlined.
Deep learning architectures typically have four to five levels of nonlinear operations. Traditionally, discovering good features required engineering skill and domain expertise; deep learning approaches do not need such domain expertise, because they learn automatically using a general learning process. This is the main advantage of deep learning, and feature extraction can also be automated. Deep learning can also deal easily with huge datasets to make precise predictions; accurate prediction from very large data is a growing problem, and deep learning addresses it well. These models can store and hold more information than conventional ANNs. Deep learning methods also have a few drawbacks: the models are not easy to train and contain a lot of hyperparameters. There are three main ways in which deep learning-based approaches have been used to build power estimation models:
1. Increasing the number of hidden layers in a feed-forward neural network or multilayer perceptron.

2. Using recurrent neural networks such as the RNN, LSTM, and GRU. These recurrent neural network models can have one or more hidden layers and can be regarded as networks with deep structures.

3. Sequentially coupling several algorithms into one overall architecture.
Fig. 1 Autoencoder

2.2 Autoencoder
In the case of an autoencoder, the neural network consists of multiple hidden layers and is composed of two sections: an encoder and a decoder. An autoencoder's goal is to learn, through training, to recognize the dataset and reconstruct it. The encoder maps the input to a hidden representation, and the decoder maps the hidden representation back to an output. The input data y is encoded into a hidden representation s = f(y), and the decoder uses this hidden representation to produce the output y′. Training aims to reduce the difference between input and output so that y′ ≈ y. Generally, an autoencoder is used for feature extraction in huge datasets. The structure of the autoencoder is shown in Fig. 1.
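As an illustrative sketch only (the 24-value load profile, layer sizes, and random placeholder data are assumptions, not values from the reviewed studies), a Keras autoencoder with an encoder s = f(y) and a decoder producing y′ can be set up as follows; training minimizes the gap between y and y′, and the encoder half can later be reused for feature extraction.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 24                                   # e.g. one day of hourly load values (assumed)
inputs = keras.Input(shape=(n_features,))
encoded = layers.Dense(8, activation="relu")(inputs)               # encoder: y -> s = f(y)
decoded = layers.Dense(n_features, activation="linear")(encoded)   # decoder: s -> y'

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)            # reusable later for feature extraction
autoencoder.compile(optimizer="adam", loss="mse")  # minimize the difference between y and y'

X = np.random.rand(1000, n_features)              # placeholder (normalized) load profiles
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)
features = encoder.predict(X)                     # compressed hidden representation s
```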

2.3 Recurrent Neural Network


Deep learning models can be used to process time-series data, i.e., a series of data points tracked over time. Recurrent neural networks address a limitation of feed-forward networks such as densely connected networks or convolutional neural networks: feed-forward networks do not consider the relationship between the current sample and the previous samples. This relationship between current and previous data is significant for some kinds of data, especially time series, where previous data help predict the following data. To exploit it, a loop is introduced to memorize the previous information [23]. An RNN has a recurrent connection that feeds its output back to itself, so it has a memory of the previous output when the following input arrives; it calculates a new output based on the current input and the previous output. Recurrent neural networks can therefore remember the previous data through their previous state, and the temporal relationship is taken into account. To understand better how this works, the loop can be unfolded in the time domain. Figure 2 shows that the input is time-series data x and the output is data h. First, the input data are unfolded in the time domain: the inputs x_1, x_2, …, x_t produce the outputs h_1, h_2, …, h_t. There is only one cell, drawn repeatedly at different times. The RNN considers the input x_1, saves the resulting state, and passes it to the next step. When the following data x_2 arrive, it can use x_2 and the previous state to calculate the new output h_2; it then updates the state and passes it to the next cell. Unfolding the loop in this way, as in Fig. 2, illustrates how an RNN works.

Fig. 2 RNN

s_t = f(s_{t−1}, x_t)        (1)

This equation expresses how an RNN works: x_t denotes the input at time step t, s_t denotes the state at time step t, and f is the recursive function.

s_t = tanh(W_s s_{t−1} + W_x x_t),  y_t = W_y s_t        (2)

A tanh function is used as the recursive function. W_x multiplies the input, while W_s multiplies the prior state; the sum then passes through the tanh activation to give the new state. The weights are W_x and W_s. The new state s_t is multiplied by W_y to produce the output vector. As can be seen in Fig. 2, the outputs are calculated using the previous state and the new input [24].
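To make Eqs. (1) and (2) concrete, the short NumPy sketch below performs the recurrence step by step; the dimensions, weights, and input values are arbitrary illustrations rather than anything from the original text.

```python
import numpy as np

def rnn_step(x_t, s_prev, W_x, W_s, W_y):
    """One vanilla RNN step: s_t = tanh(W_s s_{t-1} + W_x x_t), y_t = W_y s_t."""
    s_t = np.tanh(W_s @ s_prev + W_x @ x_t)
    y_t = W_y @ s_t
    return s_t, y_t

# Unfold the loop over a short load sequence (shapes are placeholders)
n_in, n_hidden, n_out = 1, 8, 1
rng = np.random.default_rng(0)
W_x = rng.normal(size=(n_hidden, n_in))
W_s = rng.normal(size=(n_hidden, n_hidden))
W_y = rng.normal(size=(n_out, n_hidden))

s = np.zeros(n_hidden)                                              # initial state s_0
for x in [np.array([0.20]), np.array([0.40]), np.array([0.35])]:    # x_1, x_2, x_3
    s, y = rnn_step(x, s, W_x, W_s, W_y)                            # each output uses the carried-over state
```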

2.4 Long Short-Term Memory (LSTM)


RNNs suffer from vanishing and exploding gradient problems. To solve this, researchers proposed the long short-term memory model, which has become very successful. The LSTM adds multiple gates. First, an input gate controls whether the new input is admitted or ignored; a forget gate can then delete trivial information; and an output gate decides whether the stored information influences the output at the current time step. The input gate outputs a value between zero and one: if the output is zero, the input is ignored, and if it is one, the input passes through to the hidden cell. A gate is therefore like a continuous switch that controls what portion of the input is passed to the hidden cell. Similarly, if the forget gate outputs zero, the hidden cell's memory is cleared to zero, and the output gate controls whether the information is passed on to the next stage. The original LSTM diagram is not easy to understand, but the model essentially has two paths: one updates the memory state of the model, and the other, like the original RNN, passes the output to the next stage. The sigma symbol denotes the sigmoid activation function, whose output lies between zero and one; it acts as a switch, where zero means off and one means on, with intermediate values passing a proportion of the signal. The forget-gate activation is multiplied with the previous state to control what portion of the stored information is kept. The input gate controls how much of the new input enters the state to generate the new memory. The output gate controls how much information is passed to the next stage. The LSTM model thus helps to solve the vanishing gradient problem [25].
i_t = σ(W_i · [h_{t−1}, x_t] + b_i)        (3)

f_t = σ(W_f · [h_{t−1}, x_t] + b_f)        (4)

o_t = σ(W_o · [h_{t−1}, x_t] + b_o)        (5)

C̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C)        (6)

C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t        (7)

h_t = o_t ⊙ tanh(C_t)        (8)
i_t, f_t, and o_t are the input, forget, and output gates of the LSTM cell. W represents the recurrent connection between the previous hidden layer and the current layer, and the hidden layers are connected to the input through the weight matrix. The cell state C_t is calculated depending on the current and previous inputs; C_t stands for the unit's internal memory. Figure 3 shows the cell whose gates are described by the equations above. As inputs, each gate accepts the previous hidden state h_{t−1} and the current input x_t; the vectors are concatenated, and a sigmoid is applied. C̃_t is a new candidate value for the cell's state. The input gate controls the memory cell's updating; as a result, it is applied to the C̃_t vector, which is the only term that can change the state of the cell. The forget gate determines how much of the previous state should be remembered. To obtain the hidden vector h_t, the output gate is applied to the cell state [26] (Fig. 4).
Fig. 3 LSTM
Fig. 4 GRU
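As a purely illustrative sketch (not the configuration used in this chapter's case study), the snippet below wires an LSTM cell of the kind described by Eqs. (3)–(8) into a one-step-ahead load forecaster in Keras; the 24-step window, layer width, and random placeholder data are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, horizon = 24, 1                    # assumed: 24 hourly lags predict the next hour

model = keras.Sequential([
    keras.Input(shape=(window, 1)),        # (time steps, features per step)
    layers.LSTM(64),                       # gated recurrent cell (input/forget/output gates)
    layers.Dense(horizon),                 # next-hour load value
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(500, window, 1)         # placeholder training windows
y = np.random.rand(500, horizon)           # placeholder targets
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
y_hat = model.predict(X[:10])              # forecasts for the first ten windows
```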

2.5 Convolutional Neural Networks


Convolutional neural networks are a family of neural networks
characterized by convolutional layers. They are particularly suitable for
tasks involving data with spatial dependencies, such as images and videos.
Convolution is a filtering operation applied to the data to detect certain features. To a computer, an image is just a matrix of numbers, with one value for each pixel. To detect borders, for example, a smaller filter matrix called a kernel is taken, and an element-wise product is computed between the kernel values and a portion of the image. The results are then summed into a single value, which indicates whether borders are present in that portion of the image. The kernel is then shifted by several pixels to cover another section, until the whole image has been covered. The final result is a new matrix, called a feature map, whose numbers describe the borders. A convolutional layer implements several kernels, each detecting a specific feature. A useful property of convolutional layers within a neural network is that the kernels do not have to be designed in advance.
During training, the network decides the important features and adapts
the kernel to detect them. The parameters to set in this stage are the number
of kernels to train, the kernel size, and the convolution dimension such as
1D, 2D, and 3D convolution. The difference between 1D, 2D, and 3D
convolutions is: the convolution dimension sets the number of axes on
which the kernel moves. In a 1D convolution, the kernel moves along one
axis; in a 2D convolution along two axes, and so on. Convolutions with
different dimensions discover features in those dimensions. The data
dimension does not necessarily bind the dimension of the applied
convolution. For example, black and white images are 2D objects, while
color images are 3D objects because of the additional color channel. In both
cases, if we are interested in 2D features, like borders, a 2D convolution
moving along the width and height of the image will do the job. The same
holds for time series. There are two dimensions, values and time. If we want
to discover a 1D feature such as upward trends, we can apply a 1D
convolution. Therefore, the dimension of the convolution is determined by
the dimension of the feature to discover, not by the object’s dimension. A
CNN is represented in Fig. 5. Allude to reference [27] for a point by point
depiction of a CNN’s overseeing conditions and merits.

Fig. 5 CNN
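A minimal sketch of a 1D convolution applied along the time axis of a load series, in the spirit of the discussion above; the filter count, kernel size, and window length are arbitrary choices rather than values from any cited study.

```python
from tensorflow import keras
from tensorflow.keras import layers

window = 48                                        # e.g. two days of hourly values (assumed)
cnn = keras.Sequential([
    keras.Input(shape=(window, 1)),                # one value per time step
    layers.Conv1D(filters=16, kernel_size=3, activation="relu"),  # kernels slide along the time axis
    layers.MaxPooling1D(pool_size=2),              # downsample the feature maps
    layers.Flatten(),
    layers.Dense(1),                               # next-step load estimate
])
cnn.compile(optimizer="adam", loss="mse")
```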

2.6 Deep Belief Networks


Deep belief networks (DBNs) are a type of deep neural network [28]. DBNs can be described as a range of algorithms that combine probabilities with unsupervised learning to produce outputs. Restricted Boltzmann machines (RBMs) are the fundamental building blocks of a DBN and can be configured to exhibit desirable properties [29]. The first layer of an RBM is the visible (input) layer, and the second is the hidden layer. The RBM is illustrated in Fig. 6.

Fig. 6 DBN
Figure 6 shows an example of a DBN. Although stacking multiple
RBMs together can produce large models, it may prove cumbersome to
train such large models. Refer to references [30] for more information about
the governing equations, merits, and limitations of RBMs, DBNs, and their
potential benefits.
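As a loose illustration only, the snippet below stacks two restricted Boltzmann machines so that the hidden activations of the first become the visible layer of the second, mimicking the greedy layer-wise idea behind DBNs; scikit-learn's BernoulliRBM is used here as a convenient stand-in, and the data, layer sizes, and hyperparameters are placeholders rather than the models of the cited works.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.random.rand(500, 24)                  # placeholder load profiles scaled to [0, 1]

rbm1 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)
H1 = rbm1.fit_transform(X)                   # visible layer -> first hidden layer

rbm2 = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=20, random_state=0)
H2 = rbm2.fit_transform(H1)                  # second RBM stacked on the first (greedy layer-wise)
```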

2.7 Deep Feed-forward Neural Networks


Deep feed-forward neural networks (DFFNNs) are another popular
technique for forecasting energy in buildings. These models differ from the
standard feed-forward neural networks (FFNNs) because they have multiple
hidden layers. To extract more information from the data, additional layers
are added. Research has shown that there are many other deep learning-
based structures [31] (Fig. 7).
Fig. 7 Deep feed-forward neural network
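A minimal Keras sketch of a deep feed-forward network with several hidden layers, as opposed to a single-hidden-layer FFNN; the 24 lagged load inputs and the layer widths are assumptions made purely for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

dffnn = keras.Sequential([
    keras.Input(shape=(24,)),              # e.g. 24 lagged load values (assumed)
    layers.Dense(64, activation="relu"),   # several hidden layers distinguish a
    layers.Dense(32, activation="relu"),   # DFFNN from a shallow FFNN
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                       # forecasted load
])
dffnn.compile(optimizer="adam", loss="mse")
```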

3 Trends in the Present


Publications from 2000 to 2021 were examined. This section discusses the trends observed in the published data.

3.1 Level of Building Application


Data-driven models need to be validated on testbeds before being used in
real-world applications. These testbeds fall into four levels, and the forecasting model may differ depending on the building-level data and the time step of the data. According to the analysis, the studies were broken into four
categories: district level, buildings, sub-meter level, and component level.
The use of data from existing systems for large-scale installations and
district heating/cooling systems may explain the tendency to focus on
whole building cases and districts.

3.2 Qualities of Data


In each case study, the data size varies in the length and amount of data. DL-based approaches are suggested precisely because they can handle such large amounts of data. According to the observed
breakdown of published work, 18% used less than six months, 23% used
6 months to 1 year, 57% used more than 1 year, and 2% did not justify their
data size. This review also examined data types. Three types of data are commonly used in the published research papers: EnergyPlus (simulated) data, real data, and target data. According to the findings, 93% of the case studies were applied
to real data. Following that were 4% for experimental data and 3% for the
target data.

3.3 Output Variables


The DL-based models were applied to forecast energy usage. At the sub-meter and component levels, the target variables include loads such as electric, heating, and cooling demand.

3.4 Input Styles


Inputs are the characteristics or regressors utilized as inputs to the forecasting model. All energy-based, data-driven models require the selection of appropriate input data, and a poor choice of input variables may cause poor forecasting
performance. The most commonly used features were: environmental data,
such as outdoor temperature, and historical data, such as past energy use.
Nowadays, it is not easy to find out which attributes are the most crucial.
These may depend upon the various case study conditions such as weather,
place, and type of structure. So, many feature extraction techniques were
introduced in published research. Although a thorough examination of
feature selection may be beneficial, the focus of this paper will be on DL-
based approaches for variable selection.

3.5 Granularity of Time


Forecasting models involve two main temporal properties: the forecast horizon and the resolution. The forecast horizon is the length of time projected into the future, while the term “resolution” mainly relates to the data's time step. These two temporal granularities apply to forecast models in various ways; for example, with hourly time-step data, a forecast horizon of 24 h ahead can be estimated. The
models’ resolutions were 1% annually, 0% monthly, and 3% weekly. There
are three types of prediction horizons: medium, long-term, and short-term
[31]. It is important to note that the classifications mentioned above are not
set in stone and may differ from those published.
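As an illustration of how resolution and horizon shape the training data (the window and horizon values below are arbitrary examples, not prescriptions from the reviewed studies), a supervised dataset is usually built from the raw series with a sliding window:

```python
import numpy as np

def make_supervised(series, window=24, horizon=24):
    """Turn a series into (X, y) pairs: `window` past steps predict the next `horizon` steps."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])                        # inputs at the data's resolution
        y.append(series[t + window:t + window + horizon])     # targets over the forecast horizon
    return np.array(X), np.array(y)

hourly_load = np.sin(np.linspace(0.0, 50.0, 2000))            # placeholder hourly series
X, y = make_supervised(hourly_load, window=24, horizon=24)    # 24 h ahead from hourly steps
```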

4 Feature Extraction Applications Using Deep Learning
Feature selection is a process that reduces an initial dataset into more manageable segments. Large datasets take a lot of computational time to process, so computational time can be reduced by choosing the appropriate attributes. This can improve accuracy, reduce overfitting risks, and reduce the computational resources needed for forecasting-based models. Recently, DL-based approaches have become widely used for feature extraction and load forecasting, growing in popularity due to their fast computing speed and simplicity of construction. Many studies have compared their efficacy to that of other data-driven models. The study in [32] compared four feature extraction techniques together with forecasting models. It examined four different feature extraction methods:
(i) Technical, in which variables were selected on the basis of technical expertise.

(ii) Analytical, in which a criterion relating candidate variables to the response variable was calculated from the actual data.

(iii) Architectural, in which the time series was transformed.

(iv) Autoencoder.

Variables selected by the various methods were used to forecast the energy load over different time horizons, and the researchers conclude that the DL-based models give the best estimation performance. The DL-based model is compared with data-driven models such as the autoencoder and machine learning approaches in [33]; in the case of a retail facility, these models were utilized to forecast the total energy use with a horizon of 60 min and 30-min time intervals, and the autoencoder and machine learning approaches provided lower estimation errors. In [34], different feature selection methods were compared, and the performance of each method was evaluated with feed-forward neural network (FFNN), support vector regression (SVR), and random forest models trained at a 15-min resolution with an ahead horizon. The observations showed that the prediction error was reduced in 33% of the AE coupling clusters; however, the prediction error was either maintained or significantly increased in 1/3 of the clusterings. In [35], the autoencoder model is compared with the support vector machine (SVM) and FFNN; forecasting was conducted on an office building energy load with a resolution of 5 min, targeting the heating and cooling loads. Chitalia et al. [36] compared various data-driven models for estimating the energy load of a commercial building 24 h ahead. This research found that combining DL feature extraction with estimating models results in a high-performing forecasting model. DL feature extraction methods for anticipating building energy use are still being developed, and more study is required to compare these models across different case studies and applications.
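The coupling of deep feature extraction with a conventional forecaster, as in the studies above, can be sketched as follows; this is a generic illustration with placeholder data, shapes, and hyperparameters, not the pipeline of any cited paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from tensorflow import keras
from tensorflow.keras import layers

window = 96                                     # assumed: one day of 15-min readings
X = np.random.rand(2000, window)                # placeholder input windows
y = np.random.rand(2000)                        # placeholder next-step loads

inputs = keras.Input(shape=(window,))
code = layers.Dense(12, activation="relu")(inputs)   # compressed representation
recon = layers.Dense(window)(code)
ae = keras.Model(inputs, recon)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=5, verbose=0)               # unsupervised feature learning

encoder = keras.Model(inputs, code)
Z = encoder.predict(X)                          # extracted features
forecaster = RandomForestRegressor(n_estimators=100).fit(Z, y)   # conventional estimator on top
```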

5 Application Summary at the Load Level


Plant-level applications predict the energy loads of the whole plant. The published research is classified into the following categories: educational, industrial, domestic, combined, etc. "Combined" refers to publications that have applied their findings to case studies involving various plant kinds.
This section covers publications that employed a DL-based approach to study power loads. According to the analysis, all of the papers in this area came from educational buildings, and several case studies involving educational structures are discussed within this research. Cooling and electricity usage are the essential target loads in the reviewed research. There are still several gaps in understanding DL forecasting models for educational buildings, such as for heating and lighting loads; this discrepancy might be due to the difficulty of obtaining data for specific loads [37]. On the other hand, the energy loads of heating and lighting can account for a significant portion of an industrial or educational building, roughly 30% for heating and 15% for lighting [38]. As a result, future work may benefit from looking into these
possibilities. In some papers, cooling loads were examined with DL-based predicting models [39, 40], and the prediction efficiency for cooling loads was increased by using an autoencoder for variable extraction [35, 45]. The author of [28] discussed the performance of RNN, LSTM, and GRU-based models for predicting cooling loads using various approaches, such as direct and recursive approaches, and showed that for the RNN model the direct approach was more reliable. In [40], twelve predicting models are compared for cooling load applications; according to this paper, the LSTM model gives more accurate results. In [41], the heating loads of various university campuses were predicted using the RNN model. According to that work, the RNN models worked better than the other machine learning-based approaches for medium- and long-term estimation, showing that RNN models yield better results for predicting thermal energy loads. More study is needed to corroborate this previous work on different case studies. Marino et al. [42] show how GRU-based models may be used to predict energy usage; after examining several strategies for inferring missing data, GRU forecasting models were tested. For LSTM models that output power load predictions in educational buildings, see reference [31]. The authors of [43] evaluate the efficacy of several deep learning models.

6 Results and Discussion


Because deep learning approaches and techniques can manage vast volumes of data and give superior results, they have seen rapid expansion in recent years, and there is substantial research on their application in load energy forecasting. According to the observations, most of the DL-based models are used to predict full power loads, and they also target the energy loads of the entire plant. The most frequently used DL techniques are the LSTM and deep feed-forward neural networks. When comparing forecasting performance with other ML-based methods, it was found that DL-based methods typically lead to better performance than ML-based ones, although in some cases they did not. Similar observations were made when the models were used as forecasting models. Despite the strong outcomes,
there are still considerable hurdles to be overcome.

6.1 Challenges
Although the use of DL-based methods for forecasting energy loads is still in its initial stages, several new and exciting tasks remain. The most significant challenges that have been observed fall into two categories:
1. The difficulties confronting the research community
2. The technical challenges faced by the DL-based methods themselves

The following are the main challenges which the researchers face:
1. Most papers used unpublished proprietary datasets. This point was raised in a review study on data-driven models [38, 44]. Because of the extensive usage of proprietary data, it is not easy to reproduce the results, conduct comparisons, and expand on the work of others.

2. As a result of the growing number of publications, there is no standard way of reporting forecasting model data across journal articles.

3. Inadequate descriptions of the components and/or methods used in the research: some papers did not specify their forecast horizons or hyperparameter tuning approach.

4. Different performance metrics are applied in each publication. The most common performance metric in research is the mean absolute percentage error [45], but it is not always employed; occasionally, authors utilize other metrics or modify the measurements.

5. The issue is further complicated by using unclear terms in research.

A few significant challenges have been identified in the research on DL-based models. Without guidelines, it is challenging to develop and test DL-based models, and creating, applying, and comparing such models becomes much harder. According to the
findings, the majority of articles had changed their hyperparameters by trial
and error. Building various models and ensuring repeatability may be easier
with an automated method and guideline. The models can improve
forecasting performance at multiple levels, but they have a trade-off:
increased complexity of the model and longer training times than typical
machine learning approaches. Future researchers would benefit from the
establishment of guidelines for DL modeling. This will provide them with a
standard set of criteria that they can use to compare and build on models.
This could allow generalizations to be reached more quickly.
6.2 Data Collection and Results
6.2.1 Data Collection
Data were collected from the Delhi power plant from January 2011 to December 2020 with hourly resolution. Before any analysis, the dataset is normalized using a min-max scaler. Figure 8 represents the power consumption data before normalization, and Fig. 9 represents the data after normalization using a min-max scaler. Normalization gives each attribute equal weight, so that no single attribute dominates model performance simply because its values are larger.

Fig. 8 Hourly power consumption data–before normalization


Fig. 9 Hourly power consumption data–after normalization
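A small sketch of the min-max normalization step described above, using scikit-learn's MinMaxScaler on a hypothetical hourly load column; the actual Delhi dataset is not reproduced here.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

load = np.random.uniform(2000.0, 6000.0, size=(24 * 365, 1))   # placeholder hourly load values

scaler = MinMaxScaler()                      # maps each value to (x - min) / (max - min)
load_scaled = scaler.fit_transform(load)     # all values now lie in [0, 1]

# After forecasting in the scaled space, predictions are mapped back to load units:
load_restored = scaler.inverse_transform(load_scaled)
```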

6.2.2 Results
From Figs. 10, 11 and 12, it can be observed that the forecasted load closely follows the actual load consumption, and all deep learning models provide accurate results compared with the actual values. Where the behavior of the power consumption load is unstable, the divergence of the predicted results from the actual values becomes moderately important. However, the power load consumption curves repeat in all cases, as shown in Fig. 13. Whether the power load consumption graph is irregular or stable, the forecasted values fluctuate, especially when the power consumption load appears irregular.
Fig. 10 Prediction made by RNN model

Fig. 11 Prediction made by LSTM model


Fig. 12 Prediction made by GRU model

Fig. 13 Predicted versus actual

6.3 Future Research Prospects


Potential future directions for DL-based methods in energy load forecasting include:
1. The improvement of DL approaches across a range of area types for load forecasting, with a focus on comparison-based papers.

2. The applications of DL models in research papers have not been much


discussed.

3. Different case studies have been analyzed using DL gray-box models.

4. Analyze the sensitivity of DL models and their uncertainty.

5. Establishing guidelines for the selection of hyperparameters of proposed DL models.

6. The production of scalable DL-based models that can be quickly adapted and tuned for load estimation in various areas.

7. The development of robust models that can provide accurate predictions even in the case of sensor failures, process variations, and other unforeseen events.

8. Implementation of innovative deep learning-based approaches in real-


world applications, such as predictive model controllers, and demand-
side management scheduling optimization.

7 Conclusion
This paper reviewed deep learning approaches that can be used to estimate load energy consumption. First, the concept and characteristics of deep learning approaches were discussed, and a basic overview of the categories of deep learning-based models was provided, followed by the most widely used and important methods. After that, the paper summarized current trends based on published studies, and then examined feature extraction and load forecasting using deep learning techniques. Finally, the paper discussed some open issues related to models built with deep learning approaches. According to our review, deep learning strategies have proved to generate better performance outcomes when used for feature extraction compared with other methods, and comparable effects were reported when deep learning approaches were used as prediction models. Because there are few comparison-based studies among DL-based approaches, determining which one produces the most promising outcomes is challenging. However, the current results are encouraging, and future research should build on the existing body of information. Despite the significant growth in papers and case studies in recent years, there are still several obstacles and tasks to be completed. Deep learning approaches are not yet applied uniformly across case studies and target attributes; comparing them across various case studies and implementing DL-based models in practice is where the real value of such work lies. Many applications for energy management and improvement rely heavily on forecasting models; predictive control, demand response management, fault detection, and optimization models are examples of such applications. The discussion and findings of this paper may assist researchers in deciding which deep learning-based models to use for load forecasting.

References
1. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444
[Crossref]
2.
Zhao H-X, Magoulès F (2012) A review on the prediction of building energy consumption.
Renew Sustain Energy Rev 16(6):3586–3592
[Crossref]
3.
Kumar R, Aggarwal R, Sharma J (2013) Energy analysis of a building using artificial neural
network: a review. Energy Build 65:352–358
[Crossref]
4.
Ahmad AS, Hassan MY, Abdullah MP, Rahman HA, Hussin F, Abdullah H, Saidur R (2014) A
review on applications of ANN and SVM for building electrical energy consumption forecasting.
Renew Sustain Energy Rev 33:102–109
[Crossref]
5.
Wang Z, Srinivasan RS (2017) A review of artificial intelligence based building energy use
prediction: contrasting the capabilities of single and ensemble prediction models. Renew Sustain
Energy Rev 75:796–808
[Crossref]
6.
Wang Z, Srinivasan RS (2015) A review of artificial intelligence based building energy
prediction with a focus on ensemble prediction models. In: 2015 Winter simulation conference
(WSC). IEEE, New York, pp 3438–3448
7.
Deb C, Zhang F, Yang J, Lee SE, Shah KW (2017) A review on time series forecasting
techniques for building energy consumption. Renew Sustain Energy Rev 74:902–924
[Crossref]
8.
Amasyali K, El-Gohary NM (2018) A review of data-driven building energy consumption
prediction studies. Renew Sustain Energy Rev 81:1192–1205
[Crossref]
9.
Wei Y, Zhang X, Shi Y, Xia L, Pan S, Wu J, Han M, Zhao X (2018) A review of data-driven
approaches for prediction and classification of building energy consumption. Renew Sustain
Energy Rev 82:1027–1047
[Crossref]
10.
Ahmad T, Chen H, Guo Y, Wang J (2018) A comprehensive overview on the data driven and
large scale based approaches for forecasting of building energy demand: a review. Energy Build
165:301–320
[Crossref]
11.
Bourdeau M, Qiang Zhai X, Nefzaoui E, Guo X, Chatellier P (2019) Modeling and forecasting
building energy consumption: a review of data-driven techniques. Sustain Cities Soc 48:101533
[Crossref]
12.
Mohandes SR, Zhang X, Mahdiyar A (2019) A comprehensive review on the application of
artificial neural networks in building energy analysis. Neurocomputing 340:55–75
[Crossref]
13.
Runge J, Zmeureanu R (2019) Forecasting energy use in buildings using artificial neural
networks: a review. Energies 12(17):3254
[Crossref]
14.
Wang H, Lei Z, Zhang X, Zhou B, Peng J (2019) A review of deep learning for renewable energy
forecasting. Energy Convers Manage 198:111799
[Crossref]
15.
Aslam Z, Javaid N, Ahmad A, Ahmed A, Gulfam SM (2020) A combined deep learning and
ensemble learning methodology to avoid electricity theft in smart grids. Energies 13(21):5599
[Crossref]
16.
Marcjasz G (2020) Forecasting electricity prices using deep neural networks: a robust hyper-
parameter selection scheme. Energies 13(18):4605
[Crossref]
17.
Tao Q, Liu F, Li Y, Sidorov D (2019) Air pollution forecasting using a deep learning model based
on 1d convnets and bidirectional GRU. IEEE Access 7:76690–76698
[Crossref]
18.
Runge J, Zmeureanu R (2021) A review of deep learning techniques for forecasting energy use in
buildings. Energies 14(3):608
[Crossref]
19.
Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge
[zbMATH]
20.
Wang H, Raj B (2017) On the origin of deep learning. arXiv preprint arXiv:1702.07800
21.
Hong T, Fan S (2016) Probabilistic electric load forecasting: a tutorial review. Int J Forecast
32(3):914–938
[Crossref]
22.
Fan C, Xiao F, Zhao Y (2017) A short-term building cooling load prediction method using deep
learning algorithms. Appl energy 195:222–233
[Crossref]
23.
Fan C, Wang J, Gang W, Li S (2019) Assessment of deep recurrent neural network-based
strategies for short-term building energy predictions. Appl Energy 236:700–710
[Crossref]
24.
Mishra S, Palanisamy P (2018) Multi-time-horizon solar forecasting using recurrent neural
network. In: 2018 IEEE energy conversion congress and exposition (ECCE). IEEE, New York,
pp 18–24
25.
Xiaoqiao H, Zhang C, Li Q, Yonghang T, Gao B, Shi J (2020) A comparison of hour-ahead solar
irradiance forecasting models based on LSTM network. Math Prob Eng 2020:1–15
26.
Srivastava S, Lessmann S (2018) A comparative study of LSTM neural networks in forecasting
day-ahead global horizontal irradiance with satellite data. Solar Energy 162:232–247
[Crossref]
27.
Son M, Moon J, Jung S, Hwang E (2018) A short-term load forecasting scheme based on auto-
encoder and random forest. In: International conference on applied physics, system science and
computers. Springer, Berlin, pp 138–144
28.
Rahman A, Srikumar V, Smith AD (2018) Predicting electricity consumption for commercial and
residential buildings using deep recurrent neural networks. Appl Energy 212:372–385
[Crossref]
29.
Kim J, Moon J, Hwang E, Kang P (2019) Recurrent inception convolution neural network for
multi short-term load forecasting. Energy Build 194:328–341
[Crossref]
30.
Kim T-Y, Cho S-B (2019) Predicting residential energy consumption using CNN-LSTM neural
networks. Energy 182:72–81
[Crossref]
31.
Somu N, Gauthama Raman MR, Ramamritham K (2020) A hybrid model for building energy
consumption forecasting using long short term memory networks. Appl Energy 261:114131
[Crossref]
32.
Shi Z, Li H, Cao Q, Ren H, Fan B (2020) An image mosaic method based on convolutional
neural network semantic features extraction. J Sign Process Syst 92(4):435–444
[Crossref]
33.
He W (2017) Load forecasting via deep neural networks. Proc Comput Sci 122:308–314
[Crossref]
34.
Wang J, Chen X, Zhang F, Chen F, Xin Y (2021) Building load forecasting using deep neural
network with efficient feature fusion. J Mod Power Syst Clean Energy 9(1):160–169
[Crossref]
35.
Kong Z, Zhang C, Lv H, Xiong F, Fu Z (2020) Multimodal feature extraction and fusion deep
neural networks for short-term load forecasting. IEEE Access 8:185373–185383
[Crossref]
36.
Chitalia G, Pipattanasomporn M, Garg V, Rahman S (2020) Robust short-term electrical load
forecasting framework for commercial buildings using deep recurrent neural networks. Appl
Energy 278:115410
[Crossref]
37.
Zhang G, Tian C, Li C, Zhang JJ, Zuo W (2020) Accurate forecasting of building energy
consumption via a novel ensembled deep learning method considering the cyclic feature. Energy
201:117531
[Crossref]
38.
Fan C, Sun Y, Zhao Y, Song M, Wang J (2019) Deep learning-based feature engineering methods
for improved building energy prediction. Appl energy 240:35–45
[Crossref]
39.
Laib O, Khadir MT, Mihaylova L (2019) Toward efficient energy systems based on natural gas
consumption prediction with LSTM recurrent neural networks. Energy 177:530–542
[Crossref]
40.
Wang Z, Hong T, Piette MA (2020) Building thermal load prediction through shallow machine
learning and deep learning. Appl Energy 263:114683
[Crossref]
41.
Yang J, Tan KK, Santamouris M, Lee SE (2019) Building energy consumption raw data
forecasting using data cleaning and deep recurrent neural networks. Buildings 9(9):204
[Crossref]
42.
Marino DL, Amarasinghe K, Manic M (2016) Building energy load forecasting using deep
neural networks. In: IECON 2016–42nd annual conference of the IEEE Industrial Electronics
Society. IEEE, New York, pp 7046–7051
43.
Nichiforov C, Stamatescu G, Stamatescu I, Calofir V, Fagarasan I, Iliescu SS (2018) Deep
learning techniques for load forecasting in large commercial buildings. In: 2018 22nd
international conference on system theory, control and computing (ICSTCC). IEEE, New York,
pp 492–497
44.
Su H, Zio E, Zhang J, Xu M, Li X, Zhang Z (2019) A hybrid hourly natural gas demand
forecasting method based on the integration of wavelet transform and enhanced deep-RNN
model. Energy 178:585–597
[Crossref]
45.
Xue P, Jiang Y, Zhou Z, Chen X, Fang X, Liu J (2019) Multi-step ahead forecasting of heat load
in district heating systems using machine learning algorithms. Energy 188:116085
[Crossref]
