
Article
Heat Load Forecasting of Marine Diesel Engine Based on Long
Short-Term Memory Network
Rui Zhou, Jiyin Cao *, Gang Zhang , Xia Yang and Xinyu Wang

School of Mechanical & Electrical Engineering, Wuhan Institute of Technology, Wuhan 430205, China
* Correspondence: [email protected]; Tel.: +158-27595304

Abstract: High heat load on diesel engines is a main cause of ship failure, which can lead to ship
downtime and pose a risk to personal safety and the environment. As such, predictive detection and
maintenance measures are highly important. During the operation of marine diesel engines, operating
data exhibit strong dynamic, time-lag, and nonlinear characteristics, and traditional models and
prediction methods have difficulty accurately predicting the heat load. Therefore, predicting the
heat load is a challenging and significant task. The continuously developing machine learning
technology provides methods and ideas for intelligent detection and diagnosis maintenance. The
prediction of diesel engine exhaust temperature using long short-term memory network (LSTM) is
analyzed in this study to determine the diesel engine heat load and introduce an effective method.
Spearman correlation coefficient method with the addition of artificial experience is utilized for
feature selection to obtain the optimal input for the LSTM model. The model is validated on operating
data from the ship Shanghai Fuhai, and the results show that the mean absolute percentage error
(MAPE) of the model is the lowest, at 0.089. Compared with other models, the constructed prediction
model presents higher accuracy and stability, as well as an optimal evaluation index. A new idea
is thus provided for combining artificial knowledge experience with data-driven applications in
engineering practice.

Keywords: diesel engine heat load; intelligent detection; long short-term memory network; prediction
model; evaluation index

Citation: Zhou, R.; Cao, J.; Zhang, G.; Yang, X.; Wang, X. Heat Load Forecasting of Marine Diesel Engine Based on Long Short-Term Memory Network. Appl. Sci. 2023, 13, 1099. https://doi.org/10.3390/app13021099

Academic Editors: Sheng Du, Xiongbo Wan, Wei Wang and Hao Fu

Received: 19 November 2022; Revised: 6 January 2023; Accepted: 9 January 2023; Published: 13 January 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
In dealing with the increasingly severe fossil energy crisis and the strict emission requirements for internal combustion engines, the effective use of energy and environmental protection are growing in importance. If the diesel engine set suffers from insufficient combustion, the heat generated by the fuel decreases, resources are wasted, black smoke and a large amount of CO and other harmful gases are discharged, and the environment is polluted, which harms the human body through direct inhalation [1–3]. The diesel engine set is an important power source for ship navigation, and its normal working cycle is a major contributor to efficient transportation by sea, saving energy, and reducing emissions [4].
Taking the exhaust manifold as an example, the finite element method was used by Li et al. [5] to verify the effect of thermal load on its fatigue life. A high-efficiency heat transfer model was used by Zhang et al. [6] to analyze the direct relationship between cylinder head fatigue life and average gas temperature. In addition, a Chaboche model was established to analyze the local deformation and leakage of the cylinder head under a thermal cycle test [7]. The failure of a ship's exhaust valve was investigated and analyzed by EI-Bitar et al. [8], who determined that a high-temperature environment leads to the expansion of microcracks and easy fracture. According to the above research, the main equipment of the diesel engine set will be damaged by high heat load and the ship will be stopped, which will greatly increase the navigation cost, and the safety of ship equipment and the environment will be seriously affected [9,10]. Previously, a Belize-flagged foreign ship lost control of its main engine due to excessive heat load at the No. 20 floating mooring of the Yangtze River. Fortunately, it was assisted in time by the maritime department, and no secondary accident occurred.
At present, ship data are detected by sensors and transmitted to terminals. However, by the time excessive heat load is detected by the sensors, ship equipment and personal safety may already have been harmed [11]. Therefore, predicting the heat load in advance can have a preventive effect.
The heat load of a diesel engine can be accurately characterized by the exhaust temperature, so the heat load can be estimated by predicting the exhaust temperature. However, the factors affecting the exhaust temperature are typically influenced by uncertain dynamic environmental conditions. As such, heat load prediction of marine diesel engine units is a challenging and meaningful task. The heat load of marine diesel engine units is usually analyzed through traditional methods, such as finite element analysis and linear regression modeling [12]. However, accurate parameters, results, and the complex mapping relationships are difficult to model with such methods because of the complex process inside the combustion chamber. Complex and variable dynamic processes and nonlinear systems can be modeled by neural network methods, whose continuous development has led to various applications in marine diesel engines [13–16].
Artificial neural network (ANN) was used by Cay to replace traditional modeling to
predict engine fuel consumption, effective power, and exhaust temperature. The mean error
percentage (MEP) of the training test data was less than 2.7% [17]. Ignition timing, engine
speed and air-fuel ratio were used as model inputs by Liu et al. [18] to analyze whether
machine learning can be used to effectively predict engine exhaust temperature. Four
different algorithm combinations were used to evaluate the applicability of ANN. ANN
was used by Uslu et al. [19] to predict the emission and performance of an ether single-
cylinder diesel engine. The maximum mean absolute error range of 5% was obtained, and
the regression coefficient (R2) was in the range of 0.9640–0.9878. Although ANN is potentially effective for exhaust temperature prediction, a large number of initial parameters are required, and gradient explosion may lead to unsuccessful training, thereby requiring additional time for adjusting the hyperparameters. Moreover, the heat load is usually characterized by nonlinear variations, so these data must be collected under various conditions for analysis and prediction. However, only a few influencing factors are analyzed in such studies, and the dependencies between the factors are ignored. Considering these shortcomings of ANN, the long short-term memory network (LSTM) model is considered for prediction analysis.
The LSTM network is a special form of recurrent neural network (RNN) with three additional thresholds (gates) that can solve the problems of gradient explosion and vanishing gradients during training [20]. Continuous development has led to the maturity of this neural network
model. However, a large amount of raw data is not effective when processed by LSTM,
so it is used together with other methods. The Spearman correlation coefficient method
(SR) is utilized in neural networks for feature selection to effectively capture dependencies
between variables by analyzing the correlation between two variables and removing redun-
dant information. An LSTM network for predicting passenger flow at stations was proposed by Zhang et al., in which Spearman correlation features were used to select the time and space factor data that significantly and effectively affect passenger flow, and the accuracy of the predic-
tion model was improved [21]. Spearman correlation coefficient method was applied by
Jiao et al. [22] to explore the temporal connection of nonresidential consumers under multi-
ple time series. Spearman’s correlation coefficient is a widely used feature selection method.
The correlation between multiple information sequences can be effectively analyzed by this
method, and the best input of the network model can be provided. However, this purely
data-driven method determines dependencies on the basis of only the correlation between
feature variables, thereby leading to the exclusion of significant variables. Hence, artificial
experience needs to be added when screening features in advance, and significance tests
must be performed to ensure that accurate input is provided to the prediction model.

As such, a hybrid prediction model incorporating the artificial empirical Spearman correlation coefficient method (AESR) and long short-term memory network (LSTM) is
proposed in this study to achieve accurate and stable predictions of exhaust temperature
by using the AESR-LSTM model. Redundant information is eliminated through the Spear-
man correlation coefficient method, and the optimal input is derived by adding artificial
empirical supplementary variables while retaining those with high correlation ratings.
The hyperparameters are usually selected according to experience and then set in the
combination. The combination of cross-validation and grid search methods is used to
avoid the blindness of adjusting parameters. The hyperparameters combination of neural
network is scientifically optimized and adjusted, and the robustness and accuracy of the
prediction model are ensured. After the optimal parameter set is selected by grid search
and cross-validation, the model is trained again using the optimal parameters. The trained
LSTM model is utilized to predict the exhaust temperature and highlight the advantages
of the AESR-LSTM model for data trend prediction compared with other models. The
experimental results of the selected prediction model are consistent with the actual values.
The prediction result of the model can be sent to the console as a feedback signal, providing more convenience and information to the operator. The predicted results can be used to analyze the combustion conditions in the combustion chamber without building complex analytical models, and such signals are difficult to obtain with physical sensors. The predicted trend results can be adopted to analyze the working condition
and emission substances of diesel engines, implement certain avoidance measures before
failure occurs, reduce the risk of accidents, improve the safety of ship systems, and prevent
serious personal injury and economic loss. The AESR-LSTM neural network modeling is
simpler than conventional modeling analysis because the workload of heat load research is
reduced, more comprehensive influencing factors are taken into account, complex changes
in the combustion chamber are predicted by a small amount of experimental data, and
more accurate prediction results are obtained. A new idea is provided in this study, which
combines artificial experience with data driven application in engineering practice.
Accordingly, a method for predicting diesel engine exhaust temperature that integrates
feature selection, parameter combination search, and comparative analysis of multiple
model combinations is proposed in this study. The remainder of this paper is structured
as follows. Methods used and the proposed hybrid prediction system model are briefly
described in Section 2. Relevant data are collected and analyzed in Section 3, where the results of the proposed system for predicting the thermal load of the combustion chamber of the marine diesel engine set are presented and compared with those of other models. Finally, the conclusions of this study are drawn in Section 4.

2. Prediction Method
In this section, data preprocessing method, network model, and optimization method
are introduced, and a method to predict the heat load of marine diesel engine combustion
chamber is proposed. The AESR-LSTM method is developed, which mainly consists of the
Spearman correlation coefficient method and the LSTM network, and is used to predict
heat load.

2.1. Long Short-Term Memory Network


LSTM is a neural network proposed by Hochreiter and Schmidhuber in 1997 [23,24].
This model has been continuously developed to form a systematic and complete frame-
work [25–27]. The LSTM is used in this study to compensate for the limitations of recurrent
neural network (RNN) in dealing with the dependence problem at long distances and to
solve the enlargement of and difficulty in updating partial derivatives W during training.
The internal structure of the LSTM neural unit is shown in Figure 1. The LSTM adds three
thresholds to the framework of the RNN as three logical control units, and the input and
output information of the entire network is controlled and managed by the three thresholds.
The three thresholds (gates) are described as follows:
Input Gate: Whether the information is stored in the storage unit is determined by this threshold, denoted as it.
Forget Gate: Whether the information stored in the storage unit at the previous time is kept in the storage unit at the current time is determined by this threshold, denoted as ft.
Output Gate: Whether the information in the storage unit at the current moment enters the hidden state ht is determined by this threshold, denoted as ot.
Historical information can be saved, read, updated, and reset by the memory unit; it is the core of the LSTM unit and is denoted as Ct.

Figure 1. LSTM structure diagram.
The LSTM neural network at moment t is expressed as follows:

ft = σ(Wf · [ht−1, Xt] + bf), (1)

it = σ(Wi · [ht−1, Xt] + bi), (2)

ot = σ(Wo · [ht−1, Xt] + bo), (3)

C̃t = tanh(WC · [ht−1, Xt] + bc), (4)

Ct = ft × Ct−1 + it ⊗ C̃t, (5)

ht = ot × tanh(Ct), (6)

where ft, it, ot, and ht are given by (1), (2), (3), and (6), respectively; Wf, Wi, Wo, and WC denote the recursive connection weights of the corresponding thresholds; σ is the sigmoid function and tanh is the hyperbolic tangent function, the activation functions given in Equations (7) and (8).

σ(x) = 1 / (1 + e^(−x)), (7)

tanh(x) = sinh(x) / cosh(x) = (e^x − e^(−x)) / (e^x + e^(−x)), (8)
The state at the previous point in time needs to be discarded, and the content saved
to the memory unit is determined by the forgetting gate. The sigmoid function is used to
decide whether Ct −1 is cumulatively retained or not. Cumulative retention is achieved
when the sigmoid function is equal to 1 but is absent when the function is equal to 0.
The input gate contains the output ht −1 from the previous moment and the input Xt at
this time, and the sigmoid function is used to control how much to add to Ct. A candidate state C̃t is also created using the tanh function. The two parts are then multiplied to determine their contribution to Ct, and the contribution of the forget gate is added to obtain the expression for Ct.
The output gate is a sigmoid function that determines which parts of Ct need to be output, which gives the ot expression. Ct is passed through the tanh function and then multiplied with ot to obtain the final output ht, which signals the end of the LSTM work for one moment. How much of the memory unit is forgotten, retained, and output at each moment is determined by the three thresholds, and the result is finally transferred to the state of the current moment.
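To make Equations (1)–(6) concrete, the following is a minimal sketch of a single LSTM cell step in plain MATLAB. The weight matrices, bias vectors, and dimensions are illustrative assumptions, not values from the trained model in this study.

% One LSTM cell step implementing Equations (1)-(6).
% nh: number of hidden units, nx: number of input features (illustrative sizes).
nh = 4; nx = 3;
sigmoid = @(z) 1 ./ (1 + exp(-z));          % Equation (7)

% Randomly initialised weights/biases stand in for learned parameters.
Wf = randn(nh, nh + nx); bf = zeros(nh, 1);
Wi = randn(nh, nh + nx); bi = zeros(nh, 1);
Wo = randn(nh, nh + nx); bo = zeros(nh, 1);
Wc = randn(nh, nh + nx); bc = zeros(nh, 1);

h_prev = zeros(nh, 1);                      % h_{t-1}
C_prev = zeros(nh, 1);                      % C_{t-1}
x_t    = randn(nx, 1);                      % current input X_t

z       = [h_prev; x_t];                    % concatenation [h_{t-1}, X_t]
f_t     = sigmoid(Wf * z + bf);             % forget gate, Eq. (1)
i_t     = sigmoid(Wi * z + bi);             % input gate,  Eq. (2)
o_t     = sigmoid(Wo * z + bo);             % output gate, Eq. (3)
C_tilde = tanh(Wc * z + bc);                % candidate state, Eq. (4)
C_t     = f_t .* C_prev + i_t .* C_tilde;   % cell state update, Eq. (5)
h_t     = o_t .* tanh(C_t);                 % hidden state output, Eq. (6)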
The prediction results of LSTM model are affected by the learning rate, weights, activa-
tion function, step size, and number of batches in the network. For example, a learning rate set too high causes convergence failure, while a learning rate set too low consumes a lot of training time to reach the optimal value. Problems such as gradient explosion and vanishing gradients can occur when the activation function is poorly chosen. Therefore, the LSTM prediction model needs to be trained, and appropriate parameters must be selected to improve the prediction accuracy.

2.2. Spearman Correlation Coefficient Method


As mentioned above, the factors affecting the exhaust temperature are typically influenced by uncertain dynamic environmental conditions. To identify them, the Spearman correlation analysis method is adopted. The change trend and correlation strength between two variables are tested by Spearman's correlation coefficient method, which is based on the differences between each pair of values in two columns of paired ranks. If the correlation coefficient between two variables is close to +1 or −1, then the correlation is strong. The Spearman correlation coefficient rp can be expressed
as follows:
rp = 1 − (6 ∑ di²) / (n(n² − 1)), (9)
where n is the sample size and di is the rank difference of the ith data pair. The values
of rp are within the range of [−1, 1]. If rp = 1, then the correlation is perfectly positive;
if rp = −1, then the correlation is perfectly negative. The absolute value is used as the basis
to judge the correlation. The strength of correlation between variables is divided into four
categories, as shown in Table 1 [28].

Table 1. Correlation intensity.

Value of r Strength of Relationship


−1.0 to −0.5 or 0.5 to 1.0 Strong
−0.5 to −0.3 or 0.3 to 0.5 Moderate
−0.3 to −0.1 or 0.1 to 0.3 Weak
−0.1 to 0.1 None or very weak

2.3. AESR-LSTM Hybrid Prediction Model


AESR-LSTM hybrid prediction model is proposed to combine Spearman correlation
coefficient method with LSTM network, and artificial experience is added to conduct
exhaust temperature prediction. First, sensor data is analyzed to eliminate overlapping
features. Spearman correlation coefficient method is used to discard redundant information
in the original data because exhaust temperature will be affected by various factors and
there is correlation between various factors. Finally, the variables are supplemented by
artificial experience, and the efficiency of the algorithm and the accuracy of prediction
are improved. The cross-validation and grid search methods are used to optimize the
hyperparameters of the neural network to obtain the optimal combination of parameters
with maximum prediction accuracy. After the optimal parameter set is selected by grid
search and cross-validation, the model is trained again using the optimal parameters.
The overall framework and partial procedures of AESR-LSTM are shown in Figure 2 and
Algorithm 1. The specific modeling steps are presented as follows.
Step 1: The influencing factors related to exhaust temperature are analyzed to collect
relevant time series data Xt on the basis of engineering experience.
Step 2: The training and test sets are divided into pieces in a ratio of 7:3.
Step 3: Data is preprocessed, Spearman correlation coefficient is used for feature selection to process the original data, redundant information is eliminated, highly correlated variables are extracted, and variables are supplemented by mechanisms and human experience to obtain the best input Xt*.
Step 4: The hyperparameters in the LSTM neural network model are adjusted through iterative optimization combined with cross-validation and grid search methods to select the optimal combination of parameters and improve its prediction accuracy.
Step 5: After the optimal parameter set is selected by grid search and cross-validation, the model is trained again using the optimal parameters.
Step 6: The test set samples are input into the prediction model to predict the combustion chamber exhaust temperature of marine diesel engine sets.
Step 7: The prediction performance of the proposed model is compared with those of other prediction models.

Figure 2. General Framework Structure.

Algorithm 1 Partial procedures


function coeff = mySpearman(X, Y)
% Spearman rank correlation coefficient between two series, cf. Equation (9).
if length(X) ~= length(Y)
    error('Unequal dimensions');
end
N = length(X);
Xrank = zeros(1, N);
Yrank = zeros(1, N);
for i = 1:N                       % rank the elements of X (ties receive averaged ranks)
    cont1 = 1;
    cont2 = -1;
    for j = 1:N
        if X(i) < X(j)
            cont1 = cont1 + 1;
        elseif X(i) == X(j)
            cont2 = cont2 + 1;
        end
    end
    Xrank(i) = cont1 + mean(0:cont2);
end
for i = 1:N                       % rank the elements of Y in the same way
    cont1 = 1;
    cont2 = -1;
    for j = 1:N
        if Y(i) < Y(j)
            cont1 = cont1 + 1;
        elseif Y(i) == Y(j)
            cont2 = cont2 + 1;
        end
    end
    Yrank(i) = cont1 + mean(0:cont2);
end
d = Xrank - Yrank;                % rank differences di, applied in Equation (9)
coeff = 1 - 6 * sum(d.^2) / (N * (N^2 - 1));
end
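For reference, the routine above can be applied to any two monitored series; a minimal hypothetical call (the sample values are illustrative only, not from the ship data set) might look like this:

% Hypothetical example: correlation between exhaust temperature samples and
% lubricating oil outlet temperature samples (illustrative values only).
T   = [345 352 360 358 365 371];
To2 = [61.2 62.0 63.1 62.8 64.0 64.5];
rp  = mySpearman(T, To2);   % returns a value in [-1, 1], interpreted per Table 1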

3. Case Study
3.1. Principle Analysis and Data Processing
In a ship, the power source is composed of the main engine and an auxiliary engine. The auxiliary power system is composed of machinery other than the diesel engine (main engine), including the fuel system, lubricating oil system, air system, cooling system, and other mechanical equipment. The main and auxiliary engines work together to propel the ship, and its composition structure is shown in Figure 3.

Figure 3. Sketch of the composition structure.

On the basis of the mechanism and data of the ship, the heat load of the marine diesel engine during operation is accurately reflected by the exhaust temperature. The amount, perfection, and timeliness of fuel combustion in the combustion chamber can be reflected by the exhaust temperature, as well as the high-temperature heating time and brightness of combustion chamber components. Hence, the exhaust temperature can be used to predict the heat load of the diesel engine set.
The exhaust temperature of a single cylinder is predicted as an example in this study to analyze the trend of heat load variation and the operating performance of the combustion chamber. A high exhaust temperature of the cylinder is due to poor internal combustion, which is related to the amount of fresh air in the cylinder, the cooling effect of the cooler, injector atomization quality, fuel viscosity, and cylinder compression pressure. Sensors are used to monitor the working condition and collect factors related to the exhaust temperature, including the high-temperature cooling fresh water outlet temperature, cylinder liner cooling water inlet pressure, piston cooling oil outlet temperature, and fuel pressure after the fuel filter. Determining the correlation and dependence among these data is important to predict the exhaust temperature of marine diesel engine sets.
Sensor monitoring data of the Chinese vessel Shanghai Fuhai are used in this study, which are uploaded every 25 min. The sampled relevant data of the initial variables are listed in Table 2. Field data for two months show that 28,160 pieces of ship data are measured via the ship's sensors and constitute the data set, which is randomly divided into training and test sets at a ratio of 7:3.

Table 2. Initial Variables.

Number Variable Description Unit
1 T Exhaust temperature °C
2 Ta1 Cylinder scavenge box temperature °C
3 To1 Diesel engine inlet oil temperature °C
4 To2 Lubricating oil outlet temperature °C
5 Po1 Diesel inlet oil pressure MPa
6 To3 Main engine inlet oil temperature °C
7 Tw1 High temperature cooling fresh water outlet temperature °C
8 Pw1 High temperature cooling fresh water inlet pressure MPa
9 Tw2 Cylinder liner cooling water outlet temperature °C
10 Ta2 Pressurized air temperature after cooler °C
11 Ta3 Exhaust temperature before supercharger °C
12 Tf1 Fuel oil temperature at unit inlet °C
13 To4 Outlet temperature of cylinder piston cooling oil °C
14 Pf1 Fuel pressure after fuel filter MPa
15 Pf2 Fuel inlet pressure of main engine MPa
16 Pw2 Inlet pressure of cylinder liner cooling water MPa
17 Po2 Pressurizer inlet oil pressure MPa
18 Ta4 Exhaust temperature after supercharger °C
19 NT Turbocharger speed rpm
20 Pa1 Exhaust valve air pressure MPa

The time series correlation data Xt′ associated with the exhaust temperature are collected as follows.

Xt′ = {Ta1, To1, To2, Po1, To3, Tw1, Pw1, Tw2, Ta2, Ta3, Tf1, To4, Pf1, Pf2, Pw2, Po2, Ta4, NT, Pa1}

The turbine of the turbocharger is driven by the inertial impulse of the exhaust gas, and fresh air is then pressurized into the cylinder. Thus, the overlapping variables, the supercharger front exhaust temperature Ta3 and rear exhaust temperature Ta4, must be eliminated to obtain the time series data as follows.

Xt″ = {Ta1, To1, To2, Po1, To3, Tw1, Pw1, Tw2, Ta2, Tf1, To4, Pf1, Pf2, Pw2, Po2, NT, Pa1}

The Spearman correlation coefficient method is used for feature selection of the data, and the input of the neural network is determined by the correlation between each pair of factors, as shown in Figure 4.

Figure 4. Heat map of the correlation matrix.

According to the correlation matrix in Table 1 and the above figure, the correlation coefficient between the exhaust temperature and To2 is 0.8997. Hence, the turbocharger lubricating oil outlet temperature is highly relevant to the exhaust temperature. This finding is consistent with the actual scenario: the viscosity of the lubricating oil is affected by the oil temperature, which increases the exhaust temperature. The correlation of the variable Tw2 is 0.8639, and how much heat is taken away from

the combustion chamber is determined by the outlet temperature of the cylinder liner
cooling water, thereby indicating its sensitivity to changes in the exhaust temperature. The
cylinder liner cooling water inlet pressure and the scavenge box temperature are important factors affecting the exhaust temperature. Six variables with correlations higher than 0.5 are
derived. The significance of their p-values is below 0.001.
If the temperature of the pressurized air after the cooler is excessively high, then the
exhaust temperature rises because the fresh gas entering the diesel engine is cooled by the
cooler after being pressurized by the turbocharger into the combustion chamber. With the
increase in supercharger speed, the increase in exhaust energy reflects the increase in exhaust temperature, because the high-temperature exhaust gas from the combustion chamber flows through the supercharger. Another factor to be considered is the fuel pressure after the diesel filter, since a faulty filter is reflected in the fuel pressure; fuel quality and the exhaust temperature can be affected by damaged filters.
The three variables Ta2, NT, and Pf1 mentioned above are all significant at the 0.001 level and affect the predicted variable, although their correlations are below 0.5, at 0.1117, 0.1863, and 0.3574, respectively. Therefore, these factors
are considered when deriving the final set of variables for the input model as follows.
Xt* = {Ta1, To2, To3, Tw2, Ta2, To4, Pf1, Pw2, NT}

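As a rough illustration of the screening step just described, the sketch below assumes the candidate variables are stored as columns of a matrix and that MATLAB's corr function (Statistics and Machine Learning Toolbox) is available; the data, thresholds from the text (|r| > 0.5, p < 0.001), and the expert-chosen column indices are illustrative assumptions rather than the authors' exact code.

% Stand-in data: X holds the 17 candidate variables of Xt'' as columns,
% T is the exhaust temperature target.
X = randn(500, 17);
T = randn(500, 1);

% Spearman correlation of every candidate with the target, with p-values
% (requires the Statistics and Machine Learning Toolbox).
[rho, pval] = corr(X, T, 'Type', 'Spearman');

keep   = abs(rho) > 0.5 & pval < 0.001;   % data-driven screening: strong and significant
expert = [9 16 19];                       % hypothetical column indices added from
                                          % engineering experience (e.g., Ta2, Pf1, NT)
selected = union(find(keep), expert);     % final AESR input set, cf. Xt*
Xstar = X(:, selected);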
3.2. Analysis of Modeling and Prediction Results


On the basis of Spearman correlation analysis, the top nine positively correlated
parameters are selected as model inputs in predicting the target output exhaust temperature
T. The inputs are divided into training and test sets in a ratio of 7:3, given the impact of data volume on learning ability in the data-driven approach. A combination of grid search and tenfold cross-validation is applied to improve the prediction performance of the model. The number of hyperparameter combinations to evaluate for the set X = {X1, X2, . . . , Xn} is ∏_{i=1}^{n} |hi|, where |hi| is the number of candidate values of the ith hyperparameter. Five hyperparameters are selected in this study: the number of hidden layers, the number of hidden units, the number of training rounds, the learning rate, and the batch size of the LSTM prediction network. The five hyperparameters affect the change trend of the loss function and are divided into two groups to examine how the loss function changes as the hyperparameters change.
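A minimal sketch of the grid search with ten-fold cross-validation described above is given below. The inner scoring call is only a placeholder: in practice it would train an LSTM with the given settings on nine folds and return the RMSE on the held-out fold; the candidate values are taken from Table 4.

% Grid search over the five hyperparameters with ten-fold cross-validation.
learnRates = [0.01 0.005 0.001];
hidLayers  = [3 4 5];
hidUnits   = [100 150 200];
epochs     = [100 200 300];
batchSizes = [64 128 256];
k = 10;

bestRMSE = inf;  bestCombo = [];
for lr = learnRates
  for nl = hidLayers
    for nu = hidUnits
      for ep = epochs
        for bs = batchSizes
          rmseFolds = zeros(1, k);
          for fold = 1:k
            % Placeholder score; replace with training/validation of the LSTM
            % using (lr, nl, nu, ep, bs) on the k-1 training folds.
            rmseFolds(fold) = 25 + 10 * rand;
          end
          meanRMSE = mean(rmseFolds);
          if meanRMSE < bestRMSE           % keep the combination with the lowest RMSE
            bestRMSE  = meanRMSE;
            bestCombo = [lr nl nu ep bs];  % cf. the optimal set reported in Table 4
          end
        end
      end
    end
  end
end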
The influence of the number of units and learning rounds of the five-layer neural
network on RMSE is shown in Figure 5. With the increase in the number of learning rounds,
the RMSE decreases first and then increases, and the RMSE of 100 units is generally lower
than that of other units. From Figure 6, we can see that the loss function is affected by
different hidden layers. Usually, higher values are caused by the low learning rate of 0.001.
For the 0.01 and 0.005 learning rates, five hidden layers perform better than other numbers of layers.

Figure 5. Influence of Unit Number and Learning Round Number on RMSE.

Figure 6. Influence of hidden layer on RMSE.

In the process of hyperparameter optimization, the combination with a low RMSE value is selected as the best hyperparameter combination. Some tuning results of the cross-validation grid search optimization are shown in Table 3 below.
Table 3. Cross-Validation grid search optimization and tuning results.

Learning Rate Hidden Layers Hidden Units Training Rounds Batch Size RMSE
0.01 4 150 200 128 27.82
0.005 5 50 400 64 27.92
0.001 4 150 200 256 34.25
0.005 3 150 300 128 26.44
0.001 3 50 300 64 33.35
0.005 4 100 200 64 26.35
0.01 5 100 300 128 25.21
0.001 3 100 300 64 30.48
0.01 5 50 200 256 29.15
0.001 4 50 400 128 32.19

After optimization, the hyperparameter combination with the best RMSE is obtained. The hyperparameter candidate values and optimal values of the LSTM prediction model are shown in Table 4 below.

Table 4. Candidate and optimal sets of hyperparameters for the LSTM model.

Hyperparameter Name Hyperparameter Values Example of Optimal Hyperparameter Values
Learning rate {0.01, 0.005, 0.001} 0.01
Hidden layers {3, 4, 5} 5
Hidden units {100, 150, 200} 100
Training rounds {100, 200, 300} 300
Batch size {64, 128, 256} 128

After the optimal parameter combination is selected, the training set is input into the LSTM model for training. At the same time, dropout (discard) is introduced to prevent the model from overfitting. The training curve and the scatter diagram of the training relative error are shown in Figures 7 and 8 below. From the figures, we can see that the predicted values basically coincide with the actual values during training, and the training error finally approaches the zero line.

Figure 7. Model training result curve.

Figure 8. Relative error of model training.
The test set is fed into the trained model for exhaust temperature prediction. The prediction results are illustrated in Figure 9. The strong generalization ability of the prediction model is reflected by the consistency between the predicted and measured temperature values. The results of the selected forecasting model are subsequently analyzed by comparison with those of traditional forecasting methods, as described in detail below.

Figure 9. Model prediction outcomes.

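As an illustration of the training and prediction steps behind Figures 7–9, the following sketch assumes MATLAB's Deep Learning Toolbox is available; the stand-in data, layer stack, and dropout rate are assumptions for illustration and not the authors' exact implementation, although the optimal hyperparameter values follow Table 4.

% Stand-in data: 9 selected features over time, exhaust temperature as response.
Xtrain = randn(9, 200);  Ytrain = randn(1, 200);
Xtest  = randn(9, 80);

numFeatures = 9;  numHiddenUnits = 100;      % optimal values from Table 4

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits)                % repeat lstmLayer calls to deepen the stack
    dropoutLayer(0.2)                        % dropout ("discard") against overfitting; rate assumed
    fullyConnectedLayer(1)
    regressionLayer];

options = trainingOptions('adam', ...
    'InitialLearnRate', 0.01, ...            % Table 4 optimum
    'MaxEpochs', 300, ...
    'MiniBatchSize', 128, ...
    'Shuffle', 'every-epoch', ...
    'Verbose', false);

net   = trainNetwork(Xtrain, Ytrain, layers, options);
Ypred = predict(net, Xtest);                 % exhaust temperature prediction on the test set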
3.3. Multimodel Comparative Analysis


In this study, the Spearman correlation coefficient method and the LSTM network are combined to predict time series data. The same data set is input into other prediction models, and the results of the other prediction methods are compared with those of the proposed method for further analysis. The results of each prediction model are shown in Figures 10 and 11.
Figure 10. Comparison of prediction results.

Figure 11. The forecasting and actual temperature for different models: (a) Training and test results of AESR-LSTM, (b) Training and test results of SR-LSTM, (c) Training and test results of LSTM, and (d) Training and test results of BP.

From Figure 10, the prediction curve (red line) of AESR-LSTM model with human
experience is closer to the true value (blue line). As can be seen in Figure 11, except for a
few predicted outliers, the system’s scatter plot of forecasting and actual values is closest to
the diagonal, which indicates that the difference between the forecasting value and actual
value is the smallest.
At the same time, several commonly used evaluation indicators are adopted to further verify the prediction performance of the AESR-LSTM model. The prediction performance of the four models is compared, as shown in Table 5.
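The three indicators listed in Table 5 below can be computed directly from the measured and predicted series; a minimal sketch is given here, where the sample values of Tr and Tp are illustrative stand-ins for the measured and predicted exhaust temperatures.

% Evaluation indicators of Table 5 for measured Tr and predicted Tp (same length N).
Tr = [352 358 365 371 369 375];              % stand-in measured values
Tp = [350 360 362 374 368 378];              % stand-in predicted values

N    = numel(Tr);
MAE  = sum(abs(Tr - Tp)) / N;                % mean absolute error
MAPE = 100 / N * sum(abs((Tr - Tp) ./ Tr));  % mean absolute percentage error, in %
RMSE = sqrt(sum((Tr - Tp).^2) / N);          % root-mean-square error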

Table 5. Evaluation indicators.

Indicators Formula
Mean absolute error (MAE) MAE = (1/N) ∑_{i=1}^{N} |Tri − Tpi|
Mean absolute percentage error (MAPE) MAPE = (100%/N) ∑_{i=1}^{N} |Tri − Tpi| / Tri
Root-mean-square error (RMSE) RMSE = sqrt((1/N) ∑_{i=1}^{N} (Tri − Tpi)²)
N is the number of predicted values, Tri is the original data value, and Tpi is the predicted value. The prediction performance of the model is indicated by the values of MAPE, MAE, and RMSE. The MAPE, MAE, and RMSE of the four models were calculated separately to reflect the goodness of each prediction model through these indexes. Figure 12 shows the values of the evaluation indexes for the four prediction models. The error bars in the figure represent 95% confidence intervals. The mean absolute percentage, mean absolute, and root-mean-square errors of the proposed AESR-LSTM model are 0.089, 10.5403, and 27.5408, respectively, the best indicators among the prediction models. The feature inputs selected by the improved AESR-LSTM model are better than those obtained by traditional methods for data trend prediction, so the method optimization is effective.
traditional methods for data trend prediction, so the method optimization is effective.

(a) (b) (c)


Figure12.
Figure 12.Comparative
Comparativeresults
resultshistogram
histogram of
of model
model evaluation
evaluation metrics:
metrics:(a)
(a)MAPE
MAPEvalue
valuehistogram
histogram of
of different models, (b) MAE value histogram of different models, and (c) RMSE value histogram
different models, (b) MAE value histogram of different models, and (c) RMSE value histogram of
of different models.
different models.
4. Conclusions
According to the data set collected in the marine cabin system, an AESR-LSTM data
trend prediction model with artificial experience is constructed in this study. The model
trend prediction
can be model
used for heat with
load artificialfault
prediction, experience is constructed
detection, and diagnosis in of
this study.diesel
marine The model
en‐
can be used for heat load prediction, fault detection, and diagnosis of marine
gines. Spearman correlation coefficient method is used to collect relevant raw data diesel engines.
for
Spearman correlation
feature selection, coefficient
and the optimal method is used by
input is selected to collect
artificialrelevant
empiricalraw
anddata for feature
significance
selection, and the optimal input is selected by artificial empirical and significance check.
The cross-validation and grid search methods are combined, and the hyperparameters are
adjusted scientifically to avoid the randomness of the validation set. After the optimal
parameter set is selected by grid search and cross-validation, the model is trained again
with the optimal parameters, and the test set data is input into the training model to obtain
the prediction results. The findings are subsequently compared and analyzed with those of
other prediction models.
(1) The Spearman correlation coefficient method incorporating artificial experience
was proposed to select features on the basis of operational monitoring data collected from
the sensors. The correlation, redundancy, and significance of variable sets are analyzed
separately, and the nine monitoring characteristic parameters with the maximum influence
on the exhaust temperature are selected. Data-driven analysis and human experience are
combined to provide optimal input features for the predictive models.
(2) The LSTM prediction model is trained with parameter tuning in combination with
cross-validation grid search to obtain the prediction and evaluation metrics. The results
and indicators of several models were compared. The results show that predicted value of
AESR-LSTM are closest to the true value, and its evaluation indicators MAPE, MAE and
RMSE are the best, which are 0.089, 10.5403, and 27.5408, respectively.
(3) The shortcomings of only using a single method can be overcome by the fusion of
multiple methods, and the data can be scientifically and effectively screened to improve the
effectiveness of the model in data prediction and fault diagnosis of marine diesel engines.
Thus, the hybrid algorithm model is stable, and the error tolerance of the prediction results
is reduced.

(4) The proposed method is based on the mechanism and data of the ship. All factors
that may cause thermal load failure of the diesel engine are taken into account and can
be used to analyze and refer to the working performance of the marine diesel engine.
The prediction data can achieve effective fault detection and maintenance of ships for
the implementation of preemptive corrective measures before ship failure, prevent ship
downtime due to damaged components caused by excessive heat load, improve fuel
economy and equipment reliability of ship diesel engines, and reduce economic losses.
A novel method combining artificial experience with data-driven modeling is proposed. The selected optimal feature set is input into the model for prediction, and better prediction results are obtained. As such, a feasible extension of machine learning to marine diesel engine thermal load prediction and fault diagnosis is provided. Future research
diesel engine thermal load prediction and fault diagnosis is provided. Future research
can focus on the optimization of methods, better operation parameter combination will be
obtained through data mining techniques, and independent fault detection system will be
developed to provide more convenience and information for ship operators.

Author Contributions: Conceptualization, R.Z.; methodology, R.Z.; software, R.Z.; validation, R.Z.;
formal analysis, R.Z.; investigation, R.Z.; resources, R.Z.; data curation, R.Z. and J.C.; writing—original
draft preparation, R.Z.; writing—review and editing, R.Z. and J.C.; visualization, R.Z. and X.W.; supervi-
sion, J.C. and G.Z.; project administration, J.C. and X.Y.; funding acquisition, J.C. All authors have read
and agreed to the published version of the manuscript.
Funding: This research was supported by the Key Laboratory of Marine Power Engineering & Technol-
ogy (Wuhan University of Technology), Ministry of Transport (No. KLMPET2018-10). This research was
supported by the Scientific Research Program supported by Hubei Provincial Department of Education
in 2021 (No. Q20211510).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Gabina, G.; Martin, L. Performance of marine diesel engine in propulsion mode with a waste oil-based alternative fuel. Fuel 2019,
235, 259–268. [CrossRef]
2. Wei, L.J.; Cheng, R.P. Combustion process and NOx emissions of a marine auxiliary diesel engine fuelled with waste cooking oil
biodiesel blends. Energy 2018, 144, 73–80. [CrossRef]
3. Geng, P.; Tan, Q.M. Experimental investigation on NOx and green house gas emissions from a marine auxiliary diesel engine
using ultralow sulfur light fuel. Sci. Total Environ. 2016, 572, 467–475. [CrossRef]
4. Gospic, I.; Glavan, I. Economic and Environmental Effects of the Marine Diesel Engine Trigeneration Energy Systems. J. Mar. Sci.
Eng. 2021, 9, 773. [CrossRef]
5. Li, B.; Cui, Y. Marine diesel exhaust manifold failure and life prediction under high-temperature vibration. Proc. Inst. Mech. Eng.
Part C J. Mech. Eng. Sci. 2022, 236, 6180–6191. [CrossRef]
6. Zhang, H.B.; Cui, Y. Fatigue life prediction analysis of high-intensity marine diesel engine cylinder head based on fast thermal
fluid solid coupling method. Braz. Soc. Mech. Sci. Eng. 2021, 43, 327. [CrossRef]
7. Zhang, H.B.; Liang, G. Experimental and numerical study of inelastic behavior based on simulated cylinder head specimen under
thermal cycling conditions. Braz. Soc. Mech. Sci. Eng. 2022, 44, 372. [CrossRef]
8. EI-Bitar, T.; EI-Meligy, M. Investigation of exhaust valve failure in a marine diesel engine. Eng. Fail. Anal. 2020, 114, 104574.
[CrossRef]
9. Liu, Z.; Ning, X. Starved lubrication analysis for the top ring and cylinder liner of a two-stroke marine diesel engine considering
the thermal effect of friction. Int. J. Engine Res. 2021. [CrossRef]
10. He, T.; Lu, X.Q. Thermomechanical Fatigue Life Prediction for a Marine Diesel Engine Piston considering Ring Dynamics. Adv.
Mech. Eng. 2014, 6, 429637. [CrossRef]
11. Safi, A.; Ahmad, Z. A Fault Tolerant Surveillance System for Fire Detection and Prevention Using LoRaWAN in Smart Buildings.
Sensors 2022, 22, 8411. [CrossRef]
12. Dere, C.; Deniz, C. Effect analysis on energy efficiency enhancement of controlled cylinder liner temperatures in marine diesel
engines with model based approach. Energy Convers. Manag. 2020, 220, 113015. [CrossRef]

13. Tosun, E.; Aydin, K. Comparison of linear regression and artificial neural network model of a diesel engine fueled with biodiesel-
alcohol mixtures. Alex. Eng. J. 2016, 55, 3081–3089. [CrossRef]
14. Shin, S.; Lee, S. Deep learning procedure for knock, performance and emission prediction at steady-state condition of a gasoline
engine. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2020, 234, 3347–3361. [CrossRef]
15. Rezaei, J.; Shahbakhti, M. Performance prediction of HCCI engines with oxygenated fuels using artificial neural networks. Appl.
Energy 2015, 138, 460–473. [CrossRef]
16. Huang, Q.; Liu, J.L. On the use of artificial neural networks to model the performance and emissions of a heavy-duty natural gas
spark ignition engine. Int. J. Engine Res. 2022, 23, 1879–1898. [CrossRef]
17. Cay, Y. Prediction of a gasoline engine performance with artificial neural network. Fuel 2013, 111, 324–331. [CrossRef]
18. Liu, J.L.; Huang, Q. Machine learning assisted prediction of exhaust gas temperature of a heavy-duty natural gas spark ignition
engine. Appl. Energy 2021, 300, 117413. [CrossRef]
19. Uslu, S.; Celik, M.B. Prediction of engine emissions and performance with artificial neural networks in a single cylinder diesel
engine using diethyl ether. Eng. Sci. Technol. 2018, 21, 1194–1201. [CrossRef]
20. Pascanu, R.; Mikolov, T. On the difficulty of training recurrent neural networks. arXiv 2013, arXiv:1211.5063.
21. Zhang, Z.; Wang, C. Passenger Flow Forecast of Rail Station Based on Multi-Source Data and Long Short Term Memory Network.
IEEE Access 2020, 8, 28475–28483. [CrossRef]
22. Jiao, R.H.; Zhang, T.M. Short-Term Non-Residential Load Forecasting Based on Multiple Sequences LSTM Recurrent Neural
Network. IEEE Access 2018, 6, 59438–59448. [CrossRef]
23. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef] [PubMed]
24. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory Technical Report FKI-207-95; Technische Universitat Munchen, Fakultat fur
Informatik: Vienna, Austria, 1995.
25. Gers, F.A.; Schmidhuber, J. Learning to forget: Continual prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. [CrossRef]
[PubMed]
26. Graves, A.; Schmidhuber, J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.
Neural Netw. 2005, 18, 602–610. [CrossRef] [PubMed]
27. Shobana, J.; Murali, M. An efficient sentiment analysis methodology based on long short-term memory networks. Complex Intell.
Syst. 2021, 7, 2485–2501. [CrossRef]
28. Xiao, C.W.; Ye, J.Q. Using Spearman’s correlation coefficients for exploratory data analysis on big dataset. Concurr. Comput. Pract.
Exp. 2015, 28, 3866–3878. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
