Article
A Novel Inverse Kinematic Solution of a Six-DOF Robot Using
Neural Networks Based on the Taguchi Optimization Technique
Teodoro Ibarra-Pérez 1, * , José Manuel Ortiz-Rodríguez 2 , Fernando Olivera-Domingo 1 ,
Héctor A. Guerrero-Osuna 3 , Hamurabi Gamboa-Rosales 3 and Ma. del Rosario Martínez-Blanco 2, *
1 Instituto Politécnico Nacional, Unidad Profesional Interdisciplinaria de Ingeniería Campus Zacatecas (UPIIZ),
Zacatecas 98160, Mexico
2 Laboratorio de Innovación y Desarrollo Tecnológico en Inteligencia Artificial (LIDTIA),
Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico
3 Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico
* Correspondence: tibarrap@[Link] (T.I.-P.); mrosariomb@[Link] (M.d.R.M.-B)
Abstract: The choice of structural parameters in the design of artificial neural networks is generally based on trial-and-error procedures. They are usually estimated from the previous experience of the researcher, investing large amounts of time and processing resources during network training, which are usually limited and do not guarantee the optimal selection of parameters. This paper presents a procedure for the optimization of the training dataset and of the structural parameters of a neural network through the application of a robust neural network design methodology based on the design philosophy proposed by Genichi Taguchi, applied to the solution of the inverse kinematics of an open-source, six-degrees-of-freedom robotic manipulator. The results obtained during the optimization of the structural parameters of the network show an improvement in the accuracy of the results, reaching a high prediction percentage and maintaining a margin of error of less than 5%.

Keywords: backpropagation; optimization methods; inverse kinematics; robotics

Citation: Ibarra-Pérez, T.; Ortiz-Rodríguez, J.M.; Olivera-Domingo, F.; Guerrero-Osuna, H.A.; Gamboa-Rosales, H.; Martínez-Blanco, M.d.R. A Novel Inverse Kinematic Solution of a Six-DOF Robot Using Neural Networks Based on the Taguchi Optimization Technique. Appl. Sci. 2022, 12, 9512.

Academic Editor: Luis Gracia

Received: 24 August 2022; Accepted: 17 September 2022; Published: 22 September 2022

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

1. Introduction

One of the main problems in the design of neural networks is the selection of the structural parameters of the network and of their corresponding values before performing the training. In this work, the robust design artificial neural network (RDANN) methodology is used. Its main focus is reducing the number of experiments to be carried out by means of the fractional factorial method, a statistical procedure based on the robust design philosophy proposed by Genichi Taguchi. This technique allows one to choose settings of the control factors that make the process insensitive to noise factors [1,2].

Currently, the selection of the structural parameters in the design of artificial neural networks (ANNs) remains a complex task. The design of neural networks implies the optimal selection of a set of structural parameters in order to obtain better convergence during the training process and high precision in the results. In [1], the feasibility of this type of approach for the optimization of structural parameters in the design of a backpropagation artificial neural network (BPANN) for the determination of operational policies in a manufacturing system is demonstrated, where it is shown that the Taguchi method allows designers to improve both the learning speed of the network and the precision of the obtained results.

Most designers select an architecture type and determine the various structural parameters of the chosen network. However, there are no clear rules on how to choose those parameters for the selected network architecture, although these parameters determine the success of the network training. The selection of the structural parameters of the network
In this case, if a desired level of performance is not maintained, the levels in the previ-
ously established design parameters are changed until the desired performance is obtained.
In each experiment, the responses are observed in order to determine the appropriate levels
in the design of the structural parameters of the network [7].
A drawback of this type of procedure is that one parameter is evaluated while the others are held at a single level, so the level selected for a variable may not be the best by the end of the experiment, since most of the design variables involved are likely to change their values. A possible solution would be to evaluate all possible combinations of the parameters, that is, to carry out a complete factorial design. However, the number of combinations can be very large due to the number of levels and design parameters previously established, so this method can be computationally expensive and time-consuming.
Due to all these limitations, the scientific community has shown special interest in
the implementation of new approaches and procedures applied to the optimization of
structural parameters in the search to generate better performance in ANNs [8–13].
Currently, ANNs can be trained to solve problems that are complex for a human or for a conventional computer, since they yield results with a high degree of precision and a significant reduction in error in real-time applications. In recent decades, ANNs have been successfully applied in different fields, including pattern recognition, classification, identification, speech, vision, control systems, and robotics. The latter has raised special interest among researchers in the field, particularly the solution of the inverse kinematics of manipulators with six or more degrees of freedom, due to the great flexibility of control they offer for the execution of very complex tasks [14–17].
In [18], a BPNN algorithm optimized by the Fruit Fly Optimization Algorithm (FOA) is proposed to find the solution of the inverse kinematics of a four-DOF robot, obtaining an output error range of −0.04686 to 0.1271, smaller than that obtained by a plain BPNN. In [19], a BPNN
algorithm, optimized by means of particle swarm optimization (PSO), is studied to solve
the inverse kinematic problem in a six-DOF UR3 robot applied in puncture surgery, where
convergence in the precision of the results, as well as the speed and generalization capacity
of the proposed network, is improved. In [20], a deep learning approach is proposed to solve
the inverse kinematics in a seven-DOF manipulator. The approach is fast, easy to implement, and more stable, showing less sensitivity to hyperparameters. In [21], a
combination of swarm intelligence (SI) and the product of exponentials (PoEs) is used to
solve the inverse kinematics in a seven-DOF manipulator, where they are compared with the
conventional inverse kinematics and standard PSO algorithms. In [22], the main approach
is based on a redundant manipulator inverse kinematic problem that is formulated as a
quadratic programming optimization problem solved by different types of recurrent neural
networks. In [23], an approach is proposed to address the complexity of solving the inverse
kinematics in a seven-DOF serial manipulator through an algorithm based on the Artificial
Bee Colony (ABC) optimization algorithm, where two control parameters are used in order
to adjust the search to optimize the distribution of the sources. In [24], an optimization
approach is shown in the planning of the trajectories applied in a five-bar parallel robot for
real-time control, minimizing the trajectory time and avoiding singularities in the parallel
manipulator, achieving an error of less than 0.7◦ at the joints.
Factorial experimental design is a statistical technique used to identify and measure
the effect that one variable has on another variable of interest. In 1920, R. A. Fisher
studied multiple factors in the agricultural field to determine the effect of each factor on the
response variable, as well as the effect of the interactions between factors on this variable.
This method is known as the factorial design of experiments. Factors are variables that
determine the functionality of a product or process and significantly influence system
performance and can usually be controlled. To evaluate the impact of each variable, the factors must be set at least at two levels; therefore, given k factors with l levels each, a complete factorial design that includes all possible combinations of these factors and levels produces a total of l^k experimental runs. Obviously, as k or l increases, the number of experiments may become unfeasible to carry out, since a significant number of factors implies a very large number of experiments. For this reason, fractional factorial designs have been introduced, which require only a fraction of the runs of a complete factorial design while still allowing a sufficient number of effects to be estimated [25,26].
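To make the growth concrete: seven factors at three levels would require 3^7 = 2187 full-factorial runs, whereas the crossed Taguchi arrays used later in this paper (L9(3^4) × L4(3^2)) cover their factors in only 36 runs. A minimal sketch of full-factorial enumeration (illustrative Python, not part of the original work):

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate all combinations of factor levels: l^k runs for
    k factors with l levels each."""
    return list(product(*levels_per_factor))

# 7 factors at 3 levels each -> 3^7 = 2187 experimental runs
runs = full_factorial([[0, 1, 2]] * 7)
print(len(runs))  # 2187
```

The exponential growth of this run count is what motivates the fractional designs discussed next.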
Genichi Taguchi is considered to be the author of robust parameter design through
a procedure focused on reducing variation and/or sensitivity to noise in the design of
products or processes, which is based on the concept of fractional factorial design. Through
the implementation of orthogonal arrays (OA) and fractional factorial design, it is possible
to analyze a wide range of parameters through a reduced number of experiments, ensuring
a balanced comparison between the factors involved and the interaction with their different
levels [2,27,28].
The Taguchi method is applied in four stages:
1. Selection of design and noise variables. In this stage, the most important parameters for
the product/process are considered, taking into account the quality characteristics.
Generally, there are variables that can be controlled by the user and others that cannot.
These types of variables are known as design and noise factors, respectively, which
have an important influence on the operation of the product/process. They can be
determined mainly by answering the following questions: What is the optimal design
condition? What factors contribute to the results and to what extent? What will be the
expected result?
2. Design and experimentation. An OA is established, which contains the organization of
the experiment taking into account the levels established for each of the factors in order
to minimize the effects produced by noise factors. In other words, the adjustments
made to the factors must be determined in such a way that there is the least variation
in the response of the product/process, and the mean is established as close as possible
to the desired objective. The OA allows the implementation of a balanced design in
the weighting of the pre-established levels for each factor involved since it is possible
to evaluate various factors with a minimum number of tests, obtaining a considerable
amount of information through the application of few tests. The mean and variance of
the response obtained in the OA configuration are combined into a single performance
measure known as the signal-to-noise ratio (S/N).
3. Analysis of results. The S/N ratio is a quality indicator by which the effect produced
   on a particular parameter can be evaluated. For dynamic characteristics, the variation
   in the obtained response is expressed by the S/N ratio, given in the following equation:

   S/N = 10 · log10(β_i / MSE_i),   (1)

   where β_i is the square of the largest value of the signal, and MSE_i represents the
   mean squared deviation in the performance of the neural network, or in other words,
   the mean square of the distance between the measured response and the best-fit line.
A valid robustness measure is related to obtaining the highest values in the S/N ratio,
because the configurations of control factors that minimize the effects on noise factors
can be identified.
4. Execution and confirmation of tests in optimal conditions. In this stage, a confirmation
experiment is carried out by performing training with optimal design conditions in
order to calculate the performance robustness measure and verify if this value is close
to the predicted value.
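The S/N computation of Equation (1) in stage 3 can be sketched as follows (illustrative Python, not the paper's code):

```python
import math

def signal_to_noise(beta, mse):
    """S/N ratio of Equation (1): 10 * log10(beta / mse), in dB.
    beta is the square of the largest signal value; mse is the mean
    squared deviation of the network's performance."""
    return 10.0 * math.log10(beta / mse)

# For the same signal, a configuration with a smaller deviation scores higher:
print(signal_to_noise(beta=4.0, mse=0.01))  # ~26.02 dB
print(signal_to_noise(beta=4.0, mse=0.10))  # ~16.02 dB
```

Maximizing this quantity over the orthogonal-array trials identifies the most robust factor settings.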
This work uses the Denavit–Hartenberg (D–H) parameters to obtain a kinematic model through four basic transformations that are determined from the geometric characteristics of the Quetzal manipulator to be analyzed [43]. The D–H parameters are shown in Table 1.
The basic transformations represent a sequence of rotations and translations, where the
reference system of element i is related to the system of element i − 1. The transformation
matrix is given by Equation (2).
^{i−1}_{i}T = R_Z(θ_i) · D_Z(d_i) · D_X(a_{i−1}) · R_X(α_{i−1})   (2)
Carrying out the multiplication of the four matrices, Equation (3) is obtained:
^{i−1}_{i}T =
[ cθ_i    −sθ_i·cα_{i−1}     sθ_i·sα_{i−1}    a_{i−1}·cθ_i ]
[ sθ_i     cθ_i·cα_{i−1}    −cθ_i·sα_{i−1}    a_{i−1}·sθ_i ]
[ 0        sα_{i−1}           cα_{i−1}         d_i          ]
[ 0        0                  0                1            ]   (3)
where ^{i−1}_{i}T is the D–H transformation matrix from coordinate system i to i − 1, R_Z(θ_i) is the rotation matrix representing a rotation θ_i around the Z axis, D_Z(d_i) is the translation matrix representing a translation of d_i along the Z axis, D_X(a_{i−1}) is the translation matrix representing a translation of a_{i−1} along the X axis, R_X(α_{i−1}) is the rotation matrix representing a rotation α_{i−1} around the X axis, cθ_i is shorthand for cos(θ_i), sθ_i is shorthand for sin(θ_i), etc. The transformation matrices of the joints are multiplied together to obtain the initial position of the end effector in the base reference system, as shown in Equation (4).
^0_6T = ^0_1A · ^1_2A · ^2_3A · ^3_4A · ^4_5A · ^5_6A =
[ ^0_6R_(3×3)   ^0P_6 (3×1) ]
[ 0             1           ]
=
[ n_x   o_x   a_x   p_x ]
[ n_y   o_y   a_y   p_y ]
[ n_z   o_z   a_z   p_z ]
[ 0     0     0     1   ]   (4)
Therefore, the equation of the forward kinematics of the Quetzal manipulator can be
expressed as shown in Equation (5).
F_forward_k(θ_1, θ_2, θ_3, θ_4, θ_5, θ_6) = (p_x, p_y, p_z, n_x, n_y, n_z, o_x, o_y, o_z, a_x, a_y, a_z)   (5)
As shown in Equation (5), the position of the end effector of the manipulator can be
obtained from the angular values of the six joints of the manipulator. However, in practice, it
is necessary to obtain the angles at each of the joints through a given position, so it is necessary
to calculate the inverse kinematics, which can be expressed as shown in Equation (6).
F_inverse_k(p_x, p_y, p_z, n_x, n_y, n_z, o_x, o_y, o_z, a_x, a_y, a_z) = (θ_1, θ_2, θ_3, θ_4, θ_5, θ_6)   (6)
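Equations (2)–(4) can be prototyped in a few lines (a sketch in Python rather than the MATLAB toolchain the paper uses; the D–H rows below are hypothetical placeholders, not the Quetzal values of Table 1):

```python
import math

def dh_transform(theta, d, a_prev, alpha_prev):
    """Link transform of Equations (2)-(3):
    T = R_Z(theta) * D_Z(d) * D_X(a_prev) * R_X(alpha_prev)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha_prev), math.sin(alpha_prev)
    return [[ ct, -st * ca,  st * sa, a_prev * ct],
            [ st,  ct * ca, -ct * sa, a_prev * st],
            [0.0,       sa,       ca,           d],
            [0.0,      0.0,      0.0,         1.0]]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(joint_angles, dh_rows):
    """Chain the per-joint transforms as in Equation (4)."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # 4x4 identity
    for theta, (d, a_prev, alpha_prev) in zip(joint_angles, dh_rows):
        T = matmul4(T, dh_transform(theta, d, a_prev, alpha_prev))
    return T

# Hypothetical (d, a_{i-1}, alpha_{i-1}) rows for a 3-joint chain:
dh_rows = [(0.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 10.0, 0.0)]
T = forward_kinematics([0.0, 0.0, 0.0], dh_rows)
print(T[0][3], T[1][3], T[2][3])  # 20.0 0.0 0.0
```

With all joint angles at zero and zero twists, each transform only translates along X by a_{i−1}, so the end effector lands on the x axis at the summed link length.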
Solving (4) gives the orientation and position of the end effector with respect to the base reference system, as shown in Equation (7), where the position vector is [p] = {42.215, 0, 20.2} and the orientation vector is [n o a] = {0, 0, 1, 0, −1, 0, 1, 0, 0}.

^0_6T =
[ 0    0    1    42.215 ]
[ 0   −1    0    0      ]
[ 1    0    0    20.2   ]
[ 0    0    0    1      ]   (7)
The graphic representation of the initial position of the Quetzal robotic manipulator is
shown in Figure 2 through a simulation carried out with the Robotics Toolbox for MATLAB
software [44].
Description θ1 θ2 θ3 θ4 θ5 θ6
Min 0 0 2π 0 2π 0
Max 2π π π/2 2π π/2 2π
Step π/12 π/24 π/24 π/12 π/24 π/12
Total steps 25 25 25 25 25 25
Appl. Sci. 2022, 12, 9512 7 of 20
Figure 3. Position vector [p] = (p_x, p_y, p_z) as a function of joints θ1, θ2, θ4, θ5, and θ6.
DataNorm = ((data − min) / range) × (h − l) + l   (8)
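Equation (8) is a standard min–max scaling; a sketch follows (illustrative Python; the target interval [l, h] = [−1, 1] is an assumption, as the paper does not state it at this point):

```python
def normalize(data, low=-1.0, high=1.0):
    """Min-max scaling of Equation (8):
    DataNorm = (data - min) / range * (h - l) + l."""
    lo, hi = min(data), max(data)
    rng = hi - lo
    return [(x - lo) / rng * (high - low) + low for x in data]

print(normalize([0.0, 5.0, 10.0]))  # [-1.0, 0.0, 1.0]
```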
Figure 4. Scatter matrix of the position dataset: (a) before applying the reduction filter; (b) after
applying the data reduction filter.
The graphic representation of the BPNN used in this work is shown in Figure 6, with 12 input variables and 6 output variables: the position vector [p] = {p_x, p_y, p_z} and orientation vector [n o a] = {n_x, n_y, n_z, o_x, o_y, o_z, a_x, a_y, a_z} as input and the vector of joint angles [θ] = {θ_1, θ_2, θ_3, θ_4, θ_5, θ_6} as output.
In this case, N represents the number of trials, θ_i^PREDICTED represents the set of joint values predicted by the BPANN, and θ_i^ORIGINAL represents the set of original joint values.
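A "hit" in the results reported later is a prediction whose error stays under 5%; the percentage can be computed along these lines (an illustrative sketch; the paper's exact error definition is not reproduced here):

```python
def hit_percentage(predicted, original, tolerance=0.05):
    """Percentage of predictions whose relative error with respect to the
    original joint value is at most `tolerance` (5% by default)."""
    hits = sum(1 for p, o in zip(predicted, original)
               if abs(p - o) <= tolerance * abs(o))
    return 100.0 * hits / len(predicted)

print(hit_percentage([1.02, 0.90, 2.0], [1.0, 1.0, 2.0]))  # two of three hits
```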
The design variables correspond to those that can be controlled by the user, such as
the number of neurons in the first layer, the number of neurons in the second layer, the
momentum constant, and the learning rate. By contrast, the noise variables are commonly
not directly controlled by the user in most cases, such as the initialization of synaptic
weights that are generally assigned randomly, the size of the training sets versus test
sets, and the random selection of training and test sets. According to the requirements
of the problem, the user can choose the factors related to variation in the system during
the optimization process. Four design variables and three noise variables were selected
because they were directly involved with the performance of the ANN, as described below
in Table 3 with their respective configuration levels.
In terms of variables, A is the number of neurons in the first hidden layer; B is the number of neurons in the second hidden layer; C is the momentum constant, which stabilizes the updating of each of the synaptic weights by taking into account the sign of the gradient; and D is the learning rate, which defines the weight given to the gradient when updating a synaptic weight; since the increase or decrease in a synaptic weight is related to the magnitude of this value, a poor choice may hinder the convergence of the MSE, causing instability and divergence [47]. X is the initial set of weights, Y is the size in proportions of the dataset, and Z is the random selection of the training and testing sets.
Once the variables and their respective levels were chosen, a suitable OA was chosen
to carry out the training sessions. An OA is described as L_r(s^c), where r represents the number of rows, c represents the number of columns, and s represents the number of
levels in each of the columns. In this experiment, the columns of the OA represent the
parameters to be optimized, and the rows represent the tests carried out by combining the
three proposed levels.
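The L9(3^4) array used for the design variables is commonly tabulated; the version below is the standard Taguchi L9 (assumed here, since the paper does not print the array), together with a check of the balance property described above:

```python
from itertools import combinations
from collections import Counter

# Standard Taguchi L9(3^4): 9 rows, 4 columns, 3 levels per column.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

# Balance: every level appears equally often in each column...
for col in range(4):
    counts = Counter(row[col] for row in L9)
    assert all(c == 3 for c in counts.values())

# ...and every pair of columns contains each of the 9 level pairs exactly once.
for c1, c2 in combinations(range(4), 2):
    pairs = Counter((row[c1], row[c2]) for row in L9)
    assert len(pairs) == 9 and all(c == 1 for c in pairs.values())
print("L9 is balanced")
```

This pairwise balance is what lets nine trials compare four three-level factors fairly.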
Table 4. Recording of responses in trials using crossed OAs with L9(3^4) and L4(3^2) configurations.
In this case, MSD is the mean squared deviation in the ANN performance. The
best topology is considered when more signals and less noise are obtained; therefore, a
high S/N ratio at this stage allows us to identify the best design values in the BPANN with
the help of statistical analysis with the JMP software.
3. Results
In this work, the RDANN methodology was used for the optimal selection of the
structural parameters in a feed-forward backpropagation network, known as BPNN, to
find the solution to the inverse kinematics in a Quetzal robot. For the BPNN training,
the “resilient backpropagation” training algorithm and a target mse = 1E−4 were selected. In accordance with the RDANN methodology, crossed OAs in the L9(3^4) and L4(3^2) configurations, corresponding to the design and noise variables, respectively, were implemented to determine the response to the tests during the 36 training sessions carried out.
The results obtained after applying the RDANN methodology are presented in the
next sections.
The values for the three levels established in each of the tests regarding the number of
neurons for the first hidden layer were A = 80, 100, and 120, respectively; for the number
of neurons in the second hidden layer, they were B = 30, 60, and 90, respectively; for the
constant momentum, they were C = 0.1, 0.2, and 0.3, respectively; and for the learning
rate, they were D = 0.01, 0.1, and 0.2, respectively.
The values for the initial sets of weights X were randomly determined at all three
levels. The values set in the proportions of the dataset for level 1 were Y = 70% training
and 30% testing; for level 2, they were 80% and 20%; and for level 3, they were 90% and 10%, respectively. Finally, the random selection of the training and testing set for level
1 was Z = Training1/Test1; for level 2, it was Training2/Test2; and for level 3, it was Training3/Test3.
L4(3^2) configurations, where the network architectures were trained and tested, obtaining
the results shown in Table 6.
For the analysis of the S/N ratio, an analysis of variance (ANOVA) was performed
using the statistical software program JMP. The S/N ratio and the mean value of the
MSE are two of the specific criteria for determining the appropriate levels in the variables
involved in network design, and their choice is determined through a series of validation
procedures carried out in the next stage, as described below.
Table 7. Best design values with normal desirable and maximized profiles.
Profile A B C D
Normal 80 30 0.01 0.1
Desirable 80 30 0.01 0.1
Maximized 100 30 0.2 0.2
For the choice of the best network profile obtained, three training sessions were carried
out for each of the three profiles in order to contrast them based on the size of the training
and test data and their generalization capacity, estimating the percentage of correct answers
in the prediction of the data, obtaining the results shown in Table 8.
The best topology corresponds to the maximized desirable profile, with the percentage
of obtained hits being 87.71% with a margin of error of less than 5% in the tests. Once the best
topology was chosen, the statistical tests of correlation and chi-square were performed, showing
the best and worst prediction of the network, as shown in Figures 8 and 9, respectively.
Figure 7. S/N analysis for the determination of the optimal parameters of the network: (a) normal
profile; (b) desirability profile; (c) maximized desirability profile.
Table 8. Comparison of density and percentage of hits in the three best profiles.
Figure 8. Trajectory of the manipulator with the best prediction and correlation test: (a) best predicted values; (b) correlation test.
Figure 9. Trajectory of the manipulator with the worst prediction and correlation test: (a) worst
predicted values; (b) correlation test.
To determine if the predicted data are statistically reliable, the cross-validation method
was used by splitting the training and testing datasets. The set was split into five subsets of
the same size, as shown in Figure 10. The validation subset in each training session was
used to measure the generalization error, in other words, the misclassification rate of the
model with data dissimilar from those previously applied during the training procedure.
The cross-validation procedure was implemented on the training and testing datasets, and
the average value of MSE and the standard deviation obtained were very close to those
obtained in the confirmation stage [48].
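The five-fold split described above can be sketched as follows (illustrative Python; the paper's experiments were run in MATLAB):

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k contiguous validation folds of
    (near-)equal size; the remainder of each split is used for training."""
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        stop = (i + 1) * fold_size if i < k - 1 else n_samples
        val = list(range(start, stop))
        train = [j for j in range(n_samples) if j < start or j >= stop]
        folds.append((train, val))
    return folds

folds = k_fold_indices(25, k=5)
print([len(val) for _, val in folds])  # [5, 5, 5, 5, 5]
```

Each sample serves as validation data exactly once, which is what makes the averaged MSE a meaningful estimate of the generalization error.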
Table 9 shows the results obtained in the cross-validation process, where it is observed
that the average training value was equal to 17.5099, the average percentage of hits con-
sidering an error of less than 5% was equal to 87.86%, and the average value of MSE was
equal to 17.5099, with standard deviations of 1.5848, 1.8974, and 0.0059, respectively.
In relation to the three profiles analyzed, the appropriate levels chosen for the structural parameters of the best network topology were those corresponding to the maximized desirable profile: 100 and 30 neurons in the two hidden layers, respectively, a momentum of 0.2, and a learning rate of 0.2.
Figure 11 shows the layered surface diagram of the neural network used in this work.
The training was performed using MATLAB software. The ANN was composed of an
input layer with 100 neurons, a hidden layer with 30 neurons, and an output layer with
6 neurons. All three layers used the activation function. The training algorithm used to
adjust the weighting of the synaptic weights was resilient backpropagation.
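Resilient backpropagation updates each weight using only the sign of its gradient, with a per-weight step size that adapts over iterations. A minimal sketch of one update (the iRprop− variant is assumed here; the paper relies on MATLAB's implementation):

```python
def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    """One resilient-backpropagation update for a single weight.
    Only the sign of the gradient is used; its magnitude adapts `step`."""
    if grad * prev_grad > 0:        # same direction: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:      # sign flip: overshoot, back off
        step = max(step * eta_minus, step_min)
        grad = 0.0                  # iRprop-: skip this update's gradient
    if grad > 0:
        w -= step
    elif grad < 0:
        w += step
    return w, grad, step

w, g, s = rprop_step(w=1.0, grad=0.4, prev_grad=0.3, step=0.1)
print(round(w, 2), round(s, 2))  # 0.88 0.12
```

Because the update ignores the gradient's magnitude, the method is robust to the vanishing or exploding gradients that can destabilize plain backpropagation.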
The greatest error observed was in trajectory number 236, with a value of 7.7% com-
pared with the calculated one, while for trajectory number 6, the error was 1.1% compared
with the calculated one. A mean error of 3.5% was obtained for the implementation of the
10 physically realized trajectories using the low-cost (approximately USD 1500) 3D-printed
Quetzal manipulator.
The time spent during the network design process was significantly reduced compared with conventional trial-and-error methods. In methods guided mainly by the researcher's previous experience, the design and modeling of the network can take from several days to a few weeks or even months to test the different ANN architectures, which can lead to a relatively poor design. The use of the RDANN
methodology in this study allowed the network design to be carried out in less time, with
approximately 13 h of training, due to the orthogonal arrangement corresponding to the 36
training sessions performed using a conventional AMD Ryzen 7 5700 series processor with
an integrated graphics card.
Depending on the complexity of the problem, the use of this methodology allows
handling times ranging from minutes to hours to determine the best robustness parameters
in the network architecture. Therefore, it is possible to streamline the process and reduce
efforts, with a high degree of precision in network performance. The use of the RDANN
methodology allowed the analysis of the interaction between the values in the design
variables that were involved, in order to consider their effects on network performance,
thus allowing a reduction in the time and effort spent in the modeling stage and speeding
up the selection and interpretation of the optimal values in the structural parameters of the
network. The quality of the data in the training sets, without a doubt, can significantly help
to increase the performance, generalization capacity, and precision of the results obtained.
Although the proposed method was implemented and tested in a low-cost manipulator in
this study, in future work, we plan to implement it in an industrial-type robot controller.
The implementation of the proposed method in parallel robotic manipulators, where the
solution of the kinematics is more complex, is also considered.
References
1. Khaw, J.F.C.; Lim, B.S.; Lim, L.E.N. Optimal Design of Neural Networks Using the Taguchi Method. Neurocomputing 1995, 7,
225–245. [CrossRef]
2. Rankovic, N.; Rankovic, D.; Ivanovic, M.; Lazic, L. A New Approach to Software Effort Estimation Using Different Artificial
Neural Network Architectures and Taguchi Orthogonal Arrays. IEEE Access 2021, 9, 26926–26936. [CrossRef]
3. Snoek, J.; Larochelle, H.; Adams, R.P. Practical Bayesian Optimization of Machine Learning Algorithms. In Advances in Neural
Information Processing Systems 25 (NIPS 2012); Pereira, F., Burges, C.J., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.:
Red Hook, NY, USA, 2012; Volume 25.
4. Bergstra, J.; Bengio, Y. Random Search for Hyper-Parameter Optimization. J. Mach. Learn. Res. 2012, 13, 281–305.
5. Kosarac, A.; Mladjenovic, C.; Zeljkovic, M.; Tabakovic, S.; Knezev, M. Neural-Network-Based Approaches for Optimization of
Machining Parameters Using Small Dataset. Materials 2022, 15, 700. [CrossRef] [PubMed]
6. Teslyuk, V.; Kazarian, A.; Kryvinska, N.; Tsmots, I. Optimal Artificial Neural Network Type Selection Method for Usage in Smart
House Systems. Sensors 2021, 21, 47. [CrossRef]
7. Ortiz-Rodríguez, J.M.; Martínez-Blanco, M.d.R.; Viramontes, J.M.C.; Vega-Carrillo, H.R. Robust Design of Artificial Neural
Networks Methodology in Neutron Spectrometry. In Artificial Neural Networks; Suzuki, K., Ed.; IntechOpen: Rijeka, Croatia, 2013.
8. Zajmi, L.; Ahmed, F.Y.H.; Jaharadak, A.A. Concepts, Methods, and Performances of Particle Swarm Optimization, Backpropaga-
tion, and Neural Networks. Appl. Comput. Intell. Soft Comput. 2018, 2018, 9547212. [CrossRef]
9. Sun, W.; Huang, C. A Carbon Price Prediction Model Based on Secondary Decomposition Algorithm and Optimized Back
Propagation Neural Network. J. Clean. Prod. 2020, 243, 118671. [CrossRef]
10. Huang, D.; Wu, Z. Forecasting Outpatient Visits Using Empirical Mode Decomposition Coupled with Back-Propagation Artificial
Neural Networks Optimized by Particle Swarm Optimization. PLoS ONE 2017, 12, e0172539. [CrossRef]
11. Gaxiola, F.; Melin, P.; Valdez, F.; Castro, J.R.; Castillo, O. Optimization of Type-2 Fuzzy Weights in Backpropagation Learning for
Neural Networks Using GAs and PSO. Appl. Soft Comput. 2016, 38, 860–871. [CrossRef]
12. Huang, H.-X.; Li, J.-C.; Xiao, C.-L. A Proposed Iteration Optimization Approach Integrating Backpropagation Neural Network
with Genetic Algorithm. Expert Syst. Appl. 2015, 42, 146–155. [CrossRef]
13. Amirsadri, S.; Mousavirad, S.J.; Ebrahimpour-Komleh, H. A Levy Flight-Based Grey Wolf Optimizer Combined with Back-
Propagation Algorithm for Neural Network Training. Neural Comput. Appl. 2018, 30, 3707–3720. [CrossRef]
14. Li, S.; Zhang, Y.; Jin, L. Kinematic Control of Redundant Manipulators Using Neural Networks. IEEE Trans. Neural Netw. Learn.
Syst. 2017, 28, 2243–2254. [CrossRef]
15. Gao, R. Inverse Kinematics Solution of Robotics Based on Neural Network Algorithms. J. Ambient Intell. Humaniz. Comput. 2020,
11, 6199–6209. [CrossRef]
16. Kramar, V.; Kramar, O.; Kabanov, A. An Artificial Neural Network Approach for Solving Inverse Kinematics Problem for an
Anthropomorphic Manipulator of Robot SAR-401. Machines 2022, 10, 241. [CrossRef]
17. Šegota, S.B.; Anđelić, N.; Mrzljak, V.; Lorencin, I.; Kuric, I.; Car, Z. Utilization of Multilayer Perceptron for Determining the
Inverse Kinematics of an Industrial Robotic Manipulator. Int. J. Adv. Robot. Syst. 2021, 18, 1729881420925283. [CrossRef]
18. Bai, Y.; Luo, M.; Pang, F. An Algorithm for Solving Robot Inverse Kinematics Based on FOA Optimized BP Neural Network. Appl.
Sci. 2021, 11, 7129. [CrossRef]
19. Jiang, G.; Luo, M.; Bai, K.; Chen, S. A Precise Positioning Method for a Puncture Robot Based on a PSO-Optimized BP Neural
Network Algorithm. Appl. Sci. 2017, 7, 969. [CrossRef]
20. Malik, A.; Lischuk, Y.; Henderson, T.; Prazenica, R. A Deep Reinforcement-Learning Approach for Inverse Kinematics Solution of
a High Degree of Freedom Robotic Manipulator. Robotics 2022, 11, 44. [CrossRef]
21. Malik, A.; Henderson, T.; Prazenica, R. Multi-Objective Swarm Intelligence Trajectory Generation for a 7 Degree of Freedom
Robotic Manipulator. Robotics 2021, 10, 127. [CrossRef]
22. Hassan, A.A.; El-Habrouk, M.; Deghedie, S. Inverse Kinematics of Redundant Manipulators Formulated as Quadratic Program-
ming Optimization Problem Solved Using Recurrent Neural Networks: A Review. Robotica 2020, 38, 1495–1512. [CrossRef]
23. Zhang, L.; Xiao, N. A Novel Artificial Bee Colony Algorithm for Inverse Kinematics Calculation of 7-DOF Serial Manipulators.
Soft Comput. 2019, 23, 3269–3277. [CrossRef]
24. Bourbonnais, F.; Bigras, P.; Bonev, I.A. Minimum-Time Trajectory Planning and Control of a Pick-and-Place Five-Bar Parallel
Robot. IEEE/ASME Trans. Mechatron. 2015, 20, 740–749. [CrossRef]
25. Arboretti, R.; Ceccato, R.; Pegoraro, L.; Salmaso, L. Design of Experiments and Machine Learning for Product Innovation: A
Systematic Literature Review. Qual. Reliab. Eng. Int. 2022, 38, 1131–1156. [CrossRef]
26. Ou, Z.; Zhang, M.; Qin, H. Tripling of Fractional Factorial Designs. J. Stat. Plan. Inference 2019, 199, 151–159. [CrossRef]
27. Rahman, M.A.; Muniyandi, R.C.; Albashish, D.; Rahman, M.M.; Usman, O.L. Artificial Neural Network with Taguchi Method for
Robust Classification Model to Improve Classification Accuracy of Breast Cancer. PeerJ Comput. Sci. 2021, 7, e344. [CrossRef]
[PubMed]
28. Manni, A.; Saviano, G.; Bonelli, M.G. Optimization of the ANNs Predictive Capability Using the Taguchi Approach: A Case
Study. Mathematics 2021, 9, 766. [CrossRef]
29. Zhang, Y.; Guo, D.; Li, Z. Common Nature of Learning Between Back-Propagation and Hopfield-Type Neural Networks for
Generalized Matrix Inversion With Simplified Models. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 579–592. [CrossRef]
[PubMed]
30. Alebooyeh, M.; Urbanic, R.J. Neural Network Model for Identifying Workspace, Forward and Inverse Kinematics of the 7-DOF
YuMi 14000 ABB Collaborative Robot. IFAC-PapersOnLine 2019, 52, 176–181. [CrossRef]
31. Köker, R. A Genetic Algorithm Approach to a Neural-Network-Based Inverse Kinematics Solution of Robotic Manipulators Based
on Error Minimization. Inf. Sci. 2013, 222, 528–543. [CrossRef]
32. Volinski, A.; Zaidel, Y.; Shalumov, A.; DeWolf, T.; Supic, L.; Tsur, E.E. Data-Driven Artificial and Spiking Neural Networks for
Inverse Kinematics in Neurorobotics. Patterns 2022, 3, 100391. [CrossRef]
33. Fang, G.; Tian, Y.; Yang, Z.-X.; Geraedts, J.M.P.; Wang, C.C.L. Efficient Jacobian-Based Inverse Kinematics With Sim-to-Real
Transfer of Soft Robots by Learning. IEEE/ASME Trans. Mechatron. 2022, 1–11, in press. [CrossRef]
34. Phuoc, L.M.; Martinet, P.; Lee, S.; Kim, H. Damped Least Square Based Genetic Algorithm with Gaussian Distribution of
Damping Factor for Singularity-Robust Inverse Kinematics. J. Mech. Sci. Technol. 2008, 22, 1330–1338. [CrossRef]
35. Köker, R.; Çakar, T. A Neuro-Genetic-Simulated Annealing Approach to the Inverse Kinematics Solution of Robots: A Simulation
Based Study. Eng. Comput. 2016, 32, 553–565. [CrossRef]
36. Qie, X.; Kang, C.; Zong, G.; Chen, S. Trajectory Planning and Simulation Study of Redundant Robotic Arm for Upper Limb
Rehabilitation Based on Back Propagation Neural Network and Genetic Algorithm. Sensors 2022, 22, 4071. [CrossRef]
37. Liu, R.; Liu, C. Human Motion Prediction Using Adaptable Recurrent Neural Networks and Inverse Kinematics. IEEE Control
Syst. Lett. 2021, 5, 1651–1656. [CrossRef]
38. Reinhart, R.F.; Steil, J.J. Reaching Movement Generation with a Recurrent Neural Network Based on Learning Inverse Kinematics
for the Humanoid Robot ICub. In Proceedings of the 2009 9th IEEE-RAS International Conference on Humanoid Robots, Paris,
France, 7–10 December 2009; pp. 323–330.
39. Yiyang, L.; Xi, J.; Hongfei, B.; Zhining, W.; Liangliang, S. A General Robot Inverse Kinematics Solution Method Based on
Improved PSO Algorithm. IEEE Access 2021, 9, 32341–32350. [CrossRef]
40. Ghosh, A.; Singh, O.; Ray, A.K. Inverse Kinematic Solution of a 7 DOF Robotic Manipulator Using Boundary Restricted Particle
Swarm Optimization. IFAC-PapersOnLine 2022, 55, 101–105. [CrossRef]
41. Wang, D.; Wu, J.; Wang, L.; Liu, Y. A Postprocessing Strategy of a 3-DOF Parallel Tool Head Based on Velocity Control and Coarse
Interpolation. IEEE Trans. Ind. Electron. 2017, 65, 6333–6342.
42. Larrañaga, A. 3D Printable Robotic Arm. Available online: [Link] (accessed on 5 July 2022).
43. Denavit, J.; Hartenberg, R.S. A Kinematic Notation for Lower-Pair Mechanisms Based on Matrices. J. Appl. Mech. 1955, 22,
215–221. [CrossRef]
44. Corke, P.I.; Khatib, O. Robotics, Vision and Control: Fundamental Algorithms in MATLAB; Springer: Berlin/Heidelberg, Germany,
2011; Volume 73, ISBN 3-642-20144-X.
45. Mostafa, S.A.; Ahmad, I.A. Remainder Linear Systematic Sampling with Multiple Random Starts. J. Stat. Theory Pract. 2016, 10,
824–851. [CrossRef]
46. Martínez-Blanco, M.d.R.; Ibarra-Pérez, T.; Olivera-Domingo, F.; Ortiz-Rodríguez, J.M. Optimization of Training Data Set Based on
Linear Systematic Sampling to Solve the Inverse Kinematics of 6 DOF Robotic Arm with Artificial Neural Networks. In Frontiers
of Data and Knowledge Management for Convergence of ICT, Healthcare, and Telecommunication Services; Paul, S., Paiva, S., Fu, B., Eds.;
Springer International Publishing: Cham, Switzerland, 2022; pp. 85–112. ISBN 978-3-030-77558-2.
47. Gurney, K. An Introduction to Neural Networks, 1st ed.; CRC Press: Boca Raton, FL, USA, 1997; ISBN 1-315-27357-8.
48. Paneiro, G.; Rafael, M. Artificial Neural Network with a Cross-Validation Approach to Blast-Induced Ground Vibration Propaga-
tion Modeling. Undergr. Space 2021, 6, 281–289. [CrossRef]
49. Dereli, S.; Köker, R. IW-PSO Approach to the Inverse Kinematics Problem Solution of a 7-DOF Serial Robot Manipulator. Sigma J.
Eng. Nat. Sci. 2018, 36, 77–85.