Table 1
Properties of fluid and solid spheres tested and experimental results of terminal falling velocities.

K (Pa·s^n)  n (–)    dp (m)   ρs (kg/m³)  ρ (kg/m³)  V (m/s)  Re (–)      CD (–)

Kelessidis (2003)
0.2648      0.7529   0.0015   2260        1000       0.0119   0.1125      174.572
0.2648      0.7529   0.0021   2727        1000       0.0361   0.5678      35.534
0.2648      0.7529   0.0023   2449        1000       0.0409   0.7234      26.059
0.2648      0.7529   0.0030   2609        1000       0.0664   1.6292      14.463
0.2648      0.7529   0.0035   2572        1000       0.0802   2.2735      11.029
0.0353      0.8724   0.0015   2260        1000       0.0440   2.8774      12.769
0.0165      0.9198   0.0015   2260        1000       0.0597   7.3105      6.936
0.0353      0.8724   0.0021   2727        1000       0.1008   9.6228      4.558
0.0353      0.8724   0.0023   2449        1000       0.1119   11.9690     3.481
0.0165      0.9198   0.0021   2727        1000       0.1275   22.1153     2.849
0.0353      0.8724   0.0030   2609        1000       0.1592   22.6541     2.516
0.0165      0.9198   0.0023   2449        1000       0.1403   27.2608     2.215
0.0353      0.8724   0.0035   2572        1000       0.1825   29.5950     2.130
0.0165      0.9198   0.0030   2609        1000       0.1950   50.1299     1.677
0.0165      0.9198   0.0035   2572        1000       0.2196   64.2225     1.471

Miura et al. (2001)
0.5940      0.5610   0.0030   2500        1000       0.0314   0.4446      59.698
0.5940      0.5610   0.0050   2500        1000       0.0881   2.6131      12.639
0.5940      0.5610   0.0070   2500        1000       0.1594   7.4079      5.405
0.1690      0.6250   0.0030   2500        999        0.1213   8.6137      4.007
0.1770      0.6020   0.0050   2500        999        0.1972   24.0244     2.527
0.1690      0.6250   0.0050   2500        999        0.2524   32.4644     1.542
0.0675      0.6290   0.0030   2500        1000       0.2049   43.6445     1.402
0.1770      0.6020   0.0070   2500        999        0.3031   53.6535     1.497
0.1690      0.6250   0.0070   2500        999        0.3734   68.6443     0.987
0.0299      0.7190   0.0030   2500        997        0.2673   94.4191     0.828
0.0675      0.6290   0.0050   2500        1000       0.4051   153.2203    0.598
0.0166      0.7510   0.0030   2500        998        0.3054   174.1574    0.633
0.0299      0.7190   0.0050   2500        997        0.4391   257.4585    0.511
0.0675      0.6290   0.0070   2500        1000       0.5235   269.0901    0.501
0.0299      0.7190   0.0070   2500        997        0.5196   406.8381    0.511
0.0166      0.7510   0.0050   2500        998        0.4437   407.5338    0.500
0.0166      0.7510   0.0070   2500        998        0.6035   770.4722    0.378

Pinelli and Magelli (2001)
0.0521      0.7300   0.0008   2470        1000       0.0306   1.2458      16.222
0.0471      0.7300   0.0008   2470        1000       0.0392   1.8888      9.885
0.0521      0.7300   0.0011   2900        1000       0.0718   4.7792      5.447
0.0521      0.7300   0.0011   2900        1000       0.0818   5.6399      4.197
0.0521      0.7300   0.0030   1470        1000       0.0734   9.9022      3.366
0.0466      0.7300   0.0030   1470        1000       0.0887   14.0750     2.305
0.0521      0.7300   0.0059   1170        1000       0.0829   19.2645     1.922
0.0462      0.7300   0.0059   1170        1000       0.1013   28.0121     1.287

Ford and Oyeneyin (1994)
9.1673      0.1714   0.0050   7949        1014.406   0.1200   0.9242      31.047
19.7360     0.0623   0.0070   7744        1000       0.1900   1.4891      17.105
19.7360     0.0623   0.0100   7796        1000       0.4100   6.7582      5.288
19.7360     0.0623   0.0120   7730        1000       0.4200   7.1622      5.988
4.9100      0.2075   0.0050   7949        1000       0.3200   8.7991      4.438
9.1673      0.1714   0.0070   7744        1014.406   0.4400   10.5351     3.137
11.4890     0.0614   0.0050   7949        1032.413   0.4000   10.9860     2.738
16.1350     0.1580   0.0100   7796        1044.418   0.6100   12.5801     2.272
4.0029      0.2867   0.0120   7730        1026.411   0.3800   13.7496     7.099
11.2000     0.1113   0.0100   7796        1034.814   0.5800   19.7802     2.540
16.1350     0.1580   0.0120   7730        1044.418   0.8000   21.3357     1.570
4.0029      0.2867   0.0100   7796        1026.411   0.5800   26.9295     2.564
11.2000     0.1113   0.0120   7730        1034.814   0.7100   29.5753     2.015
9.1673      0.1714   0.0100   7796        1014.406   0.7500   29.6966     1.555
4.9100      0.2075   0.0070   7744        1000.000   0.6600   34.5389     1.418
6.5705      0.0796   0.0050   7949        1020.408   0.6100   39.4240     1.193
11.4890     0.0614   0.0070   7744        1032.413   0.7900   41.9566     0.954
9.1673      0.1714   0.0120   7730        1014.406   1.0000   51.8492     1.039
11.4890     0.0614   0.0100   7796        1032.413   1.0800   78.6261     0.735
4.9100      0.2075   0.0100   7796        1000.000   1.0600   86.9519     0.791
6.5705      0.0796   0.0070   7744        1020.408   0.9100   87.2948     0.729
11.4890     0.0614   0.0120   7730        1032.413   1.1800   94.4025     0.731
6.5705      0.0796   0.0100   7796        1020.408   0.9800   103.5443    0.904
4.9100      0.2075   0.0120   7730        1000.000   1.2100   114.4831    0.721
6.5705      0.0796   0.0120   7730        1020.408   1.2400   165.0765    0.671

Kelessidis and Mpandelis (2004)
0.0010      1        0.0032   2506        995.629    0.3692   1161.5720   0.460
0.0010      1        0.0022   2668        997.066    0.2935   655.5112    0.570
0.0010      1        0.0012   2314        1003.903   0.1763   215.9254    0.670
0.0010      1        0.0026   11444       989.056    1.0660   2772.8972   0.320
0.1350      1        0.0032   2506        1227.110   0.0420   1.2064      24.420
0.1350      1        0.0022   2668        1238.396   0.0232   0.4767      62.840
0.1350      1        0.0012   2314        1227.003   0.0072   0.0798      272.700
0.1350      1        0.0026   11444       1226.688   0.1848   4.4163      8.390
0.1350      1        0.0031   7859        1226.493   0.1656   4.6489      7.970
0.1152      0.7449   0.0032   2506        999.591    0.1282   9.0398      3.790
0.1152      0.7449   0.0022   2668        999.944    0.0835   4.0860      7.010
0.1152      0.7449   0.0012   2314        999.985    0.0321   0.7828      20.350
0.1152      0.7449   0.0026   11444       998.131    0.4657   39.7432     1.660
0.0865      0.8610   0.0032   2506        999.592    0.1031   6.1128      5.860
0.0865      0.8610   0.0022   2668        1000.211   0.0637   2.6282      12.040
0.0865      0.8610   0.0012   2314        999.984    0.0225   0.4760      41.420
0.0865      0.8610   0.0026   11444       999.089    0.3855   23.4289     2.420
0.0865      0.8610   0.0031   7859        999.826    0.3399   23.3385     2.400
0.0849      0.9099   0.0032   2506        999.835    0.0820   4.0910      9.260
0.0849      0.9099   0.0022   2668        999.922    0.0493   1.7180      20.110
0.0849      0.9099   0.0012   2314        1000.004   0.0164   0.2978      77.960
0.0849      0.9099   0.0026   11444       999.267    0.3286   15.7112     3.330
0.0849      0.9099   0.0031   7859        999.073    0.2828   15.4439     3.470
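The Re and CD columns of Table 1 are the generalized particle Reynolds number and the drag coefficient of a sphere settling in a power-law fluid. As a minimal MATLAB sketch, the standard power-law definitions below reproduce the tabulated values (they are not quoted from the paper's own equations):

    % Generalized Reynolds number and drag coefficient for a sphere
    % settling in a power-law fluid, checked against the first
    % Kelessidis (2003) row of Table 1.
    K = 0.2648; n = 0.7529; dp = 0.0015;   % consistency index, flow index, diameter
    rhos = 2260; rho = 1000; V = 0.0119;   % solid density, fluid density, velocity
    g  = 9.81;                             % gravitational acceleration (m/s^2)
    Re = rho*V^(2 - n)*dp^n/K              % -> 0.1125, as in Table 1
    CD = 4*g*dp*(rhos - rho)/(3*rho*V^2)   % -> 174.57, as in Table 1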
represents the percentage of the initial uncertainty explained by the model. The best fit between measured and predicted values would have a root mean square error of zero and a coefficient of determination equal to one.

The best ANN model selected in this study has one input layer with six inputs (ρ, K, n, g, dp, ρs) and one hidden layer with 12 neurons. Fletcher and Goss (1993) suggested that the appropriate number of nodes in a hidden layer ranges from 2√k + m to 2k + 1, where k is the number of input nodes and m is the number of output nodes. In this study k = 6 and m = 1, and thus the appropriate number of hidden layer neurons was chosen as 12. Fletcher and Goss (1993) further suggested that each neuron has a bias, is fully connected to all inputs, and utilizes a sigmoid hyperbolic tangent (tansig) activation function (Fig. 3). The output layer has one neuron (V) with a linear activation function and no bias. The training function of this network is the automated Bayesian regularization algorithm (trainbr). Fig. 3a shows the back-propagation neural network architecture. In Fig. 3b, Layer 1 is the hidden layer and Layer 2 is the output layer. Fig. 3c shows the detailed structure of the hidden layer.
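A minimal MATLAB sketch of this architecture, assuming the Neural Network Toolbox (the function and property names below are the toolbox's, not quoted from the paper):

    % Feed-forward back-propagation network: one hidden layer with
    % 12 neurons, trained with Bayesian regularization (trainbr).
    net = feedforwardnet(12, 'trainbr');
    net.layers{1}.transferFcn = 'tansig';   % hyperbolic tangent in the hidden layer
    net.layers{2}.transferFcn = 'purelin';  % linear activation in the output layer
    net.biasConnect(2) = 0;                 % output neuron without a bias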
4. Results and discussion
Using the approach described above, the predictions were made in MATLAB software. The matrix of inputs in the training step is a k × N matrix, where k is the number of network inputs and N is the number of samples used in the training step; in this paper we used six input variables (ρ, K, n, g, dp, ρs) and 69 samples to train the network, thus k = 6 and N = 69. The matrix of outputs in the training step is an m × N matrix, where m is the number of outputs; in this paper m = 1. The matrix of inputs for the testing phase is k × N = 6 × 19 and the output matrix is m × N = 1 × 19.
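MATLAB's toolbox stores samples column-wise, so these shapes translate directly into the training and simulation calls; a sketch with placeholder data (the rand calls stand in for the measured values of Table 1):

    % Training: 6 x 69 inputs (rho, K, n, g, dp, rhos per column), 1 x 69 targets.
    Xtrain = rand(6, 69);  Vtrain = rand(1, 69);   % placeholders for Table 1 data
    net = feedforwardnet(12, 'trainbr');
    net = train(net, Xtrain, Vtrain);
    % Testing: 6 x 19 inputs give a 1 x 19 vector of predicted velocities.
    Xtest = rand(6, 19);
    Vpred = net(Xtest);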
Comparison of the results of the proposed ANN model with the two other models that directly predict settling velocities in power-law fluids (Kelessidis, 2004; Chhabra and Peri, 1991) was performed using the coefficient of determination (R²) and RMS values. The latter parameters are affected by the number of datasets (N) and the number of parameters in the model.
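A sketch of these two statistics in their standard definitions (the paper's own expressions are not reproduced here; placeholder data again):

    % Root mean square error and coefficient of determination between
    % measured and predicted settling velocities.
    Vmeas = rand(1, 19);  Vpred = Vmeas + 0.01*randn(1, 19);   % placeholders
    err = Vpred - Vmeas;
    RMS = sqrt(mean(err.^2));                              % in m/s
    R2  = 1 - sum(err.^2)/sum((Vmeas - mean(Vmeas)).^2);   % 1 for a perfect fit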
In Fig. 4, the predicted velocities are compared with the measured data for the training dataset of 69 points. The coefficient of determination for the linear fit (y = ax) is 0.996 with an RMS value of 0.021 m/s, giving an almost perfect fit, something of course expected since it was this data set that was used for training the network. These very good fitting values indicate that the training was done very well.
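For reference, the slope of such a zero-intercept linear fit (y = ax) and the corresponding RMS value can be obtained in MATLAB as follows (a sketch with placeholder data):

    % Least-squares slope of y = a*x through the origin, as used for
    % the parity plot of Fig. 4, plus the RMS of the residuals.
    x = rand(69, 1);  y = x + 0.02*randn(69, 1);   % measured vs. predicted (placeholders)
    a = x \ y;                                     % closed form: sum(x.*y)/sum(x.^2)
    RMS = sqrt(mean((y - a*x).^2));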