Multimedia Tools and Applications (2023) 82:30107–30123
https://doi.org/10.1007/s11042-023-15044-2
Automatic detection of hypertensive retinopathy
using improved fuzzy clustering and novel loss function
Usharani Bhimavarapu 1
Received: 29 April 2022 / Revised: 4 August 2022 / Accepted: 27 February 2023 /
Published online: 15 March 2023
# The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023
Abstract
Hypertensive retinopathy is a retinal disease caused by hypertension that can lead to
vision loss and blindness. Ophthalmologists diagnose it using clinical methods that require
considerable time and money, whereas a computer-aided diagnostic system can detect and
grade hypertensive retinopathy quickly and at lower cost. This paper introduces an
automated system that identifies hypertensive retinopathy at an early stage of hypertension.
Retinal image segmentation efficiently detects eye ailments that are signs of major eye
diseases caused by hypertension, diabetes, and age-related macular disorders. This study
uses fuzzy logic techniques in digital image processing and concentrates mainly on the
early detection of hypertensive retinopathy using a nature-inspired optimization algorithm.
Improved Fuzzy C-Means clustering accurately identifies the lesion regions in hypertensive
retinopathy. The present model is tested on a publicly available online dataset, and its
outcomes are compared with distinguished published methods. This study computes the
segmented output on the optimized features using the improved loss function in the
Resnet-152 model. The proposed approach improves performance and surpasses the
existing state-of-the-art models.
Keywords Hypertension . Fuzzy logic . Fuzzy clustering . Resnet-152
1 Introduction
The retina is a vital part of the human eye that captures images and sends them to the brain.
The retina contains two types of blood vessels, the veins and the arteries. HR (hypertensive
retinopathy) is caused by high blood pressure, whereas DR (diabetic retinopathy) is caused by
diabetes. HR can be detected by examining retinal fundus images.
* Usharani Bhimavarapu
[email protected]
1
Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation,
Vaddeswaram, Andhra Pradesh, India
Early detection of HR can prevent risks such as cardiovascular disease, stroke, and even
mortality [5]. The major symptoms of HR-related eye disease are arterial narrowing, retinal
bleeding, microaneurysms, hemorrhages, cotton wool spots, etc. [15].
Ophthalmologists examine the retina to detect HR, but manual examination is costly and time-
consuming. Computer-aided diagnosis automates the detection and classification of HR at much
lower cost and saves the time of both ophthalmologists and patients. A normal fundus image and
a fundus image with HR symptoms are exhibited in Fig. 1.
Malignant HR, or papilledema, is the severe stage of HR in which optic disk swelling
occurs. High blood pressure raises intracranial pressure, which causes papilledema. In this
stage, abnormalities may appear on the optic disk: hemorrhages, blurring of the optic disk
boundaries, and elevation of the optic disk due to severe swelling. The different stages of HR
are shown in Fig. 2.
Blood vessels are classified as arteries and veins; arteries carry oxygenated blood to the
organs of the body, and veins carry deoxygenated blood back. Due to hypertension,
arteries become narrowed and thickened, and HR can be found by computing the A/V ratio.
The A/V ratio is assessed for large vessels; small vessels with a diameter of less than two are
not considered for the A/V ratio.
1.1 Motivation
Hypertensive retinopathy damages various organs in the human body. Hypertension
damages the tissues at the back of the eye, leading to double vision, dim vision, or vision
loss. Chronic hypertension keeps damaging the retina and causes few or no symptoms until
the disease is advanced. Early detection of retinal abnormalities lessens the risk and the
burden on patients and ophthalmologists. Automated systems enhance the outcome and
minimize reliance on human perception.
Deep learning [3] recognizes and separates HR features such as arteriovenous nicking,
venous beading, arteriolar narrowing, silver wiring, and copper wiring. Using fundus
images increases the convenience and efficiency of automating hypertensive retinopathy
detection.
1.2 Research contribution
The major contributions of the paper are given as follows:
Fig. 1 Fundus Images(a) Normal (b) HR Symptoms [Source: DRIVE]
Fig. 2 HR Stages(a) Mild HR (b) Moderate HR (c) Malignant HR [Source: DRIVE]
1. The implementation of a new loss function for the Resnet-152 model.
2. Comparison of the performance of the proposed loss function with other loss functions
on a publicly available online dataset.
3. Evaluation and demonstration of the performance of the loss functions with the enhanced
Resnet-152 model.
4. Improvement of the current Resnet-152 version with a distinct loss function, which
provides the best outcomes.
2 Related work
Ortiz et al. [35] implemented image and signal analysis using Gabor wavelet gradients,
morphological operations, and Niblack thresholding to pre-process the retinal image, and
identified the region of interest through local intensity variation, morphological
reconstruction, and a local threshold method [43]. The authors then classified the arteries
and veins by calculating the mean intensity of all pixels in the image and, to count the blood
vessels, used the segmented retinal image to calculate the slope and perpendicular line of the
blood vessels. Agurto et al. [4] automatically classified hypertensive retinopathy based on
features that distinguish HR, such as silver/copper wiring, tortuosity, AVR, and vessel
anomalies. They applied histogram stretching and entropy techniques to pre-process the
retinal image, implemented automatic and fast optic disc localization to determine the AVR,
silver/copper wiring, tortuosity, and textural features, and then segmented the blood vessels
by applying multiscale linear structures, a trimming algorithm, and local entropy
thresholding. They extracted features from the segmented vessels to categorize each vessel
as an artery or a vein, and calculated the artery-vein ratio using the central retinal artery
equivalent and the central retinal vein equivalent techniques.
Narasimhan et al. [31] proposed an algorithm to segment the blood vessels from the pre-
processed image, classified the vessels as arteries and veins by extracting moment-based
features, and measured the AVR by estimating the blood-vessel width on a private dataset
collected from a local hospital (Deepam Eye Hospital); they obtained an AVR of 0.24 for HR
patients. Georgios et al. [25] developed a framework to detect and measure the blood vessels,
proposed an algorithm to automatically segment the retinal blood vessels, and implemented
skeletonization techniques to extract the vessels from the segmented fundus image; the
technique achieved an accuracy of 93.18%, sensitivity of 71.89%, and specificity of 96.56%
on the DRIVE and STARE datasets. Irshad et al. [19] proposed an automated system to
calculate the AVR. The authors used the local mean and local variance to eliminate noise and
irrelevant pixels from the fundus images during pre-processing, applied a 2D Gabor wavelet
filter to segment the vessels and the optic disk, and extracted profile-based and centerline
features; they calculated the width of the blood vessels and used this measure for the AVR on
the VICAVR and INSPIRE-AVR datasets. Triwijoyo et al. [6] implemented a convolutional
neural network to classify HR and achieved an accuracy of 98.6% on the DRIVE dataset.
Faheem et al. [16] detected HR by automatically segmenting the blood vessels using matched
filtering and classifying the vessels as arteries and veins with a neural network; they
calculated the vessel width to determine the AVR and achieved an accuracy of 93.9% on the
DRIVE dataset. Akbar et al. [42] used an automatic system to detect HR from the AVR and
papilledema signs in fundus images. They extracted texture-based, color-based, vascular-based,
and disc-obscuration-based features to detect papilledema and implemented an SVM with a
radial basis function to classify papilledema on the VICAVR, STARE, AVRDB, INSPIRE-AVR,
and a private database. Abbasi et al. [2] implemented a Gabor wavelet to enhance the vessels,
segmentation using multilayer thresholding, and blood-vessel classification using an artificial
neural network, naive Bayes, support vector machine, and decision tree, and used the
Parr-Hubbard formulas to compute the AVR. Kiruthika et al. [23] proposed an automatic system
to classify the AVR: the blood vessels were segmented by applying the B-COSFIRE filter, the
optic disk was detected using Fuzzy C-Means, and statistical, texture, and color-based features
were extracted from the skeletonized segmented vessels. They implemented a convolutional
neural network and an SVM to classify arteries and veins, achieved an accuracy of 93.80%, and
used the CRAE and CRVE parameters to calculate the AVR on the DRIVE dataset. Syahputra
et al. [41] proposed a probabilistic neural network to detect HR: morphological closing was
implemented to extract the background and the optic disk, thresholding was used to segment
the vessels, and fractal-dimension and invariant-moment techniques were implemented to
extract the features. The authors concluded that their proposed technique achieved 100%
accuracy in detecting HR on the STARE dataset.
Arasy et al. [7] implemented principal component analysis to detect HR and a
backpropagation neural network to classify it, concluding that their proposed technique
achieved 86.36% accuracy on the STARE dataset. Wiharto et al. [40] extracted features using
the fractal dimension and lacunarity and classified HR with an ensemble random forest,
achieving an accuracy of 88%, sensitivity of 91.3%, and specificity of 85.1% on the STARE
dataset. Chetia et al. [32] evaluated the tortuosity of the blood vessels using a distance-metric
technique to detect HR on the VICAVR and DRIVE datasets.
HR can be detected and graded using an improved activation function in a convolutional
neural network [48]. The deep-learning dual-stream fusion network and dual-stream
aggregation network detect the retinal vasculature and use semantic segmentation to detect
diabetic and hypertensive retinopathy [8]. The stage of hypertensive retinopathy can be
determined by calculating the arterio-venous diameter, which gives the diameter of the
arteries and veins [44]; an improved iterative method and the Parr-Hubbard and Knudtson
methods are used to compute the AVR from the arteries and veins. The retinal-vessel
tortuosity severity index is computed by an improved hybrid decision support system for
automatic detection and grading of hypertensive retinopathy [10]. Hypertensive retinopathy
is also analyzed and classified based on symptoms and identified related risk factors [28].
3 Methodology
The present study aims to accurately categorize hypertensive retinopathy fundus images.
We propose an automated model for grading the severity of hypertensive retinopathy; the
improved loss function in the CNN architecture increases the classification accuracy of the
hypertensive retinopathy categorization. The proposed methodology flow is presented in
Fig. 3.
3.1 Pre-processing
Fuzzy logic is used to improve the brightness of the fundus image. A sigmoid membership
function was applied in the YIQ color space. According to the Pal and King method [39], a
gray-level image can be expressed as a fuzzy matrix. The steps followed to enhance the
fundus image using fuzzy logic are:

1. Convert the gray fundus image to a fuzzy matrix:

X = \bigcup_{i=1}^{h} \bigcup_{j=1}^{w} \mu_{ij} / x_{ij}      (1)

where x is the gray image of size h × w and the fuzzy matrix is also of size h × w.
Fig. 3 Block Diagram
2. Transform the RGB image to the YIQ form by applying the technique followed in [38]:

M_1 = [ 0.299    0.587    0.114
        0.596   −0.274   −0.322
        0.211   −0.523    0.312 ]      (2)

where M_1 is the forward transform of the RGB channels. Y values are limited to [0, 255] and
normalized to [0, 1]:

y_n = y / 255      (3)
3. The S-shaped membership function does not depend on image-specific parameters and
enhances the image's contrast. The INT operator is used to adjust the contrast of the fundus
image. The transformation function is taken from [37]:

s_1(\mu_{ij}) = { s_1'(\mu_{ij}),   0 ≤ \mu_{ij} ≤ 0.5
              { s_1''(\mu_{ij}),  0.5 ≤ \mu_{ij} ≤ 1      (5)

where

\mu_{ij} = 1 / ( 1 + \sqrt{(1 − x_{ij}) / x_{ij}} )      (6)

s_1(\mu_{ij}) = { 2\mu_{ij}^2,            0 ≤ \mu_{ij} ≤ 0.5
              { 1 − 2(1 − \mu_{ij})^2,  0.5 ≤ \mu_{ij} ≤ 1      (7)

where \mu_{ij} of the fundus image X is transformed to \mu'_{ij} to enhance the fundus image
using the modification technique [50]:

\mu'_{ij} = { s_r'(\mu_{ij}),   0 ≤ \mu_{ij} ≤ 0.5
           { s_r''(\mu_{ij}),  0.5 ≤ \mu_{ij} ≤ 1      (8)
4. s_r denotes the successive (recursive) application of s_1. As r approaches infinity, the
fundus image tends to a crisp form; the value of r controls the degree of enhancement:

s_r(\mu_{ij}) = s_1( s_{r−1}(\mu_{ij}) ),  r = 1, 2, …      (9)

5. For defuzzification, the fuzzy values in [0, 1] are transformed back into spatial-domain
values using the inverse transform Y_e:

Y_e = 255 / ( 1 + \alpha ( (1 − \mu_{ij}) / \mu_{ij} )^2 )      (10)
where α is the correction parameter to enhance the contrast of the fundus image.
6. Convert YIQ back to the RGB channels:

M_2 = [ 1    0.956    0.621
        1   −0.272   −0.647
        1   −1.106    1.703 ]      (11)

where M_2 is the inverse transform.
The resultant fuzzy enhanced image is exhibited in Fig. 4.
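The enhancement steps above can be sketched in a few lines of numpy. This is a minimal illustration assuming the standard NTSC YIQ forward matrix and a single pass of the INT operator (r = 1), with `alpha` as the correction parameter of Eq. (10); it is not the authors' exact implementation.

```python
import numpy as np

def rgb_to_yiq(rgb):
    """Forward NTSC transform (Eq. 2); rgb in [0, 1], shape (h, w, 3)."""
    M1 = np.array([[0.299,  0.587,  0.114],
                   [0.596, -0.274, -0.322],
                   [0.211, -0.523,  0.312]])
    return rgb @ M1.T

def int_operator(mu):
    """S-shaped intensification (Eq. 7): darkens mu < 0.5, brightens mu > 0.5."""
    return np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)

def fuzzy_enhance(y, r=1, alpha=1.0, eps=1e-6):
    """Fuzzify the normalized Y channel, apply INT r times, defuzzify (Eqs. 6-10)."""
    yn = np.clip(y, eps, 1 - eps)                      # normalized luminance (Eq. 3)
    mu = 1.0 / (1.0 + np.sqrt((1 - yn) / yn))          # fuzzification (Eq. 6)
    for _ in range(r):                                 # recursive INT (Eq. 9)
        mu = int_operator(np.clip(mu, eps, 1 - eps))
    mu = np.clip(mu, eps, 1 - eps)
    return 255.0 / (1.0 + alpha * ((1 - mu) / mu)**2)  # defuzzification (Eq. 10)
```

Mid-gray luminance (0.5) maps to 127.5, while darker pixels are pushed down and brighter pixels pushed up, which is the contrast-stretching behavior the INT operator is intended to produce.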
3.2 Enhanced fuzzy C-means clustering
The main purpose of retinal segmentation is to automatically distinguish lesion patterns
and classify abnormalities; retinal image segmentation detects lesions quickly and
accurately and extracts the lesions in the retina for further processing, such as
classification. This study uses enhanced fuzzy C-means clustering to identify and process
the fundus image data. The steps followed to segment the fundus image using the enhanced
fuzzy C-Means clustering are:
Fig. 4 Pre-processed Image
1. Initialize the cluster centers, the fuzziness parameter m, and the stopping criterion ε.
For each pixel in the fundus image, calculate the membership values \mu_{ij} as:

\mu_{ij} = 1 / \sum_{k=1}^{c} ( ||x_j − v_i|| / ||x_j − v_k|| )^{2/(m−1)}      (12)

where x_j is the pixel value in the fundus image, and the cluster centers are updated as:

v_i = \sum_{j=1}^{n} w_{ij}^m K(x_j, v_i) x_j / \sum_{j=1}^{n} w_{ij}^m K(x_j, v_i)      (13)

where

w_{ij} = \mu'_{ij} z'_{ij} / \sum_{k=1}^{c} \mu'_{kj} z'_{kj}      (14)

K(x_j, v_i) is the kernel distance metric, v_i is the new cluster center, and K(x_i) represents
the square window centered at pixel x_i. P is the parameter that controls the function, and the
iteration continues until {T(i) − T(i−1)} < ε.
2. The partition coefficient (K_pc) and partition entropy (K_pe) use the membership values to
evaluate the cluster validity [11]:

K_pc = (1/n) \sum_{j=1}^{n} \sum_{i=1}^{c} \mu_{ij}^2      (15)

K_pe = −(1/n) \sum_{j=1}^{n} \sum_{i=1}^{c} \mu_{ij} \log \mu_{ij}      (16)

K_pc values vary in [1/p, 1], where p is the number of clusters, and K_pe varies in [0, log p].
3. To assess the compactness and separation strength of the clusters, the Xie-Beni function is
used [36]:

XB(U) = \sum_{i=1}^{c} \sum_{j=1}^{n} \mu_{ij}^m ||x_j − v_i||^2 / ( n · \min_{i \neq p} ||v_i − v_p||^2 )      (17)

The numerator measures the compactness of the fuzzy partition, and the denominator
represents the separation between the clusters.
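The three validity measures of Eqs. (15)-(17) can be computed directly from the membership matrix `u` (shape c × n), the centers `v`, and the 1-D pixel values `x`. The sketch below uses the standard Bezdek and Xie-Beni definitions, with the fuzziness exponent m = 2 assumed for the Xie-Beni index.

```python
import numpy as np

def partition_coefficient(u):
    """K_pc (Eq. 15): mean squared membership; ranges from 1/c (random) to 1 (crisp)."""
    return np.sum(u ** 2) / u.shape[1]

def partition_entropy(u, eps=1e-12):
    """K_pe (Eq. 16): ranges from 0 (crisp) to log(c) (maximally fuzzy)."""
    return -np.sum(u * np.log(u + eps)) / u.shape[1]

def xie_beni(u, v, x, m=2.0):
    """Xie-Beni index (Eq. 17): compactness over n times the minimal center separation."""
    d2 = (x[None, :] - v[:, None]) ** 2          # squared distances, shape (c, n)
    compact = np.sum((u ** m) * d2)
    sep = min((v[i] - v[j]) ** 2
              for i in range(len(v)) for j in range(len(v)) if i != j)
    return compact / (x.size * sep)
```

Lower Xie-Beni values and K_pc values close to 1 indicate a well-separated, nearly crisp partition.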
To predict HR, we first determined the impact of the blood vessels in the fundus images
and segmented the blood vessels. All the fundus images of the training and test sets were
pre-processed identically and then segmented using the above-discussed technique. The
segmented retina image is shown in Fig. 5.
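A baseline fuzzy C-means iteration corresponding to the membership and center updates above can be sketched as follows. The kernel weighting of Eq. (14) is omitted, so this illustrates the plain algorithm on 1-D pixel intensities rather than the enhanced variant.

```python
import numpy as np

def fcm(x, c=2, m=2.0, max_iter=100, eps=1e-5, seed=0):
    """Baseline fuzzy C-means on a 1-D array of pixel intensities x.
    Returns the membership matrix u (c x n) and the cluster centers v."""
    rng = np.random.default_rng(seed)
    n = x.size
    u = rng.random((c, n))
    u /= u.sum(axis=0)                        # memberships sum to 1 per pixel
    for _ in range(max_iter):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)         # weighted-mean center update
        d = np.abs(x[None, :] - v[:, None]) + 1e-12
        p = 2.0 / (m - 1.0)
        u_new = 1.0 / (d ** p * np.sum(d ** (-p), axis=0))  # Eq. (12) form
        if np.max(np.abs(u_new - u)) < eps:   # stopping criterion epsilon
            u = u_new
            break
        u = u_new
    return u, v
```

Thresholding the resulting memberships (e.g., taking the arg-max cluster per pixel) yields the binary vessel/background segmentation used downstream.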
3.3 Calculation of the artery-vein ratio (A/V ratio)
According to [21], the central retinal artery equivalent (CRAE) is computed as:

CRAE = \sqrt{ 0.87 w_x^2 + 1.01 w_y^2 − 0.22 w_x w_y − 10.76 }      (18)

where w_x is the median artery width and w_y is the value before the median. The central
retinal vein equivalent (CRVE) is computed as:

CRVE = \sqrt{ 0.72 w_x^2 + 0.91 w_y^2 + 450.02 }      (19)

where w_x is the median vein width and w_y is the value before the median. The AVR is
calculated as:

AVR = CRAE / CRVE      (20)
The A/V ratio for different levels of HR is presented in Table 1.
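With the CRAE/CRVE formulas of Eqs. (18)-(20), the A/V ratio reduces to a few lines of code. The helper below is illustrative: the −10.76 constant is the standard Parr-Hubbard value, the median-based width selection follows the text above, and the width lists are hypothetical inputs.

```python
import math

def crae(wx, wy):
    """Central retinal artery equivalent (Eq. 18, Parr-Hubbard constants)."""
    return math.sqrt(0.87 * wx**2 + 1.01 * wy**2 - 0.22 * wx * wy - 10.76)

def crve(wx, wy):
    """Central retinal vein equivalent (Eq. 19)."""
    return math.sqrt(0.72 * wx**2 + 0.91 * wy**2 + 450.02)

def avr(artery_widths, vein_widths):
    """A/V ratio (Eq. 20): wx is the median width, wy the value before it."""
    aw, vw = sorted(artery_widths), sorted(vein_widths)
    ax, ay = aw[len(aw) // 2], aw[len(aw) // 2 - 1]
    vx, vy = vw[len(vw) // 2], vw[len(vw) // 2 - 1]
    return crae(ax, ay) / crve(vx, vy)
```

Because hypertension narrows the arteries while veins are largely unaffected, the computed ratio drops below the normal 0.667-0.75 band of Table 1 as HR progresses.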
3.4 CNN
CNN is popular for image classification, segmentation, and object detection [20]. CNNs
include various models such as VGGNet, GoogleNet, and Resnet; among them, Resnet
improves performance and mitigates vanishing gradients. Resnet-152 performs better than
Inception and GoogleNet among CNN models, and Resnet achieves high performance in
medical image recognition, segmentation, and classification [33]. The network structure used
here follows Resnet, which has demonstrated high performance in medical image
classification. Resnet has five versions, Resnet-18, 34, 50, 101, and 152; the difference
between them is the convolutional depth. Since deeper networks generally perform better, we
chose Resnet-152 to classify HR. Resnet-152 has 151 convolution layers followed by a fully
connected last layer, with an activation in each layer.
Fig. 5 Segmented retina image
The first convolution layer is set to a 7 × 7 filter with padding 3 and stride 2, the kernel
size of the remaining convolution layers is 3 × 3, and the pooling uses a filter size of 16.
The activation function [13] is used in the fully connected layer, and the CNN outputs the HR
prediction as the probability of the forecasted HR class, making the network an HR classifier.
For the loss function used for training and error calculation, we propose a loss function for HR
prediction, discussed in the next section. In addition, an optimizer [12] with a learning rate of
1e-5 was used, and the HR prediction model was trained for 500 epochs with a batch size of 64.
3.5 Improved loss function in Resnet-152
Previous CNN-based models use loss functions such as the mean squared error. The
improved loss function in the Resnet-152 model is depicted in Fig. 6.
The Improved Loss Function (ILF) can be represented mathematically as:

ILF = (1/N) \log [ \sum ( Y_{pred} − Y_{true} )^2 ]      (21)

The complete loss gradient calculated for the enhanced Resnet-152 is:

\partial L / \partial w = \partial L / \partial W_1 + \partial L / \partial W_2 + … + \partial L / \partial W_{151}      (22)
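Reading Eq. (21) as the log of the summed squared error scaled by 1/N, the proposed loss and its analytic gradient can be sketched framework-agnostically in numpy; the small `eps` constant is an added assumption to keep the logarithm finite when predictions match the targets exactly.

```python
import numpy as np

def improved_loss(y_pred, y_true, eps=1e-12):
    """Improved Loss Function (Eq. 21): (1/N) * log(sum of squared errors)."""
    sq_err = np.sum((y_pred - y_true) ** 2)
    return np.log(sq_err + eps) / y_pred.size

def improved_loss_grad(y_pred, y_true, eps=1e-12):
    """Analytic gradient w.r.t. y_pred: d/dy [log(S)/N] = 2*(y - t) / (S * N).
    This is the per-output term propagated through the layer sum of Eq. (22)."""
    diff = y_pred - y_true
    return 2.0 * diff / ((np.sum(diff ** 2) + eps) * y_pred.size)
```

The logarithm flattens the loss surface for large errors, so early training steps take smaller, more stable updates than with plain MSE.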
3.6 Improved loss function in VGG-19
The improved loss function in the VGG-19 model is depicted in Fig. 7.
The complete loss gradient calculated for the enhanced VGG-19 is:

\partial L / \partial w = \partial L / \partial W_1
3.7 Improved loss function in Alex net
The improved loss function in the Alex net model is depicted in Fig. 8.
The complete loss gradient calculated for the enhanced Alexnet is:

\partial L / \partial w = \partial L / \partial W_1
Table 1 A/V ratio for different levels of HR [18]

Degree of HR      A/V Ratio
Normal retina     0.667–0.75
Grade-1           0.5
Grade-2           0.33
Grade-3           0.25
Grade-4           <0.2
Fig. 6 Enhanced Resnet-152 Model
Fig. 7 Enhanced VGG-19 Model
Fig. 8 Enhanced Alex net Model
4 Experimental results
We collected 1000 fundus images which consist of hypertensive retinopathy including Normal
stage (HR-NR) of 200, Mild stage (HR-MD) of 200, Moderate stage (HR-MO) of 200,
Malignant stage (HR-MG) of 200, Severity stage (HR-SR) of 200. We collected 1000
hypertensive retinopathy images from six different online sources, and by using the data
augmentation [20, 47], we doubled these 1000 HR images to 2000 images to work on a large
database. To implement the proposed algorithm, a 2.2 GHz Intel Core i5 processor with 16 GB
RAM was used and implemented using Python 3.7.2. We split the dataset as 85% for training
and 15% for testing; out of 2000 images, 1700 images are utilized for training, and 300 images
are for testing.
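The 85/15 split described above can be reproduced deterministically; this sketch makes no claim about the authors' actual shuffling scheme.

```python
import random

def split_dataset(items, train_frac=0.85, seed=42):
    """Shuffle and split into train/test lists; 2000 items -> 1700 / 300."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed for reproducibility
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]
```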
4.1 Performance metrics
The proposed model is evaluated and compared with the state-of-the-art hypertensive system
models in terms of sensitivity, specificity, F1-score, and accuracy measures. The mathematical
representation of these measures is given in Eqs. (23) to (26).

Accuracy = (TP + TN) / (TP + TN + FP + FN)      (23)

Sensitivity = TP / (TP + FN)      (24)

Specificity = 1 − FP / (TN + FP)      (25)

F1-Score = 2·TP / (2·TP + FP + FN)      (26)
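Eqs. (23)-(26) follow directly from confusion-matrix counts; a minimal helper for the binary case (illustrative names) is:

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, accuracy and F1 from confusion-matrix counts
    (Eqs. 23-26); specificity uses the equivalent form TN / (TN + FP)."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)          # equals 1 - FP / (TN + FP)
    f1          = 2 * tp / (2 * tp + fp + fn)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}
```

For the 5-class setting reported in Table 4, these counts would be derived per class (one-vs-rest) and then averaged.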
4.2 Results
The acquired fundus images are of variable sizes. Since it is difficult to process a fundus
image at its original size during classification, we first resized the fundus images. Table 2
tabulates the effect of different pixel sizes of the fundus images on diagnosing the five
stages of hypertensive retinopathy.
Table 2 shows that accuracy is not improved simply by increasing the image size: the size
525×525 captures all the features and yields the best accuracy, so we resized all the fundus
images to 525×525 pixels. The dataset was randomly shuffled during the training process,
and multiple experiments were executed with different hyperparameters. Table 3 tabulates
the results for different hyperparameters: the regularized function [14], the optimizer [12],
the activation function [13], epoch size = 500/1000/2000, learning rate = 0.01/0.5, and batch
size = 32/64/128/256.
Table 2 Results of varying sizes of fundus images

Size      Sensitivity (%)  Specificity (%)  Accuracy (%)  F1-Score (%)
400×400   91.56            89.56            90.85         88.68
450×450   92.36            90.47            91.69         89.83
500×500   93.56            90.94            91.89         88.42
525×525   95.44            92.34            94.67         92.47
600×600   92.45            88.73            90.36         85.82
700×700   89.84            84.93            89.78         82.73
Table 3 Performance of the proposed model with different hyperparameters

Batch Size  Learning Rate  Epoch  RMSE   Processing Time (ms)
32          0.01           500    8.75   6.37
32          0.01           1000   11.35  7.14
32          0.01           2000   13.45  9.32
32          0.5            500    9.38   7.16
32          0.5            1000   14.34  12.33
32          0.5            2000   18.78  14.45
64          0.01           500    9.47   7.37
64          0.01           1000   12.53  8.61
64          0.01           2000   14.84  10.12
64          0.5            500    10.68  8.16
64          0.5            1000   15.39  13.73
64          0.5            2000   19.79  15.95
128         0.01           500    10.59  8.87
128         0.01           1000   13.63  9.69
128         0.01           2000   16.89  12.12
128         0.5            500    11.65  9.69
128         0.5            1000   16.79  14.78
128         0.5            2000   20.47  16.69
256         0.01           500    11.45  10.38
256         0.01           1000   14.63  12.36
256         0.01           2000   20.89  14.71
256         0.5            500    12.46  13.59
256         0.5            1000   18.59  19.78
256         0.5            2000   22.54  22.69
We measured the five stages of hypertensive retinopathy and tabulated the results in Table 4.
The proposed model recognizes five stages of HR: HR-NR, HR-MI, HR-MO, HR-MA, and
HR-SR.
Table 5 compares the loss and the processing time of the proposed model with different
CNN models. According to the results, the overall testing loss and the processing
Table 4 Results of the proposed for different stages of hypertensive retinopathy
Stage Sensitivity (%) Specificity (%) Accuracy (%) F1-Score (%)
HR-NR 95.74 96.39 96.35 95.56
HR-MI 95.24 96.78 96.58 95.73
HR-MO 94.45 94.78 96.15 93.31
HR-MA 94.53 93.89 95.89 92.72
HR-SR 96.48 97.38 96.15 95.37
Table 5 HR classification comparison of the loss for the proposed with different CNN models
MODEL LOSS PPV NPV Processing time(ms)
Inception-v3 0.0029 0.804 0.708 12.98
VGG-19 0.0015 0.815 0.718 11.67
Resnet-50 0.0019 0.812 0.739 10.53
Alex net 0.0021 0.826 0.720 11.53
Google net 0.0029 0.828 0.789 12.48
Squeeze net 0.058 0.695 0.656 12.35
Capsule NN 0.0023 0.658 0.603 11.39
Enhanced Resnet-152 0.0013 0.895 0.796 9.31
Enhanced Alexnet 0.0014 0.891 0.781 10.01
Enhanced VGG-19 0.0015 0.885 0.786 10.41
time of the proposed model are suitable for detecting and classifying HR. The absolute
values of the errors and their distributions over HR grades are obtained on the test set. The
vessel calculation is evaluated with the Positive Predictive Value (PPV) and the Negative
Predictive Value (NPV), which give the percentages of correctly classified abnormal cases
and correctly classified normal cases, respectively.
For the 5-class classification of hypertensive retinopathy grading, the proposed model is
compared with other state-of-the-art classification techniques. Table 6 compares the
proposed model with previously existing techniques in terms of the performance metrics.
The table shows that the Triwijoyo model [46] accomplished an accuracy of 98.66%, a
sensitivity of 78.55%, a specificity of 82.58%, and an F1-score of 84.65%; this model cannot
extract generalized features to separate HR from non-HR, and Triwijoyo et al. tested only
40 images, which inflates the reported accuracy. Pradipto [45] used image processing
techniques for feature extraction and segmentation; the Pradipto model achieved an accuracy
of 84.78%, sensitivity of 81.53%, specificity of 82.47%, and F1-score of 86.83%. Densehyper
Table 6 Comparison of the proposed with the previous existing techniques to detect HR for DiaretDB0 dataset
MODEL Accuracy (%) Sensitivity (%) Specificity (%) F1-Score (%)
Triwijoyo et al. [46] 98.66 78.55 82.58 84.65
Pradipto et al. [45] 84.78 81.53 82.47 86.83
Abbas et al. [1] 95.64 93.72 95.86 96.58
Noronha et al. [34] 92.64 82.70 94.56 91.96
Manikis et al. [26] 93.72 95.86 85.84 92.90
Mirsharif et al. [27] 96.54 69.97 92.84 74.05
Muramatsu et al. [30] 75.65 75.67 94.88 95.75
Khitran et al. [22] 98.22 79.65 91.94 94.99
Muhammad et al. [9] 97.15 86.35 98.39 96.37
Kang et al. [29] 95.11 97.40 81.13 96.29
Leopold et al. [24] 90.45 64.33 94.72 79.42
Wang et al. [49] 95.38 79.14 97.22 97.53
Feng et al. [17] 96.33 77.09 98.48 97.57
Enhanced Resnet 98.99 94.96 96.84 97.84
Enhanced Alexnet 97.36 93.74 95.69 96.86
Enhanced VGG-19 97.18 93.69 95.80 96.95
[1] used a modified DRL model through a dense feature transform that exploits localized
and specialized features, achieving an accuracy of 95.64%, sensitivity of 93.72%, specificity
of 95.86%, and F1-score of 96.58%. We observed that Muramatsu et al. [30] achieved the
lowest accuracy, Kang et al. [29] the lowest specificity, and Leopold et al. [24] the lowest
sensitivity.
5 Conclusion
This paper applies deep learning integrated with Enhanced Fuzzy Clustering to the
classification of retinal images. The proposed work is a two-stage process. In the first stage,
Enhanced Fuzzy Clustering generates an optimal set of features that eliminates redundant
data. In the second stage, the optimal feature vector is fed to the Resnet-152 neural network
to classify the pixels into vessel and non-vessel regions. This study classifies the segmented
output on the optimized features using the improved loss function in the Resnet-152,
VGG-19, and Alex net models.
On average, Resnet-152 achieved 98% classification accuracy for segmentation and 97% for
abnormality detection; VGG-19 achieved 97% for segmentation and 96% for abnormality
detection; and Alex net achieved 97% for segmentation and 96% for abnormality detection.
Data availability Data sharing not applicable to this article.
Declarations
Conflict of interest No conflict of interest to authors.
References
1. Abbas Q, Ibrahim MEA (2020) Densehyper: an automatic recognition system for detection of hypertensive
retinopathy using dense features transform and deep-residual learning. Multimed Tools Appl 79(41):31595–
31623
2. Abbasi UG, Akram MU (2014) Classification of blood vessels as arteries and veins for diagnosis of
hypertensive retinopathy. In 2014 10th International Computer Engineering Conference (ICENCO), pages
5–9. IEEE
3. Aggarwal AK (2022) Biological Tomato Leaf disease classification using deep learning framework. Int J
Biol Biomed Eng 16(1):241–244
4. Agurto C, Joshi V, Nemeth S, Soliz P, Barriga S (2014) Detection of hypertensive retinopathy using vessel
measurements and textural features. In 2014 36th annual international conference of the IEEE engineering
in medicine and biology society, pages 5406–5409. IEEE
5. Akbar S, Akram MU, Sharif M, Tariq A, Khan SA (2018) Decision support system for detection of
hypertensive retinopathy using arteriovenous ratio. Artif Intell Med 90:15–24
6. Akbar S, Akram MU, Sharif M, Tariq A, Yasin UU (2018) Arteriovenous ratio and papilledema based
hybrid decision support system for detection and grading of hypertensive retinopathy. Comput Methods
Programs Biomed 154:123–141
7. Arasy R, Basari (2019) Detection of hypertensive retinopathy using principal component analysis (pca) and
backpropagation neural network methods. In AIP Conference Proceedings, volume 2092, page 040002. AIP
Publishing LLC
8. Arsalan M, Haider A, Choi J, Park KR (2021) Diabetic and hypertensive retinopathy screening in fundus
images using artificially intelligent shallow architectures. J Pers Med 12(1):1–7
9. Arsalan M, Haider A, Lee YW, Park KAR (2022) Detecting retinal vasculature as a key biomarker for deep
learning based intelligent screening and analysis of diabetic and hypertensive retinopathy. Exp Syst Appl
200:117009
10. Badawi SA, Fraz MM, Shehzad M, Mahmood I, Javed S, Mosalam E, Nileshwar AK (2022) Detection and
grading of hypertensive retinopathy using vessels tortuosity and arteriovenous ratio. J Digit Imaging 35(2):
281–301
11. Bezdek JC (2013) Pattern recognition with fuzzy objective function algorithms. Springer Sci Bus Med
12. Bhimavarapu U, Battineni G, Chintalapudi N (2023) Improved optimization algorithm in LSTM to predict
crop yield. Computers 12(1):1–19
13. Bhimavarapu U (2022) Fuzzy LSTM to predict air quality with improved activation function. Int J Fuzzy
Syst:1–16
14. Bhimavarapu U (2022) IRF-LSTM: enhanced regularization function in LSTM to predict the rainfall. Neural
Comput Applic 1(1):1–11
15. Chatterjee S, Chattopadhya S, Hope-Ross M, Lip PL (2002) Hypertension and the eye: changing perspec-
tives. J Hum Hypertens 16(10):667–675
16. Faheem MR et al (2015) Diagnosing hypertensive retinopathy through retinal images. Biomed Res Ther
2(10):1–4
17. Feng S, Zhou Z, Pan D, Tian Q (2019) CCNet: a cross connected convolutional network for segmenting
retinal vessels using multiscale features. Neurocomputing
18. Henderson AD, Bruce BB, Newman NJ, Biousse V (2011) Hypertension-related eye abnormalities and the
risk of stroke. Rev Neurol Dis 8(1–2):1
19. Irshad S, Ahmad M, Akram MU, Malik AW, Abbas S (2016) Classification of vessels as arteries verses
veins using hybrid features for diagnosis of hypertensive retinopathy. In: 2016 IEEE International
Conference on Imaging Systems and Techniques (IST) 1(1):472–475
20. Kaur A, Chauhan AS, Kumar Aggarwal A (2022) Prediction of enhancers in DNA sequence data using a
hybrid CNN-DLSTM model. IEEE/ACM Trans Comput Biol Bioinform 1(1):1–10
21. Khitran S, Akram MU, Usman A, Yasin U (2014) Automated system for the detection of hypertensive
retinopathy. In: 2014 4th international conference on image processing theory, tools and applications
(IPTA) 1(1):1–6
23. Kiruthika M, Swapna TR, Santhosh KC, Peeyush KP (2019) Artery and vein classification for hypertensive
retinopathy. In 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), pages
244–248. IEEE
24. Leopold HA, Orchard J, Zelek JS, Lakshminarayanan V (2019) PixelBNN: augmenting the PixelCNN with
batch normalization and the presentation of a fast architecture for retinal vessel segmentation. J Imaging 5(1):
26
25. Manikis GC, Sakkalis V, Zabulis X, Karamaounas P, Triantafyllou A, Douma S, Zamboulis C, Marias K
(2011) An image analysis framework for the early assessment of hypertensive retinopathy signs. In 2011 E-
health and bioengineering conference (EHB), pages 1–6. IEEE
27. Mirsharif Q, Tajeripour F, Pourreza H (2013) Automated characterization of blood vessels as arteries and
veins in retinal images. Comput Med Imaging Graph 37(7):607–617
28. Mohan N, Murugan R, Goel T (2022) Machine learning algorithms for hypertensive retinopathy detection
through retinal fundus images. In: Computer vision and recognition systems. Apple Academic Press, pp 39–67
29. Muhammad A, Muhammad O, Tahir M, Se WC, Kang RP (2019) Aiding the diagnosis of diabetic and
hypertensive retinopathy using artificial intelligence based semantic segmentation. J Clin Med 8(9):1–15
30. Muramatsu C, Hatanaka Y, Iwase T, Hara T, Fujita H (2010) Automated detection and classification of
major retinal vessels for determination of diameter ratio of arteries and veins. Proc SPIE Med Imaging:
Computer-Aided Diagnosis 7624:153–160
31. Narasimhan K, Neha VC, Vijayarekha K (2012) Hypertensive retinopathy diagnosis from fundus images by
estimation of AVR. Procedia Eng 38:980–993
32. Nirmala SR, Chetia S (2017) Retinal blood vessel tortuosity measurement for analysis of hypertensive
retinopathy. In: 2017 International Conference on Innovations in Electronics, Signal Processing and
Communication (IESC) 1(1):45–50
33. Noh KJ, Park SJ, Lee S (2019) Scale-space approximated convolutional neural networks for retinal vessel
segmentation. Comput Methods Programs Biomed 178:237–246
34. Noronha K, Navya KT, Nayak KP (2012) Support system for the automated detection of hypertensive
retinopathy using fundus images. International conference on electronic design and signal processing
(ICEDSP2012) 1(1):7–11
35. Ortíz D, Cubides M, Suarez A, Zequera M, Quiroga J, Gómez JA, Arroyo N (2012) System development
for measuring the arterious venous rate (AVR) for the diagnosis of hypertensive retinopathy. In 2012 VI
Andean Region International Conference, pages 53–56. IEEE
36. Pal NR, Bezdek JC (1995) On cluster validity for the fuzzy c-means model. IEEE Trans Fuzzy Syst 3(3):
370–379
37. Ross TJ (2005) Fuzzy logic with engineering applications. John Wiley & Sons, New York
38. Sangwine SJ, Horne REN (2012) The colour image processing handbook. Springer Sci Bus Med
39. Pal SK, King RA (1981) Image enhancement using smoothing with fuzzy sets. IEEE Trans Syst Man
Cybern 11(7):494–500
40. Suryani E, Kipti MY et al (2018) Assessment of early hypertensive retinopathy using fractal analysis of
retinal fundus image. Telkomnika 16(1):445–454
41. Syahputra MF, Aulia I, Rahmat RF et al (2017) Hypertensive retinopathy identification from retinal fundus
image using probabilistic neural network. In 2017 International Conference on Advanced Informatics,
Concepts, Theory, and Applications (ICAICTA), pages 1–6. IEEE
42. Tan JH, Acharya UR, Bhandary SV, Chua KC, Sivaprasad S (2017) Segmentation of optic disc, fovea and
retinal vasculature using a single convolutional neural network. J Comput Sci 20:70–79
43. Thukral R, Kumar A, Arora AS (2019) Effect of different thresholding techniques for denoising of EMG
signals using different wavelets. International conference on intelligent communication and computa-
tional techniques (ICCT2019) 1(1):161–165
44. Triwijoyo BK, Sabarguna BS, Budiharto W, Abdurachman E (2021) New hypertensive retinopathy
grading based on the ratio of artery-venous diameter from retinal image. Int J Comput 20(2):221–227
45. Triwijoyo BK, Pradipto YD (2017) Detection of hypertension retinopathy using deep learning and
Boltzmann machines. J Phys Conf Ser 801(1):012039
46. Triwijoyo BK, Budiharto W, Abdurachman E (2017) The classification of hypertensive retinopathy using
convolutional neural network. Procedia Comput Sci 116:166–173
47. Ubhi JS, Aggarwal AK (2022) Neural style transfer for image within images and conditional GANs for
destylization. J Vis Commun Image Represent 85:103483
48. Usharani B (2022) Hypertensive retinopathy classification using improved clustering algorithm and the
improved convolution neural network. In: Deep Learn Appl Cyber Phys Syst. IGI Global, pp 119–131
49. Wang C, Zhao Z, Ren Q, Xu Y, Yu Y (2019) Dense U-net based on patch based learning for retinal vessel
segmentation. Entropy 21:168
50. Zhou F, Jia ZH, Yang J, Kasabov N (2017) Method of improved fuzzy contrast combined adaptive
threshold in NSCT for medical image enhancement. Biomed Res Int 2017:1–10
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a
publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such publishing agreement and
applicable law.