A Kannada Handwritten Character Recognition System Exploiting Machine Learning Approach
Computer Science and Engineering
Nitte Meenakshi Institute of Technology
Bangalore, India
[email protected]
Abstract—Handwritten character recognition plays an important role when handwritten text on paper, postcards, etc. requires conversion into a digitized form. The difference between a digitized handwritten document and a scanned document is that the former can be edited while the latter cannot. Significant developments have been made in handwritten character recognition for widely used languages like English. India is a multilingual country where there exist multiple regional languages like Kannada, Tamil, Malayalam and other Dravidian languages with complex scripts. Kannada is spoken in most regions of Karnataka State, one of the southern states of India. In the proposed research, a Convolutional Neural Network (CNN) is applied to recognize Kannada handwritten characters. The research employs densely connected convolutional networks, the DenseNet variant of CNN, to recognize handwritten Kannada characters. DenseNet is preferred in this research for its known advantages, such as enhanced feature propagation, improved feature reuse, and a reduced vanishing-gradient problem. The dataset used in the experimentation is the standard Chars74K dataset. The prime objective of this research is to devise a machine learning based application that recognizes Kannada handwritten characters with high accuracy and converts them into digitized characters. Digitized documents promote the growth of several other major applications like speech conversion, language translation and conversion of medieval documents. A testing accuracy of 93.87% is observed for 3285 images of handwritten Kannada characters, with 5 images from each of the 657 classes. This machine learning model can also be trained to recognize characters of other Indian languages.

Keywords—Handwritten Character Recognition, Kannada, Convolutional Neural Network, DenseNet, Machine Learning.
I. INTRODUCTION
A handwritten character recognition application interprets a handwritten document and converts it into a digitized format that can be edited. There are numerous real-world applications of handwritten character recognition systems. The domain of automated handwriting recognition has witnessed remarkable real-world success through applications like recognition of addresses on mail pieces and reading handwritten amounts on bank cheques and forms, with domain-dependent constraints to ensure the problem is tractable. Historical documents can be converted into digitized text, which can further be stored in digital libraries using handwritten character recognition applications. A lot of medieval information can be prevented from being destroyed if it is converted into digital form and saved for a long time. The analysis of these historical Kannada documents may provide useful insight into the culture and traditions followed by our predecessors. Extracting such valuable information from stone carvings, palm leaves and paper documents will enhance the knowledge of our heritage. A lot of improvement has been made in the development of English character recognition applications. There are many applications present these days that yield more than 99% accuracy for the English language. In India there are more than 22 major regional spoken languages with different handwriting scripts. One of the languages spoken in the southern part of India is Kannada, which is included in the list of major languages spoken in India. There are more than 44 million native Kannada speakers in the country. There are 49 letters in the Kannada script. The 34 base consonants are modified by the modifier glyphs of the 16 vowels, which results in a total of 578 characters ((34*16) + 34 = 578). For each of the 34 consonants there also exists an ottakshara consonant emphasis glyph, which results in a total of (578*34) + 16 = 19668 distinct characters. Each letter has its own syllable [1]. Each letter or character has a distinct shape and tone, which aids the auditory and visual presentation of the letter.

Recognition of Kannada handwritten script is difficult for various reasons. A single character of Kannada comprises collective symbols. There are 19668 distinct characters in the Kannada script, so the dataset required to recognize this vast number of characters is very large. Moreover, handwriting varies from person to person and it is difficult to decipher handwritten text.

In the proposed research, the machine learning model exploited is the CNN, one of the most common machine learning models used for object recognition and classification. DenseNet is a kind of CNN that utilizes dense blocks for dense connections between layers, where layers are directly connected with each other if their feature map sizes match. The DenseNet model has shown promising outcomes in our research.

II. RELATED PAPERS

CNN is one of the most powerful and widely used deep learning models for image recognition and image classification. CNN is an Artificial Neural Network (ANN) model that helps discover parts of images and perceive patterns. CNNs are the most effective architectures for tasks such as image classification, retrieval and detection, as they are expected to give results with very high accuracy. In a CNN model, the convolution layers collect inputs and transform them to provide the required input to the following layer. CNN uses filters at the beginning of the network to detect patterns such as lines, strokes etc. Based on a survey of existing Kannada handwritten character recognition techniques, it was concluded that the CNN approach improves the efficiency of handwritten character recognition remarkably [2]. However, recognition does not depend only on the model and its specifications; it also depends on the pre-processing and segmentation techniques that are implemented. Feature extraction uses the feature maps produced by the filters. Pooling is used to preserve the important parts of an image through downsizing.
The network density is directly proportional to the filter complexity. As we proceed further into the network, CNN may, for instance, be able to identify an entire Kannada character using inputs from the preceding layers, whereas the layers at the beginning of the network distinguish the strokes and curvature of the letters. The softmax layer categorizes the picture into the relevant groups after extracting all the important elements from it. CNN has proved to be the most accurate of the algorithms for Kannada handwriting detection and its accuracy has been drastically enhanced. A few distinguished CNN models are discussed here.

Joe et al. [3] present a comparative study of a Random Forest Classifier (RFC), CNN and a multinomial Naive Bayes classifier with respect to Kannada handwritten character recognition. The paper focuses on a CNN model that utilizes two convolution layers of 32 and 64 filters. ReLU is the chosen activation function, which outputs 0 for negative inputs and passes positive inputs through. Succeeding the max pooling layer, a dropout layer is used to prevent model overfitting, and a dense layer is added to incorporate the local features. This model's observed accuracy is 57.002 percent.

Rani et al. [4] use transfer learning and a Devanagari handwriting recognition model to recognise Kannada handwritten alphabets. VGG19 Net uses a deep learning network architecture to recognise information; it has five hidden layers, two condensed layers and one output layer. The first block comprises 4 convolution layers and 1 max pooling layer. The Devanagari dataset has 92000 pictures in 46 classifications, and VGG19 Net uses 1,23,654 images. Each class included 40-100 samples; therefore 9401 samples were analysed with 90% accuracy. The accuracy of the VGG19 Net assessment over 10 epochs is 73.51%.

Rao et al. [5] evolved a model utilising an ANN with fully connected layers to recognize Kannada handwritten characters. This research aimed to design a methodology that could properly digitize ancient Kannada manuscripts to preserve previous knowledge. Noise reduction, greyscale conversion, contrast normalisation, binarization and segmentation are the stages included in the image pre-processing process. Pre-processed photos are augmented to prevent overfitting. Max pooling and flattening are used to extract features, and convolutional neural networks are used to classify. A categorical cross-entropy loss function is used to compute error rates. Training accuracy is 95.1% while testing accuracy is 86%. This model may be enhanced by adding convolutional layers or incorporating the kNN algorithm.

Fernandes et al. [6] presented two techniques for recognition of handwritten Kannada scripts, CNN and Tesseract. Pictures of text are curated from web sources. The images are converted to grayscale and erosion is performed to decrease the thickness of the strokes. Python instructions are used by Tesseract to obtain the location of each alphabet. Multiple variations of the same character are utilized so that the machine can learn them. The character's coordinates are then adjusted and given a mark during testing. CNN training uses Python and TensorFlow. Images are divided into characters and sorted using CNN. Huge datasets encompassing most handwriting types might be 99% accurate.

The model suggested by G. Ramesh et al. [7] has 4 convolution layers and 2 max pooling layers, which caused overfitting to decrease significantly. The convolution layers use 3x3 kernels. The max pooling and convolutional layer outputs are transferred to the flattening layer of a dense layer. Max pooling adjusts the high-level picture to the current model, which accelerates computation. Flattening reduces the data flow channels, and the dense layers absorb the features after flattening. A few nodes' outputs from the dense layer are discarded, and the second-layer output with maximum likelihood is taken. 23,500 photos were utilised, 18,800 for training and 4,700 for testing. Multiple contributors contributed to these photographs, diversifying the data collection. Vowels and consonants were separated for distinct trials, and accuracies of 93.2% and 78.73% were obtained for the two data sets.

K. Asha et al. [8] suggest a Kannada handwritten character recognition approach using CNN. The methodology includes training and assessment. The training dataset was Chars74K, which contains 657 groupings of photos. The suggested approach changes the input document's colour image to grayscale and denoises it thereafter; it employs non-local means to cope with Gaussian noise introduced during document scanning. Contrast normalization improves the photo's contrast after denoising, extending its intensity range. After that, the grayscale photos are converted to binary images. After normalizing the data, the input document's lines are segmented and the words are separated using vertical segmentation. Marking each character's boundaries marks the region of interest. Each convolutional layer is followed by a max pooling layer. The Chars74K dataset accuracy is 99% and the handwritten document accuracy is 96%.

Campos et al. recognized characters in images of natural scenes. The authors used an annotated dataset of images comprising Kannada and English characters, and kNN and SVM classification were used to analyse the performance of various features [9].

Numerous supervised and unsupervised machine learning models for classification of Kannada characters are presented in the research paper by Sen et al. [10]. The authors used two datasets in their research: the Chars74K dataset and the custom dataset PSCube. In the Chars74K dataset, images belong to more than 657 classes, with 25 handwritten characters for each class. PSCube is a custom-made dataset of handwritten characters collected from thirty native volunteers. The authors performed line, word and character segmentation before pre-processing of the data. Pre-processing demands techniques such as normalization of contrast, removal of noise, binarization, thinning of the image and sampling. Augmentation of the existing data is performed to generate fresh data for training specific models; the steps followed include aspect ratio calculation, rotation of the image, smoothening of the image, padding, noise removal and resizing of the image. The major models devised in this research include a CNN model using Keras and TensorFlow, an SVM model using OpenCV, and kNN. The CNN model used 2 convolution layers and a maxpool layer, with local response normalization used for lateral inhibition. The dropout probability was 0.5 and the learning rate of the Adam optimizer was 1e-2. The specifications for the SVM are a Histogram of Oriented Gradients (HOG) feature set, regularization parameter C = 12.5 and gamma = 0.001 with an RBF kernel; for kNN, the value of k is the square root of 100 and the distance measure used is Euclidean distance.
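For reference, the classifier settings reported above for Sen et al. [10] (HOG features, an RBF-kernel SVM with C = 12.5 and gamma = 0.001, and kNN with k = sqrt(100) = 10 using Euclidean distance) could look roughly like the sketch below. This is our own illustration in scikit-learn and scikit-image rather than the authors' OpenCV code, and the HOG cell and block sizes are assumptions.

```python
# Illustration of the classifier settings reported for Sen et al. [10]:
# HOG features, RBF-kernel SVM (C = 12.5, gamma = 0.001) and
# kNN (k = sqrt(100) = 10, Euclidean distance).
# HOG cell/block sizes are assumptions; this is not the authors' code.
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def hog_features(images):
    # images: list of pre-processed grayscale character images
    return [hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for img in images]

svm_model = SVC(kernel="rbf", C=12.5, gamma=0.001)
knn_model = KNeighborsClassifier(n_neighbors=10, metric="euclidean")

# Example usage (X_train = character images, y_train = class labels):
# svm_model.fit(hog_features(X_train), y_train)
# knn_model.fit(hog_features(X_train), y_train)
```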
Handwritten digit categorization is a key topic in machine vision because of its various applications in recognition systems. Deep neural networks, particularly CNNs, provide a solution to the tough Kannada-MNIST dataset [11]. This research presents a hierarchical combination approach using
two CNN models. This approach outperformed all other
related approaches. The research applied morphological
operations and created two CNN models. Two models that
were entirely new were created to improve earlier work and
minimize system overhead. The training, validation, and
testing accuracy of this unique technique on the Kannada-
MNIST dataset are 99.86%, 99.66%, and 99.80%,
respectively.
Word Spotting is a research domain that has evolved over the years from printed text to handwritten documents. Handwritten word recognition is difficult due to different writing styles but similar shapes. Word Spotting simplifies document indexing and interpretation. The paper [12] compares and examines different Kannada handwritten character recognition methods; the word in the submitted image is identified and labelled, and it is determined that a CNN with spatial layer modification works best for Kannada handwritten words. KAZE, HoG, LBP and Gabor Wavelets are utilized for feature extraction, and HDF5 is usually recommended [12].
Recognition of postcodes, document digitization, and recognition of ancient characters required human effort before the advent of machine learning; modern approaches allow computers to do such jobs efficiently and precisely. In 2018, Southwest University's School of Computer & Information Science and Guizhou University of Engineering Science's Research Institute of Yi Nationality led AI-based research on ancient Yi language categorization, where a CNN-based model obtained great accuracy. Later, in 2019, the K-MNIST (Kannada-MNIST) dataset was used for character classification research, and CNN models were created for image categorization. This document analyses the CNN model's development and testing and compares it to Logistic Regression and Support Vector Machine classifiers. The model obtained 98.77% accuracy. The paper concluded that CNN models are good at classifying handwritten characters [13].

Transfer learning using CNN excels at large-scale picture categorization. Telugu and Kannada characters are structurally similar, and humans find them difficult to recognise. Character recognition research utilises various feature extraction models, and CNNs extract supervised feature vectors effectively. Large-scale pretrained ImageNet or COCO feature vectors outperformed the script datasets. Comparative studies employed fine-tuned models of transfer learning [14-15].
III. EXPERIMENTAL SETUP

The prime objective of this research is to devise a model which is able to recognize handwritten Kannada characters from a scanned text and store them in a document. We believe that it is significantly easier to manage digitized text than written text. It helps users to access, search, share, and analyse their records effectively, while still permitting them to work with their preferred writing style. Once the output is obtained in PDF format it can be used in various fields like text-to-speech conversion, grammar correction, translation to the English language etc.

A. Work Flow Diagram

Figure 1 represents the work flow diagram of the DenseNet model. The phases of the data flow are divided into two streams: one is used to train the model and the other is used to test on real-time input. Each of these phases is explained in the sections below.

Fig. 1. Work flow diagram of the model

1) Dataset: The dataset used in the proposed research is Chars74K for Kannada characters. Chars74K consists of 657 distinct handwritten Kannada characters with 25 samples each, making a total of 16425 images. These characters have been chosen based on their frequency of occurrence in the Kannada language. The images consist of the 16 vowels and 34 base consonants making up the Kannada varnamale; the dataset also consists of the kagunitha consonants (16 * 34) and the ten numerals present in the Kannada language. The dataset is then divided into a train-test split based on an 80:20 ratio: training includes 20 images of each class and testing includes 5 images of each class. This gives a total of 13140 images for training and 3285 images for testing.
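The 80:20 split described above (20 training and 5 test images for each of the 657 classes) could be produced with a script along the following lines. This is only a sketch under an assumed folder layout (one sub-folder per class); the paper does not describe how the split was implemented.

```python
# Hypothetical 80:20 split of the Chars74K Kannada images:
# 20 images per class for training, 5 for testing (657 classes).
# The dataset root and per-class folder layout are assumptions.
import os, random, shutil

DATASET_DIR = "chars74k_kannada"   # assumed: one sub-folder per character class
TRAIN_DIR, TEST_DIR = "train", "test"

for class_name in sorted(os.listdir(DATASET_DIR)):
    images = sorted(os.listdir(os.path.join(DATASET_DIR, class_name)))
    random.shuffle(images)
    split = {TRAIN_DIR: images[:20], TEST_DIR: images[20:25]}
    for subset, files in split.items():
        target = os.path.join(subset, class_name)
        os.makedirs(target, exist_ok=True)
        for name in files:
            shutil.copy(os.path.join(DATASET_DIR, class_name, name), target)
```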
2) Data Pre-processing and Segmentation: The dataset goes through a process of data pre-processing to prepare it according to the requirements of the machine learning model, which increases the model accuracy. The data undergoes a number of steps before it is fed into the machine learning model, and the OpenCV module of Python is used extensively for this purpose. All the images in the dataset are grayscale images. These grayscale images are converted into binary images by thresholding. Then the edges and lines of the image are detected. After the edges are extracted, the image is smoothened and the unwanted portions are blurred. The unwanted noise in the image is eliminated, and contouring of the eroded images is done. An approximate bounding rectangle around the binary image is formed; this is used mainly to highlight the region of interest after obtaining the contours from the images.
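The pre-processing steps listed above could be expressed with OpenCV roughly as in the sketch below. The threshold type, kernel sizes and exact ordering are assumptions, since the paper does not specify them.

```python
# Rough OpenCV sketch of the pre-processing described above:
# thresholding, edge detection, smoothing, noise removal, erosion,
# contouring and bounding the region of interest.
# All parameter values are assumptions.
import cv2

image = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
edges = cv2.Canny(binary, 50, 150)                  # detect edges and lines
smooth = cv2.GaussianBlur(edges, (3, 3), 0)         # smoothen / blur unwanted detail
denoised = cv2.medianBlur(smooth, 3)                # remove residual noise
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
eroded = cv2.erode(denoised, kernel)
contours, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
roi = binary[y:y + h, x:x + w]                      # highlighted region of interest
```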
The input images to the application will not be individual characters but words. Therefore, if an input image is in the form of a line, it is first segregated into words: space segmentation is performed on the image to extract the individual words, and then each character is extracted for the character recognition purpose. A process similar to the data pre-processing is applied to extract the lines and the words from the image.
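Space segmentation of a binarized line image can be sketched with a simple vertical-projection rule, as below. The paper does not detail its exact method, so the projection approach and the gap threshold are assumptions.

```python
# Sketch of space segmentation: split a binarized line image into words
# wherever a run of blank columns (no ink) is wide enough.
# The gap threshold of 10 columns is an assumption.
import numpy as np

def split_into_words(binary_line, min_gap=10):
    ink_per_column = (binary_line > 0).sum(axis=0)    # vertical projection
    blank = ink_per_column == 0
    words, start, gap = [], None, 0
    for col, is_blank in enumerate(blank):
        if not is_blank:
            if start is None:
                start = col                           # a new word begins here
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:                        # wide gap => word boundary
                words.append(binary_line[:, start:col - gap + 1])
                start, gap = None, 0
    if start is not None:
        words.append(binary_line[:, start:])          # last word on the line
    return words
```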
3) DenseNet 121: DenseNet, or Dense Convolutional Neural Network, is a type of CNN in which each layer is connected to every other layer. It is a feed-forward neural network.
The input to a particular layer is the set of feature maps from all of its predecessor layers. The strength of DenseNet is that the features are concatenated and not summed up as in other convolutional neural networks. Although the training process grows due to the concatenation, it does not harm the gradient flow. The concatenation of DenseNet layers was derived from inception nets, which makes the layers or channels thicker. This allows the next layer to work on the feature maps generated by the preceding layer or the feature maps of the Conv2D operation of any of the previous layers, and also helps prevent the learning of unimportant feature maps. Thus, the input to the ith layer is given by eqn (1):

Y_i = H_i([Y_0, Y_1, Y_2, ..., Y_(i-1)])    (1)

where [Y_0, Y_1, Y_2, ..., Y_(i-1)] is the concatenation of the feature maps produced by all the previous layers and H_i is the function applied by the ith layer.
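As an illustration of eqn (1), a dense block in which every layer operates on the concatenation [Y_0, Y_1, ..., Y_(i-1)] might be written in Keras as in the following sketch; the growth rate and number of layers are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of the dense connectivity in eqn (1): each layer H_i
# operates on the concatenation [Y_0, Y_1, ..., Y_(i-1)].
# Growth rate and number of layers are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=6, growth_rate=32):
    features = [x]                                         # Y_0
    for _ in range(num_layers):
        concat = layers.Concatenate()(features) if len(features) > 1 else features[0]
        y = layers.BatchNormalization()(concat)
        y = layers.ReLU()(y)
        y = layers.Conv2D(4 * growth_rate, 1, padding="same")(y)   # 1x1 bottleneck
        y = layers.BatchNormalization()(y)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)       # 3x3 convolution
        features.append(y)                                 # Y_i joins the concatenation
    return layers.Concatenate()(features)
```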
Figure 2 shows the architecture of DenseNet-121. DenseNet comprises two important blocks in addition to the basic convolutional layer and the pooling layer: dense blocks and transition layers. DenseNet-121 comprises four dense blocks and three transition layers. The dense blocks comprise a specific number of convolutional layers; for example, Dense Block 1 contains 12 layers, that is, 6 [1 x 1] layers and 6 [3 x 3] layers. The 1 x 1 layer precedes the 3 x 3 layer so that the size of the feature maps is reduced before performing the more expensive 3 x 3 convolution operation; hence it is also called a bottleneck layer. Each of the transition layers also contains a 1 x 1 convolution layer and a 2 x 2 average pooling layer. Table I gives the total number of convolution layers present in the base model, which is 121, hence the name DenseNet-121.

For evaluation, 5 images from each of the 657 classes were chosen for testing. A testing accuracy of 93.87% is observed, and the loss curve was found to decrease with the epochs.
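A DenseNet-121 backbone with a 657-way softmax head, as used in this research, could be assembled in Keras roughly as shown below. The input size, the use of ImageNet weights (mentioned in the conclusion) and the optimizer settings are assumptions, since the paper does not list its training configuration.

```python
# Hypothetical DenseNet-121 setup for the 657 Chars74K classes.
# Input size, ImageNet initialization and optimizer are assumptions.
# Grayscale images would need to be stacked to 3 channels to reuse ImageNet weights.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(64, 64, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(657, activation="softmax"),   # one output per character class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```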
The Unicode corresponding to the recognized character number is appended to an array 'content'. If the number is greater than 17, the number is divided by 17: the quotient determines the Vyanjana consonant and the remainder gives the vowel sign that is added to the Vyanjana to make the corresponding Kagunitha consonant. The respective Unicode values for these are added and appended to the array 'content'. If the model is not able to recognize the alphabet, it is deemed an exception and '☐' is appended to the array. For instance, if the character recognized by the model is ಹೇ, the output number generated is 622. Since it is greater than 17, 622 // 17 gives the Vyanjana consonant [\u0CB9] and 622 % 17 gives the vowel sign [\u0CC7] to be added, hence generating ಹ + ೇ = ಹೇ. Table II outlines the pseudocode of the application.
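The mapping just described might be sketched as follows. The lookup tables here are hypothetical placeholders, since the paper does not publish the full class-to-codepoint tables; only the quotient/remainder logic and the '☐' fallback follow the description above.

```python
# Sketch of the class-number to Unicode mapping described above.
# CONSONANTS, VOWEL_SIGNS and VOWELS are hypothetical lookup tables:
# the paper does not publish the full class-to-codepoint mapping.
CONSONANTS = {36: "\u0CB9"}      # e.g. quotient 36 -> ಹ (assumed entry)
VOWEL_SIGNS = {10: "\u0CC7"}     # e.g. remainder 10 -> ೇ (assumed entry)
VOWELS = {}                      # classes up to 17 would map directly (assumed)

def append_character(class_number, content):
    try:
        if class_number > 17:
            quotient, remainder = divmod(class_number, 17)
            glyph = CONSONANTS[quotient] + VOWEL_SIGNS[remainder]
        else:
            glyph = VOWELS[class_number]
        content.append(glyph)
    except KeyError:
        content.append("\u2610")     # '☐' when the character is not recognized

content = []
append_character(622, content)       # 622 // 17 = 36, 622 % 17 = 10 -> ಹ + ೇ = ಹೇ
print("".join(content))
```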
The final steps of the application are to generate the output PDF and to view the segmented characters in order to verify the output. Once the text is recognized and converted to a PDF, it can be further converted to the English language, giving way to translation services. By extending the character set to include ottakshara consonants, we can further recognize all the characters in the Kannada language.
TABLE III. COMPARATIVE ANALYSIS OF THE PROPOSED AND EXISTING METHODS

Article | Metrics | Limitation
K. G. Joe et al. (2019) [3] | Accuracy of the models is as below: Random Forest - 5.234%, MNNBC - 4.199%, CNN - 57.002%. | The CNN model exhibits the highest accuracy compared to the other models. Accuracy can be further enhanced by combining capsule networks with the CNN model [16][17][18][19] with dynamic routing to reduce processing time.
N. S. Rani et al. (2020) [4] | VGG19 NET exhibited an accuracy of 73.51% on validation, after an evaluation of 10 epochs. | Training the model with minimal examples of each class of handwritten Kannada characters resulted in lower accuracy of the model.
Rao et al. (2020) [5] | Yielded a training accuracy of 95.11% and a testing accuracy of 86%. The authors proposed a CNN-kNN hybrid model and an ANN model for an enhanced character set with higher accuracy. | Additional convolution layers may further improve the accuracy of the model for a higher count of epochs.
R. Fernandes et al. (2019) [6] | Usage of an extensive dataset with images of varying handwriting styles is proposed to yield 99% accuracy. | Does not recognize characters that do not meet the constraints: the images are expected to be black and white and need to have a black border. Obtaining a near-perfect dataset manually is a tedious task; a standard dataset could have been used.
G. Ramesh et al. (2019) [7] | The model exhibited an accuracy of 93.2% for the Consolidated Dataset and 78.73% for the Raw Dataset. | The model does not recognize words and sentences; it recognises only individual Kannada characters.
K. Asha et al. (2018) [8] | Accuracies of 96% and 99% were observed for the custom handwritten dataset and the Chars74k dataset respectively. | The model does not classify images of overlapping Kannada characters accurately.
S. Sen et al. (2018) [10] | For the Chars74k dataset, CNN, SVM, Inception V3 and k-NN yielded accuracies of 99.84%, 96.35%, 75.36% and 68.53% respectively. | The dataset is not normalized for the kNN algorithm.
Proposed Method | Accuracy: Train - 98.74%, Test - 93.87%. | Consider cross-validation and increase the size of the dataset to include all characters in the Kannada language.
Table III outlines the comparative analysis of the proposed and the existing methods. The proposed method exhibits an accuracy of 98.74% and 93.87% on training and testing respectively, for a larger character set compared to existing models.

V. CONCLUSION

The current model is unique and is able to recognize around 600 characters in the Kannada language. This includes the swara, vyanjana, kagunitha, and Kannada numerals. With the implementation of space segmentation, the model is also able to recognize words and small sentences. The model can be further extended to include all the characters in the Kannada language; although this requires extensive dataset collection, it will be a milestone in the field of handwritten character recognition. A validation set can be used to optimize the ImageNet weights to more suitably fit the problem. Since the recognition is based on identifying the symbol and matching the characters, this work can also be implemented for other symbolic datasets such as the Dravidian languages and international languages like Chinese and Japanese.

REFERENCES
[1] K. Sheshadri, P. K. T. Ambekar, D. P. Prasad and R. P. Kumar, "An OCR system for printed Kannada using k-means clustering", in Proceedings of the IEEE International Conference on Industrial Technology, 2010, pp. 183-187, doi: 10.1109/ICIT.2010.5472676.
[2] Vijaya Shetty, S., Karan, R., Devadiga, K., Ullal, S., Sharvani, G. S. and Shetty, J., "Kannada Handwritten Character Recognition Techniques: A Review", in Rajakumar, G., Du, K.-L., Vuppalapati, C., Beligiannis, G. N. (eds), Intelligent Communication Technologies and Virtual Mobile Networks, Lecture Notes on Data Engineering and Communications Technologies, vol. 131, Print ISBN 978-981-19-1843-8, doi: 10.1007/978-981-19-1844-5_56.
[3] K. G. Joe, M. Savit and K. Chandrasekaran, "Offline Character Recognition on Segmented Handwritten Kannada Characters", 2019 Global Conference for Advancement in Technology (GCAT), 2019, pp. 1-5, doi: 10.1109/GCAT47503.2019.8978320.
[4] N. S. Rani, A. C. Subramani, A. Kumar P. and B. R. Pushpa, "Deep Learning Network Architecture based Kannada Handwritten Character Recognition", Second International Conference on Inventive Research in Computing Applications (ICIRCA), 2020, pp. 213-220, doi: 10.1109/ICIRCA48905.2020.9183160.
[5] Rao, Abhishek, Arpitha, Anusha, Nayak, Chandana, Meghana, Sneha, Nayak, Sneha and S., Sandhya, "Exploring Deep Learning Techniques for Kannada Handwritten Character Recognition: A Boon for Digitization", International Journal of Advanced Science and Technology, vol. 29, no. 5, 2020, pp. 11078-11093.
[6] R. Fernandes and A. P. Rodrigues, "Kannada Handwritten Script Recognition using Machine Learning Techniques," 2019 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), 2019, pp. 1-6, doi: 10.1109/DISCOVER47552.2019.9008097.
[7] G. Ramesh, G. N. Sharma, J. M. Balaji and H. N. Champa, "Offline Kannada Handwritten Character Recognition Using Convolutional Neural Networks," 2019 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), 2019, pp. 1-5, doi: 10.1109/WIECON-ECE48653.2019.9019914.
[8] K. Asha and H. K. Krishnappa, "Kannada Handwritten Document Recognition using Convolutional Neural Network," 2018 3rd International Conference on Computational Systems and Information Technology for Sustainable Solutions (CSITSS), 2018, pp. 299-301, doi: 10.1109/CSITSS.2018.8768745.
[9] T. E. de Campos, B. R. Babu and M. Varma, "Character recognition in natural images", in Proceedings of the Fourth International Conference on Computer Vision Theory and Applications, Lisboa, Portugal, February 5-8, 2009, Volume 2.
[10] S. Sen, S. V. Prabhu, S. Jerold, J. S. Pradeep and S. Choudhary, "Comparative Study and Implementation of Supervised and Unsupervised Models for Recognizing Handwritten Kannada Characters," 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), 2018, pp. 774-778, doi: 10.1109/RTEICT42901.2018.9012531.
[11] A. Beikmohammadi and N. Zahabi, "A Hierarchical Method for Kannada-MNIST Classification Based on Convolutional Neural Networks," 2021 26th International Computer Conference, Computer Society of Iran (CSICC), 2021, pp. 1-6, doi: 10.1109/CSICC52343.2021.9420604.
[12] T. Sureka, K. S. N. Swetha, I. Arora and H. R. Mamatha, "Word Recognition Techniques for Kannada Handwritten Documents," 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), 2019, pp. 1-7, doi: 10.1109/ICCCNT45670.2019.8944753.
[13] E. X. Gu, "Convolutional Neural Network Based Kannada-MNIST
Classification," 2021 IEEE International Conference on Consumer
Electronics and Computer Engineering (ICCECE), 2021, pp. 180-185,
doi: 10.1109/ICCECE51280.2021.9342474.
[14] S. Sahoo, P. Kumar B. and R. Lakshmi, "Offline handwritten character
classification of the same scriptural family languages by using transfer
learning techniques," 2020 3rd International Conference on Emerging
Technologies in Computer Engineering: Machine Learning and
Internet of Things (ICETCE), 2020, pp. 1-4, doi:
10.1109/ICETCE48199.2020.9091744.
[15] Goutam.R, Dr.Vijaya Shetty S, ”A Comprehensive Survey On
Automatic Kannada Handwritten Text Recognition”, Journal of Data
Acquisition and Processing, 2023, 38 (3),pp 1667-1683,
https://sjcjycl.cn/DOI: 10.5281/zenodo.98549369
[16] S. Vijaya Shetty, K. Anand, P. S, P. K. A and P. M, "Social Distancing
and Face Mask Detection using Deep Learning Models: A Survey,"
2021 Asian Conference on Innovation in Technology (ASIANCON),
PUNE, India, 2021, pp. 1-6, doi:
10.1109/ASIANCON51346.2021.9544890.
[17] Vijaya Shetty Sadanand, Keerthi Anand, Pooja Suresh, Punnya
Kadyada Arun Kumar, Priyanka Mahabaleshwar, “Social distance and
face mask detector system exploiting transfer learning”, International
Journal of Electrical and Computer Engineering (IJECE), Vol 12, No
6, DOI: http://doi.org/10.11591/ijece.v12i6.pp6149-6158
[18] Vijaya Shetty, S., Guruvyas, K.R., Patil, P.P., Acharya, J.J. (2022).
Essay Scoring Systems Using AI and Feature Extraction: A Review.
In: Bindhu, V., Tavares, J.M.R.S., Du, KL. (eds) Proceedings of Third
International Conference on Communication, Computing and
Electronics Systems . Lecture Notes in Electrical Engineering, vol 844.
Springer, Singapore. https://doi.org/10.1007/978-981-16-8862-1_4
[19] Vijaya Shetty Sadanand, Kadagathur Raghavendra Rao Guruvyas,
Pranav Prashantha Patil, Jeevan Janardhan Acharya, Sharvani
Gunakimath Suryakanth, “An automated essay evaluation system using
natural language processing and sentiment analysis”, Vol 12, No 6,
DOI: http://doi.org/10.11591/ijece.v12i6.pp6585-6593.