Noise Reduction in Images Using Autoencoders
https://doi.org/10.22214/ijraset.2022.48306
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue XII Dec 2022- Available at www.ijraset.com
Abstract: Ideally, pure signals exist only on paper. Although techniques exist for denoising a given signal to some degree, it is important that such techniques be compatible with most devices. This article describes a denoising method built around an autoencoder, using image processing techniques and deep-learning algorithms. With autoencoders, noise reduction is not accomplished by a conventional method in which the output signal is essentially the same signal that was supplied as input. The autoencoder is trained by backpropagation, and the emphasis is on generality: the techniques described in this article work for any signal and offer reliability, efficiency, and compatibility with a wide range of devices.
Keywords: CNN, LSR, encoder, ANN, Machine Learning.
I. INTRODUCTION
An autoencoder is an artificial neural network designed to automatically encode an image into a standard format after compressing the input image into a latent space representation (LSR) vector. Once the input image has been converted to an LSR vector, image features can be extracted from it. The primary goal is to make the image crystal clear by eliminating the extra turbulence. The method can be applied to a signal to cut down or eliminate any image noise. The main reason for removing noise is to help the hidden layers of the autoencoder learn more robust features, become more efficient, and reduce the risk of overfitting.
An autoencoder is made up of two blocks: the first is the encoder section, also known as encoding, and the second is the decoder section, also known as decoding. [1] The encoder takes a signal and produces a compressed version of it in the form of an LSR vector. The decoder takes that LSR vector as input and produces an image as output.
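The paper does not list its exact layer configuration, but the two-block structure can be sketched in Keras as follows (a minimal illustration; the 64-dimensional LSR and the layer sizes are assumptions, not values from the paper):

```python
# Minimal two-block autoencoder sketch in Keras. The LSR size (64) and the
# layer widths are illustrative assumptions; only the encoder/decoder
# structure itself follows the description in the text.
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(28, 28))                    # e.g. an MNIST image
x = layers.Flatten()(inp)
lsr = layers.Dense(64, activation="relu")(x)          # encoder output: the LSR vector

x = layers.Dense(28 * 28, activation="sigmoid")(lsr)  # decoder: expand back
out = layers.Reshape((28, 28))(x)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")     # reconstruction loss
```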
According to Fig. 1, the input signal, denoted sig, passes through the encoding section and becomes E(sig); this output now serves as the LSR vector. [2] The vector s is then processed by the decoder, eventually becoming D(s), which is the final output o. Ideally the output should be identical to the input, but throughout this process some properties of the signal are lost, so the reconstruction is not the same as the original.
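In symbols, using the notation of Fig. 1, the pipeline and the reconstruction loss it is trained against (mean squared error is assumed here, since the paper does not name the loss) are:

```latex
s = E(\mathrm{sig}), \qquad o = D(s) = D(E(\mathrm{sig})), \qquad
\mathcal{L} = \lVert \mathrm{sig} - o \rVert^{2}
```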
As mentioned above, such algorithms operate on the signal in its LSR format. This is because context can be derived from the signal once everything has been reduced to a set of numbers.
When the image travels through the autoencoder, it is first compressed in the encoding block, where it is converted to an LSR vector. That vector is then passed along the fully connected layers of the decoding block, which increase its size and reconstruct the output image; some small features are lost due to the compression and decompression. [3] When a noisy image goes through such an autoencoder, the output image is effectively the same input in clear form. This is because autoencoders effectively omit some of the features, for example the added noise. There are losses, but what is lost is precisely the noise present in the input signal. With the help of this idea, this paper develops a system to remove or reduce noise from an image, confirms the plausibility of the idea, and discusses its extension.
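Written as an objective, this denoising variant differs from a plain autoencoder only in its targets: the network receives a noisy input but is penalized against the clean original (the standard denoising-autoencoder formulation, assumed here to match the paper's setup):

```latex
\mathcal{L}_{\text{denoise}} =
\bigl\lVert x_{\text{clean}} - D\bigl(E(x_{\text{noisy}})\bigr) \bigr\rVert^{2}
```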
[5] The core of the suggested model is the decoder portion of the autoencoder: features of the input image (in this case, the noise) are excluded due to reconstruction loss. In essence, however, an encoder is required to transform the image into a format that the decoder can understand.
B. Decoder (Reconstruction)
The LSR vector is first used as the decoder's input. The decoder then builds a three-dimensional volume using fully connected layers. The volume increases as the LSR vector passes through the transposed convolution layers. Leaky ReLU is then applied, and normalization is performed.
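The paper does not give filter counts or the seed volume's shape; a Keras sketch of a decoder following the described order (fully connected layer into a three-dimensional volume, transposed convolutions, Leaky ReLU, then normalization) might look like this:

```python
# Decoder sketch: LSR vector -> 3-D volume -> transposed convolutions with
# Leaky ReLU and batch normalization. The 7x7x64 seed volume and the filter
# counts are assumptions; only the layer ordering follows the text.
from tensorflow.keras import layers, Model

lsr_in = layers.Input(shape=(64,))                   # assumed LSR size
x = layers.Dense(7 * 7 * 64)(lsr_in)                 # fully connected layer
x = layers.Reshape((7, 7, 64))(x)                    # three-dimensional volume

x = layers.Conv2DTranspose(32, 3, strides=2, padding="same")(x)  # 7x7 -> 14x14
x = layers.LeakyReLU()(x)
x = layers.BatchNormalization()(x)

x = layers.Conv2DTranspose(16, 3, strides=2, padding="same")(x)  # 14x14 -> 28x28
x = layers.LeakyReLU()(x)
x = layers.BatchNormalization()(x)

out = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
decoder = Model(lsr_in, out)
```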
V. AUTOENCODER TRAINING
A. Adding Noise to the MNIST Dataset
The pixel intensities of the images are scaled from zero to one. [8] Normally distributed random noise, centered at 0.5 with a standard deviation of 0.5, is added to the NumPy representation.
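A NumPy sketch matching the stated scaling and noise parameters (clipping back to [0, 1] afterwards is an assumption the paper does not mention, but is common practice):

```python
# Scale MNIST to [0, 1] and add Gaussian noise with mean 0.5 and std 0.5.
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

x_train_noisy = np.clip(
    x_train + np.random.normal(loc=0.5, scale=0.5, size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(
    x_test + np.random.normal(loc=0.5, scale=0.5, size=x_test.shape), 0.0, 1.0)
```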
Looking at Figures 5 and 6, we can see that the images are quite badly corrupted. [6] Now that our dataset is prepared, we can feed it to the autoencoder network.
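A sketch of that step, reusing the model and arrays from the earlier sketches (the epoch count and batch size are assumptions):

```python
# Train on (noisy input, clean target) pairs; `autoencoder`, x_train_noisy,
# etc. come from the sketches above. Hyperparameters are illustrative.
history = autoencoder.fit(
    x_train_noisy, x_train,                  # learn to map noisy -> clean
    epochs=20,
    batch_size=128,
    shuffle=True,
    validation_data=(x_test_noisy, x_test),
)
```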
[9] At first, the validation loss increases; however, as the number of epochs increases, the curve begins to descend. The loss did not grow with further epochs, which shows that the model was not overfitted.
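One way to reproduce this check is to plot the loss curves recorded by the fit call sketched above:

```python
# Compare training and validation loss per epoch to check for overfitting.
import matplotlib.pyplot as plt

plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```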
VI. RESULTS
An image of digits that initially contained noise was fed in first. Figure 6 shows the previously recorded autoencoder inputs and outputs. Looking at the input, the noise is so strong that the digits are barely visible to the naked eye. Once the image passes through the autoencoder, the noise is removed and the digits in the image become visible, as in Fig. 8 and Fig. 9.
The noisy image had a PSNR of 29.52 dB and an MSE of 72.25. The model achieved full accuracy when tested manually on images from the MNIST dataset.
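For reference, PSNR and MSE can be computed from an image pair as follows; the reported figures are internally consistent on the 0-255 intensity scale, since 10 * log10(255^2 / 72.25) ≈ 29.5 dB:

```python
# MSE and PSNR between two images on the 0-255 scale.
import numpy as np

def mse(a, b):
    return np.mean((a.astype("float64") - b.astype("float64")) ** 2)

def psnr(a, b, max_val=255.0):
    return 10.0 * np.log10(max_val ** 2 / mse(a, b))
```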
VII. CONCLUSION
This paper has suggested how noise can be reduced to a degree. An autoencoder has been trained that accepts an image as input, compresses it, and converts it into an LSR vector during the encoding stage. [10] In the decoding phase, the LSR vector is passed through fully connected layers, which enlarge the representation and convert the vector back into an image. Some of the image's properties are lost during decoding, and this is exploited to build a system that denoises a chosen signal: the only properties lost in the decoding process are the noise.
REFERENCES
[1] S. Calderara, P. Piccinini and R. Cucchiara, "Vision based smoke detection system using image energy and color information," Machine Vision and Applications.
[2] V. Agrawal, S. Dhekane, N. Tuniya and V. Vyas, "Image Caption Generator Using Attention Mechanism," 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), 2021, pp. 1-6; A. Rosebrock, "Fire and smoke detection with Keras and Deep Learning," PyImageSearch, 18 Nov. 2019.
[3] L. Cestari, C. Worrell and J. Milke, "Advanced fire detection algorithms using data from the home smoke detector project," Fire Safety Journal, vol. 40, 2005, doi: 10.1016/j.firesaf.2004.07.004.
[4] O. Vinyals, A. Toshev, S. Bengio and D. Erhan, "Show and tell: A neural image caption generator," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3156-3164, doi: 10.1109/CVPR.2015.7298935; I. J. Jacob, "Capsule network based biometric recognition system," Journal of Artificial Intelligence, vol. 1, no. 02, 2019, pp. 83-94.
[5] P. Mathur, A. Gill, A. Yadav, A. Mishra and N. K. Bansode, "Camera2Caption: A real-time image caption generator," 2017 International Conference on Computational Intelligence in Data Science (ICCIDS), 2017, pp. 1-6, doi: 10.1109/ICCIDS.2017.8272660; X. Ye, L. Wang, H. Xing and L. Huang, "Denoising hybrid noises in image with stacked autoencoder," 2015 IEEE International Conference on Information and Automation, Lijiang, 2015, pp. 2720-2724.
[6] L. Yasenko, Y. Klyatchenko and O. Tarasenko-Klyatchenko, "Image noise reduction by denoising autoencoder," 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT), Kyiv, Ukraine, 2020, pp. 351-355.
[7] S.-H. Han and H.-J. Choi, "Explainable Image Caption Generator Using Attention and Bayesian Inference," 2018 International Conference on Computational Science and Computational Intelligence (CSCI), 2018, pp. 478-481, doi: 10.1109/CSCI46756.2018.00098.
[8] M. M. A. Baig, M. I. Shah, M. A. Wajahat, N. Zafar and O. Arif, "Image Caption Generator with Novel Object Injection," 2018 Digital Image Computing:
Techniques and Applications (DICTA), 2018, pp. 1-8, doi: 10.1109/DICTA.2018.8615810.
[9] S.-H. Han and H.-J. Choi, "Domain-Specific Image Caption Generator with Semantic Ontology," 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), 2020, pp. 526-530, doi: 10.1109/BigComp48618.2020.00-12.
[10] A. Singh and D. Vij, "CNN-LSTM based Social Media Post Caption Generator," 2022 2nd International Conference on Innovative Practices in Technology and Management (ICIPTM), 2022, pp. 205-209, doi: 10.1109/ICIPTM54933.2022.9754189.