Study Materials - Sparse Autoencoder
www.codingninjas.com /studio/library/sparse-autoencoder
Sparse Autoencoder
Table of contents
2. Sparse Autoencoder
3. L1-Regularization Sparse
5. Key Takeaways
01/02/2024, 22:26 Sparse Autoencoder
1. Data specific: Autoencoders can only be used on data similar to what they were trained on. For instance, to encode an image of an MNIST digit, we have to use an autoencoder that was previously trained on the MNIST digits dataset.
2. Lossy: Information is lost while encoding and decoding images with an autoencoder, which means the reconstructed image will be missing some details compared to the original image.
Sparse Autoencoders are one of the valuable types of Autoencoders. The idea behind Sparse
Autoencoders is that we can achieve an information bottleneck (same information with fewer neurons)
without reducing the number of neurons in the hidden layers. The number of neurons in the hidden layer
can be greater than the number in the input layer.
We achieve this by imposing a sparsity constraint during learning. According to the sparsity constraint, only some percentage of the nodes in a hidden layer may be active at a time. Neurons with output close to 1 are active, whereas neurons with output close to 0 are inactive.
More specifically, we add a penalty to the loss function so that only a few neurons in a layer are active for any given input. This forces the autoencoder to represent the input information with a small number of active neurons, rather than by shrinking the layer itself. As a result, we can even increase the code size, because only a few neurons fire for each input.
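The penalized loss described above can be sketched in plain NumPy. This is a minimal, hypothetical example, not the blog's own code: a single hidden layer that is larger than the input (16 hidden units for 8 inputs, as the text allows), with the loss being reconstruction error plus an L1 penalty on the hidden activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 inputs, 16 hidden units -- the hidden layer is
# *larger* than the input; sparsity, not layer width, forms the bottleneck.
n_in, n_hidden = 8, 16
W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_loss(x, lam=1e-3):
    """Reconstruction error plus an L1 sparsity penalty on the activations."""
    h = sigmoid(x @ W_enc)               # hidden activations in (0, 1)
    x_hat = h @ W_dec                    # reconstruction of the input
    recon = np.mean((x - x_hat) ** 2)    # reconstruction term
    sparsity = lam * np.sum(np.abs(h))   # penalizes active (near-1) neurons
    return recon + sparsity

x = rng.normal(size=n_in)
loss = sparse_ae_loss(x)
```

Minimizing this loss with gradient descent would push most activations toward 0, so only a few neurons stay active per input.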
chrome-extension://nhiebejbpolmpkikgbijamagibifhjib/data/reader/index.html?id=1505974594&url=https%3A%2F%2F2.zoppoz.workers.dev%3A443%2Fhttps%2Fwww.codingninjas.com%2Fstudio%2Fl… 2/8
01/02/2024, 22:26 Sparse Autoencoder
(Figure: sparse autoencoder architecture. Source: www.medium.com)
In sparse autoencoders, we use L1 regularization or KL-divergence as the sparsity penalty, which drives the network to learn useful features from the input.
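The KL-divergence variant mentioned here penalizes the gap between a target activation rate (often written ρ) and each neuron's observed mean activation. A small sketch of that penalty, with hypothetical parameter names:

```python
import numpy as np

def kl_sparsity_penalty(rho_hat, rho=0.05):
    """KL divergence between a target activation rate `rho` and the observed
    mean activations `rho_hat` (one value per hidden neuron)."""
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)  # numerical safety near 0 and 1
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
```

The penalty is zero when every neuron's mean activation equals the target rate, and grows as neurons become more active than the target, which is what enforces sparsity.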
L1-Regularization Sparse
L1-Regularization is one of the most widely used regularization methods in machine learning. In L1-regularization, we add the sum of the absolute values (magnitudes) of the coefficients to the loss as the penalty term.
(Graph: the L1 penalty, L1 = ||w||)
For L1-regularization, the derivative of the penalty is either +1 or -1 (it is undefined at w = 0), which means that regardless of the magnitude of w, L1-regularization always pushes w toward zero with the same step size.
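The constant-step behavior described above can be checked numerically. A minimal sketch (the function name `l1_step` is hypothetical): the gradient of |w| is sign(w), so every weight moves toward zero by exactly the learning rate, independent of its magnitude.

```python
import numpy as np

def l1_step(w, lr=0.1):
    """One gradient-descent step on the L1 penalty alone: grad |w| = sign(w)."""
    return w - lr * np.sign(w)

w = np.array([2.0, -0.5, 0.3])
w = l1_step(w)
# Each entry moved 0.1 toward zero: [1.9, -0.4, 0.2]
```

This is why L1 drives small weights all the way to zero, producing sparse solutions.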
Ans. An autoencoder has three main components:
1. Encoder
2. Code
3. Decoder
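The three components above can be sketched as plain functions, with hypothetical sizes (8-dimensional input, 3-dimensional code):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(8, 3))   # encoder weights
W2 = rng.normal(scale=0.1, size=(3, 8))   # decoder weights

def encoder(x):
    """1. Encoder: compresses the input down to the code."""
    return np.tanh(x @ W1)

def decoder(code):
    """3. Decoder: reconstructs the input from the code."""
    return code @ W2

x = rng.normal(size=8)
code = encoder(x)        # 2. Code: the compressed representation
x_hat = decoder(code)    # reconstruction with the same shape as the input
```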
Ans. Autoencoders are:
1. Data specific,
2. Lossy (the reconstructed image loses details compared to the original),
3. Learned automatically from data examples.
Ans. Autoencoders belong to the unsupervised machine learning category; they do not need explicit
labels for training because input and output are the same.
1. Sparse Autoencoder
2. Deep Autoencoder
3. Convolutional Autoencoder
4. Contractive Autoencoder
5. Variational Autoencoder
6. Denoising Autoencoder
7. Undercomplete Autoencoder
Ans. The idea of the denoising autoencoder is that we add random noise to the input images and then ask the autoencoder to recover the original image from the noisy one. The autoencoder has to remove the noise and output only the meaningful features.
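Building the (noisy input, clean target) training pairs described in this answer can be sketched as follows; the helper name `make_noisy_batch` and the noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_batch(x, noise_std=0.3):
    """Training pair for a denoising autoencoder: the *noisy* image is the
    input, and the clean image is the target the network must recover."""
    x_noisy = x + rng.normal(scale=noise_std, size=x.shape)
    return np.clip(x_noisy, 0.0, 1.0), x   # (input, target)

clean = rng.random((4, 784))   # e.g. a batch of flattened 28x28 images
noisy, target = make_noisy_batch(clean)
```

Training then minimizes the reconstruction error between the network's output on `noisy` and the clean `target`, so the network learns to strip the noise away.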
Key Takeaways
Congratulations on finishing the blog!! Below, I have some blog suggestions for you. Go ahead and take
a look at these informative articles.
In today’s scenario, more and more industries are adopting AutoML in their products; with this rise, it has become clear that AutoML could be the next boon in technology. Check this article to learn more about AutoML applications.
Check out this link if you are a Machine Learning enthusiast or want to brush up your knowledge with ML
blogs.
If you are preparing for the upcoming Campus Placements, don't worry. Coding Ninjas has your back.
Visit this link for a carefully crafted and designed course on-campus placements and interview
preparation.