UNSUPERVISED PRETRAINED NETWORKS
• Unsupervised Pretrained Networks (UPNs) in neural
networks refer to models that are first trained using
unsupervised learning methods (without labeled data) to
capture the underlying structure of data.
• After this unsupervised pretraining, the network can be fine-tuned using supervised learning techniques (with labeled data) for tasks such as classification or regression.
• Examples of Unsupervised Pretrained Networks:
1. Autoencoders (AEs): neural networks trained to compress input data into a lower-dimensional representation (an encoding) and then reconstruct it. By learning to compress and reconstruct, the model captures the important features of the data.
2. Variational Autoencoders (VAEs): these go a step further, learning a probabilistic distribution over latent variables that can be used to generate new data points.
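To make this concrete, here is a minimal autoencoder sketch in PyTorch (the layer sizes, data, and learning rate are illustrative assumptions, not taken from a specific application):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the input into a lower-dimensional code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the input from the code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # lower-dimensional representation
        return self.decoder(z)   # reconstruction of the input

model = Autoencoder()
criterion = nn.MSELoss()         # reconstruction error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 784)         # placeholder batch of unlabeled data
optimizer.zero_grad()
loss = criterion(model(x), x)    # no labels needed: the input is its own target
loss.backward()
optimizer.step()
```

Note that training needs only the raw data itself, which is what makes the pretraining unsupervised.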
• Key Benefits of Unsupervised Pretrained Networks:
1. Reduced Need for Labeled Data: Pretraining allows the model to capture general patterns in the data before being fine-tuned on labeled data, reducing the need for large labeled datasets.
2. Faster Training: By initializing the model with pretrained weights, the network converges faster during supervised fine-tuning (a fine-tuning sketch follows this list).
3. Improved Generalization: Unsupervised pretraining allows
the model to learn useful features from the data, improving its
ability to generalize to unseen examples.
4. Transfer Learning: Pretrained models can often be
transferred to new tasks with different datasets, reducing the
need for retraining from scratch.
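As a hedged illustration of benefits 1 and 2, the sketch below reuses the (hypothetical) pretrained encoder from the autoencoder above, attaches a new classification head, and fine-tunes on a small labeled batch:

```python
import torch
import torch.nn as nn

# Assumes `model` is the pretrained Autoencoder from the earlier sketch.
classifier = nn.Sequential(
    model.encoder,               # initialized with pretrained weights, not at random
    nn.Linear(32, 2),            # new task-specific head (here: 2 classes)
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)

x = torch.randn(16, 784)         # small labeled batch (placeholder data)
y = torch.randint(0, 2, (16,))   # placeholder labels
optimizer.zero_grad()
loss = criterion(classifier(x), y)   # supervised fine-tuning step
loss.backward()
optimizer.step()
```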
• Scenario: Unsupervised Pretraining for Anomaly Detection in
Network Security
• Problem: A cybersecurity team wants to build a model that detects
unusual network traffic patterns that may indicate a security breach.
However, it is challenging to label network traffic as "normal" or
"anomalous" due to the volume and complexity of the data.
• Solution: An autoencoder is pretrained on unlabeled network traffic
data to capture patterns and dependencies in normal traffic.
• The autoencoder learns to reconstruct normal traffic data accurately.
• For anomaly detection, the pretrained model is used in its
unsupervised form:
• If the model fails to accurately reconstruct a data point (i.e., the reconstruction error is high), that data point is flagged as an anomaly (a minimal sketch of this check follows the outcome below).
• Outcome:
• Unsupervised pretraining allows the model to learn a
baseline of normal network behavior, making it effective at
detecting anomalies without requiring labeled data.
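A minimal sketch of the reconstruction-error check described above, assuming the autoencoder from the earlier sketch has been pretrained on normal traffic represented as fixed-size feature vectors (the threshold is an illustrative placeholder that would be tuned in practice):

```python
import torch

def flag_anomalies(model, batch, threshold=0.05):
    """Flag inputs the pretrained autoencoder reconstructs poorly."""
    model.eval()
    with torch.no_grad():
        reconstruction = model(batch)
        # Per-sample mean squared reconstruction error
        errors = ((batch - reconstruction) ** 2).mean(dim=1)
    # High error: the model never learned such a pattern, so flag it
    return errors > threshold

traffic = torch.randn(128, 784)       # placeholder traffic feature vectors
anomalous = flag_anomalies(model, traffic)
```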
Finance: Fraud Detection in Credit Card Transactions
• Scenario: A financial institution has millions of unlabeled credit card
transaction records but only a few thousand labeled fraudulent and
legitimate transactions. The goal is to build a fraud detection system.
• Pretraining:
• A self-supervised model (e.g., using contrastive learning) is pretrained
on the transaction data.
• The model is tasked with learning patterns in user spending behavior by contrasting similar transactions (same customer, same spending habits) against dissimilar transactions (different customers, varying amounts, and merchants); a contrastive-loss sketch follows this scenario.
• Fine-Tuning:
• Once the model learns general features of transaction behavior, it is fine-tuned on the small labeled dataset to classify transactions as either fraudulent or legitimate.
• Result: The model is more adept at spotting anomalies or outliers
(fraudulent transactions) because it has learned the normal spending
patterns during pretraining.
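One way such contrastive pretraining might look, sketched with a classic pairwise contrastive loss (the encoder, feature count, margin, and pairing scheme are illustrative assumptions, not the institution's actual setup):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, similar, margin=1.0):
    """Pull similar pairs together; push dissimilar pairs past a margin."""
    dist = F.pairwise_distance(z1, z2)
    pos = similar * dist.pow(2)                         # similar: minimize distance
    neg = (1 - similar) * F.relu(margin - dist).pow(2)  # dissimilar: enforce margin
    return (pos + neg).mean()

encoder = torch.nn.Linear(20, 8)       # placeholder transaction encoder
a = torch.randn(32, 20)                # one transaction per pair (20 features)
b = torch.randn(32, 20)                # its paired transaction
similar = torch.randint(0, 2, (32,)).float()  # 1 = same customer/habits, 0 = not
loss = contrastive_loss(encoder(a), encoder(b), similar)
loss.backward()
```

After this pretraining, the encoder's weights would initialize the supervised fraud classifier, as in the fine-tuning sketch earlier.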
• Conclusion:
• Unsupervised pretrained networks offer a powerful approach to
training deep neural networks, especially when labeled data is scarce.
By learning general features through unsupervised pretraining, these
models can achieve better performance when fine-tuned on specific
tasks.
• Whether used in medical image analysis, speech recognition,
sentiment analysis, or anomaly detection, unsupervised pretraining
can significantly enhance the capabilities of neural networks.
GENERATIVE ADVERSARIAL NETWORK
• A Generative Adversarial Network (GAN) is a class of
machine learning frameworks where two neural networks
contest with each other in a game-like setting. The two
networks are:
1. Generator (G): Generates new data instances that resemble the training data.
2. Discriminator (D): Evaluates whether a given data instance is real (from the training set) or fake (generated by the generator).
• The goal of the generator is to create data that is so similar
to the real data that the discriminator cannot tell whether it
is real or fake.
• The discriminator, on the other hand, tries to distinguish
between real and fake data.
• Key Concepts
• The generator learns to produce better fakes by trying to fool
the discriminator.
• The discriminator learns to become better at identifying real
vs. fake data.
• The training process is like a game where both networks
improve their strategies until the generator produces data
indistinguishable from real data.
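A minimal sketch of the two networks in PyTorch, using small fully connected models for readability (the sizes are illustrative; real GANs for images typically use convolutional layers):

```python
import torch.nn as nn

latent_dim, data_dim = 64, 784   # illustrative sizes

# Generator: maps random noise to data-shaped samples
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: maps a sample to the probability that it is real
D = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)
```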
Example of GAN in Action
• One popular example of GANs in action is generating realistic images
of people who do not exist. Let's say we have a dataset of images of
human faces.
1. Step 1: Training the Discriminator. The discriminator is trained with real images from the dataset labeled as "real" and images generated by the generator labeled as "fake." Its task is to correctly classify real and fake images.
2. Step 2: Training the Generator. The generator produces images that initially look like noise. As it keeps learning, its goal is to generate images that look more like real faces so as to fool the discriminator.
3. Step 3: Adversarial Training. The generator and discriminator are trained together.
• The generator aims to produce better images that the discriminator cannot distinguish from real ones, while the discriminator improves at telling the difference.
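Continuing the G/D sketch above, one adversarial training step might look as follows (the batch data and learning rates are placeholders):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(64, data_dim)                 # placeholder batch of real samples
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

# Step 1: train the discriminator on real ("1") and generated ("0") samples
fake = G(torch.randn(64, latent_dim)).detach()   # detach: don't update G here
loss_D = bce(D(real), ones) + bce(D(fake), zeros)
opt_D.zero_grad()
loss_D.backward()
opt_D.step()

# Step 2: train the generator so that D labels its samples as real
fake = G(torch.randn(64, latent_dim))
loss_G = bce(D(fake), ones)                      # fooling D means D says "real"
opt_G.zero_grad()
loss_G.backward()
opt_G.step()
```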
Practical Use Case: Fake Human Faces
• A famous website, thispersondoesnotexist.com, uses GANs to
generate lifelike images of non-existent people.
• Each time the page is refreshed, the GAN generates a new face that
looks completely realistic, even though the person never existed.
• GANs have been applied in a wide range of fields, including image
synthesis, video generation, and style transfer (e.g., converting
paintings to photo-realistic images).
• Scenario: Furniture Design Generation
• Context:
• A company that sells furniture wants to expand its design options and
create new, unique designs. However, hiring designers to come up
with thousands of ideas is expensive.
• They decide to train a GAN using their existing furniture designs (like chairs, tables, and sofas) to automatically generate new and realistic-looking furniture concepts.
• Step-by-Step Example of Using a GAN:
1. Data Collection: The company collects thousands of images of different types of furniture from its catalog (various chairs, tables, sofas, and lamps) and uses them to train the GAN.
2. GAN Architecture: The GAN consists of two neural networks, the Generator and the Discriminator.
• Training Process:
• A. The Discriminator's Role:
• The discriminator is trained on real furniture images and learns to
distinguish real furniture designs from fake ones.
• It receives two sets of inputs:
• Real furniture images (from the dataset).
• Fake furniture images (from the generator).
• The discriminator outputs whether it believes the input image is real
or fake.
• B. The Generator's Role:
• The generator starts from random noise input; its early outputs resemble chaotic blobs, not realistic furniture.
• Its goal is to transform this noise into images that increasingly
resemble actual furniture from the catalog.
• Over time, it learns to produce more realistic furniture designs to try
and "fool" the discriminator.
• Adversarial Training:
• Adversarial training begins where both networks improve by
competing with each other:
• The generator tries to produce furniture images that look so real that the
discriminator cannot tell whether they are fake.
• The discriminator improves its ability to distinguish between real and
generated furniture images.
• This process continues until the generator can consistently produce
realistic furniture designs, and the discriminator struggles to
differentiate between real and fake images.
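Once training converges, new designs come from sampling fresh noise and passing it through the trained generator; a short sketch, reusing the G and latent_dim assumed earlier:

```python
import torch

G.eval()
with torch.no_grad():
    z = torch.randn(16, latent_dim)   # 16 random latent vectors
    new_designs = G(z)                # 16 generated, data-shaped samples
# Each forward pass with fresh noise yields a different candidate design.
```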
Example of Adversarial Interaction:
• At the Start:
• The generator might produce a blurry image of a chair, with oddly shaped legs and
no clear features.
• The discriminator can easily detect this as fake.
• Midway Through Training:
• The generator produces images that vaguely resemble tables and chairs but
with distorted proportions (e.g., a table with one leg much longer than the
others).
• The discriminator improves its detection by picking out such inconsistencies.
• After Extensive Training:
• The generator creates highly realistic images of entirely new chairs,
sofas, or tables with distinct design elements (e.g., modernist,
minimalist styles).
• The discriminator has a hard time distinguishing these images from
real ones, and some generated designs may even look good enough
to be manufactured.
• This scenario illustrates how a GAN can be used to generate high-
quality, realistic images of furniture that mimic the original dataset
but introduce new designs.
• The adversarial process between the generator and discriminator
allows for continuous improvement, eventually resulting in lifelike
images that can have real-world applications.
Scenario: Fashion Design Using GANs
Context:
• A well-known fashion house wants to introduce a fresh line of
designer clothing, pushing boundaries in creativity.
• The designers, however, are looking for inspiration that could
combine elements of their past collections with futuristic, bold
designs. To help, they decide to train a GAN on their past designs,
runway photos, and street fashion trends.
• The goal is to generate new clothing concepts that are unique yet
align with the brand's style.
