
Soft Computing Assignment

Q5) Explain supervised and unsupervised learning.


Ans 5) Supervised learning:
Supervised learning, as the name indicates, involves the presence of a supervisor acting as a teacher. Basically, supervised learning is learning in which we teach or train the machine using data that is well labelled, which means each example is already tagged with the correct answer. After training, the machine is given a new set of examples (data), and the supervised learning algorithm, having analysed the training data (the set of training examples), produces the correct outcome from what it learned on the labelled data.
In other words, supervised learning is the task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labelled training data consisting of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyses the training data and produces an inferred function, which can be used for mapping new examples.
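As an illustration (not part of the original answer), the following is a minimal sketch of supervised learning, assuming scikit-learn is available; the data and labels are invented purely for illustration.

```python
# Minimal supervised-learning sketch: labelled (input, output) pairs are used
# to infer a function, which then maps new inputs to outputs.
from sklearn.tree import DecisionTreeClassifier

# Labelled training examples: each input vector is tagged with the correct answer.
X_train = [[1.0, 0.0], [2.0, 1.0], [1.5, 0.0], [3.0, 1.0]]  # input objects (vectors)
y_train = ["A", "B", "A", "B"]                              # supervisory signal (labels)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)            # infer a function from the labelled data

# The inferred function maps a new, unseen example to an output.
print(model.predict([[2.5, 1.0]]))
```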

Unsupervised learning:
Unsupervised learning is the training of a machine using information that is neither classified nor labelled, allowing the algorithm to act on that information without guidance. Here the task of the machine is to group unsorted information according to similarities, patterns and differences without any prior training on the data. Unlike supervised learning, no teacher is provided, which means no correct answers are given to the machine. The machine must therefore find the hidden structure in the unlabelled data by itself.
Unsupervised learning looks for previously undetected patterns in a data set with no pre-existing labels and with a minimum of human supervision. In contrast to supervised learning, which usually makes use of human-labelled data, unsupervised learning (also known as self-organization) allows for the modelling of probability densities over inputs.
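By contrast, here is a minimal unsupervised-learning sketch (again assuming scikit-learn; the data points and cluster count are illustrative): the algorithm receives no labels and groups the points purely by similarity.

```python
# Minimal unsupervised-learning sketch: no labels are given, so the algorithm
# discovers structure (clusters) in the data on its own.
from sklearn.cluster import KMeans

# Unlabelled observations (no supervisory signal).
X = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1], [8.0, 9.0], [8.2, 9.1], [7.9, 8.8]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(X)                    # find hidden structure without a teacher

print(kmeans.labels_)            # cluster index assigned to each point
print(kmeans.cluster_centers_)   # centres of the discovered groups
```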

Q6) Explain linear separability in artificial neural network.


Ans 6) The idea of linear separability is to check whether points in an n-dimensional space can be separated by an (n-1)-dimensional hyperplane.
Linear separability is a property of two sets of points. It is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as coloured blue and the other set as coloured red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all of the red points on the other side. This idea generalizes immediately to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane.
The problem of determining whether a pair of sets is linearly separable, and finding a separating hyperplane if they are, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms based on this concept exist.

Examples:

Three non-collinear points in two classes ('+' and '-') are always linearly separable in two dimensions: whichever point is alone in its class can be cut off from the other two by a single straight line, and the case where all three points belong to the same class is trivially separable.

However, not all sets of four points, with no three collinear, are linearly separable in two dimensions. The classic counterexample is the XOR-style arrangement in which the two classes occupy opposite corners of a square; separating them would require two straight lines, so the sets are not linearly separable. A small sketch contrasting the two cases follows.
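The contrast can be demonstrated with a single-layer perceptron, which can only realise linear decision boundaries. The sketch below (assuming scikit-learn) trains it on the linearly separable AND labelling and on the non-separable XOR labelling of the same four points; only the separable case can reach perfect training accuracy.

```python
# Linear separability check with a single-layer perceptron.
from sklearn.linear_model import Perceptron

X = [[0, 0], [0, 1], [1, 0], [1, 1]]     # four points, no three collinear
y_and = [0, 0, 0, 1]                     # AND labelling: linearly separable
y_xor = [0, 1, 1, 0]                     # XOR labelling: not linearly separable

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    clf = Perceptron(max_iter=1000, tol=None, random_state=0)
    clf.fit(X, y)
    # A score of 1.0 is only attainable when the two classes are linearly separable.
    print(name, "training accuracy:", clf.score(X, y))
```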

Q7) Write a note on application domains of neural network.


Ans 7) Following are some of the areas where ANNs are being used. Their range suggests that ANN development and application is interdisciplinary in nature.

Speech Recognition
Speech occupies a prominent role in human-to-human interaction, so it is natural for people to expect speech interfaces to computers.
Great progress has been made in this field; however, such systems still face the problems of limited vocabulary or grammar, along with the need to retrain the system for different speakers and different conditions. ANNs play a major role in this area. The following ANNs have been used for speech recognition:
• Multilayer networks
• Multilayer networks with recurrent connections
• Kohonen self-organizing feature map

Character Recognition
Character recognition is an interesting problem that falls under the general area of pattern recognition. Many neural networks have been developed for the automatic recognition of handwritten characters, either letters or digits. The following ANNs have been used for character recognition:
• Multilayer neural networks, such as back-propagation neural networks
• Neocognitron
Though back-propagation neural networks may have several hidden layers, the pattern of connection from one layer to the next is localized.
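As a small illustration (not part of the original answer), a multilayer back-propagation network can be trained on scikit-learn's built-in handwritten digits dataset; the network size and train/test split below are arbitrary choices.

```python
# Character-recognition sketch: a multilayer network trained with
# back-propagation on 8x8 handwritten digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                               # bundled handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)                            # back-propagation training
print("test accuracy:", net.score(X_test, y_test))
```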

Signature Verification Application

Signatures are one of the most useful ways to authorize and authenticate a person in legal transactions. Signature verification is a non-vision-based technique.
For this application, the first step is to extract the features, or rather the geometrical feature set, representing the signature. With these feature sets, we then train a neural network using an efficient training algorithm. In the verification stage, the trained network classifies the signature as genuine or forged.
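A hedged sketch of that flow is shown below: a hypothetical geometrical feature vector per signature is fed to a small neural network that labels it genuine (1) or forged (0). The feature values are invented; a real system would extract them from the signatures themselves.

```python
# Signature-verification sketch: train on feature sets, then verify a new one.
from sklearn.neural_network import MLPClassifier

# Each row is a (hypothetical) geometrical feature set extracted from one signature.
features = [[0.82, 1.10, 0.35], [0.80, 1.05, 0.40],   # genuine samples
            [0.55, 1.60, 0.10], [0.60, 1.55, 0.15]]   # forged samples
labels = [1, 1, 0, 0]                                 # 1 = genuine, 0 = forged

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(features, labels)                             # training stage

# Verification stage: classify the feature set of a newly presented signature.
print("genuine" if net.predict([[0.81, 1.08, 0.37]])[0] == 1 else "forged")
```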

Human Face Recognition

Face recognition is one of the biometric methods used to identify a given face. It is a challenging task because of the difficulty of characterizing "non-face" images. However, if a neural network is well trained, it can divide images into two classes: images that contain faces and images that do not.
First, all the input images must be pre-processed. Then, the dimensionality of each image must be reduced. Finally, the image must be classified using a neural network training algorithm. The following neural network is used for training with the pre-processed images:
• Fully connected multilayer feed-forward neural network trained with the back-propagation algorithm.
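A hedged sketch of this three-step flow, assuming scikit-learn, is shown below: the image vectors are random placeholders standing in for pre-processed face and non-face images, PCA performs the dimensionality reduction, and a fully connected feed-forward network trained by back-propagation does the classification.

```python
# Face-recognition sketch: pre-processed image vectors -> PCA -> MLP classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 64))         # 40 placeholder "pre-processed images", 64 features each
y = np.array([1] * 20 + [0] * 20)     # 1 = face, 0 = non-face (placeholder labels)

model = make_pipeline(
    PCA(n_components=10),                                   # dimensionality reduction
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)                        # back-propagation happens inside MLPClassifier
print(model.predict(X[:3]))            # classify a few (already seen) image vectors
```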
