
My Intro

Name: Priyanshu Raj
Class: 9th
Sec: C
Roll No.: 23
Artificial Neural
Networks
Agenda
 History of Artificial Neural Networks
 What is an Artificial Neural Network?
 How does it work?
 Learning
Learning paradigms
 Supervised learning
 Unsupervised learning
 Reinforcement learning
 Application areas
 Advantages and Disadvantages
History of Artificial Neural Networks
 The history of ANNs stems from the 1940s, the decade of the first electronic
computer.
 However, the first important step took place in 1957, when Rosenblatt
introduced the first concrete neural model, the perceptron. Rosenblatt also
took part in constructing the first successful neurocomputer, the Mark I
Perceptron. After this, the development of ANNs proceeded as described
in the figure.
History of Artificial Neural Networks
 The application area of MLP networks remained rather limited until the
breakthrough in 1986, when a general backpropagation algorithm for
multi-layered perceptrons was introduced by Rumelhart and McClelland.
 In 1982, Hopfield brought out his idea of a neural network. Unlike the
neurons in an MLP, the Hopfield network consists of only one layer whose
neurons are fully connected with each other.
History of Artificial Neural
Networks
Since then, research on artificial neural networks has
remained active, leading to many new network types, as
well as hybrid algorithms and hardware for neural
information processing.
Artificial Neural Network
An artificial neural network consists of a pool of simple
processing units which communicate by sending signals to
each other over a large number of weighted connections.
Artificial Neural Network
 The major aspects of a parallel distributed processing model include (see the sketch after this list):
 a set of processing units (cells);
 a state of activation for every unit, which is equivalent to the output of the unit;
 connections between the units; generally, each connection is defined by a weight;
 a propagation rule, which determines the effective input of a unit from its external inputs;
 an activation function, which determines the new level of activation based on the effective input and the current activation;
 an external input for each unit;
 a method for information gathering (the learning rule);
 an environment within which the system must operate, providing input signals and, if necessary, error signals.
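The sketch below is a minimal Python illustration of one such processing unit, assuming a weighted-sum propagation rule and a sigmoid activation function (common choices, not the only possible ones); the class and variable names are invented for this example.

import math

def sigmoid(x):
    # Activation function: maps the effective input to a new activation level.
    return 1.0 / (1.0 + math.exp(-x))

class Unit:
    def __init__(self, weights, bias=0.0):
        # Each incoming connection is defined by a weight.
        self.weights = weights
        self.bias = bias
        self.activation = 0.0  # state of activation = output of the unit

    def propagate(self, inputs):
        # Propagation rule: effective input = weighted sum of the external inputs.
        effective_input = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        # The activation function determines the new activation from the effective input.
        self.activation = sigmoid(effective_input)
        return self.activation

# Example: a unit with two weighted connections.
unit = Unit(weights=[0.5, -0.3], bias=0.1)
print(unit.propagate([1.0, 0.0]))  # prints the unit's output for this input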
Computers vs. Neural Networks
“Standard” Computers        Neural Networks
one CPU                     highly parallel processing
fast processing units       slow processing units
reliable units              unreliable units
static infrastructure       dynamic infrastructure
Why Artificial Neural Networks?
There are two basic reasons why we are interested in building artificial
neural networks (ANNs):

• Technical viewpoint: Some problems, such as character recognition or the
prediction of future states of a system, require massively parallel and
adaptive processing.

• Biological viewpoint: ANNs can be used to replicate and simulate
components of the human (or animal) brain, thereby giving us insight into
natural information processing.
Artificial Neural Networks
• Information is transmitted as a series of electric impulses, so-called spikes.

• The frequency and phase of these spikes encode the information.

• In biological systems, one neuron can be connected to as many as 10,000
other neurons.

• Usually, a neuron receives its information from other neurons in a confined
area, its so-called receptive field.
How do ANNs work?
 An artificial neural network (ANN) is either a hardware
implementation or a computer program which strives to
simulate the information processing capabilities of its biological
exemplar. ANNs are typically composed of a great number of
interconnected artificial neurons. The artificial neurons are
simplified models of their biological counterparts.
 ANN is a technique for solving problems by constructing software
that works like our brains.
The output is a function of the input that is affected by the weights and the
transfer functions.
Artificial Neural Networks
An ANN can:
1. compute any computable function, by the appropriate selection of the
network topology and weight values.
2. learn from experience!
 Specifically, by trial‐and‐error
Learning by trial‐and‐error
A continuous process of:
Trial:
Processing an input to produce an output (in terms of an ANN: compute
the output for a given input).
Evaluate:
Comparing the actual output with the expected output.
Adjust:
Adjusting the weights accordingly, as sketched below.
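As a rough illustration (not taken from the slides), the cycle can be sketched for a single unit learning the logical AND function with the classic perceptron learning rule; the learning rate of 0.1 and the 20 training epochs are arbitrary assumptions.

# Trial-and-error learning for a single unit on the AND function
# (illustrative sketch; learning rate and epoch count are assumptions).
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for epoch in range(20):
    for (x1, x2), expected in zip(inputs, targets):
        # Trial: compute the output for a given input.
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        # Evaluate: compare the actual output with the expected output.
        error = expected - output
        # Adjust: change the weights in proportion to the error.
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)  # weights and bias that realise the AND function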
Example: XOR
[Network diagram: an input layer with two neurons, a hidden layer with three
neurons, and an output layer with one neuron.]
How does it work?
 Set initial values of the weights randomly.
 Input: the truth table of XOR.
 Do
 Read an input (e.g. 0 and 0)
 Compute an output (e.g. 0.60543)
 Compare it to the expected output (diff = 0.60543)
 Modify the weights accordingly
 Loop until a condition is met
 Condition: a certain number of iterations
 Condition: an error threshold
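The following is one possible sketch of this loop for XOR, assuming a 2-3-1 network with sigmoid units (matching the diagram above) and plain gradient-descent backpropagation; the learning rate, random seed, and iteration limit are arbitrary choices, not values from the slides.

import numpy as np

# Truth table of XOR: inputs and expected outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Set initial values of the weights randomly (2 inputs -> 3 hidden -> 1 output).
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

rate = 0.5
for iteration in range(20000):                  # condition: certain number of iterations
    # Read the inputs and compute outputs (forward pass).
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Compare them to the expected outputs.
    error = Y - T
    if np.mean(error ** 2) < 1e-3:              # condition: error threshold
        break
    # Modify the weights accordingly (backpropagate the error).
    dY = error * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= rate * (H.T @ dY); b2 -= rate * dY.sum(axis=0)
    W1 -= rate * (X.T @ dH); b1 -= rate * dH.sum(axis=0)

print(np.round(Y, 3))  # the four outputs should approach 0, 1, 1, 0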