IMAGE COMPRESSION USING NEURAL NETWORKS
Mr. V. A. Daware
Lecturer
Dept. of ECT
COLLEGE OF ENGG., OSMANABAD
Email: [email protected]
INTRODUCTION
Today, in the age of information, the use of digital visual systems is increasing at a tremendous rate for information, education and entertainment purposes, e.g. multimedia, virtual reality and digital studios. It has therefore become essential to reduce the cost of image storage and transmission. As these applications have become increasingly important, the theory and practice of image compression have received increased attention. Since the bandwidth of most communication systems and the capacity of mass memory storage are relatively inextensible, some form of data compression is required to contain the cost of the growing amount of information that people want to store or transmit. The major objective of image compression is to represent an image with as few bits as possible while preserving the level of quality and intelligibility required for the given application.
Image compression techniques fall into two categories, lossless and lossy compression, according to the required level of quality and intelligibility.
Fields of application of image data compression include broadcast television, remote sensing via satellite, aircraft and radar, teleconferencing, computer communication, facsimile transmission, etc.
1.1 Artificial Neurons:
The artificial neuron was designed to mimic the first-order characteristics of the biological neuron. The artificial neuron model is shown in Figure 1.1. Here a set of inputs labeled x1, x2, ..., xn is applied to the artificial neuron. These inputs are collectively referred to as the vector X, corresponding to the signals into the synapses of the biological neuron. Each signal is multiplied by an associated weight w1, w2, ..., wn before it is applied to the summation block. Each weight corresponds to the strength of a single biological synaptic connection.
Figure 1.1 Artificial Neuron
The summation block, corresponding roughly to the biological cell body, adds all of the weighted inputs algebraically, producing an output that we call NET. In vector notation this is
NET = X . W [1.2]
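The summation of equation [1.2] can be sketched in a few lines of Python; the input and weight values here are illustrative, not taken from the paper.

```python
# Minimal sketch of the summation block: NET is the dot product of the
# input vector X with the weight vector W.
def net(x, w):
    """NET = X . W, the algebraic sum of the weighted inputs."""
    return sum(xi * wi for xi, wi in zip(x, w))

print(net([1.0, 0.5, -1.0], [0.2, 0.4, 0.1]))  # approximately 0.3
```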
Activation function:
The NET signal is usually further processed by an activation function to produce the neuron's output signal OUT. This may be a simple linear function, as shown in Figure 1.2,
OUT = K x NET
where K is a constant, or a threshold function,
OUT = 1 if NET > T
    = 0 otherwise
where T is a constant threshold value, or by a function that more accurately simulates the nonlinear transfer characteristic of the biological neuron and permits more general network functions.
The block labeled F accepts the NET output and produces the signal labeled OUT, i.e. OUT = F(NET). If the F processing block compresses the range of NET so that OUT never exceeds some low limit regardless of the value of NET, F is called a squashing function. A common choice is the sigmoid shown in Figure 1.2, expressed mathematically as
OUT = 1 / (1 + e^(-NET))
Figure 1.2 Artificial Neuron with Activation Function
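The activation functions described above can be sketched as follows; K and T are free parameters here, not values taken from the paper.

```python
import math

# Sketch of the three activation functions: linear, hard threshold,
# and the sigmoid squashing function.
def linear_out(net, K=1.0):
    return K * net                       # OUT = K x NET

def threshold_out(net, T=0.0):
    return 1 if net > T else 0           # OUT = 1 if NET > T, else 0

def sigmoid_out(net):
    return 1.0 / (1.0 + math.exp(-net))  # squashes OUT into (0, 1)
```

The sigmoid is the usual choice for backpropagation because, unlike the threshold, it is differentiable everywhere.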
2. The Backpropagation Network:
For many years there was no theoretically sound algorithm for training multilayer artificial neural networks. The invention of backpropagation has played a large part in the resurgence of interest in artificial neural networks. Backpropagation is a systematic method for training multilayer artificial neural networks. Despite its limitations, its strong mathematical foundation has dramatically extended the range of problems to which artificial neural networks can be applied.
The network is strictly feedforward: there are no feedback connections and no connections that bypass one layer to go directly to a later layer. It is possible to have several hidden layers; Figure 2.1 shows a three-layer backpropagation network.
2.1 Momentum:
Rumelhart, Hinton, and Williams (1986) described a method for improving the training time of the backpropagation algorithm while enhancing the stability of the process. The method involves adding to the weight adjustment a term, called momentum, that is proportional to the amount of the previous weight change. Once an adjustment is made it is remembered and serves to modify all subsequent weight adjustments. The adjustment equations are modified to the following:
Delta_w_pq,k(n+1) = eta * delta_q,k * OUT_p,j + alpha * Delta_w_pq,k(n)  [2.1]
w_pq,k(n+1) = w_pq,k(n) + Delta_w_pq,k(n+1)  [2.2]
where alpha is the momentum coefficient and eta the learning rate.
Using the momentum method, the network tends to follow the bottom of narrow gullies in the error surface (if they exist) rather than crossing rapidly from side to side.
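The momentum update of equations [2.1] and [2.2] can be sketched for a single weight as follows. The defaults eta = 0.05 and alpha = 0.6 are the ETA and ALPHA values listed in the results table; the gradient term stands in for the delta-times-output product of [2.1].

```python
# Sketch of the momentum-modified weight update for one weight.
def momentum_step(w, grad_term, prev_delta, eta=0.05, alpha=0.6):
    """Return (updated weight, weight change) per equations [2.1]-[2.2]."""
    delta = eta * grad_term + alpha * prev_delta  # [2.1]
    return w + delta, delta                       # [2.2]

w, d = momentum_step(w=1.0, grad_term=2.0, prev_delta=0.0)
print(w, d)  # 1.1 0.1
```

Because each step's change feeds into the next via prev_delta, consecutive steps in the same direction accelerate, which is what smooths travel along narrow gullies.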
3 Compression & Decompression Module:
3.1 Compression algorithm:
1. Read the input image block for the same network architecture as was used for training.
2. Read the trained weights and apply them to the input layer nodes.
3. Obtain the sum of the weighted input-layer node combination.
4. Apply the squashing function to the sum obtained in step 3 for all hidden layer nodes.
5. Save the output of the hidden layer nodes to a file.
6. Repeat steps 1 through 5 until the whole image data is compressed.
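Steps 1 through 5 can be sketched for one image block. This is a hypothetical illustration: a 16-4-16 network (as in the results table) maps 16 pixels to 4 hidden values, and the weight values below are placeholders, not trained weights. If each hidden value is then quantized to 8 bits, 4 values per 16 pixels gives the 2 bits/pixel quoted in the results.

```python
import math

# Compression pass: push a 16-pixel block through the trained
# input-to-hidden weights and keep only the hidden-layer outputs.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def compress_block(pixels, w_in):
    """pixels: 16 values in [0, 1]; w_in: 4 rows of 16 weights each."""
    return [sigmoid(sum(w * p for w, p in zip(row, pixels)))
            for row in w_in]

block = [0.5] * 16                      # one normalized image block
w_in = [[0.1] * 16 for _ in range(4)]   # placeholder trained weights
print(len(compress_block(block, w_in))) # 4
```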
3.2 Decompression algorithm:
1. Read the compressed data from the file and apply it to the hidden layer.
2. Take the output weight matrix from the trained weights and use it to connect the hidden layer nodes with the output layer nodes.
3. Compute the sum of the weighted hidden-node combination for all output layer nodes.
4. Apply the squashing function to obtain the original pixels.
5. Repeat steps 1 through 4 until all image data is recovered.
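The decompression steps mirror the compression pass: the 4 stored hidden values are expanded back to 16 pixels through the hidden-to-output weight matrix. As before, the weight values are placeholders for illustration only.

```python
import math

# Decompression pass: expand the stored hidden-layer values back to a
# full 16-pixel block via the trained hidden-to-output weights.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decompress_block(hidden, w_out):
    """hidden: 4 stored values; w_out: 16 rows of 4 weights each."""
    return [sigmoid(sum(w * h for w, h in zip(row, hidden)))
            for row in w_out]

hidden = [0.7, 0.6, 0.5, 0.4]             # values read from the file
w_out = [[0.25] * 4 for _ in range(16)]   # placeholder trained weights
print(len(decompress_block(hidden, w_out)))  # 16
```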
Results
Original and reconstructed test images (reconstruction quality):
Image 1: PSNR = 23.030893 dB
Image 2: PSNR = 9.870058 dB
Image 3: PSNR = 10.304684 dB
Network: 16-4-16
Parameters implemented:
ALPHA (momentum coefficient): 0.6
ETA (learning rate): 0.05
No. of iterations: 40,000
Patterns: 200
Compression: 2 bits/pixel
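The PSNR figures quoted above follow the standard definition for 8-bit images, PSNR = 10 log10(MAX^2 / MSE) with MAX = 255; a sketch, with illustrative pixel values:

```python
import math

# Peak signal-to-noise ratio between an original and a reconstructed
# sequence of 8-bit pixel values.
def psnr(original, reconstructed, max_val=255.0):
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr([100, 120, 130], [102, 118, 131]), 2))  # 43.36
```

Higher PSNR means a closer reconstruction, so the 23 dB image above is recovered far more faithfully than the roughly 10 dB ones.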
Conclusion
Backpropagation was studied in detail and implemented on various databases for the segmentation and classification purpose. It is found that the K-means algorithm gives very high accuracy, but it is useful for a single database at a time, whereas a neural network is useful for multiple databases once it is trained for them. The neural network also provides good accuracy. In future, different neural network algorithms can be used to classify satellite images, and the classification results of those images will be compared with the results of existing classification methods.
Figure 2.1 Three-layer Backpropagation Network