
International Journal of Information Sciences and Techniques (IJIST) Vol.3, No.1, January 2013
DOI : 10.5121/ijist.2013.3104

ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
Dr. Anna Saro Vijendran¹ and G. Paramasivam²

¹ Director, SNR Institute of Computer Applications, SNR Sons College, Coimbatore, Tamilnadu, INDIA.
[email protected]

² Asst. Professor, Department of Computer Applications, SNR Sons College, Coimbatore, Tamilnadu, INDIA.
[email protected]

ABSTRACT
A different image fusion algorithm based on the self-organizing feature map is proposed in this paper, aiming to produce quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image is thus more suitable for human and machine perception and for further image processing tasks. The existing fusion techniques, based on either direct operation on pixels or on segments, fail to produce fused images of the required quality and are mostly application specific. The existing segmentation algorithms become complicated and time-consuming when multiple images are to be fused. A new method of segmenting and fusing gray scale images adopting Self-organizing Feature Maps (SOM) is proposed in this paper. The Self-organizing Feature Map is adopted to produce multiple slices of the source and reference images based on various combinations of gray scale, which can be dynamically fused depending on the application. The proposed technique is adopted and analyzed for the fusion of multiple images. The technique is robust in the sense that there is no loss of information, due to the properties of Self-organizing Feature Maps; noise removal in the source images is done during the processing stage, and the fusion of multiple images is carried out dynamically to get the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective quality.

KEYWORDS
Image Fusion, Image Segmentation, Self-Organizing Feature Maps, Code Book Generation, Multifocus Images, Gray Scale Images

1. INTRODUCTION
Nowadays, image fusion has become an important subarea of image processing. For one object or scene, multiple images can be taken from one or multiple sensors. These images usually contain complementary information. Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing. The objective in image fusion is to reduce uncertainty and minimize redundancy in the output while maximizing relevant information particular to an application or task. Image fusion has become a common term within medical diagnostics and treatment. Given the same set of input images, different fused images may be created depending on the specific application and what is considered relevant information. There are several benefits to using image fusion, such as wider spatial and temporal coverage, decreased uncertainty, improved reliability and increased robustness of system performance. Often a single sensor cannot produce a complete representation of a scene. Successful image fusion significantly reduces the amount of data to be viewed or processed without significantly reducing the amount of relevant information.

Image fusion algorithms can be categorized into pixel, feature and symbolic levels. Pixel-level algorithms work either in the spatial domain [1, 2] or in the transform domain [3, 4, 5]. Although pixel-level fusion is a local operation, transform domain algorithms create the fused image globally: by changing a single coefficient in the transformed fused image, all image values in the spatial domain will change. As a result, in the process of enhancing properties in some image areas, undesirable artifacts may be created in other image areas. Algorithms that work in the spatial domain have the ability to focus on desired image areas, limiting change in other areas. Multiresolution analysis is a popular method in pixel-level fusion. Burt [6] and Kolczynski [7] used filters with increasing spatial extent to generate a sequence of images from each image, separating information observed at different resolutions. Then, at each position in the transform image, the value in the pyramid showing the highest saliency was taken, and an inverse transform of the composite image was used to create the fused image. In a similar manner, various wavelet transforms can be used to fuse images. The discrete wavelet transform (DWT) has been used in many applications to fuse images [4]. The dual-tree complex wavelet transform (DT-CWT), first proposed by Kingsbury [8], was improved by Nikolov [9] and Lewis [10] to outperform most other gray-scale image fusion methods. Feature-based algorithms typically segment the images into regions and fuse the regions using their various properties [10–12]. Feature-based algorithms are usually less sensitive to signal-level noise [13]. Toet [3] first decomposed each input image into a set of perceptually relevant patterns; the patterns were then combined to create a composite image containing all relevant patterns. A mid-level fusion algorithm was developed by Piella [12, 15], where the images are first segmented and the obtained regions are then used to guide the multiresolution analysis. Recently, methods have been proposed to fuse multifocus source images using divided blocks or segmented regions instead of single pixels [16, 17, 18]. All the segmented region-based methods are strongly dependent on the segmentation algorithm. Unfortunately, the segmentation algorithms, which are of vital importance to fusion quality, are complicated and time-consuming. The common transform approaches for the fusion of multifocus images include the discrete wavelet transform (DWT) [19], the curvelet transform [20] and the nonsubsampled contourlet transform (NSCT) [21]. Recently, a new multifocus image fusion and restoration algorithm based on sparse representation has been proposed by Yang and Li [22]. A new multifocus image fusion method based on homogeneity similarity and focused region detection was proposed in 2011 by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23].
Most of the traditional image fusion methods are based on the assumption that the source images are noise free, and they can produce good performance when this assumption is satisfied. The traditional noisy image fusion methods usually denoise the source images first, and the denoised images are then fused. The multifocus image fusion and restoration algorithm proposed by Yang and Li [22] performs well with both noisy and noise-free images, and outperforms traditional fusion methods in terms of fusion quality and noise reduction in the fused output. However, this scheme is complicated and time-consuming, especially when the source images are noise free. The image fusion algorithm based on homogeneity similarity proposed by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23] aims at solving the fusion problem of clean and noisy multifocus images. Further, in any region-based fusion algorithm, the fusion results are affected by the performance of the segmentation algorithm. The various segmentation algorithms are based on thresholding and clustering, but the partition criteria used by these algorithms often generate undesired segmented regions. In order to overcome the above problems, a new method for segmentation using Self-organizing Feature Maps, which consequently helps in fusing images dynamically to the desired degree of information retrieval depending on the application, is proposed in this paper. The proposed algorithm is compatible with any type of image, either noisy or clean. The method is simple, and since the mapping of the image is carried out by Self-organizing Feature Maps, all the information in the images will be preserved. The images used in image fusion should already be registered. The outline of this paper is as follows: in Section 2, the Self-organizing Feature Map is briefly introduced; Section 3 describes the algorithm for codebook generation using Self-organizing Feature Maps; Section 4 describes the proposed method of fusion; Section 5 details the experimental analysis; and Section 6 gives the conclusion of this paper.

2. SELF-ORGANIZING FEATURE MAP


Self-organizing Feature Map (SOM) is a special class of Artificial Neural Network based on competitive learning. It is an ingenious Artificial Neural Network built around a one- or two-dimensional lattice of neurons for capturing the important features contained in the input. The Kohonen technique creates a network that stores information in such a way that any topological relationships within the training set are maintained. In addition to clustering the data into distinct regions, regions of similar properties are put to good use by the Kohonen maps. The primary benefit is that the network learns autonomously, without the requirement that the system be well defined. The system does not stop learning but instead continues to adapt to changing inputs; this plasticity allows it to adapt as the environment changes. A particular advantage over other artificial neural networks is that the system appears well suited to parallel computation. Indeed, the only global knowledge required by each neuron is the current input to the network and the position within the array of the neuron which produced the maximum output.

Kohonen networks are grids of computing elements, which allows identifying the immediate neighbours of a unit. This is very important since, during learning, the weights of computing units and their neighbours are updated. The objective of such a learning approach is that neighbouring units learn to react to closely related signals. A Self-organizing Feature Map does not need a target output to be specified, unlike many other types of network. Instead, where the node weights match the input vector, that area of the lattice is selectively optimized to more closely resemble the data for the class of which the input vector is a member. From an initial distribution of random weights, and over many iterations, the Self-organizing Feature Map eventually settles into a map of stable zones. Each zone is effectively a feature classifier, and the output is a type of feature map of the input space. In the trained network, blocks of similar values represent the individual zones. Any new, previously unseen input vector presented to the network will stimulate nodes in the zone with similar weight vectors.

Training occurs in several steps and over many iterations. Prior to training, each node's weights are initialized; typically these are set to small standardized random values. A vector is chosen at random from the set of training data and presented to the lattice. Every node is examined to calculate which one's weights are most like the input vector; the winning node is commonly known as the Best Matching Unit (BMU). One method to determine the BMU is to iterate through all the nodes and calculate the Euclidean distance between each node's weight vector and the current input vector; the node with the weight vector closest to the input vector is tagged as the BMU. The radius of the neighbourhood of the BMU is then calculated. This is a value that starts large, typically set to the 'radius' of the lattice, but diminishes at each time step. Any nodes found within this radius are deemed to be inside the BMU's neighbourhood, and every node within the neighbourhood (including the BMU itself) has its weight vector adjusted to make it more like the input vector; the closer a node is to the BMU, the more its weights are altered. A unique feature of the Kohonen learning algorithm is that the area of the neighbourhood shrinks over time to the size of just one node. The procedure is repeated for all input vectors for a number of iterations. In the Self-organizing Feature Map, the neurons are placed at the lattice nodes; the lattice may take different shapes: a rectangular grid, hexagonal, or even a random topology.
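The training loop just described can be made concrete with a short sketch. The following Python/NumPy code is a minimal illustration only: the lattice size, the exponential decay schedules and the Gaussian falloff inside the neighbourhood are assumptions chosen for the sketch, not values taken from this paper.

```python
# Minimal SOM training sketch (assumed: 16 x 16 lattice, exponential decay
# of learning rate and radius, Gaussian neighbourhood influence).
import numpy as np

def train_som(data, rows=16, cols=16, n_iter=1000, lr0=0.5):
    """data: (num_vectors, dim) array of training vectors."""
    dim = data.shape[1]
    rng = np.random.default_rng(0)
    weights = rng.random((rows, cols, dim))            # small random initial weights
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    radius0 = max(rows, cols) / 2.0                    # start at the lattice 'radius'
    for t in range(n_iter):
        x = data[rng.integers(len(data))]              # random training vector
        # Best Matching Unit: node whose weights are closest (Euclidean distance)
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Both the neighbourhood radius and the learning rate shrink over time
        decay = np.exp(-3.0 * t / n_iter)
        radius, lr = radius0 * decay, lr0 * decay
        # Nodes inside the BMU's neighbourhood move towards x; closer nodes move more
        grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
        influence = np.exp(-grid_dist**2 / (2 * radius**2))
        influence[grid_dist > radius] = 0.0
        weights += lr * influence[..., None] * (x - weights)
    return weights
```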

Figure 1. Self-Organizing Feature Map architecture

The neurons become selectively tuned to various input patterns in the course of the competitive learning process. The locations of the neurons so tuned (i.e. the winning neurons) tend to become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice.


Figure: SOM Neural Network Training GUI.

3. CODE BOOK GENERATION USING SELF-ORGANIZING FEATURE MAP
Consider a two-dimensional input image pattern to be mapped onto a two-dimensional spatial organization of neurons located at different positions (i, j) on a rectangular lattice of size n × n. Thus, for a set of n × n points on the two-dimensional plane, there would be n² neurons N_ij, 1 ≤ i, j ≤ n, and for each neuron N_ij there is an associated weight vector denoted as W_ij. In the Self-organizing Feature Map, the neuron with the minimum distance between its weight vector W_ij and the input vector X is the winner neuron (k, l), and it is identified using the following equation:

||X - W_kl|| = min_{1 ≤ i ≤ n} min_{1 ≤ j ≤ n} ||X - W_ij||    (1)

After the position of the winner neuron (k, l) is located in the two-dimensional plane, the winner neuron and its neighbourhood neurons are adjusted using the Self-organizing Feature Map learning rule:

W_ij(t+1) = W_ij(t) + α (X - W_ij(t))    (2)

where α is Kohonen's learning rate, which controls the stability and the rate of convergence. The winner weight vector reaches equilibrium when W_ij(t+1) = W_ij(t). The neighbourhood of neuron N_ij is chosen arbitrarily: it can be a square or a circular zone around N_ij of arbitrarily chosen radius.

Algorithm

1: The image A(i, j) of size 2^N × 2^N is divided into blocks, each of them of size 2^n × 2^n pixels, n < N.


2: A Self-organizing Feature Map network is created with a codebook consisting of M neurons (m_i : i = 1, 2, ..., M). The M neurons are arranged in a hexagonal lattice, and for each neuron there is an associated weight vector W_i = [w_i1, w_i2, ..., w_i2^(2n)].

3: The weight vectors of all the neurons in the lattice are initialized with small random values.

4: The learning input patterns (image blocks) are applied to the network. Kohonen's competitive learning process identifies the winning neurons that best match the input blocks; the best matching criterion is the minimum Euclidean distance between the vectors. Hence, the mapping process Q that identifies the neuron best matching the input block X is determined by the following condition:

Q(X) = arg min_i ||X - W_i||,  i = 1, 2, ..., M    (3)

5: At equilibrium, there are m winner neurons per block, i.e. m codewords per block. Hence, the whole image is represented using m codewords.

6: The indices of the obtained codewords are stored: the set of indices of all winner neurons is stored along with the codebook.

7: The reconstructed image blocks, of the same size as the original ones, are restored from the indices of the codewords.
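As a hedged illustration of the algorithm above, the sketch below blocks an image, encodes each block by its best-matching codeword index (the mapping Q of Eq. (3)), and reconstructs the image from the stored indices and codebook. It reuses the train_som() sketch from Section 2; the 4 × 4 block size and the square (rather than hexagonal) lattice are simplifying assumptions.

```python
# Codebook generation / reconstruction sketch for steps 1-7 (assumed 4 x 4
# blocks; reuses the square-lattice train_som() from the Section 2 sketch).
import numpy as np

def image_to_blocks(img, n=4):
    h, w = img.shape                                   # h, w assumed divisible by n
    return (img.reshape(h // n, n, w // n, n)
               .swapaxes(1, 2)
               .reshape(-1, n * n)
               .astype(float))

def encode(img, weights, n=4):
    codebook = weights.reshape(-1, weights.shape[-1])  # M codewords (step 2)
    blocks = image_to_blocks(img, n)                   # step 1
    # Steps 4-6: index of the best-matching codeword for every block
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1), codebook

def decode(indices, codebook, shape, n=4):
    # Step 7: restore blocks of the original size from the stored indices
    h, w = shape
    blocks = codebook[indices].reshape(h // n, w // n, n, n)
    return blocks.swapaxes(1, 2).reshape(h, w)
```

For example, indices, codebook = encode(img_a, train_som(image_to_blocks(img_a))) followed by decode(indices, codebook, img_a.shape) should return an approximation of the original image.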

4. PROPOSED METHOD OF FUSION


Let us consider two pre-registered grayscale 8-bit images A and B of the same scene or object. The first image A is decomposed into sub-images and given as input to the Self-organizing Feature Map neural network. In order to preserve all the gray values of the image, the codebook size for compressing an 8-bit image is chosen to be the maximum possible number of gray levels, i.e. 256. Since the weight values after training have to represent the input gray levels, random values ranging from 0 to 255 are assigned as initial weights. When sub-images of size 4 × 4 are considered as the input vector, there will be 16 nodes in the input layer, and the Kohonen layer consists of 256 nodes arranged in a 16 × 16 array. The input layer takes as input the gray-level values from all 16 pixels of the gray-level block. The weights assigned between node j of the Kohonen layer and the input layer represent the weight matrix: for all 256 nodes we have W_ji for j = 0, 1, ..., 255 and i = 0, 1, ..., 15. Once the weights are initialized randomly, the network is ready for training. The image block vectors are mapped to the weight vectors. The neighbourhood is initially chosen, say 5 × 5, and then reduced gradually to find the best matching node. The Self-organizing Feature Map generates the codebook according to the weight updates. The set of indices of all the winner neurons for the blocks, along with the codebook, is stored for retrieval.

The image A is retrieved by generating the weight vectors of each neuron from the index values, which give the pixel values of the image. For each index value the connected neuron is found, and the weight vector from that neuron to the input layer neurons is generated. The values of the neuron weights are the gray levels for the block, and the gray-level values thus obtained are displayed as pixels. Thus we get the image A back in its original form.

Now the image B is given as input to the neural network. Since the features in images A and B are the same, when B is given as input to the trained network, the codebook for image B will be generated in minimum simulation time without loss of information. Also, since the images are registered, the index values, which represent the positions of pixels, will be the same in the two images. For the same index value, the values of the neuron weights of both images are compared and the higher value is selected. The procedure is repeated for all the indices until a weight vector of optimal strength is generated. The image retrieved from this optimal weight vector is the fused image, which will represent all the gray values at their optimal values. The procedure can be repeated in various combinations with multiple images until the desired result is achieved.
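Read literally, the fusion step above compares, index by index, the neuron weights produced for the two registered images and keeps the higher value. A minimal sketch under that reading follows; it builds on the encode() sketch from Section 3, and the element-wise maximum is one plausible interpretation of "the higher value is selected", not necessarily the paper's exact rule.

```python
# Fusion sketch: encode both registered images against the same trained map,
# then keep the higher gray value per position (an assumed reading of the rule).
import numpy as np

def fuse(img_a, img_b, weights, n=4):
    idx_a, codebook = encode(img_a, weights, n)   # encode() from the Section 3 sketch
    idx_b, _ = encode(img_b, weights, n)
    blocks_a = codebook[idx_a]                    # codeword gray values, image A
    blocks_b = codebook[idx_b]                    # codeword gray values, image B
    fused = np.maximum(blocks_a, blocks_b)        # select the higher value per entry
    h, w = img_a.shape
    return (fused.reshape(h // n, w // n, n, n)
                 .swapaxes(1, 2)
                 .reshape(h, w))
```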

5. EXPERIMENTAL ANALYSIS AND RESULTS


The experimental analysis of the proposed algorithm has been performed using a large number of images having different content for the simulations and the different objective parameters discussed in this paper. In order to evaluate the fusion performance, the first experiment is performed on one set of perfectly registered multifocus source images. The dataset consists of multifocus images; the use of different pairs of multifocus images of different scenes, covering the categories of text, text+object and objects only, allows evaluating the proposed algorithm in a true sense. The proposed algorithm is simulated in Matlab 7 and is evaluated based on the quality of the fused image obtained. The robustness of the proposed algorithm, that is, its ability to obtain consistently good quality fused images with different categories of images such as standard images, medical images and satellite images, has been evaluated. The average computational time to generate the final fused image from source images of size 128 × 128 using the proposed algorithm is 96 seconds. The quality of the fused image has been compared with the source images in terms of RMSE and PSNR values. The experimental results obtained for the fusion of grayscale images adopting the proposed algorithm are shown in Table 1. The source multifocus images and the fused images of different types are shown in Figures 1 to 4. The histograms and difference images for the Lena and Bacteria images are shown in Figures 5 and 6 respectively.

Table 1.a RMSE

Images          Image A   Image B   Fused Image
Lena            6.2856    6.0162    3.3205
Bacteria        6.8548    6.421     6.227
Satellite map   4.0043    5.325     3.8171
MRI-Head        4.3765    4.9824    1.5163

Table 1.b PSNR

Images          Image A   Image B   Fused Image
Lena            30.381    31.701    38.0481
Bacteria        28.194    27.989    32.5841
Satellite map   32.077    31.582    33.2772
MRI-Head        30.787    33.901    45.3924
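The RMSE and PSNR figures in Table 1 can in principle be reproduced with the standard definitions for 8-bit images; since the paper does not spell out its exact formulas or the reference image used, the sketch below (per-pixel comparison against a reference image, peak value 255) is an assumption.

```python
# Standard RMSE / PSNR sketch for 8-bit grayscale images (assumed definitions).
import numpy as np

def rmse(ref, test):
    ref, test = ref.astype(float), test.astype(float)
    return np.sqrt(np.mean((ref - test) ** 2))

def psnr(ref, test, peak=255.0):
    e = rmse(ref, test)
    return np.inf if e == 0 else 20.0 * np.log10(peak / e)
```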

Figure 1. Lena: (a) and (b) are the multifocus source images; (c) is the fused image.

Figure 2. Bacteria: (a) and (b) are the multifocus source images; (c) is the fused image.

Figure 3. Satellite map: (a) and (b) are the multifocus source images; (c) is the fused image.

Figure 4. MRI-Head: (a) and (b) are the multifocus source images; (c) is the fused version of (a) and (b).

Figure 5. Histograms for the Lena and Bacteria images.

Figure 6. Difference images between the fused image and the source images: (g) difference between (a) and (c); (h) difference between (b) and (c); (i) difference between (d) and (f); (j) difference between (e) and (f).

6. CONCLUSIONS
In this paper, a simple method for the fusion of images has been proposed. The advantage of the Self-organizing Feature Map is that, after training, the weight vectors not only represent the image block cluster centroids but also preserve two main features: topologically neighbouring blocks in the input space are mapped to topologically neighbouring neurons in the codebook, and the distribution of the weight vectors of the neurons reflects the distribution of the vectors in the input space. Hence there will not be any loss of information in the process. The proposed method is dynamic in the sense that, depending on the application, the optimal weight vectors are generated; redundant values as well as noise can be ignored in this process, resulting in less simulation time. The method can also be extended to colour images.

REFERENCES
[1] S. Li, J.T. Kwok, Y. Wang, Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images, Information Fusion 3 (2002) 17–23.
[2] A. Goshtasby, 2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial Applications, Wiley Press, 2005.
[3] A. Toet, Hierarchical image fusion, Machine Vision and Applications 3 (1990) 1–11.
[4] H. Li, B.S. Manjunath, S.K. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235–245.
[5] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, M. Halliwell, P.N.T. Wells, Image fusion using a 3-D wavelet transform, in: Proc. 7th International Conference on Image Processing and Its Applications, 1999, pp. 235–239.
[6] P.J. Burt, in: A. Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-Verlag, Berlin, 1984, pp. 6–35.
[7] P.J. Burt, R.J. Kolczynski, Enhanced image capture through fusion, in: International Conference on Computer Vision, 1993, pp. 173–182.
[8] N. Kingsbury, Image processing with complex wavelets, in: B. Silverman, J. Vassilicos (Eds.), Wavelets: The Key to Intermittent Information, Oxford University Press, 1999, pp. 165–185.
[9] S.G. Nikolov, P. Hill, D.R. Bull, C.N. Canagarajah, Wavelets for image fusion, in: A. Petrosian, F. Meyer (Eds.), Wavelets in Signal and Image Analysis, Kluwer Academic Publishers, The Netherlands, 2001, pp. 213–244.
[10] J.J. Lewis, R.J. O'Callaghan, S.G. Nikolov, D.R. Bull, C.N. Canagarajah, Region-based image fusion using complex wavelets, in: Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, June 28–July 1, 2004, pp. 555–562.
[11] Z. Zhang, R. Blum, Region-based image fusion scheme for concealed weapon detection, in: ISIF Fusion Conference, Annapolis, MD, July 2002.
[12] G. Piella, A general framework for multiresolution image fusion: from pixels to regions, Information Fusion 4 (2003) 259–280.
[13] G. Piella, A region-based multiresolution image fusion algorithm, in: Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, July 8–11, 2002, pp. 1557–1564.
[14] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, 2-D image fusion by multiscale edge graph combination, in: Proceedings of the 3rd International Conference on Information Fusion, Paris, France, July 10–13, 2000, pp. 16–22.
[15] G. Piella, A general framework for multiresolution image fusion: from pixels to regions, Information Fusion 4 (2003) 259–280.
[16] H. Wei, Z.L. Jing, Pattern Recognition Letters 28 (4) (2007) 493.
[17] S.T. Li, J.T. Kwok, Y.N. Wang, Pattern Recognition Letters 23 (8) (2002) 985.
[18] V. Aslantas, R. Kurban, Expert Systems with Applications 37 (12) (2010) 8861.
[19] Y. Chai, H.F. Li, M.Y. Guo, Optics Communications 284 (5) (2011) 1146.
[20] S.T. Li, B. Yang, Pattern Recognition Letters 29 (9) (2008) 1295.
[21] Q. Zhang, B.L. Guo, Signal Processing 89 (2009) 1334.
[22] B. Yang, S.T. Li, IEEE Transactions on Instrumentation and Measurement 59 (4) (2010) 884.
[23] H. Li, Y. Chai, H. Yin, G. Liu, Multifocus image fusion and denoising scheme based on homogeneity similarity, Optics Communications, September 2011.

ACKNOWLEDGMENTS
The authors thank the Management of SNR Sons Institutions for allowing us to utilize their resources for our research work. Our sincere thanks go to our Principal & Secretary, Dr. H. Balakrishnan, M.Com., M.Phil., Ph.D., for his support and encouragement of our research work. Grateful thanks are also due to my guide, Dr. Anna Saro Vijendran, MCA., M.Phil., Ph.D., Director of MCA, SNR Sons College, Coimbatore-6, for her valuable guidance and for giving me many suggestions and proper solutions for critical situations in our research work. The authors also thank the Associate Editor and reviewers for their encouragement and valued comments, which helped in improving the quality of the paper.


AUTHORS' BIOGRAPHY
Dr. Anna Saro Vijendran received the Ph.D. degree in Computer Science from Mother Teresa Women's University, Tamilnadu, India, in 2009. She has 20 years of experience in teaching. She is currently working as the Director, MCA, in SNR Sons College, Coimbatore, Tamilnadu, India. She has presented and published many papers in international and national conferences, and has authored and co-authored more than 30 refereed papers. Her professional interests are Image Processing, Image Fusion, Data Mining and Artificial Neural Networks.

G. Paramasivam is a Ph.D. (part-time) research scholar. He is currently an Asst. Professor at SNR Sons College, Coimbatore, Tamilnadu, India. He has 10 years of teaching experience. His technical interests include Image Fusion and Artificial Neural Networks.
