Adopting and Implementation of Self Organizing Feature Map For Image Fusion
ABSTRACT
A new image fusion algorithm based on the self-organizing feature map is proposed in this paper, aiming to produce quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image is thus more suitable for human and machine perception and for further image-processing tasks. Existing fusion techniques, based on direct operation on either pixels or segments, fail to produce fused images of the required quality and are mostly application specific. Existing segmentation algorithms become complicated and time consuming when multiple images are to be fused. A new method of segmenting and fusing gray-scale images adopting Self-Organizing Feature Maps (SOM) is proposed in this paper. The SOM is used to produce multiple slices of the source and reference images based on various combinations of gray scale, which can be fused dynamically depending on the application. The proposed technique is applied and analyzed for the fusion of multiple images. The technique is robust in the sense that there is no loss of information, owing to the properties of Self-Organizing Feature Maps; noise removal in the source images is done during the processing stage, and the fusion of multiple images is performed dynamically to obtain the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective quality.
KEYWORDS
Image Fusion, Image Segmentation, Self Organizing Feature Maps, Code Book Generation, Multifocus Images, Gray Scale Images
1. INTRODUCTION
Nowadays, image fusion has become an important subarea of image processing. For one object or scene, multiple images can be taken from one or multiple sensors. These images usually contain complementary information. Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing. The objective in image fusion is to reduce uncertainty and minimize redundancy in the output while maximizing relevant information particular to an application or task. Image fusion has become a common term used within medical diagnostics and treatment. Given the same set of input images, different fused images may be
created depending on the specific application and what is considered relevant information. There are several benefits in using image fusion, such as wider spatial and temporal coverage, decreased uncertainty, improved reliability and increased robustness of system performance. Often a single sensor cannot produce a complete representation of a scene. Successful image fusion significantly reduces the amount of data to be viewed or processed without significantly reducing the amount of relevant information. Image fusion algorithms can be categorized into pixel, feature and symbolic levels. Pixel-level algorithms work either in the spatial domain [1, 2] or in the transform domain [3, 4, 5]. Although pixel-level fusion is a local operation, transform-domain algorithms create the fused image globally: by changing a single coefficient in the transformed fused image, all image values in the spatial domain will change. As a result, in the process of enhancing properties in some image areas, undesirable artifacts may be created in other image areas. Algorithms that work in the spatial domain have the ability to focus on desired image areas, limiting change in other areas. Multiresolution analysis is a popular method in pixel-level fusion. Burt [6] and Kolczynski [7] used filters with increasing spatial extent to generate a sequence of images from each image, separating information observed at different resolutions. Then, at each position in the transform image, the value in the pyramid showing the highest saliency was taken, and an inverse transform of the composite image was used to create the fused image. In a similar manner, various wavelet transforms can be used to fuse images. The discrete wavelet transform (DWT) has been used in many applications to fuse images [4]. The dual-tree complex wavelet transform (DT-CWT), first proposed by Kingsbury [8], was improved by Nikolov [9] and Lewis [10] to outperform most other gray-scale image fusion methods. Feature-based algorithms typically segment the images into regions and fuse the regions using their various properties [10-12]. Feature-based algorithms are usually less sensitive to signal-level noise [13]. Toet [3] first decomposed each input image into a set of perceptually relevant patterns, which were then combined to create a composite image containing all relevant patterns. A mid-level fusion algorithm was developed by Piella [12, 15], where the images are first segmented and the obtained regions are then used to guide the multiresolution analysis. Recently, methods have been proposed to fuse multifocus source images using divided blocks or segmented regions instead of single pixels [16, 17, 18]. All the segmented region-based methods are strongly dependent on the segmentation algorithm. Unfortunately, the segmentation algorithms, which are of vital importance to fusion quality, are complicated and time-consuming. Common transform approaches for the fusion of multifocus images include the discrete wavelet transform (DWT) [19], the curvelet transform [20] and the nonsubsampled contourlet transform (NSCT) [21]. Recently, a new multifocus image fusion and restoration algorithm based on sparse representation has been proposed by Yang and Li [22]. A new multifocus image fusion method based on homogeneity similarity and focused-region detection was proposed in 2011 by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23].
Most traditional image fusion methods are based on the assumption that the source images are noise free, and they produce good performance when this assumption is satisfied. Traditional noisy-image fusion methods usually denoise the source images first and then fuse the denoised images. The multifocus image fusion and restoration algorithm proposed by Yang and Li [22] performs well with both noisy and noise-free images, and outperforms traditional fusion methods in terms of fusion quality and noise reduction in the fused output. However, this scheme is complicated and time-consuming, especially when the source images are noise free. The image fusion algorithm based on homogeneity similarity proposed by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23] aims at solving the fusion problem of clean and
noisy multifocus images. Further, in any region-based fusion algorithm the fusion results are affected by the performance of the segmentation algorithm. The various segmentation algorithms are based on thresholding and clustering, but the partition criteria used by these algorithms often generate undesired segmented regions. To overcome these problems, this paper proposes a new method of segmentation using Self-Organizing Feature Maps, which consequently allows images to be fused dynamically to the desired degree of information retrieval depending on the application. The proposed algorithm is applicable to any type of image, noisy or clean. The method is simple, and since the mapping of the image is carried out by Self-Organizing Feature Maps, all the information in the images is preserved. The images used in image fusion should already be registered. The outline of this paper is as follows: Section 2 briefly introduces the Self-Organizing Feature Map. Section 3 describes the algorithm for codebook generation using Self-Organizing Feature Maps. Section 4 describes the proposed method of fusion. Section 5 details the experimental analysis, and Section 6 concludes the paper.
2. SELF-ORGANIZING FEATURE MAPS
The radius of the neighbourhood of the Best Matching Unit is then calculated. This value starts large, typically set to the 'radius' of the lattice, and diminishes at each time-step. Any nodes found within this radius are deemed to be inside the Best Matching Unit's neighbourhood. Each neighbouring node's weights are adjusted to make them more like the input vector: the closer a node is to the Best Matching Unit, the more its weights are altered. The procedure is repeated for all input vectors for a number of iterations. Prior to training, each node's weights must be initialized, typically to small standardized random values. To determine the Best Matching Unit, one method is to iterate through all the nodes and calculate the Euclidean distance between each node's weight vector and the current input vector; the node with the weight vector closest to the input vector is tagged as the Best Matching Unit. After the Best Matching Unit has been determined, the next step is to calculate which of the other nodes lie within its neighbourhood, since all these nodes will have their weight vectors altered in the next step. A unique feature of the Kohonen learning algorithm is that the area of the neighbourhood shrinks over time to the size of just one node. Once the radius is known, all the nodes in the lattice are examined to determine whether they lie within it, and every node within the Best Matching Unit's neighbourhood (including the Best Matching Unit itself) has its weight vector adjusted. In the Self-organizing Feature Map the neurons are placed at the lattice nodes; the lattice may take different shapes: rectangular grid, hexagonal, or even random topology.
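The training loop just described can be summarized in a few lines of code. The following is a minimal NumPy sketch, not the authors' implementation: the function name, the exponential decay schedules and the Gaussian weighting applied inside the hard neighbourhood radius are illustrative assumptions.

```python
# Minimal sketch of the Kohonen training loop described above; names and
# schedules are illustrative assumptions, not the authors' code.
import numpy as np

def train_som(inputs, n=8, dim=16, iters=1000, lr0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((n, n, dim)) * 0.1        # small random initial weights
    radius0 = n / 2.0                              # start at the lattice 'radius'
    grid = np.stack(np.meshgrid(np.arange(n), np.arange(n),
                                indexing="ij"), axis=-1)
    for t in range(iters):
        x = inputs[rng.integers(len(inputs))]
        # Best Matching Unit: node whose weight vector is closest (Euclidean)
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Neighbourhood radius and learning rate both shrink each time-step
        radius = max(radius0 * np.exp(-t / iters), 1e-6)
        lr = lr0 * np.exp(-t / iters)
        # Nodes inside the radius move toward x; closer nodes move more
        dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-dist2 / (2.0 * radius ** 2)) * (dist2 <= radius ** 2)
        weights += lr * h[..., None] * (x - weights)
    return weights
```

The hard cutoff `dist2 <= radius ** 2` implements the "inside the neighbourhood" test, while the Gaussian factor realizes "the closer a node is to the Best Matching Unit, the more its weights are altered".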
Figure 1. Self Organizing Feature Map Architecture

The neurons become selectively tuned to various input patterns in the course of the competitive learning process. The locations of the neurons so tuned (i.e. the winning neurons) tend to become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice.
3. CODE BOOK GENERATION USING SELF-ORGANIZING FEATURE MAPS
Given a two-dimensional input image pattern to be mapped onto a two-dimensional spatial organization of neurons located at positions (i, j) on a rectangular lattice of size n × n, there are n² neurons Nij, 1 ≤ i, j ≤ n, and for each neuron Nij there is an associated weight vector denoted Wij. In the Self-organizing Feature Map, the neuron with the minimum distance between its weight vector Wij and the input vector X is the winner neuron (k, l), and it is identified using the following equation:

||X − Wkl|| = min_{1 ≤ i, j ≤ n} ||X − Wij||    (1)
After the position of the winner neuron (k, l) is located in the two-dimensional plane, the winner neuron and its neighbourhood neurons are adjusted using the Self-organizing Feature Map learning rule:

Wij(t+1) = Wij(t) + α(t) (X − Wij(t))    (2)

where α(t) is the Kohonen learning rate that controls the stability and the rate of convergence. The winner weight vector reaches equilibrium when Wij(t+1) = Wij(t). The neighbourhood of neuron Nij is chosen arbitrarily; it can be a square or a circular zone around Nij of arbitrarily chosen radius.

Algorithm

1: The image A(i, j) of size 2^N × 2^N is divided into blocks, each of size 2^n × 2^n pixels, n < N.
2: A Self-organizing Feature Map network is created with a codebook consisting of M neurons (mi: i = 1, 2, ..., M). The M neurons are arranged in a hexagonal lattice, and each neuron has an associated weight vector Wi = [wi1, wi2, ..., wi,2^(2n)].
3: The weight vectors of all neurons in the lattice are initialized with small random values.
4: The learning input patterns (image blocks) are applied to the network. Kohonen's competitive learning process identifies the winning neurons that best match the input blocks, the best-matching criterion being the minimum Euclidean distance between the vectors. Hence, the mapping process Q that identifies the neuron best matching the input block X is determined by applying the following condition:

Q(X) = arg min_i ||X − Wi||,  i = 1, 2, ..., M    (3)
5: At equilibrium, there are m winner neurons per block, or m codewords per block. Hence, the whole image is represented using m codewords.
6: The indices of the obtained codewords are stored; the set of indices of all winner neurons is kept together with the codebook.
7: The reconstructed image blocks, of the same size as the original ones, are restored from the indices of the codewords.
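Read end-to-end, steps 1-7 amount to a block-based vector-quantization pipeline: cut the image into blocks, train the SOM on the blocks, store each block's winning index, and rebuild the image from the codebook. A minimal sketch follows; the helper names are illustrative assumptions, and the trained SOM weights from the earlier sketch, flattened to shape (M, b*b), would serve as the codebook.

```python
# Sketch of the codebook pipeline (steps 1-7); helper names are illustrative.
import numpy as np

def image_to_blocks(img, b):
    # Step 1: split a 2^N x 2^N image into non-overlapping b x b (b = 2^n) blocks
    H, W = img.shape
    return (img.reshape(H // b, b, W // b, b)
               .swapaxes(1, 2)
               .reshape(-1, b * b)
               .astype(np.float64))

def encode(blocks, codebook):
    # Steps 4-6: Q(X) = argmin_i ||X - Wi||, one winning index per block
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

def decode(indices, codebook, shape, b):
    # Step 7: restore same-size blocks from the stored codeword indices
    H, W = shape
    return (codebook[indices]
            .reshape(H // b, W // b, b, b)
            .swapaxes(1, 2)
            .reshape(H, W))
```

In use, one would train on the blocks (e.g. `codebook = train_som(image_to_blocks(img, b), dim=b * b).reshape(-1, b * b)`), then store only `encode(...)` output and the codebook, and reconstruct with `decode(...)`.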
4. PROPOSED METHOD OF FUSION

Codebooks and index sets are first generated for both registered source images using the algorithm of Section 3. For the same index value, the neuron weights of the two images are compared and the higher value is selected. The procedure is repeated for all the indices until a weight vector of optimal strength is generated. The image retrieved from this optimal weight vector is the fused image, which represents all the gray values at their optimal levels. The procedure can be repeated in various combinations with multiple images until the desired result is achieved.
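As a sketch of this max-selection rule, assuming both source images have been encoded with SOMs of identical lattice size so that codeword indices correspond one-to-one (the function name is illustrative):

```python
# Sketch of the max-selection fusion rule described above; assumes codebooks
# of identical shape from the two registered source images.
import numpy as np

def fuse_codebooks(codebook_a, codebook_b):
    # For the same codeword index, keep the element-wise higher neuron weight,
    # yielding the "optimal strength" weight vectors described above.
    return np.maximum(codebook_a, codebook_b)
```

The fused image is then restored from this fused codebook with the decode step of the previous sketch; the paper does not spell out which image's index stream drives the reconstruction, so that choice is left open here as an application decision.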
5. EXPERIMENTAL ANALYSIS

Table 1.b PSNR values of the source and fused images

Images        Lena      Bacteria   Satellite map   MRI-Head
Image A       30.381    28.194     32.077          30.787
Image B       31.701    27.989     31.582          33.901
Fused Image   38.0481   32.5841    33.2772         45.3924
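The PSNR figures in Table 1.b follow the standard definition for 8-bit gray-scale images, PSNR = 10 · log10(255² / MSE). A minimal sketch; the paper does not state its exact evaluation protocol, so the choice of reference image is an assumption:

```python
# PSNR for 8-bit gray-scale images: PSNR = 10 * log10(255^2 / MSE).
import numpy as np

def psnr(reference, test):
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```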
[Image panels: Figure 1 (a), (c); Figure 2 (a), (c); Figure 4 (a), (b), (c).]
Figure 4. MRI Head images: (a) and (b) are the multifocus source images; (c) is the fused version of (a) and (b).
Figure 6. Difference images between the fused image and the source images: (g) difference between (a) and (c); (h) difference between (b) and (c); (i) difference between (d) and (f); (j) difference between (e) and (f).
6. CONCLUSIONS
In this paper a simple method for the fusion of images has been proposed. The advantage of the Self-Organizing Feature Map is that, after training, the weight vectors not only represent the image-block cluster centroids but also preserve two main features: topologically neighbouring blocks in the input space are mapped to topologically neighbouring neurons in the codebook, and the distribution of the weight vectors of the neurons reflects the distribution of the vectors in the input space. Hence there is no loss of information in the process. The proposed method is dynamic in the sense that, depending on the application, the optimal weight vectors are generated; redundant values as well as noise can be ignored in this process, resulting in shorter simulation time. The method can also be extended to colour images.
REFERENCES
[1] S. Li, J.T. Kwok, Y. Wang, Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images, Information Fusion 3 (2002) 17-23.
[2] A. Goshtasby, 2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial Applications, Wiley Press, 2005.
[3] A. Toet, Hierarchical image fusion, Machine Vision and Applications 3 (1990) 1-11.
[4] H. Li, S. Manjunath, S. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235-245.
[5] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, M. Halliwell, P.N.T. Wells, Image fusion using a 3-D wavelet transform, in: Proc. 7th International Conference on Image Processing and Its Applications, 1999, pp. 235-239.
[6] P.J. Burt, in: A. Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-Verlag, Berlin, 1984, pp. 6-35.
[7] P.J. Burt, R.J. Kolczynski, Enhanced image capture through fusion, in: International Conference on Computer Vision, 1993, pp. 173-182.
[8] N. Kingsbury, Image processing with complex wavelets, in: Silverman, J. Vassilicos (Eds.), Wavelets: The Key to Intermittent Information, Oxford University Press, 1999, pp. 165-185.
[9] S.G. Nikolov, P. Hill, D.R. Bull, C.N. Canagarajah, Wavelets for image fusion, in: A. Petrosian, F. Meyer (Eds.), Wavelets in Signal and Image Analysis, Kluwer Academic Publishers, The Netherlands, 2001, pp. 213-244.
[10] J.J. Lewis, R.J. O'Callaghan, S.G. Nikolov, D.R. Bull, C.N. Canagarajah, Region-based image fusion using complex wavelets, in: Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, June 28-July 1, 2004, pp. 555-562.
[11] Z. Zhang, R. Blum, Region-based image fusion scheme for concealed weapon detection, in: ISIF Fusion Conference, Annapolis, MD, July 2002.
[12] G. Piella, A general framework for multiresolution image fusion: from pixels to regions, Information Fusion 4 (2003) 259-280.
[13] G. Piella, A region-based multiresolution image fusion algorithm, in: Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, July 8-11, 2002, pp. 1557-1564.
[14] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, 2-D image fusion by multiscale edge graph combination, in: Proceedings of the 3rd International Conference on Information Fusion, Paris, France, July 10-13, 2000, vol. 1, pp. 16-22.
[15] G. Piella, A general framework for multiresolution image fusion from pixels to regions, Information Fusion 4 (2003) 259-280.
[16] H. Wei, Z.L. Jing, Pattern Recognition Letters 28 (4) (2007) 493.
[17] S.T. Li, J.T. Kwok, Y.N. Wang, Pattern Recognition Letters 23 (8) (2002) 985.
[18] V. Aslantas, R. Kurban, Expert Systems with Applications 37 (12) (2010) 8861.
[19] Y. Chai, H.F. Li, M.Y. Guo, Optics Communications 248 (5) (2011) 1146.
[20] S.T. Li, B. Yang, Pattern Recognition Letters 29 (9) (2008) 1295.
[21] Q. Zhang, B.L. Guo, Signal Processing 89 (2009) 1334.
[22] B. Yang, S.T. Li, IEEE Transactions on Instrumentation and Measurement 59 (4) (2010) 884.
[23] H. Li, Y. Chai, H. Yin, G. Liu, Multifocus image fusion and denoising scheme based on homogeneity similarity, Optics Communications, September 2011.
ACKNOWLEDGMENTS
The authors thank the Management of SNR Sons Institutions for allowing us to utilize their resources for our research work. Our sincere thanks go to our Principal & Secretary, Dr. H. Balakrishnan, M.Com., M.Phil., Ph.D., for his support and encouragement. Grateful thanks are also due to Dr. Anna Saro Vijendran, MCA., M.Phil., Ph.D., Director of MCA, SNR Sons College, Coimbatore-6, for her valuable guidance and for offering suggestions and solutions at critical stages of this research. The authors thank the Associate Editor and the reviewers for their encouragement and valued comments, which helped in improving the quality of the paper.
AUTHORS BIOGRAPHY
Dr. Anna Saro Vijendran received the Ph.D. degree in Computer Science from Mother Teresa Women's University, Tamilnadu, India, in 2009. She has 20 years of experience in teaching. She is currently working as the Director, MCA, SNR Sons College, Coimbatore, Tamilnadu, India. She has presented and published many papers in international and national conferences and has authored or co-authored more than 30 refereed papers. Her professional interests are image processing, image fusion, data mining and artificial neural networks.
G. Paramasivam is a Ph.D. (part-time) research scholar. He is currently an Assistant Professor at SNR Sons College, Coimbatore, Tamilnadu, India, with 10 years of teaching experience. His technical interests include image fusion and artificial neural networks.