Innate and Artificial Entity Recognition on Satellite Images by Level Set Evolution and KNN Classification
by Prof. Feroza M. Mirajkar*, Dr. Ruksar Fatima, Prof. Kaveri Shankar
- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659
Volume 12, Issue No. 25, Dec 2016, Pages 139 - 144 (6)
Published by: Ignited Minds Journals
ABSTRACT
Extraction of entities from an image is one of the most desired and important steps in image processing, and it plays a vital role in many areas. Cropping useful entities from satellite images is a much-desired goal in mapping and surveying. Many improvements have been made in this area and more are in progress, but past work has typically addressed only one kind of item, with some demonstrations aimed at enhancing its productivity, effectiveness and efficiency. Here we classify both kinds of entities, i.e. natural and artificial, in images obtained from satellites. Level set evolution (LSE) is used for clipping out artificial entities from remote sensing images, and it offers effective results for clipping physiographic changes. We use geometrical and texture features for feature extraction and K-nearest neighbour classification to separate artificial and natural entities, which gives better output performance. We calculate precision, recall and accuracy, and conclude the paper with appropriate results and conclusions.
KEYWORDS
innate and artificial entity recognition, satellite images, level set evolution, knn classification, entity extraction, image processing, mapping and surveying, remote sensing images, geometrical features, texture features
I. INTRODUCTION
Nowadays, clipping desired items from a picture is one of the most well-known topics in image processing, and extracting the required information from spatial images is extremely interesting. Artificial and natural entity extraction are essential and helpful in different areas and applications, for example military surveying and reconnaissance, topological change detection, cartographic surveys, GIS, etc. Entities made by humans can often be recognized by their common geometric properties: roads routinely appear as elongated parts with uniform intensity, so separating road networks from an image essentially means detecting line segments. Unlike roads, building rooftops generally contain rectangles or regular polygons with parallel lines and right angles, so corner detectors have often been used for building extraction in previous studies. In addition, shadows are produced by slanted incident light in remote sensing images. Natural entity extraction is mainly useful for surveying and mapping of fields or areas. Satellite images are among the most effective and critical tools used by forecasters; they are essentially the eagle eyes in the sky. These images alert forecasters to the behaviour of the weather, as they give a clear, concise and precise illustration of how events are unfolding. Forecasting the weather and conducting research would be extremely difficult without satellites. Data taken at stations around a country is limited in its representation of climatic activity; a good analysis can still be obtained from it, but since the stations are separated by many miles, critical features can be missed. Satellite images help to reveal such features and reduce the possibility of error, and they give information that can be interpreted directly.
Entity Recognition in Image Processing:
Entity recognition is the task of finding a given entity in an image or video sequence. For any entity in an image there are many 'features', interesting points on the entity, that can be extracted to provide a "feature" description of the entity. This description, extracted from a training image, can then be used to identify the entity when attempting to locate it in a test image containing many other entities [20]. Satellite pictures are inevitably mixed with different levels of noise and distortion, so they demand a considerable amount of effort before further processing. Retrieval of an interesting entity from spatial-resolution images is confusing because the desired entities and the background share similar features. A completely automated framework for entity extraction is an open task for analysts, and entity retrieval without human interference remains a major challenge. Past research has aimed at increasing the level of automation, and the issue is still open in image processing. Past specialists have devoted their time to only one kind of entity extraction; consequently, there is a need to build a more substantial entity extraction capability, particularly one that deals with more than one item and handles natural and artificial item extraction in parallel. The features most commonly used for the entity recognition process are discussed below; they help us to extract entities from images and to recognize them.

Shape Features: Shape is defined as the form of an item or its external border, outline, or outer surface. Alongside colour and texture features, the shape of objects is also used for image comparison in image processing. There are two strategies for representing and classifying shapes: the external technique and the internal technique. The external technique represents the region in terms of its external characteristics or boundary, while the internal technique represents the region in terms of its internal characteristics, i.e. the pixels comprising the area. Shape feature descriptors are characterized into two classes: region descriptors and boundary descriptors.

Texture Features: Texture refers to visual patterns that have homogeneity properties which are not defined for an individual pixel; it depends only on the distribution of intensity or the presence of colour across the picture. Texture feature recognition procedures can be characterized into two groups, structural and statistical. Structural methods use morphological operators and describe texture by particular structural primitives and their placement rules. Statistical methods use the intensity distribution of the picture to extract statistical parameters representing its texture property; commonly used statistical techniques include co-occurrence matrices, Fourier power spectra, Tamura features and multi-resolution filtering techniques such as the wavelet transform.

Color Features: Color is considered the most dominant and distinguishing visual feature, and histograms are generally adopted to describe it. A color histogram describes the global color distribution in an image and is a frequently used technique for content-based image retrieval because of its efficiency and effectiveness; a brief sketch of such a histogram descriptor is given below. By characterizing a color space, a color can be unambiguously identified numerically by its coordinates. Color spaces refer to distinct wavelengths of the electromagnetic spectrum whose interaction with various objects forms a chromatic space. The most often utilized color spaces are RGB (red, green, blue), CMY (cyan, magenta, yellow), CMYK (cyan, magenta, yellow, black; used in color printers), HSV (hue, saturation, value), etc.
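As an illustration of the colour histogram descriptor mentioned above, the sketch below computes a per-channel RGB histogram. It is a generic example rather than the authors' implementation; the bin count and the use of NumPy/scikit-image are assumptions.

```python
import numpy as np
from skimage import io

def rgb_histogram(path, bins=16):
    """Global colour histogram: one histogram per RGB channel, concatenated
    into a single descriptor for content-based comparison. Assumes an
    8-bit RGB image; bins=16 is an illustrative choice."""
    img = io.imread(path)
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()   # normalise so images of different sizes compare
```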
With the help of these features, classification can be done. There are many classification methods used to classify entities in images; broadly, they fall into two categories: supervised and unsupervised. K-nearest neighbour is one of the simplest methods for the classification process.
II. RELATED WORK
Here, we first give a brief survey of the general methodologies for man-made entity extraction from remote sensing images, present LSEs and their applications in remote sensing, and briefly examine the state-of-the-art entity detection strategies. Extraction of entities is valuable for many applications, such as mapping impervious surfaces [1]-[3], thematic cartography [4], timely updating of urban GIS [5], [6], disaster assessment [7], [8], and military reconnaissance [9]. The primary issue with past work is that it extracted only one category of item, either artificial entities or natural entities. Because the texture characteristics of the region of interest and the background are similar, it is sometimes troublesome to distinguish the desired entities in satellite pictures, and since satellite pictures contain various kinds of noise and distortion, extracting both categories from them is a challenging task. In this work we attempt to overcome this issue and extract both artificial and natural kinds of entities. Typically, man-made entities can be identified by their intrinsic geometric properties or spectral signatures [10]. According to state-of-the-art methods, roads show up as regions of homogeneous colour and intensity [11]-[13], and buildings as rectangles [14] or parallel lines [15]. Over recent years, a series of methodologies has been developed for entity extraction from optical remote sensing images; some comprehensive surveys can be found in [18] and [19]. Nonetheless, man-made object extraction is still an open issue. Segmentation can be performed with the help of various algorithms, and many strategies for extracting items from an image have been proposed, but the ideal strategy is still being sought and the strategies keep evolving. Level set evolution is one such segmentation strategy, used for segmenting parts of a picture. The performance of level set evolution has been compared with state-of-the-art methods, and the level set technique gave better results [1].
III. PROPOSED METHODOLOGY
Entity recognition plays a crucial role in every scenario of human life, and entity detection is significant for automated systems. In today's world, automated systems have become one of the fastest-growing and most interesting areas, and entity recognition is a very active area in image processing. The actual task in our proposed method is to crop the artificial and natural entities from an image and classify them. To approach this, we first collect proper images for training and testing, crop the relevant data, extract its features, and then, according to the training features, classify the entities in the test images. The steps of the proposed methodology are given in Fig. 1 and are as follows:

Image Acquisition: Image acquisition is the step of collecting images from the sources for the implementation process. We take pictures of many areas of the globe from satellite imagery. Satellite images are interrupted and covered by a lot of distortion, which is why entity extraction from them is difficult. With the intent of improving satellite-image entity extraction, we use satellite images from Google Maps as the test data set; each image then proceeds to the further processing steps.
[Block diagram stages: Image Acquisition → Image Pre-processing → Level Set Evolution / Thresholding Segmentation → Feature Extraction → Training of Data Samples → Testing and Classification]
Fig 1: Block diagram of proposed methodology

Image Pre-processing: Because the input image may come in many different formats, pre-processing is the step of preparing the obtained image so that all further processing becomes easier and more accurate. It removes noise and corrects the image into a suitable form so that the next step, especially segmentation, can be accurate. In our project we sharpen the edges of the input image so that regions can be detected efficiently; a small sketch of such a sharpening step is given below.
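The paper does not name a specific sharpening filter; as one plausible reading of this pre-processing step, edges could be sharpened with unsharp masking, as in the sketch below (function names and parameter values are illustrative assumptions, not the authors' settings).

```python
from skimage import io, color
from skimage.filters import unsharp_mask

def preprocess(path, radius=2, amount=1.5):
    """Hypothetical pre-processing step: convert to grayscale and sharpen
    edges with unsharp masking so region boundaries stand out before
    segmentation. radius and amount are illustrative values only."""
    gray = color.rgb2gray(io.imread(path))   # assumes an RGB input image
    return unsharp_mask(gray, radius=radius, amount=amount)
```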
Image segmentation: Image segmentation is the process of partitioning an image into its sub-parts, and it is one of the most important stages in image processing. Proper segmentation is not an easy task, and it becomes more challenging with noisy images; to obtain an efficient segmentation, the noise should be reduced and the quality of the picture improved. We use two types of segmentation: level set evolution and thresholding.
H. Level set evolution
The level set technique was proposed by Osher and Sethian in 1988 and has been broadly used in image segmentation, image smoothing, motion segmentation, moving-target tracking, image restoration, etc. The LSE proposed by Osher and Sethian in [20] is given as follows:

φ_t = F(κ)|∇φ|     (1)

where φ denotes the level set function (LSF), t is the temporal variable, κ denotes the mean curvature of the level set curve, and F(κ) is the speed function that drives the evolution of the zero level curve.
Fig 2: Zero level curve (ZLC) moving towards the entity

Fig 3: Entity recognition by level set curve
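The authors rely on the fast LSE of [1]; purely as an illustration of Eq. (1), the sketch below performs one explicit Euler step of the evolution on a discretised level set function, using the curvature-dependent speed F(κ) = κ (motion by mean curvature) as an assumed example speed function. It is a generic sketch of the evolution equation, not the authors' fast implementation.

```python
import numpy as np

def _dx(a):  # central difference along x (columns)
    return (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / 2.0

def _dy(a):  # central difference along y (rows)
    return (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / 2.0

def curvature(phi, eps=1e-8):
    """kappa = div(grad(phi)/|grad(phi)|), the mean curvature of the level sets."""
    px, py = _dx(phi), _dy(phi)
    norm = np.sqrt(px**2 + py**2) + eps
    return _dx(px / norm) + _dy(py / norm)

def lse_step(phi, dt=0.1, speed=lambda k: k):
    """One explicit Euler step of phi_t = F(kappa)|grad(phi)| (Eq. 1).
    The default F(kappa) = kappa gives curvature-driven smoothing."""
    grad_mag = np.sqrt(_dx(phi)**2 + _dy(phi)**2)
    return phi + dt * speed(curvature(phi)) * grad_mag

# usage: start from a signed distance function phi0 and iterate lse_step
# until the zero level curve settles on the entity boundary.
```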
I. Thresholding
Thresholding is one of the most popular and straightforward approaches to segmentation. The simplest thresholding strategies replace each pixel in an image with a black pixel if the image intensity I_ij is less than some fixed value T (i.e., I_ij < T), and with a white pixel otherwise.
Fig 4: Thresholding Segmentation
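A minimal sketch of the thresholding step is given below. Since the paper does not state how the fixed value T is chosen, Otsu's method is used here as an assumed default.

```python
import numpy as np
from skimage import io, color
from skimage.filters import threshold_otsu

def threshold_segment(path, T=None):
    """Binary thresholding: pixels with intensity below T become
    background (0), the rest foreground (1)."""
    gray = color.rgb2gray(io.imread(path))   # intensities scaled to [0, 1]
    if T is None:
        T = threshold_otsu(gray)             # data-driven threshold when no fixed T is given
    return (gray >= T).astype(np.uint8)
```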
Feature Extraction: Feature extraction is the process of obtaining the desired information or data from an image; this may be colour information, texture information or shape information. We use GLCM texture features and regionprops shape features: 22 GLCM features and 8 regionprops features, for a total of 30 features.
A. Texture feature extraction with GLCM features
The 22 features we use for texture feature extraction are energy, dissimilarity, contrast, entropy, correlation, homogeneity, autocorrelation, maximum probability, inverse difference, cluster shade, cluster prominence, sum entropy, sum of squares, sum average, sum variance, difference variance, difference entropy, the two information measures of correlation, maximum correlation coefficient, inverse difference normalized and inverse difference moment normalized. The gray-level co-occurrence matrix captures specific properties of the spatial distribution of gray levels within the textured image.
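As a sketch of how part of this texture descriptor can be computed, the snippet below uses scikit-image's GLCM utilities; graycoprops directly supplies only a few of the 22 listed features, and the remaining ones (cluster shade, sum/difference statistics, etc.) would have to be derived from the co-occurrence matrix itself. The distances, angles and number of gray levels below are assumptions, not values stated in the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    """Compute a subset of the GLCM texture features on an 8-bit grayscale image."""
    glcm = graycomatrix(gray_uint8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean()
             for p in ('contrast', 'dissimilarity', 'homogeneity',
                       'energy', 'correlation')}
    # entropy computed directly from the averaged normalised co-occurrence matrix
    p = glcm.mean(axis=(2, 3))
    feats['entropy'] = -np.sum(p * np.log2(p + 1e-12))
    return feats
```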
B. Regionprops features
A region is characterised by its boundary properties, based on the shape of the entity. Regionprops is used for extracting the properties of an image region as shape features; many region properties exist, such as area, centroid, convex area, eccentricity, etc., and the result is stored as a structure array. We use 8 regionprops features in our work, i.e. Euler number, total area, mean orientation, equivalent diameter, extent, solidity, convex area and major axis length; a short sketch of this extraction is given after the classification overview below.

Testing and classification: After feature extraction we move on to classification. We use two classification methods to classify the entities in an image and compare them with each other: K-nearest neighbour and a feed-forward back-propagation neural network.
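Below is the sketch of the regionprops extraction referred to above, assuming the paper's feature names map onto scikit-image's regionprops attributes as indicated in the comments ("Total Area" → area, "Mean Orient" → orientation); the mapping and the choice of the largest region are assumptions of this sketch.

```python
from skimage.measure import label, regionprops

def shape_features(binary_mask):
    """Extract the eight region properties listed above from the largest
    connected region of a binary segmentation mask."""
    regions = regionprops(label(binary_mask))
    if not regions:
        return None
    r = max(regions, key=lambda reg: reg.area)   # keep the dominant region
    return {
        'euler_number': r.euler_number,
        'area': r.area,                          # "Total Area"
        'orientation': r.orientation,            # "Mean Orient"
        'equivalent_diameter': r.equivalent_diameter,
        'extent': r.extent,
        'solidity': r.solidity,
        'convex_area': r.convex_area,
        'major_axis_length': r.major_axis_length,
    }
```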
C. Classification with K nearest neighbour
In pattern recognition, the k-nearest neighbour method classifies entities based on the closest training prototypes in the feature space. K-NN is a type of instance-based learning, where the function is only approximated locally and all computation is deferred until classification. It is among the most straightforward of all machine learning methods. An entity is classified by a majority vote of its neighbours, being assigned to the class most common among its k nearest neighbours, where k is a positive integer, typically small. If k = 1, the entity is simply assigned to the class of its single nearest neighbour.
Suppose we have a two-dimensional feature space produced by two measures made on each sample, measure 1 and measure 2. Each sample gives different values for these measures, but the samples of different classes give rise to clusters in the feature space, where each cluster is associated with a single class. As an illustration, suppose we have seven samples of two known textures, class A and class B, depicted by X and O respectively, and we want to classify a test sample, depicted by +, as belonging either to class A or to class B (i.e. we assume that the training data contains representatives of all possible classes). Its nearest neighbour, the sample with least distance, is one of the samples of class A, so we could say that our test sample appears to be another sample of class A (i.e. the class label associated with it is class A). The clusters will be far apart for measures that have good discriminatory ability, whereas they will overlap for measures that have poor discriminatory ability; this is how measures can be chosen for particular tasks. A minimal sketch of the K-NN stage follows.
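The sketch below sets up the K-NN classification stage with scikit-learn, assuming binary labels (0 = natural, 1 = artificial) over the 30-dimensional feature vectors described earlier; the added feature scaling is a design choice of this sketch, not something stated in the paper, but it helps because GLCM and regionprops features live on very different ranges.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X_train: one row of the 30 texture + shape features per training region.
# y_train: 0 = natural entity, 1 = artificial entity (assumed label coding).
def build_knn(X_train, y_train, k=3):
    """Fit a K-NN classifier on scaled feature vectors; k=3 is illustrative."""
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    clf.fit(X_train, y_train)
    return clf

# usage: label = build_knn(X_train, y_train).predict([features_of_test_region])
```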
IV. RESULTS AND DISCUSSION
The implementation of the proposed methodology is discussed here, showing how artificial and natural entities are extracted from satellite images and classified. Two classification methods are compared according to their performance. A total of 60 data samples from the database are used for training, and 50 images are used for testing. Precision, recall and accuracy are determined to evaluate the performance of the proposed methodology.
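Assuming the standard definitions in terms of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN), which the paper does not state explicitly, the reported metrics are computed as:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Accuracy = (TP + TN) / (TP + TN + FP + FN)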
Fig 5: Evaluation results plotted against the number of test images: (a) precision %, (b) recall %, (c) accuracy %
Testing was done on a total of 50 images. The precision, recall and accuracy were evaluated: precision is 77.44%, recall is 87.34% and accuracy is 76.91%, which indicates comparatively good performance, as shown in Fig 5.
V. CONCLUSION
Entity recognition is an important and beneficial task in image processing, with many applications and uses in our developing age of science. Here we have used satellite images for entity extraction and classified the artificial and natural entities in them. Level set evolution is used for segmentation, and the extracted features are used to classify the entities. The precision and recall are calculated and give satisfying results. Future research can address specific natural or artificial entities such as ponds or trees, and the degree of automation can be improved in further work.
About the Authors
Prof. Feroza M. Mirajkar is working as an Assistant Professor in the Department of Electronics and Communication at Khaja Banda Nawaz College of Engineering, Kalburgi, Karnataka, India, and completed a Master of Technology (M.Tech) at Basaveshwara College of Engineering, Bagalkote, Karnataka, India.
Dr. Ruksar Fatima is working as Head and Professor in the Department of Computer Science and Engineering at Khaja Banda Nawaz College of Engineering, Kalburgi, Karnataka, India.
REFERENCES
Z. Li, W. Shi, Q. Wang, and Z. Miao, "Extracting man-made objects from high spatial resolution remote sensing images via fast level set evolutions," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 2, Feb. 2015.

C. S. Wu and A. T. Murray, "Estimating impervious surface distribution by spectral mixture analysis," Remote Sens. Environ., vol. 84, no. 4, pp. 493–505, Apr. 2003.

S. L. Powell, W. B. Cohen, Z. Yang, J. D. Pierce, and M. Alberti, "Quantification of impervious surface in the Snohomish Water Resources Inventory Area of Western Washington from 1972–2006," Remote Sens. Environ., vol. 112, no. 4, pp. 1895–1908, Apr. 2008.

Q. H. Weng, "Remote sensing of impervious surfaces in the urban areas: Requirements, methods, and trends," Remote Sens. Environ., vol. 117, pp. 34–49, Feb. 2012.

R. Duca and F. Del Frate, "Hyperspectral and multiangle CHRIS-PROBA images for the generation of land cover maps," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 10, pp. 2857–2866, Oct. 2008.

R. J. Dekker, "Texture analysis and classification of ERS SAR images for map updating of urban areas in the Netherlands," IEEE Trans. Geosci. Remote Sens., vol. 41, no. 9, pp. 1950–1958, Sep. 2003.

"… classification," Pattern Recognit. Lett., vol. 24, pp. 3037–3058, Dec. 2003.

S. E. Park, Y. Yamaguchi, and D. J. Kim, "Polarimetric SAR remote sensing of the 2011 Tohoku earthquake using ALOS/PALSAR," Remote Sens. Environ., vol. 132, pp. 212–220, May 2013.

X. H. Tong et al., "Building-damage detection using pre- and post-seismic high-resolution satellite stereo imagery: A case study of the May 2008 Wenchuan earthquake," ISPRS J. Photogramm. Remote Sens., vol. 68, pp. 13–27, Mar. 2012.
Corresponding Author: Prof. Feroza M. Mirajkar*
Assistant Professor, Department of Electronics and Communication Engineering, Khaja Banda Nawaz College of Engg. & Technology, Gulbarga, Karnataka, India
E-Mail – mmferoza@gmail.com