Object-based classification of wetland vegetation using very high-resolution unmanned air system imagery

European Journal of Remote Sensing, 2017, Vol. 50, No. 1, 564–576. https://doi.org/10.1080/22797254.2017.1373602

Roshan Pande-Chhetri (a), Amr Abd-Elrahman (a), Tao Liu (a), Jon Morton (b) and Victor L. Wilhelm (c)

(a) School of Forest Resources and Conservation – Geomatics, University of Florida, Plant City, FL, USA; (b) Invasive Species Management Branch, USACE, Stuart, FL, USA; (c) Surveying and Mapping Branch, Operations Division – UAS Section, U.S. Army Corps of Engineers, Jacksonville, FL, USA

CONTACT: Tao Liu, taoliu@ufl.edu, School of Forest Resources and Conservation – Geomatics, University of Florida, 1200 N Park Rd, Plant City, FL 33563, USA

ARTICLE HISTORY: Received 24 February 2017; Revised 23 August 2017; Accepted 28 August 2017

KEYWORDS: Remote sensing; UAS; wetland vegetation mapping; object-based classification; pixel-based classification; Support Vector Machine

© 2017 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

ABSTRACT

The purpose of this study is to examine the use of multi-resolution object-based classification methods for the classification of Unmanned Aircraft Systems (UAS) images of wetland vegetation and to compare their performance with pixel-based classification approaches. Three types of classifiers (Support Vector Machine, Artificial Neural Network and Maximum Likelihood) were utilized to classify the object-based images, the original 8-cm UAS images and the down-sampled (30 cm) version of the image. The results of the object-based and the two pixel-based classifications were evaluated and compared. Object-based classification produced higher accuracy than pixel-based classification when the same type of classifier was used. Our results also showed that under the same classification scheme (i.e. object or pixel), the Support Vector Machine classifier performed slightly better than the Artificial Neural Network, which often yielded better results than Maximum Likelihood. With an overall accuracy of 70.78%, object-based classification using the Support Vector Machine showed the best performance. This study also concludes that while UAS has the potential to provide flexible and feasible solutions for wetland mapping, some issues related to image quality still need to be addressed in order to improve the classification performance.

Introduction

Remote sensing has been used regularly to identify vegetation communities and monitor land cover changes. Olmsted and Armentano (1997) highlighted the need for monitoring wetland vegetation and its distribution in order to detect changes in the terrestrial–aquatic landscape transition. Wetland classification is challenging due to vegetation cover dynamics, with water fluctuation creating rapid and frequent changes in the type, distribution and density of plant coverage (Belluco et al., 2006; Smith, Spencer, Murray, & French, 1998). The task is further complicated by the need for frequent data collection and for imagery of high spectral and spatial resolution. Coarse spatial resolution images captured by satellite or high-altitude manned airborne missions may produce lower image classification accuracy, especially in riparian and wetland areas (Maheu-Giroux & De Blois, 2005; Yang, 2007). On the contrary, high spatial resolution imagery facilitates the extraction of texture features (Ge, Carruthers, Gong, & Herrera, 2006; Wang, Sousa, Gong, & Biging, 2004; Waser et al., 2008) that can assist the classification process. The ability to capture images frequently is important for monitoring wetland vegetation undergoing rapid and severe seasonal variation in response to changes in water level and weather. Imagery with higher spectral resolution, such as hyperspectral imagery, can also be useful for vegetation analysis. However, acquiring images with high spatial, spectral and temporal resolution is often costly and can be logistically difficult.
The use of high spatial resolution aerial imagery captured by small (1 to 2 m wingspan) Unmanned Aircraft Systems (UAS) in natural resource management is rapidly increasing (Abd-Elrahman, Pearlstine, & Percival, 2005; Laliberte, Rango, & Herrick, 2007; Rango et al., 2006; Watts et al., 2008). The use of such images is motivated by their potentially high temporal and spatial resolutions, increased technological and operational feasibility and advances in image analysis techniques. Improvements in technology and algorithms are gradually enabling autonomous flying to produce high-quality georeferenced orthorectified images. The temporal and spatial resolutions of UAS imagery are controlled by the operator/user, who decides the mission parameters (e.g. flying height) along with when exactly to fly; this gives a significant advantage over traditional piloted image-capturing missions.

Although hyperspectral and lidar sensors have been developed specifically for small UAS, these sensors are still relatively expensive and lack the spatial resolution of some of the off-the-shelf three-band (RGB) cameras. Using inexpensive cameras, as long as they can satisfy user needs, is favored, given the relatively high possibility of UAS crashes. To take advantage of the high spatial resolution of low-cost RGB cameras onboard most commercial UAS, analysis algorithms that build on the images' spectral, contextual and textural information need to be developed. Increasing image spatial resolution does not necessarily increase image classification accuracy; this is probably due to the increase in intra-class spectral variability and the decrease in cross-class statistical separability (Hsieh et al., 2001; Yu et al., 2006).
Traditionally, texture features extracted from high spatial resolution imagery have been used to improve classification (Ashish, McClendon, & Hoogenboom, 2009; Coburn & Roberts, 2004; Pearlstine, Portier, & Smith, 2005; St-Louis, Pidgeon, Radeloff, Hawbaker, & Clayton, 2006; Szantoi, Escobedo, Abd-Elrahman, Smith, & Pearlstine, 2013). Object-Based Image Analysis (OBIA) takes advantage of the multi-scale details in high spatial resolution imagery (Laliberte & Rango, 2009) as well as other information summarized at the object level. OBIA is well known to suit high spatial resolution UAS imagery, and it has frequently been utilized in urban/urban–rural feature extraction (Aguilar, Saldaña, & Aguilar, 2013; Chen, Shi, Fung, Wang, & Li, 2007; Johnson, 2013; Li, Bijker, & Stein, 2015; Tehrany, Pradhan, & Jebur, 2013). Along with being applied to extract features in urban and rural landscapes, OBIA techniques helped in providing detailed classification of natural plant communities as represented in wetland areas (Cordeiro & Rossetti, 2015; Dronova, Gong, & Wang, 2011; Gao, Trettin, & Ghoshal, 2012; Moffett & Gorelick, 2013). Nevertheless, the high spatial resolution of the images taken by UAS creates a challenge, especially in natural land cover classification. Unlike pixel-based analysis, OBIA segments the image into relatively homogeneous and semantically coherent objects, based on different homogeneity criteria at different scales. In OBIA, spectral information is aggregated per object, and other features such as textural and contextual information become available for object analysis. Different classification algorithms can be applied on the objects at different scale levels, and classification rules can be developed based on the thematic meaning of the objects.

Several studies claim that object-based classification has greater potential for classifying higher-resolution imagery than pixel-based methods (Gao & Mas, 2008; Myint, Gober, Brazel, Grossman-Clarke, & Weng, 2011; Oruc, Marangoz, & Buyuksalih, 2004; Whiteside, Boggs, & Maier, 2011; Willhauck, Schneider, De Kok, & Ammer, 2000). Contrasting with traditional pixel-based methods utilizing the spectral information of individual image pixels, OBIA has the advantage of easily incorporating textural and thematic information, geometrical and contextual relationships and ancillary data into the classification process. While there have been some studies comparing object-based and pixel-based classification techniques, more studies are needed in wetland land cover classification using sub-decimeter UAS imagery. Previous studies suggest that visual inspection indicated that the OBIA approach incurred fewer errors in larger regions of homogeneous land cover and performed better in temporal analysis of land cover change (Dingle Robertson & King, 2011). Niemeyer and Canty (2003) claimed that object-based classification has greater potential for detecting change; this is mainly a result of the use of higher-resolution imagery. Castillejo-González et al. (2009) found that an object-based method outperformed five pixel-based supervised classification algorithms (parallelepiped, minimum distance, Mahalanobis distance, spectral angle mapper and Maximum Likelihood [ML] classifiers) in mapping crops and agro-environmental land cover. Far greater accuracy (83.25%) was achieved by Yan, Mas, Maathuis, Xiangmin, and Van Dijk (2006) when mapping 12 land cover classes using object-based classification versus a pixel-based approach (46.48%). The object-based methodology used by Gao and Mas (2008) outperformed both ML and nearest-neighborhood pixel-based methods in mapping land cover using SPOT 5 (10-m spatial resolution) imagery. However, the authors noted that after smoothing filters were applied to the imagery, the accuracy of the pixel-based methods increased.

Earlier pattern recognition efforts implemented low-level machine learning algorithms such as image thresholding, morphology and template matching. Many of these algorithms were concerned with highly contrasted features and did not involve shape, contextual and textural information. Several machine learning algorithms, such as the Support Vector Machine (SVM) (Mountrakis, Im, & Ogole, 2011) and Artificial Neural Networks (ANN) (Benediktsson, Swain, & Ersoy, 1990), have been implemented using OBIA. The SVM algorithm separates the classes with a decision surface, known as the optimal hyperplane, such that it maximizes the margin between the classes (Cortes & Vapnik, 1995; Veenman, Reinders, & Backer, 2002).
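For reference, the maximum-margin idea can be written compactly. The following soft-margin formulation is the standard one from Cortes and Vapnik (1995), reproduced here as background rather than taken from this paper:

\[
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \;\; \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{n} \xi_{i}
\qquad \text{subject to} \qquad
y_{i}\left(\mathbf{w}\cdot\mathbf{x}_{i} + b\right) \geq 1 - \xi_{i}, \quad \xi_{i} \geq 0,
\]

where the decision surface is \( f(\mathbf{x}) = \operatorname{sign}(\mathbf{w}\cdot\mathbf{x} + b) \), the geometric margin being maximized is \( 2/\lVert \mathbf{w} \rVert \), and \( C \) trades margin width against training errors.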
ANN is another common machine learning framework that tries to solve a convex optimization problem.

Floating invasive plants such as water lettuce and water hyacinth are potentially detrimental to the aquatic ecosystems of Florida and are infesting lakes and rivers at an alarmingly rapid rate, prompting the State of Florida to pass a specific statute (Florida Statute 369.22) requiring agencies to manage these invasive plants at the species level. These plants need to be continually monitored and managed, which constitutes one of the motives for this study.

The primary goal of this study is to investigate whether UAS is a viable platform for mapping wetland vegetation. This study investigates the best approach among the commonly available methods to achieve optimal results for wetland vegetation classification in a wetland area in South Florida, using ultra-high-resolution UAS images as a way to detect and monitor invasive vegetation. Our study compared the performance of three types of common classifiers (SVM, ANN and ML) under two classification schemes (object based and pixel based). The next section (Methodology) describes the data used and the procedure. Study results are presented in the Results section. The Discussion section discusses the achieved results, and finally, a concise conclusion of the study is given in the Conclusion section.

Methodology

Study area and data

This study was conducted in the Eagle Bay wetland area, located at the northern side of Lake Okeechobee in South Florida and covering about 4700 acres.
The Eagle Bay area contains diverse freshwater wetland communities, including emergent and floating-leafed species that vary in texture, patch size and mixture. The United States Army Corps of Engineers (ACE) is specifically interested in monitoring the populations of aquatic non-native invasive species such as water hyacinth (Eichhornia crassipes) and water lettuce (Pistia stratiotes), which can cause serious ecological and navigational problems.

Aerial images were acquired by a 10-megapixel true color Olympus ES 420 camera mounted on the NOVA 2.1 UAS in July of 2012. The system was developed by the University of Florida Unmanned Aircraft Systems Research Program for the US ACE, Jacksonville District. The autonomous NOVA 2.1 model has a 2.7 m wingspan, weighs up to 6.4 kg and is capable of flying for up to 50 min, covering an 800-acre area in a single flight, as well as landing on water. The NOVA 2.1 system was designed for small-sized (less than 10K acres) or medium-sized (10–20K acres) sites. Typical daily acquisition using NOVA 2.1 depends on site geometry and the difficulty of navigating the site. Four to six flights per day can be achieved, and with cooperative weather, an experienced team can collect 20K acres in one work week.

Information from the navigation sensors onboard the NOVA 2.1 UAS was used to rectify and mosaic the images using the Agisoft PhotoScan software. Image preprocessing was conducted by ACE personnel according to their standard data collection and preprocessing procedures. In this study, we analyzed images of 8-cm ground pixel size that cover about one-fourth of the whole area and represent the plant species in the Eagle Bay area. Figure 1 shows the whole study area and a zoomed-in view (red box) of part of the area. Descriptions of the training dataset used in the classification and its collection procedure are provided in the Image classification section.

Figure 1. Eagle Bay study area.

Accuracy assessment was conducted using a randomly generated set of points. Initially, 264 points were generated randomly over the whole study area. Then, an additional 99 points were randomly generated in the area where a high diversity of vegetation classes exists, resulting in a total of 373 assessment points. These points were labeled through visual inspection of the high-resolution UAS images. The inspection was conducted by biologists with extensive experience in interpreting similar images and was assisted by field verification for manual delineation of land cover classes in the area. This dataset was used to assess both the OBIA and the pixel-based classification results.

Image segmentation

Most pixel-based classifications tend to utilize spectral information at individual pixels and, potentially, textural information extracted from neighboring pixels. Pixel-based classification can highlight noise, create salt-and-pepper effects and ignore important contextual, topological and semantic information in the images (Baatz & Schäpe, 2000; Blaschke, 2010). In this study, object-based classification was performed using Trimble's eCognition object-based analysis software, which segments the images into homogeneous objects and uses information derived from each object in the classification.

The multi-scale fractal net evolution object segmentation algorithm (Baatz & Schäpe, 2000), implemented in the eCognition software, starts with individual pixels and applies a bottom-up merging approach based on heterogeneity measures. Neighboring objects are merged until a predetermined degree-of-fitting cost is reached. This degree-of-fitting cost, named the scale parameter, is a relative threshold that determines when the segmentation growth, based on spectral and spatial similarity, needs to be stopped. Scale controls object size and leads to the multi-scale characteristic of the algorithm; a notational sketch of the merge criterion is given below. Object formation is greatly influenced by the segmentation scale, as different objects can be identified at different scales depending on image resolution and the properties of the objects of interest (Laliberte & Rango, 2009). The optimal segmentation scale is often determined empirically through visual inspection of the results (Flanders, Hall-Beyer, & Pereverzoff, 2003; Radoux & Defourny, 2007), mainly based on the size and the complexity of the target objects (Whiteside et al., 2011).
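As a notational sketch of the merge criterion just described (our paraphrase of Baatz & Schäpe, 2000, not an equation printed in this paper): when two adjacent objects with \( n_{1} \) and \( n_{2} \) pixels are candidates for merging into a combined object \( m \), the increase in spectral heterogeneity can be written as

\[
\Delta h_{\mathrm{color}} = \sum_{c} w_{c}\left[\, n_{m}\,\sigma_{c,m} - \left( n_{1}\,\sigma_{c,1} + n_{2}\,\sigma_{c,2} \right) \right],
\]

where \( \sigma_{c} \) is the standard deviation of band \( c \) within an object and \( w_{c} \) is a band weight. A shape term is blended in as \( f = w\,\Delta h_{\mathrm{color}} + (1 - w)\,\Delta h_{\mathrm{shape}} \), and a candidate merge is accepted only while \( f \) remains below a threshold set by the scale parameter, which is why larger scale values permit more merging and yield larger objects.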
Our study area consists of diverse and mixed plant species, including patches of different sizes that can be separated at different scales. Accordingly, the whole mosaicked image was segmented into multiple hierarchical object levels using the multi-resolution segmentation method in eCognition. We visually tested a series of segmentation scale levels (5, 10, 15, 40, 75, 110, 150, 200 and 300) that systematically increase from fine to coarse. Our visual inspection revealed that fine scale segmentation using scale 15 tends to produce small objects with low relevance to their textural information; however, such fine scale segmentation results expressed the visual diversity of the objects belonging to specific classes. Coarse scale segmentation levels (e.g. 110 and larger) tend to produce large objects with multiple classes mixed; however, they clearly showed large patches of homogeneous classes such as large water bodies. Intermediate scale results, such as scale 75, produced objects of single classes as well as mixed objects of two classes, while level 40 segmentation resolved the mixed objects observed in the higher-level (e.g. level 75) segmentation results. Using level 40 results only, however, carried the potential of losing important textural information and increasing the fragmentation of the classification results. After exploring various scale levels, four scales (L15, L40, L75 and L200) were selected for the analysis. These scales are referred to in this paper as (1) L200: coarse scale, (2) L75: high scale, (3) L40: low scale and (4) L15: fine scale. The L200 coarse segmentation was chosen for broad categorization of the water and exposed soil (unpaved road) land cover from the rest of the vegetation land cover types. The L40 and L75 scales were used to classify the remaining vegetation classes of various community sizes. For example, large homogeneous water lettuce communities were detected at the high (L75) scale, while mixed communities (e.g. water lettuce and American lotus (Nelumbo lutea)) were more accurately delineated in the low (L40) scale classification. The fine (L15) scale segmentation was used to provide additional information, such as the number of unique classes and the average class area, for the low (L40) scale classification.

Object features

OBIA enables the use of hundreds of spectral, textural and geometrical object features computed from the individual pixels within each object. Additionally, contextual features are facilitated based on the topological relationships among features in the same segmentation level and across parent and child levels. The large number of often highly correlated features highlights the typical high-dimensionality problem associated with this type of analysis. We used the eCognition feature optimization tool, which computed class separability for different feature and class combinations using the training dataset, to assist the feature selection process. The feature optimization results revealed the features producing the largest average minimum distance between the training samples of different classes. As part of our preliminary analysis, we ran the SVM classifier (see the Image classification section) using the features identified in the feature optimization process as well as combinations of those features. The results of each test were assessed based on the confusion matrix of the accuracy assessment dataset.
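The feature optimization step can be pictured with a small sketch. The code below is illustrative only: it scores candidate feature subsets by the smallest distance between class centroids of standardized training samples, a simplification of eCognition's separability measure (which operates on distances between individual samples); the function names and the brute-force search are our own.

```python
import numpy as np
from itertools import combinations

def min_class_separation(X, y, feature_idx):
    """Smallest centroid-to-centroid distance between any two classes,
    measured in the standardized subspace of the chosen features."""
    Xs = X[:, feature_idx]
    Xs = (Xs - Xs.mean(axis=0)) / (Xs.std(axis=0) + 1e-12)  # put features on one scale
    centroids = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
    return min(np.linalg.norm(centroids[a] - centroids[b])
               for a, b in combinations(centroids, 2))

def best_feature_subset(X, y, n_features, subset_size):
    """Exhaustively score all subsets of a given size and keep the one with
    the largest minimum class separation (brute force, for illustration)."""
    return max(combinations(range(n_features), subset_size),
               key=lambda idx: min_class_separation(X, y, list(idx)))
```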
The best feature combination, i.e. the one that produced the highest overall accuracy and Kappa coefficient, was used in the final classification.

The feature combination used in this study involves spectral, textural and hierarchical context features. The spectral features include the mean values of the red, green and blue bands, the maximum difference between the mean intensities of any two bands, hue and saturation. Textural features include the angular second moment and correlation derived from the Gray Level Co-occurrence Matrix (GLCM) (Haralick, Shanmugam, & Dinstein, 1973). The GLCM is a matrix summarizing the probability of pairs of pixels with specific values occurring in given directions; this study used all directions to derive the GLCM (eCognition Developer, 2012). The angular second moment measures local homogeneity and is calculated by summing the squares of the normalized values of all cells. Correlation measures the linear dependency of the gray levels of neighboring pixels (eCognition Developer, 2012); an illustrative computation of both features follows at the end of this section. Hierarchical context features include the mean area of sub-objects at the fine scale level and the number of unique classes of sub-objects at the fine object level, where the fine object level in this study refers to level 15. We used the spectral features only to classify the objects at the L15 segmentation level. We were then able to use the classification results of this level to provide the contextual information used in the higher segmentation level (L40) classification. The use of spatial (textural) features did not improve the classification results of the L15 segmentation level, probably due to the small object size, and hence textural features were dropped from the classification process of this level. For the L40 and L75 classifications, textural, spectral and contextual features (the latter computed from the L15 classification for the L40 objects) were used.
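As an illustration of the two GLCM features named above, the sketch below uses scikit-image rather than eCognition. It assumes an 8-bit grayscale crop of a single image object and, for brevity, ignores the masking of pixels that fall outside the object boundary.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_object, levels=32):
    """Angular second moment (ASM) and correlation for one image object,
    averaged over four directions (0, 45, 90 and 135 degrees)."""
    # Quantize 0-255 values to a small number of gray levels to keep the GLCM dense.
    img = (gray_object.astype(float) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    asm = graycoprops(glcm, "ASM").mean()           # local homogeneity
    corr = graycoprops(glcm, "correlation").mean()  # gray-level linear dependency
    return asm, corr
```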
Image classification

Eight wetland and aquatic vegetation classes as well as two non-vegetation classes (water and unpaved road/exposed soil) identified in the study area are listed in Table 1. Some of the plant species in the area have significantly different spectral and textural properties depending on age and abundance. Two classes were assigned to each of these species, and corresponding training sets were prepared. In this context, two classes each were introduced for willow (willow matured and willow green), lotus (lotus and scattered lotus) and lettuce (lettuce and scattered lettuce). The southern side of the image shows glinted Lake Okeechobee waters, and hence a glinted water class separate from the regular water class was introduced.

Table 1. Classes used in the classification.

    Species name                    Used class name(s)            Invasive plant
1   Willow (Salix spp.) (a)         Willow matured, willow green
2   Smartweed (Polygonum spp.)      Smartweed
3   Dry cattail (Typha spp.)        Cattail
4   Water hyacinth                  Hyacinth                      Y
5   Water lettuce (a)               Lettuce, scattered lettuce    Y
6   Water lily (Nymphaea spp.)      Lily
7   American lotus (a)              Lotus, scattered lotus
8   Moonvine (Ipomea alba)          Moonvine
9   Submerged grass                 Submerged vegetation
10  Water                           Water, glint
11  Unpaved road/exposed dry soil   Road

(a) Species split into two classes.

To apply the supervised classification method for multiscale OBIA, we needed to collect training datasets to train the classifiers. In this study, three sets of training objects were collected at levels 15, 40 and 75 using one set of 422 randomly generated points. The objects from these levels in which the generated random points reside were identified and interpreted visually in the images. ACE biologists, who had been actively monitoring and controlling invasive plants at the site for several years, conducted the visual interpretation process. Only the coarse level (L200) was classified using simple rules to isolate large water bodies and bright exposed soils. Besides the pure vegetation classes listed in Table 1, three mixed classes were added at the L75 segmentation level: lotus–lettuce, lotus–cattail and lettuce–smartweed.

Many OBIA studies have utilized hierarchical classification (Myint et al., 2011) to classify broad vegetation and non-vegetation land cover. Our study area is comprised mostly of wetland vegetation and water; only a small area of upland exposed soils or unpaved roads exists at the northern side of the image. It was found more efficient to broadly categorize the water and exposed soils using simple rules applied to the spectral properties of the coarse segmentation (L200) objects. The two spectral criteria applied to this level are the following (a rule sketch follows at the end of this section):

● Max difference: Non-vegetation classes (road and water) showed a flat spectral graph, represented by a smaller value of the maximum difference feature, and were separated from the rest of the wetland area using a single threshold applied to the maximum difference feature of each object.
● Blue layer mean: The non-vegetation classes were then categorized into water and road classes using a threshold applied to the blue band (water has higher blue band values).

The rest of the area, still unclassified, was then subjected to further classification. This area is mostly wetland vegetation plus some smaller water patches that escaped the broad categorization of the coarse scale objects. Given the complex land cover of the study area and the low spectral resolution of the RGB images, classification accuracy was low if only objects from a single segmentation level were used. Our preliminary analysis indicated that the high (L75) and low (L40) segmentation levels captured the main variations in patch scale and were necessary to identify the existing wetland vegetation. The fine scale (L15) segmentation results were used to compute hierarchical context features for the L40 level classification, as mentioned earlier.

The classification process was performed on the high (L75) and low (L40) segmentation levels using the training dataset and the selected object features. The low (L40) classification results separated all plant species; however, class confusion existed in some objects located within large textured objects at the L75 level. On the other hand, the high (L75) classification performed well for the majority of classes by minimizing intra-class variability; however, it did not separate smaller species patches in mixed areas as efficiently. Mixed classes used in the L75 classification were resolved using the L40 classification. The L40 and L75 classification results were merged, with priority given to the single classes of the L75 classification followed by the L40 results.

Several classifiers were applied to the multi-resolution segmented objects. The ML classifier was used due to its extensive historical use (Foody, Campbell, Trodd, & Wood, 1992). The SVM and ANN machine learning methods were also used. In addition to the object-based classification, pixel-based classification was applied to the same image using the same training set data and classification methods (ML, SVM and ANN). The original image (8 cm pixel size) was down-sampled to a 30-cm image using the Pixel Aggregate algorithm implemented in the ENVI 5.0 software (Harris, 2017). The pixel-based classifiers were applied to the down-sampled image, and the results were compared with the OBIA results.
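The two coarse-scale rules can be summarized in a few lines of Python. The thresholds below are placeholders of our own choosing (the paper does not report its numeric values), and the maximum-difference expression is one common definition that may differ slightly from eCognition's feature.

```python
import numpy as np

# Hypothetical thresholds -- the paper does not report the values it used.
MAXDIFF_T = 0.12  # below this, the object's spectral curve is "flat" (non-vegetation)
BLUE_T = 90.0     # above this blue mean, non-vegetation is labeled water, else road

def classify_l200_object(mean_r, mean_g, mean_b):
    """Broad L200 rule set: split non-vegetation from vegetation using the
    maximum-difference feature, then split water from road by the blue mean."""
    means = np.array([mean_r, mean_g, mean_b])
    max_diff = (means.max() - means.min()) / means.mean()  # one common definition
    if max_diff >= MAXDIFF_T:
        return "vegetation (classify further at L40/L75)"
    return "water" if mean_b > BLUE_T else "road"
```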
Post-classification refinement and accuracy assessment

Refinement of the OBIA classification results was performed based on species thematic and contextual information. The hyacinth plant species generally grows in close proximity to open water. To separate hyacinth, open water objects were identified as those water objects with at least half of their neighboring objects belonging to the "water" or "submerged grass" classes. Then, all hyacinth objects located more than 80 m (a distance based on personal communication with ACE biologists) away from open water objects were reassigned to the combined willow-smartweed class. Finally, classes representing the same species, which had been split into separate classes with different training sets due to spectral/textural variations (the lettuce and scattered lettuce classes and the lotus and scattered lotus classes), were merged back together. Similarly, the water, glinted water and submerged grass classes were merged, since our focus is primarily on emergent wetland plants. Green-colored willows appear very close to smartweeds in color and texture, and these two land covers are visually inseparable; we do not believe there is enough information in the images at any tested resolution to separate these two classes, and hence they were merged in this study.

To evaluate the accuracy of the OBIA and pixel-based classification results, accuracy assessment data prepared independently from the training dataset were used. A confusion matrix was created for each classification result using the truth assessment points. The overall accuracy and Kappa coefficients for the OBIA and pixel-based classified maps were tabulated and compared (both statistics are sketched below). Figure 2 shows a schematic diagram of the overall methodology adopted in this study, including the multi-scale segmentation, classification, post-refinement and accuracy assessment steps.

Figure 2. Schematic diagram of the overall methodology used in this study.
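For completeness, here is a minimal sketch of the two summary statistics computed from a confusion matrix; these are the standard definitions, not code from this study.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a square confusion matrix
    (rows = classified map labels, columns = reference labels)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                   # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return p_o, (p_o - p_e) / (1.0 - p_e)
```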
Results

Object-based classification was conducted on the multiple segmentation levels of the study image using the ML, SVM and ANN classification methods. The results were compared with the corresponding pixel-based classifications. Figure 3 shows the three pixel-based maps resulting from the SVM, ANN and ML classifications applied at the original image (8 cm) resolution, in addition to the three object-based maps produced using the corresponding classifiers. A zoomed-in representation of the same results is shown in Figure 4 to highlight the details of the classification results. Visual comparison of the maps produced using the pixel- and object-based classification methods gives preference to the object-based results. As expected, the pixel-based classification results are generally fragmented and less appealing visually. The object-based results look smoother and show an overall better match to the land cover types expected from visual image interpretation. The salt-and-pepper effect was clearly alleviated by object-based classification compared to the pixel-based results; this is consistent with most research comparing the two approaches (Blaschke, 2010; Lillesand, Kiefer, & Chipman, 2014; Xie, Roberts, & Johnson, 2008).

Figure 3. Classified final maps (whole study area) using various classifiers. (a) Object-based SVM, (b) Object-based ANN, (c) Object-based ML, (d) Pixel-based SVM, (e) Pixel-based ANN, (f) Pixel-based ML (see Figure 4 for the zoom-in area).

Figure 4. Classified final maps (zoom-in area) using various classifiers. (a) Object-based SVM, (b) Object-based ANN, (c) Object-based ML, (d) Pixel-based SVM, (e) Pixel-based ANN, (f) Pixel-based ML.

The lettuce and lily classes are mixed in some areas on the ground. Some of the observed misclassification can be attributed to lily areas appearing in out-of-focus (smoothed) or tilted images, which produces texture features that match abundant lettuce signatures. For similar reasons, some lettuce in mixed and disturbed areas was assigned to the lily class. Figure 5 shows two examples of overlapped imagery of the same location with different quality (note the difference in image blurriness and the presence of haze). We expect that such variations affected the image mosaic used in the study and hence reduced the achieved classification accuracy. This observation highlights the effect of image quality and consistency on the classification results. Although these variations exist in all types of aerial imagery, they probably have a more significant effect on image mosaics produced from the large number of images captured by lightweight small UAS, as is the case in our research.

Figure 5. Two examples of inconsistencies in image properties and the associated effect on the mosaic.

Quantitative assessment of the classified maps again indicates the superior performance of the object-based classification, with higher overall accuracy and Kappa coefficients compared to the pixel-based results for the same type of classifier, as presented in Table 2. The overall accuracies of the pixel-based methods are between 50.9% and 61.9%. Pixel-based classification of the lower-resolution (30 cm) image scored better than that of the original 8-cm image (up to 61.9% for SVM). The OBIA results scored better still, with the object-based machine learning approaches producing overall accuracies around 70%. The object-based ML classifier performed moderately, with an overall accuracy of 58.2%. The best results were achieved using the object-based SVM classification, with an overall accuracy slightly above 70% and a Kappa coefficient of about 0.66.

Table 2. Classification accuracy.

Approach                                       Classifier   Overall accuracy (%)   Kappa coefficient
Object based                                   SVM          70.8                   0.659
                                               ANN          69.48                  0.643
                                               ML           58.2                   0.512
Pixel based on high-resolution (8 cm) images   SVM          56.8                   0.503
                                               ANN          50.9                   0.441
                                               ML           53.6                   0.471
Pixel based on low-resolution (30 cm) image    SVM          61.9                   0.560
                                               ANN          53.9                   0.474
                                               ML           52.8                   0.464

Since we observed similar errors in the two highest performing object-based classification methods (ANN and SVM), only the confusion matrix of the SVM results is presented in Table 3. Even though the overall accuracy of 70.8% is on par with what is expected from true color image classification and reported by other ecological studies, the accuracy distribution is not balanced among the classes. For example, the table shows that 90.5% producer's accuracy was achieved for the moonvine class, while only 33.3% was achieved for the hyacinth class. For the lettuce and hyacinth classes, which are the two invasive species in the area, user's accuracies are higher (73.2% and 90.0%, respectively), but the producer's accuracy is moderate for lettuce (62.5%) and low for hyacinth (33.3%). A low producer's accuracy (high omission error) indicates a high probability of missing that class in the classified map. The confusion matrix in Table 3 shows that the hyacinth class is mixed mainly with the smartweed class.

Table 3. Accuracy assessment (confusion) matrix for the OBIA SVM classification results (columns are ground truth).

Class                    Cattail  Willow-smartweed  Hyacinth  Lettuce  Lily  Lotus  Moonvine  Water  Road  Other  User's accuracy (%)
Cattail                       33                 2         0        0     0      0         0      4     0      4                 76.7
Willow-smartweed              10                49        14       10     6      4         1      4     0      1                 49.5
Hyacinth                       1                 0         9        0     0      0         0      0     0      0                 90.0
Lettuce                        0                 2         1       30     4      4         0      0     0      0                 73.2
Lily                           2                 1         0        5    14      1         0      0     0      1                 58.3
Lotus                          1                 1         2        3     0     46         1      8     0      0                 74.2
Moonvine                       0                 0         0        0     0      1        19      0     0      0                 95.0
Water                          4                 0         1        0     1      1         0     62     0      1                 88.6
Road                           0                 0         0        0     0      0         0      1     1      0                 50.0
Others                         1                 0         0        0     0      0         0      0     0      1                 50.0
Producer's accuracy (%)     63.5              89.1      33.3     62.5  56.0   80.7      90.5   78.5 100.0   12.5

Overall accuracy: 70.8%; Kappa: 0.66.
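For reference, with \( n_{ij} \) denoting the number of assessment points classified as class \( i \) whose reference label is class \( j \), the per-class accuracies quoted above are defined as

\[
\text{producer's accuracy}_{j} = \frac{n_{jj}}{\sum_{i} n_{ij}},
\qquad
\text{user's accuracy}_{i} = \frac{n_{ii}}{\sum_{j} n_{ij}},
\]

so the omission error is one minus the producer's accuracy, and the commission error is one minus the user's accuracy.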
Discussion

This research demonstrated the use of object-based methods to classify wetland land cover, including floating vegetation species in South Florida, using ultra-high spatial resolution (8 cm) imagery captured by unmanned aerial vehicles. The advantage of using the object-based approach is obvious in comparison with the pixel-based approach, as shown in Table 2. These results serve as additional proof that improvements can be made when the object-based approach is used for image classification (Blaschke, 2010). We believe that this is specifically true when high-resolution imagery captured by small UAS is used, since pixel grouping reduces the within-class pixel variations while maintaining the fine resolution boundaries between adjacent plant communities. With a sub-decimeter pixel resolution, the mosaicked image used in this study belongs to the ultra-high spatial resolution dataset category. In this image category, the information contained in a single pixel regarding the object it belongs to is rather limited. For example, vegetation texture is usually a mixture of irregular shadow and vegetation (Ehlers, Gähler, & Janowsky, 2003), and this kind of texture cannot be captured by a single pixel or even a pixel neighborhood analysis. Individual pixel limitation is more significant when it comes to UAS images, considering that most commonly used cost-effective UAS imaging platforms provide high spatial resolution images with a limited number of bands. Segmenting the image into objects aggregates the pixels into meaningful objects, for which other types of information can be extracted to supplement the poor spectral content of the image. We utilized multi-resolution objects to resolve the different patch sizes of the vegetation communities in the analyzed area. Our experimentation with single scale classification consistently yielded lower classification results and suggested the integration of multiple segmentation levels. We used thematic rules to improve the classification results in addition to the contextual information passed from one segmentation level to the other. The use of rules and hierarchical context information is considered one of the reasons for the improvement offered by the object-based classification approach (Blaschke, 2010). More rules and other features can be explored as our subject knowledge increases and as new spatial and topological metrics are developed.

The results of our study show that the SVM and ANN results are comparable when used for wetland vegetation mapping with UAS images in the visible part of the spectrum. The difference in these results cannot clearly give preference to one of these classifiers, in contrary to some research indicating some cooling among the remote sensing community toward the ANN classifier since the early 2000s (Balabin & Lomakina, 2011; Shao & Lunetta, 2012).
The accuracy achieved by our research is in line with other research efforts utilizing visible broadband imagery for vegetation classification. Our accuracy is slightly higher than that achieved by Zweig, Burgess, Percival, and Kitchens (2015), where UAS images were used to classify different sets of wetland vegetation communities in the Everglades area.

Our results show lower classification accuracy for the hyacinth class. The confusion matrix shown in Table 3 indicates that the hyacinth class is confused with the willow-smartweed class, which probably means that the features used in this study were not enough to differentiate between these two classes. The images used in this study have a poor spectral resolution (only three bands in the visible spectrum). Increasing the spectral resolution of the images by incorporating more bands, especially in the near-infrared region, could increase the spectral separability of the vegetation classes and improve the classification accuracy, as indicated by other researchers (Carle, Wang, & Sasser, 2014; Lane et al., 2014). Some of the commercially available multispectral and line- or frame-based hyperspectral cameras designed for small UAS could be used to improve the classification accuracy. However, these cameras are still expensive and may not be cost-effective for supporting invasive plant control operations, especially given the high possibility of sensor loss due to UAS accidents. We expect that more technologically advanced, cost-effective sensors and better UAS safety and recovery measures will facilitate the use of higher spectral resolution sensors in the near future. In fact, the research team is currently working on using an additional infrared band, captured by a modified low-cost camera in addition to the visible color images, to improve the classification accuracy of another wetland site in central Florida.
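For context, a near-infrared band would enable standard vegetation indices that three-band RGB imagery cannot provide; the most common is the Normalized Difference Vegetation Index (general remote sensing background, not an index used in this study):

\[
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}},
\]

which tends to separate vigorous vegetation (high values) from water and bare soil (low or negative values) and would directly increase the class separability discussed above.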
indicates that the use of OBIA of high spatial resolu- One of the potential reasons contributing to the tion (sub-decimeter) UAS imagery is viable for wet- low classification accuracy of some classes is the land vegetation mapping, even though significant radiometric inconsistency among the hundreds of room for continued improvement still exists. UAS images contributing to the image mosaic. A Object-based classification produced higher accuracy 574 R. PANDE-CHHETRI ET AL. than pixel-based classification using our ultra-high Blaschke, T. (2010). Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote spatial resolution images. This is consistent with Sensing, 65,2–16. doi:10.1016/j.isprsjprs.2009.06.004 other similar studies (Myint et al., 2011). However, Carle, M.V., Wang, L., & Sasser, C.E. (2014). Mapping the disadvantage of object-based scheme was also freshwater marsh species distributions using revealed in this study, with a great amount of time WorldView-2 high-resolution multispectral satellite ima- and efforts spent on scale parameter selection or gery. International Journal of Remote Sensing, 35(13), 4698–4716. doi:10.1080/01431161.2014.919685 post-classification refinement for an object-based Castillejo-González, I.L., López-Granados, F., García-Ferrer, approach with expert knowledge. The study also A., Peña-Barragán, J.M., Jurado-Expósito, M., de la Orden, showed that SVM tended to offer a higher accuracy M.S., & González-Audicana, M. (2009). Object- and pixel- level when compared to the other tested classifiers based analysis for mapping crops and their agro-environ- (i.e. ANN, ML). mental associated measures using QuickBird imagery. Future work includes developing advanced image Computers and Electronics in Agriculture, 68,207–215. doi:10.1016/j.compag.2009.06.004 calibration techniques to improve the radiometric Chen, Y., Shi, P., Fung, T., Wang, J., & Li, X. (2007). quality of the images. Also, it is recommended to Object-oriented classification for urban land cover map- experiment with higher spectral resolution (multi- ping with ASTER imagery. International Journal of spectral or hyperspectral images), which are increas- Remote Sensing, 28, 4645–4651. doi:10.1080/ ingly becoming more cost-effective and feasible for small UAS. Coburn, C., & Roberts, A. (2004). A multiscale texture analysis procedure for improved forest stand classifica- tion. International Journal of Remote Sensing, 25, 4287– 4308. doi:10.1080/0143116042000192367 Colomina, I., & Molina, P. (2014). Unmanned aerial sys- Disclosure statement tems for photogrammetry and remote sensing: A review. No potential conflict of interest was reported by the ISPRS Journal of Photogrammetry and Remote Sensing, authors. 92,79–97. doi:10.1016/j.isprsjprs.2014.02.013 Cordeiro, C.L.D.O., & Rossetti, D.D.F. (2015). Mapping vegetation in a late Quaternary landform of the Amazonian wetlands using object-based image analysis References and decision tree classification. International Journal of Remote Sensing, 36, 3397–3422. doi:10.1080/ Abd-Elrahman, A., Pearlstine, L., & Percival, F. (2005). 01431161.2015.1060644 Development of pattern recognition algorithm for auto- Cortes, C., & Vapnik, V. (1995). Support-vector networks. matic bird detection from unmanned aerial vehicle ima- Machine Learning, 20, 273–297. doi:10.1007/BF00994018 gery. Surveying and Land Information Science, 65, 37. Developer, e. (2012). User guide. Trimble Documentation. Aguilar, M., Saldaña, M., & Aguilar, F. (2013). 
Based on our results, we believe that UAS is a small, fast and easily deployable system that provides an inexpensive platform for natural resource management operations over small and medium size areas when high-resolution imagery is in demand. This comes in line with the review article on UAS by Colomina and Molina (2014), which indicates that UAS has been attracting more and more researchers from the remote sensing community in different disciplines over the past 5 years. However, as an emerging data source, UAS is still in need of more research and development, especially at the data processing level.

Conclusion

According to the results of our experiments, this study indicates that the use of OBIA on high spatial resolution (sub-decimeter) UAS imagery is viable for wetland vegetation mapping, even though significant room for continued improvement still exists. Object-based classification produced higher accuracy than pixel-based classification using our ultra-high spatial resolution images. This is consistent with other similar studies (Myint et al., 2011). However, a disadvantage of the object-based scheme was also revealed in this study: a great amount of time and effort was spent on scale parameter selection and on expert-knowledge post-classification refinement. The study also showed that SVM tended to offer a higher accuracy level when compared to the other tested classifiers (i.e. ANN, ML).

Future work includes developing advanced image calibration techniques to improve the radiometric quality of the images. It is also recommended to experiment with higher spectral resolution (multispectral or hyperspectral) images, which are increasingly becoming more cost-effective and feasible for small UAS.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

Abd-Elrahman, A., Pearlstine, L., & Percival, F. (2005). Development of pattern recognition algorithm for automatic bird detection from unmanned aerial vehicle imagery. Surveying and Land Information Science, 65, 37.
Aguilar, M., Saldaña, M., & Aguilar, F. (2013). GeoEye-1 and WorldView-2 pan-sharpened imagery for object-based classification in urban environments. International Journal of Remote Sensing, 34, 2583–2606. doi:10.1080/01431161.2012.747018
Ashish, D., McClendon, R., & Hoogenboom, G. (2009). Land-use classification of multispectral aerial images using artificial neural networks. International Journal of Remote Sensing, 30, 1989–2004.
Baatz, M., & Schäpe, A. (2000). Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. Angewandte Geographische Informationsverarbeitung, XII, 12–23.
Balabin, R.M., & Lomakina, E.I. (2011). Support vector machine regression (SVR/LS-SVM) – An alternative to neural networks (ANN) for analytical chemistry? Comparison of nonlinear methods on near infrared (NIR) spectroscopy data. Analyst, 136, 1703–1712. doi:10.1039/c0an00387e
Belluco, E., Camuffo, M., Ferrari, S., Modenese, L., Silvestri, S., Marani, A., & Marani, M. (2006). Mapping salt-marsh vegetation by multispectral and hyperspectral remote sensing. Remote Sensing of Environment, 105, 54–67. doi:10.1016/j.rse.2006.06.006
Benediktsson, J.A., Swain, P.H., & Ersoy, O.K. (1990). Neural network approaches versus statistical methods in classification of multisource remote sensing data. IEEE Transactions on Geoscience and Remote Sensing, 28(4), 540–552.
Blaschke, T. (2010). Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing, 65, 2–16. doi:10.1016/j.isprsjprs.2009.06.004
Carle, M.V., Wang, L., & Sasser, C.E. (2014). Mapping freshwater marsh species distributions using WorldView-2 high-resolution multispectral satellite imagery. International Journal of Remote Sensing, 35(13), 4698–4716. doi:10.1080/01431161.2014.919685
Castillejo-González, I.L., López-Granados, F., García-Ferrer, A., Peña-Barragán, J.M., Jurado-Expósito, M., de la Orden, M.S., & González-Audicana, M. (2009). Object- and pixel-based analysis for mapping crops and their agro-environmental associated measures using QuickBird imagery. Computers and Electronics in Agriculture, 68, 207–215. doi:10.1016/j.compag.2009.06.004
Chen, Y., Shi, P., Fung, T., Wang, J., & Li, X. (2007). Object-oriented classification for urban land cover mapping with ASTER imagery. International Journal of Remote Sensing, 28, 4645–4651.
Coburn, C., & Roberts, A. (2004). A multiscale texture analysis procedure for improved forest stand classification. International Journal of Remote Sensing, 25, 4287–4308. doi:10.1080/0143116042000192367
Colomina, I., & Molina, P. (2014). Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 92, 79–97. doi:10.1016/j.isprsjprs.2014.02.013
Cordeiro, C.L.D.O., & Rossetti, D.D.F. (2015). Mapping vegetation in a late Quaternary landform of the Amazonian wetlands using object-based image analysis and decision tree classification. International Journal of Remote Sensing, 36, 3397–3422. doi:10.1080/01431161.2015.1060644
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273–297. doi:10.1007/BF00994018
Dingle Robertson, L., & King, D.J. (2011). Comparison of pixel- and object-based classification in land cover change mapping. International Journal of Remote Sensing, 32, 1505–1529. doi:10.1080/01431160903571791
Dronova, I., Gong, P., & Wang, L. (2011). Object-based analysis and change detection of major wetland cover types and their classification uncertainty during the low water period at Poyang Lake, China. Remote Sensing of Environment, 115, 3220–3236. doi:10.1016/j.rse.2011.07.006
eCognition Developer. (2012). User guide. Trimble Documentation.
Ehlers, M., Gähler, M., & Janowsky, R. (2003). Automated analysis of ultra high resolution remote sensing data for biotope type mapping: New possibilities and challenges. ISPRS Journal of Photogrammetry and Remote Sensing, 57, 315–326. doi:10.1016/S0924-2716(02)00161-2
Flanders, D., Hall-Beyer, M., & Pereverzoff, J. (2003). Preliminary evaluation of eCognition object-based software for cut block delineation and feature extraction. Canadian Journal of Remote Sensing, 29, 441–452. doi:10.5589/m03-006
Foody, G.M., Campbell, N., Trodd, N., & Wood, T. (1992). Derivation and applications of probabilistic measures of class membership from the maximum-likelihood classification. Photogrammetric Engineering and Remote Sensing, 58, 1335–1341.
Gao, P., Trettin, C.C., & Ghoshal, S. (2012). Object-oriented segmentation and classification of wetlands within the Khalong-la-Lithunya catchment, Lesotho, Africa. In Geoinformatics (GEOINFORMATICS), 2012 20th International Conference on (pp. 1–6). IEEE.
Gao, Y., & Mas, J.F. (2008). A comparison of the performance of pixel-based and object-based classifications over images with various spatial resolutions. Online Journal of Earth Sciences, 2, 27–35.
Ge, S., Carruthers, R., Gong, P., & Herrera, A. (2006). Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California. Environmental Monitoring and Assessment, 114, 65–83. doi:10.1007/s10661-006-1071-z
Haralick, R.M., Shanmugam, K., & Dinstein, I.H. (1973). Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics, SMC-3, 610–621. doi:10.1109/TSMC.1973.4309314
Harris. (2017). How does the Pixel Aggregate method work when resizing data with ENVI? Retrieved from http://www.harrisgeospatial.com/Support/SelfHelpTools/HelpArticles/HelpArticles-Detail/TabId/2718/ArtMID/10220/ArticleID/16094/How-does-the-Pixel-Aggregate-method-work-when-resizing-data-with-ENVI-.aspx
Hsieh, P.-F., Lee, L.C., & Chen, N.-Y. (2001). Effect of spatial resolution on classification errors of pure and mixed pixels in remote sensing. IEEE Transactions on Geoscience and Remote Sensing, 39, 2657–2663. doi:10.1109/36.975000
Johnson, B.A. (2013). High-resolution urban land-cover classification using a competitive multi-scale object-based approach. Remote Sensing Letters, 4, 131–140. doi:10.1080/2150704X.2012.705440
Laliberte, A.S., & Rango, A. (2009). Texture and scale in object-based analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery. IEEE Transactions on Geoscience and Remote Sensing, 47, 761–770. doi:10.1109/TGRS.2008.2009355
Laliberte, A.S., Rango, A., & Herrick, J. (2007). Unmanned aerial vehicles for rangeland mapping and monitoring: A comparison of two systems. In ASPRS Annual Conference Proceedings.
Lane, C.R., Liu, H., Autrey, B.C., Anenkhonov, O.A., Chepinoga, V.V., & Wu, Q. (2014). Improved wetland classification using eight-band high resolution satellite imagery and a hybrid approach. Remote Sensing, 6(12), 12187–12216. doi:10.3390/rs61212187
Li, M., Bijker, W., & Stein, A. (2015). Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover. ISPRS Journal of Photogrammetry and Remote Sensing, 102, 48–61. doi:10.1016/j.isprsjprs.2014.12.023
Lillesand, T., Kiefer, R.W., & Chipman, J. (2014). Remote sensing and image interpretation. John Wiley & Sons.
Maheu-Giroux, M., & de Blois, S. (2005). Mapping the invasive species Phragmites australis in linear wetland corridors. Aquatic Botany, 83, 310–320. doi:10.1016/j.aquabot.2005.07.002
Moffett, K.B., & Gorelick, S.M. (2013). Distinguishing wetland vegetation and channel features with object-based image segmentation. International Journal of Remote Sensing, 34, 1332–1354. doi:10.1080/01431161.2012.718463
Mountrakis, G., Im, J., & Ogole, C. (2011). Support vector machines in remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 66, 247–259. doi:10.1016/j.isprsjprs.2010.11.001
Myint, S.W., Gober, P., Brazel, A., Grossman-Clarke, S., & Weng, Q. (2011). Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sensing of Environment, 115, 1145–1161. doi:10.1016/j.rse.2010.12.017
Niemeyer, I., & Canty, M.J. (2003). Pixel-based and object-oriented change detection analysis using high-resolution imagery. In Proceedings 25th Symposium on Safeguards and Nuclear Material Management (pp. 2133–2136).
Olmsted, I.C., & Armentano, T.V. (1997). Vegetation of Shark Slough, Everglades National Park. Homestead, FL: South Florida Natural Resources Center, Everglades National Park.
Oruc, M., Marangoz, A., & Buyuksalih, G. (2004). Comparison of pixel-based and object-oriented classification approaches using Landsat-7 ETM spectral bands. In Proceedings of the IRSPS 2004 Annual Conference (pp. 19–23).
Pearlstine, L., Portier, K.M., & Smith, S.E. (2005). Textural discrimination of an invasive plant, Schinus terebinthifolius, from low altitude aerial digital imagery. Photogrammetric Engineering & Remote Sensing, 71, 289–298. doi:10.14358/PERS.71.3.289
Radoux, J., & Defourny, P. (2007). A quantitative assessment of boundaries in automated forest stand delineation using very high resolution imagery. Remote Sensing of Environment, 110, 468–475. doi:10.1016/j.rse.2007.02.031
Rango, A., Laliberte, A., Steele, C., Herrick, J.E., Bestelmeyer, B., Schmugge, T., … Jenkins, V. (2006). Research article: Using unmanned aerial vehicles for rangelands: Current applications and future potentials. Environmental Practice, 8, 159–168. doi:10.1017/S1466046606060224
Shao, Y., & Lunetta, R.S. (2012). Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points. ISPRS Journal of Photogrammetry and Remote Sensing, 70, 78–87. doi:10.1016/j.isprsjprs.2012.04.001
Smith, G.M., Spencer, T., Murray, A.L., & French, J.R. (1998). Assessing seasonal vegetation change in coastal wetlands with airborne remote sensing: An outline methodology. Mangroves and Salt Marshes, 2, 15–28. doi:10.1023/A:1009964705563
St-Louis, V., Pidgeon, A.M., Radeloff, V.C., Hawbaker, T.J., & Clayton, M.K. (2006). High-resolution image texture as a predictor of bird species richness. Remote Sensing of Environment, 105, 299–312. doi:10.1016/j.rse.2006.07.003
Szantoi, Z., Escobedo, F., Abd-Elrahman, A., Smith, S., & Pearlstine, L. (2013). Analyzing fine-scale wetland composition using high resolution imagery and texture features. International Journal of Applied Earth Observation and Geoinformation, 23, 204–212. doi:10.1016/j.jag.2013.01.003
Tehrany, M.S., Pradhan, B., & Jebur, M.N. (2013). Remote sensing data reveals eco-environmental changes in urban areas of Klang Valley, Malaysia: Contribution from object based analysis. Journal of the Indian Society of Remote Sensing, 41, 981–991. doi:10.1007/s12524-013-0289-9
Veenman, C.J., Reinders, M.J., & Backer, E. (2002). A maximum variance cluster algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 1273–1280. doi:10.1109/TPAMI.2002.1033218
Wang, L., Sousa, W.P., Gong, P., & Biging, G.S. (2004). Comparison of IKONOS and QuickBird images for mapping mangrove species on the Caribbean coast of Panama. Remote Sensing of Environment, 91, 432–440. doi:10.1016/j.rse.2004.04.005
Waser, L., Baltsavias, E., Ecker, K., Eisenbeiss, H., Feldmeyer-Christe, E., Ginzler, C., … Zhang, L. (2008). Assessing changes of forest area and shrub encroachment in a mire ecosystem using digital surface models and CIR aerial images. Remote Sensing of Environment, 112, 1956–1968. doi:10.1016/j.rse.2007.09.015
Watts, A.C., Bowman, W.S., Abd-Elrahman, A.H., Mohamed, A., Wilkinson, B.E., Perry, J., … Lee, K. (2008). Unmanned Aircraft Systems (UASs) for ecological research and natural-resource monitoring (Florida). Ecological Restoration, 26, 13–14. doi:10.3368/er.26.1.13
Whiteside, T.G., Boggs, G.S., & Maier, S.W. (2011). Comparing object-based and pixel-based classifications for mapping savannas. International Journal of Applied Earth Observation and Geoinformation, 13, 884–893. doi:10.1016/j.jag.2011.06.008
Willhauck, G., Schneider, T., De Kok, R., & Ammer, U. (2000). Comparison of object oriented classification techniques and standard image analysis for the use of change detection between SPOT multispectral satellite images and aerial photos. In Proceedings of XIX ISPRS Congress (pp. 35–42). Citeseer.
Xie, Z., Roberts, C., & Johnson, B. (2008). Object-based target search using remotely sensed data: A case study in detecting invasive exotic Australian pine in south Florida. ISPRS Journal of Photogrammetry and Remote Sensing, 63, 647–660. doi:10.1016/j.isprsjprs.2008.04.003
Yan, G., Mas, J.F., Maathuis, B., Xiangmin, Z., & Van Dijk, P. (2006). Comparison of pixel-based and object-oriented image classification approaches – A case study in a coal fire area, Wuda, Inner Mongolia, China. International Journal of Remote Sensing, 27, 4039–4055. doi:10.1080/01431160600702632
Yang, X. (2007). Integrated use of remote sensing and geographic information systems in riparian vegetation delineation and mapping. International Journal of Remote Sensing, 28, 353–370. doi:10.1080/01431160600726763
Yu, Q., Gong, P., Clinton, N., Biging, G., Kelly, M., & Schirokauer, D. (2006). Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogrammetric Engineering & Remote Sensing, 72, 799–811. doi:10.14358/PERS.72.7.799
Zweig, C.L., Burgess, M.A., Percival, H.F., & Kitchens, W.M. (2015). Use of unmanned aircraft systems to delineate fine-scale wetland vegetation communities. Wetlands, 35, 303–309. doi:10.1007/s13157-014-0612-4

Object-based classification of wetland vegetation using very high-resolution unmanned air system imagery

Loading next page...
 
/lp/taylor-francis/object-based-classification-of-wetland-vegetation-using-very-high-YtmWplFR0t

References (60)

Publisher
Taylor & Francis
Copyright
© 2017 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
ISSN
2279-7254
DOI
10.1080/22797254.2017.1373602
Publisher site
See Article on Publisher Site

Abstract

EUROPEAN JOURNAL OF REMOTE SENSING, 2017 VOL. 50, NO. 1, 564–576 https://doi.org/10.1080/22797254.2017.1373602 Object-based classification of wetland vegetation using very high-resolution unmanned air system imagery a a a b c Roshan Pande-Chhetri , Amr Abd-Elrahman , Tao Liu , Jon Morton and Victor L. Wilhelm a b School of Forest Resources and Conservation – Geomatics, University of Florida, Plant City, FL, USA; Invasive Species Management Branch, USACE, Stuart, FL, USA; Surveying and Mapping Branch, Operations Division – UAS Section, U.S. Army Corps of Engineers, Jacksonville, FL, USA ABSTRACT ARTICLE HISTORY Received 24 February 2017 The purpose of this study is to examine the use of multi-resolution object-based classification Revised 23 August 2017 methods for the classification of Unmanned Aircraft Systems (UAS) images of wetland Accepted 28 August 2017 vegetation and to compare its performance with pixel-based classification approaches. Three types of classifiers (Support Vector Machine, Artificial Neural Network and Maximum KEYWORDS Likelihood) were utilized to classify the object-based images, the original 8-cm UAS images Remote sensing; UAS; and the down-sampled (30 cm) version of the image. The results of the object-based and two wetland vegetation pixel-based classifications were evaluated and compared. Object-based classification pro- mapping; object-based duced higher accuracy than pixel-based classifications if the same type of classifier is used. classification; pixel-based classification; Support Vector Our results also showed that under the same classification scheme (i.e. object or pixel), the Machine Support Vector Machine classifier performed slightly better than Artificial Neural Network, which often yielded better results than Maximum Likelihood. With an overall accuracy of 70.78%, object-based classification using Support Vector Machine showed the best perfor- mance. This study also concludes that while UAS has the potential to provide flexible and feasible solutions for wetland mapping, some issues related to image quality still need to be addressed in order to improve the classification performance. Introduction seasonal variation in response to changes in water level and weather. Imagery with higher spectral resolu- Remote sensing has been used regularly to identify tion, such as hyperspectral imagery, can also be useful vegetation communities and monitor land cover for vegetation analysis. However, acquiring images changes. Olmsted and Armentano (1997) highlighted with high spatial, spectral and temporal resolution is the need for monitoring wetland vegetation and its often costly and can be logistically difficult. distribution in order to detect changes in the terres- The use of high spatial resolution aerial imagery trial–aquatic landscape transition. Wetland classifica- captured by small (1 to 2 m wingspan) Unmanned tion is challenging due to vegetation cover dynamics Aircraft Systems (UAS) in natural resource manage- with water fluctuation creating rapid and frequent ment is rapidly increasing (Abd-Elrahman, Pearlstine, changes in the type, distribution and density of plant & Percival, 2005; ; Laliberte, Rango, & Herrick, 2007; coverage (Belluco et al., 2006;Smith, Spencer,Murray, Rango et al., 2006; Watts et al., 2008). The use of such &French, 1998). 
This process is further complicated by the need for frequent data collection and high spectral and spatial resolution imagery. Coarse spatial resolution images captured by satellite or high-altitude manned airborne missions may produce lower image classification accuracy, especially in riparian and wetland areas (Maheu-Giroux & de Blois, 2005; Yang, 2007). In contrast, high spatial resolution imagery facilitates the extraction of texture features (Ge, Carruthers, Gong, & Herrera, 2006; Wang, Sousa, Gong, & Biging, 2004; Waser et al., 2008) that can assist the classification process. The ability to capture images frequently is important for monitoring wetland vegetation undergoing rapid and severe seasonal variation.

The use of such images is motivated by their potentially high temporal and spatial resolutions, increased technological and operational feasibility, and advances in image analysis techniques. Improvements in technology and algorithms are gradually enabling autonomous flying to produce high-quality georeferenced orthorectified images. The temporal and spatial resolutions of UAS imagery are controlled by the operator/user, who decides the mission parameters (e.g. flying height) along with exactly when to fly; this gives a significant advantage over traditional piloted image-capturing missions.

Although hyperspectral and lidar sensors have been developed specifically for small UAS, these sensors are still relatively expensive and lack the spatial resolution of some off-the-shelf three-band (RGB) cameras. Using inexpensive cameras, as long as they satisfy user needs, is favored given the relatively high possibility of UAS crashes. To take advantage of the high spatial resolution of the low-cost RGB cameras onboard most commercial UAS, analysis algorithms that build on the images' spectral, contextual and textural information need to be developed. Increasing image spatial resolution does not necessarily increase image classification accuracy; this is probably due to the increase in intra-class spectral variability and the decrease in cross-class statistical separability (Hsieh et al., 2001; Yu et al., 2006).
Traditionally, texture features extracted from high spatial resolution imagery have been used to improve classification (Ashish, McClendon, & Hoogenboom, 2009; Coburn & Roberts, 2004; Pearlstine, Portier, & Smith, 2005; St-Louis, Pidgeon, Radeloff, Hawbaker, & Clayton, 2006; Szantoi, Escobedo, Abd-Elrahman, Smith, & Pearlstine, 2013). Object-Based Image Analysis (OBIA) takes advantage of the multi-scale details in high spatial resolution imagery (Laliberte & Rango, 2009) as well as other information summarized at the object level. OBIA is well known to suit high spatial resolution UAS imagery and has frequently been utilized in urban/urban-rural feature extraction (Aguilar, Saldaña, & Aguilar, 2013; Chen, Shi, Fung, Wang, & Li, 2007; Johnson, 2013; Li, Bijker, & Stein, 2015; Tehrany, Pradhan, & Jebur, 2013). Along with being applied to extract features in urban and rural landscapes, OBIA techniques have helped provide detailed classification of natural plant communities such as those found in wetland areas (Cordeiro & Rossetti, 2015; Dronova, Gong, & Wang, 2011; Gao, Trettin, & Ghoshal, 2012; Moffett & Gorelick, 2013). Nevertheless, the high spatial resolution of the images taken by UAS creates a challenge, especially in natural land cover classification. Unlike pixel-based analysis, OBIA segments the image into relatively homogeneous and semantically coherent objects based on different homogeneity criteria at different scales. In OBIA, spectral information is aggregated per object, and other features such as textural and contextual information become available for object analysis. Different classification algorithms can be applied to the objects at different scale levels, and classification rules can be developed based on the thematic meaning of the objects.

Several studies claim that object-based classification has greater potential for classifying higher-resolution imagery than pixel-based methods (Gao & Mas, 2008; Myint, Gober, Brazel, Grossman-Clarke, & Weng, 2011; Oruc, Marangoz, & Buyuksalih, 2004; Whiteside, Boggs, & Maier, 2011; Willhauck, Schneider, De Kok, & Ammer, 2000). Contrasting with traditional pixel-based methods, which utilize the spectral information of individual image pixels, OBIA has the advantage of easily incorporating textural and thematic information, geometrical and contextual relationships, and ancillary data into the classification process. While there have been some studies comparing object-based and pixel-based classification techniques, more studies are needed in wetland land cover classification using sub-decimeter UAS imagery. Previous studies found through visual inspection that the OBIA approach incurred fewer errors in larger regions of homogeneous land cover and performed better in temporal analysis of land cover change (Dingle Robertson & King, 2011). Niemeyer and Canty (2003) claimed that object-based classification has greater potential for detecting change, mainly as a result of the use of higher-resolution imagery. Castillejo-González et al. (2009) found that an object-based method outperformed five pixel-based supervised classification algorithms (parallelepiped, minimum distance, Mahalanobis distance, spectral angle mapper and Maximum Likelihood [ML] classifiers) in mapping crops and agro-environmental land cover. Far greater accuracy (83.25%) was achieved by Yan, Mas, Maathuis, Xiangmin, and Van Dijk (2006) when mapping 12 land cover classes using object-based classification versus a pixel-based approach (46.48%). The object-based methodology used by Gao and Mas (2008) outperformed both ML and nearest-neighborhood pixel-based methods in mapping land cover using SPOT 5 (10-m spatial resolution) imagery. However, the authors noted that after smoothing filters were applied to the imagery, the accuracy of the pixel-based methods increased.

Earlier pattern recognition efforts implemented low-level machine learning algorithms such as image thresholding, morphology and template matching. Many of these algorithms were concerned with highly contrasted features and did not involve shape, contextual and textural information. Several machine learning algorithms, such as the Support Vector Machine (SVM) (Mountrakis, Im, & Ogole, 2011) and Artificial Neural Networks (ANN) (Benediktsson, Swain, & Ersoy, 1990), have been implemented using OBIA. The SVM algorithm separates the classes with a decision surface, known as the optimal hyperplane, that maximizes the margin between the classes (Cortes & Vapnik, 1995; Veenman, Reinders, & Backer, 2002). ANN is another common machine learning framework; its training is typically posed as an optimization problem over the network weights.
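To make the optimal-hyperplane idea concrete, the minimal sketch below trains a linear SVM on two toy classes with scikit-learn and recovers the margin width from the fitted weights. This is illustrative only: the feature values, class meanings and library are assumptions for the example, not the eCognition-based workflow used in the study.

```python
# Minimal illustration of SVM maximum-margin classification (illustrative only;
# the study applied SVM to object features, not to this toy data).
import numpy as np
from sklearn.svm import SVC

# Toy two-class "object feature" vectors (e.g. mean red, mean blue) -- hypothetical values.
X = np.array([[0.20, 0.80], [0.30, 0.90], [0.25, 0.85],   # class 0 (water-like)
              [0.70, 0.30], [0.80, 0.20], [0.75, 0.25]])  # class 1 (vegetation-like)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The decision surface is w.x + b = 0; the geometric margin width is 2 / ||w||.
w = clf.coef_[0]
print("support vectors:\n", clf.support_vectors_)
print("margin width: %.3f" % (2.0 / np.linalg.norm(w)))
print("prediction for [0.5, 0.5]:", clf.predict([[0.5, 0.5]]))
```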
Floating invasive plants such as water lettuce and water hyacinth are potentially detrimental to the aquatic ecosystems of Florida and are infesting lakes and rivers at an alarmingly rapid rate, prompting the State of Florida to pass a specific statute (Florida Statute 369.22) requiring agencies to manage these invasive plants at the species level. These plants need to be continually monitored and managed, which constitutes one of the motives for this study.

The primary goal of this study is to investigate whether UAS is a viable platform for mapping wetland vegetation. This study investigates the best approach among the commonly available methods to achieve optimal results for wetland vegetation classification in a wetland area in South Florida, using ultra-high-resolution UAS images as a way to detect and monitor invasive vegetation. Our study compared the performances of three types of common classifiers (SVM, ANN and ML) under two classification schemes (object based and pixel based). In the next section, the methodology of this study is provided (Methodology section), covering descriptions of the data used and the procedure. Study results are presented in the Results section. The Discussion section includes a discussion of the achieved results, and finally, a concise conclusion of the study is introduced in the Conclusion section.

Methodology

Study area and data

This study is conducted in the Eagle Bay wetland area, located at the northern side of Lake Okeechobee and covering about 4700 acres in South Florida.
The Eagle Bay area contains diverse freshwater wetland communities, including emergent and floating-leafed species that vary in texture, patch size and mixture. The United States Army Corps of Engineers (ACE) is specifically interested in monitoring the populations of aquatic non-native invasive species such as the water hyacinth (Eichhornia crassipes) and water lettuce (Pistia stratiotes), which can cause serious ecological and navigational problems.

Aerial images were acquired by a 10-megapixel true color Olympus ES 420 camera mounted on the NOVA 2.1 UAS in July of 2012. The system was developed by the University of Florida Unmanned Aircraft Systems Research Program for the US ACE, Jacksonville District. The autonomous NOVA 2.1 model has a 2.7 m wingspan, weighs up to 6.4 kg and is capable of flying for up to 50 min, covering an 800 acre area in one single flight, as well as landing on water. The NOVA 2.1 system was designed for small-sized (less than 10K acres) or medium-sized (10-20K acres) sites. Typical daily acquisition using NOVA 2.1 depends on site geometry and the difficulty in navigating the site. Four to six flights per day can be achieved, and with cooperative weather, an experienced team can collect 20K acres in one work week.

Information from the navigation sensors onboard the NOVA 2.1 UAS was used to rectify and mosaic the images using the Agisoft PhotoScan software. Image preprocessing was conducted by ACE personnel according to their standard data collection and preprocessing procedures. In this study, we analyzed images of 8-cm ground pixel size that cover about one-fourth of the whole area and represent the plant species in the Eagle Bay area. Figure 1 shows the study area and a zoomed-in view (red box) of a part of the area.

Figure 1. Eagle Bay study area.

Descriptions of the training dataset used in the classification and its collection procedure are provided in the Image Classification section.

Accuracy assessment was conducted using a randomly generated set of points. Initially, 264 points were generated randomly for the whole study area. Then, an additional 99 points were randomly generated in the area where a high diversity of vegetation classes exists, resulting in a total of 373 assessment points. These points were labeled through visual inspection of the high-resolution UAS images. The inspection was conducted by biologists with intensive experience in interpreting similar images and was assisted by field verification for manual delineation of land cover classes in the area. This dataset was used to assess both OBIA and pixel-based classification results.

Image segmentation

Most pixel-based classifications tend to utilize spectral information at individual pixels and potentially textural information extracted from neighboring pixels. Pixel-based classification can highlight noise, create salt-and-pepper effects and ignore important contextual, topological and semantic information in the images (Baatz & Schäpe, 2000; Blaschke, 2010). In this study, object-based classification was utilized using Trimble's eCognition object-based analysis software. The eCognition software segments the images into homogeneous objects and uses information derived from each object in the classification.

The multi-scale fractal net evolution object segmentation algorithm (Baatz & Schäpe, 2000), implemented in the eCognition software, starts with individual pixels and applies a bottom-up merging approach based on heterogeneity measures. Neighboring objects are merged until a predetermined degree-of-fitting cost is reached. This degree-of-fitting cost, named the scale parameter, is a relative threshold that determines when segmentation growth, based on spectral and spatial similarity, needs to be stopped. Scale controls object size and leads to the multi-scale characteristic of the algorithm. Object formation is greatly influenced by the segmentation scale, as different objects can be identified at different scales depending on image resolution and the properties of the objects of interest (Laliberte & Rango, 2009). The optimal segmentation scale is often determined empirically through visual inspection of the results (Flanders, Hall-Beyer, & Pereverzoff, 2003; Radoux & Defourny, 2007), mainly based on the size and the complexity of the target objects (Whiteside et al., 2011).
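eCognition's multi-resolution segmentation is proprietary, but the effect of a scale parameter can be approximated with open-source tooling. The sketch below uses scikit-image's Felzenszwalb graph segmentation, a different algorithm whose scale argument plays a loosely analogous role: larger values merge pixels into larger objects. The built-in sample image stands in for the study mosaic.

```python
# Sketch: fine-to-coarse segmentation with scikit-image's Felzenszwalb algorithm
# (an open-source stand-in, not eCognition's fractal net evolution method).
from skimage import data, segmentation

image = data.astronaut()  # built-in RGB sample standing in for the UAS mosaic

# Larger 'scale' yields larger, coarser objects, loosely analogous to the
# fine-to-coarse levels (e.g. L15, L40, L75, L200) discussed in the text.
for scale in (15, 40, 75, 200):
    labels = segmentation.felzenszwalb(image, scale=scale, sigma=0.8, min_size=20)
    print("scale %3d -> %5d objects" % (scale, labels.max() + 1))
```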
The L40 and L75 scales were images into homogeneous objects and uses informa- used to classify the remaining vegetation classes of tion derived from each object in the classification. various community sizes. For example, large homo- The multi-scale fractal net evolution object seg- geneous water lettuce communities were detected in mentation algorithm (Baatz & Schäpe, 2000), imple- the high (L75) scale, while mixed communities (e.g. mented in the eCognition software, starts with water lettuce and American lotus (Nelumbo lutea) individual pixels and applies a bottom-up merging were more accurately delineated in the low (L40) approach based on heterogeneity measures. scale classification. The fine (L15) scale segmentation Neighboring objects are merged until a predeter- was used to provide additional information, such as mined degree-of-fitting cost is reached. Hence, this the number of unique classes and the average class degree-of-fitting cost, named scale parameter, is a area, for the low (L40) scale classification. relative threshold that determines when the segmen- tation growth, based on spectral and spatial similar- ity, needs to be stopped. Scale controls object size and Object features leads to the multi-scale characteristic of the algo- OBIA enables the use of hundreds of spectral, tex- rithm. Object formation is greatly influenced by the tural segmentation scale as different objects can be identi- and geometrical object features computed from fied at different scales depending on image resolution the individual pixels within each object. Additionally, and the properties of the objects of interest (Laliberte contextual features are also facilitated based on the & Rango, 2009). Optimal segmentation scale is often topological relationship among features in the same determined empirically through visual inspection of segmentation level and across parent and child levels. the results (Flanders, Hall-Beyer, & Pereverzoff, 2003; The large number of often highly correlated features Radoux & Defourny, 2007), mainly based on the size highlights the typical high dimensionality problem and the complexity of the target objects (Whiteside associated with this type of analysis. We used the et al., 2011). eCognition feature optimization tool, which com- Our study area consists of diverse and mixed plant puted class separability for different feature and species including patches of different sizes that can be class combinations using the training dataset, to assist 568 R. PANDE-CHHETRI ET AL. the feature selection process. The feature optimiza- Table 1. Classes used in the classification. Invasive tion results revealed the features producing the lar- Species name Used class name plant gest average minimum distance between the training 1 Willow (Salix spp.) Willow matured, willow samples of different classes. As part of our prelimin- green 2 Smartweed (Polygonum Smartweed ary analysis, we implemented the SVM classifier (see spp.) Image Classification section) classification using the 3 Dry Cattail (Typha spp.) Cattail features identified in the feature optimization process 4 Water hyacinth Hyacinth Y 5 Water lettuce Lettuce, scattered Y as well as combinations of the features. The results of lettuce each test were assessed based on the confusion matrix 6 Water lily (Nymphaea Lily spp.) of the accuracy assessment dataset. 
The feature combination used in this study involves spectral, textural and hierarchical context features. The spectral features include the mean values of the red, green and blue bands, the maximum difference between the mean intensities of any two bands, hue and saturation. The textural features include the angular second moment and correlation derived from the Gray Level Co-occurrence Matrix (GLCM) (Haralick, Shanmugam, & Dinstein, 1973). The GLCM is a matrix summarizing the probability of pairs of pixels with specific values occurring in given directions; this study used all directions to derive the GLCM (eCognition Developer, 2012). The angular second moment measures local homogeneity and is calculated by summing the squares of the normalized values of all cells. Correlation measures the linear dependency of the gray levels of neighboring pixels (eCognition Developer, 2012). The hierarchical context features include the mean area of sub-objects at the fine scale level and the number of unique classes of sub-objects at the fine object level, where the fine object level in this study refers to level 15. We used the spectral features only to classify the objects at the L15 segmentation level. We were then able to use the classification results of this level to provide the contextual information used in the higher segmentation level (L40) classification. The use of spatial (textural) features did not improve the classification results of the L15 segmentation level, probably due to the small object size, and hence textural features were dropped from the classification process at this level. For the L40 and L75 classification, textural, spectral and contextual (computed from the L15 classification for the L40 objects) features were used.
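Both texture measures can be reproduced outside eCognition. The sketch below computes the GLCM over the four primary directions for a single object's gray levels with scikit-image (version 0.19 or later, where the functions are spelled graycomatrix/graycoprops) and averages the direction-specific angular second moment (ASM) and correlation; the random patch is stand-in data.

```python
# Sketch: GLCM angular second moment (ASM) and correlation for one image patch,
# computed over four directions, akin to the all-directions GLCM used in the study.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in gray levels

angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # "all directions"
glcm = graycomatrix(patch, distances=[1], angles=angles,
                    levels=256, symmetric=True, normed=True)

# Average the direction-specific statistics to get rotation-invariant features.
asm = graycoprops(glcm, "ASM").mean()
correlation = graycoprops(glcm, "correlation").mean()
print("ASM: %.5f  correlation: %.5f" % (asm, correlation))
```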
Image classification

Eight wetland and aquatic vegetation classes as well as two non-vegetation classes (water and unpaved road/exposed soil) were identified in the study area and are listed in Table 1.

Table 1. Classes used in the classification.

 #   Species name                       Used class name                 Invasive plant
 1   Willow (Salix spp.)*               Willow matured, willow green
 2   Smartweed (Polygonum spp.)         Smartweed
 3   Dry cattail (Typha spp.)           Cattail
 4   Water hyacinth                     Hyacinth                        Y
 5   Water lettuce*                     Lettuce, scattered lettuce      Y
 6   Water lily (Nymphaea spp.)         Lily
 7   American lotus*                    Lotus, scattered lotus
 8   Moonvine (Ipomea alba)             Moonvine
 9   Submerged grass                    Submerged vegetation
10   Water                              Water, glint
11   Unpaved road/exposed dry soil      Road

*Species split into two classes.

Some of the plant species in the area have significantly different spectral and textural properties based on age and abundance. Two classes were assigned to each of these species, and corresponding training sets were prepared. In this context, two classes each of willow (willow matured and willow green), lotus (lotus and scattered lotus) and lettuce (lettuce and scattered lettuce) were introduced. The southern side of the image shows glinted Lake Okeechobee waters, and hence a glinted water class separate from the regular water class was introduced.

To apply the supervised classification method for multiscale OBIA, we needed to collect training datasets to train the classifiers. In this study, three sets of training objects were collected on levels 15, 40 and 75 using one set of 422 randomly generated points. The objects from these levels, where the generated random points reside, were identified and interpreted visually in the images. ACE biologists, who had been actively monitoring and controlling invasive plants in the site for several years, conducted the visual interpretation process. Only the coarse level (L200) was classified using simple rules to isolate large water bodies and bright exposed soils. Besides the pure vegetation classes listed in Table 1, three mixed classes were added at the L75 segmentation level. These three mixed classes are lotus–lettuce, lotus–cattail and lettuce–smartweed.

Many OBIA studies have utilized hierarchical classification (Myint et al., 2011) to classify broad vegetation and non-vegetation land cover. Our study area is comprised mostly of wetland vegetation and water; only a small area of upland exposed soils or unpaved roads exists at the northern side of the image. It was found more efficient to broadly categorize the water and exposed soils using simple rules applied to the spectral properties of the coarse segmentation (L200) objects. The two spectral criteria applied to this level are the following (a small sketch of the two rules follows the list):

● Max difference: Non-vegetation classes (road and water) showed a flat spectral graph, representing a smaller value of maximum difference, and were separated from the rest of the wetland area using a single threshold applied to the maximum difference feature of each object.
● Blue layer mean: The non-vegetation classes were then categorized into water and road classes using a threshold applied to the blue band (water has higher blue band values).
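A minimal sketch of these two rules on one coarse object's band means is shown below. The threshold values are placeholders: the paper does not report the numbers used in eCognition, so a real implementation would need to tune them against training objects.

```python
# Sketch of the two L200-level rules (thresholds are illustrative placeholders,
# not the values used in the study).
import numpy as np

def classify_coarse_object(band_means, max_diff_threshold=0.08, blue_threshold=0.35):
    """band_means: dict of mean object values for 'red', 'green', 'blue' scaled
    to 0-1. Returns 'water', 'road', or 'vegetation' (left for later levels)."""
    values = np.array([band_means["red"], band_means["green"], band_means["blue"]])
    max_difference = values.max() - values.min()  # flat spectrum -> small value
    if max_difference < max_diff_threshold:       # rule 1: non-vegetation
        if band_means["blue"] > blue_threshold:   # rule 2: water is bluer
            return "water"
        return "road"
    return "vegetation"  # everything else passes to the L40/L75 classifiers

print(classify_coarse_object({"red": 0.30, "green": 0.31, "blue": 0.36}))  # water
print(classify_coarse_object({"red": 0.33, "green": 0.34, "blue": 0.30}))  # road
print(classify_coarse_object({"red": 0.18, "green": 0.40, "blue": 0.12}))  # vegetation
```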
The rest of the area, still unclassified, was then subjected to further classification. This area is mostly wetland vegetation plus some smaller water patches that escaped the broad categorization of the coarse scale objects. The complex land cover of the study area and the low spectral resolution of the RGB images produced low classification accuracy if only objects from a single segmentation level were used. Our preliminary analysis indicated that the high (L75) and low (L40) segmentation levels captured the main variations in patch scale and were necessary to identify the existing wetland vegetation. The fine scale (L15) segmentation level results were used to compute hierarchical context features for the L40 level classification, as mentioned earlier.

The classification process was performed on the high (L75) and low (L40) segmentation levels using the training dataset and the selected object features. The low (L40) classification results separated all plant species; however, class confusion existed in some objects located within large textured objects at the L75 level. On the other hand, the high (L75) classification performed well with the majority of classes by minimizing intra-class variability. However, its results did not separate smaller species patches in mixed areas as efficiently. Mixed classes used in the L75 classification were resolved using the L40 classification. The L40 and L75 classification results were merged, and priority was given to the single classes of the L75 classification followed by the L40 results.

Several classifiers were applied to the multi-resolution segmented objects. The ML classifier was used due to its extensive historical use (Foody, Campbell, Trodd, & Wood, 1992). The SVM and ANN machine learning methods were also used. In addition to the object-based classification, pixel-based classification was applied to the same image using the same training set data and classification methods (ML, SVM and ANN). The original image (8 cm pixel size) was down-sampled to a 30-cm image using the Pixel Aggregate algorithm implemented in the ENVI 5.0 software (Harris, 2017). The pixel-based classifiers were applied to the down-sampled image, and the results were compared with the OBIA results.
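ENVI's Pixel Aggregate method averages all input pixels that fall within each output pixel (Harris, 2017). For an integer reduction factor this reduces to a block mean, sketched below; the study's 8-cm to 30-cm case implies a non-integer factor of 3.75, which ENVI handles internally, so a factor of 4 is used here to keep the illustration exact.

```python
# Sketch: block-mean down-sampling, approximating ENVI's Pixel Aggregate method
# for an integer factor (illustrative only).
import numpy as np

def pixel_aggregate(image, factor):
    """Average non-overlapping factor x factor blocks of a (rows, cols, bands) array."""
    rows, cols, bands = image.shape
    rows, cols = rows - rows % factor, cols - cols % factor  # trim to a multiple
    blocks = image[:rows, :cols].reshape(rows // factor, factor,
                                         cols // factor, factor, bands)
    return blocks.mean(axis=(1, 3))

rgb = np.random.default_rng(2).random((100, 100, 3))  # stand-in 8-cm image chip
coarse = pixel_aggregate(rgb, 4)
print(rgb.shape, "->", coarse.shape)  # (100, 100, 3) -> (25, 25, 3)
```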
Post-classification refinement and accuracy assessment

Refinement of the OBIA classification results was performed based on species thematic and contextual information. The hyacinth plant species generally grows in close proximity to open water. To separate hyacinth, open water objects were identified as those water objects with at least half of their neighboring objects belonging to the "water" or "submerged grass" classes. Then, all hyacinth objects located more than 80 m (personal communication with ACE biologists) away from open water objects were reassigned to the combined willow-smartweed class. Finally, classes representing the same species, which had been split into separate classes with different training sets due to spectral/textural variations (the lettuce and scattered lettuce classes and the lotus and scattered lotus classes), were merged back together. Similarly, the water, glinted water and submerged grass classes were merged, since our focus is primarily on emergent wetland plants. Green-colored willows appear very close to smartweeds in color and texture, and those two land covers are visually inseparable. We do not believe there is enough information in the images at any tested resolution to separate these two classes, and hence they were merged in this study.
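The 80-m proximity rule can be expressed compactly with a distance transform. The study applies the rule to image objects in eCognition; the sketch below shows a raster approximation instead, with hypothetical class codes and grid, where hyacinth cells farther than 80 m from open water are reassigned to the willow-smartweed class.

```python
# Sketch of the 80-m proximity rule on a rasterized class map (the 80-m figure
# follows the text; class codes, grid and cell size are illustrative).
import numpy as np
from scipy import ndimage

CELL = 0.3            # cell size in metres (30-cm grid)
MAX_DIST = 80.0       # hyacinth farther than this from open water is reassigned
HYACINTH, OPEN_WATER, WILLOW_SMARTWEED = 4, 10, 12  # hypothetical class codes

class_map = np.full((400, 400), WILLOW_SMARTWEED, dtype=np.int16)
class_map[:50, :50] = OPEN_WATER
class_map[300:310, 300:310] = HYACINTH   # a patch roughly 100 m from the water

# Distance (in metres) from every cell to the nearest open-water cell.
dist = ndimage.distance_transform_edt(class_map != OPEN_WATER, sampling=CELL)

too_far = (class_map == HYACINTH) & (dist > MAX_DIST)
class_map[too_far] = WILLOW_SMARTWEED
print("hyacinth cells reassigned:", int(too_far.sum()))
```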
To evaluate the accuracy of the OBIA and pixel-based classification results, accuracy assessment data prepared independently from the training dataset were used. A confusion matrix was created for each classification result using the truth assessment points. The overall accuracy and Kappa coefficients for the OBIA and pixel-based classified maps were tabulated and compared. Figure 2 shows a schematic diagram of the overall methodology adopted in this study, including the multi-scale segmentation, classification, post-classification refinement and accuracy assessment steps.

Figure 2. Schematic diagram of the overall methodology used in this study.

Results

Object-based classification was conducted on the multiple segmentation levels of the study image using the ML, SVM and ANN classification methods. The results were compared with corresponding pixel-based classifications. Figure 3 shows three pixel-based maps resulting from the SVM, ANN and ML classifications applied at the original image (8 cm) resolution, in addition to three object-based maps produced using the corresponding classifiers. A zoomed-in representation of the same results is shown in Figure 4 to highlight the details of the classification results.

Figure 3. Classified final maps (whole study area) using various classifiers: (a) object-based SVM, (b) object-based ANN, (c) object-based ML, (d) pixel-based SVM, (e) pixel-based ANN, (f) pixel-based ML (see Figure 4 for the zoom-in area).

Figure 4. Classified final maps (zoom-in area) using various classifiers: (a) object-based SVM, (b) object-based ANN, (c) object-based ML, (d) pixel-based SVM, (e) pixel-based ANN, (f) pixel-based ML.

Visual comparisons of the maps produced using the pixel- and object-based classification methods give preference to the object-based results. As expected, the pixel-based classification results are generally fragmented and less appealing visually. The object-based results look smoother and show an overall better match to the land cover types expected from visual image interpretation. The salt-and-pepper effect was clearly alleviated by object-based classification compared to the pixel-based results; this is consistent with most research comparing the two approaches (Blaschke, 2010; Lillesand, Kiefer, & Chipman, 2014; Xie, Roberts, & Johnson, 2008).

The lettuce and lily classes are mixed in some areas on the ground. Some of the observed misclassification can be attributed to lily areas appearing in out-of-focus (smoothed) or tilted images, which produces texture features that match abundant lettuce signatures. For similar reasons, some lettuce in mixed and disturbed areas was assigned to the lily class. Figure 5 shows two examples of overlapped imagery showing the same location with different quality (note the difference in image blurriness and the presence of haze). We expect that such variations affected the image mosaic used in the study and hence reduced the achieved classification accuracy. This observation highlights the effect of image quality and consistency on the classification results. Although these variations exist in all types of aerial imagery, they probably have a more significant effect on image mosaics produced from the large number of images captured by lightweight small UAS, as is the case in our research.

Figure 5. Two examples of inconsistencies in image properties and the associated effect on the mosaic.

Quantitative assessment of the classified maps again indicates the superior performance of the object-based classification results, with higher overall accuracy and Kappa coefficients compared to the pixel-based results for the same type of classifier, as presented in Table 2. The overall accuracies of the pixel-based methods are between 50.9% and 61.9%. Pixel-based classification of the lower-resolution image scored better (up to 61.9% for SVM). The OBIA results scored better still, with the object-based machine learning approaches producing overall accuracies around 70.8%. The object-based ML classifier performed moderately, with an overall accuracy of 58.2%. The best results were achieved using the object-based SVM classification, with an overall accuracy slightly above 70% and a Kappa coefficient of about 0.66.

Table 2. Classification accuracy.

Approach                                       Classifier   Overall accuracy (%)   Kappa coefficient
Object based                                   SVM          70.8                   0.659
                                               ANN          69.48                  0.643
                                               ML           58.2                   0.512
Pixel based on high-resolution (8 cm) images   SVM          56.8                   0.503
                                               ANN          50.9                   0.441
                                               ML           53.6                   0.471
Pixel based on low-resolution (30 cm) image    SVM          61.9                   0.560
                                               ANN          53.9                   0.474
                                               ML           52.8                   0.464

Since we observed similar errors in the two highest performing object-based classification methods (ANN and SVM), only the confusion matrix of the SVM results is presented in Table 3. Even though the overall accuracy of 70.8% is on par with what is expected from true color image classification and reported by other ecological studies, the accuracy distribution is not balanced among the classes. For example, the table shows that 90.5% producer accuracy was achieved for the moonvine class, while only 33.3% was achieved for the hyacinth class. For the lettuce and hyacinth classes, which are two invasive species in the area, user accuracies are higher (73.2% and 90.0%, respectively), but the producer accuracy is moderate for the lettuce (62.5%) and low for the hyacinth (33.3%) classes. The low producer accuracy (high omission error) indicates a high probability of missing this class in the classified map. The confusion matrix in Table 3 shows that the hyacinth class is mixed mainly with the smartweed class.

Table 3. Accuracy assessment (confusion) matrix for the OBIA SVM classification results.

                          Ground truth
Class                 Cattail  Willow-smartweed  Hyacinth  Lettuce  Lily  Lotus  Moonvine  Water  Road  Other  Users' accuracy (%)
Cattail                    33                 2         0        0     0      0         0      4     0      4                 76.7
Willow-smartweed           10                49        14       10     6      4         1      4     0      1                 49.5
Hyacinth                    1                 0         9        0     0      0         0      0     0      0                 90.0
Lettuce                     0                 2         1       30     4      4         0      0     0      0                 73.2
Lily                        2                 1         0        5    14      1         0      0     0      1                 58.3
Lotus                       1                 1         2        3     0     46         1      8     0      0                 74.2
Moonvine                    0                 0         0        0     0      1        19      0     0      0                 95.0
Water                       4                 0         1        0     1      1         0     62     0      1                 88.6
Road                        0                 0         0        0     0      0         0      1     1      0                 50.0
Others                      1                 0         0        0     0      0         0      0     0      1                 50.0
Producers' accuracy (%)  63.5              89.1      33.3     62.5  56.0   80.7      90.5   78.5 100.0   12.5                 70.8
Kappa: 0.66
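The reported figures follow standard confusion-matrix arithmetic, which the short sketch below reproduces with scikit-learn on toy labels (not the study's 373 assessment points). Producer's and user's accuracies correspond to the row- and column-normalized diagonal of the matrix returned here.

```python
# Sketch: overall accuracy, Kappa and confusion matrix from assessment points.
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

truth     = ["lettuce", "lily", "hyacinth", "water", "lettuce", "cattail", "water", "lily"]
predicted = ["lettuce", "lily", "lettuce",  "water", "lily",    "cattail", "water", "lily"]

print("overall accuracy:", accuracy_score(truth, predicted))  # 6 of 8 correct
print("kappa: %.3f" % cohen_kappa_score(truth, predicted))
# Rows = ground truth, columns = predictions; diagonal / row sum gives producer's
# accuracy, diagonal / column sum gives user's accuracy.
print(confusion_matrix(truth, predicted))
```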
Discussion

This research demonstrated the use of object-based methods to classify wetland land cover, including floating vegetation species, in South Florida using ultra-high spatial resolution (8 cm) imagery captured by unmanned aerial vehicles. The advantage of using the object-based approach is obvious in comparison with the pixel-based approach, as shown in Table 2. These results serve as additional proof that improvements can be made when the object-based approach is used for image classification (Blaschke, 2010). We believe that this is specifically true when high-resolution imagery captured by small UAS is used, since pixel grouping reduces the within-class pixel variations while maintaining the fine-resolution boundaries between adjacent plant communities. With a sub-decimeter pixel resolution, the mosaicked image used in this study belongs to the ultra-high spatial resolution dataset category. In this image category, the information contained in a single pixel regarding the object it belongs to is rather limited. For example, vegetation texture is usually a mixture of irregular shadow and vegetation (Ehlers, Gähler, & Janowsky, 2003), and this kind of texture cannot be captured by a single pixel or even a pixel neighborhood analysis.

The individual pixel limitation is more significant when it comes to UAS images, considering that the most commonly used cost-effective UAS imaging platforms provide high spatial resolution images with a limited number of bands. Segmenting the image into objects aggregates the pixels into meaningful objects, for which other types of information can be extracted to supplement the poor spectral content of the image. We utilized multi-resolution objects to resolve the different patch sizes of the vegetation communities in the analyzed area. Our experimentation with single-scale classification consistently yielded lower classification results and suggested the integration of multiple segmentation levels. We used thematic rules to improve the classification results in addition to the contextual information passed from one segmentation level to the other. The use of rules and hierarchical context information is considered one of the reasons for the improvement offered by the object-based classification approach (Blaschke, 2010). More rules and other features can be explored as our subject knowledge increases and as new spatial and topological metrics are developed.

The results of our study show that the SVM and ANN results are comparable when used for wetland vegetation mapping with UAS images in the visible part of the spectrum. The difference in these results cannot clearly give preference to one of these classifiers, in contrast to some research indicating some cooling among the remote sensing community toward the ANN classifier since the early 2000s (Balabin & Lomakina, 2011; Shao & Lunetta, 2012). The accuracy achieved by our research is in line with other research efforts utilizing visible broadband imagery for vegetation classification. Our accuracy is slightly higher than the ones achieved by Zweig, Burgess, Percival, and Kitchens (2015), where UAS images were used to classify different sets of wetland vegetation communities in the Everglades area.

Our results show lower classification accuracy for the hyacinth class. The confusion matrix shown in Table 3 indicates that the hyacinth class is confused with the willow-smartweed class, which probably means that the features used in this study were not enough to differentiate between these two classes. The images used in this study have a poor spectral resolution (only three bands in the visible spectrum). Increasing the spectral resolution of the images by incorporating more bands, especially in the near-infrared region, could increase the spectral separability of the vegetation classes and improve the classification accuracy, as indicated by other researchers (Carle, Wang, & Sasser, 2014; Lane et al., 2014). Some of the commercially available multispectral and line- or frame-based hyperspectral cameras designed for small UAS could be used to improve the classification accuracy. However, these cameras are still expensive and may not be cost-effective for supporting invasive plant control operations, especially with the high possibility of sensor loss due to UAS accidents. We expect that more technologically advanced, cost-effective sensors and better UAS safety and recovery measures will facilitate the use of higher spectral resolution sensors in the near future. In fact, the research team is currently working on using an additional infrared band captured by a modified low-cost camera, in addition to the visible color images, to improve the classification accuracy for another wetland site in central Florida.

One of the potential reasons contributing to the low classification accuracy of some classes is the radiometric inconsistency among the hundreds of UAS images contributing to the image mosaic. A close examination of the images in our dataset shows that, while most of the study area is covered by clear, high-quality imagery, some of the images are blurred and radiometrically inconsistent with the rest. This may be related to the sensor-sun-object geometry, differences in flying altitude, the bidirectional reflectance properties of the vegetation and the illumination properties at the time of image exposure. When two images with different radiometric properties are mosaicked, adjacent areas look different, which affects the classification quality. Efforts are still needed to improve the radiometric calibration of UAS imagery, probably through image-to-image normalization using inherent image contents and induced calibration targets. Integrating the objects' bidirectional reflectance properties into the image radiometric calibration models could improve the classification accuracy, especially with the expected increase of high spectral resolution imagery captured by small UAS.

Based on our results, we believe that UAS is a small, fast and easily deployable system that provides an inexpensive platform for natural resource management operations over small and medium size areas when high-resolution imagery is in demand. This comes in line with the review article on UAS by Colomina and Molina (2014), which indicates that UAS has been attracting more and more researchers from the remote sensing community in different disciplines over the past 5 years. However, as an emerging data source, UAS is still in need of more research and development, especially at the data processing level.
Conclusion

According to the results of the experiments, this study indicates that the use of OBIA on high spatial resolution (sub-decimeter) UAS imagery is viable for wetland vegetation mapping, even though significant room for continued improvement still exists. Object-based classification produced higher accuracy than pixel-based classification using our ultra-high spatial resolution images. This is consistent with other similar studies (Myint et al., 2011). However, a disadvantage of the object-based scheme was also revealed in this study: a great amount of time and effort was spent on scale parameter selection and on post-classification refinement with expert knowledge. The study also showed that SVM tended to offer a higher accuracy level when compared to the other tested classifiers (i.e. ANN, ML).

Future work includes developing advanced image calibration techniques to improve the radiometric quality of the images. It is also recommended to experiment with higher spectral resolution (multispectral or hyperspectral) images, which are increasingly becoming more cost-effective and feasible for small UAS.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

Abd-Elrahman, A., Pearlstine, L., & Percival, F. (2005). Development of pattern recognition algorithm for automatic bird detection from unmanned aerial vehicle imagery. Surveying and Land Information Science, 65, 37.
Aguilar, M., Saldaña, M., & Aguilar, F. (2013). GeoEye-1 and WorldView-2 pan-sharpened imagery for object-based classification in urban environments. International Journal of Remote Sensing, 34, 2583–2606. doi:10.1080/01431161.2012.747018
Ashish, D., McClendon, R., & Hoogenboom, G. (2009). Land-use classification of multispectral aerial images using artificial neural networks. International Journal of Remote Sensing, 30, 1989–2004.
Baatz, M., & Schäpe, A. (2000). Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. Angewandte Geographische Informationsverarbeitung, XII, 12–23.
Balabin, R.M., & Lomakina, E.I. (2011). Support vector machine regression (SVR/LS-SVM) – An alternative to neural networks (ANN) for analytical chemistry? Comparison of nonlinear methods on near infrared (NIR) spectroscopy data. Analyst, 136, 1703–1712. doi:10.1039/c0an00387e
Belluco, E., Camuffo, M., Ferrari, S., Modenese, L., Silvestri, S., Marani, A., & Marani, M. (2006). Mapping salt-marsh vegetation by multispectral and hyperspectral remote sensing. Remote Sensing of Environment, 105, 54–67. doi:10.1016/j.rse.2006.06.006
Benediktsson, J.A., Swain, P.H., & Ersoy, O.K. (1990). Neural network approaches versus statistical methods in classification of multisource remote sensing data. IEEE Transactions on Geoscience and Remote Sensing, 28(4), 540–552.
Blaschke, T. (2010). Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing, 65, 2–16. doi:10.1016/j.isprsjprs.2009.06.004
Carle, M.V., Wang, L., & Sasser, C.E. (2014). Mapping freshwater marsh species distributions using WorldView-2 high-resolution multispectral satellite imagery. International Journal of Remote Sensing, 35(13), 4698–4716. doi:10.1080/01431161.2014.919685
Castillejo-González, I.L., López-Granados, F., García-Ferrer, A., Peña-Barragán, J.M., Jurado-Expósito, M., de la Orden, M.S., & González-Audicana, M. (2009). Object- and pixel-based analysis for mapping crops and their agro-environmental associated measures using QuickBird imagery. Computers and Electronics in Agriculture, 68, 207–215. doi:10.1016/j.compag.2009.06.004
Chen, Y., Shi, P., Fung, T., Wang, J., & Li, X. (2007). Object-oriented classification for urban land cover mapping with ASTER imagery. International Journal of Remote Sensing, 28, 4645–4651.
Coburn, C., & Roberts, A. (2004). A multiscale texture analysis procedure for improved forest stand classification. International Journal of Remote Sensing, 25, 4287–4308. doi:10.1080/0143116042000192367
Colomina, I., & Molina, P. (2014). Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 92, 79–97. doi:10.1016/j.isprsjprs.2014.02.013
Cordeiro, C.L.D.O., & Rossetti, D.D.F. (2015). Mapping vegetation in a late Quaternary landform of the Amazonian wetlands using object-based image analysis and decision tree classification. International Journal of Remote Sensing, 36, 3397–3422. doi:10.1080/01431161.2015.1060644
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273–297. doi:10.1007/BF00994018
Dingle Robertson, L., & King, D.J. (2011). Comparison of pixel- and object-based classification in land cover change mapping. International Journal of Remote Sensing, 32, 1505–1529. doi:10.1080/01431160903571791
Dronova, I., Gong, P., & Wang, L. (2011). Object-based analysis and change detection of major wetland cover types and their classification uncertainty during the low water period at Poyang Lake, China. Remote Sensing of Environment, 115, 3220–3236. doi:10.1016/j.rse.2011.07.006
eCognition Developer. (2012). User guide. Trimble Documentation.
Ehlers, M., Gähler, M., & Janowsky, R. (2003). Automated analysis of ultra high resolution remote sensing data for biotope type mapping: New possibilities and challenges. ISPRS Journal of Photogrammetry and Remote Sensing, 57, 315–326. doi:10.1016/S0924-2716(02)00161-2
Flanders, D., Hall-Beyer, M., & Pereverzoff, J. (2003). Preliminary evaluation of eCognition object-based software for cut block delineation and feature extraction. Canadian Journal of Remote Sensing, 29, 441–452. doi:10.5589/m03-006
Foody, G.M., Campbell, N., Trodd, N., & Wood, T. (1992). Derivation and applications of probabilistic measures of class membership from the maximum-likelihood classification. Photogrammetric Engineering and Remote Sensing, 58, 1335–1341.
Gao, P., Trettin, C.C., & Ghoshal, S. (2012). Object-oriented segmentation and classification of wetlands within the Khalong-la-Lithunya catchment, Lesotho, Africa. In Geoinformatics (GEOINFORMATICS), 2012 20th International Conference on (pp. 1–6). IEEE.
Gao, Y., & Mas, J.F. (2008). A comparison of the performance of pixel-based and object-based classifications over images with various spatial resolutions. Online Journal of Earth Sciences, 2, 27–35.
Ge, S., Carruthers, R., Gong, P., & Herrera, A. (2006). Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California. Environmental Monitoring and Assessment, 114, 65–83. doi:10.1007/s10661-006-1071-z
Haralick, R.M., Shanmugam, K., & Dinstein, I.H. (1973). Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics, SMC-3, 610–621. doi:10.1109/TSMC.1973.4309314
Harris. (2017). How does the Pixel Aggregate method work when resizing data with ENVI? Retrieved from http://www.harrisgeospatial.com/Support/SelfHelpTools/HelpArticles/HelpArticles-Detail/TabId/2718/ArtMID/10220/ArticleID/16094/How-does-the-Pixel-Aggregate-method-work-when-resizing-data-with-ENVI-.aspx
Hsieh, P.-F., Lee, L.C., & Chen, N.-Y. (2001). Effect of spatial resolution on classification errors of pure and mixed pixels in remote sensing. IEEE Transactions on Geoscience and Remote Sensing, 39, 2657–2663. doi:10.1109/36.975000
Johnson, B.A. (2013). High-resolution urban land-cover classification using a competitive multi-scale object-based approach. Remote Sensing Letters, 4, 131–140. doi:10.1080/2150704X.2012.705440
Laliberte, A.S., & Rango, A. (2009). Texture and scale in object-based analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery. IEEE Transactions on Geoscience and Remote Sensing, 47, 761–770. doi:10.1109/TGRS.2008.2009355
Laliberte, A.S., Rango, A., & Herrick, J. (2007). Unmanned aerial vehicles for rangeland mapping and monitoring: A comparison of two systems. In ASPRS Annual Conference Proceedings.
Lane, C.R., Liu, H., Autrey, B.C., Anenkhonov, O.A., Chepinoga, V.V., & Wu, Q. (2014). Improved wetland classification using eight-band high resolution satellite imagery and a hybrid approach. Remote Sensing, 6(12), 12187–12216. doi:10.3390/rs61212187
Li, M., Bijker, W., & Stein, A. (2015). Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover. ISPRS Journal of Photogrammetry and Remote Sensing, 102, 48–61. doi:10.1016/j.isprsjprs.2014.12.023
Lillesand, T., Kiefer, R.W., & Chipman, J. (2014). Remote sensing and image interpretation. John Wiley & Sons.
Maheu-Giroux, M., & de Blois, S. (2005). Mapping the invasive species Phragmites australis in linear wetland corridors. Aquatic Botany, 83, 310–320. doi:10.1016/j.aquabot.2005.07.002
Moffett, K.B., & Gorelick, S.M. (2013). Distinguishing wetland vegetation and channel features with object-based image segmentation. International Journal of Remote Sensing, 34, 1332–1354. doi:10.1080/01431161.2012.718463
Mountrakis, G., Im, J., & Ogole, C. (2011). Support vector machines in remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 66, 247–259. doi:10.1016/j.isprsjprs.2010.11.001
Myint, S.W., Gober, P., Brazel, A., Grossman-Clarke, S., & Weng, Q. (2011). Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sensing of Environment, 115, 1145–1161. doi:10.1016/j.rse.2010.12.017
Niemeyer, I., & Canty, M.J. (2003). Pixel-based and object-oriented change detection analysis using high-resolution imagery. In Proceedings 25th Symposium on Safeguards and Nuclear Material Management (pp. 2133–2136).
Olmsted, I.C., & Armentano, T.V. (1997). Vegetation of Shark Slough, Everglades National Park. Homestead, FL: South Florida Natural Resources Center, Everglades National Park.
Oruc, M., Marangoz, A., & Buyuksalih, G. (2004). Comparison of pixel-based and object-oriented classification approaches using Landsat-7 ETM spectral bands. In Proceedings of the IRSPS 2004 Annual Conference (pp. 19–23).
Pearlstine, L., Portier, K.M., & Smith, S.E. (2005). Textural discrimination of an invasive plant, Schinus terebinthifolius, from low altitude aerial digital imagery. Photogrammetric Engineering & Remote Sensing, 71, 289–298. doi:10.14358/PERS.71.3.289
Radoux, J., & Defourny, P. (2007). A quantitative assessment of boundaries in automated forest stand delineation using very high resolution imagery. Remote Sensing of Environment, 110, 468–475. doi:10.1016/j.rse.2007.02.031
Rango, A., Laliberte, A., Steele, C., Herrick, J.E., Bestelmeyer, B., Schmugge, T., ... Jenkins, V. (2006). Research article: Using unmanned aerial vehicles for rangelands: Current applications and future potentials. Environmental Practice, 8, 159–168. doi:10.1017/S1466046606060224
Shao, Y., & Lunetta, R.S. (2012). Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points. ISPRS Journal of Photogrammetry and Remote Sensing, 70, 78–87. doi:10.1016/j.isprsjprs.2012.04.001
Smith, G.M., Spencer, T., Murray, A.L., & French, J.R. (1998). Assessing seasonal vegetation change in coastal wetlands with airborne remote sensing: An outline methodology. Mangroves and Salt Marshes, 2, 15–28. doi:10.1023/A:1009964705563
St-Louis, V., Pidgeon, A.M., Radeloff, V.C., Hawbaker, T.J., & Clayton, M.K. (2006). High-resolution image texture as a predictor of bird species richness. Remote Sensing of Environment, 105, 299–312. doi:10.1016/j.rse.2006.07.003
Szantoi, Z., Escobedo, F., Abd-Elrahman, A., Smith, S., & Pearlstine, L. (2013). Analyzing fine-scale wetland composition using high resolution imagery and texture features. International Journal of Applied Earth Observation and Geoinformation, 23, 204–212. doi:10.1016/j.jag.2013.01.003
Tehrany, M.S., Pradhan, B., & Jebur, M.N. (2013). Remote sensing data reveals eco-environmental changes in urban areas of Klang Valley, Malaysia: Contribution from object based analysis. Journal of the Indian Society of Remote Sensing, 41, 981–991. doi:10.1007/s12524-013-0289-9
Veenman, C.J., Reinders, M.J., & Backer, E. (2002). A maximum variance cluster algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 1273–1280. doi:10.1109/TPAMI.2002.1033218
Wang, L., Sousa, W.P., Gong, P., & Biging, G.S. (2004). Comparison of IKONOS and QuickBird images for mapping mangrove species on the Caribbean coast of Panama. Remote Sensing of Environment, 91, 432–440. doi:10.1016/j.rse.2004.04.005
Waser, L., Baltsavias, E., Ecker, K., Eisenbeiss, H., Feldmeyer-Christe, E., Ginzler, C., ... Zhang, L. (2008). Assessing changes of forest area and shrub encroachment in a mire ecosystem using digital surface models and CIR aerial images. Remote Sensing of Environment, 112, 1956–1968. doi:10.1016/j.rse.2007.09.015
Watts, A.C., Bowman, W.S., Abd-Elrahman, A.H., Mohamed, A., Wilkinson, B.E., Perry, J., ... Lee, K. (2008). Unmanned Aircraft Systems (UASs) for ecological research and natural-resource monitoring (Florida). Ecological Restoration, 26, 13–14. doi:10.3368/er.26.1.13
Whiteside, T.G., Boggs, G.S., & Maier, S.W. (2011). Comparing object-based and pixel-based classifications for mapping savannas. International Journal of Applied Earth Observation and Geoinformation, 13, 884–893. doi:10.1016/j.jag.2011.06.008
Willhauck, G., Schneider, T., De Kok, R., & Ammer, U. (2000). Comparison of object oriented classification techniques and standard image analysis for the use of change detection between SPOT multispectral satellite images and aerial photos. In Proceedings of XIX ISPRS Congress (pp. 35–42). Citeseer.
Xie, Z., Roberts, C., & Johnson, B. (2008). Object-based target search using remotely sensed data: A case study in detecting invasive exotic Australian pine in south Florida. ISPRS Journal of Photogrammetry and Remote Sensing, 63, 647–660. doi:10.1016/j.isprsjprs.2008.04.003
Yan, G., Mas, J.F., Maathuis, B., Xiangmin, Z., & Van Dijk, P. (2006). Comparison of pixel-based and object-oriented image classification approaches – A case study in a coal fire area, Wuda, Inner Mongolia, China. International Journal of Remote Sensing, 27, 4039–4055. doi:10.1080/01431160600702632
Yang, X. (2007). Integrated use of remote sensing and geographic information systems in riparian vegetation delineation and mapping. International Journal of Remote Sensing, 28, 353–370. doi:10.1080/01431160600726763
Yu, Q., Gong, P., Clinton, N., Biging, G., Kelly, M., & Schirokauer, D. (2006). Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogrammetric Engineering & Remote Sensing, 72, 799–811. doi:10.14358/PERS.72.7.799
Zweig, C.L., Burgess, M.A., Percival, H.F., & Kitchens, W.M. (2015). Use of unmanned aircraft systems to delineate fine-scale wetland vegetation communities. Wetlands, 35, 303–309. doi:10.1007/s13157-014-0612-4
