Research Article
Open Access Peer-reviewed

UAV-based Approach to Extract Topographic and As-built Information by Utilising the OBIA Technique

Hairie Ilkham Sibaruddin, Helmi Zulhaidi Mohd Shafri, Biswajeet Pradhan, Nuzul Azam Haron
Journal of Geosciences and Geomatics. 2018, 6(3), 103-123. DOI: 10.12691/jgg-6-3-2
Received August 08, 2018; Revised September 19, 2018; Accepted October 07, 2018

Abstract

In this study, the capability of Unmanned Aerial Vehicle (UAV) optical data to provide reliable topographic and as-built information was tested using the eBee Sensefly UAV system. The Object-based Image Analysis (OBIA) technique was used to extract important geospatial information for mapping. The robust Taguchi method was adopted to optimise the segmentation process. The feature space optimisation method was used to obtain the best features for image classification utilising different supervised OBIA classifiers, namely K-nearest neighbour (KNN), normal Bayes (NB), decision tree (DT), random forest (RF) and support vector machine (SVM). Results showed that SVM obtained the highest overall accuracy, followed by RF, NB, DT and KNN at 97.20%, 95.80%, 93.14%, 86.01% and 77.62%, respectively. The McNemar test was implemented to analyse the significance of the classifier results. The as-built information showed dimensional errors of less than 1 metre compared with ground survey measurements. We conclude that the combination of UAV and OBIA provides a rapid and efficient approach for map updating. This technique could replace the current procedure that utilises piloted aircraft and satellite images for data acquisition and reduce the time needed to digitise each land cover feature for urban mapping.

1. Introduction

A topographic map provides important information that represents land use and land cover data for a given area. It is a 2D representation of the Earth's 3D landscape. Topographic data also provide an accurately measured plan of a site that encompasses a wide range of feature information, with detailed illustration of man-made and natural features on the ground, such as roads, railways, rivers, lakes and buildings. Typically, a topographic map is used as a skeleton for design work before a construction project begins and addresses the requirements of land survey, urban planning, as-built planning, hazard assessment and disaster risk management.

Within the time frame of a mapping survey, topographic map datasets are gathered from platforms such as space-borne satellites and manned aircraft. Most of the data are acquired with equipment that is too expensive to build and maintain for small-area map updating. In addition, the data are not always within the public domain. Acquiring aerial mapping is expensive given the constraints of mapping small areas, and using large-format aerial or metric cameras to acquire such data is uneconomical and unsuitable [1].

An Unmanned Aerial Vehicle (UAV) system operates a powered aerial vehicle without an onboard human operator. UAVs are prominent because they provide data with high spatial resolution [2], lightweight sensors and platforms, flexible flight planning and deployment and the elimination of long logistical dependency [3]. UAVs can also obtain timely imagery of areas that are dangerous or difficult to access by traditional means. This imagery can usually be acquired at minimal cost or at a cost lower than that of other collection methods [2, 3, 4]. Current users prefer technologies with low cost but numerous benefits. UAVs are an example of such technologies because they provide highly applicable, immediate and near-real-time data at a resolution comparable to that of terrestrial means. UAVs such as the eBee Sensefly are an excellent technology that can provide highly capable data for mapping purposes [5].

Features in an aerial photo orthomosaic are normally detected and digitised manually through visual interpretation for mapping purposes. However, these methods are time consuming, tedious and expensive [6]. Automation would provide substantial benefits [4]. The level of automation can range from semi-automatic, incorporating human interaction, to completely automated [7]. The potential of acquiring accurate and low-cost UAV data relies on automatic object reconstruction and boundary extraction [8].

Pixel-based image analysis is often used to extract low-level features. However, because an image is classified according to spectral information alone, pixels in overlapping regions are misclassified; the salt-and-pepper problem thus emerges in the classification result [9] and causes confusion among classes [10]. Meanwhile, Object-based Image Analysis (OBIA) is used to extract high-level features, which constitute shapes in images that are detected regardless of illumination, translation, orientation and scale [7].

Object-based classification of high-spatial-resolution UAV data encounters several challenges despite achieving the highest accuracy among sensor types [11]. The scale parameters used for segmentation are much larger than those used for aerial and satellite imagery. The extreme detail in the imagery is parsed into many different objects with varying spectral, morphological and proximity characteristics.

Most segmentation processes rely on trial and error, which is subjective, laborious and time consuming [10]. Hence, a solution to optimise the segmentation process for classification is required. The Taguchi method, developed by Dr. Genichi Taguchi, provides a simple statistical design tool [12]. This method involves a tabulated design (array) system that permits a maximum number of main effects to be estimated in an unbiased manner with the lowest number of experimental runs [13]. Several studies have applied this method to optimise the segmentation process in OBIA [10, 14, 15, 16].

Most studies have utilised UAV data to produce topographic products (DTM, DSM, orthophoto) [17] and land cover maps [11, 18, 19, 20]. However, the data are not fully utilised for as-built plan information. A few studies have highlighted infrastructure information using other sensors, such as satellite imagery and aerial laser scanners, for building extraction [21, 22, 23, 24].

Therefore, the first objective of this study is to assess the capability of UAVs to provide reliable topographic and as-built information by utilising the OBIA technique. Specifically, the aim is to determine the optimal OBIA parameters for segmentation and classification to deliver the required information from UAV data. The segmentation process is crucial for object classification in object-oriented image analysis. This study investigated the effect of parameter tuning with different sample numbers on the overall accuracy of the results to determine the optimal parameters. Machine learning classifiers were used. The second objective is to extract topographic information, such as land cover features, from UAV data. Lastly, the study aims to extract as-built information, such as infrastructure geometry and dimensions. The geometry derived from OBIA was compared with ground truth survey data collected using high-accuracy total station equipment.

2. Data and Methods

2.1. Study Area

The study area is situated at the National Land and Survey Institute (INSTUN) in Behrang Ulu, Tanjung Malim, Perak, Malaysia. The total area of this campus is approximately 200 acres. The area of research interest lies between latitudes 3° 45′ 58.3″ N and 3° 46′ 2.16″ N and longitudes 101° 30′ 34.94″ E and 101° 31′ 26.01″ E, with a total area of 0.3628 km². The study area is surrounded by man-made infrastructure, such as buildings, roads, drainage, sport courts, concrete benches, pavements and parking lots. Natural features, which are dominant, include bare soil, dead grass, grasslands, sand, crops, shrubs and trees. Other features include water bodies, such as a swimming pool, lakes and septic tanks, and shadows from tall buildings and trees.

2.2. Methodology

The methodology of this research was divided into three phases, as shown in Figure 2. The initial phase relied on the acquisition of data from the eBee Sensefly UAV. The orthorectified images were generated using photogrammetric techniques. Then, object-based image analysis, which involved image segmentation, selection of training and testing samples, image classification, feature selection, tuning of parameter settings for each classifier and accuracy assessment, was performed. Finally, all related data, such as the digital surface model (DSM), digital terrain model (DTM), contour lines and image classification output, were combined to generate the topographic map and the as-built plan and information.


2.2.1. Phase 1 - Data Acquisition: Pre-processing of UAV Image

The imagery data were obtained on October 18, 2016 using the eBee Sensefly UAV. The camera sensor attached to this model is a 16 MP Canon IXUS with visible colour bands (red, green and blue). During data acquisition, the side overlap was set to 60% and the front overlap to 80%. The flying altitude of the UAV was set to 190 m above ground level. The data were tied to six control points (benchmark and EDM calibration pillars).

The quality check on georeferencing showed a mean RMS error of 0.025 m. The entire dataset was georeferenced to the WGS84 datum in UTM projection, zone 47N. Raw images were mosaicked to generate an orthorectified image covering the entire study area using the photogrammetry software Pix4D. The average ground sampling distance (GSD) of this orthorectified image was 5 cm. Seven feature classes were organised and investigated as follows: (1) soil/sand, (2) urban tree, (3) building/roof, (4) impervious surface (other infrastructure), (5) grassland, (6) water body and (7) shadow.

An orthomosaic image with the DSM, DTM and contour line was generated (Figures 3(a–d)) using Pix4D. The image was subjected to automatic radiometric and geometric correction.


2.2.2. Phase 2 - Segmentation and Classification

The initial and most important process in the implementation of the OBIA technique is segmentation, which divides an image into meaningful sections that correspond to objects in the real world [25]. Multi-resolution segmentation is a bottom-up, region-based segmentation algorithm [26] and was applied using eCognition version 9.0 software. Image classification accuracy is directly controlled by the quality of the segmentation, which is in turn controlled by the defined parameters [27]. The three parameters in the multi-resolution segmentation process are scale, shape and compactness.

The parameter that most strongly affects the average image object size is the scale factor [25, 28, 29]. This factor is governed by the spatial resolution of the image and its features [9, 12]. The shape and compactness factors are associated with colour density and smoothness; they determine the amount of spectral information that should be aggregated to build the segments [14]. Initially, a trial and error process was conducted to obtain the best range of the scale parameter for image segmentation. The selected scales were then further optimised by applying the Taguchi method.

The segmentation process began by defining the possible range of multi-resolution segmentation parameters to identify pertinent values of the scale and homogeneity parameters. The analysis was performed with scale factors of 5, 25, 50, 75, 100, 125, 150, 175, 200 and 225. Visual interpretation was performed to assess the reliability of each segmented image. The main criterion was that under-segmentation be kept to an acceptable level and that over-segmented image objects be eliminated. Previous studies [17, 25, 30] have consistently emphasised the scale parameter while keeping the other factors constant. The shape parameter was set to 0.1 and compactness to 0.5 in all 10 preliminary tests to generate meaningful segmented objects; these default values were used for image segmentation.
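The following is a conceptual sketch of such a scale sweep. eCognition's multi-resolution segmentation is proprietary, so scikit-image's Felzenszwalb algorithm stands in here purely to illustrate looping over candidate scale values; the orthomosaic path and the sigma/min_size settings are assumptions, not values from the study.

```python
# Conceptual stand-in for the multi-resolution scale sweep: a generic
# region-based segmenter whose "scale" argument likewise controls the
# average object size is run over the candidate scale values.
from skimage import io, segmentation

image = io.imread("orthomosaic_rgb.tif")       # assumed UAV orthomosaic path
for scale in [5, 25, 50, 75, 100, 125, 150, 175, 200, 225]:
    labels = segmentation.felzenszwalb(image, scale=scale, sigma=0.5, min_size=50)
    print(f"scale={scale:3d} -> {labels.max() + 1} image objects")
```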

The five best preliminary segmentation results were identified via visual interpretation. The advantages of the Taguchi method include minimising the number of experiments [15] by adopting a fractional factorial design while maintaining a consistent and simple experimental design. This systematic approach can significantly reduce the total testing time and experimental cost [31]. The Taguchi orthogonal array disperses the parameters equally, and its columns depict independent orthogonal variables to guarantee an impartial comparison of all variables at each level and to examine each parameter separately [32]. In the orthogonal array, each pair of columns corresponds to independent variables, and every level combination occurs an equal number of times [13, 14, 33].

The Taguchi orthogonal array design requires only 25 experiments for the 3 main parameters rather than the 125 (5 × 5 × 5) possible combinations. The Taguchi orthogonal array was applied here using Minitab v.17 software. Prior to undertaking the statistical Taguchi optimisation, five levels of each of the three parameters were defined, as illustrated in the following table.
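As a hedged illustration of such a design, the sketch below builds a 25-run, strength-2 orthogonal array for three factors at five levels, playing the role Minitab plays in the workflow. The scale levels follow the preliminary tests; the shape and compactness level values are assumptions for illustration only.

```python
# 25-run orthogonal array for three 5-level factors: for any two columns,
# every pair of levels occurs exactly once.
import itertools

scale_levels       = [25, 50, 75, 100, 125]
shape_levels       = [0.1, 0.3, 0.5, 0.7, 0.9]     # assumed levels
compactness_levels = [0.1, 0.3, 0.5, 0.7, 0.9]     # assumed levels

def l25_design():
    """25 (scale, shape, compactness) runs; taking C = (A + B) mod 5 makes
    every pair of columns contain each level combination exactly once."""
    runs = []
    for a, b in itertools.product(range(5), range(5)):
        c = (a + b) % 5
        runs.append((scale_levels[a], shape_levels[b], compactness_levels[c]))
    return runs

for i, (scale, shape, compactness) in enumerate(l25_design(), start=1):
    print(f"Experiment {i:2d}: scale={scale}, shape={shape}, compactness={compactness}")
```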

Then, the statistical Taguchi method and the spatial objective function were fused in a single process to optimise the segmentation parameters [33, 34]. The idea of combining statistical and spatial (objective function) optimisation methods in a single process is to model the optimal parameters that guarantee an acceptable quality of segmentation [14]. The objective function fuses spatial autocorrelation and variance indices to identify relevant segmentation [35]. Spatial autocorrelation measures the level of distinctiveness between regions (heterogeneity), whereas the variance indicator shows the uniqueness (homogeneity) of the pixels within a single segment [34]. Hence, good-quality segmentation maximises both intra-segment homogeneity and inter-segment heterogeneity [14].

The first element computed is the intra-segment variance of the regions created by a segmentation algorithm, using Equation (1) [34].

Intra-segment variance,

v = \frac{\sum_{i=1}^{n} a_i v_i}{\sum_{i=1}^{n} a_i}    (1)

where a_i and v_i are the area and variance of region i, respectively. The intra-segment variance v is thus a weighted average in which the weights are the areas of the regions. The second element assessed is inter-segment heterogeneity. The function employs Moran's I autocorrelation index [36], which quantifies the degree of spatial association reflected in the data set as a whole [34].

Moran’s I index is expressed as follows:

I = \frac{n \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij} (y_i - \bar{y})(y_j - \bar{y})}{\left( \sum_{i=1}^{n} (y_i - \bar{y})^2 \right) \left( \sum_{i \neq j} w_{ij} \right)}    (2)

where n is the total number of regions, w_{ij} is the spatial weight between objects i and j (1 for adjacent regions and 0 otherwise), y_i is the mean grey value of region R_i and \bar{y} is the mean grey value of the image. The test was executed, and the corresponding plateau objective function (POF) was computed for each test based on the combination of parameter levels in the orthogonal array. The test with the highest POF indicated the best performance and was taken as the measure of the quality of optimisation [34]. The objective function F combines the intra-segment variance measure v and the inter-segment autocorrelation measured by Moran's I index I [34]. It can be expressed as follows:

F = F(v) + F(I)    (3)

where F(v) and F(I) are normalisation functions of the form

F(x) = \frac{x_{max} - x}{x_{max} - x_{min}}    (4)
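The following is a minimal sketch of Equations (1)-(4), assuming a label image of segments and a single grey band; adjacency (w_{ij} = 1 for segments sharing a pixel edge) is derived here by comparing horizontally and vertically neighbouring pixels, which is one reasonable reading of the weight definition above.

```python
# Area-weighted intra-segment variance, Moran's I on segment mean grey
# values and min-max normalisation of both measures.
import numpy as np

def segmentation_quality(labels, grey):
    ids = np.unique(labels)
    areas = np.array([np.sum(labels == i) for i in ids], dtype=float)
    means = np.array([grey[labels == i].mean() for i in ids])
    variances = np.array([grey[labels == i].var() for i in ids])

    # Equation (1): area-weighted intra-segment variance.
    v = np.sum(areas * variances) / np.sum(areas)

    # Rook-adjacency weight matrix from horizontally/vertically neighbouring pixels.
    n = len(ids)
    index = {seg: k for k, seg in enumerate(ids)}
    w = np.zeros((n, n))
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        pairs = np.stack([a.ravel(), b.ravel()], axis=1)
        for p, q in pairs[pairs[:, 0] != pairs[:, 1]]:
            w[index[p], index[q]] = w[index[q], index[p]] = 1.0

    # Equation (2): Moran's I on the segment mean grey values.
    d = means - means.mean()
    moran_i = n * np.sum(w * np.outer(d, d)) / (np.sum(d ** 2) * np.sum(w))
    return v, moran_i

def normalise(x, x_min, x_max):
    # Equation (4): F(x) = (x_max - x) / (x_max - x_min).
    return (x_max - x) / (x_max - x_min)

# Equation (3): evaluate (v, I) for every candidate segmentation, take x_min
# and x_max over the whole set, then F = normalise(v, ...) + normalise(I, ...).
```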

Subsequently, the signal-to-noise (SN) ratio was employed to model the optimal segmentation parameters. The three types of SN ratio analysis are (1) lower is better (LB), (2) nominal is best and (3) higher is better (HB). The aim of this experiment was to optimise the segmentation parameters for the image data; hence, the HB category of the SN ratio was used for modelling. The SN ratio for each experiment was calculated from the mean value \bar{y}_i and the variance s_i to determine the effect of each variable. It can be expressed as follows [14]:

SN_i = 10 \log_{10} \left( \frac{\bar{y}_i^2}{s_i^2} \right)    (5)

where \bar{y}_i is the mean and s_i^2 is the variance of experiment i, as given by Equations (6) and (7).

\bar{y}_i = \frac{1}{N_i} \sum_{u=1}^{N_i} y_{i,u}    (6)

s_i^2 = \frac{1}{N_i - 1} \sum_{u=1}^{N_i} \left( y_{i,u} - \bar{y}_i \right)^2    (7)

In Equations (6) and (7), i is the experiment number, u is the trial number and N_i is the number of trials in experiment i. The average SN ratio was employed to evaluate the result of each experiment; a high SN ratio denotes the optimal segmentation parameters. The average SN value for each level of each factor was derived using Equation (8), and the results were exported as a table and a graph.

\overline{SN}_{\ell} = \frac{1}{n_{\ell}} \sum_{i \in \ell} SN_i    (8)

where n_{\ell} is the number of experiments run at level \ell of a given factor.
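A minimal sketch of Equations (5)-(8) follows, assuming the POF scores of repeated trials are stored per experiment; it computes the SN ratio from the mean and sample variance of each experiment and then the average SN ratio of the experiments run at each factor level.

```python
import numpy as np

def sn_ratio(scores):
    """SN_i = 10 log10(ybar_i^2 / s_i^2), with the mean (Eq. 6) and the
    sample variance (Eq. 7, N_i - 1 in the denominator) of the trials."""
    y = np.asarray(scores, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

def mean_sn_per_level(sn_values, level_of_experiment, n_levels=5):
    """Eq. (8): average SN ratio of the experiments run at each factor level."""
    sn = np.asarray(sn_values, dtype=float)
    lvl = np.asarray(level_of_experiment)
    return [sn[lvl == k].mean() for k in range(1, n_levels + 1)]
```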

Seven classification classes were identified and are illustrated in Table 2.

The training and testing samples were selected randomly based on experience from ground truth assessment of the study area. The samples were divided into five sets to check the classification accuracy and to examine the influence of sample size on the classifiers; 70% of each sample set was used for training and 30% for testing. Each training and testing sample was selected differently to ensure that no repeated sample was used for accuracy assessment.
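The sketch below illustrates this sampling scheme under stated assumptions: the object features X and class labels y are placeholders, and the five set sizes are purely illustrative; each set is drawn randomly and split 70% training / 30% testing with class stratification so that every class appears in both parts.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 41))          # placeholder: 41 object features
y = rng.integers(0, 7, size=2000)        # placeholder: 7 land cover classes

sample_sets = []
for size in [150, 300, 450, 600, 715]:   # hypothetical totals for samples 1-5
    idx = rng.choice(len(y), size=size, replace=False)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[idx], y[idx], test_size=0.30, stratify=y[idx], random_state=0)
    sample_sets.append((X_tr, X_te, y_tr, y_te))
```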

An assessment using different numbers of training and testing samples was also conducted by tuning the parameters of each classifier (excluding NB) to relate the various parameter settings to the optimised classification result. Different numbers of samples were selected randomly for each class because the classes cover different areas. The soil/sand class had the highest number of object samples, followed by urban tree, building/roof, grassland, impervious surface, water and shadow, as shown in Table 3.

After selecting the training samples, spectral and spatial features were required to classify the image. The feature space optimisation (FSO) tool available in eCognition software was used for feature selection. Hundreds of features are available for classification; a total of 41 features were supplied to FSO to identify the appropriate features for further classification [18]. The selected object features were used and processed for further analysis with the different training and testing sizes. The features were divided into shape, texture and spectral properties. The best separation distances for samples 1, 2, 3, 4 and 5 were 3.975, 2.545, 2.166, 2.063 and 1.882, respectively. Sample 1, with 100 training objects, had the highest separation distance among the data samples.
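eCognition's FSO algorithm is proprietary, so the following hedged sketch only illustrates the underlying idea on made-up data: score a candidate feature subset by the smallest distance between class mean vectors in a normalised feature space and keep the subset with the largest separation. Exhaustive search is shown for a small feature count only.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # placeholder: 8 candidate features
y = rng.integers(0, 7, size=500)         # placeholder: 7 classes

def separation_distance(X, y, feature_idx):
    Xs = X[:, list(feature_idx)]
    Xs = (Xs - Xs.mean(axis=0)) / Xs.std(axis=0)     # normalise each feature
    means = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
    return min(np.linalg.norm(means[a] - means[b])
               for a, b in itertools.combinations(means, 2))

best = max(itertools.combinations(range(X.shape[1]), 3),
           key=lambda idx: separation_distance(X, y, idx))
print("best 3-feature subset:", best)
```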

Five machine learning classification algorithms, namely K-nearest neighbour (KNN), normal Bayes (NB), decision tree (DT), random forest (RF) and support vector machine (SVM), were used and tested thoroughly to evaluate their performance under varying conditions and to optimise their applicability within the OBIA technique. To optimise each classifier, several tests were performed by tuning its parameters, except for NB, which has no parameter available for adjustment. The sensitivity of each classifier was examined using the selected training and testing samples by referring to the accuracy assessment results while varying the respective parameters.

NB is a simple technique for constructing classifiers by applying the Bayes theorem (Bayesian statistics) [37]. It is not a single algorithm for training classifiers but a family of algorithms based on a common principle. The NB classifier assumes that the value of a particular feature is independent of the values of the other features. The data distribution is assumed to be a Gaussian mixture with one component per class. The algorithm estimates the mean vectors and covariance matrices of the selected features of each class for classification. The advantage of the NB classifier is that it does not require any parameter tuning, which could otherwise be subjective and time consuming.

The KNN algorithm classifies objects based on the closest training examples in the feature space. KNN is a non-parametric algorithm for instance-based (lazy) learning [38]. An object is classified by referring to the class attributes of its K nearest neighbours. Therefore, K is the key tuning parameter of this classifier, and it largely determines the performance of KNN [37]. In this study, the K value was varied from 5 to 20 in intervals of 5 to identify the optimal K value for all training sample sets.

DT learning is a data mining method in which a series of decisions is made to segment the data into homogeneous subgroups. The aim is to create a model that predicts the value of a target variable based on several input variables. This process is repeated on each derived subset in a recursive manner (recursive partitioning). The recursion is completed when the subset at a node has the same value as the target variable or when splitting no longer adds value to the predictions. The purpose of tree-building algorithms is to determine a set of if-then logical (split) conditions [38]. In this study, we tested maximum depth values from 1 to 20 for all five training samples. The other parameters, such as cross-validation folds and minimum sample count, were set to the default value of 10, and the remaining factors were kept constant.

The RF classifier assigns the class label of the training samples in the terminal node in which an object ends up [38]. As with DT, the maximum depth was varied from 1 to 20 for all five training samples, and the other parameters, such as cross-validation folds and minimum sample count, were set to the default value of 10.

SVM constructs a hyperplane or a set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification and regression analysis. The most frequently used kernel functions in SVM are the linear, polynomial, radial basis function (RBF) and sigmoid kernels [37]. In this study, the RBF kernel, which is the most frequently used and has proven superior to other kernels, was adopted. The RBF kernel has two important tuning parameters: cost (C) and Gamma. The optimal values of C and Gamma are often estimated with an exhaustive search. We systematically tested five different values of C and Gamma to examine the effect of these two key parameters on the performance of SVM within the object-based approach.
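The sketch below is a hedged stand-in for this classifier comparison using scikit-learn rather than the eCognition implementations. The parameter grids mirror the ranges described above (K = 5-20, maximum depth 1-20, five C/Gamma values), but the specific C and Gamma values and the placeholder data are assumptions, not the study's actual settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder object features and labels (7 classes) standing in for the
# selected OBIA features of the training and testing objects.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = rng.integers(0, 7, size=400)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

candidates = {
    "NB":  (GaussianNB(), {}),                       # no tuning parameter
    "KNN": (KNeighborsClassifier(), {"n_neighbors": [5, 10, 15, 20]}),
    "DT":  (DecisionTreeClassifier(random_state=0), {"max_depth": list(range(1, 21))}),
    "RF":  (RandomForestClassifier(random_state=0), {"max_depth": list(range(1, 21))}),
    "SVM": (SVC(kernel="rbf"),
            {"C": [10, 100, 1000, 10000, 100000],        # assumed grid values
             "gamma": [0.1, 0.01, 0.001, 0.0001, 0.00001]}),
}

for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=10)        # 10-fold cross validation
    search.fit(X_train, y_train)
    print(f"{name}: best {search.best_params_}, overall accuracy "
          f"{search.score(X_test, y_test):.3f}")
```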


2.2.3. Phase 3 - Accuracy Assessment

Object-based accuracy assessment is a statistical measure used to confirm the quality of classification results. The most frequently used assessment method is based on an error matrix, with appropriate accuracy measures used to compare different classification techniques [39, 40]. An error matrix is a cross tabulation of the classes of the classified imagery against the reference data. It offers a form of site-specific assessment of the correspondence, or degree of accuracy, between the classified image and the objects on site [40]. In general, the overall accuracy, producer's accuracy, user's accuracy and kappa coefficient are computed from an error matrix [41].
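A minimal sketch of these measures, given the reference and predicted labels of the testing objects, might look as follows; the function and variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def accuracy_report(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)        # rows: reference, columns: classified
    overall = np.trace(cm) / cm.sum()
    producer = np.diag(cm) / cm.sum(axis=1)      # per reference class (omission errors)
    user = np.diag(cm) / cm.sum(axis=0)          # per classified class (commission errors)
    kappa = cohen_kappa_score(y_true, y_pred)
    return cm, overall, producer, user, kappa

# e.g. cm, oa, pa, ua, kappa = accuracy_report(y_test, svm_predictions)
```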

The McNemar test was performed to compare the classifiers. This test identifies a change in the proportion of paired data to determine whether the differences in classification accuracy are statistically significant [42]. It is a non-parametric test based on a 2 × 2 matrix [43]. The assessment relies on the chi-square (χ²) distribution and indicates statistical difference by means of a z score under the null hypothesis that the classifications do not differ. A z score > 1.645 indicates significance at the 95% confidence level (p-value of 0.05) with one degree of freedom [43]. The formula is expressed as follows:

z = \frac{f_{12} - f_{21}}{\sqrt{f_{12} + f_{21}}}    (9)

where f_{12} and f_{21} denote the numbers of ground truth samples correctly classified by one classifier but misclassified by the other. These values are extracted from the classified images produced by classifiers 1 and 2 [44].
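A minimal sketch of Equation (9) follows, assuming boolean arrays marking which testing objects each of the two classifiers labelled correctly; the names are illustrative.

```python
import numpy as np

def mcnemar_z(correct_1, correct_2):
    c1 = np.asarray(correct_1, dtype=bool)
    c2 = np.asarray(correct_2, dtype=bool)
    f12 = np.sum(c1 & ~c2)    # correct with classifier 1, wrong with classifier 2
    f21 = np.sum(~c1 & c2)    # wrong with classifier 1, correct with classifier 2
    return (f12 - f21) / np.sqrt(f12 + f21)

# e.g. z = mcnemar_z(y_test == svm_pred, y_test == rf_pred);
# |z| > 1.645 indicates a significant difference at the 95% level.
```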

Three types of land cover features, namely building/roof and the impervious surfaces of drainage and road, were selected for the as-built geometrical assessment. Ground truth data for the area were collected using survey equipment, namely a total station. The staff quarters area of INSTUN was selected for this assessment.

3. Results and Discussion

3.1. Results of Preliminary Segmentation

Figure 4(a) to Figure 4(i) show the results of the preliminary segmentation using the trial and error method. The results show that scale parameters of 25, 50, 75, 100 and 125 produced relevant and acceptable segmentation of the seven selected classes compared with the other, under-segmented results. The selection criterion was an acceptable level of under-segmentation with a minimum number of over-segmented and under-segmented objects. Therefore, the five best segmentation results with different scale parameters were selected for further optimisation using the Taguchi method.

3.2. Results of Optimised Segmentation Using the Taguchi Method

Table 4 shows the result of each test, in which the equations above were evaluated to identify the optimised parameters for image segmentation. The highest POF score of 1.546 was obtained by the combination of scale, shape and compactness at levels 5, 1 and 5, respectively. Further interpretation of the table reveals that the pattern of the highest POF values agrees with the analysed SN ratio results [14]. The hybrid strategy then carried the orthogonal arrays and POF values forward to calculate the SN ratios.

Figures 5(a) and 5(b) present the main effect plots of the means and SN ratios for multi-resolution segmentation. The response tables for the means and SN ratios obtained from the Taguchi analysis are presented in Tables 5a and 5b. The results show that the optimum combination, which yielded the highest SN ratios and means, corresponds to level 5 (125) for scale, level 1 (0.1) for shape and level 2 (0.3) for compactness.

Visual judgement confirmed that the optimised segmentation parameters yielded the best results after applying the Taguchi method, as shown in Figure 6(a) to Figure 6(h). The merging of the statistical and spatial optimisation processes creates intrinsic sensitivity to the image pixels and their spatial relationships, and the strength of these properties is exploited to obtain the desired segmentation quality.

3.3. Results of Image Classification
3.3.1. Effects of Parameter Tuning of the Classifiers

The tuning parameters of the classifiers strongly influenced the classification accuracy. The SVM classifier showed the greatest impact and sensitivity to its tuning parameters, with a variation of up to 60% between the minimum and maximum overall accuracy for each data sample. KNN showed decreasing accuracy with increasing K for all data samples, in contrast to the RF classifier, whose turning point was at a maximum depth of 15; when the depth exceeded 15, the overall accuracy suddenly decreased for all data samples. For DT, the trend showed increasing overall accuracy with increasing maximum depth for all data samples, and most of the data samples gave their highest result when the parameter reached 20.

Therefore, the optimal parameter setting for SVM varied with the data sample size. The optimum values of C and Gamma were 100,000 and 0.001 for samples 2 and 5, respectively, and the result obtained was better than that of the other sample combinations, which affected the accuracy of the SVM classification. Most of the data showed that, regardless of the value of C, the overall accuracy decreased when Gamma increased to 0.001. The ranges of C and Gamma across all samples indicate that up to 90% accuracy can be obtained with a Gamma value of 0.1.

Moreover, with a small data sample, a Gamma parameter as low as 0.0001 may negatively affect accuracy, which can reach only up to 50% OA. On the contrary, with a sample size of up to 300, OA remained consistently between a minimum of 75% and 97% as the sample data increased. Thus, the tuning parameters clearly affect OA, given that even a small sample size (e.g. sample 1 with C and Gamma at 10 and 0.1) yields an OA of up to 92%. The performance of the NB, KNN, RF, DT and SVM classifiers with different sample sizes is shown in Figure 7 to Figure 8 and Table 6 to Table 11.


3.3.2. Effects of Varying the Number of Selected Samples

The results of the analysis showed that SVM, with suitable C and Gamma, obtained the highest accuracy among the five classifiers. The minimum overall accuracy and kappa coefficient of SVM over all samples were 93.82% and 0.931, respectively, with averages over the five samples of 95.96% and 0.951. In addition, the overall accuracy of SVM remained above 90% for every sample. The minimum accuracy of the SVM classifier exceeded the highest overall accuracy and kappa coefficient of NB (93.14% and 0.915), DT (86.01% and 0.830) and KNN (77.62% and 0.721). Sample 2 of RF showed the highest RF overall accuracy and kappa coefficient of 95.80% and 0.948, respectively. Sample 5 of SVM obtained the highest result, in contrast to Sample 5 of RF, which obtained the lowest RF overall accuracy and kappa coefficient despite the maximum sample size. The overall accuracy of NB was consistent from Samples 2 to 5, with an accuracy of more than 90% in each case; however, with the smallest number of training and testing samples, the accuracy decreased to 81.12%. The average accuracy and kappa coefficient of the DT classifier were 81.77% and 0.780, respectively, with Sample 1 giving the best result. The graphs in Figure 7 and Figure 8 show that KNN obtained the lowest accuracy and kappa values for all data sample sizes compared with the other classifiers.

The size of the training and testing data affected the classification accuracy of certain classifiers, such as NB and DT, whereas RF and KNN were less sensitive to the increase in sample data. SVM and RF obtained consistent overall accuracy with the most outstanding results (more than 90%) compared with the other classifiers. The best result obtained by the KNN classifier was only 77.62% accuracy and a kappa value of 0.721. Hence, increasing the number of data samples did not necessarily increase the overall classification accuracy.

The results show that increasing the training sample size from 100 to 500 (samples 1 to 5) increased the accuracy of NB and SVM by 12.02% and 3.38%, respectively. On the contrary, the accuracy of DT, RF and KNN decreased by 6.74%, 5.85% and 5.63%, respectively. The NB classifier was the most sensitive to the variation in sample size because this parametric classifier uses the training samples to estimate the parameter values for data allocation; hence, a highly accurate parameter estimation can be achieved with an increasing number of training samples. SVM is the least sensitive to sample size because it relies on support vectors rather than all training samples to create a separating hyperplane. Table 11 shows that the classification accuracy of three classifiers shifted and became inconsistent when Sample 3, with more than 300 samples, was selected, with variations of -8.74% (DT), -3.73% (RF) and -0.59% (KNN). This result reveals that Sample 3 is the turning point for these classifiers; the additional sample size disrupted the accuracy.

Figures 9(a) to 9(f) present an orthomosaic of the study area and the best classification result for each classifier with the parameters indicated in Table 12. These findings have an important implication for the selection of appropriate classifiers. SVM outperformed the other classifiers and was the best classifier for land cover classification, providing good overall classification for all sample data. NB can be an alternative classifier when the training samples are sufficiently large. RF also shows the potential to obtain accuracies as high as those of SVM with appropriate parameter settings.

The McNemar test indicated that the differences between the SVM classifier and the other classifiers were highly significant. Hence, SVM outperformed the other classifiers. The significant results of the McNemar test are presented in Table 13.

3.4. Error Assessment of Geometrical Information

Using the best classification result, that of the SVM classifier, the image underwent geometrical assessment in which the dimensional features were evaluated. The three types of civil infrastructure selected for the geometrical assessment were building, road and drainage.

The results show that the differences between the ground truth data and the image classification were less than 1 metre. The geometry of the as-built features is acceptable because the error is less than 1 m, which is justifiable for mapping purposes, as shown in Table 14. The final topographic map is shown in Figure 10.

4. Conclusion

We have evaluated and compared the performance of five machine learning classifiers in classifying high-resolution images by implementing an OBIA procedure. The SVM classifier obtained the highest results among all classifiers. The classification accuracy was affected by the tuning parameters of each classifier when different sizes of training and testing samples were used.

The extraction of as-built information was examined through a geometrical assessment of the extracted data. The as-built information (building, drainage and road) agreed with ground truth data collected with survey equipment (total station) to within an error of less than 1 metre. The combination of UAV and OBIA can provide a rapid and efficient approach for map updating, especially in rapidly changing urban areas. This technique can potentially replace current procedures that utilise piloted aircraft and high-resolution satellite data, which are more expensive and time consuming. Hence, the results of this study provide additional insight into the use of OBIA for information extraction from UAV optical imagery.

Acknowledgements

The authors would like to thank the National Land and Survey Institute for providing and sharing the UAV data used in this research. Gratitude is also extended to the staff of the Photogrammetry Section, Division of Topography Mapping, Department of Survey and Mapping Malaysia (JUPEM) for their contributions, ideas, comments and criticisms on current issues related to the scope of the study. UPM is also acknowledged for the provision of financial support via the GRF scheme. Comments from the anonymous reviewers are highly appreciated.

References

[1] Ahmad, A., Hashim, K. A., & Samad, A. M. (2010). Aerial Mapping using High Resolution Digital Camera and Unmanned Aerial Vehicle for Geographical Information System. 2010 6th International Colloquium on Signal Processing & Its Applications (CSPA), 201-206.
[2] Hardin, P. J., & Hardin, T. J. (2010). Small-scale remotely piloted vehicles in environmental research. Geography Compass, 4(9), 1297-1311.
[3] Laliberte, A. S., & Rango, A. (2009). Texture and Scale in Object-Based Analysis of Subdecimeter Resolution Unmanned Aerial Vehicle (UAV) Imagery. IEEE Transactions on Geoscience and Remote Sensing, 47(3), 761-770.
[4] Nex, F., & Remondino, F. (2014). UAV for 3D mapping applications: A review. Applied Geomatics, 6(1), 1-15.
[5] Ahmad, A. (2016). The Direction of UAV Technology in Malaysia & The Principle of Photogrammetry for UAV Mapping. PowerPoint presentation at the courses of UAV Technology and Image Processing, The National Land and Survey Institute (INSTUN), Perak, Malaysia.
[6] Mohammadi, M., Hahn, M., & J, E. (2011). Road Classification and Condition Investigation Using Hyperspectral Imagery. In Applied Geoinformatics for Society and Environment. Jomo Kenyatta University of Agriculture and Technology / Stuttgart University of Applied Sciences.
[7] Crommelinck, S., Bennett, R., Gerke, M., Nex, F., Yang, M. Y., & Vosselman, G. (2016). Review of automatic feature extraction from high-resolution optical sensor data for UAV-based cadastral mapping. Remote Sensing, 8(689).
[8] Jazayeri, I., Rajabifard, A., & Kalantari, M. (2014). A geometric and semantic evaluation of 3D data sourcing methods for land and property information. Land Use Policy, 36, 219-230.
[9] Blaschke, T., Lang, S., Lorup, E., Strobl, J., & Zeil, P. (2000). Object-Oriented Image Processing in an Integrated GIS/Remote Sensing Environment and Perspectives for Environmental Applications. Environmental Information for Planning, Politics and the Public, 555-570.
[10] Gibril, M. B. A., Shafri, H. Z. M., & Hamedianfar, A. (2017). New semi-automated mapping of asbestos cement roofs using rule-based object-based image analysis and Taguchi optimization technique from WorldView-2 images. International Journal of Remote Sensing, 38(2), 467-491.
[11] Ma, L., Li, M., Ma, X., Cheng, L., Du, P., & Liu, Y. (2017). A review of supervised object-based land-cover image classification. ISPRS Journal of Photogrammetry and Remote Sensing, 130, 277-293.
[12] Tzotsos, A., Karantzalos, K., & Argialas, D. (2010). Object-based image analysis through nonlinear scale-space filtering. ISPRS Journal of Photogrammetry and Remote Sensing, 66(1), 2-16.
[13] Rao, R. S., Kumar, C. G., Prakasham, R. S., & Hobbs, P. J. (2008). The Taguchi methodology as a statistical tool for biotechnological applications: A critical appraisal. Biotechnology Journal, 3(4), 510-523.
[14] Idrees, M. O., & Pradhan, B. (2016). Hybrid Taguchi-Objective Function optimization approach for automatic cave bird detection from terrestrial laser scanning intensity image. International Journal of Speleology, 45(3), 289-301.
[15] Moosavi, V., Talebi, A., & Shirmohammadi, B. (2013). Producing a landslide inventory map using pixel-based and object-oriented approaches optimized by Taguchi method. Geomorphology.
[16] Sameen, M. I., & Pradhan, B. (2017). A Two-Stage Optimization Strategy for Fuzzy Object-Based Analysis Using Airborne LiDAR and High-Resolution Orthophotos for Urban Road Extraction.
[17] Majeed, Z. A., Saip, S. N., & Ng, E. G. (2017). Towards augmented topographic map: Integration of digital photograph captured from MAV and UAV platform. In FIG Working Week 2017 Presentation Paper (pp. 1-15).
[18] Ma, L., Cheng, L., Li, M., Liu, Y., & Ma, X. (2015). Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 102, 14-27.
[19] Qian, Y., Zhou, W., Yan, J., Li, W., & Han, L. (2015). Comparing Machine Learning Classifiers for Object-Based Land Cover Classification Using Very High Resolution Imagery. Remote Sensing, 7, 153-168.
[20] Sharma, J. B., & Hulsey, D. (2014). Integrating the UAS in Undergraduate Teaching and Research - Opportunities and Challenges at The University of Georgia. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS Technical Commission I Symposium (Vol. XL, pp. 17-20).
[21] Grigillo, D., & Kanjir, U. (2012). Urban object extraction from digital surface model and digital aerial images. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Vol. 1-3, pp. 215-220).
[22] Murcko, J. (2017). Object-based classification for estimation of built-up density within urban environment.
[23] Suzuki, K., Liu, W., Estrada, M., & Yamazaki, F. (2013). Object-based building extraction in Tacna, Peru using WorldView-2 images. In Proceedings of ACRS 2013 (pp. 1159-1166).
[24] Tomljenovic, I., Tiede, D., & Blaschke, T. (2016). A building extraction approach for Airborne Laser Scanner data utilizing the Object Based Image Analysis paradigm. International Journal of Applied Earth Observations and Geoinformation, 52, 137-148.
[25] Kavzoglu, T., & Yildiz, M. (2014). Parameter-Based Performance Analysis of Object-Based Image Analysis Using Aerial and Quikbird-2 Images. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, II-7, 31-37.
[26] Baatz, M., & Schäpe, A. (2000). Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. Proceedings of Angewandte Geographische Informationsverarbeitung XII, 12-23.
[27] Martha, T. R., Kerle, N., Van Westen, C. J., Jetten, V., & Kumar, K. V. (2011). Segment optimization and data-driven thresholding for knowledge-based landslide detection by object-based image analysis. IEEE Transactions on Geoscience and Remote Sensing, 49(12 Part 1), 4928-4943.
[28] Li, C., & Shao, G. (2012). Object-oriented classification of land use/cover using digital aerial orthophotography. International Journal of Remote Sensing, 33, 922-938.
[29] Lowe, S. H., & Guo, X. (2011). Detecting an Optimal Scale Parameter in Object-Oriented Classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 4(4), 890-895.
[30] Pu, R., Landry, S., & Yu, Q. (2011). Object-based urban detailed land cover classification with high spatial resolution IKONOS imagery, 32(12), 3285-3308.
[31] Chou, C.-S., Ho, C.-Y., & Huang, C.-I. (2009). The optimum conditions for comminution of magnetic particles driven by a rotating magnetic field using the Taguchi method. Advanced Powder Technology, 20(1), 55-61.
[32] Aggarwal, A., Singh, H., Kumar, P., & Singh, M. (2008). Optimizing power consumption for CNC turned parts using response surface methodology and Taguchi's technique - A comparative analysis. Journal of Materials Processing Technology, 200(1-3), 373-384.
[33] Pradhan, B., Jebur, M. N., Shafri, H. Z. M., & Tehrany, M. S. (2015). Data Fusion Technique Using Wavelet Transform and Taguchi Methods for Automatic Landslide Detection From Airborne Laser Scanning Data and QuickBird Satellite Imagery. IEEE Transactions on Geoscience and Remote Sensing, 1-13.
[34] Espindola, G. M., Camara, G., Reis, I. A., Bins, L. S., & Monteiro, A. M. (2006). Parameter selection for region-growing image segmentation algorithms using spatial autocorrelation. International Journal of Remote Sensing, 27(14), 3035-3040.
[35] Gao, Y., Kerle, N., Mas, J. F., Navarrete, A., & Niemeyer, I. (2007). Optimized Image Segmentation and Its Effect on Classification Accuracy. 5th International Symposium - Spatial Data Quality, 4 p. Retrieved from http://www.itc.nl/ISSDQ2007/proceedings/postersession/gao kerleet al.pdf.
[36] Fotheringham, A. S., Brunsdon, C., & Charlton, M. (2000). Quantitative Geography: Perspectives on Spatial Data Analysis. Cleveland State University.
[37] Wieland, M., & Pittore, M. (2014). Performance evaluation of machine learning algorithms for urban pattern recognition from multi-spectral satellite images. Remote Sensing, 6(4), 2912-2939.
[38] Trimble. (2014). eCognition Developer User Guide.
[39] Nichol, J., & Wong, M. S. (2008). Habitat Mapping in Rugged Terrain Using Multispectral Ikonos Images. Photogrammetric Engineering & Remote Sensing, 74(11), 1325-1334.
[40] Foody, G. M. (2002). Status of land cover classification accuracy assessment. Remote Sensing of Environment, 80(1), 185-201.
[41] Congalton, R. G., & Green, K. (1993). A Practical Look at the Sources of Confusion in Error Matrix Generation. American Society for Photogrammetry and Remote Sensing, 59(5), 641-644.
[42] Hamedianfar, A., Shafri, H. Z. M., Mansor, S., & Ahmad, N. (2014). Improving detailed rule-based feature extraction of urban areas from WorldView-2 image and lidar data. International Journal of Remote Sensing, 35(5), 1876-1899.
[43] Foody, G. M. (2004). Thematic map comparison: evaluating the statistical significance of differences in classification accuracy. Photogrammetric Engineering & Remote Sensing, 70(5), 627-633.
[44] Leeuw, J. D., Jia, H., Yang, L., Schmidt, K., & Skidmore, A. K. (2006). Comparing accuracy assessment to infer superiority of image classification methods. International Journal of Remote Sensing, 27(1), 223-232.
[45] Goebel, K., & Saha, B. (2015). Handbook of Unmanned Aerial Vehicles. Springer Reference. Springer, Dordrecht Heidelberg New York London.
[46] Husran, Z. (2016, November 8). The use of UAV Technology in Planning and Security Monitoring. PowerPoint presentation at the courses of UAV Technology and Image Processing, The National Land and Survey Institute (INSTUN), Perak, Malaysia.
[47] International Civil Aviation Organization (ICAO) (Ed.). (2011). Unmanned Aircraft Systems (UAS). Montréal, Quebec, Canada.
[48] JUPEM (2010). JUPEM: A pictorial journey 1885-2010 (Commemorating the 125th Anniversary). Kuala Lumpur, Malaysia.
[49] Rabab, M. Z. M. (2012). UAV capabilities for the purpose of Data Acquisition for geospatial defence data. Berita Ukur, July-December 2012. Malaysia.
[50] Wang, T. Y., & Huang, C. Y. (2007). Improving forecasting performance by employing the Taguchi method. European Journal of Operational Research, 176(2), 1052-1065.

Published with license by Science and Education Publishing, Copyright © 2018 Hairie Ilkham Sibaruddin, Helmi Zulhaidi Mohd Shafri, Biswajeet Pradhan and Nuzul Azam Haron

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
