Evaluation of Minimum Noise Fraction Transformation and Independent Component Analysis for Dwelling Annotation in Refugee Camps Using Convolutional Neural Network

Omid Ghorbanzadeh, Zahra Dabiri, Dirk Tiede, Sepideh Tavakkoli Piralilo, Thomas Blaschke, Stefan Lang

Publication: Conference contribution › Poster › Peer-reviewed

Abstract

Up-to-date information on areas affected by a conflict situation or a natural disaster is essential for the coordination of humanitarian relief. The capacity to monitor such areas is also essential in the context of human rights, starting with the generation of primary information such as the number of people who have been forced to flee their homes. Refugee camps often serve as the first accommodation for these displaced people. However, monitoring conflicts and gaining access to refugee camps for ground-based documentation of human rights violations or population estimation is in most cases severely limited or dangerous. Therefore, Earth observation (EO) data, including very high resolution (VHR) images, are widely considered the most accessible source of the timely and detailed information needed to support humanitarian response. VHR satellite images have significant potential for providing humanitarian organisations with essential insights into conditions in refugee camps, including the type, number and size of dwellings and, consequently, an estimate of the number of displaced people. Although the annotation and classification of different dwellings in large and complex refugee camps is a challenging task, workflows such as object-based image analysis (OBIA) used for fully and semi-automatic information extraction have shown good results (see, e.g. Witmer, 2015). These computer-assisted workflows are mostly based on expert-knowledge rule-sets, which makes them transferable to other refugee camps; nevertheless, the rule-sets often need to be adapted. This study aims to develop an alternative approach for refugee camp mapping with convolutional neural networks (CNNs). Although many studies using CNN models have reported higher accuracies than conventional neural networks and machine learning models such as support vector machines, CNN models require much larger labelled training datasets for efficient training. Preparing such large training datasets can be expensive in practice. Thus, augmentation techniques have been developed to enlarge the training dataset artificially. The impact of different data augmentation techniques on the final accuracy is not clear from the literature; however, they are believed to have high potential as supportive techniques for improving the training performance of CNNs. Data augmentation techniques are also known as data distortion because they apply specific deformations to artificially multiply the volume of the training dataset. The most common deformations include rotation, random mirroring, image translation and window shifting (a simple sketch of these deformations is given after this paragraph). Each of these techniques may have particular pros and cons, and all have been implemented to improve model performance. In this study, we used the CNN model implemented in Trimble’s eCognition software environment, which is based on the Google TensorFlow library. We increased the spectral feature space to assist the learning procedure in our CNN workflow; the added spectral features were derived from spectral complexity reduction techniques. We compared the results with those achieved using only the original spectral bands.
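The following is a minimal, illustrative sketch of the geometric deformations mentioned above (rotation, random mirroring, translation and window shifting), assuming each training sample is a NumPy array of shape (height, width, bands); the function and variable names are hypothetical and not part of the original eCognition/TensorFlow workflow.

import numpy as np

def augment_chip(chip, rng):
    """Return the original chip plus a few geometrically distorted copies."""
    augmented = [chip]
    augmented.append(np.rot90(chip, k=rng.integers(1, 4)))   # random 90-degree rotation
    augmented.append(chip[:, ::-1, :])                        # horizontal mirroring
    augmented.append(chip[::-1, :, :])                        # vertical mirroring
    dy, dx = rng.integers(-5, 6, size=2)                      # small translation / window shift
    augmented.append(np.roll(chip, shift=(int(dy), int(dx)), axis=(0, 1)))
    return augmented

rng = np.random.default_rng(42)
chip = np.zeros((64, 64, 8), dtype=np.float32)                # e.g. one multispectral VHR image chip
samples = augment_chip(chip, rng)                             # one original plus four distorted copies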
Our study comprises the following main steps: (1) applying spectral complexity reduction techniques, including the minimum noise fraction (MNF) transformation, principal component analysis (PCA) and independent component analysis (ICA); (2) training and testing the CNN model with the augmented dataset and the original data; (3) assessing the relevance of the proposed strategy with multiple parameters by comparing the results obtained with and without data augmentation. The MNF transformation is a linear transformation method used to reduce the number of spectral bands. It consists of two separate principal component analysis rotations: first, PCA is used to de-correlate and rescale the noise in the data, resulting in transformed data in which the noise has unit variance and no band-to-band correlation; the second rotation applies PCA to the image data after they have been noise-whitened by the first rotation (an illustrative sketch of these two rotations follows this paragraph). The result of an MNF transformation is a set of projected components ordered by variance, where the first component contains the highest variation and, hence, the highest information content; the information content decreases with increasing component number. The second dimensionality reduction technique used in this study is ICA, an unsupervised feature extraction method applied to separate components under the assumption that each band is a linear mixture of independent components. The main difference between the ICA- and MNF-based dimensionality reduction techniques is that the ICA transformation does not require the assumption of a normal distribution. We applied the PCA, MNF and ICA transformations to a WorldView-3 satellite image captured on 12 April 2015. A visual inspection of the dimensionality reduction results revealed that ICA is the most supportive for data augmentation and dwelling annotation in combination with the CNN model. The accuracy assessment of the CNN approach was based on three kinds of classified objects, namely true positives (TP), false positives (FP) and false negatives (FN), from which object-based quality measures can be derived (see the second sketch below).
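As an illustration of the MNF procedure described above (two PCA rotations: noise whitening followed by an ordinary PCA on the noise-whitened data), the following simplified sketch estimates the noise covariance from neighbouring-pixel differences. This is an assumption-laden example for clarity only, not the implementation used in the study.

import numpy as np

def mnf_transform(image, n_components):
    """Simplified MNF: noise whitening (rotation 1) followed by PCA (rotation 2)."""
    rows, cols, bands = image.shape
    img = image.astype(np.float64)
    X = img.reshape(-1, bands)
    X = X - X.mean(axis=0)

    # Rotation 1: estimate noise from horizontal neighbour differences and whiten it
    noise = (img[:, 1:, :] - img[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)
    n_vals, n_vecs = np.linalg.eigh(np.cov(noise, rowvar=False))
    whitening = n_vecs @ np.diag(1.0 / np.sqrt(n_vals)) @ n_vecs.T
    X_white = X @ whitening                                   # noise now has unit variance

    # Rotation 2: ordinary PCA on the noise-whitened data, components ordered by variance
    d_vals, d_vecs = np.linalg.eigh(np.cov(X_white, rowvar=False))
    order = np.argsort(d_vals)[::-1]                          # highest information content first
    components = X_white @ d_vecs[:, order[:n_components]]
    return components.reshape(rows, cols, n_components)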
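From the three kinds of classified objects (TP, FP, FN), common object-based quality measures such as precision (user's accuracy), recall (producer's accuracy) and their harmonic mean can be derived. The sketch below is one conventional way to summarise these counts and uses placeholder numbers; it does not reproduce the exact figures of the study.

def dwelling_accuracy(tp, fp, fn):
    """Derive common object-based quality measures from TP, FP and FN counts."""
    precision = tp / (tp + fp)     # share of annotated dwellings that are correct
    recall = tp / (tp + fn)        # share of reference dwellings that were found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Placeholder counts for illustration only
precision, recall, f1 = dwelling_accuracy(tp=850, fp=120, fn=90)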
Original language: English
DOIs
Publication status: Published - 3 July 2019
Event: 39th Annual EARSeL Symposium - Salzburg, Austria
Duration: 1 July 2019 – 4 July 2019
http://symposium.earsel.org/39th-symposium-Salzburg/

Conference

Conference: 39th Annual EARSeL Symposium
Country/Territory: Austria
City: Salzburg
Period: 1/07/19 – 4/07/19
Internet address

Fields of Science 2012

  • 102 Computer Sciences
