Fusion of multisource data is becoming a widely used procedure due to the availability of complementary yet dissimilar datasets. The combined use of high spatial resolution imagery and lidar (light detection and ranging) derived digital surface models (DSMs) can reduce interclass confusion in the fusion process. However, pixel-level data fusion does not take spatial information into account: pixels from multisource images are fused according to their spectral values, regardless of the values of their neighbours. Object-level fusion overcomes this shortcoming by segmenting the multisource images into meaningful objects and then performing fusion with the information embedded in their topology. This paper compares the results of pixel- and object-level fusion of a lidar-derived DSM with colour aerial photography and multispectral imagery. The comparison is based on an assessment of classification accuracy, with reference information collected through field survey. Pixel-level fusion of the colour photography and the DSM yields better results than classification of the colour photography alone; the same holds for the multispectral imagery and the DSM. Object-level fusion achieves superior results to pixel-level classification across all tested categories. Object-level fusion of the colour photography and the DSM shows the highest classification accuracy (91%).
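The distinction between the two fusion levels can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's actual data or pipeline: the arrays stand in for co-registered imagery and a DSM, and the segmentation is hand-made; pixel-level fusion stacks the sources per pixel, while object-level fusion summarises the stacked features over each segment.

```python
import numpy as np

# Hypothetical toy scene: three spectral bands (colour photography) and one
# lidar-derived DSM band, co-registered on a 4x4 grid. Purely illustrative.
rng = np.random.default_rng(0)
rgb = rng.random((4, 4, 3))   # colour aerial photography
dsm = rng.random((4, 4, 1))   # lidar-derived surface heights

# Pixel-level fusion: stack all sources per pixel; each pixel is classified
# from its own fused feature vector, ignoring its neighbours.
pixel_features = np.concatenate([rgb, dsm], axis=-1)  # shape (4, 4, 4)

# Object-level fusion: a (pre-computed, hypothetical) segmentation assigns
# each pixel to an object; each object is then described by the mean of the
# fused features over its member pixels before classification.
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [2, 2, 3, 3],
                     [2, 2, 3, 3]])
object_features = np.stack([
    pixel_features[segments == s].mean(axis=0)
    for s in np.unique(segments)
])  # shape (4 objects, 4 features)

print(pixel_features.shape, object_features.shape)
```

The per-object mean is only one possible summary; object-based methods commonly also use texture, shape, and neighbourhood statistics of each segment.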
Outlet: Innovations in Remote Sensing and Photogrammetry (Lecture Notes in Geoinformation and Cartography)
Pages: 3–18 (16 pages)