Authors
Yalong Pi
Publication date
2020/8/6
Description
Disasters affect every aspect of society, causing significant losses and interruptions to our way of life. Timely and reliable disaster information retrieval and exchange is key to efficiently implementing disaster mitigation, preparedness, response, and recovery. While aerial surveys of disaster-affected areas are among the most effective means of disaster reconnaissance, they remain costly, slow, and resource-intensive. The research presented in this dissertation investigates the use of artificial intelligence (AI) to augment current capacities in aerial footage processing, object localization and mapping, and quantification of disaster damage. This framework can provide relatively fast and accurate disaster impact information to first responders, affected people, governments and authorities, non-governmental organizations (NGOs), and other stakeholders, ultimately improving the quality and timeliness of decisions made to increase disaster resiliency. To enable visual recognition of the extent of disaster damage, two fully annotated, multi-class video datasets, Volan2018 and Volan2019, are created. Convolutional neural network (CNN) architectures, including you-only-look-once (YOLO), RetinaNet, Mask R-CNN, and PSPNet, pre-trained on the COCO, VOC, and ImageNet datasets, are then trained and tested on both Volan2018 and Volan2019. Several experiments, including object detection, projection, mapping, and quantification, are carried out. Key performance factors, including CNN architecture, viewpoint altitude, pre-trained weights, data balance, projection mechanism, and object size, are also investigated. Findings of this work are …
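The projection and mapping step mentioned above can be illustrated with a minimal sketch: projecting a detected pixel onto the ground plane for a nadir-pointing camera at a known altitude, using a simple pinhole camera model. All names and parameters here are illustrative assumptions for exposition; the dissertation's actual projection mechanism may differ.

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, altitude_m):
    """Project an image pixel (u, v) to ground-plane offsets in metres.

    Assumes a pinhole camera looking straight down (nadir view) from
    `altitude_m` metres above flat ground, with focal lengths (fx, fy)
    in pixels and principal point (cx, cy). This is a didactic sketch,
    not the dissertation's implementation.
    """
    # Similar triangles: a pixel offset of (u - cx) corresponds to a
    # ground offset of (u - cx) * altitude / focal_length.
    x_m = (u - cx) * altitude_m / fx
    y_m = (v - cy) * altitude_m / fy
    return x_m, y_m


# Example: a detection 40 px right of and 40 px above the principal
# point, seen from 100 m with a 1000 px focal length, lies 4 m east
# and 4 m north of the point directly below the camera.
offset = pixel_to_ground(1000, 500, 1000.0, 1000.0, 960.0, 540.0, 100.0)
```

Quantifying damage extent can then proceed by accumulating such projected detections on a map and measuring the area or count they cover.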