Real-Time Adaptive Model for Satellite Image Classification in Dynamic Disaster Environments
During catastrophic events such as floods or wildfires, satellite imagery is essential for understanding conditions on the ground. However, conventional computer vision models struggle to adapt to the rapidly changing environments that arise during disasters. This reduces the accuracy with which affected regions are classified and makes it harder to identify flooded areas or destroyed buildings. The main goal of this research is to overcome these obstacles by developing an adaptive computer vision model designed specifically for satellite image classification. The model will generalize across a variety of disaster scenarios and will dynamically adjust to changing conditions through real-time adaptation mechanisms. As a result, satellite imagery can deliver more accurate and timely information during disasters, improving human safety and disaster response. This study will build on our earlier work on the classification of urban built-up areas and of land use and land cover.
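The proposal does not specify how the real-time adaptation mechanism would be realized. As one illustrative possibility only, the sketch below shows a test-time adaptation loop in the style of entropy minimization over batch-normalization parameters, applied to an unlabeled stream of incoming satellite image tiles. The backbone (ResNet-18), the class set, the tile size, and the learning rate are assumptions made for the example, not details of this work.

```python
# Illustrative sketch only: online test-time adaptation for a satellite image
# classifier. Batch-norm affine parameters are updated by entropy minimization
# as new, unlabeled image batches stream in. All names here are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # assumed label set, e.g. intact / flooded / burned / collapsed / cloud


def build_classifier() -> nn.Module:
    # Any CNN classifier works; ResNet-18 is used purely for illustration.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model


def configure_for_adaptation(model: nn.Module):
    """Freeze all weights except batch-norm affine parameters and make
    batch-norm layers use statistics of the current (shifted) batch."""
    model.train()  # batch norm uses current-batch statistics
    adaptable = []
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            module.requires_grad_(True)
            adaptable += [module.weight, module.bias]
        else:
            for p in module.parameters(recurse=False):
                p.requires_grad_(False)
    return adaptable


@torch.enable_grad()
def adapt_on_batch(model: nn.Module, optimizer, images: torch.Tensor) -> torch.Tensor:
    """One online adaptation step on an unlabeled batch of incoming imagery."""
    logits = model(images)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach().argmax(dim=1)  # adapted predictions for this batch


if __name__ == "__main__":
    model = build_classifier()
    bn_params = configure_for_adaptation(model)
    optimizer = torch.optim.SGD(bn_params, lr=1e-3, momentum=0.9)
    # Stand-in for a stream of freshly acquired image tiles of shape (B, 3, 224, 224).
    for _ in range(3):
        batch = torch.rand(8, 3, 224, 224)
        predictions = adapt_on_batch(model, optimizer, batch)
        print(predictions.tolist())
```

Restricting updates to the batch-norm parameters keeps each adaptation step cheap enough for a streaming setting while leaving the pretrained feature extractor intact; this is only one of several adaptation strategies the project could adopt.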
Related publications
- Automatic Detection of Natural Disaster Effect on Paddy Field from Satellite Images using Deep Learning Techniques, 8th International Conference on Control and Robotics Engineering (ICCRE), Niigata, Japan, April 21-23, 2023.
- Attention Toward Neighbors: A Context-Aware Framework of High-Resolution Image Segmentation, 28th IEEE International Conference on Image Processing (ICIP), Anchorage, Alaska, USA, 2021.
- Understanding the Urban Environment from Satellite Images with New Classification Method—Focusing on Formality and Informality, Sustainability, vol. 14, issue 7, MDPI, 2022.
- LULC Segmentation of RGB Satellite Image using FCN-8, in the proceedings of the 3rd SLAAI International Conference on Artificial Intelligence, Sri Lanka, 2019, and as a book chapter in Communications in Computer and Information Science, Springer, 2019.