Crop Identification and Yield Estimation using SAR Data

Accurate crop identification and yield estimation are crucial for policymakers to develop effective agricultural policies, allocate resources efficiently, and support farmers in adopting suitable technologies. However, optical remote sensing methods, commonly used for these tasks, are hampered by cloud cover and adverse weather: King et al. (2013) [4] estimated that, on average, roughly 67 percent of the Earth's surface is covered by clouds, making it difficult to obtain high-quality optical imagery. Humid and semi-humid climate zones with abundant water sources pose further challenges for optical remote sensing in agriculture. To overcome these limitations, this project aims to utilize Synthetic Aperture Radar (SAR) data for crop identification and yield estimation. Because SAR images with microwaves that penetrate clouds, it enables continuous data collection regardless of light and weather conditions. SAR is sensitive to both the dielectric and geometrical characteristics of plants, so it captures information below the vegetation canopy and provides insights into crop structure and health. Furthermore, SAR offers flexibility in imaging parameters such as incidence angle and polarization configuration, facilitating the extraction of diverse information about agricultural landscapes.
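
To make the intended pipeline concrete, the minimal sketch below classifies crops per pixel from multi-temporal VV/VH backscatter features of the kind Sentinel-1 provides. It is an illustration under stated assumptions, not the project's final design: the synthetic arrays stand in for real backscatter stacks and ground-truth labels, and the random forest is just one reasonable baseline classifier.

    # Minimal sketch: per-pixel crop classification from multi-temporal SAR
    # backscatter. Assumptions (not from the project): dual-pol (VV, VH)
    # features over T dates, with per-pixel crop labels available.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Placeholder stand-ins for real data: n_pixels samples, each a series of
    # n_dates acquisitions x 2 polarizations (in dB), plus crop labels.
    n_pixels, n_dates = 5000, 12
    X = rng.normal(loc=-12.0, scale=3.0, size=(n_pixels, n_dates * 2))
    y = rng.integers(0, 4, size=n_pixels)  # e.g., 4 crop types (assumed)

    # Simple temporal features: the raw series plus per-polarization mean and
    # range, since crop phenology shows up as seasonal backscatter dynamics.
    series = X.reshape(n_pixels, n_dates, 2)
    extras = np.concatenate([series.mean(axis=1), np.ptp(series, axis=1)],
                            axis=1)
    features = np.concatenate([X, extras], axis=1)

    X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.25,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))

The per-polarization mean and range features are a nod to crop phenology: different crops trace different seasonal backscatter curves, which is precisely the signal a SAR time series offers.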

Related Works:

  1. D. Suchi, A. Menon, A. Malik, J. Hu and J. Gao, "Crop Identification Based on Remote Sensing Data using Machine Learning Approaches for Fresno County, California," 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), Oxford, United Kingdom, 2021, pp. 115-124, doi: 10.1109/BigDataService52369.2021.00019.
  2. Liu, C., Chen, Z., Shao, Y., Chen, J., Hasi, T., & Pan, H. (2019). Research advances of SAR remote sensing for agriculture applications: A review. Journal of Integrative Agriculture, 18(3), 506-525.
  3. J. Singh, U. Devi, J. Hazra and S. Kalyanaraman, "Crop-Identification Using Sentinel-1 and Sentinel-2 Data for Indian Region," IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018, pp. 5312-5314, doi: 10.1109/IGARSS.2018.8517356.
  4. King, M. D., Platnick, S., Menzel, W. P., Ackerman, S. A., & Hubanks, P. A. (2013). Spatial and Temporal Distribution of Clouds Observed by MODIS Onboard the Terra and Aqua Satellites. IEEE Transactions on Geoscience and Remote Sensing, 51(7), 3826-3852. doi: 10.1109/TGRS.2012.2227333.

Test-Time Domain Adaptation for Urban Categorization from Satellite Images

The urban environment is a complex system comprising elements such as buildings, roads, vegetation, and water bodies. Classifying urban areas from satellite or UAV imagery is an important task for urban planning, disaster management, and environmental monitoring. Urban environments differ widely across the world's cities, and existing urban classification models struggle to generalize across this variation. This project aims to develop an adaptive model for classifying urban areas from satellite images: one that generalizes across different urban environments and adapts to a changing environment in real time. The model will be trained on a large dataset of satellite images of cities from different parts of the world. At inference (test) time, its parameters will be updated in response to the characteristics of the urban environment in which it is deployed, as illustrated in the sketch below. This work builds on recent test-time domain adaptation methods and on our earlier research on the categorization of urban buildup [1] and of land usage and land cover [2,3].
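
One widely known recipe for this kind of adaptation is entropy minimization over normalization parameters, in the spirit of TENT (Wang et al., 2021). The sketch below illustrates that recipe in PyTorch under loud assumptions: the ResNet backbone, the five-class output, and the random test batch are placeholders, and adapting only BatchNorm affine parameters is one common convention, not necessarily this project's choice.

    # Minimal sketch of entropy-minimization test-time adaptation (in the
    # spirit of TENT): adapt only the affine parameters of normalization
    # layers on unlabeled test batches. Backbone, class count, and data are
    # placeholders, not this project's actual choices.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    model = resnet18(num_classes=5)  # e.g., 5 urban categories (assumed)
    model.train()  # BatchNorm uses test-batch statistics in train mode

    # Freeze everything, then unfreeze only normalization scale/shift.
    for p in model.parameters():
        p.requires_grad_(False)
    adapt_params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.requires_grad_(True)
            m.bias.requires_grad_(True)
            adapt_params += [m.weight, m.bias]

    opt = torch.optim.Adam(adapt_params, lr=1e-3)

    def entropy(logits):
        probs = logits.softmax(dim=1)
        return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

    # One adaptation step per incoming batch of test tiles.
    test_batch = torch.randn(8, 3, 224, 224)  # placeholder imagery
    loss = entropy(model(test_batch))
    opt.zero_grad()
    loss.backward()
    opt.step()

Adapting only the normalization parameters keeps each update cheap and guards against catastrophic drift: the vast majority of weights stay frozen while the model tracks the statistics of the new city.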

Related Works:

  1. Cheng, Q.; Zaber, M.; Rahman, A.M.; Zhang, H.; Guo, Z.; Okabe, A.; Shibasaki, R. Understanding the Urban Environment from Satellite Images with New Classification Method—Focusing on Formality and Informality. Sustainability 2022, 14, 4336. https://doi.org/10.3390/su14074336
  2. Rahman, A.K.M.M.; Zaber, M.; Cheng, Q.; Nayem, A.B.S.; Sarker, A.; Paul, O.; Shibasaki, R. Applying State-of-the-Art Deep-Learning Methods to Classify Urban Cities of the Developing World. Sensors 2021, 21, 7469. https://doi.org/10.3390/s21227469
  3. Niloy, F.F.; et al. Attention Toward Neighbors: A Context Aware Framework for High Resolution Image Segmentation. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP); IEEE, 2021.

Developing a Multi-Agent Framework for Multimodal Multi-Task Learning

This project focuses on enhancing the capabilities of large multimodal models. Multimodal learning is an area of machine learning where models process and correlate information from multiple input modalities, such as text, images, and audio. We are developing a multi-agent framework in which each agent specializes in a specific modality and task. These agents work in tandem: the framework dynamically engages the agents specialized for each incoming task, enabling the system to handle multiple tasks simultaneously. By integrating these multi-agent ideas into large multimodal models, the project aims to significantly improve multi-task performance and generalization to new tasks.
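
To make the routing idea concrete, here is a minimal sketch of a dispatcher that selects a modality- and task-specialized agent for each request. Every name in it (the Task record, the registry, the two toy agents) is an illustrative assumption, not the framework's actual API.

    # Minimal sketch of modality-routed multi-agent dispatch. All names here
    # (Task, REGISTRY, the toy agents) are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Callable, Dict, Tuple

    @dataclass
    class Task:
        modality: str  # e.g., "text", "image", "audio"
        kind: str      # e.g., "caption", "summarize", "transcribe"
        payload: object

    # Each agent is a callable specialized for one (modality, kind) pair;
    # in practice these would wrap modality-specific models.
    def image_captioner(task: Task) -> str:
        return f"[caption for image {task.payload!r}]"

    def text_summarizer(task: Task) -> str:
        return f"[summary of text {task.payload!r}]"

    REGISTRY: Dict[Tuple[str, str], Callable[[Task], str]] = {
        ("image", "caption"): image_captioner,
        ("text", "summarize"): text_summarizer,
    }

    def dispatch(task: Task) -> str:
        """Route a task to the agent specialized for its modality and kind."""
        agent = REGISTRY.get((task.modality, task.kind))
        if agent is None:
            raise ValueError(f"no agent for {(task.modality, task.kind)}")
        return agent(task)

    print(dispatch(Task("image", "caption", "tile_042.png")))
    print(dispatch(Task("text", "summarize", "field report")))

In a full system the callables would wrap modality-specific models rather than string formatters, and the static registry lookup could be replaced by a learned or LLM-driven router; the dispatch structure stays the same.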

Related publications:

  1. Xie, J., Chen, Z., Zhang, R., Wan, X., & Li, G. (2024). Large Multimodal Agents: A Survey. arXiv:2402.15116. https://doi.org/10.48550/arXiv.2402.15116
  2. Liu, Z., Yao, W., Zhang, J., Yang, L., Liu, Z., Tan, J., Choubey, P. K., Lan, T., Wu, J., Wang, H., Heinecke, S., Xiong, C., & Savarese, S. (2024). AgentLite: A Lightweight Library for Building and Advancing Task-Oriented LLM Agent System. arXiv:2402.15538. https://doi.org/10.48550/arXiv.2402.15538
  3. Li, S., Wang, R., Hsieh, C.-J., Cheng, M., & Zhou, T. (2024). MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion. arXiv:2402.12741. https://doi.org/10.48550/arXiv.2402.12741

Undergraduate Project Update Presentation Day

The CCDS Undergrad Project Update Presentation Day was held on February 8, 2024. Eight groups, under the supervision of CCDS mentors, showcased their progress and findings. The presentations spanned a diverse range of topics and research endeavors, the culmination of dedicated, collaborative work within the CCDS community, and the event gave students a platform to share their achievements and insights with peers and faculty members alike.

Boundary-Enhanced Attention for Satellite Imagery

Satellite image classification presents challenges distinct from traditional urban scene datasets, including severe class imbalance and the scarcity of comprehensive examples within single frames. While recent advances in semantic segmentation and metric learning have shown promise on urban scene datasets, their direct applicability to satellite imagery remains uncertain. This paper introduces the Boundary Attention (BA) loss, designed specifically to address these challenges. The BA loss emphasizes boundary regions within satellite imagery, directing additional attention to minority classes and to pixels along class boundaries in order to untangle the complex relations among classes. Through comprehensive experimental evaluation and comparison with existing methods, the paper demonstrates the effectiveness and adaptability of the BA method, offering a tailored solution that stands to significantly improve the accuracy and robustness of satellite image classification.
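
One way such boundary emphasis can be realized is a boundary-weighted cross-entropy, sketched below: class boundaries are extracted from the label map with a morphological gradient (max-pool dilation minus erosion), and pixels on those boundaries receive a larger loss weight. The weighting scheme, kernel size, and weight value are illustrative assumptions for exposition, not the paper's exact formulation.

    # Minimal sketch of a boundary-weighted segmentation loss: pixels near
    # class boundaries, found via max-pool dilation/erosion of the label map,
    # receive a larger weight. Hyperparameters are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def boundary_weighted_ce(logits, labels, boundary_weight=4.0, ksize=3):
        """logits: (B, C, H, W); labels: (B, H, W) integer class map."""
        onehot = F.one_hot(labels, logits.shape[1]).permute(0, 3, 1, 2).float()
        pad = ksize // 2
        dilated = F.max_pool2d(onehot, ksize, stride=1, padding=pad)
        eroded = -F.max_pool2d(-onehot, ksize, stride=1, padding=pad)
        # A pixel lies on a boundary if dilation and erosion disagree for any
        # class in its neighborhood.
        boundary = ((dilated - eroded) > 0).any(dim=1).float()  # (B, H, W)
        weights = 1.0 + (boundary_weight - 1.0) * boundary
        per_pixel = F.cross_entropy(logits, labels, reduction="none")
        return (weights * per_pixel).sum() / weights.sum()

    # Toy usage with random tensors.
    logits = torch.randn(2, 6, 64, 64, requires_grad=True)
    labels = torch.randint(0, 6, (2, 64, 64))
    loss = boundary_weighted_ce(logits, labels)
    loss.backward()

A side effect worth noting: small minority-class regions have a high perimeter-to-area ratio, so up-weighting boundary pixels also shifts attention toward minority classes, which matches the motivation stated above.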