We are excited to offer research internship positions

This can be a great opportunity to participate in our research projects and study sessions to prepare yourself for higher studies. We are looking for students who are truly enthusiastic about research and mathematics.

PROJECTS

  • Multimodal Models  
  • Bangla Language Processing
  • Reasoning and Causality in LLMs
  • Graph Neural Networks
  • State Space Models
  • ML for Physics and Biological sciences
  • Data Analysis
  • Remote Sensing using Satellite Data
  • Human Computer Interaction

ELIGIBILITY

  • Completed undergraduate degree
  • Must have a strong background in linear algebra and calculus
  • Must be interested in learning advanced math
  • Must have knowledge of ML, DL, and computational methods
  • Experience coding in PyTorch

WHY BE AN INTERN?

  • Enhance your research experience
  • Receive hands-on coding training
  • Participate in advanced math sessions
  • Join regular paper discussion sessions
  • Be mentored by experienced research assistants and faculty members
  • Publish in prestigious venues
  • Become a full-time research assistant

HOW TO APPLY?

Apply through this form by Nov 15, 2024:

https://forms.gle/PcEaSM8Uwt76wfeB7
Contact ccds@iub.edu.bd, visit https://ccds.ai/, or see the Facebook page CCDS.IUB for more information.

(full-time positions to open in July ‘25)

Be ready for higher studies

ATTENTION

Five unpaid internship positions are open.

UFlow-Net: A Unified Approach for Improved Video Frame Interpolation

In computer vision, video frame interpolation plays a significant role in video enhancement by synthesizing intermediate frames to improve temporal resolution and visual quality. These techniques help reduce motion blur, create smoother slow-motion videos, and enhance the overall viewing experience, especially in low-frame-rate videos. This is vital for applications such as video processing, streaming, and video restoration. We are developing UFlow-Net, a deep learning-based model that improves frame interpolation accuracy.

The process starts with a dataset of three consecutive video frames. The first and third frames are used as inputs, while the second frame is used as a reference for evaluation. These frames go through preprocessing steps such as resizing, normalization, and stacking.
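As an illustration, the normalization-and-stacking step might look like the following numpy sketch (the actual UFlow-Net preprocessing, including the resizing step, may differ):

```python
import numpy as np

def preprocess_frames(frame1, frame3):
    """Normalize two uint8 RGB frames to [0, 1] and stack them channel-wise."""
    f1 = frame1.astype(np.float32) / 255.0
    f3 = frame3.astype(np.float32) / 255.0
    # (H, W, 3) + (H, W, 3) -> (H, W, 6): both inputs presented to the model at once
    return np.concatenate([f1, f3], axis=-1)

# Example with two dummy 64x64 RGB frames
a = np.zeros((64, 64, 3), dtype=np.uint8)
b = np.full((64, 64, 3), 255, dtype=np.uint8)
stacked = preprocess_frames(a, b)
```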

Next, the preprocessed frames are passed into UFlow-Net, which consists of two key steps. The Flow-Enhanced Encoder-Decoder captures motion and spatial details from the input frames and reconstructs the features while keeping the motion consistent. The Refined Frame Synthesis step further refines the features and generates the missing middle frame using the learned motion patterns and spatial relationships. We evaluate our model using PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure). Our model achieved a PSNR of 35.65 dB and an SSIM score of 0.97.
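For reference, PSNR and a simplified whole-image SSIM can be computed as follows; this is a generic numpy sketch (no sliding window for SSIM), not the exact evaluation code used for UFlow-Net:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

def global_ssim(x, y, max_val=1.0):
    """Simplified SSIM computed over the whole image (no 11x11 Gaussian window)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

ref = np.linspace(0.0, 1.0, 256).reshape(16, 16)
noisy = ref + 0.1            # a uniformly offset "prediction": MSE = 0.01 -> 20 dB
score_db = psnr(ref, noisy)
similarity = global_ssim(ref, ref)  # identical images -> SSIM of 1.0
```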

Relevant publications: 

F. Israq, S. B. Alam, H. Khatun, S. S. Sarker, S. T. Bhuiyan, M. Haque, R. Rahman and S. Kobashi, “UFlow-Net: A Unified Approach for Improved Video Frame Interpolation,” in Proc. 2024 27th International Conference on Computer and Information Technology (ICCIT), Cox’s Bazar, Bangladesh, Dec. 20-22, 2024.

ROI-Guided Lumbar Spine Degeneration Detection

Lumbar Spine Degeneration, otherwise known as Disk Degeneration Disease, is the deterioration/weakening of the intervertebral disks in the lower back, causing them to lose their ability to absorb shock and potentially leading to pain and discomfort due to nerve compression. While lumbar degeneration is a natural part of aging, its progression and severity can vary from person to person. Early detection and classification of the type and stage of degeneration are crucial for effective management and treatment. We are using the power of AI to automate the detection of lumbar degeneration.

First, we merge and pre-process the Magnetic Resonance Imaging (MRI) scans, converting them to RGB images with multiple disease labels per image. Then, we use a YOLO (You Only Look Once) architecture to detect the regions of interest (ROIs) that include the intervertebral disks in the MRI scans, in both the sagittal and axial planes. Finally, the ROIs are classified into different degrees of degeneration.
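For illustration, once the detector returns a bounding box, cropping the ROI reduces to simple array slicing; the box coordinates below are hypothetical, not real detector output:

```python
import numpy as np

def crop_roi(image, box):
    """Crop a detected region of interest given an (x1, y1, x2, y2) pixel box."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

# A dummy 100x100 "MRI slice" with distinct pixel values for checking the crop
mri = np.arange(100 * 100).reshape(100, 100)
disk_roi = crop_roi(mri, (10, 20, 50, 60))  # hypothetical 40x40 disk region
```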

The goal of this research is to detect Lumbar Spine Degeneration as early as possible in order to ensure the best quality of life and outcome for the patient.

Detection-Guided Kidney Abnormality Segmentation

Tumors and cysts are two major kidney abnormalities which can lead to cancer if not detected and treated in time. The current diagnostic process depends on computed tomography (CT) scan screening, which is time-consuming and specialist-dependent. This leads to fatigue for radiologists and doctors, and as a result, diagnostic errors increase. To enhance the diagnostic process, we are developing an AI-based automated method for kidney abnormality detection.

The first step in this method is kidney detection using the YOLOv8 model. This is conducted on 2D slices extracted from the 3D CT volume. The detected kidney regions are then cropped to reduce the background area. After that, abnormal region segmentation is conducted using the U-Net segmentation model; the abnormal region consists of either a cyst, a tumor, or both. This produces a mask of the abnormal region for each 2D slice. The 2D mask slices are then combined to construct a 3D mask of the abnormal region.
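As an illustration of the final step, the per-slice masks can be stacked back into a 3D volume; in this numpy sketch, tiny placeholder masks stand in for the actual U-Net outputs:

```python
import numpy as np

def masks_to_volume(mask_slices):
    """Stack per-slice 2D binary masks into a 3D mask volume (depth first)."""
    return np.stack(mask_slices, axis=0)

# Three placeholder 4x4 slice masks; the middle slice contains an "abnormal" region
slices = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
slices[1][1:3, 1:3] = 1
volume = masks_to_volume(slices)
```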

This study aims to automate the detection of kidney tumors and cysts, offering a faster and enhanced approach to assist doctors and radiologists in the diagnostic process.

Relevant publications:

1. J. Faruk, S. B. Alam, S. S. T. Elma, S. Wasi, R. Rahman and S. Kobashi, “Kidney Abnormality Detection Using Segmentation-Guided Classification on Computed Tomography Images,” 2024 International Conference on Machine Learning and Cybernetics (ICMLC), Miyazaki, Japan, 2024, pp. 414-419, doi: 10.1109/ICMLC63072.2024.10935113.

2. S. Wasi, S. B. Alam, R. Rahman, M. A. Amin and S. Kobashi, “Kidney Tumor Recognition from Abdominal CT Images using Transfer Learning,” 2023 IEEE 53rd International Symposium on Multiple Valued Logic (ISMVL), Matsue, Japan, 2023, pp. 54-58, doi: 10.1109/ISMVL57333.2023.00021.

Pelvic Bone Segmentation Guided Fracture Classification

Pelvic fractures are critical injuries that require timely and precise diagnosis. We are developing an AI-based automated computer-aided diagnosis (CAD) system to enhance the accuracy and reliability of pelvic fracture detection, ultimately supporting faster and more effective medical assessments.

The first step in our approach is multi-bone segmentation, a deep learning process that identifies and isolates the nine pelvic bones from the X-ray. The segmented bone masks are then aggregated to create a refined mask of the pelvic region. 

Next, we extract the segmented X-ray from the original X-ray using the aggregated mask. This helps the system focus only on relevant areas instead of the entire X-ray. We then feed it into a separate deep learning model for classification. This classification model determines whether a fracture is present.
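A minimal numpy sketch of this aggregation-and-extraction idea, with tiny hypothetical masks standing in for the real per-bone segmentation outputs:

```python
import numpy as np

def aggregate_masks(bone_masks):
    """Union the per-bone binary masks into one pelvic-region mask."""
    agg = np.zeros_like(bone_masks[0])
    for m in bone_masks:
        agg = np.maximum(agg, m)
    return agg

def extract_region(xray, mask):
    """Zero out everything outside the aggregated mask."""
    return xray * mask

# Dummy 8x8 "X-ray" and two hypothetical bone masks
xray = np.ones((8, 8), dtype=np.float32)
m1 = np.zeros((8, 8), dtype=np.float32); m1[:4, :] = 1   # upper half
m2 = np.zeros((8, 8), dtype=np.float32); m2[4:, :4] = 1  # lower-left quarter
region = extract_region(xray, aggregate_masks([m1, m2]))
```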

To improve interpretability, we use GradCAM visualization, which highlights the critical areas the AI focuses on during detection. This ensures the model’s decisions are transparent and aligned with anatomical relevance.
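The core Grad-CAM computation can be sketched in a few lines of numpy, assuming the feature-map activations and their gradients have already been extracted from the classifier (the arrays below are random stand-ins, not real model outputs):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map by its average gradient, sum, ReLU, normalize.
    activations, gradients: arrays of shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # one importance weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # scale to [0, 1] for heatmap display
    return cam

rng = np.random.default_rng(0)
acts = rng.random((4, 7, 7))   # stand-in activations
grads = rng.random((4, 7, 7))  # stand-in gradients
heatmap = grad_cam(acts, grads)
```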

Relevant publications:

S. A. Ul Alam, S. Binte Alam, S. Saha, M. Haque, R. Rahman and S. Kobashi, “Pelvic bone region segmentation (PBRS) from X-ray image using convolutional neural network (CNN),” 2023 26th International Conference on Computer and Information Technology (ICCIT), Cox’s Bazar, Bangladesh, 2023, pp. 1-6, doi: 10.1109/ICCIT60459.2023.10441155.

Group photo of CSE, IUB faculty members who are part of CCDS

From left to right: Dr. Ashraful Islam (Assistant Prof), Asif Mahmud (Lecturer), Dr. Md Rashedur Rahman (Assistant Prof), Dr. Saadia Binte Alam (Associate Prof), Dr. Farhana Sarker (Associate Prof), Dr. Ashraful Amin (Prof), Dr. Amin Ahsan Ali (Prof), Dr. AKM Mahbubur Rahman (Associate Prof), Sanzar Alam (Lecturer).

We welcome the new CSE faculty members joining CCDS. Here is a brief introduction:

Dr. Saadia Binte Alam’s research interests are medical image and signal processing and machine learning. She is the current Head of the Department of CSE, IUB.

Dr Farhana Sarker recently joined IUB; her research involves health informatics, human-computer interaction, and machine learning. She is the recipient of several international grants, including one in 2024 from the Bill and Melinda Gates Foundation.

Dr Md Rashedur Rahman recently joined IUB after completing his D.Eng degree at the University of Hyogo, Japan, in 2024. He works on medical image processing and analysis, video analysis, and machine learning, among other areas.

We are expecting to have faculty members from other disciplines joining CCDS.

Our paper got accepted to CHI (the Top Ranked Conference on Human Factors in Computing Systems) 2025’s Late Breaking Work Track!

Our paper titled “Improving User Engagement and Learning Outcomes in LLM-Based Python Tutor: A Study of PACE” got accepted to the CHI 2025 Late Breaking Work track!
What gives us even more pleasure is that this work came out of an undergraduate senior project. The project was done by two IUB CSE graduates, Ashfaq and Shochcho, and was supervised by Prof Amin Ahsan Ali. Other coauthors include CCDS RA Rohan, Dr Ashraful Islam, Hasnain Heickal, Dr Akm Mahbubur Rahman, and Prof Ashraful Amin.
This paper introduces PACE (Python AI Companion for Enhanced Engagement), a system leveraging Small LMs to deliver step-by-step guidance and adaptive feedback for teaching Python.
This study examines (1) the PACE system’s effectiveness in programming education according to learners, (2) learners’ trust in PACE versus traditional resources, and (3) design recommendations to enhance engagement and learning outcomes. PACE contributes to advancing cost-effective, scalable programming education.

Watch the demonstration video. [Click here…]

2nd CCDS Workshop on Deep Learning Code Management

This is the 2nd iteration of the workshop. This time we invited 28 graduates from IUB, NSU, BRAC U, UIU, MIST, IUT, AUST, Science University of Malaysia, DU, and BUET who had applied for a research intern position at CCDS. Graduates and undergrads from IUB joined the workshop alongside them. Topics included hands-on coding with CNNs, LSTMs, and Transformers, as well as code management and visualization using VS Code, PyTorch Lightning, and wandb.ai.

Dr. AKM Mahbubur Rahman conducted the workshop along with the research assistants. Dr Ashraful Amin, Director of CCDS, handed out the certificates.

We would like to especially thank our RAs – Fahim, Iftee, Moshiur, Sazzat, Monon, and Nabarun – for helping prepare the materials and for assisting Dr Mahbub in conducting the sessions.
We plan to share the resources used in the workshop soon.

Announcement of the Interdisciplinary Computational Biology workshop 2025

This 3-day workshop will be open to all senior undergrad and graduate students, researchers, and faculty members. The aim of this workshop is to encourage participants to pursue interdisciplinary research. For more details, see the flyer below.

Congratulations to Muhtasim Ibteda and Ashfaq for completing their senior project.

They developed PACE, a Python AI Companion for Enhanced Engagement. For this work, they generated synthetic data from GPT-3.5 Turbo for scaffolding and conversation fine-tuning. A LoRA fine-tuned Gemma 2B model was used to keep the system relatively lightweight. The scaffolding data trains the LLM to break down complex problems into subproblems and to generate hints and structured steps for students. The conversation data, on the other hand, allows the LLM to engage users in natural, human-like dialogue, avoid hallucinations, support error correction and detailed feedback, and enhance motivation and interest through interaction with users of different learning styles and paces. An evaluation of the system was also performed.

A wider evaluation of the system is underway, and we plan to make version 2 of the system available to our intro to Python programming students. Interaction datasets collected from students (with their consent, of course) will be valuable for making such a system more reliable.

We hope to see more exciting work from Ibteda and Ashfaq in the near future.

Congratulations to Farzana Islam, Sumaya, Md. Fahad Monir, and Dr Ashraful Islam on getting their paper accepted in Data in Brief (Elsevier).

The paper presents the FabricSpotDefect dataset, an annotated dataset for identifying spot defects in different fabric types.

Here is a short description of the paper:

The FabricSpotDefect dataset is, to the best of our knowledge, the first dataset specifically designed to challenge computer vision in accurately detecting fabric spots. It contains a total of 1,014 raw images with 3,288 manually annotated spots across different categories. The dataset expands to 2,300 augmented images after applying six categories of augmentation techniques: flipping, rotating, shearing, saturation adjustment, brightness adjustment, and noise addition. We conducted annotations manually on the original images rather than the augmented ones, to preserve their real-world character. Two annotation versions are provided for the augmented images: a YOLOv8 format with 7,641 annotations and a COCO format with 7,635 annotations. The dataset covers various fabric types such as cotton, linen, silk, denim, patterned textiles, and jacquard fabrics, and spots such as stains, discolorations, oil marks, rust, and blood marks. These kinds of spots are quite difficult to detect manually or using traditional methods. The images were captured under home lighting, using basic everyday clothes, in normal conditions, grounding the FabricSpotDefect dataset in real-world applications.
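A few of the augmentation categories listed above (flipping, rotation, brightness adjustment) can be sketched with numpy; these are illustrative operations, not the exact augmentation settings used to build the dataset:

```python
import numpy as np

def augment(image):
    """Generate simple augmented variants of a uint8 fabric image:
    horizontal/vertical flips, a 90-degree rotation, and a brightness shift."""
    return {
        "hflip": np.flip(image, axis=1),
        "vflip": np.flip(image, axis=0),
        "rot90": np.rot90(image),  # rotates in the (H, W) plane
        "brighter": np.clip(image.astype(np.int16) + 30, 0, 255).astype(np.uint8),
    }

img = np.zeros((32, 48, 3), dtype=np.uint8)  # dummy fabric image
variants = augment(img)
```

Note that bounding-box and polygon annotations must be transformed consistently with each geometric augmentation, which is why the dataset ships format-specific annotation files for the augmented images.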

The figure below shows different spot samples with annotated bounding boxes and polygon annotations in red, where (a) ink stain, (b) paint spot, (c) marker spot, (d) makeup stain, (e) rust stain, (f) glue spot, (g) detergent stain, (h) oil stain, (i) coffee stain, (j) food spot, (k) blood spot, and (l) sweat stain.

A link to download the dataset will be shared soon.