Masudur Rahman


PostDoc @PurdueEngineers (IE), Ph.D. in CS @Purdue

Email: rahman64@purdue.edu

My research focuses on developing principled and adaptable intelligence for autonomous systems operating in complex, high-stakes environments. I investigate generalization and sample efficiency in decision-making under uncertainty, with an emphasis on algorithms that enable agents to reason, adapt, and act in dynamic settings. I design novel reinforcement learning algorithms and advance the reasoning capabilities of vision-language and large language models (VLMs and LLMs), with a focus on grounded, interactive environments. These contributions enable high-impact applications, including burn diagnosis through medical imaging and medical and emergency robotics, where systems must perceive affordances and improvise actions in unstructured, rapidly evolving conditions.

I am currently a Postdoctoral Research Assistant in the Edwardson School of Industrial Engineering at Purdue University, working with Dr. Juan P. Wachs. I completed my Ph.D. in Computer Science at Purdue University in 2024 under the supervision of Dr. Yexiang Xue. I completed my M.S. in Computer Science at the University of Virginia in 2018. Before that, I worked as a Lecturer at BRAC University from 2013 to 2015, after earning my B.Sc. in Computer Science and Engineering from BUET in 2013.

KEY RESEARCH AREAS


Learning, Reasoning & Decision-Making

Modern decision-making agents are remarkably powerful in simulation, yet struggle to adapt, generalize, or explain their behavior in complex real-world tasks. Deep reinforcement learning (RL) methods often overfit narrow tasks and lack robustness under distributional shift, while multimodal foundation models (e.g., VLMs) offer rich priors but operate largely without causal structure or grounded planning. This disconnect limits the deployment of intelligent systems in dynamic environments where adaptability and transparency are essential. My research addresses this gap by integrating sample-efficient reinforcement learning with structured reasoning, combining symbolic logic, procedural planning, and vision-language representations to build agents that generalize, improvise, and justify their actions beyond static benchmarks.

Embodied Autonomy

Despite advances in robotic control and learning, true autonomy in unstructured and high-stakes environments remains elusive. Robots deployed in the real world must contend with noisy sensors, changing dynamics, partial observability, and limited human oversight, all under strict constraints of time, bandwidth, and safety. These challenges are amplified in critical domains such as remote surgery or field robotics, where teleoperation is unreliable and scripted behavior fails. My research develops embodied systems that integrate predictive shared autonomy, affordance-aware planning, and real-time improvisation to enable robust task execution in settings marked by uncertainty, delay, and sparse supervision.

AI in Healthcare

While AI has demonstrated early success in medical imaging and language-based triage, current models often lack the transparency, adaptability, and real-world integration needed for deployment in high-stakes clinical care (e.g., burn diagnosis). Most models rely on large, clean datasets and provide limited explanation, which is a mismatch for domains such as trauma response, rural surgery, or battlefield medicine, where data is noisy and mistakes carry real cost. My research bridges this gap by building multimodal, interpretable decision-support systems, drawing on medical imaging such as ultrasound and Doppler, grounded in clinical reasoning, procedural knowledge, and symbolic logic. These systems operate in austere settings and outperform traditional models in surgical intervention planning, diagnosis, and adaptive triage.

news

Jun 24, 2025 A paper has been accepted to JMIR Medical Informatics. Paper title: BURN-AID: AI-Driven Integrated System for Burn Depth Prediction with Electronic Medical Records.
Jun 23, 2025 Awarded a grant through the Health of the Forces Pilot Funding Program (Purdue University) for our project “Accelerated Expertise: AI-Powered Diagnostic Pathways for Rapid Clinical Mastery of Burns,” in collaboration with Dr. Juan Wachs (Industrial Engineering) and Dr. Aniket Bera (Computer Science). This project aims to enhance acute care and long-term outcomes for burn-injured service members using AI-powered diagnostic tools. Excited to continue working at the intersection of AI and burn care!
May 25, 2025 An abstract paper has been accepted to the Plastic Surgery The Meeting (PSTM) 2025.
The work will be presented at PSTM 2025, the premier annual conference organized by the American Society of Plastic Surgeons (ASPS), in New Orleans, Louisiana, in October 2025.
May 20, 2025 An abstract paper has been accepted to the Military Health System Research Symposium (MHSRS) 2025.
Paper title: A Chain-of-Thought AI Reasoning Framework for Burn Diagnosis.
The work will be presented at MHSRS — the leading forum for military health research — at the Gaylord Palms Resort and Convention Center in Kissimmee, FL, in August 2025.
May 12, 2025 A paper has been accepted to the Annual Conference on Medical Image Understanding and Analysis (MIUA) 2025.
Paper title: Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with a Vision-Language Model.
Attending MIUA in July in Leeds, UK.
Mar 17, 2025 A paper has been accepted to the journal Military Medicine. Paper title: A Framework for Advancing Burn Assessment with Artificial Intelligence.
Nov 27, 2024 Completed the NSF I-Corps Hub: Great Lakes Region program. Digital Badge.
Nov 04, 2024 Started my postdoc at Purdue Engineering (IE).
Sep 24, 2024 Defended my Ph.D. Thesis.
Jul 31, 2024 Presenting a paper on AI for burn care at the Military Health System Research Symposium (MHSRS) 2024 in August in Kissimmee, FL.

selected publications

  1. BURN-AID: AI-Driven Integrated System for Burn Depth Prediction with Electronic Medical Records
    Md Masudur Rahman, Mohamed El Masry, Surya Gnyawali, Yexiang Xue, Gayle Gordillo, and Juan P. Wachs
    JMIR Medical Informatics, 2025
  2. MIUA
    Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with a Vision-Language Model
    Md Masudur Rahman, Mohamed El Masry, Gayle Gordillo, and Juan P. Wachs
    In Annual Conference on Medical Image Understanding and Analysis (MIUA), 2025
  3. MHSRS-Abstract
    A Chain-of-Thought AI Reasoning Framework for Burn Diagnosis
    Md Masudur Rahman, Mohamed El Masry, Gayle Gordillo, and Juan P. Wachs
    In Military Health System Research Symposium (MHSRS), 2025
  4. NAACL
    Natural Language-based State Representation in Deep Reinforcement Learning
    Md Masudur Rahman, and Yexiang Xue
    In Findings of the Association for Computational Linguistics: NAACL, 2024. Also in the NeurIPS 2023 Foundation Models for Decision Making (FMDM) Workshop.
  5. ECML-PKDD
    Bootstrap State Representation using Style Transfer for Better Generalization in Deep Reinforcement Learning
    Md Masudur Rahman, and Yexiang Xue
    In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2022
  6. ICRA
    DESERTS: Delay-Tolerant Semi-Autonomous Robot Teleoperation for Surgery
    Glebys Gonzalez, Mridul Agarwal, Mythra V Balakuntala, Md Masudur Rahman, Upinder Kaur, Richard M Voyles, Vaneet Aggarwal, Yexiang Xue, and Juan Wachs
    In IEEE International Conference on Robotics and Automation (ICRA), 2021
  7. Biomedical Journal
    SARTRES: A semi-autonomous robot teleoperation environment for surgery
    Md Masudur Rahman*, Mythra V Balakuntala*, Glebys Gonzalez, Mridul Agarwal, Upinder Kaur, Vishnunandan LN Venkatesh, Natalia Sanchez-Tamayo, Yexiang Xue, Richard M Voyles, Vaneet Aggarwal, and Juan Wachs
    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2021
  8. IROS
    DESK: A Robotic Activity Dataset for Dexterous Surgical Skills Transfer to Medical Robots
    Naveen Madapana*, Md Masudur Rahman*, Natalia Sanchez-Tamayo*, Mythra V Balakuntala, Glebys Gonzalez, Jyothsna Padmakumar Bindu, LN Vishnunandan Venkatesh, Xingguang Zhang, Juan Barragan Noguera, Thomas Low, and others
    In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
  9. RO-MAN
    Transferring Dexterous Surgical Skill Knowledge between Robots for Semi-autonomous Teleoperation
    Md Masudur Rahman*, Natalia Sanchez-Tamayo*, Glebys Gonzalez, Mridul Agarwal, Vaneet Aggarwal, Richard M Voyles, Yexiang Xue, and Juan Wachs
    In 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2019