Masudur Rahman

Postdoc @PurdueEngineers (IE), Ph.D. in CS @ Purdue
Email: rahman64@purdue.edu
My research focuses on developing principled and adaptable intelligence for autonomous systems operating in complex, high-stakes environments. I investigate generalization and sample efficiency in decision-making under uncertainty, with an emphasis on algorithms that enable agents to reason, adapt, and act in dynamic settings. I design novel reinforcement learning algorithms and advance the reasoning capabilities of vision-language and large language models (VLMs and LLMs), with a focus on grounded, interactive environments. These contributions enable high-impact applications, including burn diagnosis through medical imaging, as well as medical and emergency robotics, where systems must perceive affordances and improvise actions in unstructured, rapidly evolving conditions.
I am currently a Postdoctoral Research Assistant in the Edwardson School of Industrial Engineering at Purdue University, working with Dr. Juan P. Wachs. I completed my Ph.D. in Computer Science at Purdue University in 2024 under the supervision of Dr. Yexiang Xue. I completed my M.S. in Computer Science at the University of Virginia in 2018. Before that, I worked as a Lecturer at BRAC University from 2013 to 2015, after earning my B.Sc. in Computer Science and Engineering from BUET in 2013.
KEY RESEARCH AREAS
Learning, Reasoning & Decision-Making
Modern decision-making agents are remarkably powerful in simulation, yet struggle to adapt, generalize, or explain their behavior in complex real-world tasks. Deep reinforcement learning (RL) methods often overfit to narrow tasks and lack robustness under distributional shift, while multimodal foundation models (e.g., VLMs) offer rich priors but operate largely without causal structure or grounded planning. This disconnect limits the deployment of intelligent systems in dynamic environments where adaptability and transparency are essential. My research addresses this gap by integrating sample-efficient reinforcement learning with structured reasoning, combining symbolic logic, procedural planning, and vision-language representations to build agents that generalize, improvise, and justify their actions beyond static benchmarks.
Embodied Autonomy
Despite advances in robotic control and learning, true autonomy in unstructured and high-stakes environments remains elusive. Robots deployed in the real world must contend with noisy sensors, changing dynamics, partial observability, and limited human oversight, all under strict constraints of time, bandwidth, and safety. These challenges are amplified in critical domains such as remote surgery or field robotics, where teleoperation is unreliable and scripted behavior fails. My research develops embodied systems that integrate predictive shared autonomy, affordance-aware planning, and real-time improvisation to enable robust task execution in settings marked by uncertainty, delay, and sparse supervision.
AI in Healthcare
While AI has demonstrated early success in medical imaging and language-based triage, current models often lack the transparency, adaptability, and real-world integration needed for deployment in high-stakes clinical care (e.g., burn diagnosis). Most models rely on large, clean datasets and provide limited explanation, which is a mismatch for domains such as trauma response, rural surgery, or battlefield medicine, where data is noisy and mistakes carry real cost. My research bridges this gap by building interpretable, multimodal decision-support systems (e.g., incorporating medical imaging such as ultrasound and Doppler) grounded in clinical reasoning, procedural knowledge, and symbolic logic. These systems operate in austere settings and outperform traditional models in surgical intervention planning, diagnosis, and adaptive triage.
news
Jun 24, 2025 | A paper was accepted to JMIR Medical Informatics, 2025. Paper title: BURN-AID: AI-Driven Integrated System for Burn Depth Prediction with Electronic Medical Records.
Jun 23, 2025 | Awarded a grant through the Health of the Forces Pilot Funding Program (Purdue University) for our project “Accelerated Expertise: AI-Powered Diagnostic Pathways for Rapid Clinical Mastery of Burns,” in collaboration with Dr. Juan Wachs (Industrial Engineering) and Dr. Aniket Bera (Computer Science). This project aims to enhance acute care and long-term outcomes for burn-injured service members using AI-powered diagnostic tools. Excited to continue working at the intersection of AI and burn care!
May 25, 2025 | An abstract has been accepted to Plastic Surgery The Meeting (PSTM) 2025, the premier annual conference organized by the American Society of Plastic Surgeons (ASPS). The work will be presented at PSTM 2025 in New Orleans, Louisiana, in October 2025.
May 20, 2025 | An abstract has been accepted to the Military Health System Research Symposium (MHSRS) 2025. Paper title: A Chain-of-Thought AI Reasoning Framework for Burn Diagnosis. The work will be presented at MHSRS, the leading forum for military health research, at the Gaylord Palms Resort and Convention Center in Kissimmee, FL, in August 2025.
May 12, 2025 | A paper has been accepted to the Annual Conference on Medical Image Understanding and Analysis (MIUA) 2025. Paper title: Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with a Vision-Language Model. Attending MIUA in July in Leeds, UK.
Mar 17, 2025 | A paper was accepted to the journal Military Medicine, 2025. Paper title: A Framework for Advancing Burn Assessment with Artificial Intelligence.
Nov 27, 2024 | Completed the NSF I-Corps Hub: Great Lakes Region program and earned a digital badge.
Nov 04, 2024 | Started my postdoc at Purdue Engineering (IE).
Sep 24, 2024 | Defended my Ph.D. thesis.
Jul 31, 2024 | Presenting a paper on AI for burn care at the Military Health System Research Symposium (MHSRS) 2024 in Kissimmee, FL, in August.
selected publications
- BURN-AID: AI-Driven Integrated System for Burn Depth Prediction with Electronic Medical Records. JMIR Medical Informatics, 2025.
- Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with a Vision-Language Model. In Medical Image Understanding and Analysis (MIUA), 2025.
- A Chain-of-Thought AI Reasoning Framework for Burn Diagnosis. Abstract, Military Health System Research Symposium (MHSRS), 2025.