Masudur Rahman

Postdoc @PurdueEngineers (IE), Ph.D. in CS @Purdue
Email: rahman64@purdue.edu
My research focuses on developing principled and adaptable intelligence for autonomous systems operating in high-stakes environments, where errors carry significant cost and adaptability is essential. I aim to bridge the gap between powerful simulation-trained agents and the unpredictable demands of the real world by unifying reinforcement learning (RL) with vision-language models (VLMs) to enable systems that reason, adapt, and act under uncertainty. This work advances both generalization and sample efficiency in decision-making and integrates learning with structured reasoning to produce agents capable of grounded, interactive behaviors. These contributions drive real-world impact in domains such as burn diagnosis through medical imaging and emergency robotics, where systems must perceive affordances and improvise actions in unstructured, rapidly evolving conditions.
I am currently a Postdoctoral Research Assistant in the Edwardson School of Industrial Engineering at Purdue University, where I work with Dr. Juan P. Wachs. I am also a founding member of the Center for AI and Robotic Excellence in Medicine (CARE) at Purdue University. I received my Ph.D. in Computer Science from Purdue University in 2024 under the supervision of Dr. Yexiang Xue, and my M.S. in Computer Science from the University of Virginia in 2018. Before that, I worked as a Lecturer at BRAC University from 2013 to 2015, after earning my B.Sc. in Computer Science and Engineering from BUET in 2013.
RESEARCH HIGHLIGHTS
I work on developing intelligent systems that are both principled, meaning they are grounded in rigorous, interpretable methods, and adaptable, meaning they can generalize, reason, and improvise in dynamic, high-stakes environments. These environments are characterized by uncertainty, limited data, time-critical decision-making, and serious consequences for failure. Real-world situations such as treating trauma patients or deploying robots in disaster zones vividly illustrate these challenges, where objectives shift rapidly, information is incomplete, and decisions must be made under pressure. Ironically, these are the very contexts where we most need AI, yet current systems often fall short in delivering robust and reliable performance. Overcoming these limitations requires a new generation of intelligent systems that combine real-time decision-making with contextual understanding and principled reasoning.
To meet these challenges, I design methods that combine reinforcement learning with foundation models, such as vision-language models, to enable agents that learn efficiently, act robustly, and generalize meaningfully beyond their training distributions. My work advances algorithms that smooth the optimization landscape to improve stability and generalization, while incorporating symbolic abstractions, procedural planning, and logic-based verification to support decision-making in ambiguous, under-specified, or novel scenarios.
In high-stakes clinical settings (AI in Healthcare), my burn diagnosis system achieved 95% accuracy in surgical decision-making using real-patient data from a regional burn center, significantly outperforming traditional methods and clinician judgment, which typically achieve 70% accuracy. For remote robotic surgery, my teleoperation framework maintained effective control under network delays of up to five seconds, whereas conventional systems often fail beyond 300 milliseconds in emulated settings. These capabilities demonstrate strong potential for greater autonomy in low-bandwidth and resource-constrained environments.
My research is organized into three interconnected thrusts: (1) learning generalizable and sample-efficient policies for decision-making; (2) enabling agents to reason and improvise in uncertain and data-limited settings; and (3) deploying these capabilities in embodied systems and safety-critical domains such as autonomous surgery, trauma response, and remote diagnostics. Across these directions, I aim to build AI systems that are both technically robust and structurally reliable, capable of performing with resilience, transparency, and trust in the unpredictable complexity of the real world.
news
Aug 18, 2025 | 📰 Featured in IE@Purdue News and Media — Best Paper Award on Burn Diagnosis, LinkedIn, X/Twitter
Aug 01, 2025 | ✨ Awarded NSF Robust Intelligence (RI) Grant — EAGER: Theoretical Foundations for Integrating Foundational Models into Reinforcement Learning (Award #2521982), $300,000, 2025–2027, PI: Juan P. Wachs, serving as Key Personnel. Excited to see support for this project I’ve been working on for quite a while. Grateful for the opportunity to take it further!

Jul 30, 2025 | 🔗 Affiliated with CARE — Founding member of the Center for AI and Robotic Excellence in Medicine (CARE) at Purdue University.

Jul 17, 2025 | 🏆 Awarded Best Paper Award (Full Paper Poster Category) at MIUA 2025 for the paper titled: Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with Vision-Language Model. 🏆 Received Best Poster Presentation Award at MIUA 2025.

Jul 15, 2025 | Attending MIUA 2025 at the University of Leeds, UK.

Jun 24, 2025 | 📑 A paper has been accepted to JMIR Medical Informatics 2025. Paper title: AI-Driven Integrated System for Burn Depth Prediction With Electronic Medical Records: Algorithm Development and Validation.

Jun 23, 2025 | ✨ Awarded a grant through the Health of the Forces Pilot Funding Program (Purdue University) for our project “Accelerated Expertise: AI-Powered Diagnostic Pathways for Rapid Clinical Mastery of Burns,” in collaboration with Dr. Juan Wachs (Industrial Engineering) and Dr. Aniket Bera (Computer Science). This project aims to enhance acute care and long-term outcomes for burn-injured service members using AI-powered diagnostic tools. Excited to continue working at the intersection of AI and burn care!

May 25, 2025 | 📑 An abstract has been accepted to Plastic Surgery The Meeting (PSTM) 2025. The work will be presented at PSTM 2025 — the premier annual conference organized by the American Society of Plastic Surgeons (ASPS) — in New Orleans, Louisiana, in October 2025.

May 20, 2025 | 📑 An abstract has been accepted to the Military Health System Research Symposium (MHSRS) 2025. Paper title: A Chain-of-Thought AI Reasoning Framework for Burn Diagnosis. The work will be presented at MHSRS — the leading forum for military health research — at the Gaylord Palms Resort and Convention Center in Kissimmee, FL, in August 2025.

May 12, 2025 | 📑 A paper has been accepted to the Annual Conference on Medical Image Understanding and Analysis (MIUA) 2025. Paper title: Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with a Vision-Language Model. Attending MIUA in July in Leeds, UK.
selected publications
- AI-Driven Integrated System for Burn Depth Prediction With Electronic Medical Records: Algorithm Development and Validation. JMIR Medical Informatics, 2025
- Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with Vision-Language Model. In Medical Image Understanding and Analysis (MIUA), 2025
- A Chain-of-Thought AI Reasoning Framework for Burn Diagnosis. In Military Health System Research Symposium (MHSRS), 2025 (abstract)