Masudur Rahman


Postdoc @PurdueEngineers (IE), Ph.D. in CS @Purdue

Email: rahman64@purdue.edu

My research focuses on developing principled and adaptable intelligence for autonomous systems operating in high-stakes environments where errors carry significant cost and adaptability is essential. I aim to bridge the gap between powerful simulation-trained agents and the unpredictable demands of the real world by unifying reinforcement learning (RL) with vision-language models (VLMs) to enable systems that reason, adapt, and act under uncertainty. This work advances both generalization and sample efficiency in decision-making and integrates learning with structured reasoning to produce agents capable of grounded, interactive behaviors. These contributions drive real-world impact in domains such as burn diagnosis through medical imaging and emergency robotics, where systems must perceive affordances and improvise actions in unstructured, rapidly evolving conditions.

I am currently a Postdoctoral Research Assistant in the Edwardson School of Industrial Engineering at Purdue University, where I work with Dr. Juan P. Wachs. I am also a founding member of the Center for AI and Robotic Excellence in Medicine (CARE) at Purdue University. I completed my Ph.D. in Computer Science from Purdue University in 2024 under the supervision of Dr. Yexiang Xue. I completed my M.S. in Computer Science at the University of Virginia in 2018. Before that, I worked as a Lecturer at BRAC University from 2013 to 2015, after earning my B.Sc. in Computer Science and Engineering from BUET in 2013.

RESEARCH HIGHLIGHTS


I work on developing intelligent systems that are both principled, meaning they are grounded in rigorous, interpretable methods, and adaptable, meaning they can generalize, reason, and improvise in dynamic, high-stakes environments. These environments are characterized by uncertainty, limited data, time-critical decision-making, and serious consequences for failure. Real-world situations such as treating trauma patients or deploying robots in disaster zones vividly illustrate these challenges, where objectives shift rapidly, information is incomplete, and decisions must be made under pressure. Ironically, these are the very contexts where we most need AI, yet current systems often fall short in delivering robust and reliable performance. Overcoming these limitations requires a new generation of intelligent systems that combine real-time decision-making with contextual understanding and principled reasoning.

To meet these challenges, I design methods that combine reinforcement learning with foundation models, such as vision-language models, to enable agents that learn efficiently, act robustly, and generalize meaningfully beyond their training distributions. My work advances algorithms that smooth the optimization landscape to improve stability and generalization, while incorporating symbolic abstractions, procedural planning, and logic-based verification to support decision-making in ambiguous, under-specified, or novel scenarios.

In high-stakes clinical settings (AI in Healthcare), my burn diagnosis system achieved 95% accuracy in surgical decision-making using real-patient data from a regional burn center, significantly outperforming traditional methods and clinician judgment, which typically achieve 70% accuracy. For remote robotic surgery, my teleoperation framework maintained effective control under emulated network delays of up to five seconds, whereas conventional systems often fail beyond 300 milliseconds. These capabilities demonstrate strong potential for greater autonomy in low-bandwidth and resource-constrained environments.

My research is organized into three interconnected thrusts: (1) learning generalizable and sample-efficient policies for decision-making; (2) enabling agents to reason and improvise in uncertain and data-limited settings; and (3) deploying these capabilities in embodied systems and safety-critical domains such as autonomous surgery, trauma response, and remote diagnostics. Across these directions, I aim to build AI systems that are both technically robust and structurally reliable, capable of performing with resilience, transparency, and trust in the unpredictable complexity of the real world.

news

Aug 18, 2025 📰 Featured in IE@Purdue News and Media — Best Paper Award on Burn Diagnosis, LinkedIn, X/Twitter
Aug 01, 2025 ✨ Awarded NSF Robust Intelligence (RI) Grant — EAGER: Theoretical Foundations for Integrating Foundational Models into Reinforcement Learning (Award #2521982), $300,000, 2025–2027, PI: Juan P. Wachs, serving as Key Personnel. Excited to see support for this project I’ve been working on for quite a while. Grateful for the opportunity to take it further!
Jul 30, 2025 🔗 Affiliated with CARE — Founding member of the Center for AI and Robotic Excellence in Medicine (CARE) at Purdue University.
Jul 17, 2025 🏆 Awarded Best Paper Award (Full Paper Poster Category) at MIUA 2025 for the paper titled: Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with Vision-Language Model. 🏆 Received Best Poster Presentation Award at MIUA 2025.
Jul 15, 2025 Attending MIUA 2025 at the University of Leeds, UK.
Jun 24, 2025 📑 A paper has been accepted to JMIR Medical Informatics 2025. Paper title: AI-Driven Integrated System for Burn Depth Prediction With Electronic Medical Records: Algorithm Development and Validation.
Jun 23, 2025 ✨ Awarded a grant through the Health of the Forces Pilot Funding Program (Purdue University) for our project “Accelerated Expertise: AI-Powered Diagnostic Pathways for Rapid Clinical Mastery of Burns,” in collaboration with Dr. Juan Wachs (Industrial Engineering) and Dr. Aniket Bera (Computer Science). This project aims to enhance acute care and long-term outcomes for burn-injured service members using AI-powered diagnostic tools. Excited to continue working at the intersection of AI and burn care!
May 25, 2025 📑 An abstract paper has been accepted to Plastic Surgery The Meeting (PSTM) 2025.
The work will be presented at PSTM 2025 — the premier annual conference organized by the American Society of Plastic Surgeons (ASPS) — in New Orleans, Louisiana, in October 2025.
May 20, 2025 📑 An abstract paper has been accepted to the Military Health System Research Symposium (MHSRS) 2025.
Paper title: A Chain-of-Thought AI Reasoning Framework for Burn Diagnosis.
The work will be presented at MHSRS — the leading forum for military health research — at the Gaylord Palms Resort and Convention Center in Kissimmee, FL, in August 2025.
May 12, 2025 📑 A paper has been accepted to the Annual Conference on Medical Image Understanding and Analysis (MIUA) 2025.
Paper title: Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with a Vision-Language Model.
Attending MIUA in July in Leeds, UK.

selected publications

  1. AI-Driven Integrated System for Burn Depth Prediction With Electronic Medical Records: Algorithm Development and Validation
    Md Masudur Rahman, Mohamed El Masry, Surya Gnyawali, Yexiang Xue, Gayle Gordillo, and Juan P. Wachs
    JMIR Medical Informatics, 2025
  2. Knowledge-Driven Hypothesis Generation for Burn Diagnosis from Ultrasound with Vision-Language Model
    Md Masudur Rahman, Mohamed El Masry, Gayle Gordillo, and Juan P. Wachs
    In Annual Conference on Medical Image Understanding and Analysis (MIUA), 2025
  3. A Chain-of-Thought AI Reasoning Framework for Burn Diagnosis
    Md Masudur Rahman, Mohamed El Masry, Gayle Gordillo, and Juan P. Wachs
    In Military Health System Research Symposium (MHSRS), 2025
  4. Natural Language-based State Representation in Deep Reinforcement Learning
    Md Masudur Rahman, and Yexiang Xue
    In Findings of the Association for Computational Linguistics: NAACL, 2024
    Also appeared in the NeurIPS 2023 Foundation Models for Decision Making (FMDM) Workshop
  5. Bootstrap State Representation using Style Transfer for Better Generalization in Deep Reinforcement Learning
    Md Masudur Rahman, and Yexiang Xue
    In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2022
  6. DESERTS: Delay-tolerant semi-autonomous robot teleoperation for surgery
    Glebys Gonzalez, Mridul Agarwal, Mythra V Balakuntala, Md Masudur Rahman, Upinder Kaur, Richard M Voyles, Vaneet Aggarwal, Yexiang Xue, and Juan Wachs
    In IEEE International Conference on Robotics and Automation (ICRA), 2021
  7. SARTRES: A semi-autonomous robot teleoperation environment for surgery
    Md Masudur Rahman*, Mythra V Balakuntala*, Glebys Gonzalez, Mridul Agarwal, Upinder Kaur, Vishnunandan LN Venkatesh, Natalia Sanchez-Tamayo, Yexiang Xue, Richard M Voyles, Vaneet Aggarwal, and Juan Wachs
    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2021
  8. DESK: A robotic activity dataset for dexterous surgical skills transfer to medical robots
    Naveen Madapana*, Md Masudur Rahman*, Natalia Sanchez-Tamayo*, Mythra V Balakuntala, Glebys Gonzalez, Jyothsna Padmakumar Bindu, LN Vishnunandan Venkatesh, Xingguang Zhang, Juan Barragan Noguera, Thomas Low, and others
    In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
  9. Transferring Dexterous Surgical Skill Knowledge between Robots for Semi-autonomous Teleoperation
    Md Masudur Rahman*, Natalia Sanchez-Tamayo*, Glebys Gonzalez, Mridul Agarwal, Vaneet Aggarwal, Richard M Voyles, Yexiang Xue, and Juan Wachs
    In 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2019