Masudur Rahman


Ph.D. in CS @ Purdue

Email: rahman64@purdue.edu

I am a machine learning researcher committed to developing fundamental learning algorithms for surgical care, with a focus on robotic surgery. I develop algorithms to automate surgical procedures. This work requires solving fundamental challenges that are absent in traditional machine learning, including data distribution shifts, limited data availability, and the lack of explainability in decision-making.

I completed my Ph.D. in Computer Science at Purdue University in 2024 under the supervision of Professor Yexiang Xue. I completed my M.S. in Computer Science at the University of Virginia in 2018. Before that, I worked as a Lecturer at BRAC University from 2013 to 2015, after earning my B.Sc. in Computer Science and Engineering from BUET in 2013.

KEY RESEARCH AREAS
AI in Burn Care: I have developed an AI system based on a vision-language model (GPT-4 Vision) for automated surgical decision-making in burn patients. The system supports preoperative decisions, helping ensure timely and accurate surgical intervention, and has demonstrated performance surpassing that of expert surgeons.
Teleoperated Robotic Surgery: My work includes developing semi-autonomous telesurgery systems that function effectively despite long communication delays (5 s, compared to 300 ms for direct teleoperation) and unreliable connections. These systems make remote surgical operations feasible, bringing critical surgical care to remote, austere, and underserved areas.
Deep Reinforcement Learning: I develop reinforcement learning (RL) algorithms tailored to the needs of surgical decision-making and robotic surgery. My methods include enhancing generalization through style transfer, increasing sample efficiency via novel policy gradient techniques, and using natural language for interpretable policy training. These approaches address challenges such as distribution shift and primacy bias, demonstrating superior generalization and performance across diverse testing environments.
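The style-transfer idea above can be sketched in a deliberately simplified form. This is not the published method, which uses learned style transfer rather than random jitter; `style_perturb` and its `strength` parameter are illustrative names. The intuition: perturbing the low-level visual "style" of image observations while leaving task-relevant content intact pushes the policy toward style-invariant features.

```python
import numpy as np

def style_perturb(obs, rng, strength=0.2):
    """Randomly shift per-channel color statistics of an image observation.

    A crude stand-in for style transfer: applies a random per-channel
    gain and offset, changing visual "style" without altering scene
    content, so training on perturbed copies encourages the policy to
    learn style-invariant representations.
    """
    # obs: float array with values in [0, 1], shape (H, W, C)
    scale = 1.0 + rng.uniform(-strength, strength, size=obs.shape[-1])
    shift = rng.uniform(-strength, strength, size=obs.shape[-1])
    return np.clip(obs * scale + shift, 0.0, 1.0)

# Toy usage: augment a single (H, W, C) observation before feeding it
# to the policy network during training.
rng = np.random.default_rng(0)
obs = rng.random((64, 64, 3))
aug = style_perturb(obs, rng)
```

In an RL training loop, each sampled observation would be augmented this way (with the reward and action unchanged) before the policy update.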

FUTURE GOALS
Expanding Access to Surgery: In the near term, my goal is to develop systems that allow general medical personnel, such as nurses and medics, to perform surgical procedures with AI and robotic assistance. This approach aims to reduce the burden on hospital specialists, increase access to surgical care, and lower healthcare costs.
Autonomous Surgical Systems: Looking further ahead, I aim to create fully autonomous surgical systems capable of managing entire surgical processes. A prototype in development, RoBurn (an Automated Robotic Burn Surgeon), is equipped with technologies such as ultrasound and digital cameras to perform comprehensive burn care, even in challenging environments.

COLLABORATION
I am fortunate to collaborate across a diverse range of disciplines, working with engineers, roboticists, surgeons, military personnel, and hospitals and medical facilities (IU, UPMC) through various research projects. I am the student team lead for the AMBUSH project, an interdisciplinary collaboration involving the Department of Computer Science and the School of Industrial Engineering at Purdue University, as well as the Department of Surgery at the University of Pittsburgh School of Medicine (UPMC). We are developing an AI system for burn care in active collaboration with Gayle Gordillo, Professor of Plastic Surgery and Director of Wound Care, and Mohamed Salah El Masry, Assistant Professor of Surgery. My Ph.D. research is supported by grants from the NSF, NIH, and the Department of Defense (DoD).

news

Jul 31, 2024 Presenting a paper on AI for burn care at the Military Health System Research Symposium (MHSRS) 2024 this August in Kissimmee, FL.
Jul 31, 2024 Lightning talk on AI in Burn Surgery at ADSA 2024 in October at the University of Michigan, Ann Arbor.
Jul 31, 2024 Tutorial session on RL Benchmarking at ADSA 2024 in October at the University of Michigan, Ann Arbor.
Jul 17, 2024 NAACL 2024: Organizer and Chair of the Birds of a Feather (BoF) session on Vision-Language Models in Medical Surgery.
May 21, 2024 Lightning talk on Vision-Language Models in Deep RL at MMLS 2024.

selected publications

  1. NAACL
    Natural Language-based State Representation in Deep Reinforcement Learning
    Md Masudur Rahman, and Yexiang Xue
    In Findings of the Association for Computational Linguistics: NAACL 2024 (also appeared in the NeurIPS 2023 Foundation Models for Decision Making (FMDM) Workshop), 2024
  2. ECML-PKDD
    Bootstrap State Representation using Style Transfer for Better Generalization in Deep Reinforcement Learning
    Md Masudur Rahman, and Yexiang Xue
    In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2022
  3. ICRA
    DESERTS: Delay-tolerant semi-autonomous robot teleoperation for surgery
    Glebys Gonzalez, Mridul Agarwal, Mythra V Balakuntala, Md Masudur Rahman, Upinder Kaur, Richard M Voyles, Vaneet Aggarwal, Yexiang Xue, and Juan Wachs
    In IEEE International Conference on Robotics and Automation (ICRA), 2021
  4. Biomedical Journal
    SARTRES: A semi-autonomous robot teleoperation environment for surgery
    Md Masudur Rahman*, Mythra V Balakuntala*, Glebys Gonzalez, Mridul Agarwal, Upinder Kaur, Vishnunandan LN Venkatesh, Natalia Sanchez-Tamayo, Yexiang Xue, Richard M Voyles, Vaneet Aggarwal, and Juan Wachs
    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2021
  5. IROS
    DESK: A robotic activity dataset for dexterous surgical skills transfer to medical robots
    Naveen Madapana*, Md Masudur Rahman*, Natalia Sanchez-Tamayo*, Mythra V Balakuntala, Glebys Gonzalez, Jyothsna Padmakumar Bindu, LN Vishnunandan Venkatesh, Xingguang Zhang, Juan Barragan Noguera, Thomas Low, and others
    In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
  6. RO-MAN
    Transferring Dexterous Surgical Skill Knowledge between Robots for Semi-autonomous Teleoperation
    Md Masudur Rahman*, Natalia Sanchez-Tamayo*, Glebys Gonzalez, Mridul Agarwal, Vaneet Aggarwal, Richard M Voyles, Yexiang Xue, and Juan Wachs
    In 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2019