Research
My research focuses on developing principled and adaptable intelligence for autonomous systems operating in high-stakes environments where errors carry significant cost and adaptability is essential. I aim to bridge the gap between powerful simulation-trained agents and the unpredictable demands of the real world by unifying reinforcement learning (RL) with vision-language models (VLMs) to enable systems that reason, adapt, and act under uncertainty. This work advances both generalization and sample efficiency in decision-making and integrates learning with structured reasoning to produce agents capable of grounded, interactive behavior. These contributions drive real-world impact in domains such as burn diagnosis through medical imaging and emergency robotics, where systems must perceive affordances and improvise actions in unstructured, rapidly evolving conditions.
My research is organized around three connected focus areas.
Learning, Reasoning & Decision-Making:
Modern decision-making agents are powerful in simulation yet struggle to adapt, generalize, or explain their behavior in complex tasks. These limitations prevent their safe and reliable use in high-stakes environments, where unexpected conditions can lead to costly or unsafe actions. My research addresses these challenges by integrating sample-efficient reinforcement learning with structured reasoning, combining symbolic logic, procedural planning, and vision-language representations to build agents that generalize, improvise, and justify their actions, moving beyond static benchmarks toward deployment in dynamic, real-world settings.
Embodied Autonomy:
Despite advances in robotic control and learning, true autonomy in unstructured, safety-critical environments remains challenging. Real-world robots face noisy sensors, changing dynamics, and limited oversight, often relying on teleoperation that breaks down under high latency or unexpected conditions. My research develops embodied systems that combine predictive shared autonomy, affordance-aware planning, and real-time improvisation to enable robust task execution in demanding settings such as remote surgery and field robotics.
AI for Healthcare:
AI has shown promise in medical imaging and triage, yet most models lack the transparency, adaptability, and integration needed in high-stakes clinical settings. These gaps can lead to errors in trauma response, rural surgery, or battlefield medicine, where data are noisy and decisions are time-critical. My research develops multimodal, interpretable decision-support systems grounded in clinical reasoning, procedural knowledge, and symbolic logic. Designed for austere environments, these systems improve surgical planning, diagnosis, and adaptive triage by delivering accurate predictions with meaningful, verifiable justifications.
Publications:
My work is disseminated broadly across leading venues in machine learning, robotics, and medical AI. I have published at major ML conferences and workshops such as NAACL, ECML-PKDD, ICLR (RRL), and NeurIPS (FMDM); at top robotics venues including ICRA, IROS, and RO-MAN; and in biomedical outlets including JMIR Medical Informatics, MIUA, MICCAI (AE-CAI), and MHSRS, the leading forum for military health research. I also contribute actively to the community through service, regularly reviewing for NeurIPS, ICML, ICLR, AAAI, MICCAI, and the robotics journal RA-L.
Funding:
In addition to scholarly dissemination, my research has been supported through competitive external and internal funding that directly advances these directions. I serve as key personnel on multiple funded projects, including an NSF Robust Intelligence (RI) award (Award #2521982, $300K) and internal support from the Health of the Forces Program at Purdue University ($10K). I am also key personnel on NIH proposals currently under review, including an NIH R21 (explainable AI for ultrasound-based burn diagnosis) and an NIH R01 (burn conversion through medical imaging), which together provide a strong foundation for the continued growth of my research program.