Research

Research Interests:

  • Explainable AI
    • Self-Explainable Architectures for CV and NLP tasks
    • Explaining Vision Language Models and their decisions
  • Understanding the limitations of Vision Language Models
    • Visual perception
    • Difference/Similarity comprehension

During my Master’s, I worked on Reinforcement Learning (RL) and Robotics, with the goal of enabling robotic platforms to acquire complex skills via RL.

Research To Date:

As part of my Ph.D. research, I am working on self-explainable and editable models for downstream CV/NLP tasks. More specifically, my research centers on the interpretability of minimal transformer layers (attention bottlenecks). I aim to use these bottlenecks so that users can edit, debug, and intervene in a model’s decision-making.
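To illustrate the general idea (not my specific architecture), here is a minimal sketch of an attention bottleneck: a single attention head over a small set of hypothetical "concept" embeddings. Because the output is a convex combination of concepts, the attention weights expose which concepts contributed, and a user can mask a concept to intervene in the decision. All names and shapes below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bottleneck_attention(query, concepts, mask=None):
    """Single-head attention over a small set of concept vectors.

    The attention weights form an interpretable bottleneck: each weight
    shows how much a concept contributed to the output, and a boolean
    mask lets a user zero out a concept (a simple edit/intervention).
    """
    # Scaled dot-product scores between the query and each concept.
    scores = concepts @ query / np.sqrt(query.shape[0])
    if mask is not None:
        # Masked-out concepts receive a large negative score,
        # so their attention weight collapses to ~0.
        scores = np.where(mask, scores, -1e9)
    weights = softmax(scores)
    return weights @ concepts, weights

rng = np.random.default_rng(0)
concepts = rng.normal(size=(4, 8))  # 4 hypothetical concept embeddings
query = rng.normal(size=8)

# Original decision and its concept attributions.
out, w = bottleneck_attention(query, concepts)

# Intervention: forbid concept 0 and recompute the decision.
keep = np.array([False, True, True, True])
out_edit, w_edit = bottleneck_attention(query, concepts, mask=keep)
```

The weights `w` sum to one and serve as a per-concept attribution; after the intervention, `w_edit[0]` is effectively zero and the remaining mass is redistributed over the allowed concepts.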