I’m an M.S. student in Computer Science at UMass Amherst with interests in machine learning research and applied AI systems. My recent work spans both research and industry, including a Research Scientist Internship at IBM, an Applied Scientist Internship at Amazon, and graduate research at Adobe. Across these experiences, I’ve worked on machine learning pipelines, model development, and the practical challenges of bringing research ideas into real-world systems.
I’m currently interested in multimodal intelligence, embodied AI, agentic systems, and robust ML. More generally, I enjoy working on problems that connect perception, reasoning, and deployment, especially when they require balancing strong research foundations with practical system design. I’m motivated by building AI that is reliable, scalable, and useful outside of idealized settings, and I hope to contribute to systems that meaningfully bridge research and real-world impact.
Symmetry is a pivotal concept in machine learning: it means that a model or algorithm remains invariant, or behaves predictably, under specific transformations of its input, such as rotation, reflection, or scaling. Extending this principle, Lie group theory, which deals with continuous symmetry, finds compelling applications in machine learning. Lie groups are mathematical structures that are simultaneously groups and smooth manifolds, combining algebraic and geometric properties. This blend makes them well suited to representing and processing smooth transformations of data.
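To make this concrete, here is a minimal NumPy/SciPy sketch (the generator `G`, the `rotation` helper, and the `radial_feature` function are illustrative names of my own choosing) showing how the exponential map turns a Lie algebra element into a smooth rotation in SO(2), and checking that a norm-based feature is invariant under it:

```python
import numpy as np
from scipy.linalg import expm

# Generator of the Lie algebra so(2): an infinitesimal rotation.
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def rotation(theta):
    # Exponential map: the algebra element theta * G maps to a
    # group element of SO(2), i.e. a 2x2 rotation matrix.
    return expm(theta * G)

def radial_feature(x):
    # A rotation-invariant feature: it depends only on ||x||.
    return np.linalg.norm(x)

x = np.array([1.0, 2.0])
for theta in np.linspace(0.0, 2 * np.pi, num=8):
    R = rotation(theta)
    # Invariance check: rotating the input leaves the feature unchanged.
    assert np.isclose(radial_feature(R @ x), radial_feature(x))
print("radial_feature is invariant under SO(2) rotations")
```

Equivariant architectures take the same idea further by building the symmetry into the model itself rather than verifying it after the fact.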
Developing machine learning models is usually an iterative process. You start with an initial design, then reconfigure it until you get a model that can be trained efficiently in terms of time and compute resources. As you may already know, the settings you adjust are called hyperparameters: the variables that govern the training process and the topology of an ML model. They remain constant throughout training and directly impact the performance of your ML program. The process of finding the optimal set of hyperparameters is called hyperparameter tuning or hypertuning, and it is an essential part of a machine learning pipeline. Without it, you might end up with a model that has unnecessary parameters and takes too long to train.
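As a small, hypothetical illustration (the dataset, model, and grid are arbitrary choices of mine), here is a grid search with scikit-learn's GridSearchCV, one common way to hypertune: each candidate configuration stays fixed while the model is retrained and scored, and the best-performing setting wins.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Candidate hyperparameter settings; each stays constant for the
# duration of the training run it configures.
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.01]}

# The tuner retrains the model once per configuration (with 3-fold
# cross-validation) and keeps the best-scoring setting.
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```

Grid search is the simplest strategy; random search or Bayesian optimization tend to scale better as the number of hyperparameters grows.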
[Sep 2025] Joined Amazon as an Applied Scientist Intern, working on large-scale seller simulation using AI systems.
[May 2025] Joined IBM Research as a Research Scientist Intern, focusing on developing an efficient inference accelerator.
[Feb 2025] Started as a Graduate Student Researcher at Adobe, working on agentic RAG, iterative retrieval, and RL-based reasoning systems.
[Sep 2024] Began my M.S. in Computer Science at UMass Amherst.
[Jan 2024] "Understanding Choice Independence and Error Types in Human-AI Collaboration" accepted at ACM CHI'24.
[Aug 2023] Started as a Research Affiliate at the Accessible and Accelerated Robotics Lab (A²R Lab).
[Apr 2023] "PreAxC: Error Distribution Prediction for Approximate Computing Quality Control" accepted to ISQED 2023.
[Jan 2023] Started as an SDE Intern on the PV Payments team at Amazon Prime Video.
[Jun 2022] Poster presentation as a Junior McNair Fellow; covered by the University of South Carolina.
[Jun 2022] Visiting Research Specialist at the University of South Carolina, USA.
[Jan 2022] Served as a reviewer for the Design Automation Conference (DAC).