I am a graduate student at UMass, advised by Prof. Brian Plancher, working on analytical models for rigid-body dynamics. I am extending work on Inverse Dynamics to compute the second-order partial derivatives for Forward/Backward Dynamics, which are commonly required in optimization and motion planning for legged robots. My research interests lie at the intersection of autonomous systems and machine learning.
In January 2023, I joined the Amazon Prime Video team as an intern, where I worked on the infrastructure migration of a Tier-1 service using AWS Fargate and CDK and gained solid hands-on experience with cloud computing. I spent the summer of 2022 as a Research Specialist at the AISys Lab, collaborating closely with Prof. Pooyan Jamshidi (UofSC), Prof. Devashree Tripathy (Postdoc, Harvard), and Dr. Behzad Boroujerdian (Facebook Research). Before that, I was a research intern at the Sharc Lab (Georgia Tech) under Prof. Callie (Cong) Hao.
Symmetry is a pivotal concept in machine learning: a model or algorithm is symmetric if it remains robust or invariant under specific transformations of its input, such as rotation, reflection, or scaling. Extending this principle, Lie group theory, which deals with continuous symmetries, finds compelling applications in machine learning. Lie groups are mathematical structures whose elements carry both algebraic and geometric structure, a blend that makes them well suited to representing and processing smooth transformations of data.
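As a toy sketch of what invariance under a Lie group means (the function and helper names below are illustrative, not from any particular library): the Euclidean norm is unchanged by 2-D rotations, which form the Lie group SO(2).

```python
import numpy as np

def f(x):
    # A rotation-invariant feature: the Euclidean norm of the input vector.
    return np.linalg.norm(x)

def rotate(x, theta):
    # An element of the Lie group SO(2): rotate a 2-D vector by angle theta.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ x

x = np.array([1.0, 2.0])
for theta in np.linspace(0.0, 2 * np.pi, 8):
    # Invariance: applying any group element before f leaves the output unchanged.
    assert np.isclose(f(rotate(x, theta)), f(x))
print("f(x) is invariant under SO(2) rotations:", f(x))
```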
Developing machine learning models is usually an iterative process: you start with an initial design and then reconfigure it until you get a model that can be trained efficiently in terms of time and compute resources. The settings you adjust are called hyperparameters: variables that govern the training process and the topology of an ML model. They remain constant during training and directly impact the performance of your ML program. The process of finding the optimal set of hyperparameters is called hyperparameter tuning, or hypertuning, and it is an essential part of a machine learning pipeline. Without it, you might end up with a model that has unnecessary parameters and takes too long to train. A minimal sketch of such a search follows below.
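As a minimal sketch of hypertuning (assuming scikit-learn and a toy digits dataset, not any specific pipeline of mine), a grid search tries hyperparameter combinations, keeps each one fixed for a full training run, and scores it with cross-validation:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True)

# Hyperparameters stay constant during each training run; the search
# evaluates every combination in this grid with 3-fold cross-validation.
param_grid = {
    "n_estimators": [50, 100, 200],  # size of the ensemble
    "max_depth": [5, 10, None],      # capacity of each tree
}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)

print("Best hyperparameters:", search.best_params_)
print("Best cross-validation accuracy:", round(search.best_score_, 3))
```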
[Aug 2024] Started at UMass as a Graduate Student in the CS Department.
[Jan 2024] "Understanding Choice Independence and Error Types in Human-AI Collaboration" accepted at ACM CHI'24.
[Aug 2023] Started as a Research Affiliate at the Accessible and Accelerated Robotics Lab (A²R Lab).
[Apr 2023] "PreAxC: Error Distribution Prediction for Approximate Computing Quality Control" accepted to ISQED 2023.
[Jan 2023] Started as an SDE Intern in the PV Payments team at Amazon Prime Video.
[Jun 2022] Featured in University of South Carolina coverage: poster presentation as a Junior McNair Fellow.
[Jun 2022] Visiting Research Specialist at the University of South Carolina, USA.
[Jan 2022] Served as a Reviewer for the Design Automation Conference (DAC).