Hello and welcome to my page! đź‘‹
I am a research technician and lab manager at the Relational Cognition Lab, led by Dr. Anna Leshinskaya at UC Irvine. Currently, my research focuses on probing language models from both cognitive and AI safety perspectives, particularly around concepts of morality.
I am interested in using language models as tools to study human language processing and cognition more broadly. In particular, I am drawn to questions like:
- How do cognitive science accounts of concepts (representational forms, relations, compositionality, etc.) inform our understanding of the same phenomena in AI models?
- How do models behave compared to people? How do they react under tension? Are they competent social reasoners, and how do we actually evaluate that?
- What are the "neural correlates" of AI models? Can we identify brain-like architectural components or neural circuits in models, or run lesion-style experiments on them?
Prior to this, I was a master's student in CS at Georgia Tech and a member of the Language, Intelligence, and Thought (LIT) Lab, led by Dr. Anya Ivanova. I was first introduced to NeuroAI at Georgia Tech by a few amazing Psychology faculty (Dr. S. Varma, Dr. R. Murty, and, of course, Dr. A. Ivanova).
I completed my undergraduate education at UIUC in the CS + Philosophy program, where I took special interest in symbolic logic and ethics.
News
- (4/2026) My second paper, "RBCorr: Response Bias Correction in Language Models" (with Dr. A. Ivanova), has been accepted to the ACL 2026 GEM Workshop! We developed our previous work into a more adaptable and robust method with much more comprehensive testing, metrics, and comparisons, and I'm excited to share it!
- (4/2025) My first paper (with Dr. A. Ivanova), "Estimating and Correcting Yes-No Bias in LMs", has been accepted to CogSci 2025! I'll be presenting it there, so come and chat!
