I am Rachit

I am Rachit, a PhD student at Harvard University. I am super fortunate to be advised by Prof. David Alvarez-Melis and Prof. Martin Wattenberg. Broadly, I am interested in making language models more useful and controllable. I am also interested in understanding and analyzing how these models behave.

Over the past few years, I took my first steps as a researcher, thanks to some wonderful people and collaborations. Most recently, I was a pre-doctoral researcher at Google DeepMind, working on modularizing LLMs with Partha and Prateek. Before that, I pursued my bachelor’s thesis research with Yonatan at the Technion in Israel, where I had a great time studying how intrinsic properties of a neural network are informative of its generalization behaviour. Earlier, I was a research intern at Adobe’s Media and Data Science Research Lab, where I worked on commonsense reasoning for large language models.

I was fortunate to collaborate with Danish for more than two years on evaluating explanation methods in NLP¹. I also had an amazing time working with Naomi studying mode connectivity in the loss surfaces of language models².

I also spent a couple of wonderful summers as a part of the Google Summer of Code program with the Cuneiform Digital Library Initiative (CDLI). Here, I was advised by Jacob and Niko.


  1. Started with a meek, awe-inspired email.

  2. Started with a message on MLC’s Discord channel.