I'm an Assistant Professor in Computer Science & Engineering at the University of Minnesota. I am also affiliated with the GroupLens Research Lab, a group of HCI faculty and students in the department.

My research areas are human-centered artificial intelligence, explainability and interpretability, and hybrid intelligence systems. I study these areas in two ways: (1) critically evaluating how well existing systems and tools meet their intended goals; and (2) designing and building new systems that leverage human cognitive, social, and organizational norms for human-machine collaboration. I apply these methods in a variety of domains, including exploratory data analysis, workplace wellbeing and productivity, and knowledge search and sensemaking.

I received my Ph.D. in Information and Computer Science & Engineering from the University of Michigan, where I was co-advised by Cliff Lampe and Eric Gilbert.

Latest News

April 2025: Paper on modeling XAI use based on people's personality, prior experience, and demographics accepted to FAccT. We hope this serves as a conversation starter about the measurement challenges related to user characteristics in XAI.

January 2025: Paper on a cross-level comparison of Generative AI use in design accepted to CHI. We present qualitative results comparing how students and professionals use generative AI, and outline a rift in these stakeholders' value systems driven by the difference in their practical expertise.

October 2024: Many thanks to Google for funding my research with an Academic Research Award.

September 2024: Welcoming my first PhD students to the lab: Anna Martin-Boyle, Malik Khadar, and Syeda Masooma Naqvi!

June 2024: Grateful to Microsoft and TikTok for their research funding gifts.

August 2023: Started my new position as a tenure-track faculty member at the University of Minnesota. Excited to teach a research seminar on Human-Centered AI this fall.

May 2023: Successfully defended my dissertation!

August 2022: FeedLens accepted to UIST. We present results from applying our polymorphic lenses technique to Semantic Scholar, improving engagement and exploration for literature search.

June 2022: Paper on Sensible AI accepted to FAccT. We propose an alternate framework for interpretability and explainability grounded in sensemaking theory from organizational studies.

February 2022: Paper comparing Automatic Emotion Recognition technology with self-reported affective profiles accepted to CHI.

Upcoming Travel

June 29 - July 4, 2025: Wadern, Germany (Dagstuhl Seminar)