I'm an Assistant Professor in Computer Science & Engineering at the University of Minnesota. I am also affiliated with the GroupLens Research Lab, a group of HCI faculty and students in the department.

My research examines appropriate reliance on AI for knowledge work through a socio-technical lens. I study technical aspects of appropriate reliance, such as explainable AI, model interpretability, trustworthiness, and human-in-the-loop architectures. At the same time, my work is grounded in the belief that for human-AI interaction to be appropriate, effective, and safe, technical development in AI must proceed in concert with an understanding of human-centric cognitive, social, and organizational phenomena. My lab conducts experiments to better understand how people use AI in practice, and designs and deploys new systems that support appropriate reliance.

I apply these research goals to knowledge work contexts including scientific search and sensemaking, exploratory data analysis, and workplace wellbeing — with a broader interest in supporting computational literacy and meaningful cognitive engagement among everyday AI users.

I received my Ph.D. in Information and Computer Science & Engineering from the University of Michigan, where I was co-advised by Cliff Lampe and Eric Gilbert.

Want to work with me? Please see details under the Advising tab.

Latest News

May 2026: Two papers accepted to FAccT 2026 and one coming up at CSCW 2026! Happy to share our research agenda on the role of context from online communities and proficiency differentials for AI systems (at FAccT), and our meta-level thinking about context and context collapse (at CSCW).

January 2026: Four papers accepted to CHI 2026! Excited to share this breadth of work on trustworthy LLMs for scholarly literature search, online meeting inclusivity, and algorithmic vs. design frictions on social media.

August 2025: Paper on a Socratic LLM system for improving multi-perspectivist data annotation accepted to CSCW. Very excited to discuss the potential for positive use-cases of LLMs like this one.

April 2025: Paper on modeling XAI use based on people's personality, prior experience, and demographics accepted to FAccT. We hope this is a conversation starter for the measurement challenges related to user characteristics for XAI.

January 2025: Paper on a cross-level comparison of Generative AI use in design accepted to CHI. We present qualitative results comparing how students and professionals use generative AI, and outline a rift in these stakeholders' value systems that is driven by their expertise differential in practice.

October 2024: Many thanks to Google for funding my research with an Academic Research Award.

September 2024: Welcoming my first PhD students to the lab: Anna Martin-Boyle and Malik Khadar!

June 2024: Grateful to Microsoft and TikTok for their research funding gifts.

August 2023: Started my new position as a tenure-track faculty member at the University of Minnesota. Excited to teach a research seminar on Human-Centered AI this Fall.

May 2023: Successfully defended my dissertation!

August 2022: FeedLens accepted to UIST. We present results from applying our polymorphic lenses technique to Semantic Scholar, improving engagement and exploration for literature search.

June 2022: Paper on Sensible AI accepted to FAccT. We propose an alternate framework for interpretability and explainability grounded in sensemaking theory from organizational studies.

February 2022: Paper comparing Automatic Emotion Recognition technology and self-reported affective profiles accepted to CHI.

Upcoming Travel

October 10-14, 2026: Salt Lake City (CSCW 2026)

June 14-18, 2026: Denver (HCIC 2026)

May 20-23, 2026: London (Google Trust & Safety Workshop)