
Jan Tolsdorf
Postdoctoral Associate, George Washington University
Jan Tolsdorf is a postdoctoral associate in the Usable Security and Privacy Lab at George Washington University. He studies human factors in trustworthy AI and human-centered approaches to information privacy and security. His research brings fresh insights to these areas through a multidisciplinary blend of qualitative and quantitative methods drawn from human-computer interaction, software and requirements engineering, social sciences, psychology, and economics.
Area of Expertise: Human-centered AI
-
Tolsdorf, J., Luo, A. F., Kodwani, M., Eum, J., Sharif, M., Mazurek, M. L., & Aviv, A. J. (2025, May). On a Scale of 1 to 5, How Reliable Are AI User Studies? A Call for Developing Validated, Meaningful Scales and Metrics about User Perceptions of AI Systems. 9th Workshop on Technology and Consumer Protection (ConPro’25).
Abstract: As public discourse around trust, safety, and bias in AI systems intensifies and AI systems increasingly affect consumers’ daily lives, there is a growing need for empirical research that measures the psychological constructs underlying the human-AI relationship. By reviewing the literature, we identified a gap in the availability of validated instruments; instead, researchers appear to adapt, reuse, or develop measures in an ad hoc manner without much systematic validation. Through piloting different instruments, we identified limitations with this approach as well as with existing validated instruments. To enable more robust and impactful research on user perceptions of AI systems, we advocate for a community-driven initiative to discuss, exchange, and develop validated, meaningful scales and metrics for human-centered AI research.
-
Tolsdorf, J., Luo, A. F., Sharif, M., Mazurek, M. L., & Aviv, A. J. Safety Perceptions of Generative AI Conversational Agents: Uncovering Perceptual Differences in Trust, Risk, and Fairness.
Abstract: Public and academic discourse on the safety of conversational agents that use generative AI, particularly chatbots, often centers on fairness, trust, and risk. However, there is limited insight into how users differentiate these perceptions and what factors shape them. To address this gap, we developed a survey instrument based on previous work. We conducted an exploratory study using factor analysis and latent class analysis on survey responses from n=123 participants in the U.S., offering an initial attempt at measuring and delineating the dimensionality of these safety perceptions. Latent class analysis revealed three distinct user groups with sometimes counterintuitive perception patterns: The Hesitant Skeptics, The Cautious Trusters, and The Confident Adopters. We find that greater usage frequency of AI chatbots is associated with higher trust and fairness perceptions but lower perceived risk. Demographic traits such as sexual orientation, income, and ethnicity also had strong, significant effects on group membership. Our findings highlight the need for more refined measurement approaches and a more nuanced perspective on users' AI safety perceptions of trust, fairness, and risk, particularly in capturing the kinds of experiences and interactions that lead users to develop these perceptions.
Full Paper
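For readers unfamiliar with the analysis style mentioned in the abstract above, the following is a minimal, hypothetical sketch of that kind of pipeline: exploratory factor analysis over Likert-scale survey items, followed by a mixture-model clustering of respondents. It uses scikit-learn's FactorAnalysis and a Gaussian mixture as a simple stand-in for latent class analysis; the data, item counts, and numbers of factors and classes are illustrative and are not taken from the paper.

```python
# Illustrative sketch only: exploratory factor analysis followed by a
# mixture-model clustering step. All data and parameters are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_respondents, n_items = 123, 12
# Hypothetical 5-point Likert responses to trust/risk/fairness items.
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

# Step 1: exploratory factor analysis to delineate perception dimensions.
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(responses)   # per-respondent factor scores
loadings = fa.components_.T            # item-by-factor loadings

# Step 2: cluster respondents on their factor scores. A Gaussian mixture
# is used here as a rough stand-in for latent class analysis.
gmm = GaussianMixture(n_components=3, random_state=0)
groups = gmm.fit_predict(scores)

print("Loadings shape:", loadings.shape)
print("Group sizes:", np.bincount(groups))
```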