
91. In ChatGPT We Trust: Foundations for an Empirically Determined Scale of Trust in Generative AI Chatbots
Description
To this researcher's knowledge, no statistically validated scale exists for measuring trust specifically in generative artificial intelligence (AI) chatbots such as Google Bard or ChatGPT. The objective of this study is to develop an empirically based scale for assessing trust in generative AI chatbots.

Broad, context-general scales such as the TPA and TXAI may not effectively capture the factors of trust tied to the human-like qualities of AI chatbots, which prior research on trust scale development has highlighted as important (Alsaid, Li, Chiou, & Lee, 2023; Lankton, McKnight, & Tripp, 2015).

This work will develop a statistically validated scale of trust in AI chatbots by adapting the five-stage workflow used to develop one of the most widely cited scales for assessing trust in automated, computerized systems: the Trust between People and Automation (TPA) scale (Jian, Bisantz, & Drury, 2000).
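The abstract does not specify which statistical procedures the five-stage workflow will involve. Purely as an illustrative sketch, one common step in empirically validating a new scale is checking the internal consistency of candidate survey items, for example with Cronbach's alpha; the item counts, sample size, and pilot data below are hypothetical and not taken from this study.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Internal-consistency estimate for a set of Likert-type items
    # (rows = respondents, columns = candidate scale items).
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical pilot sample: 200 respondents rating 12 candidate trust items on a 1-7 scale.
rng = np.random.default_rng(0)
pilot = pd.DataFrame(rng.integers(1, 8, size=(200, 12)),
                     columns=[f"item_{i + 1}" for i in range(12)])
print(f"Cronbach's alpha for the candidate item pool: {cronbach_alpha(pilot):.2f}")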
Event Type
Poster
Time
Thursday, September 12th, 5:30pm - 6:30pm MST
Location
McArthur Ballroom
Tracks
Aging
Augmented Cognition
Children's Issues
Communications
Cybersecurity
Education
Environmental Design
General Sessions
Human AI Robot Teaming (AI)
Macroergonomics
Occupational Ergonomics
Student Forum
Surface Transportation
Sustainability
System Development