
Understanding Users’ Dissatisfaction with ChatGPT Responses: Types, Resolving Tactics, and the Effect of Knowledge Level

Jueon Lee · Jaehyuk Park (KDI School)

Paper · Dataset


Large language models (LLMs) with chat-based capabilities, such as ChatGPT, are widely used in various workflows. However, due to a limited understanding of these large-scale models, users struggle to use this technology and experience different kinds of dissatisfaction. Researchers have introduced several methods, such as prompt engineering, to improve model responses, but these focus on crafting a single prompt, and little has been investigated about how users deal with the dissatisfaction they encounter during a conversation. Therefore, using ChatGPT as a case study, we examine end users’ dissatisfaction along with their strategies to address it. After organizing users’ dissatisfaction with LLMs into seven categories based on a literature review, we collected 511 instances of dissatisfactory ChatGPT responses from 107 users, together with their detailed recollections of the dissatisfying experiences, which we release as a publicly accessible dataset. Our analysis reveals that users most frequently experience dissatisfaction when ChatGPT fails to grasp their intentions, while they rate dissatisfaction related to accuracy as the most severe. We also identified four tactics users employ to address their dissatisfaction, and evaluated their effectiveness. We found that users often do not use any tactic to address their dissatisfaction, and even when tactics are used, 72% of dissatisfactions remain unresolved. Moreover, we found that users with low knowledge of LLMs tend to face more accuracy-related dissatisfaction, while they often put minimal effort into addressing it. Based on these findings, we propose design implications for minimizing user dissatisfaction and enhancing the usability of chat-based LLM services.


Systematic Literature Review: Categorizing User-side Dissatisfaction

Through a systematic literature review of papers on the limitations and challenges of LLMs and their applications, we categorized the various aspects of user dissatisfaction arising from LLM responses into 19 distinct codes and further organized them into seven overarching themes.

SLR: User-side Dissatisfaction Categories

Categorizing Tactics for Resolving Dissatisfaction

Through qualitative analysis, we categorized users’ tactics to understand how users address their dissatisfaction with ChatGPT’s responses through subsequent prompts. We identified 13 codes for users’ tactics and grouped them into four main themes.

User Tactic Category


💡RQ1. Analysis of how users experience dissatisfaction

We analyzed (1) the count and dissatisfaction score of each dissatisfaction category and (2) the categories’ co-occurrence patterns, as follows:

Dissatisfaction Analysis Table

Co-occurrence matrix
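The per-category counts and co-occurrence matrix described above can be computed from multi-labeled instances roughly as follows. This is a minimal sketch: the category codes come from our taxonomy, but the instance labels here are illustrative placeholders, not records from the released dataset.

```python
from collections import Counter
from itertools import combinations

# Illustrative examples only: each dissatisfying response is annotated
# with one or more dissatisfaction categories from the taxonomy.
instances = [
    {"D_intent", "D_acc"},
    {"D_intent"},
    {"D_acc"},
    {"D_intent", "D_format"},
]

# Per-category frequency across all instances.
counts = Counter(cat for labels in instances for cat in labels)

# Symmetric co-occurrence counts: how often two categories are
# assigned to the same instance (pairs sorted for a canonical key).
cooccur = Counter()
for labels in instances:
    for a, b in combinations(sorted(labels), 2):
        cooccur[(a, b)] += 1

print(counts["D_intent"])              # 3
print(cooccur[("D_acc", "D_intent")])  # 1
```

With the full dataset, `counts` yields the frequency column of the analysis table and `cooccur` fills the co-occurrence matrix.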

Through this, we found that D_intent is the most prevalent category and frequently appears concurrently with all other categories, while users experienced D_acc as the most severely dissatisfying.

💡RQ2. Analysis of how users respond to dissatisfaction

(1) We analyzed the count and effectiveness score of each tactic category, as follows:

Tactic Analysis Table

Through this, we found that T_specify is the most prevalent and most effective tactic.

(2) We analyzed the tactics used for each dissatisfaction and visualized the flow in a Sankey diagram.

Dissatisfaction and Corresponding Tactics

Tactic and Dissatisfaction Sankey diagram

It shows how users address various dissatisfactions when conversing with ChatGPT. About 34% of dissatisfactions are not addressed with any tactic, while users employ tactics for the remaining 66%. However, even when tactics are used, users manage to resolve only 28% of their dissatisfactions, leaving 72% unresolved.
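The tactic-usage and resolution rates above can be derived from per-instance annotations along these lines. This is a minimal sketch under assumed data: the records and field layout are illustrative, not the paper's actual dataset.

```python
# Illustrative records: (tactic, resolved) for each dissatisfaction
# instance; tactic is None when the user did not follow up at all.
records = [
    ("T_specify", True),
    ("T_specify", False),
    ("T_repeat", False),
    (None, False),
]

with_tactic = [r for r in records if r[0] is not None]

# Share of dissatisfactions the user addressed with any tactic.
tactic_rate = len(with_tactic) / len(records)

# Share of tactic-addressed dissatisfactions that were resolved.
resolved_rate = sum(resolved for _, resolved in with_tactic) / len(with_tactic)
```

On the real dataset, `tactic_rate` corresponds to the 66% of dissatisfactions addressed with tactics, and `resolved_rate` to the 28% resolution rate among them.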

💡RQ3. Analysis of how dissatisfaction and tactics vary based on the user’s knowledge level of LLMs

We analyzed how users’ experience of dissatisfaction and their tactics differ depending on their knowledge levels regarding LLMs.

Dissatisfaction analysis by knowledge group

(1) In terms of dissatisfaction experience, we observed that the low-knowledge group experiences D_depth and D_refuse more frequently, while the high-knowledge group experiences D_acc and D_format more frequently.

Tactic analysis by knowledge group

(2) In terms of users’ tactics, we found that No Tactic and T_repeat were more prevalent in the low-knowledge group, while T_error was more prevalent in the high-knowledge group.

Sankey diagrams by knowledge group

The Sankey diagrams illustrate how users in the low-knowledge and high-knowledge groups experience each dissatisfaction category in ChatGPT’s responses, respond to the dissatisfactions with each tactic category in their prompts, and whether these tactics ultimately resolve the dissatisfactions. They show that the resolution rate in the high-knowledge group (29%) is higher than that in the low-knowledge group (23.5%).


We collected user experience data on dissatisfactory ChatGPT responses from users’ actual conversations with ChatGPT through our data collection system.


      title={Understanding Users' Dissatisfaction with ChatGPT Responses: Types, Resolving Tactics, and the Effect of Knowledge Level}, 
      author={Yoonsu Kim and Jueon Lee and Seoyoung Kim and Jaehyuk Park and Juho Kim},

KIXLAB · KAIST