AI Chatbots, Manipulative Design Tactics And The Right To Opt Out of Automated Grievance Redressal
- Malak Seth
- Apr 5
- 5 min read
- by Malak Seth, a fourth-year law student at Rajiv Gandhi National University of Law, Punjab
Introduction
The introduction of ChatGPT by OpenAI in 2022 marked a turning point for numerous industries, transforming various aspects of modern life. One area where it has had an outsized impact is the customer service sector, where AI chatbots have been integrated into e-commerce sites as a medium for resolving grievances. These virtual assistants are designed to handle queries, assist with issues, and even process transactions, offering round-the-clock availability. Their integration into websites, substituting human agents as the first point of contact for grievance redressal, has enabled companies to reduce costs and significantly enhance efficiency.
However, like any new technology, the introduction of chatbots has faced challenges, particularly as reliance on them in customer service grows. Consumer rights activists have often highlighted the limitations of chatbots in addressing the wide-ranging problems consumers encounter during online purchases, and consumers frequently find that chatbots fail to meet their expectations. The use of certain design strategies, such as dark patterns, intentional obstruction, and usability smells, has further complicated the relationship between chatbots and consumer rights. The specific tactics employed to manipulate users are discussed later in this blog. Ultimately, the immense frustration of navigating chatbot prompts in an attempt to reach a human agent compels consumers to demand a ‘right to opt out of automated grievance redressal’.
This blog examines the misuse of AI chatbots by e-commerce companies, which prevent users from connecting to human agents through manipulative design tactics. It subsequently analyzes the current landscape of consumer laws that empower users to access swift and effective grievance redressal mechanisms. It also proposes the introduction of a ‘right to opt out of automated grievance redressal’ within the framework of consumer laws in India. Lastly, it explores the impact of such a right on businesses and suggests a balanced approach that considers both the interests of businesses and the rights of consumers.
Manipulative Design Tactics in AI Chatbots and Consumer Harm
The rapid adoption of chatbots and businesses' resulting reliance on them to handle customer grievances have highlighted the need to balance efficiency and fairness. However, consumer experience indicates that this balance frequently tips in favour of cost-saving measures employed by companies at the expense of consumer rights. One of the most pressing concerns is the implementation of manipulative design tactics within chatbots, which can deliberately deter users from seeking human intervention or achieving their intended outcomes. Under the Consumer Protection (E-Commerce) Rules, 2020, every e-commerce business must prominently display the email addresses, landline and mobile numbers of customer service representatives and grievance officers. Unfortunately, by using AI chatbots as the first point of contact for all consumer grievances, many companies violate this directive by failing to display any contact details at all.
As previously mentioned, the manipulative tactics ingrained in AI chatbots represent a particularly concerning trend that undermines consumers’ right to effective grievance redressal. Design tactics such as ‘obstruction’—the active hindrance posed by chatbots in connecting consumers to live agents despite the chatbot’s inability to resolve the issue—are a prime example. Another design flaw, referred to as ‘usability smells,’ pertains to poor interface design that hampers users' ability to complete common tasks. For instance, a research study titled ‘In Search of Dark Patterns in Chatbots’ found that when users attempted to enter their 17-digit account number, the chatbot failed to recognize it because it expected only 11 digits. In another case, a customer trying to report a faulty product through a chatbot became trapped in an endless loop: the chatbot repeatedly requested photo evidence before connecting them to a live agent, and even after the photo was uploaded, it returned vague replies such as “We are processing your request,” followed by the same prompt requesting photo evidence.
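The account-number failure described in the study is, at root, an input-validation defect. A minimal sketch (the field lengths and function names here are hypothetical, purely for illustration) shows how a rigidly coded validator silently rejects legitimate input, and how a more forgiving one avoids the usability smell:

```python
import re

def validate_account_number(user_input: str) -> bool:
    """Rigid validator: accepts exactly 11 digits and nothing else.
    A user with a legitimate 17-digit account number is silently
    rejected -- the 'usability smell' described in the study."""
    return re.fullmatch(r"\d{11}", user_input) is not None

def validate_account_number_flexible(user_input: str) -> bool:
    """Forgiving validator: strips spaces and dashes, then accepts
    any plausible account-number length (here assumed 8-20 digits)."""
    digits = re.sub(r"[\s-]", "", user_input)
    return digits.isdigit() and 8 <= len(digits) <= 20

# The rigid validator rejects a valid 17-digit number outright:
print(validate_account_number("12345678901234567"))              # False
print(validate_account_number_flexible("1234 5678 9012 34567"))  # True
```

The point is not the specific lengths but the design posture: a chatbot that hard-codes one institution's format locks out every user whose valid input deviates from it.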
Consumers often find it challenging to bypass automated systems in order to connect with a human agent. This frustration is exacerbated by the intentional design of these chatbots to discourage users from opting out of automated interactions. Moreover, some chatbots require adept maneuvering to enable users to connect with human agents. This issue is particularly relevant in India, where digital literacy is only 38%, compared to much higher rates in developed countries. Thus, it can be inferred that most Indian consumers lack the skills to successfully navigate the complexities of getting a chatbot to connect them with a live agent.
Therefore, the author suggests that a ‘right to opt out of automated grievance redressal’ be introduced in Indian Consumer Law. However, implementing such a right is likely to face significant resistance from businesses that have benefited from reducing costs associated with backend call centres handling consumer grievances. Consequently, two solutions are proposed to address the issues highlighted in this blog, considering both user rights and business interests.
To qualify the ‘right to opt out of automated grievance redressal’ in a way that also protects business interests, the principle of ‘human alternatives, consideration, and fallback’ from the United States’ Blueprint for an AI Bill of Rights should be applied. This principle emphasizes that individuals should have the option to opt out of automated systems in favour of human interaction where ‘appropriate’. Appropriateness is assessed against reasonable expectations in specific circumstances and contexts. For example, Blinkit recently introduced a 10-minute ambulance service. Given this service's connection to health emergencies, consumers should have the ‘right to opt out of automated grievance redressal’ in such contexts.
Lastly, to resolve the problem of manipulative design tactics, the Central Consumer Protection Authority (CCPA), which is tasked under Section 18 of the Consumer Protection Act, 2019 with protecting the rights of consumers, should introduce AI standardization guidelines for the deployment of chatbots. AI standardization refers to the process of establishing guidelines for the design, development, and deployment of AI systems. Such guidelines could include benchmarks for the number of failed resolution attempts after which the chatbot must automatically display a prompt seeking the user’s approval to connect to a human agent. Further, to ensure that companies abide by these standards, independent assessment authorities should be established to verify that manipulative design tactics, such as active obstruction and usability smells, are not built into chatbot design.
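The failed-attempt benchmark proposed above is straightforward to operationalize. The following sketch is a hypothetical illustration, not an existing standard: the threshold value, class and action names are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical benchmark: after this many failed resolution attempts,
# the chatbot must offer escalation to a human agent.
MAX_FAILED_ATTEMPTS = 2

@dataclass
class GrievanceSession:
    failed_attempts: int = 0
    escalation_offered: bool = False

    def record_outcome(self, resolved: bool) -> str:
        """Return the chatbot's next action after an automated attempt."""
        if resolved:
            return "close_ticket"
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_FAILED_ATTEMPTS:
            # Mandatory prompt under the proposed guideline: seek the
            # user's approval to connect to a human agent.
            self.escalation_offered = True
            return "offer_human_agent"
        return "retry_automated_flow"

session = GrievanceSession()
print(session.record_outcome(resolved=False))  # retry_automated_flow
print(session.record_outcome(resolved=False))  # offer_human_agent
```

Because the escalation trigger is a single auditable threshold rather than opaque conversational logic, the independent assessment authorities proposed above could verify compliance by inspecting or testing this behaviour directly.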
Conclusion
The integration of AI chatbots represents a significant paradigm shift in the customer service industry. However, with their widespread adoption, it must be ensured that consumer rights and trust are not taken for a ride. The use of manipulative design tactics hinders effective grievance redressal and erodes the credibility of businesses.
Therefore, it is essential that consumer-centric measures such as the ‘right to opt out of automated grievance redressal’ and the standardization of AI chatbots be introduced within the framework of consumer laws in India. By adopting such solutions, businesses can balance the benefits of automation with the legal and moral obligation to protect the rights of their customers. Further, the CCPA has already shown a proactive approach by issuing guidelines on the prevention and regulation of dark patterns. It is recommended that it exercise its powers under sub-clause (l) of sub-section (2) of Section 18 of the Consumer Protection Act to issue necessary guidelines preventing harm to consumers’ interests caused by the use of manipulative design tactics in chatbots.