Consumer Protection in the Age of Artificial Intelligence

As Artificial Intelligence (AI) permeates ever more aspects of our lives, the need to safeguard consumer interests within this technological landscape becomes increasingly apparent. This blog post examines the multifaceted impact of AI on consumers, from potential harms and environmental considerations to legal and ethical dimensions. Understanding the interplay between AI and consumer rights is crucial to navigating this evolving terrain.

Regarding the consumer protection aspects of AI, we can distinguish several areas where problems caused by AI need to be addressed.

In general, in terms of potential consumer harm, it is necessary to set out the basic requirements that must be followed:

- it should be clearly stated if commercial objectives influence the development of the AI system;

- AI developers and users should be required to disclose energy use and carbon emissions data through mandatory eco-labelling;

- effective means of consumer redress, such as claims for damages and public interest litigation, should also be introduced in the field of AI. [1]

The environmental impact of AI is no small concern. With the tech industry booming, pollution has tagged along for the ride: estimates suggesting that AI's carbon footprint may surpass that of the aviation sector are quite the eye-opener. [2] Data centers, the powerhouses behind AI, generate their fair share of environmental baggage in terms of energy consumption and emissions. [3]

It's heartening to see efforts in the right direction, though. Eco-design requirements for goods and the push for digital product passports [4] are promising steps. And I couldn't agree more about the importance of transparency in the AI world: the carbon dioxide emissions and energy consumption across an AI model's life cycle should be known and published. It's about making progress and finding that delicate balance between technological advancement and environmental responsibility.

At the legislative level, AI must be examined in two directions: on the one hand, its relationship to the regulatory regimes already in force (the UCPD [5] and the GDPR [6]); on the other hand, the new regulatory needs it generates (the proposed AI Regulation) in previously unregulated situations.

The landscape where generative AI meets commercial practices is intricate, especially when it comes to consumer interactions. This falls under scrutiny and might face investigation under the UCPD. The concept of a transactional decision is broad, encompassing everything from tossing an item into your online shopping cart to the seemingly endless scroll that keeps you hooked. [7]

Now, when a generative AI throws advertisements into the mix, it is essentially operating in the commercial practice arena. If this practice is deemed unfair, the authority may step in under the UCPD. [8][9] And let's talk about generative AI's role as a link between consumers and services. Using it, say, in a chatbot to steer consumers might be seen as a pushy commercial move, a potentially aggressive practice. On the flip side, if the same AI is embedded in the purchase process, there is a risk of it feeding consumers misleading information about the product, exposing the trader to liability for deceptive practices. [10] That is why it is crucial that the development of these models follows the fairness-by-design principle. [11] It's about navigating this complex terrain with an ethical compass.

Yet, amidst these advancements, lingering questions persist. Is it an unfair commercial practice for a generative AI chatbot to captivate consumer attention or tether a user to a service by manipulating their emotions? Generative AI models can craft content so realistic, whether dialogue, voices, or multimedia, that it can easily deceive users, either intentionally or due to poor data quality. This potential for misleading content raises concerns, especially when misinformation can lead to tangible harm, such as ill-advised health recommendations from a faulty AI. The paradox is that users' growing comfort with chatbots makes them more susceptible to misinformation without even realizing it. As generative AI permeates our digital landscape, concerns arise about its capacity to influence emotions and opinions, blurring the line between AI systems and authentic human interaction.

The looming specter of deepfakes and misinformation adds another layer of complexity. Projections suggest that by 2026, a staggering 90% of online content will be AI-generated, with a substantial portion potentially illegal, like sexually explicit deepfakes. Alas, existing technologies to prevent and identify deepfakes fall short, leaving trust vulnerable to erosion. [12]

Moreover, generative AI is trained on personal data, derived from real people's images, texts, and conversations. This raises valid data protection concerns, addressed in part by regulations like GDPR. However, ensuring the interests of vulnerable groups [13] and implementing human review in automated decision-making processes remains a challenge, [14] as demonstrated by instances like the Italian data protection authority's procedures. [15]

In addition to the list of prohibited AI practices, the proposed AI Regulation contains more detailed rules mainly for high-risk AI systems. General-purpose generative AI is not classified as high-risk and is therefore regulated in a more limited way. For chatbots and deepfakes, some transparency requirements are formulated, but this is not sufficient to protect consumers' interests as long as these systems are not classified as high-risk AI. [16] The Council's position is that general-purpose AI should be considered high-risk when it is used as, or as part of, a high-risk AI system. [17] The European Parliament would introduce the category of the foundation model, trained on a wide range of data and designed for general purposes (generative systems could fall into this category); it would be subject to additional transparency requirements, similar to those for high-risk AI. [18] The Parliament would also extend consumer rights, such as the right to lodge a complaint against an AI system and to have recourse to the courts if the authority rejects the complaint; and, importantly, the right to information when a decision affecting the consumer is taken by a high-risk AI system, along with the possibility of collective redress. [19]

As the dynamic between consumers and AI continues to evolve in personalized social media, content generation, and recommendation systems, debates about ensuring the safety, reliability, and fairness of generative AI persist. These discussions, however, must translate into action. Consider consumer search: while a traditional search yields multiple results, generative AI narrows them down to a single answer, altering the landscape considerably.


[1] Forbrukerrådet: Ghost in the machine – Addressing the consumer harms of generative AI, June 2023 ("Forbrukerrådet"), pp. 9–64.

[2] Schorow, S. (2022, April 22). How can we reduce the carbon footprint of Global Computing?. MIT News | Massachusetts Institute of Technology. https://news.mit.edu/2022/how-can-we-reduce-carbon-footprint-global-computing-0428;

Kilgore, G. (2023, July 10). Carbon footprint of Data Centers & Data Storage Per Country (calculator). 8 Billion Trees: Carbon Offset Projects & Ecological Footprint Calculators. https://8billiontrees.com/carbon-offsets-credits/carbon-ecological-footprint-calculators/carbon-footprint-of-data-centers/

[3] Forbrukerrådet, pp. 34–36.

[4] COM (2022) 142: Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL establishing a framework for setting ecodesign requirements for sustainable products and repealing Directive 2009/125/EC. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0142

[5] Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council (‘Unfair Commercial Practices Directive’) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32005L0029

[6] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32016R0679

[7] Commission Notice – Guidance on the interpretation and application of Directive 2005/29/EC of the European Parliament and of the Council concerning unfair business-to-consumer commercial practices in the internal market, point 2.4. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021XC1229(05)

[8] Commission Notice – Guidance on the interpretation and application of Directive 2005/29/EC of the European Parliament and of the Council concerning unfair business-to-consumer commercial practices in the internal market, point 2.4. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021XC1229(05)

[9] Bing already uses advertising in its generative AI search engine: https://techcrunch.com/2023/03/29/that-was-fast-microsoft-slips-ads-into-ai-powered-bing-chat

[10] Forbrukerrådet, p. 48.

[11] Forbrukerrådet, p. 48.

[12] Forbrukerrådet, pp. 22–28.

[13] Forbrukerrådet, p. 32.

[14] Forbrukerrådet, pp. 33–34.

[15] Conforti, M. (2023, April 25). The Italian chat-GPT saga. how (not) to regulate AI. LIBERI OLTRE LE ILLUSIONI. https://www.liberioltreleillusioni.it/news/articolo/the-italian-chat-gpt-saga-how-not-to-regulate-ai

[16] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final, Art. 52. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[17] Position of the Council on the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - General approach, Art. 4c 2) https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf

[18] Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD),  https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html.

[19] Ibid.