
But the ubiquity of AI also means something else. It means that most of us have some kind of preconception about what AI can do. While trust in and support for AI are not universal, many consumers are embracing the power of the technology. For example, 71% of customers want to see generative AI incorporated into their shopping experience. And although only 21% of respondents said AI features make them more likely to buy a product, that still represents a sizeable group.
In other words, businesses can raise their profile and boost revenue by advertising the "smart" credentials of their products.
So what happens when companies misrepresent this? What happens when businesses exaggerate the amount and level of AI they are actually using?
Well, this is AI washing, and it's becoming increasingly common.
AI washing in action
Amazon faced AI washing accusations last year when it transpired that its AI-powered physical stores and automated billing relied on thousands of human workers in India to review transactions. The company disputed the accusation, though not the underlying facts: it said that human review teams are a key part of large-scale AI systems and that it had not misrepresented itself.
Coca-Cola was also accused of AI washing when it advertised a new drink flavour "co-created" with AI. Some questioned what "co-created" even meant and suggested the brand was piggybacking on the popularity of AI to sell more products.
But this doesn't just apply to big brands – and it's also not a particularly new phenomenon. A study from 2019 found that around 40% of organisations positioning themselves as "AI startups" weren't really using AI in their products and services at all.
Why is this such a big deal?
There are lots of reasons why AI washing is not okay. It's illegal for starters – you can't sell products and services based on claims that simply aren't true. But beyond this, why is AI washing something we all need to care about?
AI washing can change the perception of artificial intelligence
The public perception of AI is in a precarious position, and the picture is complex. For instance, a study from the University of Queensland found that almost two-thirds of people are either "ambivalent or unwilling to trust" AI. However, the study also found that trust depends heavily on the application – healthcare, for example, is an area in which AI enjoys relatively high levels of trust.
The same study found that 85% of people believe there will be benefits to AI, but only 50% believe these benefits will definitely outweigh the risks. As AI technology advances and adoption accelerates, public trust needs to increase alongside this. If companies are not upfront about the way they are using AI, these trust levels are going to fall.
AI washing leads customers to pay more
As we discussed earlier in this article, 21% of consumers say AI features make them more likely to purchase a product or service, so they are likely to spend their money with companies that provide this kind of offering. While this is not a majority, it still represents a significant proportion of consumers, and the figure is likely to grow.
If more than a fifth of customers are willing to part with more money to receive an AI-powered solution or service, then businesses can profit from this. It's simply not fair to misrepresent a product or service and then directly profit from this misrepresentation.
AI washing can put consumers at risk
There are many instances in which we place our trust in artificially intelligent technology. As we've seen, healthcare AI, for example, has a high level of trust. Navigation apps are also highly trusted – if we are going on a journey, we tend to take the route the navigation app tells us to, even if we might know a different way ourselves.
If users are putting trust into these solutions, misrepresentations can be dangerous. The health, safety and well-being of the user can be put at risk by "false artificial intelligence".
Combatting AI washing
Now that AI has become such a buzzword, the dangers of AI washing are more prevalent than ever. We hear the term every single day, so it's easy for unscrupulous companies to add it to their messaging without really engaging with what it means.
So where do we go from here? Fortunately, there are ways to meaningfully combat AI washing.
Developing legislation
Measures like the European Union's AI Act are building frameworks of governance around artificial intelligence. While this kind of act is generally more focused on achieving safe and responsible usage, it is also helpful in combatting AI washing. If businesses need to be more transparent about how they are using AI, then it will become more difficult to misrepresent products and services.
Increasing public understanding
One of the reasons AI washing has become such a problem is that the public understanding of AI technology is low. A survey conducted in 2022 found that only 30% of Americans were able to correctly identify how AI was used in six common applications. A study from the UK in 2023 found that over half of those aged 70 or above are not able to identify when AI is being used. Enhancing understanding with education and support will empower consumers to call out false AI when they see it.
Holding perpetrators to account
Existing legislation does go some way towards protecting consumers and end users. The Consumer Protection from Unfair Trading Regulations, for example, came into force in 2008 and prevent businesses from misleading their customers. However, definitions of AI remain vague, which can make it difficult to identify where misrepresentation is taking place. Setting clear definitions and holding businesses accountable under relevant legislation may be effective in combatting the problem.
Harnessing real benefits from AI
There are real benefits to be had with AI, and there are challenges and risks too. To make sure society is able to leverage these benefits and mitigate the risks, we need to do something about AI washing. While it may appear relatively harmless, misrepresenting AI erodes public trust and can even put the end user at risk. As AI governance develops, however, it should become increasingly easy to spot AI washing when it happens – and stamp it out.