In 2016, Microsoft's AI chatbot Tay started tweeting racist comments. Not only did it spark a heated debate about evil AI, but it was also a PR disaster for Microsoft, almost resulting in a lawsuit by Taylor Swift. Taylor's legal team claimed the nickname "Tay" was associated with Ms. Swift, creating a false association between the popular singer and the chatbot. Source: BBC, "Taylor Swift 'tried to sue' Microsoft over racist chatbot Tay".
Luckily for Taylor, the chatbot was withdrawn from the market. But this quirky story is how I started exploring bias in AI, how it relates to digital marketing, and why we should all be more aware of the potential risks of biased algorithms.
Human bias can creep into AI through both algorithms and data. In the case of Tay (the chatbot, not the singer), the bot learned from its conversations with people on Twitter, and in doing so it learned and replicated human bias.
When the data used to train an AI model comes from humans, and that data is not broad enough, the model doesn't see enough examples to generalize correctly.
So what can happen when an algorithm doesn’t see enough data?
Let's take the example of a generic clothing campaign that uses AI algorithms to serve content matching what other people have already chosen. This immediately excludes results from people who made less popular choices, resulting in oversimplified personalization built on biased assumptions about a group or an individual.
To put this into context, an online retailer may suggest drawn-out dresses to me because, based on the "wisdom of the crowd", those are popular items for my demographic. However, it's a product I'll never purchase, which translates into a poor customer experience and an unsubscribe.
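To see how this kind of bias emerges, here is a minimal sketch of a popularity-only recommender. The purchase data, item names and `recommend` function are all hypothetical, invented for illustration; the point is that anything outside the segment's most common picks simply never gets surfaced.

```python
from collections import Counter

# Hypothetical purchase history for one demographic segment.
# Items and counts are made up for illustration.
segment_purchases = [
    "long dress", "long dress", "long dress", "jeans",
    "long dress", "sneakers", "long dress", "jeans",
]

def recommend(purchases, top_n=2):
    """Recommend only the segment's most popular items."""
    counts = Counter(purchases)
    return [item for item, _ in counts.most_common(top_n)]

print(recommend(segment_purchases))
# -> ['long dress', 'jeans']  ("sneakers" never surfaces,
#    even for the shopper who would actually buy them)
```

The minority preference is not wrong data, it is just rare data, and a recommender trained only on "what the crowd chose" quietly erases it.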
Another, more extreme example, where an AI algorithm lacked the data to understand the context, happened during the Californian wildfires. Amazon's algorithm picked up on the fact that people were ordering more fire extinguishers, so prices increased due to the sudden surge in demand (we've all experienced this when trying to order an Uber on a busy weekend evening). Amazon's algorithm didn't know about the state of emergency, so the prices went up. In this case, the bias had a positive effect on the business but a negative one on the customer.
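The mechanism behind that story can be sketched in a few lines. This is not Amazon's actual pricing logic; it is a made-up, context-blind surge rule (price scales with short-term demand relative to a baseline, up to a cap) that shows how prices climb during an emergency the model knows nothing about.

```python
# Hypothetical context-blind surge pricing. The base price,
# demand figures and cap are all invented for illustration.
BASE_PRICE = 30.0  # made-up base price for a fire extinguisher

def surge_price(orders_last_hour, baseline_orders, cap=3.0):
    """Scale price with the demand ratio, capped at `cap`x."""
    multiplier = min(orders_last_hour / baseline_orders, cap)
    return round(BASE_PRICE * max(multiplier, 1.0), 2)

print(surge_price(10, 10))  # normal demand  -> 30.0
print(surge_price(80, 10))  # wildfire spike -> 90.0 (capped at 3x)
```

The rule is "working as designed": demand is up, so the price goes up. The problem is everything the model cannot see, namely why demand is up.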
In the above example, the AI unfairly took advantage of people in an emergency. So to what degree can we regulate bias in AI? Tiago Ramahalo, Lead Research Scientist at Cogent Labs and previously at DeepMind, comments:
“Bias reduction techniques will always entail a reduction in the absolute predictive accuracy of a model. Where to draw the line of how much bias reduction should be performed at what cost requires the general public and especially legislators to be more aware of AI modeling techniques so that engineers, researchers and all interested parties can collaborate to build models that are as useful as possible to society while respecting all principles of fairness.”
I don't believe we should just wait around for big organizations to reduce bias in AI; we should all take ownership of it. HBR, in the article "What Do We Do About the Biases in AI", suggests taking the following steps:
- Educate business leaders. Decision-makers need to understand the risks and the need for fairness, transparency and privacy.
- Establish processes within the organization to mitigate bias in AI such as testing tools or hiring external auditors.
- Understand and accept that human bias does affect data and will influence results. Having this defined as a risk to mitigate allows the business to introduce processes such as running algorithms alongside human decision-makers, comparing results, and using “explainability techniques.”
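The last step above, running algorithms alongside human decision-makers and comparing results, can be sketched as a simple audit. The records, groups and decision values below are entirely hypothetical; the sketch just computes per-group approval rates for humans and for the model so a mismatch can be flagged for review.

```python
# Hypothetical side-by-side audit: each record is
# (group, human_decision, model_decision), 1 = approved.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def audit(records):
    """Compare human and model approval rates per group."""
    report = {}
    for g in sorted({grp for grp, _, _ in records}):
        human = [h for grp, h, _ in records if grp == g]
        model = [m for grp, _, m in records if grp == g]
        report[g] = {
            "human_rate": approval_rate(human),
            "model_rate": approval_rate(model),
        }
    return report

print(audit(records))
# Group A: human and model agree (0.75 vs 0.75).
# Group B: humans approve 50%, the model approves 0% -- a
# disparity worth investigating with explainability techniques.
```

A real audit would use proper fairness metrics and far more data, but even this toy comparison shows why keeping humans in the loop surfaces problems a single accuracy number would hide.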
When we control these risks, AI can create amazing experiences for our customers.
Raj Balasundaram, SVP of AI at Emarsys, summarized it perfectly when I asked him what excites him most about AI:
“Everything we do, we do it at mass scale, whether it’s buying online, looking at a credit score or getting a car loan. In the past, working at scale meant that we couldn’t consider individual circumstances and things unique to you. It couldn’t be done because we didn’t have the infrastructure to do it. But now, I can look at an individual and really find out what is unique and important to that individual, and really personalise the experience to that customer, whether it’s marketing or financial services or customer services. That’s the part which is amazing. Now every brand can offer exclusive and personal service to all their customers.”