The rise of AI in business largely goes unquestioned, until a poor decision emerges from a black box that no one can fathom, or one that causes actual damage. To avoid this, businesses need to adopt AI tools that are explainable and customer-friendly, with chatbots paving the way until AI can be truly trusted.
In most cases, artificial intelligence helps companies make progress across a variety of business use cases. From understanding us humans and our convoluted languages to extracting data from forms and predicting outcomes, AI helps spot meaning, intent and value, and powers chatbots, analytics services and other digital business tools.
However, as with 5G and 4G before it, as with robots in factories, and even those pesky vaccines that keep us alive, there is a narrative in the media that AI is here to destroy us: to wipe out jobs, to weaken workers and to deliver other negative outcomes.
A black box AI, for example, could produce results that suggest bias but offer no evidence either way, so one person gets a low insurance quote while someone else in similar circumstances is charged more. Or the AI may produce outcomes that are largely negative for the people on the receiving end of any decision. In regulated markets like banking, that could be a major issue. You can read more about the fine detail of black box technology here.
For one example where the press brought its firepower to bear, take the black box AI in Target’s Shipt delivery app, which is cutting gig workers’ pay by 30% to 50%. “At other gig economy giants that rely on so-called ‘black box’ algorithmic pay structures, such as Instacart, DoorDash, Uber, and Lyft, workers who rely on the app as a primary source of income have found themselves at the mercy of constant, unexpected tweaks to their pay structure and no guarantee that they’ll make the minimum wage.”
In other cases, the mystery of a black box AI will be more subtle, and the risks need to be mitigated by the business or avoided altogether. “One of the biggest concerns around AI is that complex ML-based models often operate as “black boxes.” This means the models — especially “deep learning” models composed of artificial neural networks — may be so complex and arcane that they obscure how they actually drive automated inferencing. Just as worrisome, ML-based applications may inadvertently obfuscate responsibility for any biases and other adverse consequences that their automated decisions may produce.”
To avoid falling into this trap and becoming the next target of a post-truth media assault, a business needs to navigate the adoption of any chatbot or AI-powered tool with some care.
The simplest way for any business to get into AI is to start without it. Any company looking to improve its customer-facing technology can build a completely conventional scripted chatbot: one that does nothing more than follow the classic support or help script, delivering a few positive business outcomes for customers or clients, or directing them to the appropriate human support if the bot can’t deal with the question.
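A scripted bot like this can be sketched in a few lines. The following is a minimal, illustrative example only; the keywords, canned replies and fallback message are all hypothetical placeholders, not a real product’s script.

```python
# Minimal scripted support bot: no AI, just keyword rules plus a human fallback.
# All keywords and canned replies below are illustrative placeholders.

SCRIPT = {
    "refund": "To request a refund, open your order history and select 'Return item'.",
    "hours": "Our support team is available 9am-5pm, Monday to Friday.",
    "password": "You can reset your password from the login page via 'Forgot password?'.",
}

FALLBACK = "I'm not sure about that - let me connect you to a human agent."

def reply(message: str) -> str:
    """Return the scripted answer for the first matching keyword."""
    text = message.lower()
    for keyword, answer in SCRIPT.items():
        if keyword in text:
            return answer
    # Anything the script can't handle is escalated to a person.
    return FALLBACK
```

Because there is no model involved, every answer the bot can give is visible in the script itself, which is exactly the explainability a black box lacks.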
With that as the basis for a first effort, developers or creators can then start adding AI elements such as natural language processing (NLP). As SnatchBot explains: “To develop your NLP model over time, so that it becomes more and more accurate at solving the task you want it to address, you will want the chatbot to learn, especially from its mistakes. Machine Learning is a hot topic in the search for true Artificial Intelligence. Our models embody Machine Learning in the sense that on the basis of your having provided example sentences and their outcomes, the model will make decisions about new sentences it encounters.”
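The “example sentences and their outcomes” idea in that quote can be illustrated with a toy intent classifier. This is a sketch only: real platforms train statistical models, whereas this version just scores a new message by word overlap with labelled examples, and the intents and training sentences are invented for illustration.

```python
# Toy illustration of learning from example sentences: classify a new
# message by word overlap with labelled examples. Real chatbot platforms
# use trained ML models; the intents and sentences here are invented.

EXAMPLES = {
    "billing": ["my invoice is wrong", "i was charged twice", "billing question"],
    "shipping": ["where is my order", "my parcel has not arrived", "track my delivery"],
}

def classify(message: str) -> str:
    """Return the intent whose examples share the most words with the message."""
    words = set(message.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, sentences in EXAMPLES.items():
        score = max(len(words & set(s.lower().split())) for s in sentences)
        if score > best_score:
            best_intent, best_score = intent, score
    # "unknown" means the bot should hand off to a human.
    return best_intent
```

Crucially, a team that builds even a toy like this can see exactly why a message was routed where it was, which is the understanding the article argues businesses need before adopting opaque models.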
Having improved a chatbot with NLP, natural language understanding or deep learning, the business will see how the bot improves outcomes, reduces the cost of investment through smart learning and provides a path for wider use of AI. That next step might be analytics, with the likes of Domo providing AI-based business intelligence that a company can fine-tune to meet its own needs.
Using bots and other smaller tools, your teams will be better able to understand how the AI delivers results, and what to watch out for. As in-house teams or developers work with software or service partners who increasingly bundle AI with business applications, that broader understanding will help them avoid mistakes and poor solutions. Any provider selling a black box solution that “just does it” will clearly need to provide better reasoning.
Deloitte is just one brand highlighting the risks and providing advice on how to manage the black box of AI, citing risks including:
- Erroneous decisions
- Overlooked vulnerabilities
- Increased scrutiny and higher expectations from consumers
- Reputational, legal, and regulatory consequences
- Delays in proper redress of business issues
- Third-party induced risks due to limited visibility into algorithm design
However, by building a pillar of AI knowledge in the business through bot and analytics tools, a company will be better positioned to identify these risks and ready to avoid or deal with them. At heart, most current AI tools are relatively simple creatures, delivering predictable and measurable benefits to a company, but as AI gets smarter those risks will grow.
Companies can then look at the poor examples of AI and seriously consider whether they should adopt them: would slashing contractor pay really show your company in a good light? While the future of artificial general intelligence (AGI) might open up a fresh can of worms, when it comes to business use, company leadership needs to monitor and control how its AI is used today.
Another risk is teams adding AI tools to services independently, without IT oversight, letting shadow AI creep into the company. Imagine if that AI starts to skew outcomes or poison data. How will the business answer to customers or an angry press if it cannot explain what went wrong?
As with much of cloud technology, AI is just another tool in the box. But while some email downtime isn’t that big a deal, when an AI goes wrong or is misused there could be a very nasty sting in the tail for both users and the company. Developing a clear AI strategy and a carefully considered adoption plan is essential to avoid ending up on the wrong side of the AI argument, or looking less than intelligent when it goes wrong.