In uncertain times, businesses tend to double down on efforts to manage expenses. Yet when it comes to machine learning (ML) and artificial intelligence (AI), many still seem willing to splurge. The global AI market, including software, hardware and services, is growing at a compound annual rate of nearly 20% and is expected to break the US$500 billion mark by 2024.
Companies are prioritizing AI for many reasons - to deliver better customer experience, to help employees excel at their jobs, sometimes in the hopes of cutting costs, or simply to keep up with the competition. But is this always money well spent?
Anecdotal and statistical evidence paints a mixed picture. One recent study found that while most executives view AI as critical to the future of their company, the typical return on AI investment is just over 1%. Most firms will wait at least one, and up to three, years for payback on an AI project.
The wait for AI ROI
In some cases, AI implementations come with high risks. A highly celebrated effort by the University of Texas MD Anderson Cancer Center and IBM to develop an AI advisor for cancer patients, for example, was mothballed despite MD Anderson investing over US$60 million in the project.
The damage can also be reputational. One leading digital insurer was recently forced to backpedal after a boast about its ability to use AI to detect fraud and boost profits provoked an overwhelming consumer backlash, and a wider bout of industry soul-searching.
Instances like these point both to AI’s incredible potential and to how even the best AI plans and intentions can go wrong. The stakes are especially high for AI compared with other technologies because it is relatively expensive, and because, by informing or even making important decisions, it readily raises ethical concerns, such as who gets access to services, or how work is balanced between humans and machines.
A few false moves, and AI can create mistakes, bias and unintended negative consequences. Yet when planned and applied effectively, it also contributes to significant business and societal gains. AI does this by automating mundane or dangerous tasks - but also by solving complex problems, improving product innovation, even helping organizations blaze entirely new paths.
i. Identifying and acting on AI opportunities
A successful approach to AI starts with an important question: Is it actually something the enterprise needs? “More often than not, AI is a distinctive tool looking for a problem,” notes Rebecca Parsons, Chief Technology Officer, Thoughtworks. “People have spent a lot of money trying to do something they thought was the right thing and it didn’t work. That doesn’t necessarily mean the technology is wrong - it might just be that it’s being misused.”
The roots of what’s now known as AI go back decades, but progress was long limited by computational power, an obstacle that might seem unthinkable today, when data and processing capabilities are cheap, plentiful and continue to proliferate. Yet for all the progress, sometimes ‘old’ tricks are just as effective as new solutions.
“Just using more traditional analysis techniques isn’t necessarily ‘cool,’ but you can learn some interesting things from data sets using some of the basic statistical and data analytics approaches,” says Parsons.
Though the terms are often used interchangeably, Parsons notes that it’s also important to draw a distinction between ML - that is, machines running algorithms, learning models and extracting patterns or information from data - and AI. The definition of the latter is much more contested but could be broadly viewed as machines performing tasks that could be considered smart in the human sense - that is, demonstrating ‘thinking’ of their own.
A convergence of technology forces lays the foundation for AI to shine
Traditionally, ML and AI have delivered the most when applied to improve productivity and efficiency, and for quality assurance, replacing tedious and repetitive tasks performed by people with automation. Gartner forecasts breakneck worldwide growth for technologies enabling ‘hyperautomation’ - a term that bundles the rapid identification, vetting and automation of a vast number of processes. By 2024, organizations combining hyperautomation with redesigned processes are expected to slash operational costs by as much as 30%.
AI offers many opportunities to speed up or remove human error from routine processes, particularly those based on identifying and acting on patterns.
Neural networks - intricate, algorithm-driven systems modeled after the human brain - “are incredibly good at pattern recognition,” explains Parsons. “Sometimes, when people hear of pattern recognition, they think of image or character recognition. But fraud detection is effectively a pattern recognition problem. The model essentially builds up patterns of what tends to be fraudulent activity and what's normal activity for you. Using AI to flag unusual credit card activity, or perform image analysis, frees people up for higher-value work.”
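To make that concrete, here is a minimal, purely illustrative sketch of fraud detection as pattern recognition - an anomaly detector that learns what ‘normal’ activity looks like for a cardholder and flags departures from it. The synthetic data, the features and the model choice (scikit-learn’s IsolationForest) are assumptions for illustration, not details of any real fraud system:

```python
# Illustrative sketch: flag unusual card activity by learning what
# "normal" looks like. All data, features and thresholds are invented -
# this is not a real fraud system described in the article.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# "Normal" activity for one cardholder: (amount, hour of day).
normal = np.column_stack([
    rng.normal(40, 15, 1000),   # typical purchase amounts
    rng.normal(14, 3, 1000),    # typical purchase times
])

# Build up a model of normal patterns, as Parsons describes.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new transactions: -1 means "unusual - route to a human".
new_txns = np.array([
    [35.0, 15.0],    # routine afternoon purchase
    [2900.0, 3.5],   # large amount at 3:30 am
])
print(model.predict(new_txns))   # e.g. [ 1 -1 ]
```

In production, the flagged transactions would be routed to human reviewers - exactly the division of labor between machine and person that Parsons describes.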
AI and ML are also highly effective when organizations need to do things at scale. Variables increase exponentially when demands ramp up and relying on human experience and analysis alone can quickly prove insufficient. For example, a local retail store serving a regular set of customers in a small town and deploying a few regular delivery routes might not see much scope to boost efficiency with AI. However, organizations with large, global operations and complex supply chains tend to benefit greatly from AI-driven optimization.
A fundamental factor for successful AI projects is to ensure that the processes identified for automation are deeply relevant to the business, and already work well. “Companies need to bear in mind that AI won’t ‘fix’ a broken process,” says Parsons. “It will only automate it - and possibly even make it worse.”
Once a specific area where AI applications could add measurable value is identified, “it’s advisable to start with several good lightweight, cost-efficient pilots to test your AI idea out quickly, to see if it is going in the right direction,” advises Maria Pusa, Principal Data Science Consultant at Fourkind, part of Thoughtworks. “This is the only way to evaluate if the project is going to be a valuable investment.”
“One of the mistakes companies make is to buy or build a massive data platform before mapping their AI journey,” she adds. “This is risky because it requires a lot of resources, can be expensive, and it might take too long before the data is usable. Meanwhile, your competitors might have moved ahead with smaller but impactful AI projects.”
At the same time, rushing the planning and evaluation process is also dangerous. “A lot more ROI can be gained if more time is spent on planning and thinking through what you are actually going to do, before running a pilot or POC (proof of concept),” says Jarno Kartela, Principal Machine Learning Partner at Fourkind, part of Thoughtworks.
One recent survey showed an overwhelming majority of knowledge workers who use automation software see benefits, but that nearly a quarter are determined to avoid automation completely, mainly because they’re not convinced it will prove useful to their role. This makes it all the more vital to invest time in winning people over even in the planning stage.
“Quickly moving to production without proper alignment with the rest of the team could result in issues down the line,” Kartela explains. “People might not trust the system or feel like they’ve been left out of the creation process. You need to establish an internal pull or demand for whatever you’re doing - not only from top management, but also from individuals - and to spend more time on planning and co-creation to get the best results.”
ii. From automating the mundane to collaboration and creativity
Organizations should also decide the extent of their AI ambitions - whether they stop at automating mundane and repetitive, often trivial tasks; or extend towards more creative and strategic opportunities, where the system collaborates with human skills and expertise, becoming what Kartela calls “technology as a co-worker.” This is one of the key trends identified in Thoughtworks’ Looking Glass report on the forces that will shape technology, and business, in the future.
“Automation of mundane tasks delivers quick ROI and that value can be significant,” he says. “But that possibility is very clear to all of your competitors as well, so it’s not going to give you a competitive advantage for too long. Thinking about what AI could do for decision making, creative tasks and strategic planning can take things to another level.”
"Thinking about what AI could do for decision making, creative tasks and strategic planning can take things to another level.”
Jarno Kartela
Principal Machine Learning Partner at Fourkind, part of Thoughtworks
The AI value continuum (Source: Fourkind)
Instead of focusing on productivity and efficiency alone, AI can now be used to help deliver outsized value and guide the organization’s direction. AI is no longer only for data-intensive tasks - it can be equally useful for solving problems that involve only small bits of data but a very large possibility space, such as research and development and scenario planning.
“Techniques like these can be used when you don’t know the answer to a pressing question or need to coordinate something like a complex logistics network at significant scale,” Parsons notes. “You can let algorithms explore the data and see what they find, and perhaps point the way to potential innovations.”
Finland’s Kittilä airport showcases what AI can deliver when it comes to maintaining optimal performance with limited resources and in unpredictable conditions. The fast-growing facility was suffering from a shortage of parking slots for the rising number of flights, resulting in frequent logistical bottlenecks.
Collaborating with the airport’s management and parking experts, Thoughtworks created an optimization model that uses flight data to construct an improved parking plan that can also predict and learn from arrival times and passenger numbers.
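The model’s internals haven’t been published, but the heart of any such system - assigning aircraft to a scarce set of stands at minimum cost - can be framed as a classic assignment problem. Below is a hypothetical sketch using SciPy’s linear assignment solver; the flights, stands and cost figures are all invented for illustration:

```python
# Illustrative sketch of stand assignment as cost minimization. The
# actual Kittilä model is not public; flights, stands and the cost
# matrix here are invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

flights = ["Flight A", "Flight B", "Flight C"]
stands = ["Stand 1", "Stand 2", "Stand 3"]

# cost[i][j]: penalty for putting flight i on stand j - for example,
# expected taxi time plus a surcharge if predicted arrivals overlap.
cost = np.array([
    [4.0, 9.0, 7.0],
    [6.0, 3.0, 8.0],
    [5.0, 8.0, 2.0],
])

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
for i, j in zip(rows, cols):
    print(f"{flights[i]} -> {stands[j]} (cost {cost[i, j]})")
print("Total cost:", cost[rows, cols].sum())
```

A real system would regenerate this plan continuously as predicted arrival times and passenger numbers change - the predictive, learning element described above.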
The airport saw a 12% increase in the number of flights and a 20-point increase in its net promoter score (NPS). A drop in the number and duration of airport-related flight delays also amounted to an estimated €500,000 (US$588,000) in cost savings. What’s more, carbon dioxide emissions were reduced as planes spent less time circling the airport while waiting for a parking spot.
“Supply chain optimization combined with ML has significant potential because it is not just about being cost-efficient; it’s also about being sustainable,” Pusa says. “The more complex the processes, the more potential ROI there is.”
The capacity of AI to bring organizations to new heights doesn’t stop there. “AI-augmented research and development can also have a huge impact, because there is so much potential for any business - from cookies to sneaker design to architecture - to use creative AI for product innovation,” notes Pusa.
In addition to creating original music, poems and artwork, AI has also ‘worked’ with experts to develop brand-new products, such as chairs and even an award-winning spirit.
Thoughtworks joined forces with celebrated Swedish distiller Mackmyra to develop the world’s first AI-blended whisky. By building a model that incorporated data from a wide variety of sources - previous recipes, tasting notes, expert reviews and cask information - the team was able to develop a new blend much faster than would have been possible manually, while also hitting on new combinations that otherwise may never have been considered. Potential recipes were evaluated and further fine-tuned based on input from Mackmyra’s master distiller and chief nose officer before the best recipe was selected - a perfect example of augmentation in action.
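Mackmyra’s actual model hasn’t been published, but the augmentation loop it illustrates - generate many candidate combinations, score them against what the data suggests will work, then hand a shortlist to a human expert - can be sketched in miniature. Everything below (cask types, ‘promise’ scores, the scoring rule) is invented for illustration:

```python
# Illustrative only: generate candidate blends, score them, and
# shortlist a few for a human expert. Cask names, "promise" scores and
# the scoring rule are invented - Mackmyra's model is not public.
import itertools

casks = ["bourbon", "oloroso", "swedish_oak", "birch_wine"]

# Hypothetical learned score per cask, e.g. distilled from past reviews.
promise = {"bourbon": 0.7, "oloroso": 0.9,
           "swedish_oak": 0.8, "birch_wine": 0.6}

def score(blend):
    # Mean cask promise, plus a small bonus for more complex blends.
    return sum(promise[c] for c in blend) / len(blend) + 0.02 * len(blend)

# Enumerate every 2- and 3-cask combination and rank them.
candidates = [b for r in (2, 3) for b in itertools.combinations(casks, r)]
shortlist = sorted(candidates, key=score, reverse=True)[:3]

for blend in shortlist:   # these go to the master blender for tasting
    print(blend, round(score(blend), 3))
```

The value here is not the scoring rule itself but the breadth of the search: the machine enumerates and ranks far more combinations than a human could ever taste, while the final judgment stays with the expert.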
“The core idea of creative AI is not to predict something as accurately as possible, or to try to get some kind of conversion or action, but to mimic creativity in a way that will make the ideation process for new products easier, faster and more efficient,” says Kartela.
iii. Navigating data, human and ethical dimensions
Seizing on AI’s creative potential requires organizations to balance three critical and complex considerations - data, people and ethics.
As the foundation for any model or algorithm, data ultimately dictates the performance of an AI solution, meaning the old computer science maxim of ‘garbage in, garbage out’ applies.
“It takes an awful lot of resources to train an AI model,” notes Parsons. “Not only do you have to have clean data, but it also helps to understand the forces that have shaped that data, and very often people don’t go that far.”
No matter how powerful the underlying technology, disorganized or distorted data can create a system that generates problematic outcomes. In AI/ML, minor issues can quickly spiral into major ones as the system evolves based on previous inaccurate or incomplete conclusions.
“Organizations tend to underestimate how much work it is to get the data in a position where it can be used,” Parsons says. “Data doesn’t age well, and over the years many data sets develop little traps. Perhaps for six months a particular column was used for a completely different purpose, so when you’re trying to analyze 10 years’ worth of data it doesn’t all mean the same thing. You can’t get meaningful answers out of dirty data.”
Data quality has been pinpointed as the single biggest data-related challenge organizations face around AI projects. Other difficulties include governance, security and integrating disparate data sources.
The data barriers to AI achievement
“Typically in order to have high-quality ML you need data from across the company,” says Pusa. “But I’ve witnessed many cases where companies can’t access their own data because it’s buried in legacy systems. If the company, and its data, is divided into several silos it can be almost impossible to combine all that in one place to build a huge predictive model.”
Overcoming this requires an honest assessment of what data resources are available, and how they might be effectively ‘cleaned’ and combined. At the same time, issues with historical data shouldn’t prevent an organization from exploring AI. In fact, experts say, overreliance on historical data can be counterproductive.
“It’s something of a misconception that you absolutely must have a single source of truth in one place in a clean format to start on applied AI,” Kartela explains. “You can’t necessarily predict your future from your own past, and the data gathered in the last 10 years might be irrelevant for a project that’s connected to pricing, for example. Instead, create ML models that actually explore new spaces, and learn causal relationships between price and a specific customer action in the present.”
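A simple, hypothetical way to picture this is a bandit-style price test: rather than fitting a model to years of historical sales, the system tries candidate prices on live traffic and learns which converts best as it goes. The prices, the simulated customer response and the exploration rate below are all assumptions for illustration:

```python
# Illustrative sketch of online exploration for pricing: an
# epsilon-greedy bandit that learns which price converts best from live
# feedback rather than historical data. All figures are invented.
import random

prices = [9.99, 12.99, 14.99]          # candidate price points
trials = [0] * len(prices)             # times each price was shown
successes = [0] * len(prices)          # purchases at each price
EPSILON = 0.1                          # share of traffic spent exploring

def simulated_purchase(i):
    """Stand-in for a live customer; the model never sees these rates."""
    true_rates = [0.30, 0.22, 0.12]
    return random.random() < true_rates[i]

random.seed(1)
for _ in range(5000):
    if 0 in trials or random.random() < EPSILON:
        i = random.randrange(len(prices))              # explore
    else:
        i = max(range(len(prices)),                    # exploit best so far
                key=lambda k: successes[k] / trials[k])
    trials[i] += 1
    successes[i] += simulated_purchase(i)

for p, t, s in zip(prices, trials, successes):
    print(f"price {p}: shown {t} times, est. conversion {s / t:.3f}")
```

A real pricing agent would optimize expected revenue rather than raw conversion, but the principle Kartela describes - explore the space and learn from present behavior - is the same.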
“The idea that you need tons of data to get started on anything is outdated,” agrees Pusa. “Learning has effectively gone online because this is the only way to keep up with changes. It can be easier to get started with real-time insights than historical data because the solutions are typically more lightweight. The idea is to try out different things, and see what works and what doesn’t. Only that allows you to react to fast-developing trends.”
Welcoming AI to the team
As AI projects contribute to, and sometimes take over, roles and processes performed by people, they need to be evaluated in terms of human impact. This means they can’t be the exclusive domain of technologists if they’re to secure the business relevance and organizational buy-in needed to drive genuinely transformative results.
“In applied AI and ML, asking the right question and resolving it is key,” notes Kartela. “When you’re trying to discover that question, the more cross-disciplinary a team you build the better, but you need at least three main roles. One, the person who knows technology in applied fashion and how to use it to solve business problems – typically the CTO or CDO. Second, the person who knows your business delta, the thing you’re trying to do that will get you where you need to be in five years. And third, someone with a strategic design role, whose responsibility day to day is to think in terms of people, roles and competencies, and the empathy to design things that make a difference.”
Firm commitment from senior leadership and ongoing communication go a long way to overcoming any resistance, according to Pusa. “Driving these AI/ML initiatives requires someone inside the organization who thrives on having discussions with each department taking part,” she says. “The main mindset you need to instill is that the project is trying to help them do their job better. It’s very important that all the people involved in a process are also included in a project to augment that process. That’s also necessary to create a good ML model, because you need to truly understand the challenges, constraints and goals.”
“When people get involved, it’s usually a game changer for them,” she adds. “I’ve witnessed several projects where those against them initially ended up being the biggest cheerleaders.”
AI projects need all the support they can get in part because relevant skills remain in short supply. In one recent survey enterprises cited a lack of talent as the single biggest drag on AI adoption, above even data and technical challenges.
Talent shortage weighs on AI adoption
According to Parsons, rather than waiting for the situation to improve, companies need to revert to what was once standard practice.
“The talent crunch is certainly not an overblown problem,” she says. “But it means companies are going to have to invest in developing those capabilities, and then creating the kind of environment where they can retain them. If you look at the way the relationship between employer and employee has evolved over the last 30 years, it used to be that organizations had extensive internal training programs and expected to have to use them. Then we went through this intense period where you couldn’t get a job unless you already knew how to do it. Now people are realizing if there’s not enough talent out there they might have to go back to focusing on internal development.”
"If you look at the way the relationship between employer and employee has evolved over the last 30 years, it used to be that organizations had extensive internal training programs and expected to have to use them. Then we went through this intense period where you couldn’t get a job unless you already knew how to do it. Now people are realizing if there’s not enough talent out there they might have to go back to focusing on internal development.”
Rebecca Parsons
Chief Technology Officer, Thoughtworks
Companies can help foster skills internally by viewing AI projects as educational opportunities – for everyone. “Upskilling at large is a problem that organizations need to address, as most roles will change dramatically over the next 10 years,” Kartela says. “You need to figure out how to scale up the competencies of the entire organization, not just individual roles, but also top management.”
“It’s crucial that you think about how you can turn your technology projects into learning projects for the entire enterprise,” agrees Pusa. “Learning should be part of every project’s key results and objectives.”
With the field rapidly evolving, careful thinking also has to go into identifying what kind of AI/ML talent the company needs.
“As machine learning and applied AI go forward we’re going to make better and better tools and frameworks, and they’ll be easier to use – which will mean AI and ML will merge into software development over the next few years, and become much more available for most companies to use and adopt,” Kartela explains. “Most companies are currently in-housing data science talent really quickly, but if you’ve not created the right environment that talent may end up doing the work of software developers and data engineers – which means they’ll quickly become bored and leave.”
Kartela sees AI/ML talent evolving in two main directions – “one focused on strategy and how to apply AI/ML to problem solving, the other completely specialized in a specific subset of problems, like computer vision or natural language processing. There will be cases where it makes sense to hire that specific talent, but not for most companies.”
Pusa points out that AI/ML is one area where organizations shouldn’t hesitate to seek outside help.
“At times, identifying the right tailored solution requires mathematical expertise,” she says. “It’s easy to buy off-the-shelf AI solutions but you may end up spending a lot of money on something relatively simple. Because there’s hype around it, there are also opportunities to overprice. Speaking to other consultants or experts in the field can enable you to find the best option, without making those kinds of mistakes.”
Addressing ethical blind spots
Along with talent, ethics is one of the key challenges companies almost inevitably have to wrestle with when developing an AI/ML practice. The ethical impacts of an AI solution are tricky to predict, and may not be apparent until well after a solution is put in play.
For instance, assessments of some early facial recognition algorithms made it clear that they had vastly reduced accuracy when dealing with certain demographic groups, particularly Black and female subjects. As some of these solutions were marketed to law enforcement, these discrepancies had the potential for highly destructive consequences.
One recent study by FICO showed over three-quarters of business leaders understand AI/ML can be misused. Yet this awareness has yet to translate into focus or action. Only 35% of the same leaders said their organizations made efforts to use AI in a way that is fair and accountable, and 78% reported feeling poorly equipped to handle the ethical implications of using new AI systems.
According to Parsons, the best insurance against AI producing negative ethical outcomes is building diverse teams. “To the fullest extent possible, you want people involved who have different views of the system, and different intersection points with the system or with the data,” she says. “You want some people who are very familiar with how it’s supposed to work, and you also want people who are familiar with the consequences. It’s very important because doing the exploration of the systemic issues that can affect the generation of the data is a difficult problem. You need the input of people who aren’t really vested in defending the system as it currently exists.”
The elements of ethical tech
Explainability, particularly around data sources, is the other essential element of a responsible AI approach. Parsons points out that many of the problems that dogged facial recognition could have been avoided with a clear-headed assessment of the data sets used to train these systems, which skewed overwhelmingly white and male.
“You have to do a census of your data,” she says. “Start with running the numbers to see if certain categories are over or underrepresented, and what differences in representation may imply for the kind of problem you’re looking at. Then do a thorough exploration of the system that results in the data, and see if there’s any way to compensate for bias or rectify underrepresentation in your data set. Think through the consequences for the use of that data set, given the biases that have been identified. These are complex conversations, but they still need to happen - and many people are only just starting to have them.”
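The first step of that census can be as simple as comparing each group’s share of the training data with its share of the relevant population. A minimal sketch, using pandas and made-up figures:

```python
# Minimal "data census" sketch: compare each group's share of a training
# set against a reference population share. The data set and reference
# shares are invented for illustration.
import pandas as pd

df = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})

observed = df["group"].value_counts(normalize=True).sort_index()
reference = pd.Series({"A": 0.50, "B": 0.30, "C": 0.20})  # e.g. census data

census = pd.DataFrame({
    "share_in_data": observed,
    "share_in_population": reference,
})
census["gap"] = census["share_in_data"] - census["share_in_population"]
print(census)  # a large negative gap flags an underrepresented group
```

Compensating for the gaps - by reweighting, collecting more data, or constraining the model - is the harder conversation Parsons alludes to; the census only makes the problem visible.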
“The conundrum (with AI/ML) is that in order to build a system that’s completely ethical when it comes to race, gender, age or other factors you need to know that data,” Kartela notes. “You’d think the opposite is true, that providing this information would lead to bias, but in fact it’s the only way you can know you’re treating different groups equally. If you don’t have that data there’s a risk of the model learning race or something else from implicit data, the actions and interactions of people and how they use the service. It gets even worse if you use your own biased past data to make recommendations for the future.”
“What’s most important is being transparent in whatever you do for the end-customer,” Kartela adds. “If you’re transparent in terms of actions and the data points you’re using to create them, that creates the most trust and removes most issues.”
“Ethical questions come up with any application, and AI is no different,” says Pusa. “The best rule of thumb is to never use data you can’t admit to using in public.”
An additional, often overlooked ethical consideration is that AI comes at a relatively high environmental cost. “Training these models is computationally intensive, expensive and burns a lot of energy,” Parsons explains. “There’s a staggering amount of computing resources needed, particularly if you’re dealing with a really large data set.”
By some estimates the carbon footprint of training a single model can equate to up to five times the lifetime emissions of an average automobile - a fact organizations will need to keep in mind as they seek to develop and strengthen their sustainability strategies.
From old limitations to new possibilities
The payoff for working through these issues and cultivating the necessary internal capabilities around AI/ML is only set to grow, with the field continuing to develop. Thoughtworks experts see AI playing a more strategic, and creative, role as advancements in data and computing deliver higher levels of insight and anticipatory intelligence.
“The real game-changer in the future is going to be machine learning combined with other tailored computational methods, thanks to the availability of operations data in digital form, and easy and affordable access to computational power,” says Pusa. “This combination enables so many opportunities to use tailored algorithms that go well beyond the basic ML functions like recommendation engines, to create optimization and predictive models that have dramatic results.”
“Reinforcement learning is really underused, but in two to five years it’s going to be the biggest thing since the AI hype started,” Kartela notes. “The idea is that you don’t take all your past data and depend on that, but create an agent that takes actions and learns from the feedback in real time, completely autonomously. That will basically remove the problem of trying to line up all the data assets in the world before you can make an intelligent prediction, because you’re learning from the present rather than the past, engaging in intelligent, controlled exploration as you go.”
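For the technically curious, the loop Kartela describes - act, observe feedback, update, repeat - is the essence of reinforcement learning. Here is a toy, self-contained sketch of tabular Q-learning on a five-cell corridor; the environment and all parameters are invented for illustration:

```python
# Toy sketch of the loop described above: act, get feedback from the
# world, update, repeat. Tabular Q-learning on a five-cell corridor;
# the environment and parameters are purely illustrative.
import random

N_STATES = 5                       # cells 0..4, reward waits at cell 4
ACTIONS = (-1, +1)                 # step left, step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(200):               # episodes of live interaction
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda k: Q[s][k])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0  # live feedback
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Learned state values rise toward the goal - built entirely from
# feedback, with no historical data set required.
print([round(max(q), 2) for q in Q])
```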
Parsons sees significant promise for enterprise applications as systems improve their understanding of human language. Indeed in its latest AI and machine learning outlook, research firm S&P Global Market Intelligence flagged the rapid progress of large language models and tools such as advanced chatbots and content moderation as set to pave the way for a new kind of scalable enterprise search - “enabling workers to find information in context to any type of query, no matter how complex.”
“There’s so much that can be done once computers really understand language,” she says. “It still has a long way to go, but there have been significant improvements in some of the standard language recognition and translation capabilities. That has a lot of potential for enterprises, particularly when you think about people-to-people interactions. I’ve dealt with online chat systems where a bot could have handled the whole exchange - except for the fact that right now, we humans still want to communicate as humans.”
Certainly, a degree of unease still surrounds human-AI exchanges. One poll of consumers in the US found 86% preferred to interact with a human over an AI-based system when seeking customer support, and that half believe chatbots and virtual assistants actually make it harder to get their issues resolved. Nonetheless experts like Parsons see negative perceptions of AI fading as sensitivity to its ethical dimensions grows, and more clarity emerges around the positive societal roles it can play.
“AI will certainly take some jobs, but then the questions are: what is created, what becomes possible as a result, and what jobs will come from that?” she says. “We might welcome the fact that now we have an autonomous vehicle that can go explore former minefields, so we don’t have to send a person or an animal to take that risk. We might actually improve the quality of life of pathologists who used to go through scans manually by having AI take out all the low-hanging fruit, so their attention can be focused on the things that really need it.”
And despite the astonishing advances of the last decade, none of the Thoughtworks experts see even a remote possibility of AI displacing human talent and decision-making. At least for the time being, there are roles even the most sophisticated systems just can’t play.
“There are systems that can create paintings that laypeople can’t distinguish from the great masters, but what is the probability that an AI that is trained on the existing corpus of art would create a whole new school?” Parsons says. “That spark of creativity, that thing that allows for the creation of something the world hasn’t seen before, on what basis would an AI have that? We don’t yet have that level of capability. And we’re still very far away from having AI systems that can show empathy.”