Perspectives
Introduction: Believe the hype?
Is all the hype around AI justified by concrete business opportunities? Thoughtworks AI experts say the answer is yes – with a few caveats.
An appetite for AI knowledge
“Too many people are focusing on ChatGPT when it’s just the tip of the iceberg,” says Barton Friedland, Principal Advisory Consultant, Thoughtworks. “The capabilities have existed for some time. What’s changed is we can now use the AI itself as the interface, so if there’s a system for something like fraud detection, dynamic pricing or supply chain management, instead of having to point and click your way to a solution, you can just tell the system what you want, and it can understand and produce the outcome you’re expecting. Our interaction with computers will be smoother, because we now have a greater choice of modality.”
“AI has reached a level of maturity where its potential has become more visible and tangible for everyone,” says Ossi Syd, Principal Consultant, Thoughtworks. “We’ll see more and more opportunities as more organizations start to understand what it might mean for them, and how it can be used to solve problems.”
The real benefits of this shift will only accrue to the organizations that take a deliberate approach to AI – and free themselves of some common misconceptions about it.
Take the common myth that AI requires vast amounts of ‘clean’ historical data, when in fact that’s not necessarily the case. “Data still plays a critical role, but there are key shifts,” notes Thoughtworks’ Director of AI and Data Practice, David Colls.
“Creative and strategic applications can't be driven by only looking in the rear view mirror, but must incorporate novel data from the world or your collective imagination,” he says. “The use of AI services and foundation models, on the other hand, distills vast but undifferentiated external stores of data into your organization. These only provide sustainable advantage when leveraged with your expertise and, though external, they still require considered governance. Exactly how AI is connected to data needs to be reexamined.”
Similarly, for all the focus on hiring data scientists, “the idea that AI considerations are best left to a dedicated team of AI and data science experts is another myth that needs to go,” Colls continues. “AI solutions are ultimately designed for people, and a multidisciplinary team that comprises domain and technical expertise as well as a human focus, will enable organizations to get the most value out of them.”
“AI solutions are ultimately designed for people, and a multidisciplinary team that comprises domain and technical expertise as well as a human focus, will enable organizations to get the most value out of them.”
David Colls
Director of AI and Data Practice, Thoughtworks
Syd advises companies not to be overly intimidated by what an AI implementation might involve – or to expect immediate payback. “There’s a belief that AI integration and adaptation is one giant leap that requires significant upfront investments before producing results, when in fact both transformation and the AI journey are more gradual in nature. The same belief was associated with the digital transformation back in the day,” he explains.
Organizations – and their people – can also dismiss concerns that advanced generative AI models will render many roles obsolete. “There was a period where end-to-end automation was seen as the solution, but a number of incidents have shown that’s not a very resilient approach,” says Colls. “Roles will not necessarily be replaced wholesale, but mundane stuff will be made easier to do. People will be placed in a role of setting the direction and then handling deviations from normal parameters, applying creative thinking and problem solving to deal with those situations.”
The best results can be achieved “not necessarily by buying into the hype, but by rebuilding your strategy to take into account this evolution in technology,” Friedland adds. “Technology is only really effective when it amplifies what’s distinctive about your organization, reduces friction and supports growth.”
i. Why an AI strategy is now everyone’s business
With virtually every industry set to be affected by AI, factoring it into the broader vision for any enterprise has become critical.
“Any organization looking at how they’re going to maintain market share and remain competitive over the next decade needs to pay attention, and to act,” says Friedland. “The risk of not doing that is that current modes of working lose effectiveness, or that products and the market itself change in a way that doesn’t allow the business to keep up.”
“The urgency stems from the fact that in the next few years, all software, including mundane use cases like warehouse management and price planning, will have AI embedded so it’s more autonomous, while augmenting human activity,” Syd explains. “In this kind of world, it no longer makes sense to invest in non-AI software as it’s almost as expensive to produce, without the ROI and efficiency of AI-embedded software.”
“The urgency stems from the fact that in the next few years, all software, including mundane use cases like warehouse management and price planning, will have AI embedded so it’s more autonomous, while augmenting human activity. In this kind of world, it no longer makes sense to invest in non-AI software as it’s almost as expensive to produce, without the ROI and efficiency of AI-embedded software.”
Ossi Syd
Principal Consultant, Thoughtworks
So what does an effective AI strategy entail? First, it’s based on understanding AI less as a specific tool than as an approach that can be embedded into any number of applications, processes or potential solutions. “It is a collection of multiple paradigms that are suitable for different contexts and solving different types of problems,” says Syd. “Essentially, AI capabilities should be thought of as one of the core building blocks or assets required to produce business value.”
Increasingly it’s the ability to integrate AI into the business that will create an organization’s edge. “AI techniques, especially those available as a service with the swipe of a credit card, are those that your competitors have access to as well,” Colls notes. “The custom solutions that will be built in-house will also, to a large extent, be based on open source. The ability to build or consume solutions isn't necessarily going to be your differentiator – but the ability to integrate them into your processes and products in the best way is.”
Second, the strategy is deeply aligned with the business and not entirely technology-led.
“There have been many instances of organizations beginning with proofs of concept that can’t be implemented ‘outside of the lab’ because of the inability to scale,” Syd explains. “Organizations should, from day one, consider how a successful proof of concept can be scaled up and embedded into the business, so it doesn't just stay an interest or internal experiment forever.”
Pulling these forces together requires strong executive engagement, which in turn may involve a degree of education.
“Very often board members are pressured from both ends, being asked to mobilize new technology while ensuring the company’s not at risk of something horrible happening as a result, even when they don’t know how the technology works,” notes Friedland. “A briefing can give them a broad understanding of the kinds of concerns they need to address and what they need to do to ensure they remain compliant.”
More organizations are beginning to introduce AI-specific leadership roles to drive buy-in and vision for AI at the highest possible levels. While this can provide an impetus for change, “organizations need to be wary of relegating AI to a separate strategy that someone in that function is responsible for,” says Syd. “All verticals in the company must be up to date on, and involved in, the organization’s AI developments.”
When all parts of the enterprise are involved, it becomes easier to identify, and prioritize, specific processes or offerings where the re-distribution of work between humans and machines can be gradually shifted to serve meaningful business outcomes.
“Organizations can set themselves up for success by identifying the opportunity first, then building an overall picture of that opportunity, which shows areas of focus as well as commonalities between different parts of the business where you might be able to leverage the same solution or approach multiple times,” explains Colls. “Ideally they're working with business stakeholders and a multidisciplinary technology team to drive the application forward.”
Syd warns executives against getting overwhelmed by all the theoretical applications of AI, or expecting it to create new opportunities from scratch for everyone. For many, it will be about enhancing existing operations, enabling the business to do what it already does better.
“My advice would be not to let AI confuse you,” he says. “You’ve run your business before and made many prioritization decisions that are still valid. Your services are in many cases the same services as before; they’re just more autonomous, more efficient or more relevant because of AI.”
ii. Use cases that deliver outcomes
Some of the most compelling AI augmentation opportunities will emerge around businesses’ data resources. Amid the shift to the cloud, many companies continue to struggle with data that resides in disparate systems and code that’s no longer fit for purpose, complicating operations and decision-making. With AI able to take on some of the burden of bringing data together, identifying software issues and even updating programs, it can help companies draw new connections while contributing to the resilience of systems.
Friedland points to the example of Marimekko, a Finnish design house that turned to Thoughtworks to elevate the engine that recommends its products to customers. This was already a success, having increased revenue by an average of 24% per user, but with AI it was possible to extend the engine’s capabilities even further.
“There are hits and there are misses in the recommendation process; AI can be used to process that secondary data to highlight gaps in the product line so it’s not only being used to increase sales, but also to develop products,” Friedland says. “That’s a really important way to think about it. When organizations are making a significant investment (in AI), why not think through the value chain and look at how the lifecycle of that data can be continued, to create more value from it?”
It’s also become more feasible to use AI-based approaches to write programs to reduce the amount of labor required, and to rapidly and consistently deliver information needed to sustain complex operations.
“All that’s needed is to retrain the model or include new data when situations change,” Friedland says. “Programmers will still be needed to fit graphic user interfaces around more advanced approaches to building programs, and to bring the data in. But one of the key problems AI solves is that proven programs can be changed at very low cost. These two dynamics – better connected systems, with the ability to change them more quickly – should put organizations in a place where they can keep up with the pace of change much more effectively.”
"One of the key problems AI solves is that proven programs can be changed at very low cost. These two dynamics – better connected systems, with the ability to change them more quickly – should put organizations in a place where they can keep up with the pace of change much more effectively.”
Barton Friedland
Principal Advisory Consultant, Thoughtworks
“Generally, everything that currently involves software will be impacted by AI,” agrees Syd. “Whatever business problem organizations needed to solve back in the day, they had software tools to do so, and used them to improve human efficiency. AI allows businesses to automate further and supercharge human efficiency, more than any prior software could ever have done.”
Another case in point is an aviation client, where Thoughtworks, in collaboration with facilities management and parking experts, developed a model to optimize the allocation of spaces to arriving planes that factors in dozens of complex variables to create, and fine-tune, plans at a rate that would be impossible manually. Since its introduction, the model has slashed flight-related delays at the airport by over 60%.
With a long track record of putting generative AI solutions into production, Thoughtworks has witnessed their potential to go beyond predictive or efficiency-improving measures.
“Getting help generating ideas, refining designs for products and services, or understanding strategic options, is equally or even more valuable,” Colls notes. “There's a lot of opportunity to look beyond the pure productivity mindset and examine how AI tools can help creative discovery as well.”
This is evident in how AI is already being used to establish new patterns or incorporate more domain expertise into development. At one of our clients, one of the world’s leading snack food producers, AI is supporting some elements of recipe creation, which was historically complicated by challenges with the collection of data, skills shortages and inconsistent tastes. By partnering product specialists with AI, the organization is able to generate recipes of higher quality exponentially faster. The AI does not work in isolation, but augments skilled teams who provide guidance and feedback that further improves outcomes.
The organization’s system has reduced the number of steps needed in the development process from 150 on average for a new product to just 15. “That's really significant because if organizations can create high quality products that quickly, it might change the way the products are marketed and sold – more limited editions, more delighting customers with new experiences that keep them interested and connected to the brand,” says Friedland. “It’s an entire shift in business model.”
The latest iterations of AI can even have far-reaching strategic implications, by making it possible to include a much greater range of viewpoints and considerations, including those that may typically be marginalized, in decision-making.
“If an executive board is making a decision, they've probably got a number of people who represent very different perspectives or interests, and pretty much see the world from their own lenses,” Friedland explains. “But to make better decisions, we really need to understand different concerns and how they interlink. That's not something that human minds do very well, but if you create a mirror with a computer by modeling the assumptions that people are making about what groups like workers want, what the needs of finance are, what the goals and aspirations of marketing are, along with considerations like sustainability, entirely new connections and ideas can come through.”
iii. Guardrails and good practices
Amid the excitement it must also be acknowledged that advances in AI will present businesses with new challenges.
While pursuing productivity gains with AI, “organizations also need to be conscious of the quality, which is something that can be difficult to assess,” cautions Colls. “Depending on the level of fault tolerance that organizations have, the case for adoption might change.”
As solutions become more sophisticated, and are embedded more frequently and more deeply into software, products and day-to-day operations, “their potential to allow people to make mistakes more easily, or achieve goals with ill intent” expands too, notes Syd.
Research already points to a surge in AI incidents and controversies, whether ‘deepfake’ videos of political and business leaders, or biases feeding into monitoring and data analysis. The reputational and regulatory consequences make it essential for enterprises to take proactive steps to ensure their AI experimentation remains ethically sound, and legally compliant.
Steady rise in AI incidents and controversies reported over the last decade
However the potential for mishaps shouldn’t provoke paralysis. The basic tenets of responsible technology, ensuring solutions are based on values like equity, accessibility and sustainability – values that many businesses already hold – form a meaningful first line of defense. Responsible AI, in other words, becomes a natural extension of the responsible tech approach.
“The problems AI is used to solve for most businesses are naturally completely sandboxed from fundamental human rights or privacy questions,” says Syd. “In any case, a strong company culture and ethical norms are the most important guardrails. The core principles of ethical and responsible business are still as necessary, and valid, in the age of AI.”
Diversity in organizations and delivery teams is another primary consideration, notes Colls. Incorporating a plurality of views into development, and assessing the potential consequences of solutions through a wider variety of lenses, makes it more likely that models will avoid bias and other issues.
“Ethics is one of those spaces where you’re never done, in the same way that businesses continually pursue better growth and profitability,” Friedland notes. “You can always look at a value chain and improve the ethical outcomes.”
Responsible AI also acknowledges that there are no shortcuts.
“Some AI solutions have not been built with robust engineering practices that allow them to be confidently deployed into production or easily evolved,” Colls notes. “Challenges to business or domain teams getting access to a safe, well-governed experimentation environment can also be a barrier, so the opportunity cost of being unable to pursue a bunch of good ideas becomes another failure.”
“Organizations have to be just as rigorous with the testing of AI models as they would for any other application – which isn’t always the case as companies rush to get things to market,” Friedland agrees. “A test-driven development approach can include edge use cases where organizations want to make sure boundaries are set early on to reduce risks.”
The costs of rushing generative AI have already been painfully evident to firms like Google, which saw well over US$100 billion shaved off its market value after it hastily launched a rival to Microsoft-backed ChatGPT that suggested flawed responses to some queries. The company pledged to address this through more rigorous testing.
To avoid these kinds of issues, AI solutions should be supported by the same continuous delivery principles that underpin good product development, with progress made through incremental changes that can easily be reversed if they don’t have the desired impact.
This also ensures models evolve, rather than emerge massively complex – and therefore more prone to problems or failures – straight out of the gate. “It can help organizations iteratively and gradually build up a model’s complexity, and develop a good idea of how behavior varies from the simpler cases that they started with,” says Colls.
This includes continuing to evaluate and improve solutions post-launch – another process on which AI can be brought to bear – and even retiring them if necessary.
Especially where AI performance is variable or uncertain, “businesses should design experiences for failure, so wherever possible, they can gracefully degrade, and some level of experience can be provided without AI,” Colls says. “They then always have the option to change the experience at short notice if anything unexpected comes up.”
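The graceful-degradation pattern Colls describes can be sketched as a simple fallback wrapper: try the AI path, and if it fails, serve a non-AI experience. This is a minimal illustration under assumptions of our own – the model interface, the product fields and the best-seller heuristic are all hypothetical, not from the source.

```python
def recommend(products, user, ai_model=None):
    """Return a ranked product list, degrading gracefully if the AI fails.

    If the AI model is unavailable or raises, fall back to a simple
    non-AI heuristic (best-sellers) so some experience is still provided.
    """
    if ai_model is not None:
        try:
            return ai_model.rank(products, user)
        except Exception:
            pass  # log the incident, then fall through to the non-AI path
    # Fallback: rank by historical sales, no personalization
    return sorted(products, key=lambda p: p["units_sold"], reverse=True)


class FlakyModel:
    """Stand-in for an inference service that is currently down."""
    def rank(self, products, user):
        raise TimeoutError("inference service unavailable")


catalog = [{"name": "A", "units_sold": 10}, {"name": "B", "units_sold": 42}]
print(recommend(catalog, user=None, ai_model=FlakyModel())[0]["name"])  # → B
```

Because the fallback is always available, the AI path can also be switched off at short notice – the "change the experience" option Colls mentions – without taking the feature down entirely.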
Being judicious about how data is sourced and applied also enables organizations to minimize risks at the outset. Sometimes, this means simply learning when to say ‘enough.’
“Data security and privacy are foundational concerns for AI initiatives, so whatever you can do to minimize your dependence on large amounts of data, especially sensitive customer data, pays off many times over.”
David Colls
Director of AI and Data Practice, Thoughtworks
“Data security and privacy are foundational concerns for AI initiatives, so whatever you can do to minimize your dependence on large amounts of data, especially sensitive customer data, pays off many times over,” Colls explains. “While data-centric approaches are effective, when data comes with cost or risk, organizations should look at it quite critically and say: ‘Do we actually need this specific data to deliver that experience?’”
“The more you can do to clearly make [the application] a good experience from a privacy perspective, the better it's going to serve customers,” he adds.
A key concept to keep in mind throughout the lifecycle of an AI solution is explainability, or interpretability – in broad terms, being able to identify how the model arrives at its output, and the factors that influenced the process. “Simple models tend to be more explainable than more complex models, which tend to be higher-performing,” says Colls.
In trying to develop a model that strikes the delicate balance between performance, transparency and reliability, one tactic is weighting different measures of performance to assign a value to explainability, or a trade-off factor between explainability and the model’s accuracy. The calibration of these metrics depends on the use case and context.
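One way to make that trade-off explicit is to score candidate models on a weighted combination of accuracy and an explainability rating. The sketch below is illustrative only – the weights, ratings and model names are assumptions, and in practice the calibration depends on the use case and context, as noted above.

```python
def model_score(accuracy, explainability, w_explain=0.3):
    """Combine accuracy and explainability (both in [0, 1]) into one score.

    w_explain is the trade-off factor: higher values penalize opaque
    models more heavily. Its value must be calibrated per use case.
    """
    return (1 - w_explain) * accuracy + w_explain * explainability


# Illustrative candidates: a simple, transparent model vs. a complex one
candidates = {
    "logistic_regression": model_score(accuracy=0.86, explainability=0.9),
    "deep_ensemble": model_score(accuracy=0.91, explainability=0.3),
}
best = max(candidates, key=candidates.get)  # here, the transparent model wins
```

With these illustrative numbers the simpler model comes out ahead despite its lower raw accuracy; raising or lowering `w_explain` shifts the balance, which is exactly the knob a regulated or safety-critical context would turn.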
“It's important to understand the cost of failure,” Syd notes. “In some cases people are willing to accept it if they understand that they are working with AI. In some contexts, failure is not an option at all. People may die because of it.”
“If you, for instance, are trying to detect cancer, you want to make sure that you don't miss any actual instances of cancer,” Colls says. “In the financial domain, money laundering or failure to follow sanctions might come with much heavier penalties than fraud, which is typically just a business expense. So you might set the thresholds accordingly.”
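Setting thresholds according to the cost of failure, as in the cancer-screening and sanctions examples, can be sketched as choosing the decision threshold that minimizes expected cost when a miss (false negative) is far more expensive than a false alarm. The scores, labels and cost figures below are invented for illustration, not real clinical or financial values.

```python
def expected_cost(threshold, cases, cost_fn=50.0, cost_fp=1.0):
    """Total cost at a threshold, where missing a true positive (false
    negative) costs far more than raising a false alarm (false positive)."""
    cost = 0.0
    for score, is_positive in cases:
        predicted_positive = score >= threshold
        if is_positive and not predicted_positive:
            cost += cost_fn
        elif predicted_positive and not is_positive:
            cost += cost_fp
    return cost


# (model_score, ground_truth) pairs — illustrative only
cases = [(0.9, True), (0.4, True), (0.6, False), (0.2, False), (0.3, True)]
thresholds = [0.1, 0.3, 0.5, 0.7]
best = min(thresholds, key=lambda t: expected_cost(t, cases))  # a low threshold
```

With the asymmetric costs above, the optimal threshold is low: the system prefers flagging a few negatives for review over missing a single positive – precisely the "set the thresholds accordingly" logic Colls describes.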
Generally, organizations have more flexibility on explainability in internal use cases. As Colls points out: “If the AI model is used to help prioritize tasks, you might not necessarily need to provide an explanation to employees about why a task is ranked number one and not number two. An explanation on the factors that the algorithm considers might suffice.”
In situations where techniques could challenge requirements for explainability, Colls recommends “front loading the understanding of consumer expectations of explainability, and what's required from a regulatory perspective.” Organizations can also explore tools such as approximate models to explain why decisions are happening within a certain area of a complex system, even if it’s hard to provide a universal explanation.
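The approximate-model idea can be illustrated by building a local explanation around one decision: perturb each input feature slightly and observe how the black-box output moves. This is a minimal finite-difference sketch in the spirit of local-surrogate techniques (such as LIME); the credit model and its features are hypothetical.

```python
def local_explanation(predict, point, eps=1e-4):
    """Approximate each feature's local influence on a black-box model
    by finite differences around a single input point."""
    base = predict(point)
    influences = {}
    for name, value in point.items():
        perturbed = dict(point, **{name: value + eps})
        influences[name] = (predict(perturbed) - base) / eps
    return influences


# Hypothetical opaque credit model: income helps, debt hurts
def black_box(x):
    return 0.5 * x["income"] - 2.0 * x["debt"]


expl = local_explanation(black_box, {"income": 3.0, "debt": 1.0})
# income has a positive local influence, debt a negative one
```

The result explains why this one decision came out the way it did in this area of the input space, without claiming to explain the whole system – which is all that is often needed to satisfy a consumer or a regulator.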
“Some of these issues can be addressed by what we call ‘shifting left’; where ethical and security issues are addressed early in the development process, not as an afterthought,” Friedland says.
iv. Guiding the business forward with AI and a skilled human touch
Essentially, rather than attempting to plug every potential ethical or security gap, organizations should aim to strike a balance between encouraging people to adopt AI, ensuring they remain sensitive to the problems it may present, and allowing room to innovate.
“Attempting to solve this challenge by making it 100% ‘waterproof’ through technological means is likely to lead to costly misinvestments,” says Syd. “It's more about getting people to behave ethically.”
Some guidelines may be necessary to navigate the intellectual property dilemmas presented by models drawing on external sources, like ChatGPT. “There needs to be careful governance parameters around that, as you don't want to become dependent, or be exposed to risk,” Colls says. “But you want to set those parameters while enabling teams to do it, which is a more sustainable way to keep up to speed.”
Far from stifling creativity, constraints can even, at times, enhance it – just as regulations on vehicles have driven innovation around fuel efficiency, Colls notes.
Since most of the challenges around AI have as much to do with people as technology, it follows that user awareness and support are major determinants of progress. To mitigate risks – and frustration – solutions have to be considered in light of the capabilities of, and their impact on, those who will work with them.
“Expert users can correct faults, whereas we should be cautious of exposing novice users to a high incidence of faults – because they might not even recognize them as such,” Colls points out.
Part of this can involve shedding assumptions about where AI should be applied, and giving teams a say in where it can augment their work – which, after all, they know better than anyone.
“Many companies suffer from a lack of engagement around their AI strategy,” says Friedland. “The strategy magically appears, and no one quite knows what to do. It works much better if organizations can actually cultivate or crowdsource, and engage people in the process of change. People might have really good ideas for improving workflow efficiency if organizations solicit their contributions and actually support their involvement – and reward them for it.”
People, and the enterprises that they work for, also need to be reassured that whatever they’re doing with AI creates value. Over time the success and relevance of an AI solution will boil down to measuring performance against the right goals.
“Whatever you do, make sure that AI takes your business forward in some fashion – it may be something other than money – and be ready to prove it with numbers. The business goals are the foundation that form the ultimate sanity test for anything you do in a company.”
Ossi Syd
Principal Consultant, Thoughtworks
“Whatever you do, make sure that AI takes your business forward in some fashion – it may be something other than money – and be ready to prove it with numbers," says Syd. “The business goals are the foundation that form the ultimate sanity test for anything you do in a company.”
“When great delivery teams are not provided clear metrics to determine how their work aligns with the business strategy, it doesn't matter what they build; it’s not necessarily going to move them in the right direction,” agrees Friedland. “As organizations build something, they need to be able to measure whether it's resonating with clients and hitting business targets, and to gather real-time feedback on what’s missing.”
At the same time, a separate set of AI metrics can actually introduce bias depending on how they’re designed. “Whatever metrics organizations use to assess the success of their business will work the same magic with AI as well,” says Syd.
“It's the same as with digitalization,” he adds. “It might harm your business if you get misled by semi-artificial metrics to demonstrate how 'digital' or 'AI' you are now. The key questions should be: Are you doing better business? And do you have a realistic vision of how to scale the business when you experience success?”
Depending on the organization’s context, productivity and customer experience are often important considerations from the outset. But “internal capability measures – such as awareness of the potential of AI for teams, comfort exploring or adopting or deploying AI solutions – are leading indicators as well, to give organizations confidence that they're on track towards those lagging indicators of customer experience, productivity and revenue improvement,” says Colls.
For organizations agonizing over whether to take the leap or how to justify the necessary budgets, Thoughtworks experts urge them to see AI as an incremental investment in ongoing business change, instead of a standalone, tech-driven exercise – and as something that will evolve with the company.
“Treat AI like you’ve treated computers and software thus far,” says Syd. “It’s not a question of whether you use computers or software in your business, but how. While AI has the power to effect bigger change, the questions to ask are still similar – not whether to use it or not, but how.”
“Remember that right now we are looking at a slice in time,” says Colls. “Don't let current solutions dictate your thinking; they’ve not appeared overnight. AI is already pervasive. This period is a steeper step, catapulting AI forward in the user experience, but it is still one step on a long climb to ubiquitous AI. So any response should consider sustainably following the future trajectory as well as capitalizing on today's capabilities.”
Efforts to practice responsible AI and tighter regulations will drive “better ways to handle failures so that we can harvest the times it gets it right,” he explains. “Developments in the trustworthiness of complex models so people can gain confidence that they will perform as expected will enable a whole new realm of opportunities that organizations won’t pursue at the moment, because they don’t have that assurance of safety or predictability.”
The full business impact of AI will be “not so much explicitly dependent on AI itself, but rather how initiatives get leveraged and amplified by what AI can do,” Friedland notes. And it’s the potential to further extend connections between data, systems, the human and the technical that he sees poised to revolutionize business in the years ahead.
“If you effectively interconnect the strategy with the AI, the data, and the engineering that supports it, you're going to enable people to get value out of data and their daily experiences so they can come up with much better ideas than any individual on their own – and end up with much higher-performing systems,” he says.
“If you effectively interconnect the strategy with the AI, the data, and the engineering that supports it, you're going to enable people to get value out of data and their daily experiences so they can come up with much better ideas than any individual on their own – and end up with much higher-performing systems.”
Barton Friedland
Principal Advisory Consultant, Thoughtworks