A rising, and collective, responsibility
In many respects, it’s a sign of progress that ethical considerations are no longer an occasional concern, but core to the way we do business. The COVID-19 pandemic, rising demands for social justice, and the widening digital divide and consequent denial of opportunity for some segments of the population have all put ethics on the agenda of enterprises globally in a way that’s virtually unprecedented. Even better, firms are taking a multitude of positive steps in response, from pledging support to social causes to announcing steps to foster diversity within their workforces.
Most business leaders would acknowledge that technology has ethical implications, but despite technology becoming more and more central to what enterprises do, it’s not always clear how to approach and apply technology in an ethical way. “Technologists have, for a long time, been operating with a utopian mindset,” says Rebecca Parsons, Chief Technology Officer at Thoughtworks. “The assumption is technology can solve the world’s problems, and there’s no bad technology, it’s just sometimes put to bad uses.”
The truth is that, to produce positive ethical outcomes and minimize risks, technology has to be managed and monitored as actively as any other aspect of the business - perhaps even more so. This issue of Perspectives will explore the specific strategies and frameworks that can put technology-embracing enterprises on sounder ethical footing.
Why it matters
With many businesses still in survival mode, it’s easy to conclude that ethical technology doesn’t need to be a priority, or can be lumped in with other ‘soft’ aspects of corporate social responsibility. But there are multiple reasons why it’s become business-critical and could have a massive impact on an enterprise’s ability to build value in the future.
First, what’s meant by ‘technology’ in the business context has changed radically. A few decades ago, when it was largely limited to accounts or payroll systems, “it either worked right or it didn’t,” says Parsons. “There was very little that could go wrong from an ethical perspective. If the software was functioning properly and there was no fraud involved, there really weren’t any ethical implications. It was easier in many ways to know whether something was working as intended.”
Contrast that with today, when technology is embedded in sensitive areas like healthcare, criminal justice and access to financial services. “These are all areas where the ethical impact of getting something wrong is far greater,” Parsons says. “In some cases, it’s even difficult to define what constitutes the right answer.”
Second, consumer awareness of, and sensitivity to, ethical issues is arguably at an all-time peak - and many are willing to vote with their wallets on a company’s ethical performance. One recent survey of consumers in the US, for example, found a significant majority (68%) see sustainability as important when making a purchase, and that 49% are willing to pay more for sustainable products.
Technology-driven ethical lapses, like Apple’s credit scoring system apparently making sexist decisions, or unintended racial bias in an algorithm used by healthcare provider UnitedHealth, can quickly spiral into full-blown scandals, endangering relationships with customers and regulators and pressuring the bottom line. Recent disclosures to investors by Microsoft and Google have warned of the potential havoc ‘bad’ AI could wreak on their respective brands.
“There’s real evidence as to why companies should do the right thing over and above the risk factor,” says Laura Paterson, Principal Consultant at Thoughtworks. “There are tangible benefits to having an ethical technology approach and being purpose-led.”
A major consideration is that how a company uses technology is likely to have a direct effect on its ability to attract and retain future talent. Research shows millennial and Generation Z talent aspire to work for ethical enterprises and are highly concerned about the consequences of the adoption of technologies like AI in their organizations. A study of UK tech workers by Doteveryone, a responsible technology think tank, found that 28% had seen decisions made about technology that they believed could have negative ethical consequences - and that 18% of those ended up leaving their organizations as a result.
[Chart: Proportion of tech workers who’ve experienced decisions that could lead to negative consequences for people and society]
For companies failing to address technology’s ethical ramifications, “if you look at the social movements, the zeitgeist of the moment, there’s not only a huge reputational risk externally, there’s a huge risk internally with employees,” notes Chad Wathington, Chief Strategy Officer at Thoughtworks. “In tech companies, we’re seeing a wave of activism in the workforce. Workers are aware of the power of corporate interests in modern democracies, and are prepared to organize to influence corporations to act politically, and ethically, on their behalf.”
Common blind spots
Any application of technology can have ethical effects, but there are two key areas where these implications are especially likely to be significant and direct, and, therefore, merit close attention - AI and the use of customer data. Both are seeing massive adoption by businesses, and are playing a greater role in decisions and strategies that were once the exclusive domain of humans.
Awareness of AI bias - systems making questionable decisions due to bad data or assumptions introduced, consciously or unconsciously, by their designers - is growing as the technology takes over more and more business functions. One recent poll of IT decision-makers found an extremely high proportion - 94% in the US and 86% in the UK - were planning to boost investment in AI bias prevention measures over the next year.
However, many organizations’ efforts to address the issue are either nascent or misdirected - and complicated by the fact that AI bias is, in many respects, an invisible enemy, prone to finding its way into systems constructed with the best intentions. Developers of UnitedHealth’s algorithm, for example, attempted to eliminate bias by not including race data in their models but effectively re-introduced it through a ‘back door’ by segmenting patients on the basis of their healthcare costs, which varied according to ethnic group.
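To make that ‘back door’ concrete, here is a minimal, hypothetical Python sketch of how a proxy variable can reintroduce bias even when the protected attribute is excluded. The groups, numbers and cost model are invented for illustration and are not drawn from the actual UnitedHealth system:

```python
import random

random.seed(1)

# Hypothetical illustration: the protected attribute is excluded from the
# features, but past healthcare spending - which runs lower for an
# underserved group at the same level of medical need - is included as a
# proxy for need.
patients = []
for _ in range(1000):
    group = random.choice(["served", "underserved"])
    need = random.uniform(0, 1)                   # true illness burden
    access = 1.0 if group == "served" else 0.6    # unequal access to care
    cost = need * access                          # spending reflects access, not just need
    patients.append((group, need, cost))

# A 'group-blind' model that ranks patients for extra care by past cost
# systematically under-selects the underserved group at equal need.
patients.sort(key=lambda p: p[2], reverse=True)
flagged = patients[:200]                          # top 20% flagged for extra care
share = sum(1 for g, _, _ in flagged if g == "underserved") / len(flagged)
print(f"Underserved share of flagged patients: {share:.0%}")  # well below 50%
```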
It is hard for companies to stay vigilant, but one way of raising awareness of ethical risks is to look to artists who critique technology, such as British artist Karen Palmer. Her work on systemic bias and AI, incubated and developed by Thoughtworks Arts, has been exhibited at the Cooper Hewitt Smithsonian Design Museum and featured in Wired magazine, and will be highlighted in an upcoming augmented reality app showcasing pioneering artists grappling with the impacts of new technologies.
As another example of what can go wrong, Parsons cites the case of a research hospital that employed AI to decide whether patients should be admitted to the ICU after a particular procedure, and only realized later that standard protocols had left them with an incomplete data set that didn’t take asthmatics into account. Seemingly minor omissions or distortions like these can be highly dangerous, because they are amplified as the system does its work.
“When you talk about reinforcement learning, the whole point is to detect the patterns that existed in the data in the past,” she explains. “And if that data is coming from a system that’s biased in any way, that bias is not only going to be manifested in the patterns that emerge, but those patterns are going to be reinforced.”
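The reinforcement effect Parsons describes can be shown with a toy simulation - a deliberately simplified sketch, not a model of any real system, with two invented groups, a hard approval threshold, and each generation’s decisions fed back as training data:

```python
import random

random.seed(0)

# Toy feedback loop: historical decisions approved group A at ~70% and
# group B at ~50%. The 'model' memorizes each group's approval rate and
# approves a group only if its learned rate clears a fixed threshold.
history = [("A", random.random() < 0.7) for _ in range(500)]
history += [("B", random.random() < 0.5) for _ in range(500)]

def learned_rates(data):
    """Per-group approval rate in the training data."""
    rates = {}
    for g in ("A", "B"):
        outcomes = [ok for grp, ok in data if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

for generation in range(4):
    rates = learned_rates(history)
    print(f"gen {generation}: A={rates['A']:.2f}  B={rates['B']:.2f}")
    # Each generation's decisions become the next generation's training
    # data, so the initial gap widens instead of washing out.
    history += [(g, rates[g] >= 0.6) for g in ("A", "B") for _ in range(500)]
```

Run it and the gap between the groups grows with every retraining: the pattern in the historical data is not just repeated, it is reinforced.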
As data has become the new lifeblood of business, changing consumer expectations and regulations like the EU’s General Data Protection Regulation (GDPR) have made companies more conscious of how they gather, use and retain information - a good thing, as many customers are willing to take action against firms that don’t appear to take data policies seriously.
[Chart: The ‘Privacy Actives’ segment]
However, “we still have a long way to go,” says Wathington. “The ways we capture vast sums of data about people online and across all their devices to build profiles haven’t really slowed down. As much as some players in the space have tried to make tracking harder, it’s an arms race, and the means of monitoring are getting more and more sophisticated.”
The reality is there’s often a business imperative to deploy technology in an unethical way. Incorporating behavioral design or subliminal messaging to ‘hook’ a customer on a game or app can, for example, be the ‘right’ thing to do in the pursuit of profit or shareholder value. “By defining revenue as your measurement of success, that is what you’ll focus on,” says Paterson. “And as long as that’s seen as the indicator of success by society - and investors - it will be difficult for organizations to escape from that mindset.”
That said, perceptions are shifting and progressive organizations are increasingly looking to measure value in other ways. “There’s a myth that corporations only need to care about shareholder value, which was a theory advanced by Milton Friedman and other Chicago School economists,” notes Wathington. “Yet if you look at the laws around incorporation, most allow for balancing concerns and different constituencies - shareholders, employees, the local communities in which companies operate, customers and competitors. It’s also in your interests to be thinking about all those other touchpoints.”
Ultimately, Parsons says, every business has to grapple individually with these questions - “Is it better to make more profit or be fair? And to what extent? How much profit are you willing to give up to increase your level of fairness? What’s the balance point? There isn’t necessarily a right answer that applies everywhere, so it’s important for every organization to have that discussion - to define what their stance is, what they will or won’t do, and where to draw the line.”
The elements of ethical tech
The complexity and breadth of considerations most enterprises face in applying technology mean ethical technology has to be a consistent organizational focus, rather than a one-off initiative or list of principles posted on a wall. By considering the elements of ethical technology, organizations can develop a comprehensive approach across multiple fronts, from product development to the way leaders interact with their teams.
Diversity (Make sure there’s a range of viewpoints in the room):
The ethical issues or implications of the products a company builds can only be fully thought through when they’re examined from different viewpoints - and that requires the participation of diverse teams.
“It’s very difficult for people to actually think and view a problem from another person’s perspective,” says Parsons. “We try our best - in fact it’s one of the principles in our own social change manifesto, trying to view the world through the eyes of the oppressed - but we don’t always succeed. It’s much easier if you’ve got somebody in the room who can represent that perspective because it’s their lived experience.”
Ideally, diversity should extend beyond lines of gender, background, ethnicity or sexual orientation to functional areas. “You certainly need people from design to be represented because they’re the stewards of how the customer interacts with the technology,” notes Wathington. “But also someone from finance, because they might need to examine or balance the profit motive. And legal, compliance and security, so those processes aren’t a gate-check at the end, but built in from the start with the right concerns in mind.”
Diversity and inclusivity also need to be reassessed as products develop, since seemingly welcome advancements can have negative consequences for access and affordability. “We started with paper forms and moved to online, then mobile, and now we’re looking at ways of interacting with technology that go beyond that,” Paterson says. “The challenge is that as you move along the continuum you’re potentially losing your ability to reach all users. If you develop a product feature for Alexa, what does that mean for people who can’t hear or speak? Diversity has to include the intersectionality of people who have the technology and people who can or can’t use it.”
Inquiry (Ask the tough questions, systematically):
Connected to diversity, as Parsons points out, “unless the right group of people are asking the right questions, you won’t get the answers that accurately reflect the ethical implications of what you’re building - especially for groups that aren’t represented.”
To ensure the ‘right questions’ are raised, it can help to employ formal tools and frameworks that guide teams through structured processes of inquiry - and a number of methods have been designed and fine-tuned for precisely that purpose. (See 'The ethical tech toolkit' below).
According to Wathington, these exercises shouldn’t be viewed as a chore but as “part of a holistic approach to design and customer experience” - a welcome opportunity to flag trouble spots that could come back to haunt the enterprise later, and to introduce appropriate checks and balances.
The ethical tech toolkit

1. What is it?
- Tools to shape the strategy and values of a company and its products
- Includes a checklist of risk zones/future scenarios and instructions on applying these in a workshop context
Why/when/how to use it?
- To prepare for a project (in any phase) by highlighting concerns and illustrating negative future scenarios
- To better understand the risks of existing products/solutions

2. What is it?
- A full set of workshop materials with a guide and cue cards
- The foundation for a structured session to explore the intended/unintended consequences of a product or feature
Why/when/how to use it?
- At the product vision/ideation/roadmap development stage
- Can also be employed as a retrospective, or each time a new feature is introduced

3. What is it?
- A deck of cards with provocative questions designed to help creators envision unexpected ethical outcomes
- Can be used to drive brainstorming sessions
Why/when/how to use it?
- In the early stages of product ideation, to expand thinking/dialogue on impacts
- In the design process, to flag possible negative consequences

4. What is it?
- Visualization tools for the exploration of AI/ML data
- Helps highlight distortions in training and validation data sets (a minimal sketch of this kind of check follows the toolkit)
Why/when/how to use it?
- When creating data sets to train AI/ML algorithms
- Can also be used to visualize other data sets

5. What is it?
- A risk-based approach to designing secure software
- Brings teams together to brainstorm threats before they materialize
Why/when/how to use it?
- Should be conducted for every product iteration
- Participants should include business analysts, product managers and the security team, to raise awareness and capture a range of risk perspectives
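As a hypothetical illustration of the distortions such data visualization tools (entry 4 above) are meant to surface, this short pandas sketch checks two things on an invented training set - group representation and label skew. The column names and figures are made up for the example; real tools offer far richer, interactive views:

```python
import pandas as pd

# Invented training data: an age band and a binary outcome label.
train = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"] * 100,
    "approved": [1, 1, 1, 0, 0, 0] * 100,
})

# Representation: is any group under-sampled relative to the population
# the model will serve?
print(train["age_band"].value_counts(normalize=True))

# Label skew: does the outcome rate differ sharply between groups? A model
# trained on this data would learn to approve the young and deny the old.
print(train.groupby("age_band")["approved"].mean())
```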
Constituencies (Make sure multiple stakeholders are considered):
Ultimately product builders won’t be able to answer all ethical questions or consider all impacts themselves. For any innovation with potentially game-changing consequences for society or the environment, there should be an effort to secure a broader consensus on what’s being created. “When you start asking questions like what self-driving cars should do when faced with the dilemma of prioritizing the life of a driver or a pedestrian, it’s no longer something a programmer or some group of analysts sitting around a room should decide,” says Parsons. “These are questions that society as a whole needs to start tackling, and deciding what the right ethical response should be.”
When rolling out a technology, companies should consider limitations or knock-on effects that may only apply to specific groups, such as children, the disadvantaged or the elderly. “It’s about understanding all your possible users, how they experience your customer journey and use technology to interact with you,” says Wathington. “There are still huge gaps there. In mobile, a lot of developers still go for the iPhone first because it’s the more expensive mobile platform. Some companies don’t care about whether the product is more expensive to consume, or a worse experience, for those who are poor. Companies have to ask themselves whether they’re just marketing to the affluent - and whether that’s okay.”
Pointing to how organizations can sometimes neglect this process, Paterson gives the example of the collaborative tools - Zoom, G Suite and the like - that have proven crucial to enabling work to continue throughout pandemic-induced lockdowns. At many organizations these were adopted without much thought to varying levels of access.
“It’s increased the digital divide in many respects because no one really stopped to ask whether people could access these services,” she says. “I was astonished to find that even within our own organization there were some people who didn’t have broadband, and we had to work out a way to provide it. We had blind spots in our awareness of the type of access people have, and also that quality of access can be an issue. Maybe someone’s connection isn’t great. Maybe they’ve got five people in the house trying to do video calls at once. It’s important to understand technological decisions won’t impact all people the same way.”
Methodology (Formalize ethical processes where feasible):
While ethical guidelines are difficult to set in stone, it’s important to establish a basic statement of what the organization aspires to be and stand for - a ‘north star’ that can be used to guide technology decisions.
“The first step is being clear about your mission, and it’s very seldom about technology - it’s always more than that,” Paterson explains. “The next is defining values so everyone knows the parameters within which they’re working and making decisions. Subsequent to that is creating the channels for communication, and opening up the diversity of opinion.”
Once ethical standards and goals are set, they can be formalized and inculcated, with the establishment of frameworks or guidelines for specific processes, like the early stages of product development, or the use and retention of customer data. Rather than rigid codes of conduct - which can be difficult to enforce, or even allow the company’s leadership to effectively ‘wash their hands’ of ethical responsibility by pushing the burden onto frontline workers - these should include “definitions for developers which are based on your mission and principles, showing what these look like in practice,” Paterson says.
There are opportunities that spring from this process. For one, as Parsons points out, formalizing some aspects of ethical decision-making paves the way to apply technology to the cause. “Once you have a definition of what constitutes good, many of these things can be automated,” she says. “There are well established tools for monitoring things like data theft, and well-understood approaches for looking at various kinds of code vulnerabilities.”
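As a hypothetical sketch of what that automation might look like: once ‘good’ has been defined - say, that positive-prediction rates across groups stay within five percentage points, a threshold invented purely for illustration - the definition can be enforced as a test that runs against every new model version:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    by_group = {}
    for pred, grp in zip(predictions, groups):
        by_group.setdefault(grp, []).append(pred)
    rates = [sum(preds) / len(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

def test_model_meets_fairness_definition():
    # Stand-in model outputs; in practice these come from scoring a
    # held-out evaluation set with the candidate model.
    preds = [1, 0, 1, 0, 1, 0, 0, 1]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    # The build fails if the model drifts past the agreed definition of fair.
    assert demographic_parity_gap(preds, groups) <= 0.05

test_model_meets_fairness_definition()
```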
Second, it positions the enterprise to evaluate the technology vendors and partners it chooses to work with, and hold them to similar standards. The practice of what’s become known as ethical procurement is, according to Wathington, creating virtuous circles.
“This has multiple facets for a company like us - whether the software we’re buying is operating in an ethical way, and whether the vendor itself is,” he says. “A lot of companies are adopting tech from other vendors, so it becomes a two-way, mutually reinforcing, process. The buyer starts to think about the ethics of what they’re purchasing, and the company building the software starts to think about the ethics of what they’re making.”
Capability (Strive to constantly improve the organization and its products):
An ethical technology approach has to be honed by the organization like any other skill. It’s a focus the leadership can and should introduce and encourage, but that also has to spring from the ground up, since employees on the front lines of product development and end-user interaction will in many cases be those confronting ethical choices directly.
“Without effective communication, ethics won’t become part of the general ethos,” Wathington says. “But what you’re trying to do with communication is effect a change in people’s hearts and minds so that they own it, and it’s not you as a leader trying to enact a change on them. You want to encourage people to innovate, to continue thinking about what they can do, and make their own contributions.”
Internal structures at some companies prevent this. “Most organizations still aren’t set up to be able to have the flow of information, of feedback,” says Paterson. “As a technologist who’s influencing some externally-facing, high-stakes system, you’re probably best positioned to know where the vulnerable spots are. If you’re not being asked or listened to, the company doesn’t know what to fix, and you’re also not able to highlight opportunities.”
In this regard, being ethical has much in common with being effective. “There’s a lot of parallels between agile ways of working and ethical ways of working,” Paterson notes. The same feedback loops that drive effective product development and regular, incremental improvement can be used to support better ethical choices.
For companies seeking to build these capabilities, Wathington has one main recommendation: start small, perhaps with a core group working on a single product, to establish the mechanisms, then learn from the experience and scale up.
“Once you’ve got critical mass, you get to institutional knowledge, where you don’t need the same 10-15 people to lead everything because you’ve worked through it,” he says. “You have the stories, templates and frameworks that enable you to handle whatever’s important in your context.”
Reasons for optimism
Recent headlines around topics from data privacy to contact tracing and autonomous vehicles show technology is likely to remain an ethical minefield. Nonetheless, at times quietly and behind the scenes, a number of encouraging trends are taking shape. Entire ecosystems of frameworks and solutions are emerging around danger spots like AI and data security. More off-the-shelf models and solutions are becoming available for enterprises aiming to make ethics an integral part of the development of products and interaction with customers.
Perhaps most importantly, technologists themselves are increasingly determined to weigh the ethical consequences of what they do - and course-correct where necessary.
“The fact that these conversations are happening at all is hopeful,” Parsons says. “There are more people who are saying: We as technologists have to take responsibility for the choices that we make and the products that we develop; we can’t just be order-takers. If someone asks us to build something that we don’t think is right, we have a responsibility to push back.”
“There’s a whole burgeoning of awareness around many of these issues that is really positive,” agrees Wathington. “I’ve met a lot of excited technologists who want to do better, and who have passion in areas like the climate or accessibility. There’s a broadening recognition that we need to address these considerations by default - which is amazing, because 10 years ago most companies wouldn’t even think these are things you should talk about.”
Another reason for hope is the growing awareness that ethical and business performance are not parallel or competing priorities, but intimately connected. As Paterson points out, companies that make significant progress on ethics are also more responsive. By practicing openness and transparency, and prioritizing long-term relationships above short-term gain, they’re more connected to customers and other stakeholders, and better positioned to anticipate market and regulatory trends. Being ethical, in other words, means being ahead of the curve.
“Technology creates complex problems, increasing in complexity with every global progression,” Paterson says. “You can’t predict exactly what’s going to happen next, and you also can’t resolve your approach to all these complex issues in advance. But if you put values and principles in place you can start to make good decisions regardless. It’s one of the only ways organizations can future-proof themselves.”