In the midst of the AI race – are ethics moving fast enough?
Published: November 12, 2019
In the current climate of upheaval around the world, lightning-fast news cycles, and questions about where our technological advancements and abilities are headed, it is understandable that we are wary of what’s next. Anna Gudmundson thinks that technology doesn’t just create problems but also can hold solutions.
Is a Bright Future Possible for Data Projects and AI?
I have studied and worked in the software industry since the late '90s, and I've witnessed first-hand the incredible evolution of the industry during that period.

Data has always been an important part of my job – leading product teams and technology companies – and we are now at a point where Artificial Intelligence and Machine Learning really are commercially applicable. This advancement is primarily due to the availability of more processing power. The support from an evolving suite of optimization components, an AI-augmented software development process, and hugely improved utilization of data (although 90 percent of the world's data was created last year, we are currently only using one percent of it) also plays a significant role in this evolution.
The impossible is now possible
A lot of opportunities lie in areas where conventional software development has come up against almost infinite complexity. Problems that were once programmatically unsolvable can now be solved using AI. This transforms some of the great challenges of the past into great opportunities for businesses right now.

As of today (and this may change), most AI projects in the business space work with some type of human-generated data, and in this human origin we find the cause of new challenges. Humans are complex and have certain well-known flaws, such as biases – both conscious and unconscious. We make different decisions based on, for example, whether we're tired or hungry, and we tie emotion to much of our decision making. But we also have a sense of ethics, an innate feel for what is right and what is wrong.
A human core strength – ethics
So how do we translate our ethics, this very human trait, into our systems and machines?

I'm sure most of us are familiar with the sometimes shockingly biased output we've seen from the bots, recruitment tools, and facial recognition systems coming out of the 'G-MAFIA' (Google, Microsoft, Amazon, Facebook, IBM, and Apple). These companies have nearly unlimited resources and a data-driven culture, and I believe they mean well, yet they are still producing some seriously flawed outcomes. So, clearly, it is not that easy.
During the last few decades, there has been a race to be the best at influencing people, generally with a view to convincing us to buy things. This race started in advertising tech and has now spread into almost all areas of our economy, inspiring the term "surveillance capitalism", coined by Shoshana Zuboff in 2014.
She also predicts a positive evolution, making an interesting point: "Every survey of internet users has shown that once people become aware of surveillance capitalists' backstage practices, they reject them. That points to a disconnect between supply and demand: a market failure. So once again, we see a historic opportunity for an alliance of companies to find an alternative ecosystem — one that returns us to the earlier promise of the digital age as an era of empowerment and the democratization of knowledge."
‘Can’ we vs. ‘should’ we
Whether we see the current landscape as positive or negative, we're at a really interesting breaking point. Having strived for a long time to get better and better at influencing people, we now *can* manipulate people. We know that by using AI/ML technologies and having access to relevant data, we can have a significant impact on elections and drive individual behaviors in rather specific and predictable ways.

So, I would say a lot is at stake if we blindly continue this race. We also have to ask ourselves: what do we really stand for? What business problems are we trying to solve? What are we trying to create? Is this ethical? More than ever, it is critical to know the answers to both 'what are our goals?' and 'what are our values?'
Back into the light
We're in a, hopefully, short-lived twilight zone where companies wouldn't get away with openly writing bluntly discriminatory code, or with a sign on the storefront saying 'minorities and poor people go to the back of the queue, please'. But today it is possible to get away with such things behind the data banner. This will change! Awareness, as well as regulation, will catch up. And there is an opportunity for companies that start out on the right track to be part of an evolved ecosystem that reimagines our collective, ethically aligned future.

There are lots of positive changes on the way. Exciting new companies are driving efforts to correct for existing bias by offering algorithm-auditing services, alongside initiatives aimed at improving diversity in STEM fields. Never before has there been more conversation about privacy and the effective use of data, which benefits both the user and the company, minimizes risk, and allows for higher-quality outcomes from the use of AI and ML. Companies like Privitar provide large corporates with important products as part of the data ecosystem, allowing intelligent analytics and data processing while safeguarding privacy. Privacy and conscious, intelligent data management are also part of the consulting services that Thoughtworks delivers to companies across the globe.
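To make "algorithm auditing" a little more concrete: one of the simplest checks an audit might run is the widely used "four-fifths rule" for disparate impact, which compares positive-outcome rates across groups. The sketch below is a minimal illustration on made-up loan-approval data, not the method of any specific auditing vendor mentioned here:

```python
# Minimal disparate-impact check ("four-fifths rule") on hypothetical data.
# Decisions are (group, approved) pairs; groups "A"/"B" are illustrative only.

def selection_rates(decisions):
    """Positive-outcome rate per group from (group, approved) pairs."""
    totals, positives = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group's selection rate to the highest.
    Ratios below 0.8 are conventionally flagged for further review."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions: group A approved at 50%, group B at 30%.
decisions = ([("A", True)] * 50 + [("A", False)] * 50
             + [("B", True)] * 30 + [("B", False)] * 70)

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.50 = 0.60
if ratio < 0.8:
    print("flag: selection rates differ enough to warrant review")
```

A real audit goes far beyond a single ratio (it looks at error rates, proxies for protected attributes, and the data pipeline itself), but this is the kind of quantitative evidence such services produce.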
Many companies, such as DeepMind, are setting up ethics boards, and there is promising engagement on the topic from the public, the industry, the business community and, to a debatable extent, regulators.
Google may not have got its ethics board right the first time round (I commend Google employees for their engagement and for holding the company to high standards), but we should also commend the effort; it's OK to fail as long as we keep trying and self-correct.
Shaping our future, together
I'm very passionate about this topic. I want to see organizations better equipped to break down business problems, and far more aware of how the impact depends on the way we use AI. I'm also very aware of the benefits of values-driven companies across product strength, brand value, employee engagement and productivity, and of course the bottom line.

As the industry matures, it is important to involve a diversity of expertise and competencies in AI projects. Data science, analysis, and engineering are one part of it – and necessary to deliver the projects. As in any other technology project, we also need clear business goals, product expertise, and rounded management to ensure high-quality results. This is the area I have the good fortune to work in, and I love the broad range of considerations and responsibilities.
It is all up to us. If we use AI in pragmatic, conscious, and ethical ways, there are immense opportunities for businesses and our future.
Read more from Anna.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.