Looking Glass
Lens four: Expanding impact of hostile tech
‘Hostile’ technology is commonly associated with criminal activity such as ransomware, breaking into systems to steal data or creating computer viruses, but this is only part of the picture. The landscape is evolving in ways that argue for broadening the definition of hostile tech to include legal, even widely accepted, practices that ultimately threaten societal well-being.
Through the Looking Glass
As technology grows more complex, the ways in which it can be misused multiply. And as people rely more on technology in daily activities, they are increasingly subjected to unintended, even hostile, consequences. Add a high level of automation, with decisions made at machine speed, and the potential for things to go wrong escalates rapidly.
‘Hostile’ tech by our definition encompasses not just criminal tech such as malware and hacking tools but also use cases like advertising and customer targeting. Whether technology is hostile can be a matter of perspective. Some people don’t find internet ads, tracking cookies or social media influencing campaigns intrusive and are happy to trade their data for what they perceive as personalized offers or special value. Others install ad-blocking software in their browsers and eschew Facebook completely. Consenting to tracking or the collection of personal data is for some basically automatic; for others, a carefully considered choice. That said, many people are oblivious to the fact that they have a choice in the first place, owing to varying levels of access to and experience with technology among different social and demographic groups, as well as discrepancies in the way information and options around consent are presented.
Not all hostile behavior is malicious or intentional. One example is bias in algorithms or machine learning systems. These may exhibit ‘hostile’ tendencies towards certain customer groups not because they were compromised or deliberately designed that way, but because of unplanned and unnoticed distortions in the way they were built or trained.
Signals include:
- The increasing ubiquity of technology and the concurrent expansion of the potential threat surface. One simple example is the sheer number of connections: Frost & Sullivan predicts the number of active Internet of Things (IoT) devices will top 65 billion globally by 2026. Each of these is a potential point of entry that could be exploited
- Evolving consumer sentiment and behavior toward ad and marketing tech and increasing bifurcation between those who accept broad uses of their data and those who are more concerned about privacy
- Rising anxiety about the use and impact of social media in misinformation campaigns and how social media channels are shaping health, political and other societal debates
- Unintended consequences from the increased use of artificial intelligence (AI) and machine learning, such as bias in algorithms and in the data sets they are trained on. Concerns about hostile impacts are prompting attempts to control the use of AI in processes like hiring
- Increased regulation around data collection, retention and use, such as China’s new Personal Information Protection Law, the European General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA) and equivalents in other jurisdictions
The opportunity
With data breaches approaching record levels, protection against deliberate hacking and malware is increasingly important. Companies must invest in defending a wider range of touchpoints against well-funded and organized adversaries. Yet as the potential for danger rises, other dimensions of hostile tech also have to be considered. We believe that being respectful of customer wishes, avoiding intrusive and self-serving targeting and rooting out bias within algorithmic systems and data sets is not only inherently ethical but conducive to trust, positive public perceptions and ultimately the health of the business.
According to media reports, the SolarWinds supply chain hack cost the company nearly US$20 million, with estimates for insurance claims reaching US$100 million, showing how easily the financial fallout from a hostile incident can spiral out of control. After a slow start, GDPR fines have increased, with total penalties surging 113.5% over the last year. Most notably, Amazon’s US$877 million GDPR fine, announced in the company’s July 2021 earnings report, is nearly 15 times the previous record, Google’s €50 million penalty in 2019. With consumers placing a higher value on their privacy, robust security practices have become a strong differentiator for some companies. A recent survey by Cisco found almost 80% of consumers factor data protection into purchasing decisions and are open to paying more for products or providers with higher privacy standards.
What we’ve seen
Trends to watch: Top three
Adopt
Secure software delivery. In the past year we’ve seen a significant rise in attacks on the “software supply chain” — not the software itself, but the tools, processes and libraries that help us get software into production. The US White House even issued an executive order on cybersecurity, including specific directives to improve supply chain security such as requiring a software “bill of materials” for all government systems. Secure software delivery emphasizes that security is everybody’s problem and should be considered throughout the software lifecycle.
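To make the ‘bill of materials’ idea concrete, here is a minimal sketch in Python of inventorying a service’s installed dependencies in a CycloneDX-style format. The field names and structure are illustrative assumptions; real teams would generate SBOMs with dedicated tooling rather than a hand-rolled script.

```python
import json
from importlib.metadata import distributions

def build_sbom():
    """Collect the installed Python packages into a minimal,
    CycloneDX-style software bill of materials."""
    components = [
        {
            "type": "library",
            "name": dist.metadata["Name"],
            "version": dist.version,
        }
        for dist in distributions()
    ]
    return {
        "bomFormat": "CycloneDX",  # illustrative; real SBOMs follow the full spec
        "specVersion": "1.4",
        "components": sorted(components, key=lambda c: c["name"] or ""),
    }

if __name__ == "__main__":
    print(json.dumps(build_sbom(), indent=2))
```

Publishing such an inventory alongside each release is what lets downstream consumers check, at machine speed, whether a newly disclosed vulnerability in a library affects them.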
Analyze
Ethical frameworks. Any decision has consequences. As AI-driven decision making moves into the mainstream, ethicists have been developing ethical decision-making frameworks to bring transparency and clarity to the process.
Anticipate
Quantum ML. While likely to be a force for good in solving complex chemistry and materials science problems, quantum machine learning could also create new challenges for the ethical use of data.
Trends to watch: The complete matrix
- Decentralized security
- Secure software delivery
- DevSecOps
- Automated compliance
- Testing ML algorithms and applications
- Privacy first
- AI as a service
- Blockchain and distributed ledger technologies
- Personal information economy
- Synthetic media in a corporate context
- Computer vision
- Connected homes
- Biometrics
- Facial recognition
- AI in security
- Smart contracts
- Alternative currencies
- Ethical frameworks
- Explainable AI
- Code of ethics for software
- "Security forward" business
- Surveillance tech
- Addictive tech
- Technology for environmental and social governance
- Technology and sovereign power
- Smart cities
- Increased regulation
Advice for adopters
Cybersecurity is a game of cat and mouse with your adversaries. AI is fast becoming a popular tool to help organizations fight security threats, with a wide variety of products emerging to meet rising demand. The aim is to level the playing field by automating manual detection tasks, providing intelligence such as intrusion alerts and scrutinizing network traffic to detect odd behavior, policy breaches or bad bots. Perhaps the most critical benefit of AI-enabled approaches is that they not only limit the attack surface and plug gaps but also help predict where future attacks might occur, allowing an appropriate risk mitigation strategy to be adopted in advance.
It is important to remember that any technology used for defense can also be used by attackers, and that while the organization might benefit from AI, it’s not a silver bullet. Enterprises need to move away from seeing AI, machine learning and data-oriented tools as ‘one size fits all’ solutions. Instead, any tool needs to be part of a pervasive intelligence strategy embedded throughout the organizational structure. Machine learning, for example, can’t support effective security in isolation; it requires managing the lifecycle of data and models and feeding outcomes back in. What’s more, security needs to be seen as everybody’s problem. That mindset enables zero-trust architectural approaches that subdivide the network and overlay security and data access principles in a way that scales safely and makes information available as needed: no more than is necessary for privacy purposes, but no less either.
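As a minimal sketch of what ‘scrutinizing network traffic to detect odd behavior’ can look like in code, the example below trains an unsupervised anomaly detector on simple per-connection features. The features, their distributions and the contamination rate are all assumptions for illustration; a real deployment would sit inside the data and model lifecycle described above.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: bytes sent, bytes received,
# duration in seconds and number of distinct destination ports.
rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=[500, 2000, 30, 2],
                            scale=[100, 400, 10, 1],
                            size=(1000, 4))

# Fit on traffic assumed to be mostly benign; 'contamination' is the
# expected fraction of anomalies and is itself a tunable assumption.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# A suspicious connection: large outbound transfer, long duration and
# many destination ports (consistent with exfiltration or scanning).
suspect = np.array([[50_000, 100, 600, 40]])
print(detector.predict(suspect))  # -1 flags an anomaly, 1 means normal
```

The model only surfaces candidates; deciding whether a flagged connection is genuinely hostile still requires human judgment, with the outcome fed back into retraining.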
Adopt or construct a data ethics framework to make clear to your employees and customers how data is stored, used and kept safe. We advise that you keep only the data you actually need and no more. Modern compliance and privacy laws demand high levels of scrutiny and, with careful thought, can be turned into a positive differentiator. A robust data ethics framework can also play an essential role in your overall data strategy by serving as the basis of retention policies and data set construction and usage.
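As one way a framework can become executable policy, here is a minimal sketch of purpose-based retention in Python. The purposes, retention windows and record shape are hypothetical; the real values would come from your data ethics framework and legal counsel.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows per collection purpose.
RETENTION = {
    "billing": timedelta(days=7 * 365),  # statutory record-keeping
    "marketing": timedelta(days=180),    # keep only while actively used
    "diagnostics": timedelta(days=30),   # short-lived operational data
}

def purge_expired(records, now=None):
    """Keep only records still inside their purpose's retention window."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if now - r["collected_at"] <= RETENTION[r["purpose"]]
    ]

records = [
    {"id": 1, "purpose": "marketing",
     "collected_at": datetime(2021, 1, 4, tzinfo=timezone.utc)},
    {"id": 2, "purpose": "billing",
     "collected_at": datetime(2021, 1, 4, tzinfo=timezone.utc)},
]
print([r["id"] for r in purge_expired(records)])  # the marketing record expires first
```

Tying deletion to a declared purpose keeps the ‘only what you need’ principle auditable rather than aspirational.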
Even if it’s not immediately apparent, bias is always present, so work at it constantly. It’s also hard to remove after the fact, which makes dealing with issues like unfairness upfront essential. It’s vital to record data in a way that allows the actions, products or decisions based on it to be audited and analyzed for their impact on particular groups. Specific thought needs to go into how representative the source data is, the demographics of the samples drawn from it and the choice of algorithms. Our Responsible Tech Playbook provides guidance and best practices that can assist with this process. Never assume your data is free of bias; we’re human, and bias is everywhere.
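A minimal sketch of the kind of audit this implies: record the affected group alongside each decision, then compare outcome rates across groups. The decision log here is hypothetical, and an approval-rate gap is only a first-pass signal; real audits would use richer fairness metrics and domain review.

```python
from collections import defaultdict

# Hypothetical audit log: each automated decision records the
# demographic group it affected, so impact can be analyzed later.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"approval-rate gap: {gap:.2f}")  # large gaps warrant investigation
```

The point is less the arithmetic than the logging discipline: if decisions aren’t recorded with the attributes needed to group them, no audit is possible later.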
By 2023, businesses will…
… recognize, and work to seize, the opportunity to stand out in the race for customers and talent by adopting holistic strategies that account for social consequences as well as secure and ethical technology.