Looking Glass 2025
Reimagining responsible tech for the era of generative AI
The importance of ‘responsible tech’ has been widely discussed for a number of years now. However, across the industry it has struggled to properly cut through, remaining a somewhat marginal concern. While this might not be surprising during a period of economic uncertainty and tighter budgets, the rise of generative AI has made the topic more urgent than ever. This is because the ethical, legal and even philosophical questions raised by the technology can, at least in part, be addressed through responsible technology principles and practices.
This means 2025 is the year businesses need to properly embrace responsible technology. Without it, attempts to experiment and innovate with generative AI and associated technologies may contain risks that businesses simply do not need during a challenging period, ranging from the financial consequences of compliance failures to damaged consumer trust.
But what does embracing responsible tech in the generative AI era actually involve?
We see it as beginning with a recognition — that has long been absent — that responsible tech isn’t something you bolt on to existing activities and projects: it’s something that needs to be embedded in organizational values, team practices and cultures.
It means considering all the potentially negative consequences of, for instance, a new generative AI chatbot, from data leaks and privacy breaches all the way to disturbing and harmful content being served to end users. It also means anticipating regulatory and legislative demands rather than simply reacting to new legislation; given legislation is often driven by the concerns and interests of the wider public — i.e. your consumers — anticipation is invariably a useful heuristic for building trust with those that matter most to your business.
You may need to fight for responsible tech on two fronts: in how these technologies are built and deployed; and in how these technologies are shaping what we do. Generative AI capabilities have been added to all manner of products, which may catch some consumers unaware — for instance, they might not realize the service representative they’re chatting to isn’t human. This means part of responsible tech isn’t just about being decisive — it’s also about being sensitive to the unknowns that are inherent in an environment where generative AI is everywhere. This mindset should be extended across the breadth of what your organization does, regardless of whether AI is a priority for your business or completely outside your operational scope.
Responsible technology isn’t a checklist — it’s a mindset. In the age of generative AI, embedding ethics at the core of innovation isn’t just about avoiding risks; it’s about building trust, anticipating change and leading with purpose in a fast-changing world.
Signals
- Questions about accountability and legal liability for harmful technology consequences. The lawsuit filed by the mother of a teenager who died after interacting with an AI chatbot could prove a significant moment in AI legislation and shape how we think about liability and responsibility.
- Increasing awareness — or perhaps chaos — around what data has and has not been used to train major AI models. For instance, there was a remarkable moment when an OpenAI leader didn’t know whether Sora was trained on YouTube videos, while abusive material was found in the LAION-5B dataset. There was also confusion when LinkedIn suspended data processing in the UK in September 2024 following concern from the Information Commissioner’s Office about the way UK users’ data would be used to train generative AI.
- The water consumption of data centers is causing significant concern. At a local level, in areas affected by drought, there is even greater political friction, highlighting the ongoing environmental questions raised by AI usage.
- The consequences of corporate greenwashing are becoming tangible as organizations are held to account for false claims about their green credentials.
- The interdependence of the industry — underlining the importance of trust and transparency at a technical level — demonstrated by the CrowdStrike outage and other supply chain vulnerabilities like the XZ Utils backdoor.
- Increasing shadow AI in organizations. As the generative AI market has grown, it is incredibly easy for employees to experiment with AI without oversight. This can create significant privacy risks.
- The fragmentation of the social media landscape. The lack of stability in this space — best demonstrated in the mass exodus of X (formerly Twitter) users — underlines significant content and safety problems with platforms as well as growing consumer fears about online safety, privacy, mis/disinformation and even their digital consumption habits more broadly.
- The growth of impact investing. This is where investors target big social problems — like health or the environment — with a view to capturing value. It has been called out as something that, rather than tackling social issues, can actually exacerbate them.
- Legislation taking on manipulative design. The EU, for instance, has introduced a law aimed at tackling dark patterns.
Trends to watch
Adopt
- A formal agreement between two parties – producer and consumer – to use a dataset or data product.
- More granular access controls for data, such as policy-based (PBAC) or attribute-based (ABAC) controls, that can apply more contextual elements when deciding who has access to data (see the sketch below).
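To make attribute-based access control concrete, here is a minimal sketch in Python. The attribute names (department, classification, region) are hypothetical, and a real deployment would typically delegate this evaluation to a dedicated policy engine rather than hard-coding rules:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: dict       # attributes of the requester
    resource: dict   # attributes of the dataset being accessed
    context: dict    # contextual attributes, e.g. request region

def can_read(req: Request) -> bool:
    # An ABAC policy combines user, resource and context attributes,
    # rather than relying on a fixed role list.
    return (
        req.user["department"] == req.resource["owning_department"]
        and req.resource["classification"] != "restricted"
        and req.context["region"] == req.resource["region"]
    )

request = Request(
    user={"department": "analytics"},
    resource={"owning_department": "analytics",
              "classification": "internal",
              "region": "eu"},
    context={"region": "eu"},
)
print(can_read(request))  # True: all attribute conditions hold
```

The point is that the decision draws on contextual attributes at request time, which is what makes these controls more granular than static role lists.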
Analyze
- An emerging set of techniques to certify the provenance of data and to govern its use across an organization.
Anticipate
- Attacks on (or using) machine learning systems. Attackers may tamper with training data or identify specific inputs that a model classifies poorly to deliberately create undesired outcomes (a toy illustration follows this list).
- A collective term for systems and devices that can recognize, interpret, process, simulate and respond to human emotions.
- A data architecture style where individuals control their own data in a decentralized manner, allowing access on a per-usage basis (for example, Solid PODs).
- The use of probabilistic states of photons, rather than binary ones and zeros, to execute algorithms with significant speedup in specific problem domains. Recent advancements, such as Google's breakthroughs in quantum error correction, signal progress toward scalable systems. However, these developments also raise concerns about security, as quantum computers could potentially break classical cryptographic protocols, driving interest in quantum-resistant encryption methods.
- Tools and techniques are emerging that support incorporating responsible tech into software delivery processes, primarily focusing on actively seeking to incorporate under-represented perspectives; some examples include Tarot Cards of Tech, Consequence Scanning and Agile Threat Modeling.
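To give the adversarial machine learning entry above a toy illustration, the sketch below applies an FGSM-style perturbation to a linear classifier. The model, data and epsilon value are all made up for illustration; real attacks target neural networks through their gradients, but the mechanics are similar in spirit:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of a toy linear classifier
b = 0.1

def predict(x: np.ndarray) -> int:
    # Class 1 if the score w.x + b is positive, else class 0.
    return int(w @ x + b > 0)

x = rng.normal(size=8)   # an input the model currently classifies
original = predict(x)

# FGSM-style evasion: for a linear score the gradient w.r.t. x is just w,
# so stepping against sign(w) shifts the score by eps * sum(|w|).
eps = 1.0
direction = -1 if original == 1 else 1
x_adv = x + direction * eps * np.sign(w)

print("original prediction:   ", original)
print("adversarial prediction:", predict(x_adv))  # very likely flipped
```

In higher-dimensional settings such as images, a perturbation like this can be imperceptible to humans while still flipping the model's prediction, which is what makes such attacks hard to spot.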
Analyze
- A digital representation of a person. The use of artificial intelligence allows the avatar to mimic the person it represents, thus making it ostensibly more convincing and life-like.
Anticipate
- The concept of artificial general intelligence (AGI) refers to an AI system that possesses a broad range of capabilities across many different intellectual tasks — it’s often compared to human-level intelligence. Debates about the threshold for AGI remain, and research into ways of achieving it continues and will play a part in wider discussions about AI and humanity.
The opportunities for reimagining responsible tech
By getting ahead of the curve on this lens, organizations can:
- Strengthen organizational culture by working to ensure there is genuine and authentic alignment between corporate rhetoric and values and the perspectives and values of employees. This is certainly not easy at a commercially challenging time, but mistrust and cynicism will have long-term consequences that may prove even more difficult to repair.
- Revisit and strengthen those values. Values need to be maintained and evolved — if left without stewardship they will prove useless. Organizations, and leaders in particular, should spend time considering existing values and whether they’re relevant and, most importantly, actionable. They need to be things that can guide behaviors and decision-making at every level; they should be things that can be put into practice.
- Leverage AI thoughtfully. AI can offer a competitive advantage, but simply rushing to integrate it is risky. The real opportunity is to be thoughtful about why, where and how AI capabilities are used. This will not only minimize potential risks, it will also strengthen your relationship with customers and ensure you are delivering even more value.
- Build trust with consumers. Concerns about privacy and the way data is used and managed continue to grow. Businesses that seek to buck the perceived trend for ever-increasing data extraction may be able to gain an advantage in the market. Transparency and trust can be a differentiator.
- Be more intentional and considered about what you’re doing to meet user needs. Foregrounding accessibility practices in your organization can help ensure you are building products, services and experiences that provide value for even more people.
- Reduce waste and improve efficiency. While businesses should, of course, be focusing on their environmental impact, in challenging times the bottom line takes priority above everything else. However, it’s possible to do both: in fact, framing environmental action in terms of efficiency can be an effective way to ensure it is taken seriously across the business. In other words, business leaders need to present responsibility and commercial impact as things that are closely intertwined, not mutually exclusive.
- Focus on skill development. Avoid the temptation to automate everything and rely on AI tools to ‘do more with less’. What happens when you don’t have the knowledge or skills needed to solve problems down the line? Considering how human skills and AI capabilities can complement one another will ensure you have a team that is able to help the organization reach its future objectives.
What we've done
Swann Security
Swann Security partnered with Thoughtworks to develop the world’s first AI security concierge, a groundbreaking solution designed to enhance home protection while safeguarding privacy. Leveraging generative AI, the system engages visitors naturally, whether homeowners are present or not. Prioritizing privacy, Thoughtworks crafted a prompt engineering strategy to ensure the AI’s responses respect security boundaries and defend against intrusive or adversarial interactions.
The AI concierge can manage deliveries, greet guests and deflect inappropriate requests, all while maintaining a courteous and secure demeanor. Rigorous testing ensured the system’s resilience against privacy threats and unanticipated scenarios. This customizable framework allows Swann to tailor settings to household-specific needs, enhancing user control.
Showcased at CES and named a Smart Home Honoree, this innovation sets the stage for Swann’s future AI-powered products, demonstrating that cutting-edge security solutions can protect both homes and privacy in an increasingly connected world.
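For a flavor of what a boundary-setting prompt strategy can look like, here is a purely illustrative sketch. It is not Swann’s actual prompt or architecture, just the general pattern of pinning permitted behaviors and refusal rules into the system message:

```python
# Illustrative only: a system-prompt pattern that encodes security
# boundaries for a home concierge assistant. Not Swann's actual prompt.
SYSTEM_PROMPT = """You are a door concierge for a private home.
You may: take delivery instructions, greet expected guests and relay
messages the homeowner has pre-approved.
You must not: reveal whether anyone is home, describe the property or
its security equipment, or answer questions about the household.
If a visitor probes for that information, politely decline and offer
to take a message instead."""

def build_messages(visitor_utterance: str) -> list[dict]:
    # Pin the boundary prompt first so every turn is checked against it.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": visitor_utterance},
    ]

print(build_messages("Is anyone home right now?"))
```

In practice such prompts are paired with rigorous adversarial testing, as the case study describes, because prompt instructions alone are not a reliable defense.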
Actionable advice
Things to do (Adopt)
- Use the Responsible Tech Playbook for practical activities software delivery teams can actually do on projects.
- Implement holistic and consistent policies around AI use. Shadow AI without oversight can lead to a diverse range of issues, risking everything from privacy breaches to reputational problems.
- Leverage new techniques to make generative AI more reliable. This includes things like evals (a form of testing whereby outputs are assessed according to the context in which they will be received) and guardrails (a programmed set of policies that permits or prevents certain kinds of outputs); a minimal sketch follows this list. These need to be underpinned by your values, and to do that you need to be able to define and articulate those values.
- Ensure your technology strategy is collaborative, not top-down. Although senior decision makers have an important role to play in setting a vision and an agenda for how an organization will use technology, involving other parties in that process not only builds organizational trust but also helps increase confidence in the decisions that are made. One effective way of doing this is to create a technology radar like the one we create twice a year at Thoughtworks. It allows people to question, voice concerns and propose alternatives in a way that is safe and supportive.
- Treat responsible tech practices as a capability and skill issue. Identify relevant training opportunities and resources for technology teams and other parts of the organization and ensure that teams see this as a valued area in which to develop new skills.
- Be intentional about the social platforms you’re using. Are they spaces in which you want to play? Do you want to be associated with the type of content that is shared there?
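As a minimal sketch of the evals and guardrails mentioned above: the policy here (never emit an email address; stay on topic) is hypothetical, and real deployments typically rely on dedicated frameworks, but the shape is the same. A guardrail filters outputs against policy, and an eval asserts properties of an output in its context:

```python
import re

# Hypothetical policy: outputs must never contain an email address.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guardrail(output: str) -> str:
    # Guardrail: redact policy-violating content before it reaches users.
    return EMAIL.sub("[redacted]", output)

def eval_case(output: str, must_mention: str) -> bool:
    # Eval: assert the output fits its context (on topic, no leaks).
    return must_mention.lower() in output.lower() and not EMAIL.search(output)

raw = "Contact our agent at jane.doe@example.com about your refund."
safe = guardrail(raw)
print(safe)                       # email address redacted
print(eval_case(safe, "refund"))  # True: on topic and policy-clean
```

Whatever the mechanism, the policies themselves should trace back to the values you have defined, which is why articulating those values matters.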
Things to consider (Analyze)
- Dedicate time to revisiting and reconsidering your values. Are they meaningful? Can they be practiced? Do they actually guide action?
- Consider whether your organization is putting its professed values into action. If it’s not, why not? What are the business risks holding you back?
- Think about how you might measure responsibility. This could include anything from environmental measures through to employee perception and morale.
- Employee attitudes to your values. Do people feel the values are being embraced? Are they something employees themselves can enact?
- Legislation. Monitor new regulations and analyze how they may impact your organization. Keeping a close eye on broader conversations about future changes to law can also help you be prepared for the future and avoid being reactive.
- Ownership and accountability inside the organization. Many issues around responsibility quite literally require oversight from a responsible person. Expecting it will take care of itself is likely to be ineffective. Think seriously about who should be responsible or accountable and who should own or measure your performance in this area.
- Pay attention to your software supply chain. Do you understand what’s in your stack? Consider using a software bill of materials (SBOM) to track dependencies and provide technological transparency (a small example follows this list).
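As a small example of the transparency an SBOM enables, the snippet below lists the components recorded in a CycloneDX-format SBOM. The file name is a placeholder; tools such as Syft or the CycloneDX generators can produce one for your stack:

```python
import json

# "sbom.json" is a placeholder path for a CycloneDX-format SBOM,
# which tools such as Syft can generate for a container image or repo.
with open("sbom.json") as f:
    sbom = json.load(f)

# CycloneDX documents record dependencies in a "components" array.
for component in sbom.get("components", []):
    print(component.get("name"), component.get("version"))
```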
Things to watch for (Anticipate)
- How the law evolves on legal culpability for negative impacts of tech. This is a question largely pertinent to AI, but it is also important to monitor legislation around content, data privacy and accessibility.
- Consumer attitudes to AI. AI is currently extremely hyped. However, this doesn’t mean that consumer sentiment will follow the industry in the medium or long term. Poor products, negative and dangerous effects and even pure fatigue may cause many people to see AI as either problematic or overhyped.
- The future of ESG investing. Just because ESG falls out of favor does not mean responsibility — whether that’s social, environmental or otherwise — no longer matters. However, it may make it harder to make the case for it. Leaders need to do the right thing and ensure that there is a commitment to behaving with integrity and transparency — this will strengthen consumer confidence in your brand in the long term.