As AI adoption grows, how do we ensure the AI products we want to build are safe and responsibly governed? A trio of Thoughtworkers — Katharine Jarmul (now a Thoughtworks alumna), Erin Nicholson and Jim Gumbley — tackled this challenge in a novel way. They created Singularity, a card game to help teams collaboratively explore and brainstorm AI governance risks. And, in the spirit of open collaboration, the game materials are freely available!
The idea for Singularity emerged from a discussion between the three colleagues in November 2023 in Thoughtworks' London office. Jim is a Business Information Security Officer at Thoughtworks, Katharine is a data scientist who recently authored the O'Reilly book Practical Data Privacy and Erin is Thoughtworks' Global Data Protection Officer.
"We all agreed that the foundations of governance for AI are basically the same as those for cyber and data protection," says Jim. "But there's just a broader set of risks that need to be looked at."
True to Thoughtworks' lean and fun culture, they started jotting down some of those risks on cards to help colleagues navigate the new risk landscape. What emerged was Singularity — a gamified approach to exploring and brainstorming risks for AI products.
Who can play Singularity?
It all starts with gathering a group of people together to play: the more diverse and cross-functional the group, the better, given collaboration across disciplines and teams is a crucial part of AI governance. The range of perspectives is important for the game; a data scientist might, for instance, flag technical concerns around model bias that a lawyer then translates into policy safeguards. An engineer could propose architectural changes to bolster an AI system's security and privacy protection, while a compliance specialist might explain a risk in a way which inspires a novel technical control to be developed.
What the game covers
"The idea is simple," explains Katharine. "Each card provides a clear prompt for conversation, making it easy for teams to identify potential risks."
Each card presents a thought-provoking scenario covering a specific AI governance risk. The "bias" card, for example, challenges players to examine how an AI system might perpetuate or amplify biases, leading to unfair outcomes. Meanwhile, the "filter bubbles" card prompts a discussion about how AI may create echo chambers, limiting people's exposure to diverse perspectives.
Other cards delve into privacy concerns. The "surveillance" card asks players to consider the implications of AI-powered monitoring, while the "anonymity" card focuses on protecting individual privacy in AI systems.
Technical risks are also covered. The "cybersecurity" card explores how to safeguard AI systems from hacking, while the "prompt injection" card highlights how malicious inputs can be used to manipulate AI outputs.
Finally, the eponymous "singularity" card challenges players to consider the ultimate AI governance risk: could the AI they're building evade human control and threaten humanity's very existence? While the card takes a playful tone, it seriously pushes teams to grapple with AI's potential existential risks.
Note that the game's scenarios cannot be exhaustive: AI regulation and technology are evolving rapidly.
How to get started with Singularity
So, how exactly do you play? The rules are simple…
First, gather a diverse group of four to eight people involved in your AI project, including roles like data scientists, engineers, lawyers, compliance specialists and cyber security experts. This mix of perspectives is key to a productive discussion.
Next, choose a scribe. Nominate someone to capture key points, actions and owners in a collaboration tool as you play. They can use a spreadsheet, Trello board, or any app that lets you organize items.
The gameplay proceeds as follows:
The person who is most optimistic about the future of AI goes first. Play continues clockwise.
On your turn, pick a card and read the scenario aloud.
Discuss how the scenario could impact your AI product. Categorize the card as:
Acceptable risk: Low risk, continue as planned
Future puzzle: Revisit later for deeper discussion
Action plan: Mitigate risk, note next steps and owners
If categorized as "Acceptable risk", discard the card and end the turn.
If categorized as "Future puzzle" or "Action plan", the scribe records the card, discussion points, next steps and owners in the collaboration app.
Continue taking turns until the group has discussed all relevant cards or agrees to stop.
It's as simple as that! The scribe shares the prioritized risks, mitigation plans, and owners with all participants.
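If your team prefers a script over a spreadsheet for the scribe's notes, the turn-by-turn record keeping above can be sketched in a few lines of Python. This is purely illustrative — the card names, categories and `record_turn` helper are assumptions based on the rules described here, not part of the official game materials.

```python
from dataclasses import dataclass

# The three categories a card can land in, as described in the rules.
CATEGORIES = {"Acceptable risk", "Future puzzle", "Action plan"}

@dataclass
class CardRecord:
    card: str
    category: str
    notes: str = ""
    next_steps: str = ""
    owner: str = ""

def record_turn(log, card, category, notes="", next_steps="", owner=""):
    """Record one turn. 'Acceptable risk' cards are discarded; the
    other two categories are logged for follow-up."""
    if category not in CATEGORIES:
        raise ValueError(f"Unknown category: {category}")
    if category == "Acceptable risk":
        return  # discard the card and end the turn
    log.append(CardRecord(card, category, notes, next_steps, owner))

# Example session with hypothetical cards and owners:
log = []
record_turn(log, "Bias", "Action plan",
            notes="Model may amplify training-data bias",
            next_steps="Add fairness checks to evaluation", owner="Data science")
record_turn(log, "Filter bubbles", "Acceptable risk")
record_turn(log, "Prompt injection", "Future puzzle",
            notes="Revisit once the user-facing chat feature ships")
```

At the end of the session, `log` holds exactly the prioritized items the scribe would share with participants — here, the two cards that were not discarded as acceptable risk.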
Singularity has already garnered enthusiastic feedback after being play-tested on several of Thoughtworks' AI engagements.
"I'd love people to try the game out and give us more feedback," says Erin. The Singularity team plans to iterate and expand the game based on the community's input.
At a time of both excitement and concern around AI's explosive growth, Singularity provides a practical, engaging way for teams to start important conversations about governing AI responsibly. It's an innovative approach to the critical challenge of building safe and ethical AI systems.
As Jim, Erin and Katharine put it: "Let's play Singularity!"
To access the game materials, click here.
If you have any questions or feedback about playing the game feel free to contact singularity-game@thoughtworks.com
Disclaimer: This article represents the views of the author only and does not represent the position of Thoughtworks.