AI has created an immense breadth of opportunity for media companies. But to unlock that opportunity, there are a few key challenges teams have had to overcome — most notably, managing AI governance.
AI models and capabilities depend on large volumes and continuous streams of clean, relevant, high-quality data. Plus, outputs must be closely monitored to ensure they’re free of bias and misinformation, and that they don’t lead to negative or discriminatory human experiences. And to make things even more challenging, because AI governance is a new discipline, organizations need to build frameworks and processes from scratch.
To learn more about how digital leaders in the media industry are tackling the challenges of AI governance and setting themselves up for success, we held a live panel, “Data governance in the age of AI: Balancing innovation with responsibility,” in November 2024.
You can listen to the full discussion recording here, but here’s a quick look at some of the panel’s top takeaways.
Data and AI governance are two sides of the same coin
For media, publishing and entertainment organizations, data and AI governance are concerned with very similar things and should be approached together. “AI needs good data, so having a system that ensures your data can be relied on is really important,” said Nathalie Berdat, Head of Data and AI Governance, BBC.
“Putting data and AI governance together makes a lot of sense,” added Tiankai Feng, Head of Data Strategy & Governance, Thoughtworks. “Many principles from data governance can be carried over to AI governance. But there’s also a middle area between them to think about, which is the data being used for fine-tuning and generative AI. They have different requirements, such as that data must be unbiased and meaningful for the model, which can be difficult to measure.”
The biggest challenges arise from the kinds of data used for media and entertainment AI use cases.
For companies in the media and entertainment space, generative AI is based on unstructured data that doesn’t fit into neat rows and columns. This requires a different governance approach than the one used for structured data. The data may be unstructured, but governance requires structure, and building that structure takes an extra step.
To complicate matters, each different kind of data used by AI models could be subject to different regulations. “A structure for AI governance is so important because you need a way to review each data type and understand the regulatory impact if the data is further used,” said Erin Nicholson, Global Head of Data Protection and Privacy, Thoughtworks. “Without that visibility, people won’t know the kinds of risk they’re exposing the business to.”
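To make that kind of visibility concrete, here is a minimal Python sketch of a registry that maps each data asset’s type to the reviews it might trigger before further use. The data types, rules and review names are illustrative assumptions, not anything described by the panel.

```python
# Purely illustrative: a tiny registry that maps the data types behind an AI
# use case to the governance reviews they might trigger before further use.
# Data types, rules and review names are hypothetical.
from dataclasses import dataclass


@dataclass
class DataAsset:
    name: str
    data_type: str       # e.g. "article_text", "viewer_analytics", "image"
    contains_pii: bool
    licensed: bool


# Hypothetical mapping of data types to the reviews they could require
REVIEW_RULES = {
    "viewer_analytics": ["privacy / GDPR review"],
    "article_text": ["copyright and licensing review"],
    "image": ["copyright and licensing review", "consent / likeness review"],
}


def required_reviews(asset: DataAsset) -> list[str]:
    """Return the reviews an asset would need before being reused for AI."""
    reviews = list(REVIEW_RULES.get(asset.data_type, ["manual classification"]))
    if asset.contains_pii:
        reviews.append("data protection impact assessment")
    if not asset.licensed:
        reviews.append("rights clearance")
    return reviews


if __name__ == "__main__":
    asset = DataAsset("episode_transcripts", "article_text",
                      contains_pii=False, licensed=True)
    print(required_reviews(asset))  # ['copyright and licensing review']
```

Even a lightweight registry like this gives teams a shared place to record which regulations each data type touches before it’s fed into a model.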
AI governance demands people, process and technology change
One of the reasons AI governance has proven to be such a challenging new discipline is that it’s so multifaceted. Tiankai explained that it comprises several key elements:
Ownership and stewardship: AI models need ownership, and so does AI governance. The right people must be accountable for ensuring AI models are used in the right ways.
Cross-functional decision-making: A cross-domain thinking and decision-making model is essential. One central function can’t make every AI-relevant governance decision, so you need ways to bring the accountable people together.
Processes and metadata: Teams must make their models explainable, so everyone can understand the quality of their outputs and the root causes of any negative outcomes.
Technology enablement: Technology must support governance frameworks and make them work at scale.
This shows that AI governance requires a combination of people, process and technology change. The panel agreed that the ‘people’ element is the toughest to manage effectively.
Nathalie explained some of the people-specific challenges she has encountered along the BBC’s AI governance journey. “Generative AI has brought on everyone as users, so we’ve had to up the game on helping people understand classification and the impacts of their actions,” she said.
Erin went on to explain why changing behaviors can be the hardest part of upholding AI governance. “It’s very difficult to get people to classify unstructured data,” she explained. “It’s better to look at what the individual works on and make sure you clearly understand what data they use. Then, you can identify their challenges and position governance as a way to solve those challenges and improve outcomes. That will help increase their buy-in.”
Safeguarding against risk without slowing down innovation
One of the biggest concerns around AI governance is that putting rules and restrictions in place will limit teams’ ability to innovate and constrain the business value they can drive from AI use cases. But the panel agreed that when it’s implemented correctly, AI governance shouldn’t limit innovation: it can support and accelerate it.
“In many cases across the media and entertainment industries, data and AI governance actually enable innovation,” said Lydia Ray, Senior Analytics Solution Architect, AWS. “Without them, you’ll keep hitting compliance issues that bring your projects to a halt and slow your time to market. If you work with high-quality, reliable data, innovation won’t be slowed — it will actually be sped up.”
Some teams are now thinking in terms of ‘data liquidity’ — the speed at which they’re able to monetize specific data sets and apply them to value-creating use cases. By framing governance as activities that increase data liquidity, media and entertainment firms can begin to challenge preconceptions and reposition AI governance specifically as an innovation accelerator.
Some organizations are already running with this idea and applying AI to support their governance efforts. They’re creating a virtuous circle where using AI to improve data quality and governance further improves the value of their AI models and use cases.
AI can automate complex processing, cleansing and interpretation tasks. It’s great for detecting anomalies, duplications and errors: the things that really impact data quality and AI use cases and hinder innovation.
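As a simple illustration of those kinds of automated checks (not something discussed on the panel), here is a short Python sketch that flags exact duplicate rows and interquartile-range outliers in a hypothetical table of content metrics; the column names, sample data and thresholds are assumptions.

```python
# Minimal data-quality sketch: flag exact duplicate rows and simple
# interquartile-range (IQR) outliers in one numeric column.
# Column names, sample data and thresholds are illustrative only.
import pandas as pd


def quality_report(df: pd.DataFrame, numeric_col: str) -> dict:
    """Count duplicate rows and IQR-based outliers in the given column."""
    duplicate_rows = int(df.duplicated(keep=False).sum())

    q1, q3 = df[numeric_col].quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outlier_rows = int(((df[numeric_col] < lower) | (df[numeric_col] > upper)).sum())

    return {
        "rows": len(df),
        "duplicate_rows": duplicate_rows,
        "outlier_rows": outlier_rows,
    }


if __name__ == "__main__":
    df = pd.DataFrame({
        "asset_id": [1, 2, 2, 3, 4],
        "watch_minutes": [42, 38, 38, 40, 4000],  # 4000 is an obvious outlier
    })
    print(quality_report(df, "watch_minutes"))
    # {'rows': 5, 'duplicate_rows': 2, 'outlier_rows': 1}
```

In practice, checks like these would sit inside the pipelines that feed AI models, so quality problems are caught before they reach training or fine-tuning.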
Four tips for effective AI governance implementation in media and entertainment
Closing the session out, each expert on the panel shared a tip for media and entertainment organizations taking their first steps into the world of AI governance:
1. Be flexible and adaptive. It’s all about people. Reach out to them and talk to them. Don’t come in with a set of rigid frameworks and practices. Listen to their challenges and adapt your roadmap to support those challenges.
2. Try to think big, but do things in smaller chunks. Don’t try to boil the ocean; do things use case by use case and requirement by requirement to show value along the way and learn what works over time.
3. AI governance merges areas that may never have worked together before: data governance, records management, machine learning, legal, data protection, data science and more. It’s a learning curve to help those people work together and understand how the others work, so make that a focus.
4. There is no single tool that solves everything to do with AI governance. You have to empower people and enable them to continuously learn and transfer knowledge. That helps the people augment the tools you give them.
The full session is available to view on demand here, or if you’d like to discuss your own AI and data governance challenges with us and get some one-to-one support with them, you can contact us.