Call for action: Why the M&E industry must now seriously address the ethics & legal implications of AI

By Shirish Nadkarni
Many in India woke up one morning to find, on their social media platforms, a video of America’s President-elect Donald Trump grooving to the beat of a popular Bollywood number, his lips exquisitely synchronised to the words of the song.
It is quite another thing that Trump would probably never have heard that particular song, and that the video is a deepfake generated by Artificial Intelligence (AI). It was created simply because someone had the AI tools, the know-how, and far too much time on his hands.
Although the majority of people familiar with the capabilities of AI would instantly identify the video as a deepfake and laugh it off, the ethical implications of creating such deepfakes are frightening.
AI has become a huge threat to the media and entertainment (M&E) industry, and a large portion of the industry is expressing serious concerns about AI getting ahead of human intelligence and eventually taking over the most creative of functions.
Award-winning TV producer and writer Anthony Sparks has expressed his concern about the matter. “The embrace of AI in the film and television industries is the biggest threat to the viability of the profession of film-writing that I’ve seen in my career and lifetime,” he said.
“Not only is AI-produced film and television destructive to a unionised workforce by aiming to drastically reduce our workforce, but it is also theft.
“Plain and simple. It is theft!
“AI would consume the hard work of current and previous generations of writers, then splice it and dice it into a regurgitated stew of nonsense that presumes human creativity has reached its limits.
“AI is not only dangerous because of its job-killing capability in the creative industry; it is dangerous because it forever freezes our popular culture and limits it to the current moment in time.
“This embrace of AI in the entertainment industry must be stopped.”
The entertainment industry is one of the most forward-looking, constantly seeking new ways to capture an audience with a compelling storyline. With the arrival of streaming media and AI, how Hollywood tells its stories has been thrown into flux. The Writers Guild of America (WGA) called this moment an “existential crisis”, a description that recalls the prolonged strike the writers carried out in 2023.
The 2022 release of ChatGPT, a natural language processing tool driven by AI technology, was one of the leading forces behind the strike. The World Economic Forum predicted that AI would disrupt a quarter of all jobs over the next five years.
The problem lies in the fact that AI can potentially produce a first draft from a few simple prompts. Consequently, writers could be hired at a lower pay rate because the first concepts would already have been completed for them.
“AI lacks the passion, nuance, and perspective of work created by humans,” the WGA stated. “By analysing existing scripts, any work created is always going to be a regurgitation of that work. These pieces will never have a truly unique perspective or vision.”
Other issues revolved around reducing the work or pay for writers: streaming media’s shorter TV seasons and lower renewal rates of those shows leading to fewer steady jobs; smaller writers’ rooms leading to fewer hires and lower pay; and shrinking residuals for past shows that were streamed or syndicated.
According to a recent WGA report, median weekly writer-producer pay declined by 23% over the last decade when adjusted for inflation.
The Society of Motion Picture &amp; Television Engineers (SMPTE) has called on the M&E industry to be more active and vocal in the debate about developing ethical AI systems. Doing nothing, or not doing enough, is not an option because “failure may come at a high human cost,” the organisation maintains.
“The time to discuss ethical considerations in AI is now, while the field is still nascent, teams are being built, products roadmapped, and decisions finalised,” SMPTE President Renard T. Jenkins told APB+. “AI development is no longer just a technical issue; it is increasingly becoming a risk factor.”
This call for action forms a substantial part of the “SMPTE Engineering Report: Artificial Intelligence and Media,” which was produced alongside the European Broadcasting Union (EBU) and the Entertainment Technology Center (ETC).
The report was the result of a task force on AI standards in media that began in 2020. ETC AI and Neuroscience in Media Director Yves Bergquist said the group found “both an issue and an opportunity was … surfacing ethical and legal questions around deployment of AI in the media industry.”
Jenkins emphasised that the media industry is a consumer of this technology. “While that alone is table stakes for the ethics debate, we also bear a great responsibility ourselves because we are able to touch millions with a single programme or a single piece of content,” he said.
Bergquist, who also serves as CEO of Corto AI, said, “I love looking at AI from within the media industry because the media industry is a technology industry. M&E has a massive track record in marrying human creativity with technology. It is also not a producer of AI. It is a consumer of AI products.”
Bergquist also noted that technology’s omnipresence has had “some very substantial consequences and impact on the way we live.” Therefore, he said, the ethical issue now has to be baked into every single conversation about technology.
“However, the practice of ethical AI is identical to the practice of good, methodologically sound AI,” he added. “You need to know biases in your data. You need to have a culturally and intellectually diverse team.
“In fact, I have yet to see a requirement of ethical AI that isn’t also a requirement of rigorous AI practice.”
Since the publication of the report, it has become clearer to everyone that AI will transform the media industry from pre-production through distribution and consumption.
The report provides background to media professionals on how AI and machine learning (ML) are being used for production, distribution and consumption. It explores ethical implications around AI systems and considers the need for datasets to help facilitate research and development of future media-related applications.
SMPTE suggests that the quality and diversity of training sets — “how colour correction can affect representation of minorities” — and the use of deepfake technology are “critical areas” where ethical considerations are paramount.
“Bias is the model-killer,” Jenkins contended. “Black box algorithms help no one. Intellectual and cultural diversity is critical to high performance. Product teams must broaden their ecosystem view.”
How does one define “Ethical AI”?
Ethical AI is AI that adheres to well-defined ethical guidelines regarding fundamental values, including individual rights, privacy, non-discrimination, and non-manipulation.
Among the major ethical concerns of AI are: unjustified actions, opacity, bias, discrimination, autonomy, informational privacy and group privacy, moral responsibility as well as distributed responsibility, and automation bias.
All these need to be dealt with to make AI ethical — and also subservient to human intelligence.
“I believe that AI will continue to see exponential growth and adoption throughout 2025,” said Jenkins. “Entertainment houses simply cannot do without AI; they have to embrace it. Therefore, it is imperative that we examine the overall impact that this technology can have in our industry.”
The report describes today’s AI as simultaneously “disruptive, vague, complex and experimental”. “It is difficult to understand, and easy to load up with fears and fantasies,” the report reads. “This is a dangerous combination.”
Organisations must examine the downside risk of deploying underperforming and unethical AI systems, especially because, in most cases, ethical and technical requirements are the same.
“For example, unseen bias is as bad for model performance as it is discriminatory. Model transparency is not just an ethical consideration: it is a trust-building instrument,” the report notes.
SMPTE urges the M&E industry to bring its own voice “and nearly 150 years of success marrying human and technological genius” to the debate.
“Media holds a substantial and powerful place in our society as the mass distributor of human narratives and social norms,” said Jenkins. “Media must bring this unique voice and hybrid human/machine culture to AI development and the debate on AI ethics.”
The report explains how M&E companies collect and process large amounts of consumer data, which increasingly means they must comply with a growing list of legal regimes and data governance requirements. It also points to a substantial opportunity to use computer vision in virtual production and post-production processes.
The media industry’s history of sophisticated legal practice around likeness rights, royalties, residuals, and participations is a “substantial advantage in navigating issues related to computational derivatives of image and content,” Jenkins said.
The paper argues for a standards-based approach to verification and identification, and not only of the image (format and technical metadata, for example), but also of the talent itself and the authenticity of content.
“Persistent, interoperable, and unique identifiers have aided media supply chains in the past, and could well help with labelling and automating the provenance of authentic talent in the future age of AI in M&E,” Jenkins stated. Such work is ongoing, including at the Coalition for Content Provenance and Authenticity (C2PA).
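The idea behind persistent identifiers and provenance labelling can be sketched in a few lines of code. The following Python snippet is a hypothetical illustration only, not the actual C2PA manifest format (real C2PA manifests are cryptographically signed and embedded in the asset itself): it simply binds a unique identifier and a content hash to a piece of media, so that any later alteration of the bytes can be detected.

```python
import hashlib
import json
import uuid

def make_provenance_manifest(content: bytes, creator: str) -> dict:
    """Build a minimal, illustrative provenance record for a media asset.

    This is a sketch of the concept, not the C2PA specification: the field
    names here ("asset_id", "ai_generated", etc.) are hypothetical.
    """
    return {
        # Persistent, unique identifier for this version of the asset
        "asset_id": f"urn:uuid:{uuid.uuid4()}",
        # The content hash binds the manifest to these exact bytes;
        # re-hashing the file later reveals any tampering
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        # Example assertion a real provenance system might carry
        "ai_generated": False,
    }

manifest = make_provenance_manifest(b"example frame data", "Studio A")
print(json.dumps(manifest, indent=2))
```

Verification is then a matter of re-computing the hash of the received file and comparing it with the value recorded in the manifest; a signed manifest additionally proves who made that claim.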
“At a minimum, requirements for data and model transparency would go a long way towards reinforcing trust in computational methods and help convert those in the industry still reluctant to use statistical learning to optimise human processes,” the report states. Ethics, it says, should be part of quality assurance for any and all computational systems.
“AI is still an ungoverned technical frontier,” the report adds. “Everything around it, from roadmapping to modelling to seeding in company culture, is complex and challenging. Mistakes will happen. Organisations must communicate comprehensively and with humility about their journey to approach and implement processes around Ethical AI, for the benefit of all.”
All told, AI is seen by many as a tremendously transformative technology. Once machines are regarded as entities that can perceive, feel and act, it is not a giant leap to ponder their legal status. It makes sense to spend time thinking about what is expected from these systems and what they should do, and to address ethical questions so that these systems are built with humanity’s common good in mind.




