
Generative AI: Your new co-creative partner is an algorithm, transforming the process of storytelling

By Joe Tan

Artificial intelligence (AI) is transforming industries worldwide, but few sectors are experiencing its disruptive impact as deeply as media and broadcasting. From film studios to digital publishers, organisations are no longer viewing AI merely as a tool for efficiency. Instead, AI is emerging as a creative partner, ushering in a new era of what Adobe calls the “AI-empowered creator.” 

“An AI-empowered creator is someone who can recognise the collaborative relationship with AI technology and harness it to amplify their imagination and creativity,” Preeti Rao, Asia Marketing Director, Adobe, told APB+. “These tools are not here to replace inherent human creativity, but to amplify it in unprecedented ways.” 

Adobe’s philosophy, she explained, is rooted in its vision of “creativity for all”, making creative power more accessible while helping media organisations shift from traditional, linear production models to AI-native, story-first workflows. 

For decades, media organisations operated with rigid, linear workflows: content was conceived, then painstakingly produced and distributed. The rise of generative AI (GenAI) is collapsing those barriers, enabling more fluid, story-first approaches where ideation, experimentation, and production happen simultaneously. 

“With AI integrated into the creative workflow, it acts as an intelligent collaborator, helping creators explore ideas faster and experiment more freely with various content formats,” said Rao. “This partnership between human ingenuity and AI-driven insights allows creators to generate and iterate concepts at breathtaking speeds while maintaining the emotional depth that defines truly meaningful art.” 

Adobe Firefly exemplifies this evolution. The GenAI platform integrates with Adobe’s core creative tools — Photoshop, Premiere Pro, and Express — to give creators unfettered AI assistance across every stage of content creation. Designers, filmmakers, and marketers can generate images, videos, audio, and vectors on a single platform, all while retaining creative control. 

One of Firefly’s flagship initiatives, The Unfinished Creator Film, shows how AI can support collaboration on a large scale. Launched with director Sam Finn, the project invites creators worldwide to download an unfinished film sequence, remix it with Firefly, and reimagine the story. The results have been both personal and innovative, ranging from claymation combined with digital effects to futuristic visual experiments.

“Before AI, creativity was often shaped by time, tools, and complexity,” Finn said of his experience with Firefly. “Now I can create exactly what I envision, almost as quickly as I think it. For the first time, filmmaking feels like a stream-of-consciousness art form.” 

The same creative empowerment is now redefining live media and broadcast storytelling, where AI is enhancing realism, speeding up production, and deepening audience engagement. 

“An AI-empowered creator is someone who uses the power of AI to improve workflow efficiencies in their productions,” explained Chris Black, CMO at Vizrt. “This can range from using generative AI to speed up script ideas to AI-powered talent immersion in a virtual environment, which in Viz Engine generates accurate reflections and shadows, providing depth and making the scene more realistic on screen.”

According to Black, AI’s greatest strength in live production lies in giving teams time, a crucial resource for creativity. “By speeding up processes, automating repetitive tasks, and reducing the time needed to get your story up and running, production teams are given more time to explore creative ideas. This is how AI helps push the boundaries of visual storytelling.”

AI has already become integral to virtual and augmented reality (VR/AR) production environments. In Vizrt’s virtual sets, AI enhances realism through dynamic lighting and presenter interaction with digital spaces. In sports broadcasting, it simplifies complex tasks such as keying and player cutouts, enabling broadcasters to deliver immersive analytics more efficiently.

“AI-powered calibration also reduces the cost and complexity of creating AR graphics, giving production teams more time and lowering the barrier to entry for creating stunning visuals,” Black added. “Creators can now produce richer, more engaging content, whether it’s a football analysis segment or a live studio show.”

The benefits of these AI-driven workflows extend far beyond mere production speed. For media organisations, they unlock new ways to deepen audience engagement, accelerate decision-making, and open entirely new revenue streams.

The key is personalisation at an unprecedented scale. 

“AI opens up new revenue opportunities by enabling personalised experiences at scale,” Rao said. By analysing real-time data, from social platforms to search trends, AI helps organisations understand audience preferences and dynamically tailor content. This could mean creating multiple ad variations for different platforms, experimenting with niche programming, or even offering premium subscription tiers built around personalised experiences.

For instance, Adobe GenStudio for Performance Marketing provides insights that show which video ads are performing best and why, allowing marketing teams to “optimise marketing and media spend, ensuring every asset contributes to stronger ROI,” said Rao.

She further highlighted the Central Provident Fund Board (CPFB) in Singapore, which used Adobe Experience Manager to revamp its content delivery. For a government agency where clear and prompt communication is crucial to maintaining public trust, the outcomes were remarkable: content turnaround was cut from three days to “instantaneous publishing,” which contributed to a 10% increase in open rates for its educational messages.

Adobe has also implemented AI within its own marketing operations. Using Adobe Mix Modeler, the company improved its understanding of how earned, owned, and paid media channels drive engagement. The outcome: an 80% increase in return on media spend and 75% growth in the contribution of media to digital subscriptions over five years.

Perhaps the most profound shift lies in the democratisation of creativity. Historically, only large studios with deep pockets have been able to afford the technology and manpower necessary for ambitious projects. With AI, those barriers are falling away.

“This democratisation means that creatives will have access to powerful experimental models that can unlock entirely new forms of storytelling. The rise of lightweight, efficient models means these capabilities won’t be limited to large studios with vast resources,” Rao noted. “Independent filmmakers, designers, and even small media teams will be able to harness AI to create at scale, iterate quickly, and bring high-quality productions to market faster.”

Vizrt’s Black agreed, noting that the coming decade will bring transformative changes to live production and multimedia storytelling. “Generative AI in video will be a highly disruptive technology for content creators,” he said. “It will enable creators to produce high-quality, story-driven content without relying solely on real-world footage. This shift will redefine how stories are told.”

He also foresees the rise of AI assistants, like Copilot in Microsoft tools, embedded throughout the production chain. “In Vizrt Artist, for example, a Copilot-style assistant could help build complex scenes, adjust lighting, and create realistic virtual sets with ease. Journalists will use AI to craft compelling narratives, optimise timing, and build impactful rundowns.”

Within one to two years, Black expects broad adoption of these tools. In five to 10 years, many production processes could be fully automated, with AI-generated presenters enabling content delivery in multiple languages and formats, offering personalised storytelling at scale. 

Looking ahead, Rao sees AI’s influence on media creation accelerating along two key paths. The first is a “dual trajectory” of AI development. “Large-scale open-source models will continue to push the boundaries of experimentation,” she predicted, “while on the other hand, smaller and more efficient models will make advanced AI more accessible, cost-effective, and easier to integrate into everyday creative workflows.” 

The second, more transformative trend is the rise of agentic AI systems. These are not just reactive tools that respond to prompts but autonomous systems that can anticipate needs, make decisions, and learn from context. It is a shift from wielding a specialised power tool to relying on an intelligent project foreman who orchestrates the entire workflow with foresight and autonomy.

“Their ability to autonomously anticipate needs, make decisions, and learn from context will transform everything from managing complex broadcast workflows to delivering personalised viewing experiences,” Rao explained.

The ultimate vision, she concluded, is AI that could “autonomously oversee end-to-end content creation, from ideation to distribution, while continuously learning from audience behaviour to refine storytelling in real time.”
