
Eye on ROI: Challenges that will push broadcasters to adopt only fit-for-purpose technologies in 2026

By Dr Amal Punchihewa 

Declining advertising revenue and operational budgets, escalating cyber threats to broadcast infrastructure, the rise of fake news, talent shortages, and heightened market volatility are among the key forces driving transformation in the broadcast and media industry in 2026.

This article will offer observations, analysis, and recommendations to help the industry navigate this challenging terrain. While the obstacles are significant, this is not the first time the broadcast and media sector has faced such pressures, and lessons from past cycles can still offer valuable guidance today.

Financial pressures are reshaping technology investment across the industry. As streaming services struggle to achieve profitability, operators are cutting back on spending, while consolidation has led to the scaling down of redundant operations and services. Together, these forces are making cost reduction a primary driver of technology decisions.

It has long been understood that organisations should evaluate projects based on the total cost of ownership over their full lifecycle rather than upfront purchase price, a principle that now needs to be applied more rigorously than ever.
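To make the principle concrete, the minimal sketch below compares a hypothetical on-premises purchase with a subscription service over a seven-year lifecycle. Every figure is invented purely for illustration; the point is only that the cheaper upfront option is not necessarily the cheaper one over the full lifecycle.

```python
# Rough total-cost-of-ownership (TCO) comparison over a system's lifecycle.
# All figures are hypothetical and exist only to illustrate the principle
# that the lower upfront price is not always the lower lifetime cost.

def tco(upfront: float, annual_running: float, years: int) -> float:
    """Total cost of ownership: purchase price plus recurring costs over the lifecycle."""
    return upfront + annual_running * years

on_prem = tco(upfront=500_000, annual_running=60_000, years=7)       # hardware plus maintenance
subscription = tco(upfront=50_000, annual_running=140_000, years=7)  # setup plus yearly fees

print(f"On-premises 7-year TCO:  {on_prem:,.0f}")
print(f"Subscription 7-year TCO: {subscription:,.0f}")
```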

While cloud-based systems may offer lower initial costs, they can introduce substantial and often unanticipated operating expenses. Proprietary systems, meanwhile, typically require ongoing maintenance fees and offer limited flexibility. As a result, open standards and modular architectures are gaining favour for their ability to reduce long-term costs and minimise vendor lock-in.

Broadcasters should place greater emphasis on reliability and agility when investing in infrastructure, and avoid being locked into proprietary systems or exposed to unjustifiable price escalations in cloud workflows that deliver limited practical value from AI features.

The push for efficiency is accelerating a shift from costly, dedicated systems to software running on standard IT infrastructure. This transition lowers hardware costs and enables organisations to scale resources according to actual demand rather than peak capacity.

Return on investment (ROI) assessments are now central to equipment decisions, with infrastructure projects required to demonstrate clear operational savings or revenue potential to secure funding. While this scrutiny may slow some deployments, it helps ensure that adopted technologies deliver measurable value.
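As a rough illustration of the kind of gate such projects now face, the sketch below computes ROI and payback period for a hypothetical infrastructure proposal; the figures are assumptions, not data from any real deployment.

```python
# Minimal ROI / payback sketch for a hypothetical infrastructure proposal.
# Numbers are invented; the point is that a project must show measurable
# savings or revenue against its cost to clear the funding gate.

capex = 300_000           # upfront project cost
annual_benefit = 90_000   # operational savings plus new revenue per year
lifetime_years = 5

roi = (annual_benefit * lifetime_years - capex) / capex
payback_years = capex / annual_benefit

print(f"ROI over {lifetime_years} years: {roi:.0%}")
print(f"Payback period: {payback_years:.1f} years")
```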

Cloud services were surrounded by hype for many years, yet deployments remained limited. While public cloud services offer scalability and reduce upfront capital investment, their operational costs, particularly for bandwidth-intensive video workflows, have proven unpredictable. As public cloud costs are heavily driven by fluctuating data volumes, organisations handling large amounts of content are finding that cloud expenses can exceed those of owning and operating their own infrastructure.
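The sensitivity of public cloud bills to data volume can be shown with a back-of-the-envelope model. The per-terabyte rates, volumes, and on-premises figure below are assumptions for illustration only, not quoted prices; the sketch simply shows how fluctuating egress can push cloud spend past a fixed on-premises budget.

```python
# Back-of-the-envelope sensitivity of cloud costs to data volume.
# Rates and volumes are assumptions for illustration, not real price lists.

def monthly_cloud_cost(tb_stored: float, tb_egress: float,
                       storage_rate_per_tb: float = 20.0,
                       egress_rate_per_tb: float = 80.0) -> float:
    """Storage plus egress charges; egress tends to dominate for video-heavy workflows."""
    return tb_stored * storage_rate_per_tb + tb_egress * egress_rate_per_tb

fixed_on_prem_monthly = 15_000.0  # assumed amortised hardware, power, and staff

for tb in (50, 200, 800):  # monthly egress fluctuating with output volumes
    cloud = monthly_cloud_cost(tb_stored=300, tb_egress=tb)
    cheaper = "cloud" if cloud < fixed_on_prem_monthly else "on-prem"
    print(f"{tb:>4} TB egress/month -> cloud {cloud:>9,.0f} vs on-prem {fixed_on_prem_monthly:,.0f} ({cheaper} cheaper)")
```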

These cloud economics and operational realities are reshaping adoption strategies. While cloud-based workflows continue to evolve, deployments remain cautious, with adoption patterns shifting towards hybrid architectures that combine public cloud and on-premises resources.

Hybrid models allow organisations to leverage cloud resources for scalable tasks, such as preparing content in multiple formats for diverse platforms, while keeping core workloads on on-premises infrastructure. This approach balances cost predictability with the flexibility to handle variable demand.

Organisations are increasingly prioritising open standards and modular architectures, enabling components from different vendors to interoperate. This shift addresses the limitations of proprietary systems, which can restrict flexibility, hinder the addition of new capabilities, and lead to expensive upgrade cycles.

In broadcast and media innovation, containerisation has emerged as a key enabler of modern, scalable, and efficient workflows. Broadcast system developers often face the challenge of ensuring applications run reliably across diverse environments, from local laptops to staging and production servers.

Containerisation, a form of operating system virtualisation, addresses this by packaging an application and all its dependencies, such as libraries and configuration files, into a single, isolated, and executable unit called a container. This ensures consistency, so what works in development will also work in production.
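As a minimal sketch of the packaging idea, the example below uses the Docker SDK for Python to build an image from a directory containing a Dockerfile and then run it. The directory name, image tag, and command are placeholders, and the sketch assumes Docker and the docker package are installed; it is an illustration of the workflow, not a recommended production setup.

```python
# Conceptual sketch of containerisation using the Docker SDK for Python.
# Assumes Docker is running locally and the "docker" package is installed;
# the "./media-transcoder" directory and its Dockerfile are placeholders.

import docker

client = docker.from_env()

# Build a single, self-contained image: the application plus its libraries
# and configuration files, so it behaves the same on a laptop or in production.
image, build_logs = client.images.build(path="./media-transcoder",
                                        tag="media-transcoder:dev")

# Run the same unit anywhere a container runtime is available.
output = client.containers.run("media-transcoder:dev",
                               command="--probe sample.mxf",
                               remove=True)
print(output.decode())
```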

As someone who provides strategic guidance and leads broadcast and media initiatives across the Asia-Pacific region (APAC), I am pleased to note that the European Broadcasting Union’s (EBU) Strategic Programme on Media Infrastructures & Cybersecurity has published a comprehensive Reference Architecture for vendors seeking to align their systems with the Dynamic Media Facility (DMF) concept.

The Dynamic Media Facility Reference Architecture outlines a layered model for modern, software-defined production infrastructures, building on established layered-architecture concepts such as the Open Systems Interconnection (OSI) model.

At its core, the model enables users to make independent technology and product choices for each element of the infrastructure. Functions can be added as stateless, containerised media micro-services, deployed on-premises, at remote sites, or in the public cloud. A unified Media eXchange Layer (MXL) manages everything, providing high-performance, asynchronous communication between media functions.

The MXL standardises how media processing functions in containerised environments share and exchange data. By enabling open collaboration and shared memory, MXL is transforming modern production workflows.
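MXL defines its own SDK and data formats; purely to illustrate the underlying idea of media functions handing frames to one another through shared memory rather than copying packets, here is a toy Python sketch using the standard multiprocessing.shared_memory module. It is not the MXL API, and the frame dimensions and processes are invented for the example.

```python
# Toy illustration of shared-memory media exchange between two processes.
# This is NOT the MXL API; it only sketches the idea that a producer and a
# consumer can hand off frame data in place instead of copying packets.

import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

FRAME_SHAPE = (1080, 1920, 3)  # one uncompressed HD frame, 8-bit RGB

def producer(shm_name: str) -> None:
    shm = SharedMemory(name=shm_name)
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
    frame[:] = 128  # "render" a frame directly into shared memory
    shm.close()

def consumer(shm_name: str) -> None:
    shm = SharedMemory(name=shm_name)
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
    print("mean pixel value seen by consumer:", frame.mean())  # read without copying
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=int(np.prod(FRAME_SHAPE)))
    p = Process(target=producer, args=(shm.name,)); p.start(); p.join()
    c = Process(target=consumer, args=(shm.name,)); c.start(); c.join()
    shm.close(); shm.unlink()
```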

Open standards and specifications such as SMPTE ST 2110 for IP video, MXL for media exchange between functions, and the DMF reference architecture for cloud-native production enable different systems to exchange information and coordinate operations. This interoperability allows organisations to choose components based on specific needs rather than compatibility constraints. By the end of 2026, major broadcasters are expected to showcase fully realised DMFs, not just as conceptual architectures, but as measurable business engines.

The industry anticipates being able to quantify cost savings and new revenue generated by DMF-aligned, microservices-driven infrastructures, enabling the production of significantly more events at broadcast quality.

The standards also enable software-defined workflows, where functions are delivered as applications rather than dedicated hardware. This allows organisations to update capabilities via software and run multiple functions on shared infrastructure.

Interoperability is especially critical for organisations operating across multiple facilities or collaborating with partners. Standardised interfaces lower integration costs and enable workflows that span different systems.

MXL enables ultra-low-latency, high-throughput media exchange across distributed production environments. By bypassing CPU bottlenecks, minimising latency, and efficiently handling Ultra HD (UHD) and high dynamic range (HDR) signals across multiple trucks or remote teams, it has the potential to transform cloud-based workflows.

MXL allows multiple vendors, services, and applications to operate within the same compute environment, sharing memory directly rather than managing packets. This streamlines cloud productions, especially in public cloud environments, reducing overhead, lowering costs, and delivering far more predictable performance when integrating tools from different manufacturers.

Growing pressure from the public and government bodies is pushing broadcasters to address security, compliance, and data sovereignty issues. The EBU promotes media exchange standards that support security, compliance, and interoperability, and the industry is increasingly embracing MXL, the Media eXchange Layer of the DMF reference architecture.

Even smaller broadcasters must navigate these requirements, highlighting a strong need for education. The EBU plays a key role in this, particularly around networking fundamentals and zero-trust network security.

Deepfakes are audio-visual content generated or manipulated using AI to misrepresent a person or event. New generative AI tools make it possible to create highly realistic content, significantly lowering the barrier for anyone, even with modest technical skills, to produce deepfakes. 

Indeed, deepfakes are harming public figures, celebrities, political candidates, and ordinary people. They can humiliate or abuse victims, disproportionately women and girls, by falsely depicting them in non-consensual sexual acts, impersonate loved ones to facilitate financial scams, or spread disinformation to influence political and social opinion. To maintain trust in their services, the broadcast and media industry must adopt appropriate standards and tools.

Technical upgrades require organisational adaptation, yet many companies are not fully prepared. New technologies reshape workflows, decision-making, and organisational structures, and without corresponding operational changes, their full potential cannot be realised.

Software-defined workflows enable faster iteration and greater operational flexibility than hardware-based systems. However, they demand different decision-making processes, and organisations accustomed to long planning cycles must adapt to continuous modification and optimisation.

Adopting new technologies requires staff capable of implementation, maintenance, and operation. The current technical transition is already creating workforce challenges: traditional broadcast engineers require IT and networking expertise, while IT professionals must understand broadcast fundamentals. Operating IP-based systems demands skills that differ significantly from those required for baseband equipment.

Traditional broadcast engineers are skilled in video signal flow, synchronisation, colour science, and hardware operation, working with fixed signal paths. In contrast, IP-based systems demand expertise in network architecture, software configuration, cybersecurity, and IT management.

This transition is outpacing the evolution of workforce skills.

Broadcasters are facing a skills crisis: many are planning IP infrastructure upgrades while underestimating how quickly the gap between the skills of traditional broadcast engineers and those needed for IT-based systems is widening.

The change affects both implementation and ongoing operation. Organisations installing IP systems need staff who can configure networks, troubleshoot software, and integrate multiple platforms. Once operational, these systems demand different maintenance approaches than hardware-based facilities, as signal formats are fundamentally different.

The skills gap also shapes technology decisions. Organisations lacking IT expertise may delay transitions or select systems that require less specialised knowledge, limiting the benefits of new technologies and prolonging reliance on legacy infrastructure.

Education and training programmes are evolving to address the skills gap, but rapid technical change makes it difficult to stay current. While organisations are hiring from IT and retraining broadcast staff, the transition continues to create operational challenges.

In conclusion, successful broadcast and media technology transitions depend not only on the tools themselves but also on careful consideration of human factors in planning and implementation.
