One Stream to Rule Them All: Navigating the HDR Journey
MainConcept’s Frank Schönberger on How HDR Is Reshaping the Economics of Video Delivery
Introduction
Codec technology sits at the foundation of every video experience, from a live Super Bowl broadcast to a decade-old sitcom served up by a free streaming platform at two in the morning. MainConcept has been building and licensing codec software for more than three decades, supplying the technical infrastructure that broadcasters, over-the-top platforms, and production companies depend on to encode, package, and deliver content across a constantly expanding universe of devices and formats. The company plays a critical role in helping the ecosystem navigate the intensely competitive business of delivering digital entertainment to consumers, balancing the push to innovate against the need to protect technology investments made by industry and individuals alike.
High Dynamic Range video has arrived as a genuine differentiator in a crowded market. According to Global Market Insights, the global HDR market was valued at $41.79 billion in 2024 and is projected to grow at a compound annual rate of 27.2% over the next decade. Yet the infrastructure built to deliver premium HDR content must still accommodate hundreds of millions of legacy devices that cannot display it. Frank Schönberger, senior product manager at MainConcept GmbH, sat down with BizTechReports to explain what it takes to bridge that gap, and why the solution may be simpler than the industry once assumed.
Here is what he had to say:
Q: MainConcept has been in the codec business for more than three decades. How does that depth of experience shape the way you approach a technology shift like HDR?
Frank Schönberger: It gives you perspective that younger companies simply do not have. We have watched the industry move through analog to digital, standard definition to high definition, file-based workflows to streaming, and now this HDR transition. Each of those shifts created winners and losers, and the difference usually came down to whether a company could support the old infrastructure while building for the new one. That is essentially the same challenge we face today. Our customers cannot abandon their SDR viewers overnight. They need solutions that allow them to move toward HDR without leaving legacy audiences behind, and without doubling their operational costs in the process.
Q: Premium content has always been a differentiator, but you seem to be saying the stakes are higher now. Why?
Schönberger: The number of streaming services competing for subscriber attention has grown dramatically. At the same time, broadcast is losing ground to streaming in most markets. So you have a situation where everyone is fighting for the same viewer, the same subscriber dollar, the same advertising spend. Premium content, and the technology that delivers it at its best, becomes a real competitive lever. HDR is part of that. When it works well, it is genuinely better. Brighter whites, deeper blacks, more vivid colors. It is a more realistic picture. Viewers who have experienced it on a quality display notice the difference, and that matters to the operators paying for it.
Q: HDR is not a single standard. There are at least six formats in active use. How does MainConcept manage that fragmentation?
Schönberger: That fragmentation is one of the more difficult aspects of the current landscape. You have HDR10, HDR10+, Dolby Vision, Hybrid Log-Gamma, PQ, and Advanced HDR by Technicolor, and each of them has different technical requirements and different levels of device support. Our approach is to build support for all of those formats into our encoder software development kits (SDKs) so that our customers are not forced to make an either-or decision. We also have a dedicated universal color space converter that handles the transformation between formats, including conversion back to SDR when needed. The goal is to give operators the flexibility to work within whatever HDR ecosystem their distribution partners and device manufacturers require.
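The converter Schönberger describes is proprietary, but the kind of math it rests on is public. The PQ transfer function (SMPTE ST 2084), which underlies HDR10 and Dolby Vision, maps a normalized code value to absolute luminance in nits. A minimal sketch of that published curve, using the standard's own constants:

```python
# SMPTE ST 2084 (PQ) constants, as defined in the standard
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal: float) -> float:
    """Map a normalized PQ code value in [0, 1] to luminance in nits."""
    p = signal ** (1 / M2)
    num = max(p - C1, 0.0)
    den = C2 - C3 * p
    return 10000.0 * (num / den) ** (1 / M1)

def pq_inverse_eotf(nits: float) -> float:
    """Map luminance in nits back to a normalized PQ code value."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2
```

A code value of 1.0 lands exactly on the format's 10,000-nit ceiling, and a mid-scale value of 0.5 corresponds to roughly 92 nits, which is why PQ can devote so much of its code range to shadow and midtone detail. A format converter chains curves like this one with the matching color-primary transforms for each target format.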
Q: The industry has largely been running dual workflows, one for SDR and one for HDR. What does that actually cost an operator?
Schönberger: It doubles everything. You have double the hardware, double the maintenance, double the operational overhead. For a broadcaster with a large content catalog and multiple distribution channels, that is a significant ongoing expense. And it creates workflow complexity that introduces risk. More moving parts means more opportunities for something to go wrong, especially in live production environments where you have no margin for error. The industry has understood for some time that the goal should be a single pipeline that serves both SDR and HDR audiences from one stream. The question has always been how to get there without degrading quality at either end.
Q: That is where Advanced HDR by Technicolor enters the picture. How does it address the single-stream problem?
Schönberger: Advanced HDR by Technicolor is genuinely well-suited to this challenge because of the way it was designed. It uses embedded metadata and machine learning to deliver a single stream as HDR on devices that support it, while automatically falling back to SDR on those that do not, protecting investments in legacy infrastructure without sacrificing image quality on modern displays. And the machine learning component is important. Traditional SDR-to-HDR conversion relied on static lookup tables, which produced inconsistent results and compressed the HDR signal in ways that undermined the whole point of using HDR in the first place. The adaptive approach in Advanced HDR by Technicolor adjusts on a per-frame basis, which is what you need for live content where lighting conditions are constantly changing. A stadium broadcast at noon looks nothing like the same stadium at dusk, and a static conversion cannot handle that transition gracefully.
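The contrast between a static lookup table and per-frame adaptation can be sketched in a few lines. The curve and the way its exponent is chosen below are hypothetical, not the Advanced HDR by Technicolor algorithm; the point is only that the expansion parameters are recomputed from each frame's own statistics rather than fixed once for the whole program:

```python
def frame_average(luma):
    """Average picture level of one frame (luma normalized to [0, 1])."""
    return sum(luma) / len(luma)

def expand_frame(luma, sdr_white=203.0, hdr_peak=1000.0):
    """Illustrative per-frame SDR-to-HDR expansion (hypothetical curve).

    The highlight-expansion exponent is derived from the frame's average
    picture level, so a noon shot and a dusk shot of the same stadium get
    different curves instead of one static lookup table."""
    avg = frame_average(luma)
    gamma = 2.0 + 4.0 * (1.0 - avg)   # darker frames -> steeper highlight curve
    return [sdr_white * v + (hdr_peak - sdr_white) * (v ** gamma) for v in luma]
```

Peak SDR white still maps to the display's HDR peak, but how aggressively the midtones and highlights are stretched now tracks the scene, which is what a static table cannot do.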
Q: Free ad-supported streaming services carry enormous libraries of older content. What specific challenges does that create for HDR delivery?
Schönberger: The scale and inconsistency of legacy content are core issues. When you have a catalog built over decades, you are dealing with content that was produced under completely different technical standards. Different color spaces, different gamma curves, different aspect ratios, varying levels of metadata, some of it analog in origin. There is no single conversion approach that handles all of that uniformly. Each piece of content may need a different treatment to look correct in an HDR environment. And the operational complexity compounds when you factor in live scheduling and dynamic ad insertion, because you may be switching between content from completely different eras and technical backgrounds within a single viewing session. The codec infrastructure has to absorb all of that variation and produce a consistent output stream.
Q: The advertising model is the revenue engine for FAST services. What happens when an HDR program is interrupted by an SDR commercial?
Schönberger: It is a real problem, and it goes beyond aesthetics. When a viewer is watching premium HDR content and an SDR advertisement appears, there is an immediate and jarring shift in brightness and color quality. The viewer notices. In some cases, it looks like something has gone wrong with the broadcast. For an operator who has positioned their service around a premium viewing experience, that is a credibility problem. The ad has to match the quality of the content surrounding it. Our encoders handle that conversion in real time, translating an incoming SDR advertising feed into HDR output without introducing a visible break. The viewer should only notice the ad, not the technical transition.
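The splicing logic Schönberger describes happens per frame inside the encoder, but its shape can be sketched at the segment level. The segment dictionaries and the stand-in converter below are illustrative, not MainConcept's API: the idea is simply that anything arriving in an SDR transfer function is up-converted before it reaches the unified HDR output:

```python
def unify_timeline(segments, sdr_to_hdr):
    """Produce one HDR output timeline from mixed content and ad segments.
    Sketch only: a real encoder does this per frame, not per segment dict."""
    out = []
    for seg in segments:
        if seg["transfer"] != "pq":          # e.g. a BT.709 SDR ad splice
            seg = sdr_to_hdr(seg)            # up-convert before encoding
        out.append(seg)
    return out

# Stand-in converter: tag the segment as PQ and note the conversion.
def sdr_to_hdr(seg):
    return {**seg, "transfer": "pq", "converted": True}

timeline = unify_timeline(
    [{"id": "movie", "transfer": "pq"},
     {"id": "ad", "transfer": "bt709"},
     {"id": "movie", "transfer": "pq"}],
    sdr_to_hdr,
)
```

Every segment leaves the pipeline in the same transfer function, so the viewer sees an ad break, not a brightness glitch.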
Q: Where does HDR fit in the emerging world of augmented and virtual reality?
Schönberger: It is a natural fit. The whole premise of immersive content is that you want the viewer to feel present in what they are watching. HDR contributes directly to that by extending the contrast ratio and color palette beyond what SDR can produce, which means the image is closer to what the human eye actually perceives in real life. As wearable devices improve, the display technology will increasingly be capable of rendering HDR content in ways that are genuinely immersive. The production and delivery infrastructure needs to be ready for that. Dynamic HDR, which adjusts image parameters on a per-frame basis rather than applying a single static setting across a whole program, is where that is heading. It is more computationally demanding, but the combination of better processing hardware and AI-assisted automation is making it practical at scale.
Key Takeaways
The HDR transition is not simply a display technology upgrade. It is a fundamental reorganization of how video content gets produced, packaged, and delivered across an increasingly fragmented device landscape. For operators managing legacy content libraries, dual SDR and HDR workflows, and advertising feeds that may arrive in any format, the codec layer is the critical point of integration.
Schönberger’s central argument is that the industry no longer needs to accept the cost and complexity of parallel pipelines. Solutions like Advanced HDR by Technicolor, now available as a MainConcept SDK and already deployed by broadcasters operating under ATSC 3.0, make single-stream delivery viable today. The technology handles the SDR-to-HDR conversion in real time, preserves metadata integrity through the encoding process, and adapts dynamically to the kind of variable content that defines modern broadcasting. For an industry navigating simultaneous pressures from subscriber competition, advertising revenue requirements, and the approaching demands of immersive media, that combination of efficiency and quality may prove to be the most consequential codec development in a generation.

