Industry Highlights

What Comes After 8K? Rethinking Displays with 6P Color

December 10, 2025 (updated December 12, 2025)

By: Gary Mandle, 6P Color Board Member

What’s Next?

Display technology has been advancing rapidly since the discontinuation of CRTs and the shift from analog to digital signal systems. We’ve seen resolution go from a 525-line analog design to 720p, then 1080p, 4K, and now 8K. We’ve watched display brightness advance from the 100-nit norm to 1000-nit displays and beyond. Color gamuts have widened from the old CCIR 601 days to the much larger ITU-R BT.2020 standard.

So, what do we do next to entice customers to make a new display purchase?

A small company with a big vision in Waco, Texas, called 6P Color might have an answer.

The limits of today’s technology

While we’ve seen many advances with wider color gamuts, the standards sometimes outpace reality. This goes back to the original gamut design of NTSC and is exhibited again in the latest ITU-R BT.2020 standard: more than a decade after its publication, very few displays can achieve it, and the ones that do are enormously expensive. At some point manufacturers will get there, but isn’t there an easier way? One proposal is to move beyond the typical set of RGB primaries and add several more colors to increase gamut size.

Why multi-primary hasn’t worked

This is not a new idea, but past attempts haven’t amounted to much more than experimentation; even Sharp’s Quattron LCD technology, the most prominent commercial effort, was not a lasting success. Yet multi-primary displays have several advantages. One straightforward way to expand the gamut is to add primaries in regions of the spectrum that a conventional RGB set can’t reach. Adding more primaries can also lower power consumption, since you can light colors with higher optical efficiency than an RGB set. More primaries give you the option of using them all or only a selected few. For flesh tones, for example, instead of using a combination of red and green with some blue, you can light just the red and yellow primaries. Yellow, having a higher emission efficiency, draws less power.

The core problem is that the input signal doesn’t carry data for the extra colors, so the display must fabricate it. No one is interested in color information that isn’t really part of the picture. You could modify the signal system to add the extra data, but retrofitting the entire ecosystem is not remotely practical. Then come further complications in standardizing any new gamuts: which colors, and where in the spectrum? Things quickly spiral out of control.

A new signal approach

To make this successful, we need to rethink how we move image data. Looking around at neighboring industries gives us some hints.

Assume that for a file or a video stream, the buckets of RGB or YUV data are just that: numbers in a signal that could mean anything, depending on how you choose to interpret them.

First, we can look at what digital cinema is doing. When DCI first documented its recommendation, it specified that the file must be encoded as XYZ, not RGB or YUV. The reason was to future-proof the media: as display technology improved, no content would have to be reauthored.

Using a coordinate system instead of an RGB system allows any color to be encoded, not just a specific set of three colors. Think of this as defining a color within a three-dimensional volume. With an RGB signal system, properties such as white point and gamut size are not defined in the signal, so you must guess what was originally used; the display simply interprets the data according to its own settings. All we know is that there is an R, a G, and a B value that the display will have to map.

In the case of XYZ, we tell the display that the color sits at a specific three-dimensional coordinate, and the decoder maps it to what the display is capable of. This also removes the question of which white point was used during authoring, since an XYZ encode must include that adaptation. XYZ does have a significant practical drawback: today’s cinema implementation is essentially a closed system. The server expects a very specific format, can tolerate large data sets, and the XYZ encoding requires 4:4:4 sampling because each channel carries luminance information.
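To make the coordinate idea concrete, here is a minimal Python sketch converting an sRGB triplet into CIE XYZ using the standard IEC 61966-2-1 transfer function and matrix. This is standard colorimetry offered for illustration, not part of 6P Color’s system:

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer function (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """Convert an sRGB triplet (components in 0..1) to CIE XYZ (D65)."""
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    # Standard sRGB-to-XYZ matrix, D65 white point
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

# Full white (1, 1, 1) lands on the D65 white point, about (0.9505, 1.0, 1.089)
```

Once a color is expressed this way, it no longer depends on which three primaries the display happens to use, which is exactly the property the DCI encoding exploits.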

As we move to broadcast, this level of sampling can’t be accommodated. In streaming, where bandwidth is everything, even a 4:2:2 sampling structure is avoided.

How 6P Color’s system solves these core problems

Let’s look at the coordinate system. The key is to have a separate luminance channel (just like a YUV system), with a color description of two coordinates that carry no luminance. Initial designs used a Yxy encoding; this has since been superseded by a differential encoding method that improves compression efficiency. These three components can be used in any signal system, with any sampling method, at 8-, 10-, or 12-bit depths. They can substitute directly for the RGB data already configured in any signal transport. HDMI, SMPTE SDI, and Ethernet standards can all accept this, since it’s just data.
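The Yxy split mentioned above is standard CIE colorimetry: Y carries all the luminance, while x and y are pure chromaticity coordinates. The production system’s newer differential encoding isn’t detailed publicly, so this sketch shows only the baseline Yxy form:

```python
# D65 white point chromaticity, used as a fallback for black (an assumption;
# conventions vary)
D65_X, D65_Y = 0.3127, 0.3290

def xyz_to_yxy(x_, y_, z_):
    """Split CIE XYZ into luminance Y and chromaticity (x, y)."""
    total = x_ + y_ + z_
    if total == 0.0:
        # Black has no defined chromaticity; fall back to the white point
        return 0.0, D65_X, D65_Y
    return y_, x_ / total, y_ / total

def yxy_to_xyz(lum, cx, cy):
    """Recover XYZ from a Yxy triplet."""
    if cy == 0.0:
        return 0.0, 0.0, 0.0
    x_ = cx * lum / cy
    z_ = (1.0 - cx - cy) * lum / cy
    return x_, lum, z_
```

Because only the Y component carries luminance, the two chromaticity channels can be subsampled more aggressively, which is why this layout drops into existing 4:2:2 or 4:2:0 transports where XYZ cannot.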

With this signal method, several issues become moot. The first is gamut. Since the signal can define any color in a 3D space, it can drive any primary; all possible colors can be encoded. This makes it backward compatible with any RGB system and opens the door for manufacturers who want to develop a multi-primary display.

Let’s take this one step further. Right now, we have several EOTFs (electro-optical transfer functions) that the display must conform to, along with an extensive metadata set describing which one to use and how to implement it. This is a legacy method that originated in our analog days, before digital cinema and television, when cameras and displays used inherently nonlinear tubes. A gamma was built into the system based on the response of those devices. The tubes are long gone, and the OOTF (optical-to-optical transfer function) system has been greatly refined, but there are still many OETF and EOTF designs to address.

Let’s look around again to see if there’s a different way to do this.

Currently, media is formatted on the assumption that the display will conform to the parameters of a specific standard, which forces many copies of the same content: one version for television, another for cinema, and several more for streaming. All of this stems from the assumed limitations of the display that will be used.

Then we layer on HDR, whether PQ or HLG, plus our cinema standards (P3/PQ/2.6), and so on. This increases the cost of content creation, complicates the signal distribution system, and in many cases introduces color and brightness errors for the viewer.

What if our new format also included a method for displaying both SDR and HDR from the same stream? And what if this new system could deliver 16-bit words to the display regardless of the signal transport and bit depth?

Sounds far-fetched.

Right now, file formats and signal paths transport image information using an optical gamma. If we want to watch the content with a different EOTF, we need a conversion process. This process must manage all three channels since they all include a luminance component. But in the 6P Color system, only one channel includes luminance. So right away, signal process overhead is reduced, which will also reduce processing needs and display costs.

Let’s look at this a bit closer.

Why do we send content with an optical transfer function? We are no longer sending images as analog. Why not use a method that optimizes data packing to the bit level of the signal stream?

This is another component of the 6P Color system. Image data is converted during encoding to a linear data set. Nonlinearity is applied not as an optical property, but to maximize data recovery during decoding. Different nonlinearities are used based on the signal bit depth. This allows a 16-bit word to be sent through a data stream or saved as a file at a bit depth as low as 8 bits. On the decode side, an inverse nonlinearity is applied, allowing the delivery of a complete set of three 16-bit linear words to the display. In any signal path, an EOTF can then be applied directly to the image with no additional conversion, since the display is treating this data as linear.
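6P Color has not published its actual curves, so the gamma values and function names below are purely illustrative assumptions. The sketch shows the general companding technique the paragraph describes: a depth-dependent nonlinearity packs a 16-bit linear word into fewer bits, and the inverse recovers a close approximation on decode:

```python
# Illustrative exponents only; the real system's per-depth nonlinearities
# are not public.
GAMMA_BY_DEPTH = {8: 1 / 2.4, 10: 1 / 2.0, 12: 1 / 1.7}

def pack(linear16, bits=8):
    """Compand a 16-bit linear code (0..65535) into a lower bit depth."""
    gamma = GAMMA_BY_DEPTH[bits]
    norm = linear16 / 65535.0
    return round((norm ** gamma) * ((1 << bits) - 1))

def unpack(code, bits=8):
    """Apply the inverse nonlinearity to recover a 16-bit linear word."""
    gamma = GAMMA_BY_DEPTH[bits]
    norm = code / ((1 << bits) - 1)
    return round((norm ** (1.0 / gamma)) * 65535.0)
```

The round trip is lossy, but a well-chosen curve concentrates the available codes where precision matters most, which is how a nominally 8-bit channel can still deliver a usable 16-bit linear word to the display’s processing stage.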

6P Color has combined this system into three product offerings.

The first is a new encoder that can bring in camera and computer RGB imaging or spectral information and encode it into an .FCR file. This file format has been tested with several compression standards and includes all color information as well as an OETF process for the FCR system. The file can also deliver the FCR data package to any signal transport, such as HDMI, DisplayPort, SMPTE SDI, and/or SMPTE ST 2110 Ethernet.

The second product is a color engine that resides in the display. This color engine delivers three 16-bit words to the display process. In this process, a characterization step maps input data to the display’s native color gamut. This mapping can be for a typical RGB display, a monochrome display, or even a multi-primary display. Key to this process is the ability to map to nonstandard colors. This allows loosening color filter tolerances or broadening LED binning requirements while still maintaining an accurate color gamut, or tailoring the display design to a manufacturer-defined gamut.

A third product is called the CMaaS™ system. This is a cloud-based system that uses machine learning to develop maps from the decoder to the native display primaries. This is a dynamic system, not anything like a LUT. It creates a map for each subpixel, calibrating to a value specified by the manufacturer. This can be updated at any time, even after the display has been delivered to the customer. In addition, other mapping is possible in which the creative can define a look, and the data is sent to the display, which then modifies the calibration to reflect what was intended. This includes changes to the color applied to individual commercials. If an advertiser desires a specific look, it can be applied dynamically and reverted to the original look once the commercial has finished. For new feature programs, the content owner/creator can specify a specific look. This can be used as an extra paid service or to differentiate a particular program.

What does adaptation look like?

All in all, this would be a significant change to how imaging works, yet it fits within the current infrastructure. For content, it would simply mean adopting the file format. For streaming and other distribution methods, most likely just a firmware/software update. On the display side, any technology that uses a white emitter, such as WOLED, or QD emitters that can be tailored to any color, can easily be switched from an 8K RGB design to a 4K six-color design. Even microLED can be improved by using colors that are easier to make: binning becomes much easier and more cost-efficient. Since the display’s input can be mapped using the CMaaS system, specific filter or emission color tolerances can be relaxed, and even the color gamut spec can be opened up or removed.

What This Means for Manufacturers, Creatives, and Consumers

For the display manufacturer, a new line of displays could be offered for entertainment, gaming, and other applications. Color-critical displays can be produced at lower cost, enabling the expansion of these markets. Medical displays can deliver more accurate imaging, which gives marketing a whole new message.

For content distribution, a single file is all that’s required to play on any display. The same file or stream can feed an old RGB plasma or the latest multi-primary display on the market. Metadata management is now much simpler. No changes are required to the stream format. A simple firmware update to the customer’s receiver or TV is all that’s needed.

And for content creators, this now enables a “single master” workflow.

6P Color plans to exhibit at CES in January 2026, demonstrating the working system and its advantages for both RGB and multi-primary displays.

Contact Bob Scaglione at bob.scaglione@6pcolor.com for more details.


About the Author:

Gary Mandle is a Board Member at 6P Color and a veteran display engineer known for advancing Sony’s professional CRT, LCD, SXRD, CLED, and Emmy- and SciTech Award–winning OLED technologies. He has authored numerous technical papers, holds multiple imaging patents, and serves as SMPTE Western Area Governor.
