On Feb. 10, 2015, the co-founders of SSIMWAVE published a paper introducing a revolutionary concept called SSIMPLUS, one that would reinvent the way video quality is measured.

Today, SSIMPLUS is the core of a suite of products that have become the go-to tools used by the video production and distribution industry to ensure that the highest-quality video reaches viewers in the most cost-effective fashion.

SSIMPLUS’ genius is that it can predict what the video consumer will see, and can do so at every step in the video delivery chain. That gives video engineers the ability to monitor and troubleshoot as they transmit images to end-viewers – ensuring that the person sitting in his den watching a hockey game on a 50-inch display, or the one watching on a cell phone in a coffee shop in northern Ontario, gets the highest-quality product possible.

“What we do is make people happy,” says Dr. Abdul Rehman, SSIMWAVE’s CEO and co-founder. “Whether we’re talking about the people who create video content, or the people who watch it, or everyone in between who is responsible for its delivery. We help make sure that a video product is the best that it can be.”

SSIMPLUS, designed for the complexities and demands of the modern video era, grew out of an algorithm known as “SSIM,” a concept published in 2004 by a group of researchers and university professors that included Dr. Rehman’s colleague, Dr. Zhou Wang, SSIMWAVE’s eventual co-founder and Chief Science Officer.

“SSIM,” an open-source algorithm, was itself a breakthrough concept. It described a new idea known as “structural similarity,” which was designed to address the limitations and shortcomings of existing video quality metrics.

“Back in 2002 we started having discussions around the fact that we had metrics to measure image quality, but none of them were impressive or good at predicting scores,” says Dr. Wang.

Up until that point, video quality assessment depended on a hodgepodge of techniques, many dating back to the 1970s or earlier. Among them: video bitrate; bitrate error; packet drop rate; network delay; peak signal-to-noise ratio, or PSNR; and mean square error, or MSE.

Each metric told only part of the video quality story and could only roughly approximate the end-viewer’s quality of experience. In some cases they were worse than unreliable: the numbers they produced simply didn’t reflect what a viewer actually saw.
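Two of those legacy metrics, MSE and PSNR, are simple enough to sketch in a few lines of Python. This is a generic textbook illustration, not code from the SSIM paper or from SSIMWAVE’s products; note that both formulas are pure pixel arithmetic and carry no model of what a human actually perceives.

```python
import numpy as np

def mse(x, y):
    """Mean square error between two images (as float arrays)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean((x - y) ** 2))

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio in decibels, for 8-bit images."""
    err = mse(x, y)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

# Example: a uniform brightness shift of 10 levels on an 8-bit image
original = np.full((64, 64), 128.0)
shifted = original + 10.0
print(mse(original, shifted))             # 100.0
print(round(psnr(original, shifted), 1))  # 28.1
```

A barely noticeable brightness shift and harsh random noise can produce identical MSE and PSNR values, which is precisely the gap between these numbers and the viewer’s experience described above.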

But that changed in 2004, when the SSIM paper, and the algorithm it described, was published. Its arrival heralded a more dependable way to measure video quality.

“It’s basically the starting point of a new philosophy of how you measure image quality,” says Dr. Wang, adding that the paper has been cited more than 15,000 times and, due to a decision to put the source code online, has spawned many other papers and other similar solutions.

“SSIM is kind of milestone work,” says Dr. Wang. “Before that, [this capability] wasn’t something people thought was realistic.”

The SSIM index uses signal processing algorithms that simulate the behavior of the human visual system. It marked the beginning of predicting video quality from the viewer’s point of view.

“Suddenly you have something very simple, but at that point way better than any other option,” says Dr. Wang. “The previous methods were more of a bottom-up approach. SSIM was the first top-down approach.”
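That top-down idea can be made concrete. The sketch below is a simplified, single-window version of the 2004 SSIM index – real implementations compute it over local, Gaussian-weighted windows and average the results, and this is an illustration rather than SSIMWAVE code (the stabilizing constants follow the published paper). It shows why SSIM behaves differently from MSE: a uniform brightness shift and structure-destroying noise with exactly the same MSE receive very different SSIM scores.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Simplified SSIM: the 2004 index computed once over the whole
    image instead of averaged over local windows."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the paper
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
img = 128.0 + rng.normal(0.0, 5.0, size=(64, 64))   # low-contrast "image"
brighter = img + 10.0                                # uniform shift
noisy = img + rng.choice([-10.0, 10.0], img.shape)   # +/-10 random noise

# Identical MSE for both distortions:
print(np.mean((img - brighter) ** 2), np.mean((img - noisy) ** 2))  # 100.0 100.0

# But SSIM separates them: the shift barely matters, the noise
# wrecks the structure term.
print(ssim_global(img, brighter))  # near 1.0
print(ssim_global(img, noisy))     # much lower
```

The brightness shift only perturbs the luminance term, while the noise collapses the covariance between the two images – a rough stand-in for the "structural" information the human visual system is tuned to.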

(Industry users who adopted the technology found it to be so useful that in 2015 its creators, including Dr. Wang, won a Primetime Emmy Award for their contributions to engineering.)

The problem with SSIM, however, was that it was unable to contend with the rapidly evolving complexities of its era. Launched in the same year as Motorola’s Razr, a product that generated excitement but was quickly eclipsed, SSIM emerged during a period of sweeping and fundamental technological advancement. At that time, tube TVs were in the process of being replaced wholesale by digital and high definition displays. The BlackBerry was about to be supplanted by the iPhone, and the amount of video delivered over the Internet, in part because of widespread adoption of the smartphone, was growing exponentially. So, too, were demands for quality.

“Video was evolving, but the [SSIM] metric did not have the capability to answer or provide good quality measurement adapted to all these changes,” says Dr. Rehman.

“We needed to make the quality measurement so generic that it would support all these variations and answer the impact of those changes on viewer experience.

“So we kept doing research.”

In 2009, Dr. Rehman began his PhD studies under Dr. Wang. He gathered feedback from industry users and set out to improve SSIM, aiming to create a product capable of dealing with the digital transformation then underway.

The result, in 2015, was SSIMPLUS. It addressed all the shortcomings of SSIM, and was able to prove itself against the demands of a production environment.

Today, SSIMPLUS, unlike SSIM, is able to generate a quality Viewer Score for video streams of varying resolutions (1080p, 720p, 360p, etc.) that have been transcoded from a high-quality source video.

And SSIMPLUS, unlike SSIM, can distinguish the difference in the perceptual quality of the viewing experience generated by devices of different types and sizes. A cell phone versus a 50-inch living room display, for example.

And finally, SSIMPLUS, unlike SSIM, can handle the complexities of videos of differing dynamic ranges and resolutions, including HDR and 4K TV – the next big things in high-definition performance.

In short, it’s more accurate. It’s faster. It not only shows broadcasters and internet pipeline owners a report in real time about the quality of their video under transport, it’s capable of guiding their engineers and technicians to trouble spots and helping with load balancing – ensuring the proper amount of bandwidth is allocated, achieving cost savings while ensuring high quality.

“What SSIMPLUS primarily contributed was providing metrics for video delivery that are compatible with current needs,” says Dr. Rehman.

Unlike SSIM, SSIMPLUS is not open source. That, says Dr. Rehman, is deliberate: a decision made to ensure that upgrades and new functionality continue to be uniform and aligned with industry standards and SSIMWAVE’s own expectations.

“It’s about being accountable,” says Dr. Rehman. “It’s about taking responsibility to make sure it works perfectly. Industry wants cutting edge technology, and to provide that you need a company dedicated to ensuring the capabilities of the underlying product.

“We believe the solutions that the industry is looking for require one common metric that will help the whole delivery chain work in a coherent fashion so that the viewer experience is measured in a unified and accurate manner.”

“Yes, with SSIM we tried the open-source approach. And, yes, structural similarity got more popularity. But it did not really become the standard.

“Any modification or update or revision you do on top [of the existing software] has to work using the same philosophy we have used to build the metric. At the end of the day, you have to protect the integrity of your intellectual property.

“So we took responsibility and we standardized it.”

And what comes next for SSIMPLUS?

“That’s an interesting question,” says Dr. Rehman.

“Up until this point, video quality has been treated in a very intangible way – as if it’s something you cannot really measure. Everybody used to like to say, ‘We deliver as high quality as possible,’ as if it’s not something they can quantify.

“We’re saying, no, actually it is tangible. We are beginning by measuring it. The next step is to use the measurement to start controlling the video experience. You control the experience by looking at the huge amount of data you’re collecting across a huge variety of content, both live content and video on demand content, and say, I’m going to start making decisions for the viewer based on the quality measurement, and I’m going to start maturing viewer experience on a per-view level, or a per-network level, or a per-viewer level. Or any level you’re interested in.

“That means you’re building a brain, or intelligence, around quality measurement, to not only support quality but to start making decisions. We are focused on that hugely.

“That’s one of the directions we’re working on while we’re improving video quality measurement.”