Lower Latency Equals Higher Viewer Satisfaction, Lower Customer Churn

Andy Marken

Back in the day, David Pogue, the New York Times tech columnist who started around the turn of the century and recently returned to the paper, wrote a column about PCs in which he discussed all the hardware/firmware/software parts from all of the suppliers and marveled that the darn things even worked at all.

“You said, ‘A few vehicles in pursuit, maybe.’ We count THREE war parties!” – The Rock Rider Chief, “Mad Max: Fury Road,” Warner Bros., 2015

Now, video streams extremely efficiently over the Internet across broadband and wireless pipes to your TV, computer, tablet or smartphone – in your home, in the office/classroom, across the country or halfway around the world (assuming your choice of programming is not geo-locked) – and you don’t think twice about it until it isn’t there.

Then BAM! It is a problem.

Of course, you’re aware of all of the services – Amazon Prime, Netflix, HBO, Hulu, Apple TV Plus, Disney+. Then there’s CBS On the Go, YouTube, Pluto TV, FuboTV, Mubi, BFI, Curzon, BBC, Tencent, iQiyi, Alibaba, voot, Hotstar, Viki, Hooq and tons of others – but those are, for the most part, just “channels” that offer their services in specific countries.

First to Go – Nothing turns viewers off more than streaming video that constantly stutters and stops. If the service is continually sluggish, folks have a solution … cancel the service.

Most of these offerings rent bandwidth, service and support from infrastructure folks they never talk about in public – Akamai, Limelight, Alibaba, AWS, Tata, Huawei, Facebook, Microsoft, Google and other global organizations – who do all the hard work: building out the internet and local communications infrastructure and keeping things running so your shows/movies always look great.

So, now comes that time of year when we’re looking forward to chatting with these executives at IBC – especially those in the Future Zone – to find out how they ensure beautiful, efficient and stable streams on “their network.”

Don’t get us wrong, that stuff can be confusing next to the actual creative production of the content, but it’s the most vital part of the content creation/delivery services because if the content is slow in loading, voice/video aren’t in sync or images aren’t clean/crisp, viewers are … gone!

First, there are several “best” video file formats commonly used today:

  • AVI (Audio Video Interleave) and WMV (Windows Media Video)
  • MOV and QT (QuickTime formats)
  • MKV (Matroska format)
  • AVCHD (Advanced Video Coding, High Definition)
  • FLV and SWF (Flash formats)

Not all of them are compatible with every digital production/playback platform.

Instead, each is the “best” for different things so “your mileage may vary.”

So why should IBC attendees care and want to know more?

Round ‘n Round – Today’s consumers expect the same type of entertainment they had with their appointment TV. The spinning buffer wheel is a sure way to lose subscribers.

One word … buffering.

Recently Akamai put a dollar amount on what that *#*&%# spinning wheel costs content owners/distributors/services–$85,000 every time it shows up.

According to their user study, each rebuffering incident means one percent of the audience abandons the stream; after that, you do the math.

It’s not good!

Lost Revenue – While one percent abandonment per rebuffering incident may not sound like a lot, it can quickly translate into $85K-plus in lost income for the channel service.
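To make that arithmetic concrete, here is a minimal sketch of the churn model implied by the study: the one-percent-per-incident abandonment rate comes from the study cited above, while the starting audience size is purely hypothetical.

```python
# Back-of-the-envelope churn model: per the study cited above, assume
# each rebuffering incident drives one percent of the remaining
# audience away. The starting audience size is hypothetical.

def remaining_audience(viewers: int, incidents: int) -> int:
    """Viewers left after `incidents` rebuffering events at 1% loss each."""
    for _ in range(incidents):
        viewers -= viewers * 1 // 100   # integer math keeps the count exact
    return viewers

start = 1_000_000                     # hypothetical concurrent audience
print(remaining_audience(start, 10))  # 904387 left (~95,600 viewers gone)
```

Ten stalls in a stream and nearly a tenth of the audience is gone – which is why services chase fractions of a percent.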

Cutting it to zero is close to impossible in the streaming content world we live in today.

Akamai’s traffic grew 22,000-fold from a 1 Gbps stream in 2001 (the first Victoria’s Secret webcast which was forecast to break the internet) to 23 Tbps in June 2018 for a global football tournament.

Rebuffering can’t be easily solved because it can be caused by the ISP, the CDN, the user’s device/browser, Wi-Fi configuration, bandwidth, network traffic and even the content itself.

Content World – While the Internet was never designed for constant streams of data, people increasingly expect to view content anytime, anyplace and on any screen; video is expected to account for more than 80 percent of Internet traffic in the years ahead.

So, you minimize your exposure/loss by trying to keep rebuffering below 0.5 percent.

The neat thing is viewers are great at spotting quality issues and telling the world on platforms like Twitter.

If we didn’t get your attention, they will!

The key is to use the video format (codec, container) that’s best for your content/service to minimize buffering and latency.

Bigger Files – The visual difference between HD (high definition) and 4K content is easily seen – and preferred – but 4K is more difficult for streaming services to deliver because more image data must be delivered more rapidly. With 8K content on the horizon, technologists are seeking better ways to improve codecs to speed flicker-free content distribution.

As we all know, video files are big – huge, even – and they’re getting huger. While there’s still a lot of HD content being streamed, folks “know” they want 4K and HDR (high dynamic range), and 8K is just around the corner.

Even if you have strategically located unlimited storage and 5G bandwidth everywhere, file size is an issue that has to be dealt with using an optimum compression/codec.

The codec compresses/decompresses a video file for distribution/viewing.

They are either lossy or lossless. Lossy compression creates smaller files by leaving out some of the data (lower video quality), while lossless keeps all the file data, producing better-quality video and eliminating degradation from multiple saves – at the cost of, yes, larger files.

Sometimes you have to compromise between the highest quality video format and the smallest file size.
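As a rough illustration of why that compromise matters, the back-of-the-envelope file-size formula is simply bitrate × duration ÷ 8. The bitrates below are illustrative ballpark figures, not any service’s published specs:

```python
# Rough stream-size arithmetic: size (GB) = bitrate (Mbps) x seconds / 8 / 1000.
# The bitrates used here are illustrative ballpark figures, not service specs.

def stream_size_gb(bitrate_mbps: float, minutes: float) -> float:
    """Approximate data delivered for a constant-bitrate stream, in GB."""
    return bitrate_mbps * minutes * 60 / 8 / 1000

for label, mbps in [("HD, H.264", 5), ("4K, HEVC", 16), ("4K, older codec", 32)]:
    print(f"{label}: ~{stream_size_gb(mbps, 120):.1f} GB per two-hour movie")
```

The spread between the last two rows is the whole argument for better codecs: the same 4K picture at half the bits.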

Slow Transition – Video content production/delivery is slowly moving from HD to 4K to 4K HDR which requires more bandwidth-efficient codecs to provide consumers with flicker-free viewing. Services are slowly transitioning from H.264 codecs to efficient H.265 and other “options.”

Common video codecs are:

  • H.264 – the standard for HD content; smaller file size and high quality with lossy or lossless compression
  • MPEG-4 – commonly used for HD content; its newer Part 10 version is identical to H.264
  • DivX – older codec that emphasizes video quality at the cost of much larger file size
  • HEVC (H.265) – next-generation solution for 4K, lossy or lossless
  • AVI – strictly a container rather than a codec, but still widely encountered

Now that seemingly everyone is bypassing appointment TV “channels” for streamed content, consumers have increased their demand for OTT quality and expect the low latency they had when they were stuck with traditional broadcast TV.

The development of HTTP adaptive streaming (HAS) delivered part of the solution but suffered from end-to-end latency (time lag).

Tolerance Levels – Today’s OTT content viewers expect the “instant on” latency they had with their old appointment TV, even for real-time activities such as live sporting/concert streams. Online gambling viewers expect faster-than-fast image delivery.

“The industry is in the process of advancing ULL-CMAF (Ultra Low Latency-Common Media Application Format),” said Allan McLennan, Chief Executive of PADEM Media Group. He should know – he works with global organizations in developing/deploying multi-device IP networks and multi-platform video distribution. And they keep asking him back.

CMAF was standardized by the MPEG standards group back in 2017 and defines the fragmented MP4 container that holds all of the video, audio and text data.
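For readers who want to see what that container looks like on the wire, here is a minimal sketch of walking the top-level ISO-BMFF boxes of a fragmented MP4 stream (ignoring edge cases such as 64-bit box sizes); the sample bytes are fabricated for illustration:

```python
# CMAF media segments are fragmented MP4: a sequence of top-level
# "boxes", each prefixed with a 4-byte size and 4-byte type.
# Sketch only: real parsers also handle 64-bit sizes (size == 1).
import io
import struct

def list_boxes(data: bytes):
    """Yield (type, size) for each top-level ISO-BMFF box in `data`."""
    buf = io.BytesIO(data)
    while header := buf.read(8):
        if len(header) < 8:
            break
        size, btype = struct.unpack(">I4s", header)
        yield btype.decode("ascii"), size
        buf.seek(size - 8, io.SEEK_CUR)   # skip the box payload

# A CMAF chunk is typically a 'moof' box followed by an 'mdat' box;
# these fabricated headers carry no payload.
fake = struct.pack(">I4s", 8, b"moof") + struct.pack(">I4s", 8, b"mdat")
print(list(list_boxes(fake)))  # [('moof', 8), ('mdat', 8)]
```

That moof/mdat pairing is what lets a CMAF segment be split into independently deliverable chunks.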

McLennan emphasized that CMAF does nothing to reduce latency by itself; but when paired with the encoder – especially through the new MPEG-5 foundation – multiple CDNs and client behaviors, the overall system can enable superior and efficient low latency.

Chunky – With newer, more bandwidth-efficient encoders/decoders, content producers/streamers are delivering the video material cord-cutters and cord-nevers demand while minimizing/eliminating screen buffering. Increasingly, it isn’t a premium service … it’s expected.

He noted that the best way to achieve Ultra-Low Latency Streaming is by using Chunked-Encoded and Chunk-Transferred CMAF.

To reduce latency even further, the player can request a segment or download a packet of chunks and decode ahead before starting playback, which can reduce latency to less than 500 ms. Or the player can defer early playback and request added chunks as they become available.

Of course, this is a slight oversimplification, but it does provide lower latency even if the player isn’t fully optimized.

The benefit of chunked transfer is that the video segments are delivered with consistent timing regardless of the throughput between the client screen and the edge server. All the viewer knows is that she/he is getting smooth, consistent streaming video, so she/he doesn’t cancel the service.

Latency is really based on the viewer’s expectations. If you recall the old TV days, you would press the channel up/down button and a new show would appear.

Netflix has done a lot of work in improving its customer experience with streaming previews and starting streams at lower video quality until the network and cache catch up.

It’s better but still not the instant gratification people expect.

Higher Standards – Streaming media content providers may deliver greater viewer options, but they still have to work at meeting yesterday’s TV delivery quality. The challenge is that there are multiple technologies/services that must all work together smoothly to deliver low- and ultra-low-latency service.

The solution is delivering latency that is good enough for the task at hand and the viewer’s expectation – low latency, ultra-low latency, zero latency.

Some folks like to brag that they are very sensitive to latency but then they are probably the same folks who say that a good enough beer first thing in the morning is good enough.

The goal for the streaming industry is to deliver the lowest latency possible for a seamless user experience that meets/exceeds the viewer’s expectations. For most streaming content, that is low latency.

Of course, if you’re participating in real-time gambling, watching live entertainment/sporting events or conducting meeting presentations/demonstrations, then ultra-low latency will be extremely important.

McLennan emphasized that the new media standards, protocols and techniques that will be explored and discussed at IBC are designed to reduce latency even further. Companies to keep an eye on at IBC include V-Nova and Net Insight.

“Good enough isn’t really good enough,” he noted, “because in the long term, we need to deliver true live engagement through enhanced personalization, so the content is tailored to the person viewing it on her/his screen.

“The more personalized the content is, the more viewers will engage,” McLennan continued. “That means they will continue their subscriptions and will be more receptive to accurately targeted commercials, which have proven to be more accepted and perceived as part of the programming.

“True, dynamic ad insertion by user and device will deliver potentially higher streaming service advertising CPMs, but it will be a major advance for advertisers and the viewer.”

We’re really looking forward to seeing/hearing about the strides the industry has made in raising consumer expectations at IBC.

Still, there’s a lot of work to be done in the backroom to meet those expectations.

But, as Capable said in Mad Max, “There’s no going back!”