Kyle Goodwin of Vecima Networks explores, in layman’s terms, how the industry is preparing to tackle low-latency streaming


During the 2018 World Cup, media measurement company Conviva recorded 75.8 million attempts to stream the quarter-final games. More than 15% of those attempts failed – that’s more than 11 million unsuccessful attempts.

Those delivering OTT services are aware of the technical and operational challenges they face to provide the broadcast-quality streaming experience TV viewers expect. Anything less than a flawless viewing experience won’t be tolerated.

The problem is that live OTT TV services are susceptible to significant video latency.

A video stream is divided into chunks or files, which means a degree of buffering must be applied in the streaming server. In addition, buffering is needed in the end device to absorb network jitter and ride out server overloads. As a consequence, the end-to-end delay experienced by users is much greater than with traditional broadcast.

The issue of end-to-end delay is a significant factor that affects the overall quality of experience of OTT services.

One of the biggest contributors to latency is the packaging format. Apple's HLS format is among the most widely used streaming protocols but, by default, isn't suitable for low-latency streaming. HTTP-based protocols like HLS deliver video in segments, and video players need a certain number of segments buffered (typically three) before they start playing. The Apple HLS recommendation of six seconds per segment means you're already 18 to 30 seconds behind the live content without accounting for any other latency.
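To make that arithmetic concrete, here is a minimal Python sketch of how segment duration and the player's segment buffer add up to a delay behind the live edge. The figures for encoding and CDN delay are illustrative assumptions, not measurements from any particular service.

```python
# Back-of-the-envelope latency from segmented HTTP streaming.
# Numbers are illustrative defaults, not measurements of a specific service.

def startup_latency(segment_duration_s: float,
                    segments_buffered: int,
                    encode_package_s: float = 0.0,
                    network_cdn_s: float = 0.0) -> float:
    """Seconds behind the live edge when playback begins.

    The player cannot start until `segments_buffered` full segments exist
    and have been downloaded, so the segment buffer alone contributes
    segment_duration_s * segments_buffered of delay.
    """
    return (segment_duration_s * segments_buffered
            + encode_package_s
            + network_cdn_s)

# Classic HLS: 6-second segments, three segments buffered before playback.
print(startup_latency(6.0, 3))                       # 18.0 s from buffering alone
print(startup_latency(6.0, 3, encode_package_s=4.0,
                      network_cdn_s=3.0))            # 25.0 s once other stages are added

# Low-latency packaging: the same idea with ~1-second fragments.
print(startup_latency(1.0, 3))                       # 3.0 s
```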

To counteract buffering latency, content must be encoded and packaged differently. The two leading approaches are low-latency DASH and Low-Latency HLS, both of which reduce the effective latency and buffer duration while maintaining stream stability.

Both work by having the encoder create much smaller chunks of video, called fragments, each of which contains only part of a segment.

Each fragment has its own header information and can be streamed to the client as soon as it is produced rather than waiting for the entire segment to finish.

As a result, video can enter the client player’s buffer sooner and more frequently, in smaller chunks, to keep the stream both closer to the live point and more stable under varying network conditions. Combined with more advanced delivery mechanisms, these fragments can be delivered efficiently to the client at very low latency.
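The effect is easiest to see in a small simulation. The sketch below is purely conceptual (it is not a real packager or player, and the timings are scaled down so it runs instantly): it contrasts delivering six seconds of video as one whole segment with delivering the same content as one-second fragments, and shows how much sooner the first piece of media becomes available to the client in the fragmented case.

```python
# Conceptual sketch (not a real packager or player) contrasting whole-segment
# delivery with fragment-by-fragment delivery of the same 6 seconds of video.

import time

SEGMENT_S = 6.0
FRAGMENT_S = 1.0

def encode(duration_s, chunk_s):
    """Yield chunks of (pretend) encoded media as soon as each is ready."""
    t = 0.0
    while t < duration_s:
        time.sleep(chunk_s * 0.01)     # scaled-down stand-in for encoding time
        t += chunk_s
        yield f"media[{t - chunk_s:.0f}s..{t:.0f}s]"

def deliver(chunk_s, label):
    start = time.monotonic()
    for i, chunk in enumerate(encode(SEGMENT_S, chunk_s)):
        # With whole segments, the first (and only) chunk appears only after all
        # 6 s are encoded; with 1 s fragments, the first chunk appears after 1 s,
        # so the player's buffer starts filling roughly 5 s sooner in real time.
        elapsed = time.monotonic() - start
        print(f"{label}: chunk {i} ({chunk}) available after {elapsed:.2f}s (scaled)")

deliver(SEGMENT_S, "whole segment")
deliver(FRAGMENT_S, "1-second fragments")
```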

Although it’s still early days, there’s a lot of buzz around using HTTP/3 as the delivery mechanism for HLS and DASH content. HTTP/3 runs over QUIC rather than TCP, which gives the server and client more control over how content is streamed and reduces the delivery bottlenecks that occur when packets are lost on the network, because a lost packet holds up only the stream it belongs to rather than the whole connection.
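As a toy illustration of why that matters, the Python model below (assumed behaviour for illustration, not real HTTP/2 or HTTP/3 code) compares what a single lost packet costs when streams share one ordered TCP connection versus when QUIC retransmits per stream.

```python
# Toy model (not real HTTP/2 or HTTP/3 code) of head-of-line blocking.
# Over TCP, all multiplexed streams share one ordered byte stream, so one lost
# packet stalls delivery of every stream until it is retransmitted. QUIC
# (HTTP/3) recovers losses per stream, so only the affected stream waits.

RETRANSMIT_DELAY = 0.250   # assumed retransmission delay in seconds (illustrative)

def delivery_delay(streams, lost_stream, shared_ordering):
    """Return the extra delay each stream suffers from one lost packet."""
    delays = {}
    for s in streams:
        if shared_ordering:
            # TCP: the loss blocks the whole connection's ordered delivery.
            delays[s] = RETRANSMIT_DELAY
        else:
            # QUIC: only the stream whose packet was lost has to wait.
            delays[s] = RETRANSMIT_DELAY if s == lost_stream else 0.0
    return delays

streams = ["video", "audio", "subtitles"]
print("over TCP :", delivery_delay(streams, "subtitles", shared_ordering=True))
print("over QUIC:", delivery_delay(streams, "subtitles", shared_ordering=False))
```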

Additionally, traditional set-top boxes work on continuous streams rather than on segmentation: given a continuous video stream, there is no need to segment it or to switch between multiple bitrates.

With HTTP/3 and other protocols on the horizon, OTT providers could potentially use an unmanaged network like the open internet to stream unsegmented video. We may see some early commercial deployments in this area next year. That, combined with next year’s major sporting events, means 2020 will no doubt be an exciting year.

Kyle Goodwin, VP Product and Innovation, Vecima Networks
