This is the second blog in a series talking about how video streaming data, pulled from various parts of the workflow, can be used to support business goals. The next post will look at ways to increase the amount of content streamed and how that can align to KPIs for advertising and stickiness.

Perhaps one of the most obvious uses for data pulled from components in the streaming video technology stack is to improve Quality of Experience (QoE). Because what does a poor QoE result in? Video exits. So, at the heart of improving QoE is reducing the number of video exits. But why does one user exit a video when another doesn’t? Is it the quality of the content? The content type? Unfortunately, QoE can be subjective. Different viewers have different thresholds for when the viewing experience has degraded enough to warrant abandoning a view. Rather than rely on a subjective definition, or guess about QoE, you can use data gleaned from elements within the workflow that correlate directly with user behavior: define what an exit means for your streaming platform, identify exits in the data, and, most importantly, mitigate them when appropriate.

What is a Video Exit?

The first step is to use the data to define a video exit. By analyzing when a viewer exits a video, using a unique playback_session_id generated each time the user views a video asset, you can break exits down into the following specific reasons:

  • Exiting after an error
  • Exiting before the video starts
  • Exiting during stalls
  • Exiting during playback
  • Exiting during an ad break
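As a sketch of how this breakdown might be derived, suppose each playback_session_id maps to a record of the last player state observed before the exit event (the field names here are assumptions for illustration, not part of any particular SDK):

```python
from collections import Counter

# Hypothetical per-session records: one per playback_session_id, capturing
# the last player state observed before the exit event (field names assumed).
def classify_exit(session):
    """Bucket a session's exit into one of the five reasons above."""
    if session.get("fatal_error"):
        return "error"
    if not session.get("first_frame_rendered"):
        return "before_start"      # Exiting Before the Video Starts (EBVS)
    if session.get("in_ad_break"):
        return "ad_break"
    if session.get("stalling"):
        return "stall"
    return "playback"

sessions = [
    {"playback_session_id": "a1", "fatal_error": True},
    {"playback_session_id": "b2", "first_frame_rendered": False},
    {"playback_session_id": "c3", "first_frame_rendered": True, "stalling": True},
    {"playback_session_id": "d4", "first_frame_rendered": True, "in_ad_break": True},
    {"playback_session_id": "e5", "first_frame_rendered": True},
]
exit_breakdown = Counter(classify_exit(s) for s in sessions)
```

The order of the checks matters: an error exit takes precedence over EBVS, and a stall or ad-break exit only counts once the first frame has rendered.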

Exiting After an Error

Users experience an error that is either fatal (the video player crashes or the video stops) or unpleasant enough to make them leave. Data from the workflow can be used to track down the error and correlate it to the number of unique playback_session_ids, thereby generating a general View Error Rate and, at a more granular level, a User Error Rate.

You can use the View Error Rate to determine if there is a disproportionate number of errors for specific content types or content IDs. Conversely, you can employ the User Error Rate to see if users in a particular geographical location, or with a particular device type, ISP, or subscription tier, are more likely to experience an error. Once those patterns have been identified, you can configure monitoring to alert on these error rates.

Digging even further, you can look at the specific error rate or total errors for a time period and break it down by error code and error detail. This is useful for operations and engineering teams to see which video errors contribute most and how best to approach them.
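A minimal sketch of both rates and the error-code breakdown, assuming hypothetical error events keyed by playback_session_id and user_id (the totals and error codes are invented for illustration):

```python
from collections import Counter

# Hypothetical error events, one per errored view; field names are illustrative.
error_events = [
    {"playback_session_id": "a1", "user_id": "u1", "error_code": "MEDIA_ERR_DECODE"},
    {"playback_session_id": "b2", "user_id": "u1", "error_code": "MEDIA_ERR_NETWORK"},
    {"playback_session_id": "c3", "user_id": "u2", "error_code": "MEDIA_ERR_DECODE"},
]
total_views = 100   # unique playback_session_ids in the period (assumed)
total_users = 40    # unique user_ids in the period (assumed)

# View Error Rate: errored views / total views.
view_error_rate = len({e["playback_session_id"] for e in error_events}) / total_views

# User Error Rate: users who hit at least one error / total users.
user_error_rate = len({e["user_id"] for e in error_events}) / total_users

# Top contributing error codes for the period, for ops/engineering triage.
top_errors = Counter(e["error_code"] for e in error_events).most_common()
```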

Exiting Before the Video Starts (EBVS)

In this type of exit, the video took too long to start or was initialized improperly. To determine that, you need to be able to differentiate between the two. You can accomplish such differentiation by capturing the viewer’s Playback Intent, which is when a viewer clicks or taps on the button to trigger the player to render and the video to play back. In many cases, Playback Intent occurs before the player is rendered.

In the case of improper initialization, you can identify this when the player renders with no preceding playback intent. The cause could be that something triggers playback when it shouldn’t (e.g., a JavaScript event firing on page load when it should fire on mouse click), or the player is behaving as expected but the user doesn’t expect it. The result? The user closes the player, which fires the exit event and generates EBVS data. Data like this can be correlated across various playback intent locations (e.g., an application, a website, etc.). If one of them has a disproportionately high EBVS rate, then the user likely doesn’t want to trigger playback at that point. Correcting this removes spurious entries from the overall EBVS data, giving a truer sense of why viewers are exiting before a video starts playing.
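To separate the two EBVS causes, a sketch like the following can classify sessions by whether a playback intent preceded the player render. The timestamps and the five-second slow-start threshold are assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical per-session timestamps; None means the event never fired.
def ebvs_reason(intent_ts, first_frame_ts, exit_ts, slow_start=timedelta(seconds=5)):
    """Classify an EBVS session: no preceding intent vs. slow start."""
    if first_frame_ts is not None:
        return "not_ebvs"      # the video actually started
    if intent_ts is None:
        return "no_intent"     # player rendered without a preceding playback intent
    if exit_ts - intent_ts > slow_start:
        return "slow_start"    # the video took too long to start
    return "other"

t0 = datetime(2021, 1, 1, 12, 0, 0)
reasons = [
    ebvs_reason(None, None, t0 + timedelta(seconds=2)),   # e.g., autoplay fired on page load
    ebvs_reason(t0, None, t0 + timedelta(seconds=10)),    # viewer gave up waiting
]
```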

When a video takes too long to start, what’s being measured is the time between playback intent (the player has launched and the video should start playing) and the first frame displayed to the user. The question to ask, then, is “why is the video taking too long to load?” One way to answer this question is to understand where the user dropped off along the video delivery path:

  • During playback intent?
  • When the player was rendered?
  • When the playback was requested?
  • When the buffering started?
  • When the buffering ended?
  • When the playback started?

The answers to those questions benefit from additional insight gained by looking at the various components related to playback and their relative timing. These components depend on your specific implementation, but they might include:

  • GraphQL
  • Player creation
  • HLS validation
  • Status call
  • JSON parsing
  • DASH validation
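One way to surface the slowest step is to compute the gap between consecutive startup events. The event names and millisecond offsets below are illustrative, not a fixed schema:

```python
# Hypothetical startup timeline: event name -> milliseconds since playback intent.
startup_events = {
    "playback_intent": 0,
    "player_rendered": 120,
    "playback_requested": 180,
    "buffering_started": 400,
    "buffering_ended": 1900,
    "first_frame": 2000,
}

# Duration of each stage = gap between consecutive events, to spot
# which step dominates overall video start time.
ordered = sorted(startup_events.items(), key=lambda kv: kv[1])
stage_durations = {
    f"{prev}->{cur}": cur_t - prev_t
    for (prev, prev_t), (cur, cur_t) in zip(ordered, ordered[1:])
}
slowest_stage = max(stage_durations, key=stage_durations.get)
```

Aggregating these stage durations across many sessions shows whether, say, buffering or player creation is the dominant contributor to slow starts.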

Exiting During Stalls

Buffering is a natural part of the playback process and must happen. What frustrates users is when the player stalls, pausing playback to refill the buffer. As such, you can look at the frequency of stall starts (when playback pauses to rebuffer) and the rate at which users exit the video. This is often encapsulated in a popular QoE metric, Rebuffer Ratio.

One of the primary reasons for a stall is that the bitrate is too high. By pulling log data, you can correlate bitrates with stall start events. Generally, you want stalls to occur only at the lowest bitrate, which means the player was already requesting the smallest renditions available. If stalls are occurring at higher bitrates, the bitrate adaptation logic is not working properly: when bitrate adaptation is self-managed, the player should scale down the requested bitrate as the buffer starts to empty.
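A small sketch of the Rebuffer Ratio and the stall-versus-bitrate check described above, with assumed numbers and field names:

```python
# Hypothetical playback log for one session: watch time vs. time spent
# rebuffering, plus the bitrate (kbps) in effect when each stall started.
watch_time_s = 600
rebuffer_time_s = 30

# Rebuffer Ratio: share of total session time spent rebuffering.
rebuffer_ratio = rebuffer_time_s / (watch_time_s + rebuffer_time_s)

stall_bitrates_kbps = [800, 800, 3500, 5000]
lowest_rendition_kbps = 800

# Stalls at bitrates above the lowest rendition suggest the ABR logic
# failed to step down before the buffer emptied.
suspect_stalls = [b for b in stall_bitrates_kbps if b > lowest_rendition_kbps]
```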

Exiting During Playback

Although this is a powerful metric, showing when users have abandoned a specific video, it is difficult to immediately identify a reason, even when looking at the data. That’s because this can be a highly subjective measure. For example, is it about the content? Is it not interesting or compelling enough? You can look at the distribution of playhead position for the heartbeats in a given title to help determine this. If viewers are abandoning a piece of content around the same place, that could help shape future content decisions. And if there is a spike in viewership near the end of the content, the jump in playhead position could mean viewers are skipping parts of the content they find unappealing. Further analysis of minute-by-minute distribution across a video title can enlighten content decisions by signaling what users find exciting, or boring.
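Bucketing exit playhead positions into one-minute bins is one way to find where viewers abandon a title. The positions below are invented for illustration:

```python
from collections import Counter

# Hypothetical exit positions (seconds into a title), bucketed into
# one-minute bins to see where viewers abandon the content.
exit_positions_s = [45, 50, 610, 615, 620, 1150]
bucket_size_s = 60

exit_histogram = Counter(pos // bucket_size_s for pos in exit_positions_s)

# The minute of the title with the most exits: a candidate "abandonment hotspot".
hotspot_minute, hotspot_exits = exit_histogram.most_common(1)[0]
```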

Another angle on playback exits relates to video quality. Even though thresholds vary by region, it’s nearly universal that viewers don’t like to watch poor quality video (e.g., less than 480p). To understand the impact of bitrate on video exits, you can split playback exits by region and resolution. If certain regions are seeing a lot of low-resolution exits, you can consider a CDN with closer edge locations or modified compression settings. To cross-verify this claim, you can look at the exits which occurred within minutes of a resolution change. It’s not a direct attribution, but you can see whether it correlates with other low-resolution exit findings.
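A sketch of both cuts, low-resolution exits by region and exits shortly after a downward resolution switch, using assumed field names and thresholds:

```python
# Hypothetical playback exits with region, resolution at exit, and the time
# (seconds) since the last downward resolution switch (None if no switch).
exits = [
    {"region": "sa-east", "resolution": 360, "secs_since_downswitch": 40},
    {"region": "sa-east", "resolution": 360, "secs_since_downswitch": 90},
    {"region": "us-east", "resolution": 1080, "secs_since_downswitch": None},
]
LOW_RES = 480    # threshold for "poor quality" video (assumed)
WINDOW_S = 120   # exits within two minutes of a down-switch (assumed)

# Low-resolution exits per region: candidates for CDN or encoding changes.
low_res_exits_by_region = {}
for e in exits:
    if e["resolution"] < LOW_RES:
        low_res_exits_by_region[e["region"]] = low_res_exits_by_region.get(e["region"], 0) + 1

# Cross-check: how many exits closely followed a drop in resolution?
near_switch_exits = sum(
    1 for e in exits
    if e["secs_since_downswitch"] is not None and e["secs_since_downswitch"] <= WINDOW_S
)
```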

Exiting During an Ad Break

Advertising can be a death knell for viewership. Many users will leave a video during an ad break because they perceive the content as less valuable than the time and annoyance of watching an ad. To determine how tolerant your viewers are towards ads, you can split the exit data by content title. This answers the question, “are users willing to watch a specific ad only in certain titles?” When the answer is yes, the viewer clearly perceives enough value in the content to sit through the ad.

Another possibility could be the quality of the ads being shown. Similar to other types of abandonment measures around quality, monitoring the bitrate, stall ratio and time to first frame for ad content can uncover issues with ad delivery quality that may be adversely impacting viewership and engagement. If low quality ads are coming from a certain ad network, you might consider blacklisting that ad network to avoid the risk of losing engagement with your audience. 
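A sketch combining both angles, per-title ad exit rates and per-network ad delivery quality (here, ad time to first frame), with hypothetical records:

```python
from collections import defaultdict

# Hypothetical ad-break records: whether the viewer exited during the break,
# plus a delivery-quality signal (ad time to first frame) for the ad itself.
ad_breaks = [
    {"title": "Show A", "exited": True,  "ad_network": "net1", "ad_ttff_ms": 4500},
    {"title": "Show A", "exited": False, "ad_network": "net2", "ad_ttff_ms": 600},
    {"title": "Show B", "exited": False, "ad_network": "net2", "ad_ttff_ms": 700},
    {"title": "Show B", "exited": False, "ad_network": "net1", "ad_ttff_ms": 5000},
]

def ad_exit_rate(records, title):
    """Share of a title's ad breaks that ended in an exit."""
    rows = [r for r in records if r["title"] == title]
    return sum(r["exited"] for r in rows) / len(rows)

exit_rates = {t: ad_exit_rate(ad_breaks, t) for t in ("Show A", "Show B")}

# Average ad time-to-first-frame per network; a consistently slow network
# is a candidate for removal from the rotation.
ttff_by_network = defaultdict(list)
for r in ad_breaks:
    ttff_by_network[r["ad_network"]].append(r["ad_ttff_ms"])
avg_ttff = {net: sum(v) / len(v) for net, v in ttff_by_network.items()}
```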

Aligning Exit Data to Business KPIs

There usually isn’t a business KPI specifically linked to video exits. But, as you can probably surmise, video exits do correlate with subscriber retention. If a user is exiting videos consistently, something should be done to understand why and then intervene to prevent probable churn. This might mean special incentives, such as a free month of service or access to exclusive content (for a lower-tier subscriber). Regardless, failing to address viewer exit behavior can be detrimental to the KPIs that do count, such as viewing hours per user and subscribers per month.
