In Part Four of this series, we dig into some of the deeper layers of the Streaming Video Datatecture's Workflow category, defining many of the individual sub-categories and explaining their purpose in the broader workflow.


As we covered in the first post of the series, the Datatecture is governed by three main categories: Operations, Infrastructure, and Workflow. Within these categories are also a myriad of other sub-categories, often branching into even more specific groups. This structure isn't intended as a parent-child hierarchy. Rather, it is just a way of illustrating relationships between specific components and categories of functionality. For example, there are many systems and technologies within analytics that don't compete against each other because they handle different sets of data, from video player metrics to customer behavior.

What is Workflow?

As was discussed in the initial blog post, Workflow refers to the core systems which enable a stream to be ingested, secured, delivered, and played.

Delivery, Security, Playback, Transformation, Monetization, and Content Recommendations

Within the Workflow category, there are six primary sub-categories which were outlined in the first post of this blog series. Let's dig past those and go deeper into Workflow to understand the individual systems involved in this area of the Datatecture.

Delivery

At the heart of streaming is delivering a video stream to a viewer's player. In technical terms, this most often means a web server sending video segments in response to an HTTP request. But there are many ways to accomplish that, as evidenced by the sub-categories within this Datatecture group:

Content Delivery Network (CDN). A CDN is a cache-based network which speeds up responses to user requests for video segments by placing popular segments closer to the user and reducing round-trip time (a simple sketch of this caching behavior follows this list). Most streaming operators employ multiple CDNs which have strengths in specific regions (because of network saturation and size) or in overall global penetration. CDNs often work hand-in-hand with network operators (ISPs) by existing within their networks (as caching boxes) or terminating at their networks in peering fabrics. There are three primary types of CDNs: private networks, cloud deployments, and algorithm-based (an approach unique to Akamai). Private networks often employ leased wavelengths with their own optical gear to build a private loop network. Cloud deployments leverage existing Cloud Service Providers (CSPs) to provide distribution and scale without having to build physical infrastructure.
Ultra-Low Latency Streaming. Certain use cases which require real-time interaction, such as online gambling, need to ensure sub-second delivery. Often relying on non-traditional streaming technologies like WebRTC, these services (sometimes offered by traditional CDNs) ensure super-fast round-trip times at the cost of scalability.
Multicast ABR. Streaming has historically been a unicast approach: each user that requests the stream gets their own unique copy of it. This is because streaming is often over-the-top (OTT) and requires the use of the public internet for last-mile delivery. The distributed nature of the internet doesn't provide the network services needed to manage that delivery the way a traditional broadcast network does (via multicast). So, when there are millions of concurrent users, the unicast approach can require significant bandwidth and ultimately force a reduction in quality to meet bandwidth constraints. Multicast Assisted Adaptive Bitrate, or Multicast ABR, is a suite of technologies that enables the use of multicast (a single stream that is consumed by every viewer) over the internet.
Peer-to-Peer (P2P) Streaming. P2P streaming is the use of peers, such as other viewers' devices, to deliver content to viewers within a very limited geographic region. The technology "seeds" peers with video segments, and these peers act as local caches for other peers within the P2P network. This approach can significantly reduce bandwidth requirements for a platform operator by taking advantage of viewer bandwidth that might otherwise go unused. P2P can be an especially useful approach for live content when working in conjunction with a traditional CDN.
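To make the caching idea concrete, here is a minimal TypeScript sketch of how a CDN edge might answer segment requests: serve from a local cache when possible, and otherwise fetch from the origin and store the result for the next viewer. The origin URL, 60-second TTL, and in-memory map are illustrative assumptions, not how any particular CDN is implemented.

```typescript
// Minimal sketch of a CDN edge cache for video segments (illustrative only).
// The origin URL and cache policy are hypothetical, not tied to any real CDN.

const ORIGIN = "https://origin.example.com"; // hypothetical origin server

// In-memory cache keyed by segment path; real CDNs use disk/SSD tiers and eviction policies.
const cache = new Map<string, { body: ArrayBuffer; expiresAt: number }>();

async function serveSegment(path: string): Promise<ArrayBuffer> {
  const cached = cache.get(path);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.body; // cache hit: served from the edge, no round trip to the origin
  }
  // Cache miss: fetch from the origin, then store for subsequent viewers.
  const response = await fetch(`${ORIGIN}${path}`);
  const body = await response.arrayBuffer();
  cache.set(path, { body, expiresAt: Date.now() + 60_000 }); // cache for 60 seconds
  return body;
}

// Example: the first request populates the cache; later requests are answered locally.
serveSegment("/live/channel1/segment_001.ts")
  .then(() => serveSegment("/live/channel1/segment_001.ts"));
```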

Security

Unlike traditional broadcast, which has a closed end-to-end system (from network operator to set-top box), streaming is a more open ecosystem. As such, content rights holders must utilize other technologies to ensure the security of their content when delivered via streaming. These methods can include:

Geo IP. This security technology attempts to limit access to only those viewers who meet specific geographic requirements (see the sketch after this list). For example, a streaming operator may only have rights to distribute content in a specific geography. If viewers from outside that geography attempt to gain access to the content, they can be blocked by resolving their IP address to a geographic location and comparing it against a whitelist of locations.
Digital Rights Management (DRM). This security technology employs encryption and decryption to keep content secured. A viewer that has purchased rights to watch content can be provided a license. When they request to watch DRM-encrypted content, the license is checked against a licensing server to verify rights. If rights are verified, the player can decrypt the content.
Watermarking. In some cases, such as live content, DRM may not be a viable option (as it can introduce additional latency). In these cases, watermarking can be a significant deterrent. The watermark is layered into the frames of a video pixel-by-pixel. The resulting pattern of pixel manipulation is a binary hash representing critical data about the content such as who originally purchased it, the IP address of the purchasing user, etc. If watermarked content is found on the internet, forensic technologies can pull the data from the watermark to identify how the content was made available.
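As a concrete illustration of the Geo IP check described above, here is a minimal TypeScript sketch that resolves a requesting IP address to a country and compares it against a whitelist. The resolveCountry function is a hypothetical stand-in for a real IP-to-location database or service, and the addresses and country codes are made up.

```typescript
// Minimal sketch of a Geo IP whitelist check (illustrative only).

const ALLOWED_COUNTRIES = new Set(["US", "CA"]); // territories the operator holds rights for

// Hypothetical resolver: in production this would query a GeoIP database or service.
function resolveCountry(ipAddress: string): string {
  // Hard-coded lookup for the sketch; a real lookup maps the address to a country code.
  return ipAddress.startsWith("203.") ? "AU" : "US";
}

function isPlaybackAllowed(ipAddress: string): boolean {
  const country = resolveCountry(ipAddress);
  return ALLOWED_COUNTRIES.has(country); // block requests from outside licensed territories
}

console.log(isPlaybackAllowed("198.51.100.7")); // true  (resolves to an allowed country)
console.log(isPlaybackAllowed("203.0.113.9"));  // false (resolves to a blocked country)
```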

Playback

This is where the rubber meets the road. Unlike traditional broadcast, in which there is a single endpoint, streaming supports an almost infinite number of endpoints from which the viewer can consume content. In fact, any device with a screen and an operating system that can support a video player can be an endpoint. This includes SmartTVs, mobile phones, tablets, gaming consoles, and more. As such, this Datatecture category is broken down into a multitude of sub-categories which reflect both the endpoints and the player technologies themselves:

Devices. The sub-categories within this category represent the endpoints on which a video player might exist and allow playback of streaming content. These endpoints can include:
Connected TVs. These are TVs with a software platform that allows the installation of applications, such as streaming services, which would include a player.
Gaming Consoles. Many gaming consoles, such as the Microsoft Xbox, Sony PlayStation, and Nintendo Switch, include video player software for content playback.
Mobile. Not only do the main mobile operating systems provide a player, but each OS also supports an application ecosystem which may include other players as well.
Set-Top Boxes/OS. These companies create IP-based STBs, which include a player component, as well as STB operating systems that can be installed on generic hardware. These operating systems include built-in video player software and sometimes support the installation of third-party players as well.
Connected Streaming Devices. Perhaps the newest entrants in the endpoint category, these represent self-contained platforms for users to consume video from a variety of service providers. They are similar to a SmartTV, but portable, so they can be moved from television to television. They include built-in video player software as well as support for third-party applications, such as a streaming service, that can also include proprietary video player software.

Players. The sub-categories within this category represent the three main flavors of player implementation:
Commercial. These are companies which have created and support video player software that can be installed within an application or as a standalone implementation.
Open-Source. Similar to commercial but without the price tag, open-source player technology includes software created and supported by a community of developers (a minimal example follows this list).
Offline. A key functionality of many streaming platforms is the ability for the viewer to download a movie and watch it offline (rather than streaming). To facilitate this, the player needs to support it. Rather than building such functionality on top of a commercial or open-source player themselves, some streaming operators opt for a commercial player that already supports download-to-go functionality.
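As a small illustration of the open-source flavor, here is a sketch of embedding hls.js (one community-supported player) in a browser page. The element ID and manifest URL are placeholder assumptions; the point is simply that the player either drives playback through Media Source Extensions or falls back to the device's native HLS support.

```typescript
// Minimal sketch of embedding an open-source player (hls.js) in a browser page.
// The manifest URL and #player element are placeholders for this example.
import Hls from "hls.js";

const video = document.querySelector<HTMLVideoElement>("#player")!;
const manifestUrl = "https://cdn.example.com/vod/movie/master.m3u8"; // hypothetical stream

if (Hls.isSupported()) {
  // MSE-based playback: hls.js fetches segments and feeds them to the <video> element.
  const hls = new Hls();
  hls.loadSource(manifestUrl);
  hls.attachMedia(video);
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari and some devices play HLS natively without a JavaScript player.
  video.src = manifestUrl;
}
```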


Transformation

Unlike traditional broadcast, streaming video must be transformed (encoded and packaged) prior to delivery to provide a stream which does not consume all the available bandwidth. What's more, different player implementations on different devices (often a reflection of licensing costs) require different formats. All in all, this can significantly complicate the video workflow by requiring operators to support multiple codecs and packaging formats. The sub-categories within this Datatecture group represent the technologies which streaming platforms use to ensure the content is consumable at the viewer endpoints. This can sometimes happen in real time.

Encoding. This is the process by which the source material, say from camera acquisition, is converted into a format playable by an endpoint. This requires a specific codec which is often optimized for the kind of delivery, such as broadcast versus streaming. Once the content is encoded, the endpoint player will also need the same codec to decode it. There are a variety of ways to encode, from using on-premises equipment (most often with traditional broadcast), to virtualized encoders (offering scalability), to an encoding-as-a-service provider (which obviates the need to keep the encoding software up to date).
Transcoding. This technology represents the re-encoding of content into a different format without changing the underlying aspect ratio of the content. Transcoding is the primary technology employed in adaptive bitrate (ABR) ladders, allowing endpoint players to "switch" between bitrates depending upon the current parameters of the environment, such as available bandwidth, CPU, and memory (a ladder sketch follows this list). Transcoding can happen via commercial and open-source software (e.g., FFmpeg) as well as service providers. Unlike encoding, it can also happen in real time, enabling streaming operators to deliver specific renditions when requested.
Packaging. Packaging is a group of technologies to "wrap" encoded or transcoded content into a format that is playable by the endpoint. There are a host of popular packaging formats including Apple HLS, MPEG-DASH, and CMAF. Streaming operators can build their own packaging services or opt to utilize a service provider. In the latter case, there is little maintenance involved for the streaming provider, and they can rest assured that the packages are always up-to-date.
Metadata. One of the fundamental differences between streaming and broadcast content is metadata. This data, which is part of the streaming package, represents information about the content from the title to the content developer to even actors and other details. Metadata is crucial to streaming platforms as it provides the means by which content can be organized and recommended. The providers within this Datatecture group represent stores of content metadata from which a streaming provider can draw to add metadata to their content.
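To tie the transcoding and packaging ideas together, here is a TypeScript sketch of a simple ABR ladder and the HLS multivariant (master) playlist that exposes it to a player. The bitrates, resolutions, and playlist URIs are illustrative assumptions, not a recommended ladder.

```typescript
// Sketch of an ABR ladder and the HLS master playlist that exposes it (illustrative).

interface Rendition {
  name: string;
  bandwidth: number;   // peak bits per second
  resolution: string;  // WIDTHxHEIGHT
  uri: string;         // media playlist produced by the transcoder/packager
}

const ladder: Rendition[] = [
  { name: "1080p", bandwidth: 6_000_000, resolution: "1920x1080", uri: "1080p/index.m3u8" },
  { name: "720p",  bandwidth: 3_000_000, resolution: "1280x720",  uri: "720p/index.m3u8" },
  { name: "480p",  bandwidth: 1_200_000, resolution: "854x480",   uri: "480p/index.m3u8" },
];

// Each rendition becomes an EXT-X-STREAM-INF entry; the player picks one based on
// measured bandwidth and switches between them as conditions change.
function buildMasterPlaylist(renditions: Rendition[]): string {
  const lines = ["#EXTM3U", "#EXT-X-VERSION:3"];
  for (const r of renditions) {
    lines.push(`#EXT-X-STREAM-INF:BANDWIDTH=${r.bandwidth},RESOLUTION=${r.resolution}`);
    lines.push(r.uri);
  }
  return lines.join("\n");
}

console.log(buildMasterPlaylist(ladder));
```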

Monetization

The transition from broadcast distribution to streaming distribution is fraught with technical challenges. One of those is monetization, especially for streaming operators that have opted for advertising-based distribution models (rather than, or in conjunction with, subscriptions). The delivery of advertising in a traditional television broadcast is based on numerous standards with technology that has been tested and improved over time. With streaming, though, monetization of a video platform, such as embedding advertising into the videos, can involve a multitude of technologies which often aren't built to interoperate. Furthermore, streaming operators are still gathering data to better understand how the broadcast television advertising model translates to the streaming ecosystem. The sub-categories within this Datatecture group reflect the myriad of technologies involved in monetizing streaming video.

Paywall. As the name suggests, this is a barrier between free content and content which the viewer must pay to watch. This monetization strategy can often complement an advertising-based approach and can be used to create FOMO, which can lead to more consistent and predictable revenue, such as a subscription.
Advertising Systems.
Supply-Side Platforms (SSPs). SSPs are software used to sell advertising in an automated fashion and are most often used by online publishers to help them sell display, video, and mobile ads. SSPs are designed for publishers to do the opposite of a DSP: to maximize the price at which their impressions sell. SSPs and DSPs utilize very similar technologies.
Ad Exchange. An ad exchange is a digital marketplace which enables advertisers and publishers to buy and sell advertising space, often through real-time auctions. They're most often used to sell display, video, and mobile ad inventory.
Video Ad Insertion. Getting advertisements into a video stream is nowhere near as easy or straightforward as doing so in broadcast television. Streaming workflows which want to monetize content through advertising need technology to stitch the ad into the video stream (a simplified stitching sketch follows this list). This process can happen server-side (SSAI) or client-side (CSAI). SSAI is often used for live content while CSAI is used more often for on-demand content.
Buy-Side Ad Servers. Buy-Side Ad Servers are video ad servers utilized by the advertiser.
Ad Networks. An ad network aggregates advertising inventory from many publishers and sells it to advertisers, often packaged by audience, content category, or format.
Video Ad Servers. An ad server is a technology which manages, serves, tracks, and reports online display advertising campaigns. The process by which ad servers operate is relatively simple. First, a user visits a video page and the publisher's ad server receives a request to display an ad. Second, once the ad server receives the request, it examines the data to choose the most appropriate ad for the viewer. The ad tag contains an extensive list of criteria fed by the advertiser, and ads are selected based on several factors such as age, geography, size, behavior, etc. Third, once the best match has been made, it's passed to the video ad insertion technology (again, client-side or server-side) where it can be delivered to the player for playback. Finally, the player gathers information relating to the user's interaction with the ad, such as clicks, impressions, conversions, etc.
Demand-Side Platforms (DSPs). DSPs are used by marketers to buy ad impressions from exchanges as cheaply and as efficiently as possible. These are the marketer's equivalent of the SSP.
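As a simplified illustration of the server-side stitching mentioned under Video Ad Insertion, here is a TypeScript sketch that splices ad segments into a media playlist at a cue marker. The playlist lines, the EXT-X-CUE-OUT marker, and the single ad segment are illustrative; real SSAI systems also handle ad decisioning, timing, and measurement beacons.

```typescript
// Simplified sketch of server-side ad insertion (SSAI): splice ad segments into a
// media playlist at a cue point. Playlists below are illustrative, not production output.

const contentPlaylist = [
  "#EXTINF:6.0,", "content_001.ts",
  "#EXT-X-CUE-OUT:30",            // ad break marker inserted upstream
  "#EXTINF:6.0,", "content_002.ts",
];

const adSegments = ["#EXTINF:6.0,", "ad_001.ts"]; // returned by the ad decisioning step

function stitchAds(playlist: string[], ads: string[]): string[] {
  const out: string[] = [];
  for (const line of playlist) {
    if (line.startsWith("#EXT-X-CUE-OUT")) {
      // Mark the splice so the player can handle the timeline change, then insert the ad.
      out.push("#EXT-X-DISCONTINUITY", ...ads, "#EXT-X-DISCONTINUITY");
    } else {
      out.push(line);
    }
  }
  return out;
}

console.log(stitchAds(contentPlaylist, adSegments).join("\n"));
```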

Content Recommendation

Perhaps one of the most exciting aspects of delivering video via streaming rather than broadcast is data. With streaming video, there is a myriad of data generated from each view, data that is not available in a broadcast environment. As such, streaming platform operators can tailor the viewing, content, and even advertising experience more tightly to each individual viewer, providing a far more personalized experience. One of those technologies is content recommendation. Often packaged into an "engine," these software components, installed within the delivery workflow, analyze data and, using the metadata attached to each piece of content, can recommend content for the viewer to watch based on what they, or people like them, have watched (a minimal sketch follows below). This can significantly improve engagement metrics, such as viewing time, as well as revenue.
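To illustrate how metadata can drive recommendations, here is a minimal TypeScript sketch that scores candidate titles by how much their genre metadata overlaps with what a viewer has already watched. The catalog, genres, and scoring are illustrative assumptions; production engines typically combine this kind of content-based signal with collaborative filtering over viewing history.

```typescript
// Minimal sketch of metadata-driven content recommendation (illustrative only).

interface Title { id: string; genres: string[]; }

const catalog: Title[] = [
  { id: "space-docu", genres: ["documentary", "science"] },
  { id: "heist-thriller", genres: ["thriller", "crime"] },
  { id: "ocean-docu", genres: ["documentary", "nature"] },
];

// Jaccard similarity: overlap between two genre sets divided by their union.
function jaccard(a: Set<string>, b: Set<string>): number {
  const intersection = [...a].filter((x) => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : intersection / union;
}

function recommend(watched: Title[], candidates: Title[], limit = 2): Title[] {
  const profile = new Set(watched.flatMap((t) => t.genres)); // viewer's taste profile
  return candidates
    .filter((c) => !watched.some((w) => w.id === c.id))      // don't recommend what was watched
    .map((c) => ({ title: c, score: jaccard(profile, new Set(c.genres)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((r) => r.title);
}

console.log(recommend([catalog[0]], catalog).map((t) => t.id)); // ["ocean-docu", ...]
```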

The Workflow is a Process

Unlike the other two categories, Infrastructure and Operations, the Workflow category of the Datatecture represents a somewhat linear progression: content is transformed, secured, delivered, played back, and monetized. Of course, some of the individual technologies may be integrated within different functional components of the workflow (such as watermarking happening during transformation), but there is generally a flow within the workflow pipeline. What this demonstrates, as in the other categories, is a very intricate web of technologies which must all work in harmony to provide a scalable, resilient, and high-performing streaming service.


To learn more, visit and explore the Datatecture site.

