The relationship of technologies within the streaming video stack is complex. Although there might seem to be a linear progression of components, from content acquisition to playback, in many workflows the connections between technologies are far from linear. Data from one piece of technology may be critical for another to function optimally. APIs can be leveraged to connect optimization data from different technologies to each other, and to higher-level systems like monitoring tools and operational dashboards. That's why the datatecture was created: to better visualize the interconnection between the technologies and, ultimately, to document the data they make available.
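To make the idea of connecting data across the stack concrete, here is a minimal sketch of correlating records from two components, player quality-of-experience beacons and CDN delivery logs, by session ID so a dashboard can relate rebuffering to the edge that served the stream. All field names and values are invented for illustration; real systems would pull this data from each technology's API.

```python
# Hypothetical data from two parts of the streaming stack.
# In practice these would be fetched from each component's API.
player_beacons = [
    {"session_id": "s1", "rebuffer_ms": 1200},
    {"session_id": "s2", "rebuffer_ms": 0},
]

cdn_logs = [
    {"session_id": "s1", "edge_pop": "ord1", "cache_status": "MISS"},
    {"session_id": "s2", "edge_pop": "iad2", "cache_status": "HIT"},
]

def correlate(beacons, logs):
    """Merge per-session records from both sources into one unified view."""
    by_session = {log["session_id"]: log for log in logs}
    merged = []
    for beacon in beacons:
        record = dict(beacon)  # copy so the source data is untouched
        record.update(by_session.get(beacon["session_id"], {}))
        merged.append(record)
    return merged

# Rows a monitoring dashboard could render: QoE metrics alongside
# the CDN details for the same viewing session.
dashboard_rows = correlate(player_beacons, cdn_logs)
```

The join itself is trivial; the point is that neither data source alone explains a poor viewer experience, which is exactly the interconnection the datatecture is meant to surface.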
Think of the datatecture as a fabric that lies over the entire workflow and represents the underlying flow of data within the technology stack. How the datatecture is organized, then, is critical not only to understanding the basis of that fabric but also to categorizing your own version of it, suited specifically to your business. Regardless of the individual technologies you end up employing in your datatecture, they will ultimately fall into three major areas: Operations, Infrastructure, and Workflow.
Operations
A major category within any streaming workflow is the group of technologies that help operate the service. The subgroups and technologies within this category are critical to ensuring a great viewer experience:
Infrastructure
Underlying the entire workflow is the infrastructure. From databases to storage to virtualization, these fundamental technologies power the very heart of the streaming stack:
Workflow
This core category is where the magic happens. It's all of the subgroups and technologies that enable the transformation, security, delivery, and playback of streaming video, so it makes sense that it's the deepest category with the most technologies:
No One Core Category Is More Important Than Another in the Datatecture
You may be wondering if you can skimp on one core category, like Operations, for the sake of another, such as Workflow. The short answer is no. These datatecture core categories are intricately connected, hence the Venn diagram structure. Operations depends on Infrastructure to store all of the data, while Workflow depends on Operations to find and resolve root-cause issues. Of course, there are countless other examples, but you get the picture: these core categories are joined at the hip. So when you are examining your own streaming platform or planning your service, building your own datatecture depends on understanding the relationship between these core categories and giving them the proper balance.
The post Understanding the Datatecture Part 1: The Core Categories appeared first on Datazoom.