
    Source Pipeline Tutorial

    The source_pipeline in Hybrik is a representation of the audio, video, and subtitle/caption elements in the transcode task as combined from the source(s). You can think of it as a “virtual source” that is passed to the transcode task. The source pipeline provides a way to apply filters to all targets, to trim the virtual source, or even segment the video into chunks for distributed transcoding.

    A Basic Source Pipeline

    For a simple job with a single input file containing audio and video, the source pipeline mirrors the input.

    [Figure: source_pipeline_simple]
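    For reference, a single-input job of this shape might pair one source element with the transcode task. This is a sketch only; the bucket URL is a placeholder, and the exact source fields should be verified against the Sources documentation:

```json
{
    "uid": "source_task",
    "kind": "source",
    "payload": {
        "kind": "asset_url",
        "payload": {
            "storage_provider": "s3",
            "url": "s3://my-bucket/my_source.mp4"
        }
    }
}
```

    With a single source like this, the transcode task's source_pipeline simply carries the file's audio and video streams unchanged.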

    A Complex Source Pipeline

    Let’s look at a slightly more complicated example where you might map audio, video, and subtitles into the source pipeline via an asset_complex. Once these source files are combined, the source_pipeline in the transcode task will look like this. You can even apply trims to the entire “virtual source” within the source pipeline.

    {
        "uid": "transcode_task",
        "kind": "transcode",
        "payload": {
            "source_pipeline": {
                "trim": {
                    "duration_sec": 30
                }
            },
            ...
        }
    }

    [Figure: source_pipeline_medium]

    A More Complex Source Pipeline

    Here is an example of a complex source object which combines media components into groups (versions) and then stitches them together as a sequence of three elements (pre-roll, main content, and a bumper), each of which may be trimmed inside the asset_complex. This example uses six source files covering combinations of audio/video and closed captions.

    The source_pipeline in our transcode task carries all of these elements, and it is presented as a single stream of media components (a “virtual source” of video, audio, and closed captions). The targets in our transcode task can use all of it or select specific stream components via audio mapping or specifying video-only or subtitle-only targets.
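    As a sketch of such a stitched source (the field names follow the common asset_complex pattern, but paths are placeholders and the exact keys should be checked against the Sources documentation), the three elements could be expressed as a sequence of asset_versions, each with an optional trim:

```json
{
    "uid": "source_task",
    "kind": "source",
    "payload": {
        "kind": "asset_complex",
        "payload": {
            "kind": "sequence",
            "asset_versions": [
                {
                    "version_uid": "preroll",
                    "asset_components": [
                        {
                            "kind": "name",
                            "name": "preroll.mp4",
                            "location": {
                                "storage_provider": "s3",
                                "path": "s3://my-bucket/"
                            },
                            "contents": [
                                { "kind": "video" },
                                { "kind": "audio" }
                            ]
                        }
                    ]
                },
                {
                    "version_uid": "main",
                    "trim": {
                        "inpoint_sec": 10,
                        "duration_sec": 600
                    },
                    "asset_components": [ ... ]
                },
                {
                    "version_uid": "bumper",
                    "asset_components": [ ... ]
                }
            ]
        }
    }
}
```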

    [Figure: source_pipeline_complex]

    Filtering

    The source pipeline is also a way to specify a filter which applies to the entire “virtual source” for every target. For example, suppose you wanted to crop out letterboxing from every output target. If you were to specify the crop filter on each target, each one would be filtered independently and consume its own computational resources. By applying the filter to the entire source_pipeline, the crop filter is invoked only once. Because each of our targets references our cropped “virtual source” (the source_pipeline), each output has the crop pre-applied. This is especially important for filters such as deinterlace, which can be computationally demanding.

    [Figure: source_pipeline_filter]

    {
        "uid": "transcode_task",
        "kind": "transcode",
        "payload": {
            "source_pipeline": {
                "filters": [
                    {
                        "video": [
                            {
                                "kind": "crop",
                                "payload": {
                                    "top": 50,
                                    "bottom": 50,
                                    "left": 0,
                                    "right": 0
                                }
                            }
                        ]
                    }
                ]
            },
            ...
        }
    }
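    Because deinterlacing is the canonical case for this optimization, here is a minimal sketch of a source_pipeline that deinterlaces once for all targets. The filter kind is taken from the Video Filters documentation; any payload options it accepts are omitted here and should be verified there:

```json
"source_pipeline": {
    "filters": [
        {
            "video": [
                { "kind": "deinterlace" }
            ]
        }
    ]
}
```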

    Filtering a Single asset_version in the source_pipeline

    You may need to apply a filter to a single element in the asset_complex source. In this case, rather than filtering all asset_versions in the source_pipeline, you can apply the filter selectively by specifying a selector. Here is an example:

    {
        "uid": "transcode_task",
        "kind": "transcode",
        "payload": {
            "source_pipeline": {
                "filters": [
                    {
                        "selector": "last",
                        "video": [
                            {
                                "kind": "crop",
                                "payload": {
                                    "top": 50,
                                    "bottom": 50,
                                    "left": 0,
                                    "right": 0
                                }
                            }
                        ]
                    }
                ]
            },
            ...
        }
    }

    Valid options for selectors are:

    • Numeric Indexes (index starts from 1); these select the asset_version by order in the source sequence:
      • #1
      • #2
      • …
    • Other selectors:
      • first
      • last
      • shortest
      • longest

    [Figure: selective_source_pipeline_filter]

    Segmented Rendering

    Segmented_rendering gives us the ability to temporally segment our virtual pipeline and encode it in chunks. This allows us to spread the chunks across multiple computing instances and gain the speed benefits of distributed encoding. The transcoded chunks are then automatically combined back into a single output file. Segmented_rendering has limited format support and cannot be used with stitched sources. Audio is not chunked and is transcoded on a single thread. Read our tutorial on segmented_rendering for usage and more details.
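    As a hedged sketch, chunked encoding is typically enabled by setting a per-segment duration in the transcode task's options; the exact option names and placement should be confirmed against the segmented_rendering tutorial:

```json
"payload": {
    "options": {
        "pipeline": {
            "segmented_rendering": {
                "duration_sec": 60
            }
        }
    },
    ...
}
```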

    [Figure: source_pipeline_seg_render]

    Examples

    • source_pipeline with a filter and a trim
    • source_pipeline with a selective filter and trims
    • See segmented_rendering for examples