mediaconvert-2017-08-29 library

Classes

AacSettings
Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AAC. The service accepts one of two mutually exclusive groups of AAC settings--VBR and CBR. To select one of these modes, set the value of Bitrate control mode (rateControlMode) to "VBR" or "CBR". In VBR mode, you control the audio quality with the setting VBR quality (vbrQuality). In CBR mode, you use the setting Bitrate (bitrate). Defaults and valid values depend on the rate control mode.
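For example, a minimal sketch of the two mutually exclusive modes as they might appear in a JSON job specification, written here as Python dicts; the surrounding job structure is omitted and the bitrate, quality, and sample rate values are illustrative only.

```python
# Illustrative AAC settings fragments for a MediaConvert job specification.
# Values shown (bitrate, quality level, sample rate) are examples only.

aac_vbr = {
    "Codec": "AAC",
    "AacSettings": {
        "RateControlMode": "VBR",        # quality is controlled by VbrQuality
        "VbrQuality": "MEDIUM_HIGH",
        "CodingMode": "CODING_MODE_2_0",
        "SampleRate": 48000,
    },
}

aac_cbr = {
    "Codec": "AAC",
    "AacSettings": {
        "RateControlMode": "CBR",        # quality is controlled by Bitrate
        "Bitrate": 128000,
        "CodingMode": "CODING_MODE_2_0",
        "SampleRate": 48000,
    },
}
```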
Ac3Settings
Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AC3.
AccelerationSettings
Accelerated transcoding can significantly speed up jobs with long, visually complex content.
AiffSettings
Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AIFF.
AncillarySourceSettings
Settings for ancillary captions source.
AssociateCertificateResponse
AudioChannelTaggingSettings
When you mimic a multi-channel audio layout with multiple mono-channel tracks, you can tag each channel layout manually. For example, you would tag the tracks that contain your left, right, and center audio with Left (L), Right (R), and Center (C), respectively. When you don't specify a value, MediaConvert labels your track as Center (C) by default. To use audio layout tagging, your output must be in a QuickTime (.mov) container; your audio codec must be AAC, WAV, or AIFF; and you must set up your audio track to have only one channel.
AudioCodecSettings
Audio codec settings (CodecSettings) under (AudioDescriptions) contains the group of settings related to audio encoding. The settings in this group vary depending on the value that you choose for Audio codec (Codec). For each codec enum that you choose, define the corresponding settings object. The following lists the codec enum, settings object pairs. * AAC, AacSettings * MP2, Mp2Settings * MP3, Mp3Settings * WAV, WavSettings * AIFF, AiffSettings
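As a sketch, pairing the codec enum with its corresponding settings object inside CodecSettings might look like the following; the MP3 values are illustrative, not recommendations.

```python
# Each Codec value is paired with its corresponding settings object.
codec_settings_mp3 = {
    "Codec": "MP3",
    "Mp3Settings": {
        "RateControlMode": "CBR",   # illustrative; VBR is also supported
        "Bitrate": 192000,
        "Channels": 2,
        "SampleRate": 44100,
    },
}
```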
AudioDescription
Description of audio output
AudioNormalizationSettings
Advanced audio normalization settings. Ignore these settings unless you need to comply with a loudness standard.
AudioSelector
Selector for Audio
AudioSelectorGroup
Group of Audio Selectors
AutomatedAbrSettings
Use automated ABR to have MediaConvert set up the renditions in your ABR package for you automatically, based on characteristics of your input video. This feature optimizes video quality while minimizing the overall size of your ABR package.
AutomatedEncodingSettings
Use automated encoding to have MediaConvert choose your encoding settings for you, based on characteristics of your input video.
Av1QvbrSettings
Settings for quality-defined variable bitrate encoding with the AV1 codec. Required when you set Rate control mode to QVBR. Not valid when you set Rate control mode to a value other than QVBR, or when you don't define Rate control mode.
Av1Settings
Required when you set Codec, under VideoDescription>CodecSettings to the value AV1.
AvailBlanking
Settings for Avail Blanking
AvcIntraSettings
Required when you set your output video codec to AVC-Intra. For more information about the AVC-I settings, see the relevant specification. For detailed information about SD and HD in AVC-I, see https://ieeexplore.ieee.org/document/7290936.
AwsClientCredentials
AWS credentials.
BurninDestinationSettings
Burn-In Destination Settings.
CancelJobResponse
CaptionDescription
Description of Caption output
CaptionDescriptionPreset
Caption Description for preset
CaptionDestinationSettings
Specific settings required by destination type. Note that burnin_destination_settings are not available if the source of the caption data is Embedded or Teletext.
CaptionSelector
Set up captions in your outputs by first selecting them from your input here.
CaptionSourceFramerate
Ignore this setting unless your input captions format is SCC. To have the service compensate for differing frame rates between your input captions and input video, specify the frame rate of the captions file. Specify this value as a fraction, using the settings Framerate numerator (framerateNumerator) and Framerate denominator (framerateDenominator). For example, you might specify 24 / 1 for 24 fps, 25 / 1 for 25 fps, 24000 / 1001 for 23.976 fps, or 30000 / 1001 for 29.97 fps.
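For instance, a 23.976 fps captions source would be represented as the fraction 24000/1001; a minimal fragment might look like this.

```python
# CaptionSourceFramerate fragment: 24000/1001 is approximately 23.976 fps.
caption_source_framerate = {
    "FramerateNumerator": 24000,
    "FramerateDenominator": 1001,
}
```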
CaptionSourceSettings
If your input captions are SCC, TTML, STL, SMI, SRT, or IMSC in an XML file, specify the URI of the input captions source file. If your input captions are IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.
ChannelMapping
Channel mapping (ChannelMapping) contains the group of fields that hold the remixing value for each channel. Units are in dB. Acceptable values are within the range from -60 (mute) through 6. A setting of 0 passes the input channel unchanged to the output channel (no attenuation or amplification).
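A minimal sketch of a stereo-to-mono downmix using these gain values, shown inside RemixSettings; the -3 dB figures are illustrative.

```python
# Remix two input channels to one output channel.
# Each inner list gives the gain (dB) applied to every input channel
# for that output channel: 0 = unchanged, -60 = mute, up to +6.
remix_settings = {
    "ChannelsIn": 2,
    "ChannelsOut": 1,
    "ChannelMapping": {
        "OutputChannels": [
            {"InputChannels": [-3, -3]},  # mix L and R at -3 dB each
        ]
    },
}
```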
CmafAdditionalManifest
Specify the details for each pair of HLS and DASH additional manifests that you want the service to generate for this CMAF output group. Each pair of manifests can reference a different subset of outputs in the group.
CmafEncryptionSettings
Settings for CMAF encryption
CmafGroupSettings
Required when you set (Type) under (OutputGroups)>(OutputGroupSettings) to CMAF_GROUP_SETTINGS. Each output in a CMAF Output Group may only contain a single video, audio, or caption output.
CmfcSettings
Settings for MP4 segments in CMAF
ColorCorrector
Settings for color correction.
ContainerSettings
Container specific settings.
CreateJobResponse
CreateJobTemplateResponse
CreatePresetResponse
CreateQueueResponse
DashAdditionalManifest
Specify the details for each additional DASH manifest that you want the service to generate for this output group. Each manifest can reference a different subset of outputs in the group.
DashIsoEncryptionSettings
Specifies DRM settings for DASH outputs.
DashIsoGroupSettings
Required when you set (Type) under (OutputGroups)>(OutputGroupSettings) to DASH_ISO_GROUP_SETTINGS.
Deinterlacer
Settings for deinterlacer
DeleteJobTemplateResponse
DeletePresetResponse
DeleteQueueResponse
DescribeEndpointsResponse
DestinationSettings
Settings associated with the destination. Will vary based on the type of destination
DisassociateCertificateResponse
DolbyVision
Settings for Dolby Vision
DolbyVisionLevel6Metadata
Use these settings when you set DolbyVisionLevel6Mode to SPECIFY to override the MaxCLL and MaxFALL values in your input with new values.
DvbNitSettings
Inserts DVB Network Information Table (NIT) at the specified table repetition interval.
DvbSdtSettings
Inserts DVB Service Description Table (SDT) at the specified table repetition interval.
DvbSubDestinationSettings
DVB-Sub Destination Settings
DvbSubSourceSettings
DVB Sub Source Settings
DvbTdtSettings
Inserts DVB Time and Date Table (TDT) at the specified table repetition interval.
Eac3AtmosSettings
Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3_ATMOS.
Eac3Settings
Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3.
EmbeddedDestinationSettings
Settings specific to embedded/ancillary caption outputs, including 608/708 Channel destination number.
EmbeddedSourceSettings
Settings for embedded captions Source
Endpoint
Describes an account-specific API endpoint.
EsamManifestConfirmConditionNotification
ESAM ManifestConfirmConditionNotification defined by OC-SP-ESAM-API-I03-131025.
EsamSettings
Settings for Event Signaling And Messaging (ESAM). If you don't do ad insertion, you can ignore these settings.
EsamSignalProcessingNotification
ESAM SignalProcessingNotification data defined by OC-SP-ESAM-API-I03-131025.
F4vSettings
Settings for F4v container
FileGroupSettings
Required when you set (Type) under (OutputGroups)>(OutputGroupSettings) to FILE_GROUP_SETTINGS.
FileSourceSettings
If your input captions are SCC, SMI, SRT, STL, TTML, or IMSC 1.1 in an XML file, specify the URI of the input caption source file. If your caption source is IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.
FrameCaptureSettings
Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value FRAME_CAPTURE.
GetJobResponse
GetJobTemplateResponse
GetPresetResponse
GetQueueResponse
H264QvbrSettings
Settings for quality-defined variable bitrate encoding with the H.264 codec. Required when you set Rate control mode to QVBR. Not valid when you set Rate control mode to a value other than QVBR, or when you don't define Rate control mode.
H264Settings
Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value H_264.
H265QvbrSettings
Settings for quality-defined variable bitrate encoding with the H.265 codec. Required when you set Rate control mode to QVBR. Not valid when you set Rate control mode to a value other than QVBR, or when you don't define Rate control mode.
H265Settings
Settings for H265 codec
Hdr10Metadata
Use these settings to specify static color calibration metadata, as defined by SMPTE ST 2086. These values don't affect the pixel values that are encoded in the video stream. They are intended to help the downstream video player display content in a way that reflects the intentions of the content creator.
HlsAdditionalManifest
Specify the details for each additional HLS manifest that you want the service to generate for this output group. Each manifest can reference a different subset of outputs in the group.
HlsCaptionLanguageMapping
Caption Language Mapping
HlsEncryptionSettings
Settings for HLS encryption
HlsGroupSettings
Required when you set (Type) under (OutputGroups)>(OutputGroupSettings) to HLS_GROUP_SETTINGS.
HlsSettings
Settings for HLS output groups
HopDestination
Optional. Configuration for a destination queue to which the job can hop once a customer-defined minimum wait time has passed.
Id3Insertion
To insert ID3 tags in your output, specify two values. Use ID3 tag (Id3) to specify the base 64 encoded string and use Timecode (TimeCode) to specify the time when the tag should be inserted. To insert multiple ID3 tags in your output, create multiple instances of ID3 insertion (Id3Insertion).
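For example, inserting one tag at a given timecode might be sketched like this; the base64 payload and the timecode are placeholders, not a real ID3 frame.

```python
import base64

# Hypothetical ID3 payload; MediaConvert expects the tag base64 encoded.
tag_bytes = b"ID3..."  # placeholder bytes, not a valid ID3 frame
id3_insertion = {
    "Id3": base64.b64encode(tag_bytes).decode("ascii"),
    "Timecode": "00:01:00:00",  # HH:MM:SS:FF where the tag is inserted
}
```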
ImageInserter
Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input or output individually. This setting is disabled by default.
ImscDestinationSettings
Settings specific to IMSC caption outputs.
Input
Specifies media input
InputClipping
To transcode only portions of your input (clips), include one Input clipping (one instance of InputClipping in the JSON job file) for each input clip. All input clips you specify will be included in every output of the job.
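A sketch of an input with two clips; the bucket URI and timecodes are placeholders, and timecodes use the HH:MM:SS:FF form.

```python
# Two clips from the same input; both clips appear in every output.
input_with_clips = {
    "FileInput": "s3://example-bucket/source.mp4",  # placeholder URI
    "InputClippings": [
        {"StartTimecode": "00:00:10:00", "EndTimecode": "00:00:40:00"},
        {"StartTimecode": "00:05:00:00", "EndTimecode": "00:05:30:00"},
    ],
}
```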
InputDecryptionSettings
Settings for decrypting any input files that you encrypt before you upload them to Amazon S3. MediaConvert can decrypt files only when you use AWS Key Management Service (KMS) to encrypt the data key that you use to encrypt your content.
InputTemplate
Specified video input in a template.
InsertableImage
Settings that specify how your still graphic overlay appears.
Job
Each job converts an input file into an output file or files. For more information, see the User Guide at https://docs.aws.amazon.com/mediaconvert/latest/ug/what-is.html
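As a rough sketch of creating a job programmatically with boto3; the endpoint URL, role ARN, and bucket are placeholders, and most required settings (notably the output groups) are omitted for brevity.

```python
import boto3

# MediaConvert uses an account-specific endpoint (see DescribeEndpoints).
mc = boto3.client(
    "mediaconvert",
    endpoint_url="https://abcd1234.mediaconvert.us-east-1.amazonaws.com",  # placeholder
)

response = mc.create_job(
    Role="arn:aws:iam::111122223333:role/MediaConvertRole",  # placeholder role ARN
    Settings={
        "Inputs": [{"FileInput": "s3://example-bucket/input.mp4"}],
        "OutputGroups": [],  # output groups omitted in this sketch
    },
)
print(response["Job"]["Id"])
```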
JobMessages
Provides messages from the service about jobs that you have already successfully submitted.
JobSettings
JobSettings contains all the transcode settings for a job.
JobTemplate
A job template is a pre-made set of encoding instructions that you can use to quickly create a job.
JobTemplateSettings
JobTemplateSettings contains all the transcode settings saved in the template that will be applied to jobs created from it.
ListJobsResponse
ListJobTemplatesResponse
ListPresetsResponse
ListQueuesResponse
ListTagsForResourceResponse
M2tsScte35Esam
Settings for SCTE-35 signals from ESAM. Include this in your job settings to put SCTE-35 markers in your HLS and transport stream outputs at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).
M2tsSettings
MPEG-2 TS container settings. These apply to outputs in a File output group when the output's container (ContainerType) is MPEG-2 Transport Stream (M2TS). In these assets, data is organized by the program map table (PMT). Each transport stream program contains subsets of data, including audio, video, and metadata. Each of these subsets of data has a numerical label called a packet identifier (PID). Each transport stream program corresponds to one MediaConvert output. The PMT lists the types of data in a program along with their PID. Downstream systems and players use the program map table to look up the PID for each type of data they access, and then use the PIDs to locate specific data within the asset.
M3u8Settings
Settings for TS segments in HLS
MediaConvert
AWS Elemental MediaConvert
MotionImageInserter
Overlay motion graphics on top of your video at the time that you specify.
MotionImageInsertionFramerate
For motion overlays that don't have a built-in frame rate, specify the frame rate of the overlay in frames per second, as a fraction. For example, specify 24 fps as 24/1. The overlay frame rate doesn't need to match the frame rate of the underlying video.
MotionImageInsertionOffset
Specify the offset between the upper-left corner of the video frame and the top left corner of the overlay.
MovSettings
Settings for MOV Container.
Mp2Settings
Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value MP2.
Mp3Settings
Required when you set Codec, under AudioDescriptions>CodecSettings, to the value MP3.
Mp4Settings
Settings for MP4 container. You can create audio-only AAC outputs with this container.
MpdSettings
Settings for MP4 segments in DASH
Mpeg2Settings
Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value MPEG2.
MsSmoothAdditionalManifest
Specify the details for each additional Microsoft Smooth Streaming manifest that you want the service to generate for this output group. Each manifest can reference a different subset of outputs in the group.
MsSmoothEncryptionSettings
If you are using DRM, set DRM System (MsSmoothEncryptionSettings) to specify the value SpekeKeyProvider.
MsSmoothGroupSettings
Required when you set (Type) under (OutputGroups)>(OutputGroupSettings) to MS_SMOOTH_GROUP_SETTINGS.
MxfSettings
MXF settings
NexGuardFileMarkerSettings
For forensic video watermarking, MediaConvert supports Nagra NexGuard File Marker watermarking. MediaConvert supports both PreRelease Content (NGPR/G2) and OTT Streaming workflows.
NielsenConfiguration
Settings for your Nielsen configuration. If you don't do Nielsen measurement and analytics, ignore these settings. When you enable Nielsen configuration (nielsenConfiguration), MediaConvert enables PCM to ID3 tagging for all outputs in the job. To enable Nielsen configuration programmatically, include an instance of nielsenConfiguration in your JSON job specification. Even if you don't include any children of nielsenConfiguration, you still enable the setting.
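For example, including an empty nielsenConfiguration object is enough to turn the feature on; in the second variant, the breakout code and distributor ID are placeholders.

```python
# Enables Nielsen configuration even with no child settings.
job_settings_fragment = {"NielsenConfiguration": {}}

# With explicit values (placeholders):
job_settings_fragment_explicit = {
    "NielsenConfiguration": {
        "BreakoutCode": 0,
        "DistributorId": "EXAMPLE_DISTRIBUTOR",
    }
}
```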
NielsenNonLinearWatermarkSettings
Ignore these settings unless you are using Nielsen non-linear watermarking. Specify the values that MediaConvert uses to generate and place Nielsen watermarks in your output audio. In addition to specifying these values, you also need to set up your cloud TIC server. These settings apply to every output in your job. The MediaConvert implementation is currently compatible with the following Nielsen versions: Nielsen Watermark SDK Version 5.2.1, Nielsen NLM Watermark Engine Version 1.2.7, and Nielsen Watermark Authenticator SID_TIC Version 5.0.0.
NoiseReducer
Enable the Noise reducer (NoiseReducer) feature to remove noise from your video output if necessary. Enable or disable this feature for each output individually. This setting is disabled by default. When you enable Noise reducer (NoiseReducer), you must also select a value for Noise reducer filter (NoiseReducerFilter).
NoiseReducerFilterSettings
Settings for a noise reducer filter
NoiseReducerSpatialFilterSettings
Noise reducer filter settings for spatial filter.
NoiseReducerTemporalFilterSettings
Noise reducer filter settings for temporal filter.
OpusSettings
Required when you set Codec, under AudioDescriptions>CodecSettings, to the value OPUS.
Output
An output object describes the settings for a single output file or stream in an output group.
OutputChannelMapping
OutputChannel mapping settings.
OutputDetail
Details regarding output
OutputGroup
Group of outputs
OutputGroupDetail
Contains details about the output groups specified in the job settings.
OutputGroupSettings
Output Group settings, including type
OutputSettings
Specific settings for this type of output.
PartnerWatermarking
If you work with a third party video watermarking partner, use the group of settings that correspond with your watermarking partner to include watermarks in your output.
Preset
A preset is a collection of preconfigured media conversion settings that you want MediaConvert to apply to the output during the conversion process.
PresetSettings
Settings for preset
ProresSettings
Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value PRORES.
Queue
You can use queues to manage the resources that are available to your AWS account for running multiple transcoding jobs at the same time. If you don't specify a queue, the service sends all jobs through the default queue. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-queues.html.
QueueTransition
Description of the source and destination queues between which the job has moved, along with the timestamp of the move
Rectangle
Use Rectangle to identify a specific area of the video frame.
RemixSettings
Use Manual audio remixing (RemixSettings) to adjust audio levels for each audio channel in each output of your job. With audio remixing, you can output more or fewer audio channels than your input audio source provides.
ReservationPlan
Details about the pricing plan for your reserved queue. Required for reserved queues and not applicable to on-demand queues.
ReservationPlanSettings
Details about the pricing plan for your reserved queue. Required for reserved queues and not applicable to on-demand queues.
ResourceTags
The Amazon Resource Name (ARN) and tags for an AWS Elemental MediaConvert resource.
S3DestinationAccessControl
Optional. Have MediaConvert automatically apply Amazon S3 access control for the outputs in this output group. When you don't use this setting, S3 automatically applies the default access control list PRIVATE.
S3DestinationSettings
Settings associated with S3 destination
S3EncryptionSettings
Settings for how your job outputs are encrypted as they are uploaded to Amazon S3.
SccDestinationSettings
Settings for SCC caption output.
SpekeKeyProvider
If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.
SpekeKeyProviderCmaf
If your output group type is CMAF, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is HLS, DASH, or Microsoft Smooth, use the SpekeKeyProvider settings instead.
StaticKeyProvider
Use these settings to set up encryption with a static key provider.
TagResourceResponse
TeletextDestinationSettings
Settings for Teletext caption output
TeletextSourceSettings
Settings specific to Teletext caption sources, including Page number.
TimecodeBurnin
Timecode burn-in (TimecodeBurnIn)--Burns the output timecode and specified prefix into the output.
TimecodeConfig
These settings control how the service handles timecodes throughout the job. These settings don't affect input clipping.
TimedMetadataInsertion
Enable Timed metadata insertion (TimedMetadataInsertion) to include ID3 tags in any HLS outputs. To include timed metadata, you must enable it here, enable it in each output container, and specify tags and timecodes in ID3 insertion (Id3Insertion) objects.
Timing
Information about when jobs are submitted, started, and finished is specified in Unix epoch format in seconds.
TrackSourceSettings
Settings specific to caption sources that are specified by track number. Currently, this is only IMSC captions in an IMF package. If your caption source is IMSC 1.1 in a separate xml file, use FileSourceSettings instead of TrackSourceSettings.
TtmlDestinationSettings
Settings specific to TTML caption outputs, including Pass style information (TtmlStylePassthrough).
UntagResourceResponse
UpdateJobTemplateResponse
UpdatePresetResponse
UpdateQueueResponse
Vc3Settings
Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VC3.
VideoCodecSettings
Video codec settings, (CodecSettings) under (VideoDescription), contains the group of settings related to video encoding. The settings in this group vary depending on the value that you choose for Video codec (Codec). For each codec enum that you choose, define the corresponding settings object. The following lists the codec enum, settings object pairs. * AV1, Av1Settings * AVC_INTRA, AvcIntraSettings * FRAME_CAPTURE, FrameCaptureSettings * H_264, H264Settings * H_265, H265Settings * MPEG2, Mpeg2Settings * PRORES, ProresSettings * VC3, Vc3Settings * VP8, Vp8Settings * VP9, Vp9Settings
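A sketch of one codec enum, settings object pair, here for H_264; the rate control mode, quality level, and bitrate values are illustrative.

```python
codec_settings_h264 = {
    "Codec": "H_264",
    "H264Settings": {
        "RateControlMode": "QVBR",               # illustrative choice
        "QvbrSettings": {"QvbrQualityLevel": 7},
        "MaxBitrate": 5000000,
        "CodecProfile": "MAIN",
    },
}
```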
VideoDescription
Settings for video outputs
VideoDetail
Contains details about the output's video stream
VideoPreprocessor
Find additional transcoding features under Preprocessors (VideoPreprocessors). Enable the features at each output individually. These features are disabled by default.
VideoSelector
Selector for video.
VorbisSettings
Required when you set Codec, under AudioDescriptions>CodecSettings, to the value Vorbis.
Vp8Settings
Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP8.
Vp9Settings
Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP9.
WavSettings
Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value WAV.

Enums

AacAudioDescriptionBroadcasterMix
Choose BROADCASTER_MIXED_AD when the input contains pre-mixed main audio + audio description (AD) as a stereo pair. The value for AudioType will be set to 3, which signals to downstream systems that this stream contains "broadcaster mixed AD". Note that the input received by the encoder must contain pre-mixed audio; the encoder does not perform the mixing. When you choose BROADCASTER_MIXED_AD, the encoder ignores any values you provide in AudioType and FollowInputAudioType. Choose NORMAL when the input does not contain pre-mixed audio + audio description (AD). In this case, the encoder will use any values you provide for AudioType and FollowInputAudioType.
AacCodecProfile
AAC Profile.
AacCodingMode
Mono (Audio Description), Mono, Stereo, or 5.1 channel layout. Valid values depend on rate control mode and profile. "1.0 - Audio Description (Receiver Mix)" setting receives a stereo description plus control track and emits a mono AAC encode of the description track, with control data emitted in the PES header as per ETSI TS 101 154 Annex E.
AacRateControlMode
Rate Control Mode.
AacRawFormat
Enables LATM/LOAS AAC output. Note that if you use LATM/LOAS AAC in an output, you must choose "No container" for the output container.
AacSpecification
Use MPEG-2 AAC instead of MPEG-4 AAC audio for raw or MPEG-2 Transport Stream containers.
AacVbrQuality
VBR Quality Level - Only used if rate_control_mode is VBR.
Ac3BitstreamMode
Specify the bitstream mode for the AC-3 stream that the encoder emits. For more information about the AC3 bitstream mode, see ATSC A/52-2012 (Annex E).
Ac3CodingMode
Dolby Digital coding mode. Determines number of channels.
Ac3DynamicRangeCompressionProfile
If set to FILM_STANDARD, adds dynamic range compression signaling to the output bitstream as defined in the Dolby Digital specification.
Ac3LfeFilter
Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.
Ac3MetadataControl
When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.
AccelerationMode
Specify whether the service runs your job with accelerated transcoding. Choose DISABLED if you don't want accelerated transcoding. Choose ENABLED if you want your job to run with accelerated transcoding and to fail if your input files or your job settings aren't compatible with accelerated transcoding. Choose PREFERRED if you want your job to run with accelerated transcoding if the job is compatible with the feature and to run at standard speed if it's not.
AccelerationStatus
Describes whether the current job is running with accelerated transcoding. For jobs that have Acceleration (AccelerationMode) set to DISABLED, AccelerationStatus is always NOT_APPLICABLE. For jobs that have Acceleration (AccelerationMode) set to ENABLED or PREFERRED, AccelerationStatus is one of the other states. AccelerationStatus is IN_PROGRESS initially, while the service determines whether the input files and job settings are compatible with accelerated transcoding. If they are, AccelerationStatus is ACCELERATED. If your input files and job settings aren't compatible with accelerated transcoding, the service either fails your job or runs it without accelerated transcoding, depending on how you set Acceleration (AccelerationMode). When the service runs your job without accelerated transcoding, AccelerationStatus is NOT_ACCELERATED.
AfdSignaling
This setting only applies to H.264, H.265, and MPEG2 outputs. Use Insert AFD signaling (AfdSignaling) to specify whether the service includes AFD values in the output video data and what those values are. * Choose None to remove all AFD values from this output. * Choose Fixed to ignore input AFD values and instead encode the value specified in the job. * Choose Auto to calculate output AFD values based on the input AFD scaler data.
AlphaBehavior
Ignore this setting unless this input is a QuickTime animation with an alpha channel. Use this setting to create separate Key and Fill outputs. In each output, specify which part of the input MediaConvert uses. Leave this setting at the default value DISCARD to delete the alpha channel and preserve the video. Set it to REMAP_TO_LUMA to delete the video and map the alpha channel to the luma channel of your outputs.
AncillaryConvert608To708
Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.
AncillaryTerminateCaptions
By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.
AntiAlias
The anti-alias filter is automatically applied to all outputs. The service no longer accepts the value DISABLED for AntiAlias. If you specify that in your job, the service will ignore the setting.
AudioChannelTag
You can add a tag for this mono-channel audio track to mimic its placement in a multi-channel layout. For example, if this track is the left surround channel, choose Left surround (LS).
AudioCodec
Type of Audio codec.
AudioDefaultSelection
Enable this setting on one audio selector to set it as the default for the job. The service uses this default for outputs where it can't find the specified input audio. If you don't set a default, those outputs have no audio.
AudioLanguageCodeControl
Specify which source for language code takes precedence for this audio track. When you choose Follow input (FOLLOW_INPUT), the service uses the language code from the input track if it's present. If there's no language code on the input track, the service uses the code that you specify in the setting Language code (languageCode or customLanguageCode). When you choose Use configured (USE_CONFIGURED), the service uses the language code that you specify.
AudioNormalizationAlgorithm
Choose one of the following audio normalization algorithms: ITU-R BS.1770-1: Ungated loudness. A measurement of ungated average loudness for an entire piece of content, suitable for measurement of short-form content under ATSC recommendation A/85. Supports up to 5.1 audio channels. ITU-R BS.1770-2: Gated loudness. A measurement of gated average loudness compliant with the requirements of EBU-R128. Supports up to 5.1 audio channels. ITU-R BS.1770-3: Modified peak. The same loudness measurement algorithm as 1770-2, with an updated true peak measurement. ITU-R BS.1770-4: Higher channel count. Allows for more audio channels than the other algorithms, including configurations such as 7.1.
AudioNormalizationAlgorithmControl
When enabled, the output audio is corrected using the chosen algorithm. If disabled, the audio will be measured but not adjusted.
AudioNormalizationLoudnessLogging
If set to LOG, log each output's audio track loudness to a CSV file.
AudioNormalizationPeakCalculation
If set to TRUE_PEAK, calculate and log the TruePeak for each output's audio track loudness.
AudioSelectorType
Specifies the type of the audio selector.
AudioTypeControl
When set to FOLLOW_INPUT, if the input contains an ISO 639 audio_type, then that value is passed through to the output. If the input contains no ISO 639 audio_type, the value in Audio Type is included in the output. When set to USE_CONFIGURED, the value in Audio Type is always included in the output. Note that this field and audioType are both ignored if audioDescriptionBroadcasterMix is set to BROADCASTER_MIXED_AD.
Av1AdaptiveQuantization
Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization (spatialAdaptiveQuantization).
Av1FramerateControl
If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
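In a JSON job specification, specifying the frame rate explicitly might be sketched as follows; 23.976 fps is chosen only as an example.

```python
av1_framerate_fragment = {
    "FramerateControl": "SPECIFIED",
    "FramerateNumerator": 24000,
    "FramerateDenominator": 1001,  # 24000/1001 is approximately 23.976 fps
}
```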
Av1FramerateConversionAlgorithm
Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
Av1RateControlMode
With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.
Av1SpatialAdaptiveQuantization
Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
AvcIntraClass
Specify the AVC-Intra class of your output. The AVC-Intra class selection determines the output video bit rate depending on the frame rate of the output. Outputs with higher class values have higher bitrates and improved image quality.
AvcIntraFramerateControl
If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
AvcIntraFramerateConversionAlgorithm
Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
AvcIntraInterlaceMode
Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE) to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.
AvcIntraSlowPal
Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
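A sketch of the required companion settings when slow PAL is enabled; field names follow the API's JSON form and the fragment shows only the settings named above.

```python
avc_intra_slow_pal_fragment = {
    "SlowPal": "ENABLED",
    "FramerateControl": "SPECIFIED",
    "FramerateNumerator": 25,
    "FramerateDenominator": 1,
}
```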
AvcIntraTelecine
When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.
BillingTagsSource
The tag type that AWS Billing and Cost Management will use to sort your AWS Elemental MediaConvert costs on any billing report that you set up.
BurninSubtitleAlignment
If no explicit x_position or y_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates. This option is not valid for source captions that are STL, 608/embedded or teletext. These source settings are already pre-defined by the caption stream. All burn-in and DVB-Sub font settings must match.
BurninSubtitleBackgroundColor
Specifies the color of the rectangle behind the captions. All burn-in and DVB-Sub font settings must match.
BurninSubtitleFontColor
Specifies the color of the burned-in captions. This option is not valid for source captions that are STL, 608/embedded or teletext. These source settings are already pre-defined by the caption stream. All burn-in and DVB-Sub font settings must match.
BurninSubtitleOutlineColor
Specifies font outline color. This option is not valid for source captions that are either 608/embedded or teletext. These source settings are already pre-defined by the caption stream. All burn-in and DVB-Sub font settings must match.
BurninSubtitleShadowColor
Specifies the color of the shadow cast by the captions. All burn-in and DVB-Sub font settings must match.
BurninSubtitleTeletextSpacing
Only applies to jobs with input captions in Teletext or STL formats. Specify whether the spacing between letters in your captions is set by the captions grid or varies depending on letter width. Choose fixed grid to conform to the spacing specified in the captions file more accurately. Choose proportional to make the text easier to read if the captions are closed caption.
CaptionDestinationType
Specify the format for this set of captions on this output. The default format is embedded without SCTE-20. Other options are embedded with SCTE-20, burn-in, DVB-sub, IMSC, SCC, SRT, teletext, TTML, and web-VTT. If you are using SCTE-20, choose SCTE-20 plus embedded (SCTE20_PLUS_EMBEDDED) to create an output that complies with the SCTE-43 spec. To create a non-compliant output where the embedded captions come first, choose Embedded plus SCTE-20 (EMBEDDED_PLUS_SCTE20).
CaptionSourceType
Use Source (SourceType) to identify the format of your input captions. The service cannot auto-detect caption format.
CmafClientCache
Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution set up. For example, use the Cache-Control http header.
CmafCodecSpecification
Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.
CmafEncryptionType
Specify the encryption scheme that you want the service to use when encrypting your CMAF segments. Choose AES-CBC subsample (SAMPLE-AES) or AES_CTR (AES-CTR).
CmafInitializationVectorInManifest
When you use DRM with CMAF outputs, choose whether the service writes the 128-bit encryption initialization vector in the HLS and DASH manifests.
CmafKeyProviderType
Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.
CmafManifestCompression
When set to GZIP, compresses HLS playlist.
CmafManifestDurationFormat
Indicates whether the output manifest should use floating point values for segment duration.
CmafMpdProfile
Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).
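For example, an on-demand profile must be paired with single-file segment control in the CMAF group settings; this sketch omits the other required group settings.

```python
cmaf_group_settings_fragment = {
    "MpdProfile": "ON_DEMAND_PROFILE",
    "SegmentControl": "SINGLE_FILE",  # required with ON_DEMAND_PROFILE
}
```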
CmafSegmentControl
When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.
CmafStreamInfResolution
Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag of variant manifest.
CmafWriteDASHManifest
When set to ENABLED, a DASH MPD manifest will be generated for this output.
CmafWriteHLSManifest
When set to ENABLED, an Apple HLS manifest will be generated for this output.
CmafWriteSegmentTimelineInRepresentation
When you enable Precise segment duration in DASH manifests (writeSegmentTimelineInRepresentation), your DASH manifest shows precise segment durations. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When this feature isn't enabled, the segment durations in your DASH manifest are approximate. The segment duration information appears in the duration attribute of the SegmentTemplate element.
CmfcAudioDuration
Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.
CmfcScte35Esam
Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).
CmfcScte35Source
Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.
ColorMetadata
Choose Insert (INSERT) for this setting to include color metadata in this output. Choose Ignore (IGNORE) to exclude color metadata from this output. If you don't specify a value, the service sets this to Insert by default.
ColorSpace
If your input video has accurate color space metadata, or if you don't know about color space, leave this set to the default value Follow (FOLLOW). The service will automatically detect your input color space. If your input video has metadata indicating the wrong color space, specify the accurate color space here. If your input video is HDR 10 and the SMPTE ST 2086 Mastering Display Color Volume static metadata isn't present in your video stream, or if that metadata is present but not accurate, choose Force HDR 10 (FORCE_HDR10) here and specify correct values in the input HDR 10 metadata (Hdr10Metadata) settings. For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.
ColorSpaceConversion
Specify the color space you want for this output. The service supports conversion between HDR formats, between SDR formats, from SDR to HDR, and from HDR to SDR. SDR to HDR conversion doesn't upgrade the dynamic range. The converted video has an HDR format, but visually appears the same as an unconverted output. HDR to SDR conversion uses Elemental tone mapping technology to approximate the outcome of manually regrading from HDR to SDR.
ColorSpaceUsage
There are two sources for color metadata: the input file and the job input settings Color space (ColorSpace) and HDR master display information (Hdr10Metadata). The Color space usage setting determines which takes precedence. Choose Force (FORCE) to use color metadata from the input job settings. If you don't specify values for those settings, the service defaults to using metadata from your input. Choose Fallback (FALLBACK) to use color metadata from the source when it is present. If there's no color metadata in your input file, the service defaults to using values you specify in the input settings.
Commitment
The length of the term of your reserved queue pricing plan commitment.
ContainerType
Container for this output. Some containers require a container settings object. If not specified, the default object will be created.
DashIsoHbbtvCompliance
Supports HbbTV specification as indicated
DashIsoMpdProfile
Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).
DashIsoPlaybackDeviceCompatibility
This setting can improve the compatibility of your output with video players on obsolete devices. It applies only to DASH H.264 outputs with DRM encryption. Choose Unencrypted SEI (UNENCRYPTED_SEI) only to correct problems with playback on older devices. Otherwise, keep the default setting CENC v1 (CENC_V1). If you choose Unencrypted SEI, for that output, the service will exclude the access unit delimiter and will leave the SEI NAL units unencrypted.
DashIsoSegmentControl
When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.
DashIsoWriteSegmentTimelineInRepresentation
When you enable Precise segment duration in manifests (writeSegmentTimelineInRepresentation), your DASH manifest shows precise segment durations. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When this feature isn't enabled, the segment durations in your DASH manifest are approximate. The segment duration information appears in the duration attribute of the SegmentTemplate element.
DecryptionMode
Specify the encryption mode that you used to encrypt your input files.
DeinterlaceAlgorithm
Only applies when you set Deinterlacer (DeinterlaceMode) to Deinterlace (DEINTERLACE) or Adaptive (ADAPTIVE). Motion adaptive interpolate (INTERPOLATE) produces sharper pictures, while blend (BLEND) produces smoother motion. Use (INTERPOLATE_TICKER) or (BLEND_TICKER) if your source file includes a ticker, such as a scrolling headline at the bottom of the frame.
DeinterlacerControl
When set to NORMAL (default), the deinterlacer does not convert frames that are tagged in metadata as progressive. It will only convert those that are tagged as some other type. When set to FORCE_ALL_FRAMES, the deinterlacer converts every frame to progressive - even those that are already tagged as progressive. Turn Force mode on only if there is a good chance that the metadata has tagged frames as progressive when they are not progressive. Do not turn on otherwise; processing frames that are already progressive into progressive will probably result in lower quality video.
DeinterlacerMode
Use Deinterlacer (DeinterlaceMode) to choose how the service will do deinterlacing. Default is Deinterlace. - Deinterlace converts interlaced to progressive. - Inverse telecine converts Hard Telecine 29.97i to progressive 23.976p. - Adaptive auto-detects and converts to progressive.
DescribeEndpointsMode
Optional field, defaults to DEFAULT. Specify DEFAULT for this operation to return your endpoints if any exist, or to create an endpoint for you and return it if one doesn't already exist. Specify GET_ONLY to return your endpoints if any exist, or an empty list if none exist.
DolbyVisionLevel6Mode
Use Dolby Vision Mode to choose how the service will handle Dolby Vision MaxCLL and MaxFALL properties.
DolbyVisionProfile
In the current MediaConvert implementation, the Dolby Vision profile is always 5 (PROFILE_5). Therefore, all of your inputs must contain Dolby Vision frame interleaved data.
DropFrameTimecode
Applies only to 29.97 fps outputs. When this feature is enabled, the service will use drop-frame timecode on outputs. If it is not possible to use drop-frame timecode, the system will fall back to non-drop-frame. This setting is enabled by default when Timecode insertion (TimecodeInsertion) is enabled.
DvbSubtitleAlignment
If no explicit x_position or y_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates. This option is not valid for source captions that are STL, 608/embedded or teletext. These source settings are already pre-defined by the caption stream. All burn-in and DVB-Sub font settings must match.
DvbSubtitleBackgroundColor
Specifies the color of the rectangle behind the captions. All burn-in and DVB-Sub font settings must match.
DvbSubtitleFontColor
Specifies the color of the burned-in captions. This option is not valid for source captions that are STL, 608/embedded or teletext. These source settings are already pre-defined by the caption stream. All burn-in and DVB-Sub font settings must match.
DvbSubtitleOutlineColor
Specifies font outline color. This option is not valid for source captions that are either 608/embedded or teletext. These source settings are already pre-defined by the caption stream. All burn-in and DVB-Sub font settings must match.
DvbSubtitleShadowColor
Specifies the color of the shadow cast by the captions. All burn-in and DVB-Sub font settings must match.
DvbSubtitleTeletextSpacing
Only applies to jobs with input captions in Teletext or STL formats. Specify whether the spacing between letters in your captions is set by the captions grid or varies depending on letter width. Choose fixed grid to conform to the spacing specified in the captions file more accurately. Choose proportional to make the text easier to read if the captions are closed caption.
DvbSubtitlingType
Specify whether your DVB subtitles are standard or for hearing impaired. Choose hearing impaired if your subtitles include audio descriptions and dialogue. Choose standard if your subtitles include only dialogue.
Eac3AtmosBitstreamMode
Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).
Eac3AtmosCodingMode
The coding mode for Dolby Digital Plus JOC (Atmos) is always 9.1.6 (CODING_MODE_9_1_6).
Eac3AtmosDialogueIntelligence
Enable Dolby Dialogue Intelligence to adjust loudness based on dialogue analysis.
Eac3AtmosDynamicRangeCompressionLine
Specify the absolute peak level for a signal with dynamic range compression.
Eac3AtmosDynamicRangeCompressionRf
Specify how the service limits the audio dynamic range when compressing the audio.
Eac3AtmosMeteringMode
Choose how the service meters the loudness of your audio.
Eac3AtmosStereoDownmix
Choose how the service does stereo downmixing.
Eac3AtmosSurroundExMode
Specify whether your input audio has an additional center rear surround channel matrix encoded into your left and right surround channels.
Eac3AttenuationControl
If set to ATTENUATE_3_DB, applies a 3 dB attenuation to the surround channels. Only used for 3/2 coding mode.
Eac3BitstreamMode
Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).
Eac3CodingMode
Dolby Digital Plus coding mode. Determines number of channels.
Eac3DcFilter
Activates a DC highpass filter for all input channels.
Eac3DynamicRangeCompressionLine
Specify the absolute peak level for a signal with dynamic range compression.
Eac3DynamicRangeCompressionRf
Specify how the service limits the audio dynamic range when compressing the audio.
Eac3LfeControl
When encoding 3/2 audio, controls whether the LFE channel is enabled
Eac3LfeFilter
Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.
Eac3MetadataControl
When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.
Eac3PassthroughControl
When set to WHEN_POSSIBLE, input DD+ audio will be passed through if it is present on the input. This detection is dynamic over the life of the transcode. Inputs that alternate between DD+ and non-DD+ content will have a consistent DD+ output as the system alternates between passthrough and encoding.
    Eac3PhaseControl
    Controls the amount of phase-shift applied to the surround channels. Only used for 3/2 coding mode.
    Eac3StereoDownmix
    Choose how the service does stereo downmixing. This setting only applies if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Stereo downmix (Eac3StereoDownmix).
    Eac3SurroundExMode
    When encoding 3/2 audio, sets whether an extra center back surround channel is matrix encoded into the left and right surround channels.
    Eac3SurroundMode
    When encoding 2/0 audio, sets whether Dolby Surround is matrix encoded into the two channels.
    EmbeddedConvert608To708
    Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.
    EmbeddedTerminateCaptions
    By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.
    F4vMoovPlacement
    If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.
    FileSourceConvert608To708
    Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.
    FontScript
    Provide the font script, using an ISO 15924 script code, if the LanguageCode is not sufficient for determining the script type. Where LanguageCode or CustomLanguageCode is sufficient, use "AUTOMATIC" or leave unset.
    H264AdaptiveQuantization
    Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set H264AdaptiveQuantization to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization (H264AdaptiveQuantization) to Off (OFF). Related settings: The value that you choose here applies to the following settings: H264FlickerAdaptiveQuantization, H264SpatialAdaptiveQuantization, and H264TemporalAdaptiveQuantization.
    H264CodecLevel
    Specify an H.264 level that is consistent with your output video settings. If you aren't sure what level to specify, choose Auto (AUTO).
    H264CodecProfile
    H.264 Profile. High 4:2:2 and 10-bit profiles are only available with the AVC-I License.
    H264DynamicSubGop
    Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).
    H264EntropyEncoding
    Entropy encoding mode. Use CABAC (must be in Main or High profile) or CAVLC.
    H264FieldEncoding
    Keep the default value, PAFF, to have MediaConvert use PAFF encoding for interlaced outputs. Choose Force field (FORCE_FIELD) to disable PAFF encoding and create separate interlaced fields.
    H264FlickerAdaptiveQuantization
    Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264FlickerAdaptiveQuantization is Disabled (DISABLED). Change this value to Enabled (ENABLED) to reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. To manually enable or disable H264FlickerAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.
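    To show how these adaptive quantization settings relate to each other, here is a minimal sketch of an H264Settings fragment from a JSON job specification, written as a Python dict with the PascalCase keys the job JSON uses; the specific values are illustrative assumptions, not recommendations from this reference.

        # Hypothetical H264Settings excerpt (values are assumptions).
        h264_aq_fragment = {
            "AdaptiveQuantization": "HIGH",            # must be something other than AUTO to control the filters below
            "FlickerAdaptiveQuantization": "ENABLED",  # reduce I-frame pop
            "SpatialAdaptiveQuantization": "ENABLED",
            "TemporalAdaptiveQuantization": "ENABLED",
        }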
    H264FramerateControl
    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
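    As a concrete illustration, a JSON job specification that forces a 29.97 fps output would carry the frame rate as a fraction inside the codec settings; the fragment below is a sketch with assumed values.

        # Hypothetical H264Settings fragment: explicit 29.97 fps expressed as a fraction.
        framerate_fragment = {
            "FramerateControl": "SPECIFIED",
            "FramerateNumerator": 30000,
            "FramerateDenominator": 1001,
        }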
    H264FramerateConversionAlgorithm
    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
    H264GopBReference
    If enabled, use reference B-frames for GOP structures that have B-frames > 1.
    H264GopSizeUnits
    Indicates if the GOP Size in H264 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.
    H264InterlaceMode
    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE) to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with either top or bottom field first, depending on which of the Follow options you choose.
    H264ParControl
    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
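    A sketch of the corresponding JSON fragment, assuming you want to force a 4:3 pixel aspect ratio rather than follow the source:

        # Hypothetical H264Settings fragment: explicit pixel aspect ratio (assumed values).
        par_fragment = {
            "ParControl": "SPECIFIED",
            "ParNumerator": 4,
            "ParDenominator": 3,
        }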
    H264QualityTuningLevel
    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.
    H264RateControlMode
    Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).
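    For example, a QVBR configuration pairs this setting with a maximum bitrate and a QVBR quality level; the values below are illustrative assumptions.

        # Hypothetical H264Settings fragment: quality-defined variable bitrate.
        qvbr_fragment = {
            "RateControlMode": "QVBR",
            "MaxBitrate": 5000000,                    # cap in bits per second (assumed)
            "QvbrSettings": {"QvbrQualityLevel": 7},  # 1 (lowest) through 10 (highest)
        }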
    H264RepeatPps
    Places a PPS header on each encoded picture, even if repeated.
    H264SceneChangeDetect
    Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.
    H264SlowPal
    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
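    Putting the required settings together, a slow PAL output would combine these fields roughly as follows (a sketch, using the PascalCase keys of the job JSON):

        # Hypothetical H264Settings fragment: 23.976/24 fps source relabeled as 25 fps (slow PAL).
        slow_pal_fragment = {
            "SlowPal": "ENABLED",
            "FramerateControl": "SPECIFIED",
            "FramerateNumerator": 25,
            "FramerateDenominator": 1,
        }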
    H264SpatialAdaptiveQuantization
    Only use this setting when you change the default value, Auto (AUTO), for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264SpatialAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to set H264SpatialAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (H264AdaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher. To manually enable or disable H264SpatialAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.
    H264Syntax
    Produces a bitstream compliant with SMPTE RP-2027.
    H264Telecine
    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.
    H264TemporalAdaptiveQuantization
    Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264TemporalAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to set H264TemporalAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization). To manually enable or disable H264TemporalAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.
    H264UnregisteredSeiTimecode
    Inserts timecode for each frame as 4 bytes of an unregistered SEI message.
    H265AdaptiveQuantization
    Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).
    H265AlternateTransferFunctionSei
    Enables Alternate Transfer Function SEI message for outputs using Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF).
    H265CodecLevel
    H.265 Level.
    H265CodecProfile
    Represents the Profile and Tier, per the HEVC (H.265) specification. Selections are grouped as Profile / Tier, so "Main/High" represents Main Profile with High Tier. 4:2:2 profiles are only available with the HEVC 4:2:2 License.
    H265DynamicSubGop
    Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).
    H265FlickerAdaptiveQuantization
    Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set adaptiveQuantization to a value other than Off (OFF).
    H265FramerateControl
    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
    H265FramerateConversionAlgorithm
    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
    H265GopBReference
    If enabled, use reference B-frames for GOP structures that have B-frames > 1.
    H265GopSizeUnits
    Indicates if the GOP Size in H265 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.
    H265InterlaceMode
    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE) to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with either top or bottom field first, depending on which of the Follow options you choose.
    H265ParControl
    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
    H265QualityTuningLevel
    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.
    H265RateControlMode
    Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).
    H265SampleAdaptiveOffsetFilterMode
    Specify Sample Adaptive Offset (SAO) filter strength. Adaptive mode dynamically selects the best strength based on the content.
    H265SceneChangeDetect
    Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.
    H265SlowPal
    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
    H265SpatialAdaptiveQuantization
    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
    H265Telecine
    This field applies only if the Streams > Advanced > Framerate (framerate) field is set to 29.970. This field works with the Streams > Advanced > Preprocessors > Deinterlacer field (deinterlace_mode) and the Streams > Advanced > Interlaced Mode field (interlace_mode) to identify the scan type for the output: Progressive, Interlaced, Hard Telecine or Soft Telecine. - Hard: produces 29.97i output from 23.976 input. - Soft: produces 23.976; the player converts this output to 29.97i.
    H265TemporalAdaptiveQuantization
    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).
    H265TemporalIds
    Enables temporal layer identifiers in the encoded bitstream. Up to 3 layers are supported depending on GOP structure: I- and P-frames form one layer, reference B-frames can form a second layer and non-reference b-frames can form a third layer. Decoders can optionally decode only the lower temporal layers to generate a lower frame rate output. For example, given a bitstream with temporal IDs and with b-frames = 1 (i.e. IbPbPb display order), a decoder could decode all the frames for full frame rate output or only the I and P frames (lowest temporal layer) for a half frame rate output.
    H265Tiles
    Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures.
    H265UnregisteredSeiTimecode
    Inserts timecode for each frame as 4 bytes of an unregistered SEI message.
    H265WriteMp4PackagingType
    If the location of parameter set NAL units doesn't matter in your workflow, ignore this setting. Use this setting only with CMAF or DASH outputs, or with standalone file outputs in an MPEG-4 container (MP4 outputs). Choose HVC1 to mark your output as HVC1. This makes your output compliant with the following specification: ISO IECJTC1 SC29 N13798 Text ISO/IEC FDIS 14496-15 3rd Edition. For these outputs, the service stores parameter set NAL units in the sample headers but not in the samples directly. For MP4 outputs, when you choose HVC1, your output video might not work properly with some downstream systems and video players. The service defaults to marking your output as HEV1. For these outputs, the service writes parameter set NAL units directly into the samples.
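    In the job JSON, the packaging type is one field inside H265Settings; the fragment below is a sketch showing it alongside the codec selector.

        # Hypothetical VideoDescription > CodecSettings fragment for an HEVC MP4 output.
        codec_settings_fragment = {
            "Codec": "H_265",
            "H265Settings": {"WriteMp4PackagingType": "HVC1"},  # service default is HEV1
        }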
    HlsAdMarkers
    HlsAudioOnlyContainer
    Use this setting only in audio-only outputs. Choose MPEG-2 Transport Stream (M2TS) to create a file in an MPEG2-TS container. Keep the default value Automatic (AUTOMATIC) to create a raw audio-only file with no container. Regardless of the value that you specify here, if this output has video, the service will place outputs into an MPEG2-TS container.
    HlsAudioOnlyHeader
    Ignore this setting unless you are using FairPlay DRM with Verimatrix and you encounter playback issues. Keep the default value, Include (INCLUDE), to output audio-only headers. Choose Exclude (EXCLUDE) to remove the audio-only headers from your audio segments.
    HlsAudioTrackType
    Four types of audio-only tracks are supported: * Audio-Only Variant Stream - The client can play back this audio-only stream instead of video in low-bandwidth scenarios. Represented as an EXT-X-STREAM-INF in the HLS manifest. * Alternate Audio, Auto Select, Default - Alternate rendition that the client should try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=YES, AUTOSELECT=YES. * Alternate Audio, Auto Select, Not Default - Alternate rendition that the client may try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=YES. * Alternate Audio, not Auto Select - Alternate rendition that the client will not try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=NO.
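    These enum values are applied on the output's HLS settings, typically together with an audio group ID; the fragment below is a sketch under the assumption that the relevant fields are AudioGroupId and AudioTrackType in HlsSettings.

        # Hypothetical OutputSettings fragment for an audio rendition in an Apple HLS output group.
        hls_output_settings_fragment = {
            "HlsSettings": {
                "AudioGroupId": "program_audio",  # assumed group name
                "AudioTrackType": "ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT",
            }
        }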
    HlsCaptionLanguageSetting
    Applies only to 608 Embedded output captions. Insert: Include CLOSED-CAPTIONS lines in the manifest. Specify at least one language in the CC1 Language Code field. One CLOSED-CAPTION line is added for each Language Code you specify. Make sure to specify the languages in the order in which they appear in the original source (if the source is embedded format) or the order of the caption selectors (if the source is other than embedded). Otherwise, languages in the manifest will not match up properly with the output captions. None: Include CLOSED-CAPTIONS=NONE line in the manifest. Omit: Omit any CLOSED-CAPTIONS line from the manifest.
    HlsClientCache
    Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution set up. For example, use the Cache-Control http header.
    HlsCodecSpecification
    Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.
    HlsDirectoryStructure
    Indicates whether segments should be placed in subdirectories.
    HlsEncryptionType
    Encrypts the segments with the given encryption scheme. Leave blank to disable. Selecting 'Disabled' in the web interface also disables encryption.
    HlsIFrameOnlyManifest
    When set to INCLUDE, writes I-Frame Only Manifest in addition to the HLS manifest
    HlsInitializationVectorInManifest
    The Initialization Vector is a 128-bit number used in conjunction with the key for encrypting blocks. If set to INCLUDE, Initialization Vector is listed in the manifest. Otherwise Initialization Vector is not in the manifest.
    HlsKeyProviderType
    Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.
    HlsManifestCompression
    When set to GZIP, compresses HLS playlist.
    HlsManifestDurationFormat
    Indicates whether the output manifest should use floating point values for segment duration.
    HlsOfflineEncrypted
    Enable this setting to insert the EXT-X-SESSION-KEY element into the master playlist. This allows for offline Apple HLS FairPlay content protection.
    HlsOutputSelection
    Indicates whether the .m3u8 manifest file should be generated for this HLS output group.
    HlsProgramDateTime
    Includes or excludes EXT-X-PROGRAM-DATE-TIME tag in .m3u8 manifest files. The value is calculated as follows: either the program date and time are initialized using the input timecode source, or the time is initialized using the input timecode source and the date is initialized using the timestamp_offset.
    HlsSegmentControl
    When set to SINGLE_FILE, emits the program as a single media resource (.ts) file and uses #EXT-X-BYTERANGE tags to index segments for playback.
    HlsStreamInfResolution
    Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag of variant manifest.
    HlsTimedMetadataId3Frame
    Indicates the ID3 frame that has the timecode.
    ImscStylePassthrough
    Keep this setting enabled to have MediaConvert use the font style and position information from the captions source in the output. This option is available only when your input captions are IMSC, SMPTE-TT, or TTML. Disable this setting for simplified output captions.
    InputDeblockFilter
    Enable Deblock (InputDeblockFilter) to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.
    InputDenoiseFilter
    Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.
    InputFilterEnable
    Specify how the transcoding service applies the denoise and deblock filters. You must also enable the filters separately, with Denoise (InputDenoiseFilter) and Deblock (InputDeblockFilter). * Auto - The transcoding service determines whether to apply filtering, depending on input type and quality. * Disable - The input is not filtered. This is true even if you use the API to enable them in (InputDenoiseFilter) and (InputDeblockFilter). * Force - The input is filtered regardless of input type.
    InputPsiControl
    Set PSI control (InputPsiControl) for transport stream inputs to specify which data the demux process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.
    InputRotate
    Use Rotate (InputRotate) to specify how the service rotates your video. You can choose automatic rotation or specify a rotation. You can specify a clockwise rotation of 0, 90, 180, or 270 degrees. If your input video container is .mov or .mp4 and your input has rotation metadata, you can choose Automatic to have the service rotate your video according to the rotation specified in the metadata. The rotation must be within one degree of 90, 180, or 270 degrees. If the rotation metadata specifies any other rotation, the service will default to no rotation. By default, the service does no rotation, even if your input video has rotation metadata. The service doesn't pass through rotation metadata.
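    Rotation is specified on the input's video selector; the fragment below is a sketch assuming automatic rotation from container metadata, with a placeholder S3 URI.

        # Hypothetical input fragment: rotate according to .mov/.mp4 rotation metadata.
        input_fragment = {
            "FileInput": "s3://example-bucket/example.mov",  # placeholder URI
            "VideoSelector": {"Rotate": "AUTO"},
        }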
    InputScanType
    When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto (AUTO). Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.
    InputTimecodeSource
    Use this Timecode source setting, located under the input settings (InputTimecodeSource), to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded (EMBEDDED) to use the timecodes in your input video. Choose Start at zero (ZEROBASED) to start the first frame at zero. Choose Specified start (SPECIFIEDSTART) to start the first frame at the timecode that you specify in the setting Start timecode (timecodeStart). If you don't specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.
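    For instance, to count frames from a start timecode you choose, the input would carry something like the following (illustrative values):

        # Hypothetical input fragment: count frames from a specified start timecode.
        timecode_fragment = {
            "TimecodeSource": "SPECIFIEDSTART",
            "TimecodeStart": "01:00:00:00",
        }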
    JobPhase
    A job's phase can be PROBING, TRANSCODING, or UPLOADING.
    JobStatus
    A job's status can be SUBMITTED, PROGRESSING, COMPLETE, CANCELED, or ERROR.
    JobTemplateListBy
    Optional. When you request a list of job templates, you can choose to list them alphabetically by NAME or chronologically by CREATION_DATE. If you don't specify, the service will list them by name.
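    With the AWS SDK for Python (boto3), which exposes the same operation, a listing call might look like the sketch below; the endpoint URL is an assumption that you would normally obtain from DescribeEndpoints.

        import boto3

        # Hypothetical account-specific endpoint; fetch yours with describe_endpoints().
        mc = boto3.client(
            "mediaconvert",
            endpoint_url="https://abcd1234.mediaconvert.us-east-1.amazonaws.com",
        )

        # List job templates alphabetically; Order could also be DESCENDING.
        response = mc.list_job_templates(ListBy="NAME", Order="ASCENDING", MaxResults=20)
        for template in response.get("JobTemplates", []):
            print(template["Name"])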
    LanguageCode
    Specify the language, using the ISO 639-2 three-letter code listed at https://www.loc.gov/standards/iso639-2/php/code_list.php.
    M2tsAudioBufferModel
    Selects between the DVB and ATSC buffer models for Dolby Digital audio.
    M2tsAudioDuration
    Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.
    M2tsBufferModel
    Controls what buffer model to use for accurate interleaving. If set to MULTIPLEX, use multiplex buffer model. If set to NONE, this can lead to lower latency, but low-memory devices may not be able to play back the stream without interruptions.
    M2tsEbpAudioInterval
    When set to VIDEO_AND_FIXED_INTERVALS, audio EBP markers will be added to partitions 3 and 4. The interval between these additional markers will be fixed, and will be slightly shorter than the video EBP marker interval. When set to VIDEO_INTERVAL, these additional markers will not be inserted. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).
    M2tsEbpPlacement
    Selects which PIDs to place EBP markers on. They can either be placed only on the video PID, or on both the video PID and all audio PIDs. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).
    M2tsEsRateInPes
    Controls whether to include the ES Rate field in the PES header.
    M2tsForceTsVideoEbpOrder
    Keep the default value (DEFAULT) unless you know that your audio EBP markers are incorrectly appearing before your video EBP markers. To correct this problem, set this value to Force (FORCE).
    M2tsNielsenId3
    If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.
    M2tsPcrControl
    When set to PCR_EVERY_PES_PACKET, a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This is effective only when the PCR PID is the same as the video or audio elementary stream.
    M2tsRateMode
    When set to CBR, inserts null packets into transport stream to fill specified bitrate. When set to VBR, the bitrate setting acts as the maximum bitrate, but the output will not be padded up to that bitrate.
    M2tsScte35Source
    For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE). Also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml). Also enable ESAM SCTE-35 (include the property scte35Esam).
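    A minimal sketch of passing input SCTE-35 markers through to an MPEG-2 TS output, as a container settings fragment; the PID value is an illustrative assumption.

        # Hypothetical ContainerSettings fragment for an MPEG-2 TS output.
        container_fragment = {
            "Container": "M2TS",
            "M2tsSettings": {
                "Scte35Source": "PASSTHROUGH",
                "Scte35Pid": 500,  # assumed PID; omit to accept the default
            },
        }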
    M2tsSegmentationMarkers
    Inserts segmentation markers at each segmentation_time period. rai_segstart sets the Random Access Indicator bit in the adaptation field. rai_adapt sets the RAI bit and adds the current timecode in the private data bytes. psi_segstart inserts PAT and PMT tables at the start of segments. ebp adds Encoder Boundary Point information to the adaptation field as per OpenCable specification OC-SP-EBP-I01-130118. ebp_legacy adds Encoder Boundary Point information to the adaptation field using a legacy proprietary format.
    M2tsSegmentationStyle
    The segmentation style parameter controls how segmentation markers are inserted into the transport stream. With avails, it is possible that segments may be truncated, which can influence where future segmentation markers are inserted. When a segmentation style of "reset_cadence" is selected and a segment is truncated due to an avail, we will reset the segmentation cadence. This means the subsequent segment will have a duration of $segmentation_time seconds. When a segmentation style of "maintain_cadence" is selected and a segment is truncated due to an avail, we will not reset the segmentation cadence. This means the subsequent segment will likely be truncated as well. However, all segments after that will have a duration of $segmentation_time seconds. Note that EBP lookahead is a slight exception to this rule.
    M3u8AudioDuration
    Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.
    M3u8NielsenId3
    If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.
    M3u8PcrControl
    When set to PCR_EVERY_PES_PACKET a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This parameter is effective only when the PCR PID is the same as the video or audio elementary stream.
    M3u8Scte35Source
    For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE) if you don't want manifest conditioning. Choose Passthrough (PASSTHROUGH) and choose Ad markers (adMarkers) if you do want manifest conditioning. In both cases, also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml).
    MotionImageInsertionMode
    Choose the type of motion graphic asset that you are providing for your overlay. You can choose either a .mov file or a series of .png files.
    MotionImagePlayback
    Specify whether your motion graphic overlay repeats on a loop or plays only once.
    MovClapAtom
    When enabled, include 'clap' atom if appropriate for the video output settings.
    MovCslgAtom
    When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.
    MovMpeg2FourCCControl
    When set to XDCAM, writes MPEG2 video streams into the QuickTime file using XDCAM fourcc codes. This increases compatibility with Apple editors and players, but may decrease compatibility with other players. Only applicable when the video codec is MPEG2.
    MovPaddingControl
    To make this output compatible with Omneon, keep the default value, OMNEON. Unless you need Omneon compatibility, set this value to NONE. When you keep the default value, OMNEON, MediaConvert increases the length of the edit list atom. This might cause file rejections when a recipient of the output file doesn't expect this extra padding.
    MovReference
    Always keep the default value (SELF_CONTAINED) for this setting.
    Mp3RateControlMode
    Specify whether the service encodes this MP3 audio output with a constant bitrate (CBR) or a variable bitrate (VBR).
    Mp4CslgAtom
    When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.
    Mp4FreeSpaceBox
    Inserts a free-space box immediately after the moov box.
    Mp4MoovPlacement
    If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.
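    In the job JSON this is a single field in the MP4 container settings; a sketch:

        # Hypothetical ContainerSettings fragment: front-load the MOOV atom for progressive download.
        mp4_container_fragment = {
            "Container": "MP4",
            "Mp4Settings": {"MoovPlacement": "PROGRESSIVE_DOWNLOAD"},
        }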
    MpdAccessibilityCaptionHints
    Optional. Choose Include (INCLUDE) to have MediaConvert mark up your DASH manifest with Accessibility elements for embedded 608 captions. Keep the default value, Exclude (EXCLUDE), to leave this markup out.
    MpdAudioDuration
    Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.
    MpdCaptionContainerType
    Use this setting only in DASH output groups that include sidecar TTML or IMSC captions. You specify sidecar captions in a separate output from your audio and video. Choose Raw (RAW) for captions in a single XML file in a raw container. Choose Fragmented MPEG-4 (FRAGMENTED_MP4) for captions in XML format contained within fragmented MP4 files. This set of fragmented MP4 files is separate from your video and audio fragmented MP4 files.
    MpdScte35Esam
    Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).
    MpdScte35Source
    Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.
    Mpeg2AdaptiveQuantization
    Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).
    Mpeg2CodecLevel
    Use Level (Mpeg2CodecLevel) to set the MPEG-2 level for the video output.
    Mpeg2CodecProfile
    Use Profile (Mpeg2CodecProfile) to set the MPEG-2 profile for the video output.
    Mpeg2DynamicSubGop
    Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).
    Mpeg2FramerateControl
    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
    Mpeg2FramerateConversionAlgorithm
    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
    Mpeg2GopSizeUnits
    Indicates if the GOP Size in MPEG2 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.
    Mpeg2InterlaceMode
    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE) to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field bottom field first, depending on which of the Follow options you choose.
    Mpeg2IntraDcPrecision
    Use Intra DC precision (Mpeg2IntraDcPrecision) to set quantization precision for intra-block DC coefficients. If you choose the value auto, the service will automatically select the precision based on the per-frame compression ratio.
    Mpeg2ParControl
    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
    Mpeg2QualityTuningLevel
    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.
    Mpeg2RateControlMode
    Use Rate control mode (Mpeg2RateControlMode) to specify whether the bitrate is variable (vbr) or constant (cbr).
    Mpeg2SceneChangeDetect
    Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default.
    Mpeg2SlowPal
    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
    Mpeg2SpatialAdaptiveQuantization
    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
    Mpeg2Syntax
    Specify whether this output's video uses the D10 syntax. Keep the default value to not use the syntax. Related settings: When you choose D10 (D_10) for your MXF profile (profile), you must also set this value to D10 (D_10).
    Mpeg2Telecine
    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.
    Mpeg2TemporalAdaptiveQuantization
    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).
    MsSmoothAudioDeduplication
    COMBINE_DUPLICATE_STREAMS combines identical audio encoding settings across a Microsoft Smooth output group into a single audio stream.
    MsSmoothManifestEncoding
    Use Manifest encoding (MsSmoothManifestEncoding) to specify the encoding format for the server and client manifest. Valid options are utf8 and utf16.
    MxfAfdSignaling
    Optional. When you have AFD signaling set up in your output video stream, use this setting to choose whether to also include it in the MXF wrapper. Choose Don't copy (NO_COPY) to exclude AFD signaling from the MXF wrapper. Choose Copy from video stream (COPY_FROM_VIDEO) to copy the AFD values from the video stream for this output to the MXF wrapper. Regardless of which option you choose, the AFD values remain in the video stream. Related settings: To set up your output to include or exclude AFD values, see AfdSignaling, under VideoDescription. On the console, find AFD signaling under the output's video encoding settings.
    MxfProfile
    Specify the MXF profile, also called shim, for this output. When you choose Auto, MediaConvert chooses a profile based on the video codec and resolution. For a list of codecs supported with each MXF profile, see https://docs.aws.amazon.com/mediaconvert/latest/ug/codecs-supported-with-each-mxf-profile.html. For more information about the automatic selection behavior, see https://docs.aws.amazon.com/mediaconvert/latest/ug/default-automatic-selection-of-mxf-profiles.html.
    NielsenActiveWatermarkProcessType
    Choose the type of Nielsen watermarks that you want in your outputs. When you choose NAES 2 and NW (NAES2_AND_NW), you must provide a value for the setting SID (sourceId). When you choose CBET (CBET), you must provide a value for the setting CSID (cbetSourceId). When you choose NAES 2, NW, and CBET (NAES2_AND_NW_AND_CBET), you must provide values for both of these settings.
    NielsenSourceWatermarkStatusType
    Required. Specify whether your source content already contains Nielsen non-linear watermarks. When you set this value to Watermarked (WATERMARKED), the service fails the job. Nielsen requires that you add non-linear watermarking to only clean content that doesn't already have non-linear Nielsen watermarks.
    NielsenUniqueTicPerAudioTrackType
    To create assets that have the same TIC values in each audio track, keep the default value Share TICs (SAME_TICS_PER_TRACK). To create assets that have unique TIC values for each audio track, choose Use unique TICs (RESERVE_UNIQUE_TICS_PER_TRACK).
    NoiseFilterPostTemporalSharpening
    Optional. When you set Noise reducer (noiseReducer) to Temporal (TEMPORAL), you can use this setting to apply sharpening. The default behavior, Auto (AUTO), allows the transcoder to determine whether to apply filtering, depending on input type and quality. When you set Noise reducer to Temporal, your output bandwidth is reduced. When Post temporal sharpening is also enabled, that bandwidth reduction is smaller.
    NoiseReducerFilter
    Use Noise reducer filter (NoiseReducerFilter) to select one of the following spatial image filtering functions. To use this setting, you must also enable Noise reducer (NoiseReducer). * Bilateral preserves edges while reducing noise. * Mean (softest), Gaussian, Lanczos, and Sharpen (sharpest) do convolution filtering. * Conserve does min/max noise reduction. * Spatial does frequency-domain filtering based on JND principles. * Temporal optimizes video quality for complex motion.
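    The filter choice and its strength live under the output's video preprocessors; the fragment below is a sketch assuming the temporal filter, with illustrative strength and sharpening values.

        # Hypothetical VideoPreprocessors fragment: temporal noise reduction with auto sharpening.
        preprocessors_fragment = {
            "NoiseReducer": {
                "Filter": "TEMPORAL",
                "TemporalFilterSettings": {
                    "Strength": 8,                      # assumed value
                    "PostTemporalSharpening": "AUTO",
                },
            }
        }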
    Order
    Optional. When you request lists of resources, you can specify whether they are sorted in ASCENDING or DESCENDING order. Default varies by resource.
    OutputGroupType
    Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming, CMAF)
    OutputSdt
    Selects method of inserting SDT information into output stream. "Follow input SDT" copies SDT information from input stream to output stream. "Follow input SDT if present" copies SDT information from input stream to output stream if SDT information is present in the input; otherwise it falls back on the user-defined values. "SDT Manually" means the user will enter the SDT information. "No SDT" means output stream will not contain SDT information.
    PresetListBy
    Optional. When you request a list of presets, you can choose to list them alphabetically by NAME or chronologically by CREATION_DATE. If you don't specify, the service will list them by name.
    PricingPlan
    Specifies whether the pricing plan for the queue is on-demand or reserved. For on-demand, you pay per minute, billed in increments of .01 minute. For reserved, you pay for the transcoding capacity of the entire queue, regardless of how much or how little you use it. Reserved pricing requires a 12-month commitment.
    ProresCodecProfile
    Use Profile (ProResCodecProfile) to specify the type of Apple ProRes codec to use for this output.
    ProresFramerateControl
    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
    ProresFramerateConversionAlgorithm
    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
    ProresInterlaceMode
    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.
    ProresParControl
    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
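    When you choose SPECIFIED, the entry above names parNumerator and parDenominator as the required companion settings. A minimal sketch of that fragment as a Dart map literal follows; parControl is an assumed JSON key for this setting, and the 40:33 ratio is only an example value.

        // Hedged sketch of a ProRes PAR override as it might appear in a JSON job
        // specification, written as a Dart map. parNumerator and parDenominator are
        // named in the description above; parControl is an assumed key name.
        const proresParFragment = <String, Object>{
          'parControl': 'SPECIFIED',
          'parNumerator': 40, // example only: a 40:33 anamorphic PAR
          'parDenominator': 33,
        };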
    ProresSlowPal
    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
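    As a concrete illustration of those required settings, here is a hedged Dart-map sketch of the relevant job-specification fragment; framerateControl, framerateNumerator, and framerateDenominator are quoted from the text above, while slowPal is an assumed key name for the slow PAL switch itself.

        // Slow PAL plus its required companion settings, per the description above.
        // 'slowPal' is an assumed JSON key; the other keys come from the text.
        const proresSlowPalFragment = <String, Object>{
          'slowPal': 'ENABLED',
          'framerateControl': 'SPECIFIED',
          'framerateNumerator': 25,
          'framerateDenominator': 1,
        };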
    ProresTelecine
    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 fps without adjusting the field polarity.
    QueueListBy
    Optional. When you request a list of queues, you can choose to list them alphabetically by NAME or chronologically by CREATION_DATE. If you don't specify, the service will list them by creation date.
    QueueStatus
    Queues can be ACTIVE or PAUSED. If you pause a queue, jobs in that queue won't begin. Jobs that are running when you pause a queue continue to run until they finish or result in an error.
    RenewalType
    Specifies whether the term of your reserved queue pricing plan is automatically extended (AUTO_RENEW) or expires (EXPIRE) at the end of the term.
    ReservationPlanStatus
    Specifies whether the pricing plan for your reserved queue is ACTIVE or EXPIRED.
    RespondToAfd
    Use Respond to AFD (RespondToAfd) to specify how the service changes the video itself in response to AFD values in the input. * Choose Respond to clip the input video frame according to the AFD value, input display aspect ratio, and output display aspect ratio. * Choose Passthrough to include the input AFD values. Do not choose this when AfdSignaling is set to (NONE). A preferred implementation of this workflow is to set RespondToAfd to (NONE) and set AfdSignaling to (AUTO). * Choose None to remove all input AFD values from this output.
    S3ObjectCannedAcl
    Choose an Amazon S3 canned ACL for MediaConvert to apply to this output.
    S3ServerSideEncryptionType
    Specify how you want your data keys managed. AWS uses data keys to encrypt your content. AWS also encrypts the data keys themselves, using a customer master key (CMK), and then stores the encrypted data keys alongside your encrypted content. Use this setting to specify which AWS service manages the CMK. For the simplest setup, choose Amazon S3 (SERVER_SIDE_ENCRYPTION_S3). If you want your master key to be managed by AWS Key Management Service (KMS), choose AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). By default, when you choose AWS KMS, KMS uses the AWS managed customer master key (CMK) associated with Amazon S3 to encrypt your data keys. You can optionally choose to specify a different, customer managed CMK. Do so by specifying the Amazon Resource Name (ARN) of the key for the setting KMS ARN (kmsKeyArn).
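    For orientation only, the two choices above might look like the following Dart-map fragments; kmsKeyArn is the setting named in the description, while encryptionType is an assumed key name and the ARN is a placeholder.

        // Simplest setup: let Amazon S3 manage the data keys and the CMK.
        const s3ManagedEncryption = <String, String>{
          'encryptionType': 'SERVER_SIDE_ENCRYPTION_S3', // assumed key name
        };

        // AWS KMS with an optional customer managed CMK. kmsKeyArn is named in the
        // description above; the ARN shown here is only a placeholder.
        const kmsEncryption = <String, String>{
          'encryptionType': 'SERVER_SIDE_ENCRYPTION_KMS',
          'kmsKeyArn': 'arn:aws:kms:us-west-2:111122223333:key/example-key-id',
        };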
    ScalingBehavior
    Specify how the service handles outputs that have a different aspect ratio from the input aspect ratio. Choose Stretch to output (STRETCH_TO_OUTPUT) to have the service stretch your video image to fit. Keep the setting Default (DEFAULT) to have the service letterbox your video instead. This setting overrides any value that you specify for the setting Selection placement (position) in this output.
    SccDestinationFramerate
    Set Framerate (SccDestinationFramerate) to make sure that the captions and the video are synchronized in the output. Specify a frame rate that matches the frame rate of the associated video. If the video frame rate is 29.97, choose 29.97 dropframe (FRAMERATE_29_97_DROPFRAME) only if the video has video_insertion=true and drop_frame_timecode=true; otherwise, choose 29.97 non-dropframe (FRAMERATE_29_97_NON_DROPFRAME).
    SimulateReservedQueue
    Enable this setting when you run a test job to estimate how many reserved transcoding slots (RTS) you need. When this is enabled, MediaConvert runs your job from an on-demand queue with similar performance to what you will see with one RTS in a reserved queue. This setting is disabled by default.
    StatusUpdateInterval
    Specify how often MediaConvert sends STATUS_UPDATE events to Amazon CloudWatch Events. Set the interval, in seconds, between status updates. MediaConvert sends an update at this interval from the time the service begins processing your job to the time it completes the transcode or encounters an error.
    TeletextPageType
    A page type as defined in the standard ETSI EN 300 468, Table 94
    TimecodeBurninPosition
    Use Position (Position) under Timecode burn-in (TimecodeBurnIn) to specify the location of the burned-in timecode on the output video.
    TimecodeSource
    Use Source (TimecodeSource) to set how timecodes are handled within this job. To make sure that your video, audio, captions, and markers are synchronized and that time-based features, such as image inserter, work correctly, choose the Timecode source option that matches your assets. All timecodes are in a 24-hour format with frame number (HH:MM:SS:FF). * Embedded (EMBEDDED) - Use the timecode that is in the input video. If no embedded timecode is in the source, the service will use Start at 0 (ZEROBASED) instead. * Start at 0 (ZEROBASED) - Set the timecode of the initial frame to 00:00:00:00. * Specified Start (SPECIFIEDSTART) - Set the timecode of the initial frame to a value other than zero. You use Start timecode (Start) to provide this value.
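    A short, hedged sketch of the three options follows, written as Dart maps and assuming a timecode-configuration object keyed by source and start (Start is the setting named above for SPECIFIEDSTART; source is an assumed key name).

        // The three timecode source options from the description above, as assumed
        // JSON fragments. Timecodes use the HH:MM:SS:FF format noted in the text.
        const embeddedTimecode = <String, String>{'source': 'EMBEDDED'};
        const zeroBasedTimecode = <String, String>{'source': 'ZEROBASED'};
        const specifiedStartTimecode = <String, String>{
          'source': 'SPECIFIEDSTART',
          'start': '01:00:00:00', // example start timecode
        };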
    TimedMetadata
    Applies only to HLS outputs. Use this setting to specify whether the service inserts the ID3 timed metadata from the input in this output.
    TtmlStylePassthrough
    Pass through style and position information from a TTML-like input source (TTML, SMPTE-TT) to the TTML output.
    Type
    Vc3Class
    Specify the VC3 class to choose the quality characteristics for this output. VC3 class, together with the settings Framerate (framerateNumerator and framerateDenominator) and Resolution (height and width), determine your output bitrate. For example, say that your video resolution is 1920x1080 and your framerate is 29.97. Then Class 145 (CLASS_145) gives you an output with a bitrate of approximately 145 Mbps and Class 220 (CLASS_220) gives you an output with a bitrate of approximately 220 Mbps. VC3 class also specifies the color bit depth of your output.
    Vc3FramerateControl
    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
    Vc3FramerateConversionAlgorithm
    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
    Vc3InterlaceMode
    Optional. Choose the scan line type for this output. If you don't specify a value, MediaConvert will create a progressive output.
    Vc3SlowPal
    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
    Vc3Telecine
    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 fps without adjusting the field polarity.
    VideoCodec
    Type of video codec
    VideoTimecodeInsertion
    Applies only to H.264, H.265, MPEG2, and ProRes outputs. Only enable Timecode insertion when the input frame rate is identical to the output frame rate. To include timecodes in this output, set Timecode insertion (VideoTimecodeInsertion) to PIC_TIMING_SEI. To leave them out, set it to DISABLED. Default is DISABLED. When the service inserts timecodes in an output, by default, it uses any embedded timecodes from the input. If none are present, the service will set the timecode for the first output frame to zero. To change this default behavior, adjust the settings under Timecode configuration (TimecodeConfig). In the console, these settings are located under Job > Job settings > Timecode configuration. Note - Timecode source under input settings (InputTimecodeSource) does not affect the timecodes that are inserted in the output. Source under Job settings > Timecode configuration (TimecodeSource) does.
    Vp8FramerateControl
    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
    Vp8FramerateConversionAlgorithm
    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
    Vp8ParControl
    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
    Vp8QualityTuningLevel
    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.
    Vp8RateControlMode
    With the VP8 codec, you can use only the variable bitrate (VBR) rate control mode.
    Vp9FramerateControl
    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
    Vp9FramerateConversionAlgorithm
    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
    Vp9ParControl
    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
    Vp9QualityTuningLevel
    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.
    Vp9RateControlMode
    With the VP9 codec, you can use only the variable bitrate (VBR) rate control mode.
    WatermarkingStrength
    Optional. Ignore this setting unless Nagra support directs you to specify a value. When you don't specify a value here, the Nagra NexGuard library uses its default value.
    WavFormat
    The service defaults to using RIFF for WAV outputs. If your output audio is likely to exceed 4 GB in file size, or if you otherwise need the extended support of the RF64 format, set your output WAV file format to RF64.
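    A minimal sketch of that choice, assuming the WAV settings object exposes it under a format key:

        // Assumed fragment of WAV codec settings; 'format' is a hypothetical key
        // for the container format described above.
        const wavSettingsFragment = <String, String>{
          'format': 'RF64', // use RF64 when the output may exceed 4 GB
        };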

    Extensions

    AacAudioDescriptionBroadcasterMixFromString on String
    AacAudioDescriptionBroadcasterMixValueExtension on AacAudioDescriptionBroadcasterMix
    AacCodecProfileFromString on String
    AacCodecProfileValueExtension on AacCodecProfile
    AacCodingModeFromString on String
    AacCodingModeValueExtension on AacCodingMode
    AacRateControlModeFromString on String
    AacRateControlModeValueExtension on AacRateControlMode
    AacRawFormatFromString on String
    AacRawFormatValueExtension on AacRawFormat
    AacSpecificationFromString on String
    AacSpecificationValueExtension on AacSpecification
    AacVbrQualityFromString on String
    AacVbrQualityValueExtension on AacVbrQuality
    Ac3BitstreamModeFromString on String
    Ac3BitstreamModeValueExtension on Ac3BitstreamMode
    Ac3CodingModeFromString on String
    Ac3CodingModeValueExtension on Ac3CodingMode
    Ac3DynamicRangeCompressionProfileFromString on String
    Ac3DynamicRangeCompressionProfileValueExtension on Ac3DynamicRangeCompressionProfile
    Ac3LfeFilterFromString on String
    Ac3LfeFilterValueExtension on Ac3LfeFilter
    Ac3MetadataControlFromString on String
    Ac3MetadataControlValueExtension on Ac3MetadataControl
    AccelerationModeFromString on String
    AccelerationModeValueExtension on AccelerationMode
    AccelerationStatusFromString on String
    AccelerationStatusValueExtension on AccelerationStatus
    AfdSignalingFromString on String
    AfdSignalingValueExtension on AfdSignaling
    AlphaBehaviorFromString on String
    AlphaBehaviorValueExtension on AlphaBehavior
    AncillaryConvert608To708FromString on String
    AncillaryConvert608To708ValueExtension on AncillaryConvert608To708
    AncillaryTerminateCaptionsFromString on String
    AncillaryTerminateCaptionsValueExtension on AncillaryTerminateCaptions
    AntiAliasFromString on String
    AntiAliasValueExtension on AntiAlias
    AudioChannelTagFromString on String
    AudioChannelTagValueExtension on AudioChannelTag
    AudioCodecFromString on String
    AudioCodecValueExtension on AudioCodec
    AudioDefaultSelectionFromString on String
    AudioDefaultSelectionValueExtension on AudioDefaultSelection
    AudioLanguageCodeControlFromString on String
    AudioLanguageCodeControlValueExtension on AudioLanguageCodeControl
    AudioNormalizationAlgorithmControlFromString on String
    AudioNormalizationAlgorithmControlValueExtension on AudioNormalizationAlgorithmControl
    AudioNormalizationAlgorithmFromString on String
    AudioNormalizationAlgorithmValueExtension on AudioNormalizationAlgorithm
    AudioNormalizationLoudnessLoggingFromString on String
    AudioNormalizationLoudnessLoggingValueExtension on AudioNormalizationLoudnessLogging
    AudioNormalizationPeakCalculationFromString on String
    AudioNormalizationPeakCalculationValueExtension on AudioNormalizationPeakCalculation
    AudioSelectorTypeFromString on String
    AudioSelectorTypeValueExtension on AudioSelectorType
    AudioTypeControlFromString on String
    AudioTypeControlValueExtension on AudioTypeControl
    Av1AdaptiveQuantizationFromString on String
    Av1AdaptiveQuantizationValueExtension on Av1AdaptiveQuantization
    Av1FramerateControlFromString on String
    Av1FramerateControlValueExtension on Av1FramerateControl
    Av1FramerateConversionAlgorithmFromString on String
    Av1FramerateConversionAlgorithmValueExtension on Av1FramerateConversionAlgorithm
    Av1RateControlModeFromString on String
    Av1RateControlModeValueExtension on Av1RateControlMode
    Av1SpatialAdaptiveQuantizationFromString on String
    Av1SpatialAdaptiveQuantizationValueExtension on Av1SpatialAdaptiveQuantization
    AvcIntraClassFromString on String
    AvcIntraClassValueExtension on AvcIntraClass
    AvcIntraFramerateControlFromString on String
    AvcIntraFramerateControlValueExtension on AvcIntraFramerateControl
    AvcIntraFramerateConversionAlgorithmFromString on String
    AvcIntraFramerateConversionAlgorithmValueExtension on AvcIntraFramerateConversionAlgorithm
    AvcIntraInterlaceModeFromString on String
    AvcIntraInterlaceModeValueExtension on AvcIntraInterlaceMode
    AvcIntraSlowPalFromString on String
    AvcIntraSlowPalValueExtension on AvcIntraSlowPal
    AvcIntraTelecineFromString on String
    AvcIntraTelecineValueExtension on AvcIntraTelecine
    BillingTagsSourceFromString on String
    BillingTagsSourceValueExtension on BillingTagsSource
    BurninSubtitleAlignmentFromString on String
    BurninSubtitleAlignmentValueExtension on BurninSubtitleAlignment
    BurninSubtitleBackgroundColorFromString on String
    BurninSubtitleBackgroundColorValueExtension on BurninSubtitleBackgroundColor
    BurninSubtitleFontColorFromString on String
    BurninSubtitleFontColorValueExtension on BurninSubtitleFontColor
    BurninSubtitleOutlineColorFromString on String
    BurninSubtitleOutlineColorValueExtension on BurninSubtitleOutlineColor
    BurninSubtitleShadowColorFromString on String
    BurninSubtitleShadowColorValueExtension on BurninSubtitleShadowColor
    BurninSubtitleTeletextSpacingFromString on String
    BurninSubtitleTeletextSpacingValueExtension on BurninSubtitleTeletextSpacing
    CaptionDestinationTypeFromString on String
    CaptionDestinationTypeValueExtension on CaptionDestinationType
    CaptionSourceTypeFromString on String
    CaptionSourceTypeValueExtension on CaptionSourceType
    CmafClientCacheFromString on String
    CmafClientCacheValueExtension on CmafClientCache
    CmafCodecSpecificationFromString on String
    CmafCodecSpecificationValueExtension on CmafCodecSpecification
    CmafEncryptionTypeFromString on String
    CmafEncryptionTypeValueExtension on CmafEncryptionType
    CmafInitializationVectorInManifestFromString on String
    CmafInitializationVectorInManifestValueExtension on CmafInitializationVectorInManifest
    CmafKeyProviderTypeFromString on String
    CmafKeyProviderTypeValueExtension on CmafKeyProviderType
    CmafManifestCompressionFromString on String
    CmafManifestCompressionValueExtension on CmafManifestCompression
    CmafManifestDurationFormatFromString on String
    CmafManifestDurationFormatValueExtension on CmafManifestDurationFormat
    CmafMpdProfileFromString on String
    CmafMpdProfileValueExtension on CmafMpdProfile
    CmafSegmentControlFromString on String
    CmafSegmentControlValueExtension on CmafSegmentControl
    CmafStreamInfResolutionFromString on String
    CmafStreamInfResolutionValueExtension on CmafStreamInfResolution
    CmafWriteDASHManifestFromString on String
    CmafWriteDASHManifestValueExtension on CmafWriteDASHManifest
    CmafWriteHLSManifestFromString on String
    CmafWriteHLSManifestValueExtension on CmafWriteHLSManifest
    CmafWriteSegmentTimelineInRepresentationFromString on String
    CmafWriteSegmentTimelineInRepresentationValueExtension on CmafWriteSegmentTimelineInRepresentation
    CmfcAudioDurationFromString on String
    CmfcAudioDurationValueExtension on CmfcAudioDuration
    CmfcScte35EsamFromString on String
    CmfcScte35EsamValueExtension on CmfcScte35Esam
    CmfcScte35SourceFromString on String
    CmfcScte35SourceValueExtension on CmfcScte35Source
    ColorMetadataFromString on String
    ColorMetadataValueExtension on ColorMetadata
    ColorSpaceConversionFromString on String
    ColorSpaceConversionValueExtension on ColorSpaceConversion
    ColorSpaceFromString on String
    ColorSpaceUsageFromString on String
    ColorSpaceUsageValueExtension on ColorSpaceUsage
    ColorSpaceValueExtension on ColorSpace
    CommitmentFromString on String
    CommitmentValueExtension on Commitment
    ContainerTypeFromString on String
    ContainerTypeValueExtension on ContainerType
    DashIsoHbbtvComplianceFromString on String
    DashIsoHbbtvComplianceValueExtension on DashIsoHbbtvCompliance
    DashIsoMpdProfileFromString on String
    DashIsoMpdProfileValueExtension on DashIsoMpdProfile
    DashIsoPlaybackDeviceCompatibilityFromString on String
    DashIsoPlaybackDeviceCompatibilityValueExtension on DashIsoPlaybackDeviceCompatibility
    DashIsoSegmentControlFromString on String
    DashIsoSegmentControlValueExtension on DashIsoSegmentControl
    DashIsoWriteSegmentTimelineInRepresentationFromString on String
    DashIsoWriteSegmentTimelineInRepresentationValueExtension on DashIsoWriteSegmentTimelineInRepresentation
    DecryptionModeFromString on String
    DecryptionModeValueExtension on DecryptionMode
    DeinterlaceAlgorithmFromString on String
    DeinterlaceAlgorithmValueExtension on DeinterlaceAlgorithm
    DeinterlacerControlFromString on String
    DeinterlacerControlValueExtension on DeinterlacerControl
    DeinterlacerModeFromString on String
    DeinterlacerModeValueExtension on DeinterlacerMode
    DescribeEndpointsModeFromString on String
    DescribeEndpointsModeValueExtension on DescribeEndpointsMode
    DolbyVisionLevel6ModeFromString on String
    DolbyVisionLevel6ModeValueExtension on DolbyVisionLevel6Mode
    DolbyVisionProfileFromString on String
    DolbyVisionProfileValueExtension on DolbyVisionProfile
    DropFrameTimecodeFromString on String
    DropFrameTimecodeValueExtension on DropFrameTimecode
    DvbSubtitleAlignmentFromString on String
    DvbSubtitleAlignmentValueExtension on DvbSubtitleAlignment
    DvbSubtitleBackgroundColorFromString on String
    DvbSubtitleBackgroundColorValueExtension on DvbSubtitleBackgroundColor
    DvbSubtitleFontColorFromString on String
    DvbSubtitleFontColorValueExtension on DvbSubtitleFontColor
    DvbSubtitleOutlineColorFromString on String
    DvbSubtitleOutlineColorValueExtension on DvbSubtitleOutlineColor
    DvbSubtitleShadowColorFromString on String
    DvbSubtitleShadowColorValueExtension on DvbSubtitleShadowColor
    DvbSubtitleTeletextSpacingFromString on String
    DvbSubtitleTeletextSpacingValueExtension on DvbSubtitleTeletextSpacing
    DvbSubtitlingTypeFromString on String
    DvbSubtitlingTypeValueExtension on DvbSubtitlingType
    Eac3AtmosBitstreamModeFromString on String
    Eac3AtmosBitstreamModeValueExtension on Eac3AtmosBitstreamMode
    Eac3AtmosCodingModeFromString on String
    Eac3AtmosCodingModeValueExtension on Eac3AtmosCodingMode
    Eac3AtmosDialogueIntelligenceFromString on String
    Eac3AtmosDialogueIntelligenceValueExtension on Eac3AtmosDialogueIntelligence
    Eac3AtmosDynamicRangeCompressionLineFromString on String
    Eac3AtmosDynamicRangeCompressionLineValueExtension on Eac3AtmosDynamicRangeCompressionLine
    Eac3AtmosDynamicRangeCompressionRfFromString on String
    Eac3AtmosDynamicRangeCompressionRfValueExtension on Eac3AtmosDynamicRangeCompressionRf
    Eac3AtmosMeteringModeFromString on String
    Eac3AtmosMeteringModeValueExtension on Eac3AtmosMeteringMode
    Eac3AtmosStereoDownmixFromString on String
    Eac3AtmosStereoDownmixValueExtension on Eac3AtmosStereoDownmix
    Eac3AtmosSurroundExModeFromString on String
    Eac3AtmosSurroundExModeValueExtension on Eac3AtmosSurroundExMode
    Eac3AttenuationControlFromString on String
    Eac3AttenuationControlValueExtension on Eac3AttenuationControl
    Eac3BitstreamModeFromString on String
    Eac3BitstreamModeValueExtension on Eac3BitstreamMode
    Eac3CodingModeFromString on String
    Eac3CodingModeValueExtension on Eac3CodingMode
    Eac3DcFilterFromString on String
    Eac3DcFilterValueExtension on Eac3DcFilter
    Eac3DynamicRangeCompressionLineFromString on String
    Eac3DynamicRangeCompressionLineValueExtension on Eac3DynamicRangeCompressionLine
    Eac3DynamicRangeCompressionRfFromString on String
    Eac3DynamicRangeCompressionRfValueExtension on Eac3DynamicRangeCompressionRf
    Eac3LfeControlFromString on String
    Eac3LfeControlValueExtension on Eac3LfeControl
    Eac3LfeFilterFromString on String
    Eac3LfeFilterValueExtension on Eac3LfeFilter
    Eac3MetadataControlFromString on String
    Eac3MetadataControlValueExtension on Eac3MetadataControl
    Eac3PassthroughControlFromString on String
    Eac3PassthroughControlValueExtension on Eac3PassthroughControl
    Eac3PhaseControlFromString on String
    Eac3PhaseControlValueExtension on Eac3PhaseControl
    Eac3StereoDownmixFromString on String
    Eac3StereoDownmixValueExtension on Eac3StereoDownmix
    Eac3SurroundExModeFromString on String
    Eac3SurroundExModeValueExtension on Eac3SurroundExMode
    Eac3SurroundModeFromString on String
    Eac3SurroundModeValueExtension on Eac3SurroundMode
    EmbeddedConvert608To708FromString on String
    EmbeddedConvert608To708ValueExtension on EmbeddedConvert608To708
    EmbeddedTerminateCaptionsFromString on String
    EmbeddedTerminateCaptionsValueExtension on EmbeddedTerminateCaptions
    F4vMoovPlacementFromString on String
    F4vMoovPlacementValueExtension on F4vMoovPlacement
    FileSourceConvert608To708FromString on String
    FileSourceConvert608To708ValueExtension on FileSourceConvert608To708
    FontScriptFromString on String
    FontScriptValueExtension on FontScript
    H264AdaptiveQuantizationFromString on String
    H264AdaptiveQuantizationValueExtension on H264AdaptiveQuantization
    H264CodecLevelFromString on String
    H264CodecLevelValueExtension on H264CodecLevel
    H264CodecProfileFromString on String
    H264CodecProfileValueExtension on H264CodecProfile
    H264DynamicSubGopFromString on String
    H264DynamicSubGopValueExtension on H264DynamicSubGop
    H264EntropyEncodingFromString on String
    H264EntropyEncodingValueExtension on H264EntropyEncoding
    H264FieldEncodingFromString on String
    H264FieldEncodingValueExtension on H264FieldEncoding
    H264FlickerAdaptiveQuantizationFromString on String
    H264FlickerAdaptiveQuantizationValueExtension on H264FlickerAdaptiveQuantization
    H264FramerateControlFromString on String
    H264FramerateControlValueExtension on H264FramerateControl
    H264FramerateConversionAlgorithmFromString on String
    H264FramerateConversionAlgorithmValueExtension on H264FramerateConversionAlgorithm
    H264GopBReferenceFromString on String
    H264GopBReferenceValueExtension on H264GopBReference
    H264GopSizeUnitsFromString on String
    H264GopSizeUnitsValueExtension on H264GopSizeUnits
    H264InterlaceModeFromString on String
    H264InterlaceModeValueExtension on H264InterlaceMode
    H264ParControlFromString on String
    H264ParControlValueExtension on H264ParControl
    H264QualityTuningLevelFromString on String
    H264QualityTuningLevelValueExtension on H264QualityTuningLevel
    H264RateControlModeFromString on String
    H264RateControlModeValueExtension on H264RateControlMode
    H264RepeatPpsFromString on String
    H264RepeatPpsValueExtension on H264RepeatPps
    H264SceneChangeDetectFromString on String
    H264SceneChangeDetectValueExtension on H264SceneChangeDetect
    H264SlowPalFromString on String
    H264SlowPalValueExtension on H264SlowPal
    H264SpatialAdaptiveQuantizationFromString on String
    H264SpatialAdaptiveQuantizationValueExtension on H264SpatialAdaptiveQuantization
    H264SyntaxFromString on String
    H264SyntaxValueExtension on H264Syntax
    H264TelecineFromString on String
    H264TelecineValueExtension on H264Telecine
    H264TemporalAdaptiveQuantizationFromString on String
    H264TemporalAdaptiveQuantizationValueExtension on H264TemporalAdaptiveQuantization
    H264UnregisteredSeiTimecodeFromString on String
    H264UnregisteredSeiTimecodeValueExtension on H264UnregisteredSeiTimecode
    H265AdaptiveQuantizationFromString on String
    H265AdaptiveQuantizationValueExtension on H265AdaptiveQuantization
    H265AlternateTransferFunctionSeiFromString on String
    H265AlternateTransferFunctionSeiValueExtension on H265AlternateTransferFunctionSei
    H265CodecLevelFromString on String
    H265CodecLevelValueExtension on H265CodecLevel
    H265CodecProfileFromString on String
    H265CodecProfileValueExtension on H265CodecProfile
    H265DynamicSubGopFromString on String
    H265DynamicSubGopValueExtension on H265DynamicSubGop
    H265FlickerAdaptiveQuantizationFromString on String
    H265FlickerAdaptiveQuantizationValueExtension on H265FlickerAdaptiveQuantization
    H265FramerateControlFromString on String
    H265FramerateControlValueExtension on H265FramerateControl
    H265FramerateConversionAlgorithmFromString on String
    H265FramerateConversionAlgorithmValueExtension on H265FramerateConversionAlgorithm
    H265GopBReferenceFromString on String
    H265GopBReferenceValueExtension on H265GopBReference
    H265GopSizeUnitsFromString on String
    H265GopSizeUnitsValueExtension on H265GopSizeUnits
    H265InterlaceModeFromString on String
    H265InterlaceModeValueExtension on H265InterlaceMode
    H265ParControlFromString on String
    H265ParControlValueExtension on H265ParControl
    H265QualityTuningLevelFromString on String
    H265QualityTuningLevelValueExtension on H265QualityTuningLevel
    H265RateControlModeFromString on String
    H265RateControlModeValueExtension on H265RateControlMode
    H265SampleAdaptiveOffsetFilterModeFromString on String
    H265SampleAdaptiveOffsetFilterModeValueExtension on H265SampleAdaptiveOffsetFilterMode
    H265SceneChangeDetectFromString on String
    H265SceneChangeDetectValueExtension on H265SceneChangeDetect
    H265SlowPalFromString on String
    H265SlowPalValueExtension on H265SlowPal
    H265SpatialAdaptiveQuantizationFromString on String
    H265SpatialAdaptiveQuantizationValueExtension on H265SpatialAdaptiveQuantization
    H265TelecineFromString on String
    H265TelecineValueExtension on H265Telecine
    H265TemporalAdaptiveQuantizationFromString on String
    H265TemporalAdaptiveQuantizationValueExtension on H265TemporalAdaptiveQuantization
    H265TemporalIdsFromString on String
    H265TemporalIdsValueExtension on H265TemporalIds
    H265TilesFromString on String
    H265TilesValueExtension on H265Tiles
    H265UnregisteredSeiTimecodeFromString on String
    H265UnregisteredSeiTimecodeValueExtension on H265UnregisteredSeiTimecode
    H265WriteMp4PackagingTypeFromString on String
    H265WriteMp4PackagingTypeValueExtension on H265WriteMp4PackagingType
    HlsAdMarkersFromString on String
    HlsAdMarkersValueExtension on HlsAdMarkers
    HlsAudioOnlyContainerFromString on String
    HlsAudioOnlyContainerValueExtension on HlsAudioOnlyContainer
    HlsAudioOnlyHeaderFromString on String
    HlsAudioOnlyHeaderValueExtension on HlsAudioOnlyHeader
    HlsAudioTrackTypeFromString on String
    HlsAudioTrackTypeValueExtension on HlsAudioTrackType
    HlsCaptionLanguageSettingFromString on String
    HlsCaptionLanguageSettingValueExtension on HlsCaptionLanguageSetting
    HlsClientCacheFromString on String
    HlsClientCacheValueExtension on HlsClientCache
    HlsCodecSpecificationFromString on String
    HlsCodecSpecificationValueExtension on HlsCodecSpecification
    HlsDirectoryStructureFromString on String
    HlsDirectoryStructureValueExtension on HlsDirectoryStructure
    HlsEncryptionTypeFromString on String
    HlsEncryptionTypeValueExtension on HlsEncryptionType
    HlsIFrameOnlyManifestFromString on String
    HlsIFrameOnlyManifestValueExtension on HlsIFrameOnlyManifest
    HlsInitializationVectorInManifestFromString on String
    HlsInitializationVectorInManifestValueExtension on HlsInitializationVectorInManifest
    HlsKeyProviderTypeFromString on String
    HlsKeyProviderTypeValueExtension on HlsKeyProviderType
    HlsManifestCompressionFromString on String
    HlsManifestCompressionValueExtension on HlsManifestCompression
    HlsManifestDurationFormatFromString on String
    HlsManifestDurationFormatValueExtension on HlsManifestDurationFormat
    HlsOfflineEncryptedFromString on String
    HlsOfflineEncryptedValueExtension on HlsOfflineEncrypted
    HlsOutputSelectionFromString on String
    HlsOutputSelectionValueExtension on HlsOutputSelection
    HlsProgramDateTimeFromString on String
    HlsProgramDateTimeValueExtension on HlsProgramDateTime
    HlsSegmentControlFromString on String
    HlsSegmentControlValueExtension on HlsSegmentControl
    HlsStreamInfResolutionFromString on String
    HlsStreamInfResolutionValueExtension on HlsStreamInfResolution
    HlsTimedMetadataId3FrameFromString on String
    HlsTimedMetadataId3FrameValueExtension on HlsTimedMetadataId3Frame
    ImscStylePassthroughFromString on String
    ImscStylePassthroughValueExtension on ImscStylePassthrough
    InputDeblockFilterFromString on String
    InputDeblockFilterValueExtension on InputDeblockFilter
    InputDenoiseFilterFromString on String
    InputDenoiseFilterValueExtension on InputDenoiseFilter
    InputFilterEnableFromString on String
    InputFilterEnableValueExtension on InputFilterEnable
    InputPsiControlFromString on String
    InputPsiControlValueExtension on InputPsiControl
    InputRotateFromString on String
    InputRotateValueExtension on InputRotate
    InputScanTypeFromString on String
    InputScanTypeValueExtension on InputScanType
    InputTimecodeSourceFromString on String
    InputTimecodeSourceValueExtension on InputTimecodeSource
    JobPhaseFromString on String
    JobPhaseValueExtension on JobPhase
    JobStatusFromString on String
    JobStatusValueExtension on JobStatus
    JobTemplateListByFromString on String
    JobTemplateListByValueExtension on JobTemplateListBy
    LanguageCodeFromString on String
    LanguageCodeValueExtension on LanguageCode
    M2tsAudioBufferModelFromString on String
    M2tsAudioBufferModelValueExtension on M2tsAudioBufferModel
    M2tsAudioDurationFromString on String
    M2tsAudioDurationValueExtension on M2tsAudioDuration
    M2tsBufferModelFromString on String
    M2tsBufferModelValueExtension on M2tsBufferModel
    M2tsEbpAudioIntervalFromString on String
    M2tsEbpAudioIntervalValueExtension on M2tsEbpAudioInterval
    M2tsEbpPlacementFromString on String
    M2tsEbpPlacementValueExtension on M2tsEbpPlacement
    M2tsEsRateInPesFromString on String
    M2tsEsRateInPesValueExtension on M2tsEsRateInPes
    M2tsForceTsVideoEbpOrderFromString on String
    M2tsForceTsVideoEbpOrderValueExtension on M2tsForceTsVideoEbpOrder
    M2tsNielsenId3FromString on String
    M2tsNielsenId3ValueExtension on M2tsNielsenId3
    M2tsPcrControlFromString on String
    M2tsPcrControlValueExtension on M2tsPcrControl
    M2tsRateModeFromString on String
    M2tsRateModeValueExtension on M2tsRateMode
    M2tsScte35SourceFromString on String
    M2tsScte35SourceValueExtension on M2tsScte35Source
    M2tsSegmentationMarkersFromString on String
    M2tsSegmentationMarkersValueExtension on M2tsSegmentationMarkers
    M2tsSegmentationStyleFromString on String
    M2tsSegmentationStyleValueExtension on M2tsSegmentationStyle
    M3u8AudioDurationFromString on String
    M3u8AudioDurationValueExtension on M3u8AudioDuration
    M3u8NielsenId3FromString on String
    M3u8NielsenId3ValueExtension on M3u8NielsenId3
    M3u8PcrControlFromString on String
    M3u8PcrControlValueExtension on M3u8PcrControl
    M3u8Scte35SourceFromString on String
    M3u8Scte35SourceValueExtension on M3u8Scte35Source
    MotionImageInsertionModeFromString on String
    MotionImageInsertionModeValueExtension on MotionImageInsertionMode
    MotionImagePlaybackFromString on String
    MotionImagePlaybackValueExtension on MotionImagePlayback
    MovClapAtomFromString on String
    MovClapAtomValueExtension on MovClapAtom
    MovCslgAtomFromString on String
    MovCslgAtomValueExtension on MovCslgAtom
    MovMpeg2FourCCControlFromString on String
    MovMpeg2FourCCControlValueExtension on MovMpeg2FourCCControl
    MovPaddingControlFromString on String
    MovPaddingControlValueExtension on MovPaddingControl
    MovReferenceFromString on String
    MovReferenceValueExtension on MovReference
    Mp3RateControlModeFromString on String
    Mp3RateControlModeValueExtension on Mp3RateControlMode
    Mp4CslgAtomFromString on String
    Mp4CslgAtomValueExtension on Mp4CslgAtom
    Mp4FreeSpaceBoxFromString on String
    Mp4FreeSpaceBoxValueExtension on Mp4FreeSpaceBox
    Mp4MoovPlacementFromString on String
    Mp4MoovPlacementValueExtension on Mp4MoovPlacement
    MpdAccessibilityCaptionHintsFromString on String
    MpdAccessibilityCaptionHintsValueExtension on MpdAccessibilityCaptionHints
    MpdAudioDurationFromString on String
    MpdAudioDurationValueExtension on MpdAudioDuration
    MpdCaptionContainerTypeFromString on String
    MpdCaptionContainerTypeValueExtension on MpdCaptionContainerType
    MpdScte35EsamFromString on String
    MpdScte35EsamValueExtension on MpdScte35Esam
    MpdScte35SourceFromString on String
    MpdScte35SourceValueExtension on MpdScte35Source
    Mpeg2AdaptiveQuantizationFromString on String
    Mpeg2AdaptiveQuantizationValueExtension on Mpeg2AdaptiveQuantization
    Mpeg2CodecLevelFromString on String
    Mpeg2CodecLevelValueExtension on Mpeg2CodecLevel
    Mpeg2CodecProfileFromString on String
    Mpeg2CodecProfileValueExtension on Mpeg2CodecProfile
    Mpeg2DynamicSubGopFromString on String
    Mpeg2DynamicSubGopValueExtension on Mpeg2DynamicSubGop
    Mpeg2FramerateControlFromString on String
    Mpeg2FramerateControlValueExtension on Mpeg2FramerateControl
    Mpeg2FramerateConversionAlgorithmFromString on String
    Mpeg2FramerateConversionAlgorithmValueExtension on Mpeg2FramerateConversionAlgorithm
    Mpeg2GopSizeUnitsFromString on String
    Mpeg2GopSizeUnitsValueExtension on Mpeg2GopSizeUnits
    Mpeg2InterlaceModeFromString on String
    Mpeg2InterlaceModeValueExtension on Mpeg2InterlaceMode
    Mpeg2IntraDcPrecisionFromString on String
    Mpeg2IntraDcPrecisionValueExtension on Mpeg2IntraDcPrecision
    Mpeg2ParControlFromString on String
    Mpeg2ParControlValueExtension on Mpeg2ParControl
    Mpeg2QualityTuningLevelFromString on String
    Mpeg2QualityTuningLevelValueExtension on Mpeg2QualityTuningLevel
    Mpeg2RateControlModeFromString on String
    Mpeg2RateControlModeValueExtension on Mpeg2RateControlMode
    Mpeg2SceneChangeDetectFromString on String
    Mpeg2SceneChangeDetectValueExtension on Mpeg2SceneChangeDetect
    Mpeg2SlowPalFromString on String
    Mpeg2SlowPalValueExtension on Mpeg2SlowPal
    Mpeg2SpatialAdaptiveQuantizationFromString on String
    Mpeg2SpatialAdaptiveQuantizationValueExtension on Mpeg2SpatialAdaptiveQuantization
    Mpeg2SyntaxFromString on String
    Mpeg2SyntaxValueExtension on Mpeg2Syntax
    Mpeg2TelecineFromString on String
    Mpeg2TelecineValueExtension on Mpeg2Telecine
    Mpeg2TemporalAdaptiveQuantizationFromString on String
    Mpeg2TemporalAdaptiveQuantizationValueExtension on Mpeg2TemporalAdaptiveQuantization
    MsSmoothAudioDeduplicationFromString on String
    MsSmoothAudioDeduplicationValueExtension on MsSmoothAudioDeduplication
    MsSmoothManifestEncodingFromString on String
    MsSmoothManifestEncodingValueExtension on MsSmoothManifestEncoding
    MxfAfdSignalingFromString on String
    MxfAfdSignalingValueExtension on MxfAfdSignaling
    MxfProfileFromString on String
    MxfProfileValueExtension on MxfProfile
    NielsenActiveWatermarkProcessTypeFromString on String
    NielsenActiveWatermarkProcessTypeValueExtension on NielsenActiveWatermarkProcessType
    NielsenSourceWatermarkStatusTypeFromString on String
    NielsenSourceWatermarkStatusTypeValueExtension on NielsenSourceWatermarkStatusType
    NielsenUniqueTicPerAudioTrackTypeFromString on String
    NielsenUniqueTicPerAudioTrackTypeValueExtension on NielsenUniqueTicPerAudioTrackType
    NoiseFilterPostTemporalSharpeningFromString on String
    NoiseFilterPostTemporalSharpeningValueExtension on NoiseFilterPostTemporalSharpening
    NoiseReducerFilterFromString on String
    NoiseReducerFilterValueExtension on NoiseReducerFilter
    OrderFromString on String
    OrderValueExtension on Order
    OutputGroupTypeFromString on String
    OutputGroupTypeValueExtension on OutputGroupType
    OutputSdtFromString on String
    OutputSdtValueExtension on OutputSdt
    PresetListByFromString on String
    PresetListByValueExtension on PresetListBy
    PricingPlanFromString on String
    PricingPlanValueExtension on PricingPlan
    ProresCodecProfileFromString on String
    ProresCodecProfileValueExtension on ProresCodecProfile
    ProresFramerateControlFromString on String
    ProresFramerateControlValueExtension on ProresFramerateControl
    ProresFramerateConversionAlgorithmFromString on String
    ProresFramerateConversionAlgorithmValueExtension on ProresFramerateConversionAlgorithm
    ProresInterlaceModeFromString on String
    ProresInterlaceModeValueExtension on ProresInterlaceMode
    ProresParControlFromString on String
    ProresParControlValueExtension on ProresParControl
    ProresSlowPalFromString on String
    ProresSlowPalValueExtension on ProresSlowPal
    ProresTelecineFromString on String
    ProresTelecineValueExtension on ProresTelecine
    QueueListByFromString on String
    QueueListByValueExtension on QueueListBy
    QueueStatusFromString on String
    QueueStatusValueExtension on QueueStatus
    RenewalTypeFromString on String
    RenewalTypeValueExtension on RenewalType
    ReservationPlanStatusFromString on String
    ReservationPlanStatusValueExtension on ReservationPlanStatus
    RespondToAfdFromString on String
    RespondToAfdValueExtension on RespondToAfd
    S3ObjectCannedAclFromString on String
    S3ObjectCannedAclValueExtension on S3ObjectCannedAcl
    S3ServerSideEncryptionTypeFromString on String
    S3ServerSideEncryptionTypeValueExtension on S3ServerSideEncryptionType
    ScalingBehaviorFromString on String
    ScalingBehaviorValueExtension on ScalingBehavior
    SccDestinationFramerateFromString on String
    SccDestinationFramerateValueExtension on SccDestinationFramerate
    SimulateReservedQueueFromString on String
    SimulateReservedQueueValueExtension on SimulateReservedQueue
    StatusUpdateIntervalFromString on String
    StatusUpdateIntervalValueExtension on StatusUpdateInterval
    TeletextPageTypeFromString on String
    TeletextPageTypeValueExtension on TeletextPageType
    TimecodeBurninPositionFromString on String
    TimecodeBurninPositionValueExtension on TimecodeBurninPosition
    TimecodeSourceFromString on String
    TimecodeSourceValueExtension on TimecodeSource
    TimedMetadataFromString on String
    TimedMetadataValueExtension on TimedMetadata
    TtmlStylePassthroughFromString on String
    TtmlStylePassthroughValueExtension on TtmlStylePassthrough
    TypeFromString on String
    TypeValueExtension on Type
    Vc3ClassFromString on String
    Vc3ClassValueExtension on Vc3Class
    Vc3FramerateControlFromString on String
    Vc3FramerateControlValueExtension on Vc3FramerateControl
    Vc3FramerateConversionAlgorithmFromString on String
    Vc3FramerateConversionAlgorithmValueExtension on Vc3FramerateConversionAlgorithm
    Vc3InterlaceModeFromString on String
    Vc3InterlaceModeValueExtension on Vc3InterlaceMode
    Vc3SlowPalFromString on String
    Vc3SlowPalValueExtension on Vc3SlowPal
    Vc3TelecineFromString on String
    Vc3TelecineValueExtension on Vc3Telecine
    VideoCodecFromString on String
    VideoCodecValueExtension on VideoCodec
    VideoTimecodeInsertionFromString on String
    VideoTimecodeInsertionValueExtension on VideoTimecodeInsertion
    Vp8FramerateControlFromString on String
    Vp8FramerateControlValueExtension on Vp8FramerateControl
    Vp8FramerateConversionAlgorithmFromString on String
    Vp8FramerateConversionAlgorithmValueExtension on Vp8FramerateConversionAlgorithm
    Vp8ParControlFromString on String
    Vp8ParControlValueExtension on Vp8ParControl
    Vp8QualityTuningLevelFromString on String
    Vp8QualityTuningLevelValueExtension on Vp8QualityTuningLevel
    Vp8RateControlModeFromString on String
    Vp8RateControlModeValueExtension on Vp8RateControlMode
    Vp9FramerateControlFromString on String
    Vp9FramerateControlValueExtension on Vp9FramerateControl
    Vp9FramerateConversionAlgorithmFromString on String
    Vp9FramerateConversionAlgorithmValueExtension on Vp9FramerateConversionAlgorithm
    Vp9ParControlFromString on String
    Vp9ParControlValueExtension on Vp9ParControl
    Vp9QualityTuningLevelFromString on String
    Vp9QualityTuningLevelValueExtension on Vp9QualityTuningLevel
    Vp9RateControlModeFromString on String
    Vp9RateControlModeValueExtension on Vp9RateControlMode
    WatermarkingStrengthFromString on String
    WatermarkingStrengthValueExtension on WatermarkingStrength
    WavFormatFromString on String
    WavFormatValueExtension on WavFormat
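    Every enum in this library gets the same pair of generated helpers: a FromString extension on String and a ValueExtension on the enum, used to convert between the Dart enum and its wire value. The self-contained stand-in below shows the shape of that pattern for Order; the member names toValue and toOrder follow the generated naming convention but are assumptions, not copied from this listing.

        // Stand-in for the generated Order enum and its two extensions, showing the
        // shape of the FromString / ValueExtension pattern listed above. The real
        // generated members may differ; toValue and toOrder are assumed names.
        enum Order { ascending, descending }

        extension OrderValueExtension on Order {
          // Enum -> wire string.
          String toValue() => switch (this) {
                Order.ascending => 'ASCENDING',
                Order.descending => 'DESCENDING',
              };
        }

        extension OrderFromString on String {
          // Wire string -> enum.
          Order toOrder() => switch (this) {
                'ASCENDING' => Order.ascending,
                'DESCENDING' => Order.descending,
                _ => throw ArgumentError('Unknown Order value: $this'),
              };
        }

        void main() {
          print(Order.ascending.toValue()); // ASCENDING
          print('DESCENDING'.toOrder()); // Order.descending
        }

    The same shape applies to every other pair in the list, from AacAudioDescriptionBroadcasterMix through WavFormat.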