synthesizeSpeech method
- required OutputFormat outputFormat,
- required String text,
- required VoiceId voiceId,
- Engine? engine,
- LanguageCode? languageCode,
- List<String>? lexiconNames,
- String? sampleRate,
- List<SpeechMarkType>? speechMarkTypes,
- TextType? textType,
Synthesizes UTF-8 input, plain text or SSML, to a stream of bytes. SSML input must be valid, well-formed SSML. Some alphabets might not be available with all the voices (for example, Cyrillic might not be read at all by English voices) unless phoneme mapping is used. For more information, see How it Works.
May throw TextLengthExceededException, InvalidSampleRateException, InvalidSsmlException, LexiconNotFoundException, ServiceFailureException, MarksNotSupportedForFormatException, SsmlMarksNotSupportedForTextTypeException, LanguageNotSupportedException, or EngineNotSupportedException.
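As a minimal calling sketch (not part of the generated docs), the snippet below synthesizes plain text to MP3 and writes the bytes to disk. It assumes a configured instance of this library's generated Polly client class and that the generated enum members are named as shown; adjust names to match your version of the package.
import 'dart:io';

// Assumes `polly` is an already-configured instance of the generated client.
Future<void> saveGreeting(Polly polly) async {
  final result = await polly.synthesizeSpeech(
    outputFormat: OutputFormat.mp3,
    text: 'Hello from Amazon Polly.',
    voiceId: VoiceId.joanna, // enum member name is an assumption
  );
  // audioStream holds the raw encoded audio returned by the service.
  await File('greeting.mp3').writeAsBytes(result.audioStream!);
}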
Parameter outputFormat:
The format in which the returned output will be encoded. For audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
When pcm is used, the content returned is audio/pcm in a signed 16-bit, 1 channel (mono), little-endian format.
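To illustrate the pcm layout just described, here is a hedged sketch that decodes the returned bytes as signed 16-bit little-endian mono samples. The Polly class name and enum member names are assumptions about the generated client; the byte layout follows the description above.
import 'dart:typed_data';

Future<List<int>> decodePcmSamples(Polly polly) async {
  final result = await polly.synthesizeSpeech(
    outputFormat: OutputFormat.pcm,
    sampleRate: '16000', // valid pcm rates are "8000" and "16000"
    text: 'Raw audio example.',
    voiceId: VoiceId.joanna,
  );
  final bytes = result.audioStream!;
  final view = ByteData.sublistView(bytes);
  // One channel, 2 bytes per sample, little-endian, as documented above.
  return List<int>.generate(
      bytes.length ~/ 2, (i) => view.getInt16(i * 2, Endian.little));
}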
Parameter text:
Input text to synthesize. If you specify ssml as the TextType, follow the SSML format for the input text.
Parameter voiceId:
Voice ID to use for the synthesis. You can get a list of available voice IDs by calling the DescribeVoices operation.
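A sketch of discovering valid voice IDs first, assuming this client also exposes a generated describeVoices method for the DescribeVoices operation and that the output fields are named as shown:
Future<void> printVoices(Polly polly) async {
  final output = await polly.describeVoices();
  for (final voice in output.voices ?? []) {
    // Field names on the generated Voice type are assumptions.
    print('${voice.id}: ${voice.languageCode}');
  }
}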
Parameter engine:
Specifies the engine (standard or neural) for Amazon Polly to use when processing input text for speech synthesis. For information on Amazon Polly voices and which voices are available in standard-only, NTTS-only, and both standard and NTTS formats, see Available Voices.
NTTS-only voices
When using NTTS-only voices such as Kevin (en-US), this parameter is required and must be set to neural. If the engine is not specified, or is set to standard, this will result in an error.
Type: String
Valid Values: standard | neural
Required: Yes
Standard voices
For standard voices, this is not required; the engine parameter defaults to standard. If the engine is not specified, or is set to standard and an NTTS-only voice is selected, this will result in an error.
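A sketch of the NTTS-only case described above, with the usual caveat that the Polly class and enum member names are assumptions about the generated code:
// Kevin (en-US) is NTTS-only, so the neural engine is requested explicitly;
// per the note above, omitting `engine` would result in an error.
Future<SynthesizeSpeechOutput> neuralExample(Polly polly) {
  return polly.synthesizeSpeech(
    engine: Engine.neural,
    outputFormat: OutputFormat.mp3,
    text: 'This voice requires the neural engine.',
    voiceId: VoiceId.kevin, // enum member name is an assumption
  );
}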
Parameter languageCode:
Optional language code for the Synthesize Speech request. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).
If a bilingual voice is used and no language code is specified, Amazon Polly will use the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.
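A sketch of pinning the language for the bilingual Aditi voice; the LanguageCode and VoiceId member names are assumptions about the generated enums:
// Without languageCode, Aditi would default to Indian English as described above.
Future<SynthesizeSpeechOutput> hindiExample(Polly polly) {
  return polly.synthesizeSpeech(
    languageCode: LanguageCode.hiIn, // assumed member name for hi-IN
    outputFormat: OutputFormat.mp3,
    text: 'नमस्ते',
    voiceId: VoiceId.aditi,
  );
}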
Parameter lexiconNames:
List of one or more pronunciation lexicon names you want the service to
apply during synthesis. Lexicons are applied only if the language of the
lexicon is the same as the language of the voice. For information about
storing lexicons, see PutLexicon.
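A sketch of applying a previously stored pronunciation lexicon by name; the lexicon name 'acronyms' is purely illustrative, and the Polly class and enum member names are assumptions:
Future<SynthesizeSpeechOutput> lexiconExample(Polly polly) {
  return polly.synthesizeSpeech(
    lexiconNames: ['acronyms'], // hypothetical lexicon stored via PutLexicon
    outputFormat: OutputFormat.mp3,
    text: 'W3C stands for the World Wide Web Consortium.',
    voiceId: VoiceId.joanna,
  );
}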
Parameter sampleRate:
The audio frequency specified in Hz.
The valid values for mp3 and ogg_vorbis are "8000", "16000", "22050", and "24000". The default value for standard voices is "22050". The default value for neural voices is "24000".
Valid values for pcm are "8000" and "16000". The default value is "16000".
Parameter speechMarkTypes:
The type of speech marks returned for the input text.
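When speech marks are requested, the returned stream contains newline-delimited JSON objects rather than audio. The sketch below parses word marks under that assumption; the mark field names reflect the service's documented speech-mark format, and the Polly class and enum member names are assumptions about the generated client.
import 'dart:convert';

Future<void> printWordMarks(Polly polly) async {
  final output = await polly.synthesizeSpeech(
    outputFormat: OutputFormat.json, // speech marks require the json format
    speechMarkTypes: [SpeechMarkType.word],
    text: 'Mark every word.',
    voiceId: VoiceId.joanna,
  );
  // Each line is one JSON object describing a single mark.
  final lines = utf8.decode(output.audioStream!).trim().split('\n');
  for (final line in lines) {
    final mark = jsonDecode(line) as Map<String, dynamic>;
    print('${mark['time']} ms: ${mark['value']}');
  }
}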
Parameter textType:
Specifies whether the input text is plain text or SSML. The default value is plain text. For more information, see Using SSML.
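A sketch of passing SSML and declaring it via textType; malformed SSML would raise InvalidSsmlException per the exception list above. Class and enum member names are assumptions about the generated client.
Future<SynthesizeSpeechOutput> ssmlExample(Polly polly) {
  return polly.synthesizeSpeech(
    outputFormat: OutputFormat.mp3,
    text: '<speak>Hello <break time="300ms"/> world.</speak>',
    textType: TextType.ssml,
    voiceId: VoiceId.joanna,
  );
}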
Implementation
Future<SynthesizeSpeechOutput> synthesizeSpeech({
  required OutputFormat outputFormat,
  required String text,
  required VoiceId voiceId,
  Engine? engine,
  LanguageCode? languageCode,
  List<String>? lexiconNames,
  String? sampleRate,
  List<SpeechMarkType>? speechMarkTypes,
  TextType? textType,
}) async {
  ArgumentError.checkNotNull(outputFormat, 'outputFormat');
  ArgumentError.checkNotNull(text, 'text');
  ArgumentError.checkNotNull(voiceId, 'voiceId');
  // Build the JSON request body; optional parameters are included only when set.
  final $payload = <String, dynamic>{
    'OutputFormat': outputFormat.toValue(),
    'Text': text,
    'VoiceId': voiceId.toValue(),
    if (engine != null) 'Engine': engine.toValue(),
    if (languageCode != null) 'LanguageCode': languageCode.toValue(),
    if (lexiconNames != null) 'LexiconNames': lexiconNames,
    if (sampleRate != null) 'SampleRate': sampleRate,
    if (speechMarkTypes != null)
      'SpeechMarkTypes': speechMarkTypes.map((e) => e.toValue()).toList(),
    if (textType != null) 'TextType': textType.toValue(),
  };
  // POST the payload to the SynthesizeSpeech endpoint and keep the raw response
  // so the audio (or speech-mark) bytes can be read from the body stream.
  final response = await _protocol.sendRaw(
    payload: $payload,
    method: 'POST',
    requestUri: '/v1/speech',
    exceptionFnMap: _exceptionFns,
  );
  // Collect the streamed body and pull metadata out of the response headers.
  return SynthesizeSpeechOutput(
    audioStream: await response.stream.toBytes(),
    contentType:
        _s.extractHeaderStringValue(response.headers, 'Content-Type'),
    requestCharacters: _s.extractHeaderIntValue(
        response.headers, 'x-amzn-RequestCharacters'),
  );
}