createBatchPrediction method
Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.

CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.

You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
May throw InvalidInputException. May throw InternalServerException. May throw IdempotentParameterMismatchException.
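A typical caller starts the job, handles the exceptions listed above, and then polls GetBatchPrediction until a terminal status is reached. The sketch below is a minimal example, not part of this package's documentation: the import path, the MachineLearning client type, and the EntityStatus enum values follow this package's generated conventions but are assumptions, and the IDs, bucket, and 30-second poll interval are hypothetical.

import 'package:aws_client/machine_learning_2014_12_12.dart';

Future<void> predictBatch(MachineLearning ml) async {
  const jobId = 'bp-example0001'; // hypothetical, user-supplied ID
  try {
    // Returns as soon as the job is accepted; status starts as PENDING.
    await ml.createBatchPrediction(
      batchPredictionId: jobId,
      batchPredictionDataSourceId: 'ds-example0001', // hypothetical
      mLModelId: 'ml-example0001', // hypothetical
      outputUri: 's3://examplebucket/batch-results/',
      batchPredictionName: 'Example batch prediction',
    );
  } on IdempotentParameterMismatchException {
    // The same batchPredictionId was already used with different parameters.
    rethrow;
  }

  // Poll until Amazon ML reports a terminal status.
  while (true) {
    final job = await ml.getBatchPrediction(batchPredictionId: jobId);
    if (job.status == EntityStatus.completed) {
      // Results are now available under outputUri.
      break;
    }
    if (job.status == EntityStatus.failed) {
      throw StateError('Batch prediction failed: ${job.message}');
    }
    await Future.delayed(const Duration(seconds: 30));
  }
}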
Parameter batchPredictionDataSourceId:
The ID of the DataSource that points to the group of observations to predict.

Parameter batchPredictionId:
A user-supplied ID that uniquely identifies the BatchPrediction.

Parameter mLModelId:
The ID of the MLModel that will generate predictions for the group of observations.

Parameter outputUri:
The location of an Amazon Simple Storage Service (Amazon S3) bucket or directory to store the batch prediction results. The following substrings are not allowed in the S3 key portion of the outputURI field: ':', '//', '/./', '/../'.
Amazon ML needs permissions to store and retrieve the logs on your behalf. For information about how to set permissions, see the Amazon Machine Learning Developer Guide.

Parameter batchPredictionName:
A user-supplied name or description of the BatchPrediction. BatchPredictionName can only use the UTF-8 character set.
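Because the disallowed substrings are rejected server-side, it can be convenient to check an output URI locally before calling the API. The following is a minimal sketch; isValidOutputUri is a made-up helper, not part of this package:

bool isValidOutputUri(String outputUri) {
  // Only the key portion (the path after the bucket) is constrained.
  final key = Uri.parse(outputUri).path;
  const forbidden = [':', '//', '/./', '/../'];
  return !forbidden.any(key.contains);
}

// isValidOutputUri('s3://examplebucket/results/')      => true
// isValidOutputUri('s3://examplebucket/a/../results/') => false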
Implementation
Future<CreateBatchPredictionOutput> createBatchPrediction({
required String batchPredictionDataSourceId,
required String batchPredictionId,
required String mLModelId,
required String outputUri,
String? batchPredictionName,
}) async {
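    // Client-side validation mirroring the API's constraints: the three IDs
    // must be 1-64 characters, outputUri at most 2,048 characters, and the
    // optional name at most 1,024.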
ArgumentError.checkNotNull(
batchPredictionDataSourceId, 'batchPredictionDataSourceId');
_s.validateStringLength(
'batchPredictionDataSourceId',
batchPredictionDataSourceId,
1,
64,
isRequired: true,
);
ArgumentError.checkNotNull(batchPredictionId, 'batchPredictionId');
_s.validateStringLength(
'batchPredictionId',
batchPredictionId,
1,
64,
isRequired: true,
);
ArgumentError.checkNotNull(mLModelId, 'mLModelId');
_s.validateStringLength(
'mLModelId',
mLModelId,
1,
64,
isRequired: true,
);
ArgumentError.checkNotNull(outputUri, 'outputUri');
_s.validateStringLength(
'outputUri',
outputUri,
0,
2048,
isRequired: true,
);
_s.validateStringLength(
'batchPredictionName',
batchPredictionName,
0,
1024,
);
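    // JSON 1.1 protocol headers; X-Amz-Target routes the request to the
    // CreateBatchPrediction action of the 2014-12-12 Amazon ML API.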
final headers = <String, String>{
'Content-Type': 'application/x-amz-json-1.1',
'X-Amz-Target': 'AmazonML_20141212.CreateBatchPrediction'
};
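    // POST the JSON payload to the service endpoint, mapping service errors
    // through _exceptionFns, and deserialize the response.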
final jsonResponse = await _protocol.send(
method: 'POST',
requestUri: '/',
exceptionFnMap: _exceptionFns,
// TODO queryParams
headers: headers,
payload: {
'BatchPredictionDataSourceId': batchPredictionDataSourceId,
'BatchPredictionId': batchPredictionId,
'MLModelId': mLModelId,
'OutputUri': outputUri,
if (batchPredictionName != null)
'BatchPredictionName': batchPredictionName,
},
);
return CreateBatchPredictionOutput.fromJson(jsonResponse.body);
}