inference method

Future<GoogleApiHttpBody> inference(
  GoogleApiHttpBody request,
  String endpoint, {
  String? deployedModelId,
  String? $fields,
})

Forwards arbitrary HTTP requests for both streaming and non-streaming cases.

To use this method, invoke_route_prefix must be set to allow the paths that will be specified in the request.

request - The metadata request object.

Request parameters:

endpoint - Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint} Value must have pattern ^projects/[^/]+/locations/[^/]+/endpoints/google$.

deployedModelId - ID of the DeployedModel that serves the invoke request.

$fields - Selector specifying which fields to include in a partial response.

Completes with a GoogleApiHttpBody.

Completes with a commons.ApiRequestError if the API endpoint returned an error.

If the used http.Client completes with an error when making a REST call, this method will complete with the same error.
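For context, a hedged sketch of how this method might be invoked from client code. The service class name (`AiplatformApi`), the `projects.locations.endpoints` resource path, the credentials helper, and all project/endpoint/model values below are assumptions for illustration, not confirmed by this page:

```dart
import 'dart:convert';

import 'package:googleapis/aiplatform/v1.dart';
import 'package:googleapis_auth/auth_io.dart';

Future<void> main() async {
  // Obtain an authenticated client; this assumes Application Default
  // Credentials are configured in the environment.
  final client = await clientViaApplicationDefaultCredentials(
    scopes: [AiplatformApi.cloudPlatformScope],
  );
  try {
    final api = AiplatformApi(client);

    // GoogleApiHttpBody carries the raw request payload; its `data`
    // field is a base64-encoded string.
    final request = GoogleApiHttpBody(
      contentType: 'application/json',
      data: base64Encode(utf8.encode(jsonEncode({'input': 'hello'}))),
    );

    // Endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}.
    final response = await api.projects.locations.endpoints.inference(
      request,
      'projects/my-project/locations/us-central1/endpoints/google',
      deployedModelId: 'my-deployed-model', // optional
    );
    print(response.data);
  } finally {
    client.close();
  }
}
```

Note that the paths sent in the wrapped request must be permitted by the endpoint's invoke_route_prefix setting, as described above.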

Implementation

async.Future<GoogleApiHttpBody> inference(
  GoogleApiHttpBody request,
  core.String endpoint, {
  core.String? deployedModelId,
  core.String? $fields,
}) async {
  final body_ = convert.json.encode(request);
  final queryParams_ = <core.String, core.List<core.String>>{
    // Collection-if entries: omit a query parameter entirely when its
    // value is null, rather than inserting a null into the map.
    if (deployedModelId != null) 'deployedModelId': [deployedModelId],
    if ($fields != null) 'fields': [$fields],
  };

  final url_ =
      'v1/' + core.Uri.encodeFull('$endpoint') + '/science/inference';

  final response_ = await _requester.request(
    url_,
    'POST',
    body: body_,
    queryParams: queryParams_,
  );
  return GoogleApiHttpBody.fromJson(
    response_ as core.Map<core.String, core.dynamic>,
  );
}