predictRoute property
HTTP path on the container to send prediction requests to.

Vertex AI forwards requests sent using projects.locations.endpoints.predict to
this path on the container's IP address and port. Vertex AI then returns the
container's response in the API response.

For example, if you set this field to `/foo`, then when Vertex AI receives a
prediction request, it forwards the request body in a POST request to the
`/foo` path on the port of your container specified by the first value of this
ModelContainerSpec's ports field.

If you don't specify this field, it defaults to the following value when you
deploy this Model to an Endpoint:
`/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict`

The placeholders in this value are replaced as follows:

- ENDPOINT: The last segment (following `endpoints/`) of the Endpoint.name
  field of the Endpoint where this Model has been deployed. (Vertex AI makes
  this value available to your container code as the [AIP_ENDPOINT_ID
  environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).)
- DEPLOYED_MODEL: DeployedModel.id of the DeployedModel. (Vertex AI makes this
  value available to your container code as the [AIP_DEPLOYED_MODEL_ID
  environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).)
Immutable.
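Because the two placeholder values are also exposed inside the container
through the AIP_ENDPOINT_ID and AIP_DEPLOYED_MODEL_ID environment variables,
the default route can be reconstructed at runtime. A minimal sketch, assuming
the prediction server itself happens to be written in Dart (the function name
is illustrative):

```dart
import 'dart:io';

/// Rebuilds the default predict route from the AIP_* environment variables
/// that Vertex AI injects into the container.
String defaultPredictRoute() {
  final endpointId = Platform.environment['AIP_ENDPOINT_ID'] ?? '';
  final deployedModelId = Platform.environment['AIP_DEPLOYED_MODEL_ID'] ?? '';
  return '/v1/endpoints/$endpointId/deployedModels/$deployedModelId:predict';
}
```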
Implementation
core.String? predictRoute;
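As a usage sketch, the field is typically set when constructing the container
spec for a Model. Only predictRoute and ports are documented on this page; the
import path and the imageUri field below are assumptions, relying on the
googleapis convention that generated classes accept every field as an optional
named constructor parameter:

```dart
// Assumption: adjust the import to the googleapis-generated library that
// defines this ModelContainerSpec.
import 'package:googleapis/aiplatform/v1.dart';

ModelContainerSpec buildContainerSpec() {
  return ModelContainerSpec(
    // imageUri is assumed here only to make the example self-contained.
    imageUri: 'us-docker.pkg.dev/my-project/my-repo/my-server:latest',
    // Vertex AI POSTs prediction request bodies to this path on the first
    // port listed in the ports field. Leave it unset to get the default
    // /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict route.
    predictRoute: '/foo',
  );
}
```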