updateService method
Future<UpdateServiceResponse> updateService({
  required String service,
  List<CapacityProviderStrategyItem>? capacityProviderStrategy,
  String? cluster,
  DeploymentConfiguration? deploymentConfiguration,
  int? desiredCount,
  bool? forceNewDeployment,
  int? healthCheckGracePeriodSeconds,
  NetworkConfiguration? networkConfiguration,
  List<PlacementConstraint>? placementConstraints,
  List<PlacementStrategy>? placementStrategy,
  String? platformVersion,
  String? taskDefinition,
})
For services using the rolling update (ECS) deployment controller, the
desired count, deployment configuration, network configuration, task
placement constraints and strategies, or task definition used can be
updated.
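As a usage sketch for the rolling-update case (the import path, client construction, and all resource names below are assumptions for illustration, not part of this documentation):

```dart
// Illustrative sketch only: import path, client setup, and names are
// assumptions.
import 'package:aws_ecs_api/ecs-2014-11-13.dart';

Future<void> main() async {
  final ecs = ECS(region: 'us-east-1');

  // Move the service to a new task definition revision and scale it to
  // six tasks in a single UpdateService call.
  final response = await ecs.updateService(
    cluster: 'production',
    service: 'web',
    taskDefinition: 'web-app:42', // family:revision
    desiredCount: 6,
  );
  print(response.service?.status);
}
```

Because the deployment controller here is the rolling-update (ECS) one, the scheduler replaces old tasks with new ones according to the service's deployment configuration.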
For services using the blue/green (CODE_DEPLOY) deployment controller,
only the desired count, deployment configuration, task placement
constraints and strategies, and health check grace period can be updated
using this API. If the network configuration, platform version, or task
definition needs to be updated, a new AWS CodeDeploy deployment should be
created. For more information, see CreateDeployment in the AWS CodeDeploy
API Reference.
For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, and health check grace period using this API. If the launch type, load balancer, network configuration, platform version, or task definition needs to be updated, you should create a new task set. For more information, see CreateTaskSet.
You can add to or subtract from the number of instantiations of a task
definition in a service by specifying the cluster that the service is
running in and a new desiredCount parameter.
If you have updated the Docker image of your application, you can create a
new task definition with that image and deploy it to your service. The
service scheduler uses the minimum healthy percent and maximum percent
parameters (in the service's deployment configuration) to determine the
deployment strategy.
You can also update the deployment configuration of a service. When a
deployment is triggered by updating the task definition of a service, the
service scheduler uses the deployment configuration parameters,
minimumHealthyPercent and maximumPercent, to determine the deployment
strategy.
- If minimumHealthyPercent is below 100%, the scheduler can ignore desiredCount temporarily during a deployment. For example, if desiredCount is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they are in the RUNNING state and the container instance they are hosted on is reported as healthy by the load balancer.
- The maximumPercent parameter represents an upper limit on the number of running tasks during a deployment, which enables you to define the deployment batch size. For example, if desiredCount is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
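The arithmetic behind these bounds can be sketched with small helpers (illustrative only, not part of this package's API):

```dart
// Illustrative helpers: the task-count bounds the scheduler honors
// during a deployment. Not part of this package's API.
int minRunningTasks(int desiredCount, int minimumHealthyPercent) =>
    desiredCount * minimumHealthyPercent ~/ 100;

int maxRunningTasks(int desiredCount, int maximumPercent) =>
    desiredCount * maximumPercent ~/ 100;

void main() {
  // desiredCount of 4 with minimumHealthyPercent of 50: 2 tasks must
  // stay RUNNING, so up to 2 old tasks may stop before new ones start.
  print(minRunningTasks(4, 50)); // 2

  // maximumPercent of 200: up to 8 tasks may run at once, so 4 new
  // tasks can start before the 4 old ones are stopped.
  print(maxRunningTasks(4, 200)); // 8
}
```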
When UpdateService stops a task during a deployment, the equivalent of
docker stop is issued to the containers running in the task. This results
in a SIGTERM and a 30-second timeout, after which SIGKILL is sent and the
containers are forcibly stopped. If the container handles the SIGTERM
gracefully and exits within 30 seconds from receiving it, no SIGKILL is
sent.
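If the application inside the container is itself a Dart process, the SIGTERM can be caught so the task exits cleanly within the 30-second window. A minimal sketch, where the shutdown and application work are placeholders:

```dart
import 'dart:async';
import 'dart:io';

// Minimal sketch: exit cleanly on the SIGTERM that docker stop delivers,
// before the 30-second timeout triggers a SIGKILL.
Future<void> main() async {
  late final StreamSubscription<ProcessSignal> sub;
  sub = ProcessSignal.sigterm.watch().listen((_) async {
    stdout.writeln('SIGTERM received; draining work and shutting down');
    // Placeholder for real cleanup: close servers, flush logs, etc.
    await sub.cancel();
    exit(0);
  });

  // Placeholder for the long-running application work.
  await Future<void>.delayed(const Duration(days: 365));
}
```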
When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic:
- Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
- By default, the service scheduler attempts to balance tasks across Availability Zones in this manner (although you can choose a different placement strategy):
- Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
- Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:
- Sort the container instances by the largest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have two, container instances in either zone B or C are considered optimal for termination.
- Stop the task on a container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the largest number of running tasks for this service.
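The default balancing step for placement can be modeled with a small pure function (an illustrative model only, not how the scheduler is implemented):

```dart
// Illustrative model of the default balancing step: prefer a candidate
// container instance in the Availability Zone with the fewest running
// tasks for this service, breaking ties by the instance's own task count.
class Candidate {
  final String availabilityZone;
  final int runningTasks; // tasks for this service on the instance
  Candidate(this.availabilityZone, this.runningTasks);
}

Candidate pickPlacement(List<Candidate> valid, Map<String, int> tasksPerAz) {
  final sorted = [...valid]..sort((a, b) {
      final byZone = (tasksPerAz[a.availabilityZone] ?? 0)
          .compareTo(tasksPerAz[b.availabilityZone] ?? 0);
      return byZone != 0 ? byZone : a.runningTasks.compareTo(b.runningTasks);
    });
  return sorted.first;
}

void main() {
  // Zone a already runs one service task; zones b and c run none, so an
  // instance in b or c is optimal for placement.
  final pick = pickPlacement(
    [Candidate('a', 1), Candidate('b', 0), Candidate('c', 0)],
    {'a': 1, 'b': 0, 'c': 0},
  );
  print(pick.availabilityZone);
}
```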
May throw ServerException.
May throw ClientException.
May throw InvalidParameterException.
May throw ClusterNotFoundException.
May throw ServiceNotFoundException.
May throw ServiceNotActiveException.
May throw PlatformUnknownException.
May throw PlatformTaskDefinitionIncompatibilityException.
May throw AccessDeniedException.
Parameter service:
The name of the service to update.
Parameter capacityProviderStrategy:
The capacity provider strategy to update the service to use.
If the service is using the default capacity provider strategy for the cluster, the service can be updated to use one or more capacity providers as opposed to the default capacity provider strategy. However, when a service is using a capacity provider strategy that is not the default capacity provider strategy, the service cannot be updated to use the cluster's default capacity provider strategy.
A capacity provider strategy consists of one or more capacity providers
along with the base and weight to assign to them. A capacity provider
must be associated with the cluster to be used in a capacity provider
strategy. The PutClusterCapacityProviders API is used to associate a
capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING status can be used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE or
FARGATE_SPOT capacity providers. The AWS Fargate capacity providers are
available to all accounts and only need to be associated with a cluster
to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
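As a hedged sketch of this parameter (the client is assumed to be already constructed, and the field names on CapacityProviderStrategyItem follow the ECS API shape; all resource names are placeholders):

```dart
// Illustrative only: assumes an already-constructed ECS client from this
// package; all names are placeholders.
Future<void> adoptFargateStrategy(ECS ecs) async {
  await ecs.updateService(
    cluster: 'production',
    service: 'web',
    capacityProviderStrategy: [
      // Keep a base of two tasks on on-demand Fargate capacity...
      CapacityProviderStrategyItem(
          capacityProvider: 'FARGATE', base: 2, weight: 1),
      // ...and place additional tasks on Spot capacity three times as
      // often as on-demand.
      CapacityProviderStrategyItem(
          capacityProvider: 'FARGATE_SPOT', weight: 3),
    ],
  );
}
```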
Parameter cluster:
The short name or full Amazon Resource Name (ARN) of the cluster that your
service is running on. If you do not specify a cluster, the default
cluster is assumed.
Parameter deploymentConfiguration:
Optional deployment parameters that control how many tasks run during the
deployment and the ordering of stopping and starting tasks.
Parameter desiredCount:
The number of instantiations of the task to place and keep running in your
service.
Parameter forceNewDeployment:
Whether to force a new deployment of the service. Deployments are not
forced by default. You can use this option to trigger a new deployment
with no service definition changes. For example, you can update a
service's tasks to use a newer Docker image with the same image/tag
combination (my_image:latest) or to roll Fargate tasks onto a newer
platform version.
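A short sketch of that use case (client assumed to be already constructed; names are placeholders):

```dart
// Illustrative only: force a deployment with no definition changes,
// e.g. after pushing a rebuilt my_image:latest to the registry.
Future<void> redeploy(ECS ecs) async {
  await ecs.updateService(
    cluster: 'production',
    service: 'web',
    forceNewDeployment: true,
  );
}
```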
Parameter healthCheckGracePeriodSeconds:
The period of time, in seconds, that the Amazon ECS service scheduler
should ignore unhealthy Elastic Load Balancing target health checks after
a task has first started. This is only valid if your service is configured
to use a load balancer. If your service's tasks take a while to start and
respond to Elastic Load Balancing health checks, you can specify a health
check grace period of up to 2,147,483,647 seconds. During that time, the
Amazon ECS service scheduler ignores the Elastic Load Balancing health
check status. This grace period can prevent the ECS service scheduler from
marking tasks as unhealthy and stopping them before they have time to come
up.
Parameter placementConstraints:
An array of task placement constraint objects to update the service to
use. If no value is specified, the existing placement constraints for the
service will remain unchanged. If this value is specified, it will
override any existing placement constraints defined for the service. To
remove all existing placement constraints, specify an empty array.
You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).
Parameter placementStrategy:
The task placement strategy objects to update the service to use. If no
value is specified, the existing placement strategy for the service will
remain unchanged. If this value is specified, it will override the
existing placement strategy defined for the service. To remove an existing
placement strategy, specify an empty object.
You can specify a maximum of five strategy rules per service.
Parameter platformVersion:
The platform version on which your tasks in the service are running. A
platform version is only specified for tasks using the Fargate launch
type. If a platform version is not specified, the LATEST
platform version is used by default. For more information, see AWS
Fargate Platform Versions in the Amazon Elastic Container Service
Developer Guide.
Parameter taskDefinition:
The family and revision (family:revision) or full ARN of the task
definition to run in your service. If a revision is not specified, the
latest ACTIVE revision is used. If you modify the task definition with
UpdateService, Amazon ECS spawns a task with the new version of the task
definition and then stops an old task after the new version is running.
Implementation
Future<UpdateServiceResponse> updateService({
required String service,
List<CapacityProviderStrategyItem>? capacityProviderStrategy,
String? cluster,
DeploymentConfiguration? deploymentConfiguration,
int? desiredCount,
bool? forceNewDeployment,
int? healthCheckGracePeriodSeconds,
NetworkConfiguration? networkConfiguration,
List<PlacementConstraint>? placementConstraints,
List<PlacementStrategy>? placementStrategy,
String? platformVersion,
String? taskDefinition,
}) async {
ArgumentError.checkNotNull(service, 'service');
final headers = <String, String>{
'Content-Type': 'application/x-amz-json-1.1',
'X-Amz-Target': 'AmazonEC2ContainerServiceV20141113.UpdateService'
};
final jsonResponse = await _protocol.send(
method: 'POST',
requestUri: '/',
exceptionFnMap: _exceptionFns,
// TODO queryParams
headers: headers,
payload: {
'service': service,
if (capacityProviderStrategy != null)
'capacityProviderStrategy': capacityProviderStrategy,
if (cluster != null) 'cluster': cluster,
if (deploymentConfiguration != null)
'deploymentConfiguration': deploymentConfiguration,
if (desiredCount != null) 'desiredCount': desiredCount,
if (forceNewDeployment != null)
'forceNewDeployment': forceNewDeployment,
if (healthCheckGracePeriodSeconds != null)
'healthCheckGracePeriodSeconds': healthCheckGracePeriodSeconds,
if (networkConfiguration != null)
'networkConfiguration': networkConfiguration,
if (placementConstraints != null)
'placementConstraints': placementConstraints,
if (placementStrategy != null) 'placementStrategy': placementStrategy,
if (platformVersion != null) 'platformVersion': platformVersion,
if (taskDefinition != null) 'taskDefinition': taskDefinition,
},
);
return UpdateServiceResponse.fromJson(jsonResponse.body);
}