Using AWS Batch, you can run batch computing workloads on the AWS Cloud.
Batch computing is a common way for developers, scientists, and engineers
to access large amounts of compute resources. AWS Batch removes the
undifferentiated heavy lifting of configuring and managing the required
infrastructure, while preserving a familiar batch computing approach.
AWS Batch provisions resources in response to submitted jobs, which helps
eliminate capacity constraints, reduce compute costs, and deliver your
results more quickly.
The order in which compute environments are tried for job placement within a
queue. Compute environments are tried in ascending order of their order
value. For example, if two compute environments are associated with a job
queue, the compute environment with the lower order integer value is tried
for job placement first. Compute environments must be in the VALID state before
you can associate them with a job queue. All of the compute environments
must be either EC2 (EC2 or SPOT) or Fargate
(FARGATE or FARGATE_SPOT); EC2 and Fargate compute
environments can't be mixed.
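A minimal sketch of a job queue request payload illustrating the ordering rule above; the queue name and compute environment ARNs are hypothetical, and the payload mirrors the shape accepted by the AWS Batch CreateJobQueue API.

```python
# Hypothetical job queue payload with two compute environments.
job_queue = {
    "jobQueueName": "example-queue",  # hypothetical name
    "state": "ENABLED",
    "priority": 1,
    "computeEnvironmentOrder": [
        # Lower "order" values are tried first for job placement.
        {"order": 1,
         "computeEnvironment": "arn:aws:batch:us-east-1:123456789012:compute-environment/on-demand-ce"},
        {"order": 2,
         "computeEnvironment": "arn:aws:batch:us-east-1:123456789012:compute-environment/spot-ce"},
    ],
}

# Placement tries environments in ascending order of their "order" value.
tried = [e["computeEnvironment"]
         for e in sorted(job_queue["computeEnvironmentOrder"],
                         key=lambda e: e["order"])]
```

Here the on-demand environment (order 1) is tried before the Spot environment (order 2); both would need to be EC2-type environments, since EC2 and Fargate environments can't share a queue.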
Determines whether your data volume persists on the host container instance
and where it is stored. If this parameter is empty, the Docker daemon
assigns a host path for your data volume, but the data isn't guaranteed to
persist after the containers associated with it stop running.
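A sketch of a job definition's volumes list showing the two cases above; the volume name and host path are hypothetical.

```python
# Hypothetical "volumes" entries from a job definition's container properties.
volumes = [
    # With a sourcePath, data is stored at that path on the host container
    # instance and persists after the container stops.
    {"name": "scratch", "host": {"sourcePath": "/data/scratch"}},
    # With an empty host parameter, the Docker daemon assigns a host path,
    # and persistence after the container stops isn't guaranteed.
    {"name": "ephemeral", "host": {}},
]
```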
Details on a Docker volume mount point that's used in a job's container
properties. This parameter maps to Volumes in the Create
a container section of the Docker Remote API and the
--volume option to docker run.
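A sketch of a mount point entry, assuming a volume named "scratch" is defined in the same job definition's volumes list; the names and paths are hypothetical. This pairing of a named volume with a container path is what corresponds to the --volume option of docker run.

```python
# Hypothetical "mountPoints" entry from a job's container properties.
# sourceVolume must match the "name" of a volume defined in the job
# definition; containerPath is where it appears inside the container.
mount_points = [
    {"sourceVolume": "scratch",
     "containerPath": "/mnt/scratch",
     "readOnly": False},
]
```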