GoogleCloudMlV1TrainingInput class
Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.
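As a rough illustration, the snippet below builds a GoogleCloudMlV1TrainingInput using only fields documented on this page. It is a minimal sketch, assuming the class is exposed through the googleapis package's ml/v1 library; the import path, bucket paths, and module name are placeholders, and job submission itself is omitted.

```dart
// Minimal sketch, assuming the AI Platform Training API bindings are exposed
// as package:googleapis/ml/v1.dart (the path may differ between versions).
import 'package:googleapis/ml/v1.dart';

GoogleCloudMlV1TrainingInput buildTrainingInput() {
  return GoogleCloudMlV1TrainingInput()
    // Required fields per this page.
    ..packageUris = ['gs://my-bucket/trainer-0.1.tar.gz'] // hypothetical package
    ..pythonModule = 'trainer.task'                       // hypothetical module
    ..region = 'us-central1'
    ..scaleTier = 'BASIC'
    // Optional fields; pythonVersion '3.7' requires runtimeVersion 1.15+.
    ..runtimeVersion = '2.1'
    ..pythonVersion = '3.7'
    ..jobDir = 'gs://my-bucket/output'                    // hypothetical job dir
    ..args = ['--epochs', '10'];
}

void main() {
  // toJson() (listed under Methods) yields the map sent as the job's
  // trainingInput in a projects.jobs.create request body.
  print(buildTrainingInput().toJson());
}
```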
Constructors
Properties
- args ↔ List<String>
-
Optional. Command-line arguments passed to the training application when
it starts. If your job uses a custom container, then the arguments are
passed to the container's ENTRYPOINT command.
read / write
- encryptionConfig ↔ GoogleCloudMlV1EncryptionConfig
-
Optional. Options for using customer-managed encryption keys (CMEK) to
protect resources created by a training job, instead of using Google's
default encryption. If this is set, then all resources created by the
training job will be encrypted with the customer-managed encryption key
that you specify. Learn how and when to use CMEK with AI Platform
Training.
read / write
- evaluatorConfig ↔ GoogleCloudMlV1ReplicaConfig
-
Optional. The configuration for evaluators. You should only set
evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute
Engine machine type. Learn about restrictions on accelerator
configurations for training. Set evaluatorConfig.imageUri only if you
build a custom image for your evaluator. If evaluatorConfig.imageUri has
not been set, AI Platform uses the value of masterConfig.imageUri. Learn
more about configuring custom containers.
read / write
- evaluatorCount ↔ String
-
Optional. The number of evaluator replicas to use for the training job.
Each replica in the cluster will be of the type specified in
evaluator_type. This value can only be used when scale_tier is set to
CUSTOM. If you set this value, you must also set evaluator_type. The
default value is zero.
read / write
- evaluatorType ↔ String
-
Optional. Specifies the type of virtual machine to use for your training
job's evaluator nodes. The supported values are the same as those
described in the entry for masterType. This value must be consistent with
the category of machine type that masterType uses. In other words, both
must be Compute Engine machine types or both must be legacy machine
types. This value must be present when scaleTier is set to CUSTOM and
evaluatorCount is greater than zero.
read / write
- hashCode → int
-
The hash code for this object. [...]
read-only, inherited
- hyperparameters ↔ GoogleCloudMlV1HyperparameterSpec
-
Optional. The set of Hyperparameters to tune.
read / write
- jobDir ↔ String
-
Optional. A Google Cloud Storage path in which to store training outputs
and other data needed for training. This path is passed to your TensorFlow
program as the '--job-dir' command-line argument. The benefit of
specifying this field is that Cloud ML validates the path for use in
training.
read / write
- masterConfig ↔ GoogleCloudMlV1ReplicaConfig
-
Optional. The configuration for your master worker. You should only set
masterConfig.acceleratorConfig if masterType is set to a Compute Engine
machine type. Learn about restrictions on accelerator configurations for
training. Set masterConfig.imageUri only if you build a custom image.
Only one of masterConfig.imageUri and runtimeVersion should be set. Learn
more about configuring custom containers.
read / write
- masterType ↔ String
-
Optional. Specifies the type of virtual machine to use for your training
job's master worker. You must specify this field when scaleTier is set to
CUSTOM. You can use certain Compute Engine machine types directly in this
field. The following types are supported:
- n1-standard-4
- n1-standard-8
- n1-standard-16
- n1-standard-32
- n1-standard-64
- n1-standard-96
- n1-highmem-2
- n1-highmem-4
- n1-highmem-8
- n1-highmem-16
- n1-highmem-32
- n1-highmem-64
- n1-highmem-96
- n1-highcpu-16
- n1-highcpu-32
- n1-highcpu-64
- n1-highcpu-96
Learn more about using Compute Engine machine types. Alternatively, you
can use the following legacy machine types:
- standard
- large_model
- complex_model_s
- complex_model_m
- complex_model_l
- standard_gpu
- complex_model_m_gpu
- complex_model_l_gpu
- standard_p100
- complex_model_m_p100
- standard_v100
- large_model_v100
- complex_model_m_v100
- complex_model_l_v100
Learn more about using legacy machine types. Finally, if you want to use
a TPU for training, specify cloud_tpu in this field. Learn more about the
special configuration options for training with TPUs. A CUSTOM scale tier
configuration is sketched after this property list.
read / write
- network ↔ String
-
Optional. The full name of the Compute Engine network to which the Job is
peered. For example, projects/12345/global/networks/myVPC. The format of
this field is projects/{project}/global/networks/{network}, where
{project} is a project number (like 12345) and {network} is a network
name. Private services access must already be configured for the network.
If left unspecified, the Job is not peered with any network. Learn about
using VPC Network Peering.
read / write
- packageUris ↔ List<String>
-
Required. The Google Cloud Storage location of the packages with the
training program and any additional dependencies. The maximum number of
package URIs is 100.
read / write
- parameterServerConfig ↔ GoogleCloudMlV1ReplicaConfig
-
Optional. The configuration for parameter servers. You should only set
parameterServerConfig.acceleratorConfig if parameterServerType is set to
a Compute Engine machine type. Learn about restrictions on accelerator
configurations for training. Set parameterServerConfig.imageUri only if
you build a custom image for your parameter server. If
parameterServerConfig.imageUri has not been set, AI Platform uses the
value of masterConfig.imageUri. Learn more about configuring custom
containers.
read / write
- parameterServerCount ↔ String
-
Optional. The number of parameter server replicas to use for the training
job. Each replica in the cluster will be of the type specified in
parameter_server_type. This value can only be used when scale_tier is set
to CUSTOM. If you set this value, you must also set parameter_server_type.
The default value is zero.
read / write
- parameterServerType ↔ String
-
Optional. Specifies the type of virtual machine to use for your training
job's parameter server. The supported values are the same as those
described in the entry for master_type. This value must be consistent
with the category of machine type that masterType uses. In other words,
both must be Compute Engine machine types or both must be legacy machine
types. This value must be present when scaleTier is set to CUSTOM and
parameter_server_count is greater than zero.
read / write
- pythonModule ↔ String
-
Required. The Python module name to run after installing the packages.
read / write
- pythonVersion ↔ String
-
Optional. The version of Python used in training. You must either specify
this field or specify masterConfig.imageUri. The following Python
versions are available:
* Python '3.7' is available when runtime_version is set to '1.15' or later.
* Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'.
* Python '2.7' is available when runtime_version is set to '1.15' or earlier.
Read more about the Python versions available for each runtime version.
read / write
- region ↔ String
-
Required. The region to run the training job in. See the available
regions for AI Platform Training.
read / write
- runtimeType → Type
-
A representation of the runtime type of the object.
read-only, inherited
- runtimeVersion ↔ String
-
Optional. The AI Platform runtime version to use for training. You must
either specify this field or specify masterConfig.imageUri. For more
information, see the runtime version list and learn how to manage runtime
versions.
read / write
- scaleTier ↔ String
-
Required. Specifies the machine types, the number of replicas for workers
and parameter servers.
Possible string values are: [...]
read / write
- scheduling ↔ GoogleCloudMlV1Scheduling
-
Optional. Scheduling options for a training job.
read / write
- serviceAccount ↔ String
-
Optional. The email address of a service account to use when running the
training application. You must have the iam.serviceAccounts.actAs
permission for the specified service account. In addition, the AI
Platform Training Google-managed service account must have the
roles/iam.serviceAccountAdmin role for the specified service account.
Learn more about configuring a service account. If not specified, the AI
Platform Training Google-managed service account is used by default.
read / write
- useChiefInTfConfig ↔ bool
-
Optional. Use chief instead of master in the TF_CONFIG environment
variable when training with a custom container. Defaults to false. Learn
more about this field. This field has no effect for training jobs that
don't use a custom container.
read / write
- workerConfig ↔ GoogleCloudMlV1ReplicaConfig
-
Optional. The configuration for workers. You should only set
workerConfig.acceleratorConfig if workerType is set to a Compute Engine
machine type. Learn about restrictions on accelerator configurations for
training. Set workerConfig.imageUri only if you build a custom image for
your worker. If workerConfig.imageUri has not been set, AI Platform uses
the value of masterConfig.imageUri. Learn more about configuring custom
containers.
read / write
- workerCount ↔ String
-
Optional. The number of worker replicas to use for the training job. Each
replica in the cluster will be of the type specified in worker_type. This
value can only be used when scale_tier is set to CUSTOM. If you set this
value, you must also set worker_type. The default value is zero.
read / write
- workerType ↔ String
-
Optional. Specifies the type of virtual machine to use for your training
job's worker nodes. The supported values are the same as those described
in the entry for masterType. This value must be consistent with the
category of machine type that masterType uses. In other words, both must
be Compute Engine machine types or both must be legacy machine types. If
you use cloud_tpu for this value, see special instructions for
configuring a custom TPU machine. This value must be present when
scaleTier is set to CUSTOM and workerCount is greater than zero.
read / write
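To make the CUSTOM scale tier constraints above concrete, here is a hedged sketch of a distributed configuration (same assumed import as the earlier example; machine types and counts are illustrative only). Note that the replica-count fields are String-typed in this binding because they are int64 values in the underlying API.

```dart
// Sketch of a CUSTOM scale tier TrainingInput; assumes the same
// package:googleapis/ml/v1.dart import as the earlier example.
final distributedInput = GoogleCloudMlV1TrainingInput()
  ..packageUris = ['gs://my-bucket/trainer-0.1.tar.gz'] // hypothetical package
  ..pythonModule = 'trainer.task'
  ..region = 'us-central1'
  ..runtimeVersion = '2.1'
  ..pythonVersion = '3.7'
  // CUSTOM requires masterType; workerType and parameterServerType must be in
  // the same machine-type category as masterType (all Compute Engine here).
  ..scaleTier = 'CUSTOM'
  ..masterType = 'n1-highmem-8'
  ..workerType = 'n1-highmem-8'
  ..parameterServerType = 'n1-standard-4'
  // Counts default to zero; setting a count requires the matching *Type field.
  ..workerCount = '4'
  ..parameterServerCount = '2';
```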
Methods
- noSuchMethod(Invocation invocation) → dynamic -
Invoked when a non-existent method or property is accessed. [...]
inherited
- toJson() → Map<String, Object>
- toString() → String -
Returns a string representation of this object.
inherited
Operators
- operator ==(Object other) → bool -
The equality operator. [...]
inherited