events/cloud/dataflow/v1beta3 library

Classes

AutoscalingAlgorithm
Specifies the algorithm used to determine the number of worker processes to run at any given point in time, based on the amount of data left to process, the number of workers, and how quickly existing workers are processing data.
AutoscalingSettings
Settings for WorkerPool autoscaling.
BigQueryIODetails
Metadata for a BigQuery connector used by the job.
BigTableIODetails
Metadata for a Cloud Bigtable connector used by the job.
DatastoreIODetails
Metadata for a Datastore connector used by the job.
DebugOptions
Describes options that affect the debugging of pipelines.
DefaultPackageSet
The default set of packages to be staged on a pool of workers.
Environment
Describes the environment in which a Dataflow Job runs.
ExecutionStageState
A message describing the state of a particular execution stage.
FileIODetails
Metadata for a File connector used by the job.
FlexResourceSchedulingGoal
Specifies the resource to optimize for in Flexible Resource Scheduling.
Job
Defines a job to be run by the Cloud Dataflow service. Do not enter confidential information when you supply string values using the API. Some fields of the source Job proto are stripped from this event payload.
JobEventData
The data within all Job events.
JobExecutionInfo
Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job.
JobExecutionStageInfo
Contains information about how a particular google.dataflow.v1beta3.Step will be executed.
JobMetadata
Metadata available primarily for filtering jobs. Included in the ListJobs response and in the Job SUMMARY view.
JobState
Describes the overall state of a google.dataflow.v1beta3.Job.
JobStatusChangedEvent
The CloudEvent raised when a Job's status changes (a decoding sketch appears at the end of this page).
JobType
Specifies the processing model used by a google.dataflow.v1beta3.Job, which determines the way the Job is managed by the Cloud Dataflow service (how workers are scheduled, how inputs are sharded, etc.).
Package
A package that must be installed for a worker to run the steps of the Cloud Dataflow job assigned to its worker pool.
PubSubIODetails
Metadata for a Pub/Sub connector used by the job.
SdkHarnessContainerImage
Defines an SDK harness container for executing Dataflow pipelines.
SdkVersion
The version of the SDK used to run the job.
SdkVersion_SdkSupportStatus
The support status of the SDK used to run the job.
ShuffleMode
Specifies the shuffle mode used by a google.dataflow.v1beta3.Job, which determines how data is shuffled during processing. For details, see https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline#dataflow-shuffle
SpannerIODetails
Metadata for a Spanner connector used by the job.
TeardownPolicy
Specifies what happens to a resource when a Cloud Dataflow google.dataflow.v1beta3.Job has completed.
WorkerIPAddressConfiguration
Specifies how IP addresses should be allocated to the worker machines.
WorkerPool
Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service to perform the computations required by a job. Note that a job may use multiple pools to match the differing computational requirements of its stages (a worker-pool sketch follows at the end of this page).
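
A minimal sketch of consuming a JobStatusChangedEvent, in plain Dart. It assumes the CloudEvent's data decodes to a JSON object in which JobEventData wraps the Job under a "payload" key with camelCase field names, following the proto JSON encoding; the sample values are invented, the terminal-state set reflects the documented terminal JobState values, and real code would typically use this library's generated classes rather than raw maps.

import 'dart:convert';

void main() {
  // Illustrative JobStatusChangedEvent data (invented values). The
  // "payload" wrapper and camelCase keys are assumptions based on the
  // proto JSON encoding of JobEventData.
  const eventData = '''
  {
    "payload": {
      "id": "2024-01-01_00_00_00-1234567890",
      "name": "example-job",
      "type": "JOB_TYPE_STREAMING",
      "currentState": "JOB_STATE_RUNNING"
    }
  }
  ''';

  final job = (jsonDecode(eventData) as Map<String, dynamic>)['payload']
      as Map<String, dynamic>;

  // JobState values after which the job makes no further progress.
  const terminalStates = {
    'JOB_STATE_DONE',
    'JOB_STATE_FAILED',
    'JOB_STATE_CANCELLED',
    'JOB_STATE_UPDATED',
    'JOB_STATE_DRAINED',
  };

  final state = job['currentState'] as String;
  final verdict = terminalStates.contains(state) ? 'terminal' : 'active';
  print('Job ${job['name']} is $verdict in state $state');
}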
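
In the same spirit, a short sketch of walking a Job's worker pools, again over plain JSON maps. The environment/workerPools nesting mirrors the proto field names in their camelCase JSON form; the machine types and worker counts are made-up sample data.

import 'dart:convert';

void main() {
  // Illustrative Job fragment (invented values) showing the
  // Environment -> WorkerPool nesting described in the index above.
  const jobJson = '''
  {
    "name": "example-job",
    "environment": {
      "workerPools": [
        {"machineType": "n1-standard-4", "numWorkers": 3},
        {"machineType": "n1-highmem-8", "numWorkers": 1}
      ]
    }
  }
  ''';

  final job = jsonDecode(jobJson) as Map<String, dynamic>;
  final pools =
      (job['environment'] as Map<String, dynamic>)['workerPools'] as List;

  // A job may declare several pools, one per distinct resource need.
  for (final pool in pools.cast<Map<String, dynamic>>()) {
    print('${pool['machineType']}: ${pool['numWorkers']} worker(s)');
  }
}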