googleapis.remotebuildexecution.v2 library


An Action captures all the information about an execution which is required to reproduce it. Actions are the core component of the Execution service. A single Action represents a repeatable action that can be performed by the execution service. Actions can be succinctly identified by the digest of their wire format encoding and, once an Action has been executed, will be cached in the action cache. Future requests can then use the cached result rather than needing to run afresh. When a server completes execution of an Action, it MAY choose to cache the result in the ActionCache unless do_not_cache is true. Clients SHOULD expect the server to do so. By default, future calls to Execute the same Action will also serve their results from the cache. Clients must take care to understand the caching behaviour. Ideally, all Actions will be reproducible so that serving a result from cache is always desirable and correct.
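Since an Action is identified by the digest of its wire-format encoding, a client-side cache key can be derived directly from the encoded bytes. The sketch below is illustrative only: the hash algorithm is server-defined, and SHA-256 plus the `hash/size` key shape are assumptions here, not part of the API.

```python
import hashlib

def action_cache_key(action_bytes: bytes) -> str:
    """Derive a cache key from an Action's wire-format encoding.

    Combines the hash and the byte size, mirroring how the Digest
    message identifies blobs. SHA-256 is assumed for illustration;
    the real algorithm is defined by the server.
    """
    digest = hashlib.sha256(action_bytes).hexdigest()
    return f"{digest}/{len(action_bytes)}"

# Identical encodings map to the same key, which is why a repeated
# Execute call for the same Action can be served from the ActionCache.
assert action_cache_key(b"\x0a\x04test") == action_cache_key(b"\x0a\x04test")
```

This is why canonical serialization matters: two semantically equal Actions only share a cache entry if their encoded bytes are identical.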
Describes the server/instance capabilities for updating the action cache.
An ActionResult represents the result of an Action being run.
A request message for ContentAddressableStorage.BatchReadBlobs.
A response message for ContentAddressableStorage.BatchReadBlobs.
A response corresponding to a single blob that the client tried to download.
A request message for ContentAddressableStorage.BatchUpdateBlobs.
A request corresponding to a single blob that the client wants to upload.
A response message for ContentAddressableStorage.BatchUpdateBlobs.
A response corresponding to a single blob that the client tried to upload.
Capabilities of the remote cache system.
A Command is the actual command executed by a worker running an Action and specifications of its environment. Except as otherwise required, the environment (such as which system libraries or binaries are available, and what filesystems are mounted where) is defined by and specific to the implementation of the remote execution API.
An EnvironmentVariable is one variable to set in the running program's environment.
A content digest. A digest for a given blob consists of the size of the blob and its hash. The hash algorithm to use is defined by the server. The size is considered to be an integral part of the digest and cannot be separated. That is, even if the hash field is correctly specified but size_bytes is not, the server MUST reject the request. The reason for including the size in the digest is as follows: in a great many cases, the server needs to know the size of the blob it is about to work with prior to starting an operation with it, such as flattening Merkle tree structures or streaming it to a worker. Technically, the server could implement a separate metadata store, but this results in a significantly more complicated implementation as opposed to having the client specify the size up-front (or storing the size along with the digest in every message where digests are embedded). This does mean that the API leaks some implementation details of (what we consider to be) a reasonable server implementation, but we consider this to be a worthwhile tradeoff. When a Digest is used to refer to a proto message, it always refers to the message in binary encoded form. To ensure consistent hashing, clients and servers MUST ensure that they serialize messages according to the following rules, even if there are alternate valid encodings for the same message:
* Fields are serialized in tag order.
* There are no unknown fields.
* There are no duplicate fields.
* Fields are serialized according to the default semantics for their type.
Most protocol buffer implementations will always follow these rules when serializing, but care should be taken to avoid shortcuts. For instance, concatenating two messages to merge them may produce duplicate fields.
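The rule that size_bytes is an integral part of the digest can be made concrete with a small validation sketch. This is a hypothetical server-side check, assuming SHA-256 as the server-chosen hash; the point is that a correct hash with a wrong size is still an invalid digest.

```python
import hashlib

def validate_digest(blob: bytes, hash_hex: str, size_bytes: int) -> bool:
    """Check a blob against a client-supplied Digest.

    Both fields must match: per the spec, a request whose hash is
    correct but whose size_bytes is wrong MUST be rejected.
    """
    return (len(blob) == size_bytes
            and hashlib.sha256(blob).hexdigest() == hash_hex)

blob = b"hello"
h = hashlib.sha256(blob).hexdigest()
assert validate_digest(blob, h, 5)
assert not validate_digest(blob, h, 4)  # right hash, wrong size: reject
```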
A Directory represents a directory node in a file tree, containing zero or more children FileNodes, DirectoryNodes and SymlinkNodes. Each Node contains its name in the directory, either the digest of its content (either a file blob or a Directory proto) or a symlink target, as well as possibly some metadata about the file or directory. In order to ensure that two equivalent directory trees hash to the same value, the following restrictions MUST be obeyed when constructing a Directory:
* Every child in the directory must have a path of exactly one segment. Multiple levels of directory hierarchy may not be collapsed.
* Each child in the directory must have a unique path segment (file name). Note that while the API itself is case-sensitive, the environment where the Action is executed may or may not be case-sensitive. That is, it is legal to call the API with a Directory that has both "Foo" and "foo" as children, but the Action may be rejected by the remote system upon execution.
* The files, directories and symlinks in the directory must each be sorted in lexicographical order by path. The path strings must be sorted by code point, or equivalently, by UTF-8 bytes.
* The NodeProperties of files, directories, and symlinks must be sorted in lexicographical order by property name.
A Directory that obeys these restrictions is said to be in canonical form. As an example, the following could be used for a file named bar and a directory named foo with an executable file named baz (hashes shortened for readability):

    // (Directory proto)
    {
      files: [
        {
          name: "bar",
          digest: { hash: "4a73bc9d03...", size: 65534 },
          node_properties: [
            { "name": "MTime", "value": "2017-01-15T01:30:15.01Z" }
          ]
        }
      ],
      directories: [
        { name: "foo", digest: { hash: "4cf2eda940...", size: 43 } }
      ]
    }

    // (Directory proto with hash "4cf2eda940..." and size 43)
    {
      files: [
        {
          name: "baz",
          digest: { hash: "b2c941073e...", size: 1294 },
          is_executable: true
        }
      ]
    }
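The canonical-form sorting rule for Directory children can be sketched in a few lines. In Python 3, sorting str values by code point is the default comparison; the explicit UTF-8 key below makes the equivalence stated in the spec visible. This operates on bare names only, a simplification of the real message, which sorts FileNode/DirectoryNode/SymlinkNode lists.

```python
def canonicalize_children(names):
    """Sort directory child names into canonical order.

    The spec requires lexicographic order by code point, which is
    equivalent to sorting the names' UTF-8 byte encodings.
    """
    return sorted(names, key=lambda n: n.encode("utf-8"))

# Uppercase ASCII sorts before lowercase in code-point order, so both
# "Foo" and "foo" are legal siblings and "Foo" comes first.
assert canonicalize_children(["foo", "Foo", "baz", "bar"]) == \
    ["Foo", "bar", "baz", "foo"]
```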
A DirectoryNode represents a child of a Directory which is itself a Directory and its associated metadata.
ExecutedActionMetadata contains details about a completed execution.
Metadata about an ongoing execution, which will be contained in the metadata field of the Operation.
A request message for Execution.Execute.
The response message for Execution.Execute, which will be contained in the response field of the Operation.
Capabilities of the remote execution system.
An ExecutionPolicy can be used to control the scheduling of the action.
A FileNode represents a single file and associated metadata.
A request message for ContentAddressableStorage.FindMissingBlobs.
A response message for ContentAddressableStorage.FindMissingBlobs.
A response message for ContentAddressableStorage.GetTree.
A LogFile is a log stored in the CAS.
A single property for FileNodes, DirectoryNodes, and SymlinkNodes. The server is responsible for specifying the property names that it accepts. If permitted by the server, the same name may occur multiple times.
An OutputDirectory is the output in an ActionResult corresponding to a directory's full contents rather than a single file.
An OutputFile is similar to a FileNode, but it is used as an output in an ActionResult. It allows a full file path rather than only a name.
An OutputSymlink is similar to a SymlinkNode, but it is used as an output in an ActionResult. OutputSymlink is binary-compatible with SymlinkNode.
A Platform is a set of requirements, such as hardware, operating system, or compiler toolchain, for an Action's execution environment. A Platform is represented as a series of key-value pairs representing the properties that are required of the platform.
A single property for the environment. The server is responsible for specifying the property names that it accepts. If an unknown name is provided in the requirements for an Action, the server SHOULD reject the execution request. If permitted by the server, the same name may occur multiple times. The server is also responsible for specifying the interpretation of property values. For instance, a property describing how much RAM must be available may be interpreted as allowing a worker with 16GB to fulfill a request for 8GB, while a property describing the OS environment on which the action must be performed may require an exact match with the worker's OS. The server MAY use the value of one or more properties to determine how it sets up the execution environment, such as by making specific system files available to the worker.
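The server-defined interpretation of platform property values can be illustrated with a matching sketch. Everything here is hypothetical: the property name "min-ram-gb" and the lower-bound rule for it are assumptions standing in for whatever interpretation a real server specifies; other properties are treated as exact matches.

```python
def worker_satisfies(required: dict, worker: dict) -> bool:
    """Decide whether a worker's properties satisfy an Action's
    platform requirements.

    "min-ram-gb" (a made-up property for this sketch) is treated as a
    lower bound; every other property must match exactly.
    """
    for name, value in required.items():
        if name not in worker:
            return False
        if name == "min-ram-gb":
            if int(worker[name]) < int(value):
                return False
        elif worker[name] != value:
            return False
    return True

# A 16 GB worker can serve an 8 GB request, but the OS must match exactly.
assert worker_satisfies({"min-ram-gb": "8", "os": "linux"},
                        {"min-ram-gb": "16", "os": "linux"})
assert not worker_satisfies({"os": "linux"}, {"os": "windows"})
```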
Allowed values for priority in ResultsCachePolicy. Used for querying the valid priority ranges of both the cache and execution.
Supported range of priorities, including boundaries.
An optional Metadata to attach to any RPC request to tell the server about the external context of the request. The server may use this for logging or other purposes. To use it, the client attaches the header to the call using the canonical proto serialization:
* name: build.bazel.remote.execution.v2.requestmetadata-bin
* contents: the base64-encoded binary RequestMetadata message.
Note: the gRPC library serializes binary headers in base64 encoding by default. Therefore, if the gRPC library is used to pass/retrieve this metadata, the user may ignore the base64 encoding and assume it is simply serialized as a binary message.
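For a transport that does not handle "-bin" headers automatically, the base64 step described above looks like the following sketch. The header name comes from the spec; the helper function is hypothetical, and in practice a gRPC library applies this encoding itself for keys ending in "-bin".

```python
import base64

HEADER_NAME = "build.bazel.remote.execution.v2.requestmetadata-bin"

def encode_metadata_header(serialized_metadata: bytes) -> str:
    """Base64-encode a serialized RequestMetadata message so it can be
    attached manually as the value of the "-bin" header.

    Only needed when bypassing a gRPC library, which would otherwise
    perform this encoding transparently.
    """
    return base64.b64encode(serialized_metadata).decode("ascii")

header_value = encode_metadata_header(b"\x0a\x03foo")
assert base64.b64decode(header_value) == b"\x0a\x03foo"
```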
A ResultsCachePolicy is used for fine-grained control over how action outputs are stored in the CAS and Action Cache.
A response message for Capabilities.GetCapabilities.
A SymlinkNode represents a symbolic link.
Details for the tool used to call the API.
A Tree contains all the Directory protos in a single directory Merkle tree, compressed into one message.
A request message for WaitExecution.
The full version of a given tool.
CommandDuration contains the various duration metrics tracked when a bot performs a command.
CommandEvents contains counters for the number of warnings and errors that occurred during the execution of a command.
The internal status of the command result.
ResourceUsage is the system resource usage of the host machine.
AcceleratorConfig defines the accelerator cards to attach to the VM.
Autoscale defines the autoscaling policy of a worker pool.
The request used for CreateInstance.
The request used for CreateWorkerPool.
The request used for DeleteInstance.
The request used for DeleteWorkerPool.
FeaturePolicy defines features allowed to be used on RBE instances, as well as instance-wide behavior changes that take effect without opt-in or opt-out at usage time.
Defines whether a feature can be used or what values are accepted.
The request used for GetInstance.
The request used for GetWorkerPool.
Instance conceptually encapsulates all Remote Build Execution resources for remote builds. An instance consists of storage and compute resources (for example, ContentAddressableStorage, ActionCache, WorkerPools) used for running remote builds. All Remote Build Execution API calls are scoped to an instance.
SoleTenancyConfig specifies information required to host a pool on sole-tenant nodes (STNs).
The request used for UpdateInstance.
The request used for UpdateWorkerPool.
Defines the configuration to be used for creating workers in the worker pool.
A worker pool resource in the Remote Build Execution API.
AdminTemp is a preliminary set of administration tasks. It's called "Temp" because we do not yet know the best way to represent admin tasks; it's possible that this will be entirely replaced in later versions of this API. If this message proves to be sufficient, it will be renamed in the alpha or beta release of this API. This message (suitably marshalled into a protobuf.Any) can be used as the inline_assignment field in a lease; the lease assignment field should simply be "admin" in these cases. This message is heavily based on Swarming administration tasks from the LUCI project.
Describes a blob of binary content with its digest.
DEPRECATED - use CommandResult instead. Describes the actual outputs from the task.
DEPRECATED - use CommandResult instead. Can be used as part of CompleteRequest.metadata, or are part of a more sophisticated message.
All information about the execution of a command, suitable for providing as the Bots interface's Lease.result field.
Describes a shell-style task to execute, suitable for providing as the Bots interface's Lease.payload field.
Describes the inputs to a shell-style task.
An environment variable required by this task.
Describes the expected outputs of the command.
Describes the timeouts associated with this task.
The CommandTask and CommandResult messages assume the existence of a service that can serve blobs of content, identified by a hash and size known as a "digest." The method by which these blobs may be retrieved is not specified here, but a model implementation is in the Remote Execution API's "ContentAddressableStorage" interface. In the context of the RWAPI, a Digest will virtually always refer to the contents of a file or a directory. The latter is represented by the byte-encoded Directory message.
The contents of a directory. Similar to the equivalent message in the Remote Execution API.
The metadata for a directory. Similar to the equivalent message in the Remote Execution API.
The metadata for a file. Similar to the equivalent message in the Remote Execution API.
This resource represents a long-running operation that is the result of a network API call.
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Supplies a Remote Execution API service for tools such as bazel.


USER_AGENT → const String
'dart-api-client remotebuildexecution/v2'

Exceptions / Errors

Represents a general error reported by the API endpoint.
Represents a specific error reported by the API endpoint.