S3Settings class
Settings for exporting data to Amazon S3.
Constructors
- S3Settings({String? bucketFolder, String? bucketName, bool? cdcInsertsAndUpdates, bool? cdcInsertsOnly, String? cdcPath, CompressionTypeValue? compressionType, String? csvDelimiter, String? csvNoSupValue, String? csvRowDelimiter, DataFormatValue? dataFormat, int? dataPageSize, DatePartitionDelimiterValue? datePartitionDelimiter, bool? datePartitionEnabled, DatePartitionSequenceValue? datePartitionSequence, int? dictPageSizeLimit, bool? enableStatistics, EncodingTypeValue? encodingType, EncryptionModeValue? encryptionMode, String? externalTableDefinition, bool? includeOpForFullLoad, bool? parquetTimestampInMillisecond, ParquetVersionValue? parquetVersion, bool? preserveTransactions, int? rowGroupLength, String? serverSideEncryptionKmsKeyId, String? serviceAccessRoleArn, String? timestampColumnName, bool? useCsvNoSupValue})
- S3Settings.fromJson(Map<String, dynamic> json)
-
factory
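As a usage sketch (not taken from the package documentation), the constructor can be called with only the settings you need; the bucket name, folder, and role ARN below are placeholder values, and the import of the generated library that defines S3Settings is assumed.

```dart
final settings = S3Settings(
  bucketName: 'my-dms-target-bucket', // placeholder bucket name
  bucketFolder: 'exports',            // tables land under exports/schema_name/table_name/
  serviceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-s3-access', // placeholder role ARN
  csvDelimiter: ',',                  // comma-separated columns (the default)
);
```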
Properties
- bucketFolder → String?
-
An optional parameter to set a folder name in the S3 bucket. If provided,
tables are created in the path
bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
final
- bucketName → String?
-
The name of the S3 bucket.
final
- cdcInsertsAndUpdates → bool?
-
A value that enables a change data capture (CDC) load to write INSERT and
UPDATE operations to .csv or .parquet (columnar storage) output files. The
default setting is
false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
final
- cdcInsertsOnly → bool?
-
A value that enables a change data capture (CDC) load to write only INSERT
operations to .csv or columnar storage (.parquet) output files. By default
(the
false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
final
- cdcPath → String?
-
Specifies the folder path of CDC files. For an S3 source, this setting is
required if a task captures change data; otherwise, it's optional. If
CdcPath is set, AWS DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, AWS DMS verifies that you have set this parameter to a folder path on your S3 target where AWS DMS can save the transaction order for the CDC load. AWS DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName.
final
- compressionType → CompressionTypeValue?
-
An optional parameter to compress the target files. Set this parameter to GZIP to compress the target files; either set it to NONE (the default) or leave it unset to keep the files uncompressed. This parameter applies to both .csv and .parquet file formats.
final
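A hedged sketch of a CDC-oriented target configuration combining the settings above; the gzip member name on CompressionTypeValue is an assumption about the generated enum, and the bucket and folder names are placeholders.

```dart
final cdcSettings = S3Settings(
  bucketName: 'my-dms-target-bucket',
  cdcPath: 'cdc',                             // folder where CDC files are read from or written to
  cdcInsertsOnly: true,                       // write only INSERT operations to the output files
  compressionType: CompressionTypeValue.gzip, // assumed enum member for GZIP compression
);
```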
- csvDelimiter → String?
-
The delimiter used to separate columns in the .csv file for both source and
target. The default is a comma.
final
- csvNoSupValue → String?
-
This setting only applies if your Amazon S3 output files during a change
data capture (CDC) load are written in .csv format. If
UseCsvNoSupValue is set to true, specify a string value that you want AWS DMS to use for all columns not included in the supplemental log. If you do not specify a string value, AWS DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting.
final
- csvRowDelimiter → String?
-
The delimiter used to separate rows in the .csv file for both source and
target. The default is a newline (\n).
final
- dataFormat → DataFormatValue?
-
The format of the data that you want to use for output. You can choose one
of the following:
final
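A sketch of tab-delimited .csv output using the format and delimiter settings above; the csv member name on DataFormatValue is an assumption about the generated enum.

```dart
final csvOutput = S3Settings(
  bucketName: 'my-dms-target-bucket',
  dataFormat: DataFormatValue.csv, // assumed enum member for .csv output
  csvDelimiter: '\t',              // tab-separated columns
  csvRowDelimiter: '\n',           // newline-separated rows
);
```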
- dataPageSize → int?
-
The size of one data page in bytes. This parameter defaults to 1024 * 1024
bytes (1 MiB). This number is used for .parquet file format only.
final
- datePartitionDelimiter → DatePartitionDelimiterValue?
-
Specifies a date separating delimiter to use during folder partitioning. The
default value is
SLASH. Use this parameter when DatePartitionEnabled is set to true.
final
- datePartitionEnabled → bool?
-
When set to
true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
final
- datePartitionSequence → DatePartitionSequenceValue?
-
Identifies the sequence of the date format to use during folder
partitioning. The default value is
YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
final
- dictPageSizeLimit → int?
-
The maximum size of an encoded dictionary page of a column. If the
dictionary page exceeds this, this column is stored using an encoding type
of
PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
final
- enableStatistics → bool?
-
A value that enables statistics for Parquet pages and row groups. Choose
true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.
final
- encodingType → EncodingTypeValue?
-
The type of encoding you are using:
final
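A sketch of date-based folder partitioning using the datePartition* settings described above; the yyyymmdd and slash member names are assumptions about the generated enums.

```dart
final partitioned = S3Settings(
  bucketName: 'my-dms-target-bucket',
  datePartitionEnabled: true,                                 // partition folders by transaction commit date
  datePartitionSequence: DatePartitionSequenceValue.yyyymmdd, // assumed member for YYYYMMDD
  datePartitionDelimiter: DatePartitionDelimiterValue.slash,  // assumed member for SLASH
);
```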
- encryptionMode → EncryptionModeValue?
-
The type of server-side encryption that you want to use for your data. This
encryption type is part of the endpoint settings or the extra connections
attributes for Amazon S3. You can choose either
SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
final
- externalTableDefinition → String?
-
Specifies how tables are defined in the S3 source files only.
final
- hashCode → int
-
The hash code for this object.
no setter, inherited
- includeOpForFullLoad → bool?
-
A value that enables a full load to write INSERT operations to the
comma-separated value (.csv) output files only to indicate how the rows were
added to the source database.
For full load, records can only be inserted. By default (the
false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
final
- parquetTimestampInMillisecond → bool?
-
A value that specifies the precision of any
TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format. When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
final
- parquetVersion → ParquetVersionValue?
-
The version of the Apache Parquet format that you want to use:
parquet_1_0 (the default) or parquet_2_0.
final
- preserveTransactions → bool?
-
If set to
true, AWS DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.
final
- rowGroupLength → int?
-
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
final
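A sketch that combines the Parquet-only tuning settings described above; the parquet and parquet_2_0 member names are assumptions about the generated enums.

```dart
final parquetTuned = S3Settings(
  bucketName: 'my-dms-target-bucket',
  dataFormat: DataFormatValue.parquet,             // assumed enum member for .parquet output
  parquetVersion: ParquetVersionValue.parquet_2_0, // assumed enum member
  dataPageSize: 2 * 1024 * 1024,                   // 2 MiB data pages (default is 1 MiB)
  dictPageSizeLimit: 2 * 1024 * 1024,              // dictionary pages above 2 MiB revert to PLAIN
  rowGroupLength: 20000,                           // rows per row group (default is 10,000)
  enableStatistics: true,                          // keep NULL, DISTINCT, MAX, and MIN statistics
);
```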
- runtimeType → Type
-
A representation of the runtime type of the object.
no setter, inherited
- serverSideEncryptionKmsKeyId → String?
-
If you are using
SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
final
- serviceAccessRoleArn → String?
-
The Amazon Resource Name (ARN) used by the service access IAM role. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
final
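A sketch of SSE-KMS server-side encryption together with the service access role; the sseKms member name is an assumption about the generated enum, and both ARNs are placeholders.

```dart
final encrypted = S3Settings(
  bucketName: 'my-dms-target-bucket',
  serviceAccessRoleArn: 'arn:aws:iam::123456789012:role/dms-s3-access', // placeholder role ARN
  encryptionMode: EncryptionModeValue.sseKms,                           // assumed enum member for SSE_KMS
  serverSideEncryptionKmsKeyId:
      'arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000', // placeholder key
);
```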
- timestampColumnName → String?
-
A value that when nonblank causes AWS DMS to add a column with timestamp
information to the endpoint data for an Amazon S3 target.
DMS includes an additional
STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
final
- useCsvNoSupValue → bool?
-
This setting applies if the S3 output files during a change data capture
(CDC) load are written in .csv format. If set to
true, then for columns not included in the supplemental log, AWS DMS uses the value specified by CsvNoSupValue. If not set or set to false, AWS DMS uses the null value for these columns.
final
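A sketch tying useCsvNoSupValue and csvNoSupValue together with a timestamp column and full-load annotations; the substitute string and column name are illustrative placeholders.

```dart
final cdcCsv = S3Settings(
  bucketName: 'my-dms-target-bucket',
  useCsvNoSupValue: true,               // substitute a fixed value for columns missing from the supplemental log
  csvNoSupValue: 'NOT_SUPPLIED',        // illustrative placeholder string
  timestampColumnName: 'dms_timestamp', // adds a STRING timestamp column to each record
  includeOpForFullLoad: true,           // annotate full-load rows with I in the first field
);
```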
Methods
- noSuchMethod(Invocation invocation) → dynamic
-
Invoked when a nonexistent method or property is accessed.
inherited
- toJson() → Map<String, dynamic>
- toString() → String
-
A string representation of this object.
inherited
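A sketch of serializing and rebuilding the settings with toJson and the fromJson factory; the bucket name is a placeholder.

```dart
final json = S3Settings(bucketName: 'my-dms-target-bucket').toJson(); // Map<String, dynamic> for the API shape
final restored = S3Settings.fromJson(json);                           // rebuild the object from that map
```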
Operators
- operator ==(Object other) → bool
-
The equality operator.
inherited