/transfer -c /path/to/config.yaml

Note: Keys here are written in dot notation for readability; please make sure to nest them properly when writing your configuration file. To see sample configuration files, visit the examples page.

mode (optional): Defaults to replication. Supported values are currently:
  • replication
  • history
outputSource (required): This is the destination. Supported values are currently:
  • snowflake
  • bigquery
  • s3
  • databricks
queue (optional): Defaults to kafka. Valid options are kafka and pubsub. Please check the respective sections below for what else is required.
reporting.sentry.dsn (optional): DSN for Sentry alerts. If blank, alerts will just go to stdout.
flushIntervalSeconds (optional): Defaults to 10. Valid range is between 5 seconds and 6 hours.
bufferRows (optional): Defaults to 15,000.
flushSizeKb (optional): Defaults to 25 MB.
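
Written as nested YAML, the top-level settings above might look like the following sketch. All values are placeholders; the source and destination blocks described in the sections below sit alongside these keys.

```yaml
# Minimal sketch of a top-level configuration; every value is a placeholder.
mode: replication            # or: history
outputSource: snowflake      # snowflake, bigquery, s3 or databricks
queue: kafka                 # or: pubsub
flushIntervalSeconds: 10     # valid range: 5 seconds to 6 hours
bufferRows: 15000
flushSizeKb: 25000
reporting:
  sentry:
    dsn: "https://examplePublicKey@o0.ingest.sentry.io/0"   # placeholder DSN
# Add a source block (kafka or pubsub) and a destination block (e.g. snowflake)
# as described in the Source and Destination sections below.
```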

Source

Kafka

kafka.bootstrapServer (required): Pass in the Kafka bootstrap server. For best practices, pass in a comma-separated list of bootstrap servers to maintain high availability. This follows the same spec as Kafka's bootstrap.servers setting.
kafka.groupID (required): This is the name of the Kafka consumer group. You can set this to whatever you'd like. Just remember that offsets are associated with a particular consumer group.
kafka.username (optional): If you'd like to use SASL/SCRAM auth, you can pass in the username and password.
kafka.password (optional): If you'd like to use SASL/SCRAM auth, you can pass in the username and password.
kafka.enableAWSMSKIAM (optional): Enable this if you would like to use IAM authentication to communicate with Amazon MSK. If enabled, please ensure AWS_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set.
kafka.disableTLS (optional): Enable this to disable TLS.
kafka.topicConfigs (required): Follows the same convention as *.topicConfigs below.
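
A Kafka source block built from the keys above might look like this sketch. The hosts, group name and credentials are placeholders.

```yaml
# Sketch of a Kafka source block; hosts, group name and credentials are placeholders.
kafka:
  bootstrapServer: "broker-1.example.com:9092,broker-2.example.com:9092"
  groupID: transfer-group
  username: my-sasl-user       # only needed for SASL/SCRAM
  password: my-sasl-password   # only needed for SASL/SCRAM
  # enableAWSMSKIAM: true      # alternative to SASL/SCRAM for Amazon MSK IAM auth
  # disableTLS: true           # only if you need to turn TLS off
  topicConfigs: []             # see Topic Configs below
```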

Google Pub/Sub

If you don't have access to Kafka, you can use Google Pub/Sub. However, we recommend Kafka for the best possible experience.

pubsub.projectID (required): This is your GCP Project ID; click here to see how you can find it.
pubsub.pathToCredentials (required): This is the path to the credentials for the service account to use. You can re-use the same credentials as BigQuery, or you can use a different service account to support use cases of cross-account transfers.
pubsub.topicConfigs (required): Follows the same convention as *.topicConfigs below.
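
A Pub/Sub source block might look like this sketch. The project ID and credentials path are placeholders.

```yaml
# Sketch of a Pub/Sub source block; project ID and credentials path are placeholders.
pubsub:
  projectID: my-gcp-project
  pathToCredentials: /path/to/pubsub-credentials.json
  topicConfigs: []             # see Topic Configs below
```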

Topic Configs

topicConfigs are used at the table level and store configurations like:

  • The destination's database, schema and table name.
  • What the data format looks like (which CDC connector is used).
  • Whether it should do row-based soft deletion or not.
  • Whether it should drop deleted columns or not.
*.topicConfigs[0].db (required): Name of the database in the destination.
*.topicConfigs[0].tableName (optional): Name of the table in the destination.
  • If not provided, the table name from the event will be used.
  • If provided, tableName acts as an override.
*.topicConfigs[0].schema (required): Name of the schema in Snowflake. Not needed for BigQuery.
*.topicConfigs[0].topic (required): Name of the Kafka topic.
*.topicConfigs[0].cdcFormat (required): Name of the CDC connector (and thus the format) we should expect to parse against. The supported values are:
  1. debezium.postgres
  2. debezium.mongodb
  3. debezium.mysql
*.topicConfigs[0].cdcKeyFormat (required): Format that Kafka Connect will use for the key. This is called key.converter in the Kafka Connect properties file. The supported values are:
  • org.apache.kafka.connect.storage.StringConverter
  • org.apache.kafka.connect.json.JsonConverter
If not provided, the default value will be org.apache.kafka.connect.storage.StringConverter.
*.topicConfigs[0].dropDeletedColumns (optional): Defaults to false. When set to true, Transfer will drop columns in the destination when it detects that the source has dropped them. This setting should be turned on if your organization follows standard practice around database migrations.
*.topicConfigs[0].softDelete (optional): Defaults to false. When set to true, Transfer will add an additional column called __artie_delete and set it to true instead of issuing a hard deletion.
*.topicConfigs[0].skippedOperations (optional): Comma-separated string of operations that Transfer should skip. Valid values are:
  • c (create)
  • r (replication or backfill)
  • u (update)
  • d (delete)
For example, c,d skips creates and deletes.
*.topicConfigs[0].includeArtieUpdatedAt (optional): Defaults to false. When set to true, Transfer will emit an additional timestamp column named __artie_updated_at which signifies when this row was processed.
*.topicConfigs[0].includeDatabaseUpdatedAt (optional): Defaults to false. When set to true, Transfer will emit an additional timestamp column called __artie_db_updated_at which signifies the source database time of when the row was processed.
*.topicConfigs[0].bigQueryPartitionSettings (optional): Enable this to turn on BigQuery table partitioning.
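
Putting the keys above together, a topicConfigs entry nested under the kafka block might look like this sketch. The database, schema, table and topic names are placeholders.

```yaml
# Sketch of a topicConfigs entry nested under the kafka block; names are placeholders.
kafka:
  # bootstrapServer, groupID, etc. as shown in the Kafka section above
  topicConfigs:
    - db: my_database
      tableName: my_table                 # optional override of the event's table name
      schema: public                      # Snowflake only
      topic: dbserver1.public.my_table    # placeholder Kafka topic name
      cdcFormat: debezium.postgres
      cdcKeyFormat: org.apache.kafka.connect.storage.StringConverter
      dropDeletedColumns: false
      softDelete: false
      skippedOperations: "c,d"            # hypothetical: skip creates and deletes
      includeArtieUpdatedAt: true
```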

Destination

BigQuery

bigquery.pathToCredentials (required): Path to the credentials file for Google. You can also set the GOOGLE_APPLICATION_CREDENTIALS environment variable directly; otherwise, Transfer will set it for you based on the value provided here.
bigquery.projectID (required): Google Cloud Project ID.
bigquery.location (optional): Location of the BigQuery dataset. Defaults to us.
bigquery.defaultDataset (required): The default dataset used. This just allows us to connect to BigQuery using a data source name (DSN).
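
A BigQuery destination block might look like this sketch. The credentials path, project ID and dataset name are placeholders.

```yaml
# Sketch of a BigQuery destination block; all values are placeholders.
bigquery:
  pathToCredentials: /path/to/bq-credentials.json
  projectID: my-gcp-project
  location: us                 # optional, defaults to us
  defaultDataset: my_dataset
```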

Databricks

databricks.host (required): Host URL, e.g. https://test-cluster.azuredatabricks.net
databricks.httpPath (required): HTTP path of the SQL warehouse.
databricks.port (optional): HTTP port of the SQL warehouse (defaults to 443).
databricks.catalog (required): Unity Catalog name.
databricks.personalAccessToken (required): Personal access token for Databricks.
databricks.volume (required): Volume name for Databricks. The volume must exist under the database and schema that you are replicating into.
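
A Databricks destination block might look like this sketch. The host reuses the example above; the HTTP path, catalog, token and volume are placeholders.

```yaml
# Sketch of a Databricks destination block; values other than the port are placeholders.
databricks:
  host: https://test-cluster.azuredatabricks.net
  httpPath: /sql/1.0/warehouses/abcdef1234567890   # placeholder SQL warehouse path
  port: 443                                        # optional, defaults to 443
  catalog: my_catalog
  personalAccessToken: my-databricks-token
  volume: my_volume
```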

Microsoft SQL Server

mssql.host (required): Database host for your SQL Server instance.
mssql.port (required): Database port.
mssql.database (required): Name of the database.
mssql.username (required): Database username.
mssql.password (required): Database password.
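
A Microsoft SQL Server destination block might look like this sketch. All values, including the port, are placeholders.

```yaml
# Sketch of a Microsoft SQL Server destination block; all values are placeholders.
mssql:
  host: mssql.example.com
  port: 1433
  database: my_database
  username: my_user
  password: my_password
```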

S3

s3.bucket (String, required): S3 bucket name. Example: artie-transfer.
s3.folderName (String, optional): Optional folder name within the bucket. If this is specified, Artie Transfer will save the files under s3://artie-transfer/folderName/...
s3.awsAccessKeyID (String, required): The AWS_ACCESS_KEY_ID for the service account.
s3.awsSecretAccessKey (String, required): The AWS_SECRET_ACCESS_KEY for the service account.
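
An S3 destination block might look like this sketch. The bucket name reuses the artie-transfer example above; the folder name and AWS credentials are placeholders.

```yaml
# Sketch of an S3 destination block; folder name and credentials are placeholders.
s3:
  bucket: artie-transfer
  folderName: transfer-output          # optional
  awsAccessKeyID: my-access-key-id
  awsSecretAccessKey: my-secret-access-key
```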

Snowflake

snowflake.account (required): Account identifier.
snowflake.username (required): Snowflake username.
snowflake.password (required): Snowflake password.
snowflake.warehouse (required): Virtual warehouse name.
snowflake.region (required): Snowflake region.
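
A Snowflake destination block might look like this sketch. All values are placeholders.

```yaml
# Sketch of a Snowflake destination block; all values are placeholders.
snowflake:
  account: ab12345.us-east-1
  username: my_user
  password: my_password
  warehouse: my_warehouse
  region: us-east-1
```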

Redshift

redshift.host (required): Host URL, e.g. test-cluster.us-east-1.redshift.amazonaws.com
redshift.port (required): Database port.
redshift.database (required): Namespace / database in Redshift.
redshift.username (required): Database username.
redshift.password (required): Database password.
redshift.bucket (required): Bucket where staging files will be stored. Click here to see how to set up an S3 bucket and have it automatically purged based on expiration.
redshift.optionalS3Prefix (optional): The prefix within the S3 bucket. For example, if the bucket is foo and the prefix is bar, files are stored at s3://foo/bar/file.txt.
redshift.credentialsClause (required): Redshift credentials clause to store staging files into S3.
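
A Redshift destination block might look like this sketch. The host reuses the example above; everything else is a placeholder, and the credentialsClause shown is only an assumed example, so use whatever clause matches your Redshift and S3 setup.

```yaml
# Sketch of a Redshift destination block; all values are placeholders.
redshift:
  host: test-cluster.us-east-1.redshift.amazonaws.com
  port: 5439
  database: my_database
  username: my_user
  password: my_password
  bucket: my-staging-bucket
  optionalS3Prefix: transfer-staging
  # Assumed example only; use the credentials clause that matches your setup.
  credentialsClause: "IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'"
```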

Telemetry

Overview of Telemetry can be found here.

telemetry.metrics (Object, optional): Parent object. See below.
telemetry.metrics.provider (String, optional): Provider to export metrics to. Transfer currently only supports: datadog.
telemetry.metrics.settings (Object, optional): Additional settings block. See below.
telemetry.metrics.settings.tags (Array, optional): Tags that will appear on every metric, e.g. env:production, company:foo.
telemetry.metrics.settings.addr (String, optional): Address where the StatsD agent is running. Defaults to 127.0.0.1:8125 if none is provided.
telemetry.metrics.settings.sampling (Number, optional): Percentage of data to send. Provide a number between 0 and 1. Defaults to 1 if none is provided. Refer to this for additional information.
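
A telemetry block using the defaults and example tags above might look like this sketch.

```yaml
# Sketch of a telemetry block; tags are examples, addr and sampling show the defaults.
telemetry:
  metrics:
    provider: datadog
    settings:
      tags:
        - env:production
        - company:foo
      addr: 127.0.0.1:8125   # default
      sampling: 1            # default
```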