I'm using the latest version of the S3 Sink Connector (v10.0.5) and have enabled both store.kafka.keys and store.kafka.headers, but only value files are being created in the bucket. Here's a copy of our config:
connector.class=io.confluent.connect.s3.S3SinkConnector
format.class=io.confluent.connect.s3.format.avro.AvroFormat
errors.log.include.messages=true
s3.region=eu-west-1
flush.size=1
tasks.max=1
errors.log.enable=true
s3.bucket.name=sink-connector-backup-bcs
schema.compatibility=FULL
topics=onboardingStatus
store.kafka.keys=true
store.kafka.headers=true
keys.format.class=io.confluent.connect.s3.format.avro.AvroFormat
headers.format.class=io.confluent.connect.s3.format.avro.AvroFormat
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.enhanced.avro.schema.support=true
value.converter.schema.registry.url=http://.......eu-west-1.compute.internal:8083
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.enhanced.avro.schema.support=true
key.converter.schema.registry.url=http://.......eu-west-1.compute.internal:8083
name=s3-sink-onboardingStatus
s3.sse.kms.key.id=arn:aws:kms:eu-west-1:.......:key/......
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
storage.class=io.confluent.connect.s3.storage.S3Storage
auto.offset.reset=earliest
What am I missing?
CodePudding user response:
After some digging on our AWS instance, I discovered that we weren't actually running the latest version of the S3 Sink Connector. Updating to the latest version fixed it. I did notice what looks like a potential bug: if a message's key or header is empty (and you have enabled that output type), the sink connector fails on that record.
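In case it helps anyone else: you can check which connector plugin version a Connect worker is actually running through the Connect REST API, rather than trusting what you think you deployed. A minimal sketch, assuming the worker's REST endpoint is at localhost:8083 and jq is installed:

curl -s http://localhost:8083/connector-plugins | jq '.[] | select(.class | contains("S3SinkConnector"))'

The version field in the response is the plugin version the worker loaded from plugin.path; the worker has to be restarted before a newly dropped-in plugin version shows up here.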
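Once on a version that supports store.kafka.keys and store.kafka.headers, key and header files should appear alongside the value files. A quick way to confirm, assuming the bucket from the config above, the default topics.dir of "topics", and the AWS CLI configured with read access (the .keys.avro / .headers.avro suffixes are what I observed on our version; treat the exact naming as version-dependent):

aws s3 ls s3://sink-connector-backup-bcs/topics/onboardingStatus/ --recursive

With the DefaultPartitioner you'd expect objects under partition=<n>/ with names like onboardingStatus+0+0000000000.avro, plus matching .keys.avro and .headers.avro files once the feature is working.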