I am new to Kafka and confused about event deletion. Kafka does not delete an event from a topic when it is consumed; it only deletes events after the retention period expires.
For example, I have a consumer and a producer application on the .NET Core platform. The producer puts the Order
data on the topic, and the consumer reads it and saves it to the database. But the event is not deleted from the topic. If the consumer application restarts, will it start from zero and duplicate the database records? How can we prevent this situation?
CodePudding user response:
Have a look at the documentation for offsets:
This offset acts as a unique identifier of a record within that partition, and also denotes the position of the consumer in the partition.
...
The committed position is the last offset that has been stored securely. Should the process fail and restart, this is the offset that the consumer will recover to.
Make sure that the consumer is committing its offsets; upon restart, it will resume from the last committed offset instead of re-consuming the topic from the beginning.
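For reference, here is a minimal sketch using the Confluent.Kafka client; the broker address, topic name (`orders`), and group id are assumptions for illustration. The key points are giving the consumer a `GroupId` (offsets are stored per group) and committing the offset only after the record has been saved:

```csharp
using System;
using Confluent.Kafka;

class OrderConsumer
{
    static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",        // assumed broker address
            GroupId = "order-consumer-group",           // offsets are tracked per consumer group
            AutoOffsetReset = AutoOffsetReset.Earliest, // only applies when no committed offset exists yet
            EnableAutoCommit = false                    // commit manually, after the record is persisted
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("orders"); // assumed topic name

        while (true)
        {
            var result = consumer.Consume();

            // SaveOrderToDatabase(result.Message.Value); // your persistence logic goes here

            // Commit only after the record is safely stored, so a restart
            // resumes from the last committed offset instead of starting from zero.
            consumer.Commit(result);
        }
    }
}
```

Note that committing after saving gives at-least-once delivery: if the process crashes between the save and the commit, that one record may be processed again, so it helps to make the database write idempotent (for example, keyed by the order id).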