Can any random Java Server act as a producer in Apache Kafka?


I have a Java server on localhost:2001 that serves a frequently updated JSON file. To preserve the changing values of the JSON historically, I want to set up Apache Kafka and use its event-streaming capabilities.

Is this even the right approach to take here? All I want is a table of the JSON's changed values over time, and Kafka seems like a bit of overkill for that.

Anyway, my actual question while setting up the infrastructure is whether any server can be used as a producer like this:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    // Points at my Java server -- can it serve as the bootstrap server?
    props.put("bootstrap.servers", "localhost:2001");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    Producer<String, String> producer = new KafkaProducer<>(props);

    // Send a single test message to the "test" topic
    ProducerRecord<String, String> record = new ProducerRecord<>("test", "hello world");

    producer.send(record);
    producer.close();

Here, localhost:2001 is a Java server holding some JSON. I could also fetch the JSON from the server with a GET request via Submodel.getSubmodelElements() and send that instead:

    // A topic name is required, and the value must be a String for the StringSerializer
    ProducerRecord<String, String> record =
            new ProducerRecord<>("test", Submodel.getSubmodelElements().toString());
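
For completeness, here is a rough sketch of the polling loop I have in mind, reusing the props from above. fetchJson() is a hypothetical helper standing in for the GET request, and the topic name and poll interval are just placeholders:

    try (Producer<String, String> producer = new KafkaProducer<>(props)) {
        while (true) {
            // fetchJson() stands in for the GET request against localhost:2001
            String json = fetchJson();
            // Key each snapshot by its timestamp so every change is kept as its own event
            producer.send(new ProducerRecord<>("json-history",
                    Long.toString(System.currentTimeMillis()), json));
            Thread.sleep(5_000); // poll interval, arbitrary example value
        }
    }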

Is this the correct architectural approach to saving time-series data in a database, or am I on the wrong path? Any pointers are appreciated!

CodePudding user response:

I think there's some confusion in this question.

a) Any Java application can act as a Kafka producer/consumer; all you need to do is add the kafka-clients dependency and set up Producer / Consumer objects (your example code is right in that respect).
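
For reference, the dependency looks like this in a Maven build (the version below is only an example; pick a current one):

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>3.4.0</version> <!-- example version -->
    </dependency>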

b) You need a Kafka cluster running somewhere, e.g. on your machine or in the cloud (e.g. Amazon MSK), to actually receive messages from producers, store them, and serve them to consumers. Your Java server on localhost:2001 is not such a cluster, so bootstrap.servers has to point at an actual Kafka broker rather than at your application.
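
Concretely, the only change your snippet needs on this point is the bootstrap address, assuming a local broker on Kafka's default port:

    Properties props = new Properties();
    // Address of the Kafka broker (default port 9092), not of your Java server
    props.put("bootstrap.servers", "localhost:9092");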

c) Coming back to the original point: Kafka as an "audit log" of changes to some object (your JSON) is a reasonable approach. You just need to think about how to expose those changes. Do you want to serve the whole history (then consumers can simply read the whole topic), or just the latest version (then consider a small front service doing the aggregation, or read about Kafka log compaction)?
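
To illustrate the compaction option: on a topic configured with cleanup.policy=compact, records that share a key are eventually reduced to the most recent one, whereas unique keys on a normal topic retain the full history. A sketch, with made-up topic names and json standing for your fetched document:

    // Compacted topic: a constant key means only the latest snapshot survives
    ProducerRecord<String, String> latestOnly =
            new ProducerRecord<>("json-latest", "my-json-document", json);

    // Normal topic: unique keys (here, a timestamp) keep every snapshot
    ProducerRecord<String, String> fullHistory =
            new ProducerRecord<>("json-history",
                    Long.toString(System.currentTimeMillis()), json);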

d) Whether it's overkill unfortunately depends on the use case. If you'd be paying the operational cost of a Kafka cluster for just this single "file", then it might well be.
