Is it possible to have a single kafka stream for multiple queries in structured streaming?


I have a spark application that has to process multiple queries in parallel using a single Kafka topic as the source.

The behavior I noticed is that each query has its own consumer (in its own consumer group), causing the same data to be streamed to the application multiple times (please correct me if I'm wrong), which seems very inefficient. Instead, I would like to have a single stream of data that would then be processed in parallel by Spark.

What would be the recommended way to improve performance in the scenario above? Should I focus on optimizing Kafka partitions instead of how Spark interacts with Kafka?

Any thoughts are welcome. Thank you.

CodePudding user response:

The behavior I noticed is that each query has its own consumer (in its own consumer group), causing the same data to be streamed to the application multiple times (please correct me if I'm wrong), which seems very inefficient. Instead, I would like to have a single stream of data that would then be processed in parallel by Spark.

tl;dr Not possible in the current design.

A single streaming query "starts" from a sink, and there can be only one sink per streaming query (I keep repeating this to myself to remember it better, as I seem to have been caught out by it multiple times while working with Spark Structured Streaming, Kafka Streams and, recently, ksqlDB).

Once you have a sink (output), the streaming query can be started (on its own daemon thread).
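
To make that concrete, here is a minimal sketch (topic name, broker address, sinks and checkpoint locations are made up): even though both queries share the same readStream definition, each start() launches its own query with its own sink.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("multi-query-demo").getOrCreate()

// One logical Kafka source definition shared by both queries below.
val source = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .load()

// Each start() launches a separate streaming query with its own sink
// (and, under the hood, its own Kafka consumers / group ID).
val q1 = source.selectExpr("CAST(value AS STRING) AS value")
  .writeStream
  .format("console")
  .option("checkpointLocation", "/tmp/chk-q1")
  .start()

val q2 = source.selectExpr("CAST(key AS STRING) AS key")
  .writeStream
  .format("console")
  .option("checkpointLocation", "/tmp/chk-q2")
  .start()

spark.streams.awaitAnyTermination()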

For exactly the reason you mentioned (each query must see all the data rather than split it with the others, which the Kafka Consumer API guarantees only when the group.id differs), every streaming query creates a unique group ID (cf. this code and the comment in 3.3.0), so the same records can be transformed by different streaming queries:

// Each running query should use its own group id. Otherwise, the query may be only assigned
// partial data since Kafka will assign partitions to multiple consumers having the same group
// id. Hence, we should generate a unique id for each query.
val uniqueGroupId = KafkaSourceProvider.batchUniqueGroupId(sourceOptions)

And that makes sense IMHO.
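
If you want those generated group IDs to be easier to identify on the broker side (e.g. for monitoring or ACLs), the Kafka source exposes a groupIdPrefix option. A minimal sketch, with topic name and prefix made up:

// The generated group.id is still unique per query, but it will start with this
// prefix instead of the default "spark-kafka-source", which makes the per-query
// consumer groups easier to spot when inspecting the broker.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .option("groupIdPrefix", "my-app")
  .load()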

Should I focus on optimizing Kafka partitions instead of how Spark interacts with Kafka ?

Guess so.
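
Besides the number of partitions on the topic itself, two Kafka source options are worth a look when read throughput is the concern. A sketch with made-up topic name and values:

val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  // Upper bound on records read per micro-batch, to keep batch sizes predictable.
  .option("maxOffsetsPerTrigger", "100000")
  // Ask Spark to split Kafka partitions into at least this many Spark partitions
  // for extra read parallelism (useful when the topic has few partitions).
  .option("minPartitions", "24")
  .load()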

CodePudding user response:

You can split your source DataFrame into different stages, yes.

import spark.implicits._

val df = spark.readStream.format("kafka") ...
val strDf = df.select($"value".cast("string").as("value")) ...
val df1 = strDf.filter(...)  // in "parallel"
val df2 = strDf.filter(...)  // in "parallel"

Only the first line should create Kafka consumer instance(s); the other stages should not, as they depend on the consumer records produced by that first stage.
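
If the underlying goal is to consume the topic only once and still produce several outputs, foreachBatch is another route worth considering: it hands you every micro-batch as a regular DataFrame that you can write to multiple sinks from within a single streaming query (and hence a single consumer group). A rough sketch, with output paths and filter conditions made up:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// One streaming query, one generated consumer group; the fan-out happens per micro-batch.
val writeOutputs: (DataFrame, Long) => Unit = (batch, batchId) => {
  batch.persist()  // avoid recomputing (and re-reading Kafka) once per output
  batch.filter(col("value").contains("a")).write.mode("append").parquet("/tmp/out-a")
  batch.filter(col("value").contains("b")).write.mode("append").parquet("/tmp/out-b")
  batch.unpersist()
}

val query = strDf.writeStream
  .option("checkpointLocation", "/tmp/chk-fanout")
  .foreachBatch(writeOutputs)
  .start()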
