Kafka issue while connecting to ZooKeeper (kubernetes-kafka:1.0-10.2.1)


I followed this document to create Kafka on Kubernetes: https://kow3ns.github.io/kubernetes-kafka/manifests/

I was able to create ZooKeeper, but I am facing an issue with the creation of Kafka: it gets an error connecting to ZooKeeper.

These are the manifests I used:

For Kafka: https://kow3ns.github.io/kubernetes-kafka/manifests/kafka.yaml

For ZooKeeper: https://github.com/kow3ns/kubernetes-zookeeper/blob/master/manifests/zookeeper.yaml
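Roughly how I applied them (into the kaf namespace, with both YAML files downloaded locally):

    kubectl create namespace kaf
    kubectl apply -n kaf -f zookeeper.yaml
    kubectl apply -n kaf -f kafka.yaml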

These are the logs of the Kafka pod:

 kubectl logs -f pod/kafka-0 -n kaf
[2021-10-19 05:37:14,535] INFO KafkaConfig values:
        advertised.host.name = null
        advertised.listeners = null
        advertised.port = null
        authorizer.class.name =
        auto.create.topics.enable = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        compression.type = producer
        connections.max.idle.ms = 600000
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delete.topic.enable = false
        fetch.purgatory.purge.interval.requests = 1000
        group.max.session.timeout.ms = 300000
        group.min.session.timeout.ms = 6000
        host.name =
        inter.broker.listener.name = null
        inter.broker.protocol.version = 0.10.2-IV0
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
        listeners = PLAINTEXT://:9093
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /var/lib/kafka
        log.dirs = /tmp/kafka-logs
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.format.version = 0.10.2-IV0
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides =
        message.max.bytes = 1000012
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 1440
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 3
        offsets.topic.segment.bytes = 104857600
        port = 9092
        principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
        producer.purgatory.purge.interval.requests = 1000
        queued.max.requests = 500
        quota.consumer.default = 9223372036854775807
        quota.producer.default = 9223372036854775807
        quota.window.num = 11
        quota.window.size.seconds = 1
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 10000
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.enabled.mechanisms = [GSSAPI]
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.mechanism.inter.broker.protocol = GSSAPI
        security.inter.broker.protocol = PLAINTEXT
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = null
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        unclean.leader.election.enable = true
        zookeeper.connect = zk-cs.default.svc.cluster.local:2181
        zookeeper.connection.timeout.ms = 6000
        zookeeper.session.timeout.ms = 6000
        zookeeper.set.acl = false
        zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2021-10-19 05:37:14,569] INFO starting (kafka.server.KafkaServer)
[2021-10-19 05:37:14,570] INFO Connecting to zookeeper on zk-cs.default.svc.cluster.local:2181 (kafka.server.KafkaServer)
[2021-10-19 05:37:14,579] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2021-10-19 05:37:14,583] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:host.name=kafka-0.kafka-hs.kaf.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.version=1.8.0_131 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/connect-api-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-file-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-json-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-runtime-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-transforms-0.10.2.1.jar:/opt/kafka/bin/../libs/guava-18.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/opt/kafka/bin/../libs/jackson-core-2.8.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.24.jar:/opt/kafka/bin/../libs/jersey-common-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/opt/kafka/bin/../libs/jersey-guava-2.24.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/opt/kafka/bin/../libs/jersey-server-2.24.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.3.jar:/opt/kafka/bin/../libs/kafka-clients-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-streams-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-tools-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/reflections-0.9.10.jar:/opt/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/opt/kafka/bin/../libs/scala-library-2.11.8.jar:/opt/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.21.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.version=5.4.141-67.229.amzn2.x86_64 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.name=kafka (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.home=/home/kafka (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,584] INFO Initiating client connection, connectString=zk-cs.default.svc.cluster.local:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@5e0826e7 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,591] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2021-10-19 05:37:14,592] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to zk-cs.default.svc.cluster.local:2181
        at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
        at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
        at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106)
        at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88)
        at kafka.server.KafkaServer.initZk(KafkaServer.scala:326)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
        at kafka.Kafka$.main(Kafka.scala:67)
        at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: zk-cs.default.svc.cluster.local: Name or service not known
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
        at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
        at java.net.InetAddress.getAllByName(InetAddress.java:1192)
        at java.net.InetAddress.getAllByName(InetAddress.java:1126)
        at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
        at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
        ... 10 more
[2021-10-19 05:37:14,594] INFO shutting down (kafka.server.KafkaServer)
[2021-10-19 05:37:14,597] INFO shut down completed (kafka.server.KafkaServer)
[2021-10-19 05:37:14,597] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to zk-cs.default.svc.cluster.local:2181
        at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
        at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
        at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106)
        at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88)
        at kafka.server.KafkaServer.initZk(KafkaServer.scala:326)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
        at kafka.Kafka$.main(Kafka.scala:67)
        at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: zk-cs.default.svc.cluster.local: Name or service not known
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
        at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
        at java.net.InetAddress.getAllByName(InetAddress.java:1192)
        at java.net.InetAddress.getAllByName(InetAddress.java:1126)
        at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
        at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
        ... 10 more

[Screenshots: Crash-Loop-Kafka, kafka deployed manifest]

CodePudding user response:

Your Kafka and ZooKeeper deployments are running in the kaf namespace according to your screenshots; presumably you set this up manually and applied the configurations while in that namespace? Neither the Kafka nor the ZooKeeper YAML file explicitly states a namespace in its metadata, so both are deployed to whatever namespace is active when they are created.
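A quick way to confirm where each piece actually landed (the kaf namespace and the zk-cs service name are taken from your logs and the manifests; adjust if yours differ):

    # Look for the ZooKeeper client service in both candidate namespaces
    kubectl get svc zk-cs -n default
    kubectl get svc zk-cs -n kaf

    # And check which namespace the Kafka pods ended up in
    kubectl get pods -n kaf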

Anyway, the Kafka deployment YAML you have hardcodes the assumption that ZooKeeper is set up in the default namespace, with the following line:

          --override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \

Change this to:

          --override zookeeper.connect=zk-cs.kaf.svc.cluster.local:2181 \

and it should connect. You can do this by downloading the YAML file and editing it locally, for example.
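A minimal sketch of that approach (the sed expression just swaps the namespace in the hardcoded ZooKeeper address; file names and paths are assumptions):

    # Download the Kafka manifest locally
    curl -O https://kow3ns.github.io/kubernetes-kafka/manifests/kafka.yaml

    # Point zookeeper.connect at the kaf namespace instead of default
    sed -i 's/zk-cs.default.svc.cluster.local/zk-cs.kaf.svc.cluster.local/' kafka.yaml

    # Re-apply into the namespace where ZooKeeper is running
    kubectl apply -n kaf -f kafka.yaml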

Alternatively, deploy ZooKeeper into the default namespace.
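For example, assuming you have the zookeeper.yaml manifest downloaded locally:

    # Deploy ZooKeeper into default so the hardcoded
    # zk-cs.default.svc.cluster.local address resolves
    kubectl apply -n default -f zookeeper.yaml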

I also recommend looking at other options such as the Bitnami Kafka Helm charts, which deploy ZooKeeper alongside Kafka as needed, manage most of the connection details, and allow for easier customisation. They are also kept far more up to date.
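A minimal sketch of that route (the release name and namespace are placeholders; check the chart's documentation for current values and options):

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update

    # Installs Kafka (and, depending on chart version, ZooKeeper) into the kaf namespace
    helm install my-kafka bitnami/kafka --namespace kaf --create-namespace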
