python confluent kafka client - unable to access Kafka on GKE using SSL


I have a simple Python Kafka producer, and I'm trying to access the Strimzi Kafka cluster on GKE, but I'm getting the following error:

cimpl.KafkaException: KafkaError{code=_INVALID_ARG,val=-186,str="Failed to create producer: ssl.key.location failed: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch"}

Here is the Kafka producer code:

from confluent_kafka import Producer

kafkaBrokers='<host>:<port>'
caRootLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/cacerts.pem'
certLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/cert.pem'
keyLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/key.pem'
password='<password>'

conf = {'bootstrap.servers': kafkaBrokers,
        'security.protocol': 'SSL',
        'ssl.ca.location': caRootLocation,
        'ssl.certificate.location': certLocation,
        'ssl.key.location': keyLocation,
        'ssl.key.password': password
}
topic = 'my-topic1'

producer = Producer(conf)

# send 100 test messages
for n in range(100):
    producer.produce(topic, key=str(n), value="val -> " + str(n))

producer.flush()

To get the PEM files from the secrets (PKCS #12 files), here are the commands used:

kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12 
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password


kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.p12}' | base64 -d > user2.p12
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.password}' | base64 -d > user2.password

# to get the user private key, i.e. key.pem
openssl pkcs12 -in user2.p12 -nodes -nocerts -out key.pem -passin pass:<passwd>

# CARoot - extract cacerts.cer
openssl pkcs12 -in ca.p12 -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > cacerts.cer
# convert to pem format
openssl x509 -in cacerts.cer -out cacerts.pem

# get the ca.crt from the secret
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# convert to pem
openssl x509 -in ca.crt -out cert.pem
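
As a quick check on what each extracted file actually contains, the subject and issuer of a certificate can be printed with openssl (for a user certificate, the issuer should be the cluster CA, not the certificate itself):

# inspect the extracted certificates
openssl x509 -in cert.pem -noout -subject -issuer
openssl x509 -in cacerts.pem -noout -subject -issuer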

Any ideas on how to fix this issue?

Please note: I'm able to access the Kafka cluster using the command-line Kafka producer/consumer over SSL.
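
For context, the command-line clients can use the PKCS #12 stores directly; a typical invocation looks roughly like the sketch below (the file name client-ssl.properties and the placeholder values are illustrative):

# client-ssl.properties (illustrative) would contain, for example:
#   security.protocol=SSL
#   ssl.truststore.location=<path>/ca.p12
#   ssl.truststore.password=<ca password>
#   ssl.truststore.type=PKCS12
#   ssl.keystore.location=<path>/user2.p12
#   ssl.keystore.password=<user password>
#   ssl.keystore.type=PKCS12
bin/kafka-console-producer.sh --bootstrap-server <host>:<port> --topic my-topic1 --producer.config client-ssl.properties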

CodePudding user response:

This is fixed; please see the expected configuration below:

'ssl.ca.location' -> CA root certificate (the cluster CA used to sign all the user certs)
'ssl.certificate.location' -> user certificate (the client certificate presented to the Kafka brokers)
'ssl.key.location' -> user private key (must pair with the certificate above; see the check below)
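
Concretely, the file passed as 'ssl.certificate.location' must contain the certificate whose public key matches the private key in 'ssl.key.location'. A quick way to verify the pair (standard OpenSSL, not specific to Strimzi):

# the two digests below must match; otherwise librdkafka fails with "key values mismatch"
openssl x509 -in cert.pem -noout -pubkey | openssl md5
openssl pkey -in key.pem -pubout | openssl md5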

The above error was due to an incorrect user cert being used; the cert must match the user private key.
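
In other words, cert.pem should come from the user's own secret, not from the cluster CA secret. A sketch of what that looks like, assuming the user certificate sits alongside the key in user2.p12 (or is exposed as a user.crt entry in the same secret):

# extract the user certificate (the one that pairs with key.pem) from the user's PKCS #12 bundle
openssl pkcs12 -in user2.p12 -clcerts -nokeys -out cert.pem -passin pass:<passwd>
# or take it directly from the KafkaUser secret, if it exposes a user.crt entry
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.crt}' | base64 -d > cert.pem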
