This is my KafkaJS-based publisher client. I built a container image from it and submitted a Pod YAML to a Kubernetes cluster running a Strimzi-managed Kafka broker.
const { Kafka } = require('kafkajs')

async function clients() {
  const kafka = new Kafka({
    clientId: 'my-app',
    brokers: ['test-kafka-bootstrap.strimzi.svc.cluster.local:9092']
  })

  const producer = kafka.producer()

  await producer.connect()
  await producer.send({
    topic: 'clients',
    messages: [
      { value: 'Hello KafkaJS user!' },
    ],
  })
  await producer.disconnect()
}

clients()
My Dockerfile:
FROM node:14 as build_app
WORKDIR /
WORKDIR /app
COPY app .
COPY package.json .
COPY package-lock.json .
RUN npm i
CMD ["node", "index.js"]
My Pod YAML:
apiVersion: v1
kind: Pod
metadata:
  name: clients
spec:
  containers:
    - name: clients
      image: ghcr.io/org/clients:v0.0.0
      imagePullPolicy: Always
The Pod keeps crashing, and kubectl logs shows nothing - it's empty. A kubectl describe didn't reveal anything either.
What am I missing?
CodePudding user response:
The logs show nothing because 1) your code produces no log output, and 2) your code sends a single record and then stops, so the container exits cleanly; it is not a long-running service. You would hit the same behavior with a plain docker run command.
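To make the outcome visible in kubectl logs, the script needs to log explicitly. Here is a minimal sketch of the pattern; the kafkajs connect/send/disconnect calls are replaced with a stand-in async function so the sketch runs anywhere - the point is the then/catch logging around the one-shot task:

```javascript
// Sketch: a one-shot async task with explicit logging, so the container's
// stdout/stderr is never empty. sendOnce() is a stand-in for the kafkajs
// connect/send/disconnect sequence from the question.
async function sendOnce() {
  // pretend this connects, sends one record, and disconnects
  return { topic: 'clients', value: 'Hello KafkaJS user!' }
}

sendOnce()
  .then((r) => console.log('sent to topic:', r.topic)) // visible in kubectl logs
  .catch((err) => {
    console.error('producer failed:', err)             // so crashes are not silent
    process.exitCode = 1                               // non-zero exit shows up in kubectl describe
  })
```

With this shape, even a failed broker connection would print an error before the container exits, instead of dying silently.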
If you wanted a long-running service, you'd need to wrap your code in a web service (NextJS, Express, Hapi, etc.), then add a health-check probe to the container and expose it with a k8s Service.
CodePudding user response:
You can bring the Pod up in sleep mode and debug from there. Use the YAML below; the command field overrides the image's CMD, so the container sleeps instead of running index.js:
apiVersion: v1
kind: Pod
metadata:
  name: clients
spec:
  containers:
    - name: clients
      image: ghcr.io/org/clients:v0.0.0
      imagePullPolicy: Always
      command: ["sleep", "100000000"]
Now exec into the Pod:
kubectl exec -it <pod> -- bash
and run the script by hand to see its output and errors directly:
node index.js