MongoDB on Kubernetes error "all map to this node in new configuration" for replica set


I am using a StatefulSet to deploy MongoDB to Kubernetes.

I have two pods, reachable at:

mongo-replica-0.mongo:27017 and mongo-replica-1.mongo:27017 (the .mongo suffix comes from the Kubernetes Service)

I am running this command from a Kubernetes Job after the pods have started:

mongo "mongodb://mongo-replica-0.mongo:27017" -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'mongo-replica-0.mongo:27017' }, { _id: 1, host: 'mongo-replica-1.mongo:27017' },] })"

I receive this error:

"errmsg" : "The hosts mongo-replica-0.mongo:27017 and mongo-replica-1.mongo:27017 all map to this node in new configuration with {version: 1, term: 0} for replica set rs0

How can I initiate my replica set?

CodePudding user response:

I needed to set the service's cluster_ip and session_affinity to null to make the service headless. When MongoDB originally tried to communicate with its peers through the service, it resolved the service's cluster IP and concluded that both hosts referred to itself. After these updates, rs.initiate succeeded.
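As a quick sanity check (a minimal sketch; the dns-test pod name and busybox image are my own choices, not from the original answer), a headless service's DNS name should resolve to the individual pod IPs rather than a single virtual IP:

# run a throwaway pod in the same namespace and resolve the service name;
# a headless service returns one A record per backing pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup mongo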

Terraform setting:

resource "kubernetes_service" "mongodb-service" {
  metadata {
    name      = "mongo"
    namespace = kubernetes_namespace.atlas-project.id
    labels = {
      "name" = "mongo"
    }
  }
  spec {

    selector = {
      app = "mongo"
    }
    cluster_ip       = null
    session_affinity = null
    port {
      port        = 27017
      target_port = 27017
    }

    type = "LoadBalancer"
  }
  lifecycle {
    ignore_changes = [spec[0].external_ips]
  }
}
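After applying the change and re-running the rs.initiate command from the Job, replica set health can be confirmed with rs.status() (a minimal check using the same credentials as in the question); one member should report PRIMARY and the other SECONDARY:

mongo "mongodb://mongo-replica-0.mongo:27017" -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })"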