Error connecting ScyllaDB in Docker from Spring Boot app


I hope someone can help me with this issue, as I'm no expert with Docker.

I have a Java Spring Boot application (let's call it my-app) that uses ScyllaDB. So far I have been running the application on the host with Spring Boot's embedded Apache Tomcat, with the database running in Docker, and there have been no issues.

Here is the docker-compose file for the 3 Scylla nodes:

version: "3"

services:

  scylla-node1:
    container_name: scylla-node1
    image: scylladb/scylla:4.5.0
    restart: always
    command: --seeds=scylla-node1,scylla-node2 --smp 1 --memory 750M --overprovisioned 1 --api-address 0.0.0.0
    ports:
      - 9042:9042
    volumes:
      - "./scylla/scylla.yaml:/etc/scylla/scylla.yaml"
      - "./scylla/cassandra-rackdc.properties.dc1:/etc/scylla/cassandra-rackdc.properties"
    networks:
      - scylla-network

  scylla-node2:
    container_name: scylla-node2
    image: scylladb/scylla:4.5.0
    restart: always
    command: --seeds=scylla-node1,scylla-node2 --smp 1 --memory 750M --overprovisioned 1 --api-address 0.0.0.0
    ports:
      - 9043:9042
    volumes:
      - "./scylla/scylla.yaml:/etc/scylla/scylla.yaml"
      - "./scylla/cassandra-rackdc.properties.dc1:/etc/scylla/cassandra-rackdc.properties"
    networks:
      - scylla-network

  scylla-node3:
    container_name: scylla-node3
    image: scylladb/scylla:4.5.0
    restart: always
    command: --seeds=scylla-node1,scylla-node2 --smp 1 --memory 750M --overprovisioned 1 --api-address 0.0.0.0
    ports:
      - 9044:9042
    volumes:
      - "./scylla/scylla.yaml:/etc/scylla/scylla.yaml"
      - "./scylla/cassandra-rackdc.properties.dc1:/etc/scylla/cassandra-rackdc.properties"
    networks:
      - scylla-network

Using nodetool, I can see the DB is fine:

Datacenter: DC1
--  Address     Load       Tokens       Owns    Host ID                               Rack
UN  172.27.0.3  202.92 KB  256          ?       4e2690ec-393b-426d-8956-fb775ab5b3f9  Rack1
UN  172.27.0.2  99.5 KB    256          ?       ae6a0b9f-d0e7-4740-8ebe-0ce1d2e9ea7e  Rack1
UN  172.27.0.4  202.68 KB  256          ?       7a4b39bf-f38a-41ab-be33-c11a4e4e352c  Rack1

In the application, the Java driver I'm using is the DataStax Java driver 3.11.2.0 for Apache Cassandra. This is how I connect to the DB:

    @Bean
    public Cluster cluster() {
        Cluster cluster = Cluster.builder().addContactPointsWithPorts(
                        new InetSocketAddress("127.0.0.1", 9042),
                        new InetSocketAddress("127.0.0.1", 9043),
                        new InetSocketAddress("127.0.0.1", 9044))
                .build();
        return cluster;
    }

    @Bean
    public Session session(Cluster cluster, @Value("${scylla.keyspace}") String keyspace) throws IOException {
        final Session session = cluster.connect();
        setupKeyspace(session, keyspace);
        return session;
    }

When running the application with the embedded Tomcat server, I get a lot of connection errors at startup:

2022-07-19 22:42:38.424  WARN 28228 --- [r1-nio-worker-3] com.datastax.driver.core.Connection      : Error creating netty channel to /172.27.0.4:9042

However, after a short burst of error logs, the app eventually connects and is fully usable. I do have to wait for nodetool to confirm that all nodes are up first, though.

2022-07-19 23:25:12.324  INFO 25652 --- [  restartedMain] c.d.d.c.p.DCAwareRoundRobinPolicy        : Using data-center name 'DC1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
2022-07-19 23:25:12.324  INFO 25652 --- [  restartedMain] com.datastax.driver.core.Cluster         : New Cassandra host /172.27.0.3:9042 added
2022-07-19 23:25:12.324  INFO 25652 --- [  restartedMain] com.datastax.driver.core.Cluster         : New Cassandra host /172.27.0.2:9042 added
2022-07-19 23:25:12.324  INFO 25652 --- [  restartedMain] com.datastax.driver.core.Cluster         : New Cassandra host /127.0.0.1:9044 added

Then I recently added "my-app" to the docker-compose file, but the app can't start and instantly shuts down, even if I wait for nodetool status to confirm that all nodes are up.

Caused by: java.net.ConnectException: Connection refused

Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 

Is there something wrong with the way I'm connecting to the DB? I wonder why the embedded Tomcat build works while the Docker one instantly shuts down. I was hoping someone here could help me find a way for the docker-compose build to wait for all the Scylla nodes to be up before starting my-app (I assume I could do that with a script in the Dockerfile? Maybe?), but I can't even start the app in Docker the way I did with Tomcat. Maybe I'm missing something about the port and host when running in Docker.

Any ideas on what I could try to solve this? Thanks in advance!

The docker-compose file, edited to add the app:

  my-app:
    container_name: my-app
    build:
      context: .
      dockerfile: Dockerfile
    image: my-app
    ports:
      - 8082:8082
    depends_on:
      - scylla-node1
      - scylla-node2
      - scylla-node3
    networks:
      - scylla-network

CodePudding user response:

You need to use addresses in the contact points that are reachable from wherever the app actually runs, not localhost. Once my-app itself runs inside a container, 127.0.0.1 refers to that container, not to the Docker host, so the host port mappings (9042-9044) are no longer what it reaches.

Typically, it will be the IP address you've configured for CASSANDRA_RPC_ADDRESS (environment variable) or rpc_address (in your yaml).

If you didn't set the RPC addresses for the containers, you need to tell Cassandra what IP address to advertise to other nodes and clients by specifying a broadcast address with CASSANDRA_BROADCAST_ADDRESS or broadcast_rpc_address.

The important thing is that you need to use IP addresses which are reachable from your Spring Boot app. Cheers!
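Since my-app is attached to the same scylla-network in the compose file, the simplest reachable addresses are the compose service names themselves, which Docker's embedded DNS resolves inside that network. Here is a minimal sketch of the cluster bean under that assumption (inside the network every node listens on 9042; the host port mappings 9043/9044 only matter when connecting from outside Docker):

    @Bean
    public Cluster cluster() {
        // Inside scylla-network the compose service names resolve via Docker DNS,
        // and each node listens on the native port 9042, so no host port mapping is involved.
        return Cluster.builder()
                .addContactPointsWithPorts(
                        new InetSocketAddress("scylla-node1", 9042),
                        new InetSocketAddress("scylla-node2", 9042),
                        new InetSocketAddress("scylla-node3", 9042))
                .build();
    }

You could also externalize the contact points into a Spring property, so the same build keeps working against 127.0.0.1:9042-9044 when you run it on the host with the embedded Tomcat.

As for waiting for the nodes before starting my-app: one option is a healthcheck on each Scylla node plus a long-form depends_on on my-app. This is only a sketch; it assumes a Docker Compose version that supports depends_on conditions, and it uses cqlsh from the Scylla image as the probe (you may need to pass the node's address explicitly if it doesn't listen on localhost inside the container):

  scylla-node1:
    # ... existing settings ...
    healthcheck:
      test: ["CMD-SHELL", "cqlsh -e 'DESCRIBE CLUSTER'"]
      interval: 15s
      timeout: 10s
      retries: 20

  my-app:
    # ... existing settings ...
    depends_on:
      scylla-node1:
        condition: service_healthy
      scylla-node2:
        condition: service_healthy
      scylla-node3:
        condition: service_healthy

The same healthcheck block would go on scylla-node2 and scylla-node3 as well; cqlsh only succeeds once a node is actually answering CQL, which is roughly what you were waiting for with nodetool.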
