Unable to view data in docker volume using docker-compose


I am trying to use a docker volume for the first time and I am having a hard time getting the container to share files with the host machine (Ubuntu). Using docker exec I can see the files my code is writing inside the container, but none of those files appear in the volume under /var/lib/docker/volumes.

My Dockerfile

FROM node:16-alpine

RUN apk add dumb-init
RUN addgroup gp && adduser -S appuser -G gp

RUN mkdir -p /usr/src/app/logs

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . /usr/src/app/
RUN chown -R appuser:gp /usr/src/app/logs/

USER appuser

My docker-compose.yml

version: "3.6"
services:
  my-service:
    user: appuser
    container_name: demou 
    build:
      context: . 
    image: "myService" 
   
    working_dir: /usr/src/app 

    ports:
      - 8080:8080 # 
    environment:
      - NODE_VERSION=16 
    volumes:
       - /logs:/logs/:rw
    command: sh -c "dumb-init node src/server.js"
    networks:
      - Snet
  # restart: always
volumes:
  logs:
   # driver: local
    name: "logs"
networks:
  Snet:
    name: "Snetwork"

server.js doesn't do anything besides writing a helloworld.txt file to the logs directory. When I run the app in the container, I don't see any errors or even a warning. It's just that the logs are not available on the host machine where Docker keeps its volumes. What am I missing here?

Thanks

CodePudding user response:

The compose file uses a bind mount (indicated by the leading / before logs):

...
services:
  my-service:
    ...
    volumes:
       - /logs:/logs/:rw
       # ^ this slash makes the mount a bind mount
    ...

We actually want a named volume, which we get by removing the leading /:

...
services:
  my-service:
    ...
    volumes:
       - logs:/logs/:rw 
       # ^ no slash, will be interpreted as named volume
       #   referencing the named volume "logs" defined below
    ...
volumes:
  logs:
   # driver: local
    name: "logs"
...

For more details, please refer to the relevant docker-compose file documentation.
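
If you want to check which kind of mount a running container actually ended up with, something along these lines should work (demou is the container_name from the compose file above; the exact JSON layout can differ between Docker versions):

# Show the container's mounts; "Type" is either "bind" or "volume",
# depending on how the left-hand side of the volumes: entry was parsed.
docker inspect demou --format '{{ json .Mounts }}'

# List the named volumes Docker knows about; "logs" only appears here
# once it is declared and used as a named volume.
docker volume ls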


As an aside: I had problems starting the docker-compose.yml file because of an "invalid reference format" error; the image name must not contain uppercase letters, so I had to change it to my-service. Even then, I was not able to build the my-service image due to missing files.

Here is a full docker-compose.yml that reproduces the desired behaviour; I used an Alpine image with a simple command that writes to the volume:

version: "3.6"
services:
  my-service:
    image: alpine:3.14.3
    working_dir: /logs
    volumes:
       - logs:/logs/:rw
    command: sh -c 'echo "Hello from alpine" > log.txt'
volumes:
  logs:
    name: logs
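
To confirm the file really landed in the named volume, you can, for example, inspect the volume and read the file from its mountpoint. The /var/lib/docker/volumes/... path below assumes the default local driver and usually requires root:

docker-compose up             # runs the one-shot command above
docker volume inspect logs    # prints the volume's "Mountpoint"
sudo cat /var/lib/docker/volumes/logs/_data/log.txt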

CodePudding user response:

You hint that you're trying to actually read the logs that come out, reasonably enough. For this use case you should use a Docker bind mount and not a named volume.

Where you specify

volumes:
  - /logs:/logs:rw

The first part (starting with a slash) is an absolute path on the host; if you ls / on the host system, outside a container, you should see the logs directory there. The second part is a path inside the container, which doesn't match what you've indicated in the Dockerfile. If you change it to

volumes:
  - ./logs:/usr/src/app/logs:rw
  # ^^     ^^^^^^^^^^^^

making the host side a relative path and the container side the directory your application actually writes to, you will be able to read the logs directly in a logs subdirectory of the directory containing the docker-compose.yml file. You can then delete the volumes: block at the end of the file.
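
After that change, a quick way to confirm the bind mount works is to start the stack and look at the host directory. The helloworld.txt name is taken from the question; adjust it to whatever your server.js actually writes:

docker-compose up -d        # start the service in the background
ls -l ./logs                # files written by the container show up here
cat ./logs/helloworld.txt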

(For completeness, if the left-hand side of a volumes: entry doesn't contain a slash at all, it refers to a named volume specified in the top-level volumes: block; see also @Turing85's answer.)

Permissions-wise, the container process must run as the same numeric user ID that owns the log directory. Any other directories that the container writes to must also have the same numeric owner. It doesn't matter if the code in the image is owned by root (in fact, it's better, because it prevents the code from being accidentally overwritten).

user: 1000  # match the host uid; check with `id -u` or `ls -lnd logs`
volumes:
  - ./logs:/usr/src/app/logs
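
As a rough sketch, if the logs directory does not exist yet or is owned by a different user, you can create it with a matching numeric owner before starting the container. The uid/gid 1000 here is only an example; use whatever `id -u` reports on your host:

mkdir -p logs
sudo chown 1000:1000 logs   # make the numeric owner match the container's uid
ls -lnd logs                # verify: owner and group should show as 1000 1000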

Also consider setting your application to log to stdout, instead of a file. That avoids this problem, and you can use docker logs to read the log output. In more involved container environments like Kubernetes, there are standard ways to collect logs-to-stdout from containers, but it's much trickier to collect logs-to-files.
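
If you do switch to logging to stdout, reading the logs is then just a matter of:

docker logs -f demou                # container name from the compose file
# or, addressing the compose service rather than the container:
docker-compose logs -f my-service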
