I am developing a dockerized application. From time to time I rebuild and run the image/container on my machine to do some testing. To do so, I have naively been running the following:
docker image build -t myapp .
docker container run -p 8020:5000 -v $(pwd)/data:/data myapp
This seems to work quite well. But today I realized that the docker folder on my machine has grown quite big in the last three weeks (> 200 GB).
So it seems that all the supposedly temporary containers, images, volumes, etc. have piled up on my disk.
What would be the right way to solve this? I only need one version of my image, so it would be great if everything were simply overwritten every time I start a new test cycle.
Alternatively, I could execute docker system prune, but that seems like overkill.
CodePudding user response:
The majority of the space is occupied by images and volumes, so pruning only those is a better option than docker system prune:
docker image prune
docker volume prune
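If you want to confirm where the space is actually going before (or after) pruning, docker system df prints a per-type breakdown of disk usage:

docker system df

Both prune commands ask for confirmation; if you want to run them from a script, the -f flag skips the prompt:

docker image prune -f
docker volume prune -f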
docker image prune removes all dangling images, i.e. the old image layers that lost their myapp tag each time a new image was built and tagged as myapp.
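To preview what image prune would delete, you can list the dangling images first:

docker images --filter "dangling=true"

Note that the -v $(pwd)/data:/data mount in the question is a bind mount to a host directory, not a named volume, so docker volume prune will not touch that data.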
Also, since the containers are only started for testing/dev, they can be started with --rm so that they are cleaned up automatically when they stop.
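Applied to the run command from the question, that would be:

docker container run --rm -p 8020:5000 -v $(pwd)/data:/data myapp

With --rm, Docker automatically removes the container (and its associated anonymous volumes) when it exits, so stopped test containers no longer pile up.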