Building Docker containers typically involves a lot of Internet access. Docker itself will only pull a given image once, and it tries to cache build steps too. But lots and lots of Dockerfiles include a line like zypper update or apt-get update, and the same packages often get downloaded multiple times.
Is there a way to use Docker itself to create a proxy server, so that package lists, packages, etc. only get downloaded once?
The Internet has millions of questions about how to configure Docker to use a proxy server (like, if you're behind a hardware firewall or something), but I haven't seen anything about using Docker to create a proxy server.
I've never set up a proxy before, so I don't know much about this. I imagine there are Docker images for the popular ones. The question is: can I set up a Docker container so that everything on the local machine transparently goes through the proxy? Or would I have to manually configure each and every program on the system to use it? Can I at least globally configure everything Docker-related to use this proxy?
CodePudding user response:
You can run a squid proxy with little more than this:
docker-compose.yml
services:
  proxy:
    image: datadog/squid:latest
    restart: always
    ports:
      - 3128:3128
docker compose up -d
then all you need to do is add the following to your environment:
HTTP_PROXY=http://localhost:3128
HTTPS_PROXY=$HTTP_PROXY
NO_PROXY=localhost,127.0.0.1
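As a quick sketch of what that looks like in a shell session (the values are the ones above; the curl check at the end assumes the Squid container is already up, so it is left commented out):

```shell
# Point the current shell at the Squid container
export HTTP_PROXY=http://localhost:3128
export HTTPS_PROXY=$HTTP_PROXY          # reuse the same value; note $VAR, not $(VAR)
export NO_PROXY=localhost,127.0.0.1

echo "$HTTPS_PROXY"

# Optional check once the proxy is running: fetch a page through it
# curl -s -o /dev/null -w '%{http_code}\n' http://example.com
```

Note that these are plain environment variables, so they only affect programs started from a shell where they are set (or from a service that inherits them).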
and most apps should start using it. It would be dangerous to make it the system-wide proxy (in Windows Settings, for example), because it won't be running until Docker has started. A reliable system proxy really needs to live on another server on your network.
To make all Docker containers use the proxy, follow these instructions: Configure Docker to use a Proxy Server. If the proxy is on your own machine, use host.docker.internal as the proxy hostname injected into containers, as that name resolves to the bridge IP from inside a container.
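For the per-container case, those instructions boil down to a proxies section in the Docker client config file, ~/.docker/config.json. A minimal sketch, assuming the Squid setup above (hostname and port are from this answer, not universal defaults):

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://host.docker.internal:3128",
      "httpsProxy": "http://host.docker.internal:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```

With this in place, Docker injects HTTP_PROXY, HTTPS_PROXY, and NO_PROXY into newly created containers and docker build runs; it does not change how the Docker daemon itself pulls images.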