I'm still early on my Docker journey and I'm trying to 'Dockerise' an existing UI/API/DB web app where the UI & API/DB are on two different servers. It's gone fairly well so far and my docker-compose fires up services for UI, API & DB (and phpMyAdmin) just fine. I know that those services can communicate inherently using service names as host names due to the default networking method of Docker.
The UI (an Angular codebase) makes various calls to https://api.myproject.com, which works fine on the live sites but isn't ideal in my Docker version, as I want it to reach the API service in the container, not the live API site.
I know that I can edit the code on the UI container and replace calls to https://api.myproject.com with calls to 'api' but that's inconvenient and lacks portability if I want to redeploy on different servers (as it is now) in future.
Is it possible for a container to redirect all POST/GET etc. requests for a URL to a container service? I thought the `--add-host` flag might do something like this, but it seems to want an IP address rather than a service name.
Thanks in advance.
EDIT [clarification]: My issue lies in the UI page (HTML/JS) that the user sees in the browser. When the page loads it makes some GET requests to the API URL, and those are what I was hoping to redirect to the containerised API service.
CodePudding user response:
You can use network aliases for a given container, and these can override an existing FQDN.
Here is a quick example:
docker-compose.yml
```yaml
---
services:
  search_engine:
    image: nginx:latest
    networks:
      default:
        aliases:
          - google.com
          - www.google.com

  client:
    image: alpine:latest
    command: tail -f /dev/null
```
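Applied to the question, the same idea would look something like the sketch below. The service names (`api`, `ui`), image names, and the alias are assumptions based on the question, not a tested configuration:

```yaml
services:
  api:
    image: myproject/api:latest    # hypothetical image name
    networks:
      default:
        aliases:
          # other containers on this network now resolve
          # api.myproject.com to this service
          - api.myproject.com

  ui:
    image: myproject/ui:latest     # hypothetical image name
```

Note that this only affects DNS inside the Docker network: requests made by the user's browser still resolve the real api.myproject.com, so the alias helps only for container-to-container calls. Also, since the original URL uses HTTPS, the API container would need a valid certificate for api.myproject.com or TLS verification will fail.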
You can start this project with:
```shell
docker-compose up -d
```
Then log into the client container with
```shell
docker-compose exec client sh
```
And try to reach your `search_engine` service with any of the following:
```shell
wget http://search_engine
wget http://google.com
wget http://www.google.com
```
In all cases you will get the default index.html page of a freshly installed nginx server.