I have a multi-container application deployed on an EC2 instance as a single ECS task. When I make an HTTP request from container-1 to container-2, I get the error "Name or service not known."
I can't reproduce this locally when I run with Docker Compose. I'm using bridge network mode. I've SSH'd into the EC2 instance and confirmed that both containers are on the bridge network. (I've also tried awsvpc without success; that led to a different set of issues, so I'll save it for a separate post if necessary.)
Here's a snippet of my task-definition.json:
{
    ...
    "containerDefinitions": [
        {
            "name": "container-1",
            "image": "container-1",
            "portMappings": [
                {
                    "hostPort": 8081,
                    "containerPort": 8081,
                    "protocol": "tcp"
                }
            ]
        },
        {
            "name": "container-2",
            "image": "container-2",
            "portMappings": [
                {
                    "hostPort": 8080,
                    "containerPort": 8080,
                    "protocol": "tcp"
                }
            ]
        }
    ],
    "networkMode": "bridge",
    ...
}
EDIT 1 - Adding some of my ifconfig output; let me know if I need to add more.
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:a7ff:febd:55df  prefixlen 64  scopeid 0x20<link>
        ether 02:42:a7:bd:55:df  txqueuelen 0  (Ethernet)
        RX packets 842  bytes 55315 (54.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 614  bytes 78799 (76.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ecs-bridge: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 169.254.172.1  netmask 255.255.252.0  broadcast 0.0.0.0
        inet6 fe80::c5a:1bff:fed4:525f  prefixlen 64  scopeid 0x20<link>
        ether 00:00:00:00:00:00  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23  bytes 1890 (1.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3760  bytes 274480 (268.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3760  bytes 274480 (268.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
EDIT 2 - Output of docker inspect bridge:
[
    {
        "Name": "bridge",
        "Id": "...",
        "Created": "...",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "XXX",
                    "Gateway": "XXX"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "somehash": {
                "Name": "container-1",
                "EndpointID": "XXX",
                "MacAddress": "XXX",
                "IPv4Address": "XXX",
                "IPv6Address": ""
            },
            "somehash": {
                "Name": "container-2",
                "EndpointID": "XXX",
                "MacAddress": "XXX",
                "IPv4Address": "XXX",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Answer:
To allow containers in a single task to reach each other by name in bridge network mode, you need to specify the links attribute, which maps a container name to a hostname the linking container can resolve. (This is why Docker Compose works locally: Compose puts services on a user-defined network with built-in DNS, while the default docker0 bridge that ECS bridge mode uses has no name resolution.) The links parameter is documented in the ECS task definition parameters documentation.
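A minimal sketch of what that could look like in the task definition from the question (container names taken from the question; everything else unchanged):

```json
"containerDefinitions": [
    {
        "name": "container-1",
        "image": "container-1",
        "links": ["container-2"],
        "portMappings": [
            {
                "hostPort": 8081,
                "containerPort": 8081,
                "protocol": "tcp"
            }
        ]
    },
    ...
]
```

With the link in place, container-1 should be able to reach container-2 at http://container-2:8080 (the containerPort, not the hostPort, since traffic stays on the bridge network). Note that links is a legacy Docker feature and is only supported when networkMode is bridge; it is not available in awsvpc or host mode.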