When running Nginx and PHP-FPM in two different containers, can that configuration ever work without sharing the code between them?


I have the following docker-compose.yaml file for local development that works without issue:

  • The Nginx container just runs the webserver, with an upstream pointing to PHP.
  • The PHP container runs just php-fpm and my extensions.
  • I have an external docker-sync volume containing my code base, which is shared with both the nginx and php containers.
  • The entire content of my application is purely PHP returning JSON API data; no static assets get served up.

version: '3.9'

networks:
  backend:
    driver: bridge

services:
  site:
    container_name: nginx
    depends_on: [php]
    image: my-nginx:latest
    networks: [backend]
    ports: ['8080:80', '8081:443']
    restart: always
    volumes: [code:/var/www/html:nocopy]
    working_dir: /var/www/html

  php:
    container_name: php
    image: my-php-fpm:latest
    networks: [backend]
    ports: ['9000:9000']
    volumes: [code:/var/www/html:nocopy]
    working_dir: /var/www/html

volumes:
  code:
    external: true

I'm playing around with ways to deploy this in my production infrastructure and am liking AWS ECS for it. I can create a single task definition, that launches a single service with both containers defined (and both sharing a code volume that I add in during my build process) and the application works.

This solution seems odd to me because now the only way my application can scale out is by giving me a {php, nginx} container pair each time. My PHP needs are going to scale faster than my nginx ones, so this strikes me as a bit wasteful.
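
For reference, the working single-task variant is shaped roughly like the sketch below (placeholder family name and sizes; the code gets into the shared volume as part of my build process, which I've left out here):

{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "volumes": [
    { "name": "code" }
  ],
  "containerDefinitions": [
    {
      "name": "php",
      "image": "my-php-fpm:latest",
      "essential": true,
      "mountPoints": [
        { "sourceVolume": "code", "containerPath": "/var/www/html" }
      ]
    },
    {
      "name": "nginx",
      "image": "my-nginx:latest",
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "protocol": "tcp" }
      ],
      "mountPoints": [
        { "sourceVolume": "code", "containerPath": "/var/www/html" }
      ],
      "dependsOn": [
        { "containerName": "php", "condition": "START" }
      ]
    }
  ]
}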

I've tried experimenting with the following setup:

  • 1 ECS service for just nginx
  • 1 different ECS service for just php
  • Both are load balanced, but because I'm using Fargate and they're separate services, I don't have a way to add a volumesFrom block on the nginx container that would give it access to my code (which I package into the PHP container during my build process). There is no reference to the PHP docker container that I can make that would allow this to happen.

My configuration "works" in that the load balanced Nginx service can now scale independent of the load balanced PHP service. They're able to both talk to each other. But Nginx not having my code means it can't help but return a 404 on anything that I want my php upstream to handle.

server {
    listen 80;
    server_name localhost;

    root /var/www/html/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location /health {
        access_log off;
        return 200 'PASS';
        add_header Content-Type text/plain;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app-upstream;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        proxy_http_version 1.1;
        proxy_set_header   "Connection" "";
    }
}

Is there any nginx configuration I can write that would make this setup (without nginx having access to my code) work?

It feels like my only options are either copying the same code onto both containers (which feels weird), combining them both into the same container (which violates the 1 service/1 container rule), or accepting that I can't scale them as independently as I would like (which is not the end of the world).

CodePudding user response:

It's not required to share the volume between those two containers. The PHP scripts are needed only by the PHP container; Nginx only needs network access to the PHP container so that it can proxy the requests to it.
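
For example, since your app is a pure front controller that only returns JSON, Nginx can hand every request to PHP-FPM without ever looking at its own filesystem. A minimal sketch, reusing the app-upstream you already have and assuming the code sits at /var/www/html/public inside the PHP container:

server {
    listen 80;
    server_name localhost;

    location /health {
        # Health checks are answered by Nginx itself, no code needed.
        access_log off;
        add_header Content-Type text/plain;
        return 200 'PASS';
    }

    location / {
        # No root and no try_files, so nothing here touches Nginx's filesystem.
        # SCRIPT_FILENAME only has to exist inside the PHP-FPM container.
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/html/public/index.php;
        fastcgi_pass app-upstream;
    }
}

The request method, URI, query string and body are all forwarded over FastCGI, and the framework's router inside the PHP container does the rest.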

To run your application on AWS ECS, you need to pack Nginx and PHP into the same container, so that the load balancer forwards the request to the container, Nginx accepts the connection and proxies it to PHP, and the response is returned.

Using one Nginx container as a proxy in front of multiple PHP containers isn't possible with Fargate: it would require running the containers on the same network and somehow making the Nginx container proxy and balance the incoming connections. Besides that, whenever a new PHP container was deployed, it would have to be registered with Nginx before it could start receiving connections.

CodePudding user response:

I had the same struggle for a long time until I moved all my PHP apps to NGINX Unit.

https://unit.nginx.org/howto/cakephp/

This is an example of how easy it is to have a single-container setup handle static files (HTML, CSS, JS) as well as all the PHP code. To learn more about running Unit in Docker, check this out: https://unit.nginx.org/howto/docker/
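
To give a feel for it before you dive into the tutorials, a minimal Unit config for a front-controller PHP app plus a few static file types looks roughly like this (the paths and port are just examples; the $uri suffix on share needs a reasonably recent Unit version):

{
  "listeners": {
    "*:8080": { "pass": "routes" }
  },
  "routes": [
    {
      "match": { "uri": ["*.css", "*.js", "*.html"] },
      "action": { "share": "/var/www/html/public$uri" }
    },
    {
      "action": { "pass": "applications/app" }
    }
  ],
  "applications": {
    "app": {
      "type": "php",
      "root": "/var/www/html/public",
      "script": "index.php"
    }
  }
}

You apply it through Unit's control API, e.g. curl -X PUT --data-binary @config.json --unix-socket /var/run/control.unit.sock http://localhost/config (the socket path can differ depending on the image); the Docker howto above walks through that part.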

Let me know if you have any issues with the tutorials.
