Angular App with Data in the URL gets 404 from NGINX

Time:06-03

I have a server with this setup:

In the root folder (/var/www) I have an index.html that links to the Angular apps app1 and app2:

   /index.html
   |
    -/app1/index.html
   |
    -/app2/index.html

My Angular app2 carries data in the URL, so a URL looks like this:

https://www.example.com/app2/thispartisdata/AlongDataStringThatContainsData

When I browse to https://www.example.com/app2 everything works fine

When I browse to https://www.example.com/app2/somedata this goes to a 404 message from NGINX

I have tried the try_files configuration option, but I need it only for my second app.

So what I would like to do is something like this (but I can't figure out or find what the syntax would be, or whether it's even possible):

location /app2 {
    try_files $uri $uri/ /app2/index.html;
}

I can't get the above to work. I want requests to app2 to keep their entire URL and be passed on to app2, rather than having NGINX get in the way and return a 404. I can't find the correct way to have ONLY app2 receive the full URLs and route them to app2's index.html.

Edit

My full configuration:

server {

        server_name example.com www.example.com;

        location /app1/api/ {
                proxy_pass https://127.0.0.1:5001;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection keep-alive;
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
       }

        location /app2/api/ {
                proxy_read_timeout 300s;
                proxy_connect_timeout 75s;
                proxy_pass https://127.0.0.1:5002/;
                proxy_http_version 1.1;
                proxy_request_buffering off;
                proxy_buffering off;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection keep-alive;
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
        }

# I want to do something like this here, but this gives a 500 error
#
#        location /app2 {
#             try_files $uri /app2/index.html;
#        }

        location / {
                root /var/www;
                index index.html;
        }


    ssl_stapling on;
    ssl_stapling_verify on;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}


server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot



        server_name example.com www.example.com;
    listen 80;
    return 404; # managed by Certbot

}

CodePudding user response:

Your main mistake here is that you didn't define the root for your web site at the server level. Your root /var/www; is effective only inside location / { ... } and nowhere else. Like many other nginx directives, the root directive, if not inherited from a previous configuration level, has a default value of html relative to the nginx prefix. That prefix is set at build time and can be checked with the nginx -V command (see the --prefix=... configure argument). If, for example, this prefix is /etc/nginx (a common case), the default server webroot will be /etc/nginx/html. You need to move that directive one level up, to the server context instead of the location one.
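Applied to your configuration, that change would look roughly like this (only the relevant lines are shown):

```nginx
server {
    server_name example.com www.example.com;
    root /var/www;        # now inherited by every location below

    location / {
        index index.html; # root is no longer needed here
    }

    # ... the rest of the server block stays as-is ...
}
```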

After doing that, you can add

location /app2/ {
    try_files $uri /app2/index.html;
}

to your server block. I don't use a $uri/ component here since I do not expect you to have any nested index.html under your app2 web app directory other than /app2/index.html. The redirect from /app2 to /app2/ will be made by nginx automatically: under your location / { ... } an implicit try_files $uri $uri/ =404 is used as the default precontent request-processing phase handler, and the /app2 request will be handled by location / { ... } rather than by location /app2/ { ... }.
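To illustrate that request flow (a sketch only; the commented try_files line paraphrases nginx's default behavior and is not something you need to add):

```nginx
location / {
    index index.html;
    # Roughly equivalent to the default behavior:
    # try_files $uri $uri/ =404;
    #
    # A request for /app2 (no trailing slash) matches a directory here,
    # so nginx issues an automatic redirect to /app2/, which then
    # matches location /app2/ { ... } and its try_files fallback.
}
```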

What I don't understand is how your app1 can work without a similar

location /app1/ {
    try_files $uri /app1/index.html;
}

location block. Maybe it doesn't have an interactive part and is used only as an API?

The next thing that seems somewhat weird to me is your app1/app2 API locations. You use a URI suffix with the proxy_pass directive in your second location (proxy_pass https://127.0.0.1:5002/;) but not in your first one (proxy_pass https://127.0.0.1:5001;). That means a request like /app1/api/endpoint will reach your app1 backend as-is, while a request like /app2/api/endpoint will reach your app2 backend as /endpoint. If you used proxy_pass https://127.0.0.1:5001/api/; and proxy_pass https://127.0.0.1:5002/api/; instead, both requests would reach your backends as /api/endpoint, since the matched location prefix is replaced by the URI given in proxy_pass. You can find out more about this proxy_pass directive behavior here. However, if both APIs work as expected, your backend apps may simply expect requests in those different forms, and you don't need to change anything. Nevertheless, I think the difference is worth mentioning.
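To make the difference concrete, here is how the two proxy_pass forms map the same kind of request (a sketch; the /endpoint path is just an example):

```nginx
# Without a URI in proxy_pass: the request URI is passed unchanged.
location /app1/api/ {
    proxy_pass https://127.0.0.1:5001;
    # /app1/api/endpoint  ->  backend sees /app1/api/endpoint
}

# With a URI in proxy_pass: the matched location prefix
# is replaced by the URI given in the directive.
location /app2/api/ {
    proxy_pass https://127.0.0.1:5002/;
    # /app2/api/endpoint  ->  backend sees /endpoint
}
```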

The last thing concerns your other proxying settings for the API requests. I've never seen a configuration use proxy_set_header Connection keep-alive; explicitly; the common way to keep connections to the upstream alive is to declare an upstream block with the keepalive parameter specified (example). The proxy_set_header Upgrade $http_upgrade; line has most likely been taken from the official WebSocket proxying example and makes no sense here without the rest of that configuration. More technical details about those HTTP headers and related subjects from MDN: Connection, Keep-Alive, Upgrade. Disabling request/response buffering via proxy_request_buffering off; and proxy_buffering off; also seems strange to me when applied to API calls; usually that is used when you need some kind of real-time information about request/response progress, which is unlikely for API calls.
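A minimal sketch of the upstream keepalive approach (the upstream name app2_backend is my own illustrative choice; note that, per the nginx docs, keepalive also requires HTTP/1.1 and an empty Connection header):

```nginx
# Pool of idle keepalive connections to the app2 backend.
upstream app2_backend {
    server 127.0.0.1:5002;
    keepalive 16;                    # keep up to 16 idle connections open
}

server {
    location /app2/api/ {
        proxy_pass https://app2_backend/;
        proxy_http_version 1.1;          # keepalive needs HTTP/1.1
        proxy_set_header Connection "";  # clear "close" so connections are reused
    }
}
```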
