expired ca certificates in ruby docker image (2.6.8-bullseye)


Last Friday I started seeing this issue (on an environment that has been live for months) with Ruby on this Docker image (ruby:2.6.8-bullseye):

RestClient::SSLCertificateNotVerified: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed

I then tried adding a custom PEM (wget https://curl.se/ca/cacert.pem) and pointing the environment variable SSL_CERT_FILE at it (as explained in many other Stack Overflow questions).

but I get:

bash-4.4# wget https://curl.se/ca/cacert.pem
Connecting to curl.se (151.101.2.49:443)
ssl_client: curl.se: certificate verification failed: certificate has expired
wget: error getting response: Connection reset by peer

I tried saving the file on my local machine and then using docker cp to copy it into the container, but that didn't help either.

I tried running the console with:

bash-4.4# SSL_CERT_FILE=/cacert.pem bundle exec rails c
irb(main):001:0> RestClient.get('https://curl.se/ca/cacert.pem', headers={})
RestClient.get "https://curl.se/ca/cacert.pem", "Accept"=>"*/*", "Accept-Encoding"=>"gzip, deflate", "User-Agent"=>"rest-client/2.0.2 (linux-musl x86_64) ruby/2.3.8p459"
RestClient::SSLCertificateNotVerified: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
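
For reference, the default certificate paths that this Ruby's OpenSSL consults can be checked with something like:

# prints the compiled-in default CA file and directory;
# SSL_CERT_FILE / SSL_CERT_DIR override them at runtime
ruby -ropenssl -e 'puts OpenSSL::X509::DEFAULT_CERT_FILE, OpenSSL::X509::DEFAULT_CERT_DIR'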

I tried running update-ca-certificates both manually in the console and in the Dockerfile, but I get:

bash-4.4# update-ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping

When I tried this with the cacert.pem copied into the container as above, update-ca-certificates printed a similar warning for that file too.

Nothing I have tried seems to improve the issue.

Running

curl -Lks 'https://git.io/rg-ssl' | ruby

says everything is OK, with all three checks green.

Any ideas?

Thanks.

Update

I think this issue might be related to Let's Encrypt expiring their root certificate. I tried the first workaround they recommend, deleting the file both in a running container and in the Dockerfile, and then running update-ca-certificates, but this didn't help either. I'm not sure how to go about the other two workarounds.
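
For reference, on a Debian-based image that first workaround amounts to roughly the following (a sketch; the exact certificate path is an assumption):

# remove the expired root certificate (path assumes a Debian-based trust store)
rm -f /usr/share/ca-certificates/mozilla/DST_Root_CA_X3.crt
# regenerate /etc/ssl/certs/ca-certificates.crt without it
update-ca-certificates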

CodePudding user response:

One way to fix the issue is to run the following in the container's console:

apt update && apt install ca-certificates

However, this would be a Docker antipattern, as the changes would be lost when the container is deleted.

The better way is to rebuild the image from the Dockerfile you linked in your question (with docker build), then delete and recreate the container from the new image.
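
A rough sketch of that flow (the image and container names here are placeholders):

# rebuild from the Dockerfile, pulling the latest base image so the
# refreshed ca-certificates package is picked up
docker build --pull --no-cache -t myapp:latest .
# replace the old container with one created from the new image
docker rm -f myapp
docker run -d --name myapp myapp:latest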

CodePudding user response:

If you are on Debian 9, I would recommend updating it. Otherwise, this is the workaround I use in my image.

# Temporary fix: the Let's Encrypt R3 intermediate is chained to the expired
# DST Root CA X3 root on Debian 9, so disable that root and trust ISRG Root X1 instead.
RUN sed -i -E 's/(.*DST_Root_CA_X3.*)/!\1/' /etc/ca-certificates.conf
# update-ca-certificates only picks up *.crt files from this directory,
# so give the downloaded PEM a .crt extension
ADD https://letsencrypt.org/certs/isrgrootx1.pem /usr/local/share/ca-certificates/isrgrootx1.crt
RUN update-ca-certificates

FYI: there is a bug in OpenSSL 1.0.2 that causes this issue (see https://www.openssl.org/blog/blog/2021/09/13/LetsEncryptRootCertExpire/). Some platforms have released workaround fixes, so you may only need to upgrade to the latest ca-certificates and libgnutls30. It would be better still if you can upgrade to a more recent OpenSSL.
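
On a Debian-based image, applying those upgrades looks roughly like this (a sketch; run it in the container for a quick test, or as RUN steps in the Dockerfile to make it permanent):

# upgrade the trust store and TLS library mentioned above
apt-get update
apt-get install -y --only-upgrade ca-certificates libgnutls30
# regenerate /etc/ssl/certs/ca-certificates.crt
update-ca-certificates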
