In my Elastic Beanstalk environment, it has always (for years) taken ~5 mins for my Node.js app to be deployed and for the instance to change from status "Pending" to "OK" in the EB Health tab.
Since May 14, the app deployment takes ~15 mins, without any changes to the app, the infrastructure, the app repository, the EB environment, or the EC2 Linux image. The same thing happened in both the production and the development environment; they are separate EB environments deploying the same app.
Looking at /var/log/eb-activity.log, I see that these 15 mins are spent at this step:
[Application deployment <APPID>/StartupStage0/AppDeployPreHook/50npm.sh] : Starting activity...
The script itself only runs:
/opt/elasticbeanstalk/containerfiles/ebnode.py --action npm-install
That script just does some file checks and path composition, then runs:
npm --production install
For comparison, I cleared any cached files and ran the same command locally, which took ~11 mins:
rm -rf node_modules
npm cache clean -f
npm --production install
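To see where the time goes, the same run can be wrapped in time and the verbose output kept for later inspection; this is just a sketch, and the log file name is arbitrary:

rm -rf node_modules
npm cache clean -f
time npm --production install --loglevel silly 2>&1 | tee npm-install.log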
I ran the command again with --loglevel silly, and it shows that there is one dependency in package.json which is not pulled from the npm registry but directly from GitHub, pointing to a label:
npm sill pacote Retrying git command: ls-remote -h -t git://github.com/<org>/<repo>.git attempt # 2
npm sill pacote Retrying git command: ls-remote -h -t git://github.com/<org>/<repo>.git attempt # 3
npm verb prepareGitDep github:<org>/<repo>#<label>: installing devDeps and running prepare script.
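For context, such a dependency typically takes one of these forms in package.json (placeholders as in the log lines above; this is only an illustration, the real entry isn't shown here):

"dependencies": {
  "<repo>": "github:<org>/<repo>#<label>"
}

or, with an explicit URL:

"dependencies": {
  "<repo>": "git://github.com/<org>/<repo>.git#<label>"
}

Either way, npm has to clone and build the package itself instead of downloading a prebuilt tarball from the registry, which is what the prepareGitDep line above refers to.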
The timeout for each of these git ls-remote commands is about 1 minute, and then it seems to install the devDependencies and run the prepare script for another 5 mins. I'm not sure whether this is also happening on the EC2 instance, but that's the only hint I found.
The three timed-out git ls-remote commands and the prepare script make up ~8 mins. So if that is something that hasn't been executed before, it may explain the longer deployment time. But then why would the deployment suddenly be any different than it has been for years?
I came across this GitHub blog post:
March 15, 2022
Changes made permanent. We’ll permanently stop accepting DSA keys. RSA keys uploaded after the cut-off point above will work only with SHA-2 signatures (but again, RSA keys uploaded before this date will continue to work with SHA-1). The deprecated MACs, ciphers, and unencrypted Git protocol will be permanently disabled.
So GitHub stopped accepting requests using the git:// protocol. Maybe that is the reason for the longer deploy time, and the timeouts also happen on the EC2 instance. But that change was already made permanent on March 15 (exactly 2 months ago), so why would the issue appear only now?
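For what it's worth, the protocol deprecation itself is easy to confirm from any shell; the git:// form should now fail or time out (GitHub has disabled the unencrypted protocol), while the https equivalent still answers (same placeholders as above, purely for illustration):

git ls-remote -h -t git://github.com/<org>/<repo>.git
git ls-remote -h -t https://github.com/<org>/<repo>.git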
CodePudding user response:
We had the same sort of puzzling behavior in our codebase, starting May 13th. I've got no idea why it took until now for GitHub dependencies to hit these issues, but per the blog post you shared, it seems to be a permanent thing.
A GitHub issue has been created on the subject, and the community has put forward various methods for getting around the problem.
One such suggestion:
Solved by replacing github protocol with https at package-lock.json & package.json
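Concretely, that means rewriting entries like the following in both package.json and package-lock.json (placeholder names, sketched for illustration):

before:
"<repo>": "git://github.com/<org>/<repo>.git#<label>"

after:
"<repo>": "git+https://github.com/<org>/<repo>.git#<label>"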
Another user suggested:
... we found out that updating npm to version 7.16.0 (at least, that was our go to version for other reasons) solved the issue
For us, we went with the latter and started using npm version 7 (specifically, 7.24.2). This did require us to update our package-lock.json file to version 2, which brings with it a whole host of other concerns, but it seemed to work for us.
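If you go the same route, the local steps are roughly the following (a sketch, not our exact commands; --package-lock-only rewrites package-lock.json without touching node_modules, and how npm 7 ends up on the EB instance depends on your platform version, which isn't covered here):

npm install -g npm@7.24.2
npm --version                     # should now report 7.24.2
npm install --package-lock-only   # regenerates package-lock.json as lockfileVersion 2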