I'm trying to upload files (~40 MB) to S3 using the @adminjs/upload
feature in the AdminJS dashboard.
When I test on localhost, it uploads most of these files with no issues, although it takes a very long time. However, when I try on the deployed version on AWS EC2, I get the following error:
Error: Request aborted
at IncomingMessage.<anonymous> (/home/ubuntu/elearning-system-backend/node_modules/formidable/lib/incoming_form.js:122:19)
at IncomingMessage.emit (node:events:513:28)
at IncomingMessage.emit (node:domain:489:12)
at IncomingMessage._destroy (node:_http_incoming:224:10)
at _destroy (node:internal/streams/destroy:109:10)
at IncomingMessage.destroy (node:internal/streams/destroy:71:5)
at abortIncoming (node:_http_server:700:9)
at socketOnClose (node:_http_server:694:3)
at TLSSocket.emit (node:events:525:35)
at TLSSocket.emit (node:domain:489:12)
What I've tried to fix this issue:
1- I've read some issues on GitHub stating that using body-parser together with formidable sometimes produces this error; however, when I removed it, nothing changed.
2- Increasing the timeout doesn't really help either.
3- One issue mentioned a package called formidable-serverless, but I have no idea where to use it, since AdminJS uses formidable internally.
What could be done to fix this issue?
Versions I'm using:
Node.js: 18.7.0
@adminjs/express: "^5.0.0",
@adminjs/mongoose: "^3.0.0",
@adminjs/passwords: "^3.0.0",
@adminjs/upload: "^3.0.0",
adminjs: "^6.0.1",
CodePudding user response:
I've figured out where the issue was: the server timed out before the file could ever finish uploading. My solution was extending the timeout of the server itself (and of the HTTP request) until the file was able to upload. However, this is not really recommended, as very long timeouts may leave the website vulnerable to attacks such as slow-request denial of service; it is merely a temporary solution. Another option would be using socket.io, which I haven't tried yet but am planning to.
To extend the timeouts:
server.timeout = 25 * 60000;
server.headersTimeout = 25 * 60000;
server.keepAliveTimeout = 25 * 60000;