Automated process: I have a SQL View that I convert to JSON, and I'm attempting to POST the JSON via cURL in Bash. This works fine when the View pulls ~50 or fewer records, but if I increase the number of records the View pulls, and change absolutely nothing else, it breaks: exit code 126 instead of 0.
Here's the cURL command I'm trying:
curl -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -u "${USER}:${API_KEY}" \
  -o "${LOCATION}${RESPONSE_FILE_NAME}" \
  -d "${JSON}" \
  "${SERVER}"
Again, this works exactly how I want it to until I increase the size of $JSON. The only thing I change to do this is one line in my SQL code.
Another process I have uses the '--form file=@' option of cURL, and I believe that would solve my problem here, i.e. keeping the JSON in a separate file instead of dumping it directly into the cURL command. However, this specific API won't accept the data like that.
Is there any way I can edit my cURL command so it will allow me to deliver up to ~2 MB of data in one go? Is that even the problem here, or am I perhaps overlooking something else?
CodePudding user response:
According to the curl man page:
-d/--data
If you start the data with the letter @, the rest should be a file name to read the data from, or - if you want curl to read the data from stdin. The contents of the file must already be URL-encoded. Multiple files can also be specified. Posting data from a file named 'foobar' would thus be done with --data @foobar.
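This keeps the payload out of the shell's argument list entirely, which is likely what's biting you: passing the whole document as -d "${JSON}" makes it a single argv entry, and on Linux a single argument is typically capped at around 128 KB (with the total argument/environment size capped at roughly 2 MB), at which point exec fails with "Argument list too long" and Bash reports exit code 126. A minimal sketch, reusing the variables from your question (the temp-file handling here is illustrative; in practice you might have the SQL export write the file directly). Note --data-binary is used instead of plain -d so curl sends the file as-is:

# Write the JSON to a temporary file instead of expanding it on the command line.
# printf is a shell builtin, so the kernel's argument-size limit doesn't apply here.
TMP_JSON="$(mktemp)"
printf '%s' "${JSON}" > "${TMP_JSON}"

# --data-binary @file posts the file contents unchanged; plain -d @file
# would strip carriage returns and newlines before sending.
curl -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -u "${USER}:${API_KEY}" \
  -o "${LOCATION}${RESPONSE_FILE_NAME}" \
  --data-binary "@${TMP_JSON}" \
  "${SERVER}"

rm -f "${TMP_JSON}"

Unlike --form, which builds a multipart/form-data body, -d @file (or --data-binary @file) still sends a plain request body, so the API should see the same payload it gets today.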