I read a JSON file from S3 like this:
json_file = s3_resource.Object(bucket_name='test', key='new.json')
json_content = json.loads(file_content)
....
gzipped_content = gzip.compress(json_content)
After reading the file into json_content, I want to gzip it, but I am not sure what arguments to pass to gzip.compress().
Currently, I get the error below:
{
"errorMessage": "memoryview: a bytes-like object is required, not 'list'",
"errorType": "TypeError",
"requestId": "017949f4-533b-4087-9038-10fd39f435d9",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 28, in lambda_handler\n gzipped_content = gzip.compress(json_content)\n",
" File \"/var/lang/lib/python3.9/gzip.py\", line 548, in compress\n f.write(data)\n",
" File \"/var/lang/lib/python3.9/gzip.py\", line 284, in write\n data = memoryview(data)\n"
]
}
json_content looks like this:
[{'actionCodes': [], 'additionalCostOccured': '', 'amountEURRecieved': 0.0, 'amountOfAdditionalCost':}]
For zipped files, I did something like this and it worked:
with zipped.open(file, "r") as f_in:
    gzipped_content = gzip.compress(f_in.read())
What is the issue?
CodePudding user response:
As the error suggests, gzip.compress(...) expects a bytes-like object, while you are providing a list.
You need to:

1. Pass the (modified?) list object (or any other JSON-spec-compatible object) to json.dumps to obtain a JSON-formatted str.
2. Pass the JSON string to str.encode to get a bytes object.
3. Pass the bytes object to gzip.compress(...).
This should work:
json_file = s3_resource.Object(bucket_name='test', key='new.json')
json_content = json.loads(file_content)
....
content_back_to_json = json.dumps(json_content)
json_content_as_bytes = str.encode(content_back_to_json)
gzipped_content = gzip.compress(json_content_as_bytes)
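The three steps above also round-trip cleanly. Here is a minimal, self-contained sketch, with a hypothetical in-memory payload standing in for the data read from S3:

```python
import gzip
import json

# Hypothetical payload standing in for the JSON parsed from S3.
json_content = [{'actionCodes': [], 'amountEURRecieved': 0.0}]

# 1. Serialize the list back to a JSON-formatted str.
content_back_to_json = json.dumps(json_content)

# 2. Encode the str to bytes (UTF-8 by default).
json_content_as_bytes = content_back_to_json.encode()

# 3. Compress the bytes.
gzipped_content = gzip.compress(json_content_as_bytes)

# Decompressing and re-parsing yields the original object back.
round_tripped = json.loads(gzip.decompress(gzipped_content).decode())
assert round_tripped == json_content
```

Note that content_back_to_json.encode() is equivalent to str.encode(content_back_to_json); calling the method on the string is simply the more idiomatic spelling.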