To upload a JSON file to an AWS DynamoDB table in Python I am using the script found on this page, but I can't work out whether it is possible to tell Python to split a single string field of the JSON file on a specific character, so that it becomes an array of elements in DynamoDB.
For example, let's use this data.json file
[
  {
    "artist": "Romero Allen",
    "song": "Atomic Dim",
    "id": "b4b0da3f-36e3-4569-b196-3ad982f72bbd",
    "priceUsdCents": 392,
    "publisher": "QUAREX|IME|RUME"
  },
  {
    "artist": "Hilda Barnes",
    "song": "Almond Dutch",
    "id": "eeb58c73-603f-4d6b-9e3b-cf587488f488",
    "priceUsdCents": 161,
    "publisher": "LETPRO|SOUNDSCARE"
  }
]
and this script.py file
import boto3
import json

dynamodb = boto3.client('dynamodb')

def upload():
    with open('data.json', 'r') as datafile:
        records = json.load(datafile)
    for song in records:
        print(song)
        item = {
            'artist': {'S': song['artist']},
            'song': {'S': song['song']},
            'id': {'S': song['id']},
            'priceUsdCents': {'S': str(song['priceUsdCents'])},
            'publisher': {'S': song['publisher']}
        }
        print(item)
        response = dynamodb.put_item(
            TableName='basicSongsTable',
            Item=item
        )
        print("UPLOADING ITEM")
        print(response)

upload()
My target is to edit the script so that the publisher column won't contain the string

publisher: "QUAREX|IME|RUME"

but a nested array of elements

publisher: ["QUAREX", "IME", "RUME"]
For me, an extra edit of the JSON file with Python before running the upload script is an option.
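If the pre-processing route is taken, a minimal sketch of that extra step could look like this (assuming the pipe-delimited publisher field shown above; the helper name split_publishers is mine, not from the original script):

```python
import json

def split_publishers(records, sep='|'):
    # Return new records with each pipe-delimited publisher string
    # turned into a list, leaving the input untouched.
    return [
        {**song, 'publisher': song['publisher'].split(sep)}
        for song in records
    ]

# Example with one record in the same shape as data.json
records = [{'artist': 'Romero Allen', 'publisher': 'QUAREX|IME|RUME'}]
print(json.dumps(split_publishers(records), indent=2))
```

The transformed records could then be written back to data.json with json.dump before running the upload script unchanged, although the upload script would still need to send the list with the correct DynamoDB type.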
CodePudding user response:
You can just use .split('|'). One caveat: with the low-level boto3 client, a DynamoDB list (the L type) must contain typed attribute values, so each split string has to be wrapped in {'S': ...}:

item = {
    'artist': {'S': song['artist']},
    'song': {'S': song['song']},
    'id': {'S': song['id']},
    'priceUsdCents': {'S': str(song['priceUsdCents'])},
    'publisher': {'L': [{'S': p} for p in song['publisher'].split('|')]}
}