I have a data frame with a timestamp field, like so:
| timestamp | id | version |
|---|---|---|
| 2022-01-01 01:02:00.000 | 1 | 2 |
| 2022-01-01 05:12:00.000 | 1 | 2 |
I've created a Glue job that uses ApplyMapping to save the data to a new S3 location. Currently I've added id and version as partition keys by selecting those fields in the visual editor, and my data is saved with the structure id=1/version=2/. I would like to parse the timestamp and extract the date value so the structure becomes id=1/version=2/dt=2022-01-01/. However, in the visual editor I can only select the timestamp field and can't perform any manipulation on it. I'm guessing I need to change the code, but I'm not sure how.
Code:
```python
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Script generated for node S3 bucket
S3bucket_node1 = glueContext.create_dynamic_frame.from_options(
    format_options={},
    connection_type="s3",
    format="parquet",
    connection_options={"paths": ["s3://my-data"], "recurse": True},
    transformation_ctx="S3bucket_node1",
)

# Script generated for node ApplyMapping
ApplyMapping_node2 = ApplyMapping.apply(
    frame=S3bucket_node1,
    mappings=[
        ("timestamp", "timestamp", "timestamp", "timestamp"),
        ("id", "string", "id", "string"),
        ("version", "string", "version", "string"),
    ],
    transformation_ctx="ApplyMapping_node2",
)

# Script generated for node S3 bucket
S3bucket_node3 = glueContext.write_dynamic_frame.from_options(
    frame=ApplyMapping_node2,
    connection_type="s3",
    format="glueparquet",
    connection_options={
        "path": "s3://target-data",
        "partitionKeys": ["id", "version"],
    },
    format_options={"compression": "gzip"},
    transformation_ctx="S3bucket_node3",
)

job.commit()
```
Answer:
Use the Map class. Add this function to your script:

```python
def AddDate(rec):
    # str() of a timestamp begins with "YYYY-MM-DD", so the
    # first 10 characters are exactly the date portion.
    ts = str(rec["timestamp"])
    rec["dt"] = ts[:10]
    return rec
```
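For example, on a record like the first row of the sample table (a hypothetical plain dict standing in for a DynamicFrame record):

```python
sample = {"timestamp": "2022-01-01 01:02:00.000", "id": "1", "version": "2"}
AddDate(sample)["dt"]  # -> '2022-01-01'
```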
Insert the Map transform after the ApplyMapping step. Map.apply runs AddDate against every record, so each one comes out with the new dt field:

```python
Mapped_dyF = Map.apply(frame=ApplyMapping_node2, f=AddDate)
```
Update the write-to-S3 step; note the changes to `frame` and `partitionKeys`:
```python
S3bucket_node3 = glueContext.write_dynamic_frame.from_options(
    frame=Mapped_dyF,
    connection_type="s3",
    format="glueparquet",
    connection_options={
        "path": "s3://target-data",
        "partitionKeys": ["id", "version", "dt"],
    },
    format_options={"compression": "gzip"},
    transformation_ctx="S3bucket_node3",
)
```
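If you'd rather stay in Spark, the same dt column can be derived with DataFrame functions instead of the Map transform. A minimal sketch, assuming the timestamp column is a Spark timestamp type (the toDF/fromDF round trip and the format string are the only assumptions here):

```python
from pyspark.sql.functions import col, date_format
from awsglue.dynamicframe import DynamicFrame

# Convert to a Spark DataFrame, add the date column, then convert back
df = ApplyMapping_node2.toDF().withColumn(
    "dt", date_format(col("timestamp"), "yyyy-MM-dd")
)
Mapped_dyF = DynamicFrame.fromDF(df, glueContext, "Mapped_dyF")
```

Either way, the write step above then produces the desired id=1/version=2/dt=2022-01-01/ structure.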