import pyspark.sql.functions as F
file_list = [ i.path for i in dbutils.fs.ls("/location/01/01/01/01/") ]
df = spark.read.json(file_list)
test_df = df.select(F.col('card_1.card_type'))
test_df.show()
I get this error message:
Column 'card_1.card_type' does not exist. Did you mean one of the following? [card_1.card_type, card_1.name, ... ]
Can anyone please tell me how to resolve this?
CodePudding user response:
I think wrapping the column name in backticks (`) will fix your issue. Spark parses the dot in `card_1.card_type` as struct-field access (a field `card_type` inside a struct column `card_1`), but your error message shows the column name itself literally contains the dot; backticks tell Spark to treat the whole string as a single column name:
test_df = df.select(F.col('`card_1.card_type`'))