Can you help me understand the following error message and the reason behind it:
Create a dummy dataset:
import numpy as np

df_ = spark.createDataFrame([(1, np.nan, 'x'), (None, 2.0, 'y'), (3, 4.0, None)], ("a", "b", "c"))
df_.show()
+----+---+----+
|   a|  b|   c|
+----+---+----+
|   1|NaN|   x|
|null|2.0|   y|
|   3|4.0|null|
+----+---+----+
Now, I attempt to replace the NaN in the column 'b' the following way:
df_.withColumn("b", df_.select("b").replace({float("nan"):5}).b)
The expression df_.select("b").replace({float("nan"):5}).b on its own runs just fine and gives a proper column with the expected values. Yet the code above does not work, and I am not able to understand the error.
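For reference, evaluating the standalone expression shows the replacement working:

df_.select("b").replace({float("nan"): 5}).show()
# the NaN in column 'b' comes back as 5.0; the 2.0 and 4.0 values are unchanged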
The error that I am getting is:
AnalysisException Traceback (most recent call last)
Cell In[170], line 1
----> 1 df_.withColumn("b", df_.select("b").replace({float("nan"):5}).b)
File /usr/lib/spark/python/pyspark/sql/dataframe.py:2455, in DataFrame.withColumn(self, colName, col)
2425 """
2426 Returns a new :class:`DataFrame` by adding a column or replacing the
2427 existing column that has the same name.
(...)
2452
2453 """
2454 assert isinstance(col, Column), "col should be Column"
-> 2455 return DataFrame(self._jdf.withColumn(colName, col._jc), self.sql_ctx)
File /opt/conda/miniconda3/lib/python3.8/site-packages/py4j/java_gateway.py:1304, in JavaMember.__call__(self, *args)
1298 command = proto.CALL_COMMAND_NAME +\
1299     self.command_header +\
1300     args_command +\
1301     proto.END_COMMAND_PART
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1307 for temp_arg in temp_args:
1308 temp_arg._detach()
File /usr/lib/spark/python/pyspark/sql/utils.py:117, in capture_sql_exception.<locals>.deco(*a, **kw)
113 converted = convert_exception(e.java_exception)
114 if not isinstance(converted, UnknownException):
115 # Hide where the exception came from that shows a non-Pythonic
116 # JVM exception message.
--> 117 raise converted from None
118 else:
119 raise
AnalysisException: Resolved attribute(s) b#1083 missing from a#930L,b#931,c#932 in operator !Project [a#930L, b#1083 AS b#1085, c#932]. Attribute(s) with the same name appear in the operation: b. Please check if the right attribute(s) are used.;
!Project [a#930L, b#1083 AS b#1085, c#932]
+- LogicalRDD [a#930L, b#931, c#932], false
I can achieve the required objective by using the subset argument of the replace API, i.e. df_.replace({float("nan"):5}, subset=['b']).
However, I am trying to understand the error that I am seeing and the cause behind it.
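For completeness, the working call looks like this (it simply returns a new DataFrame with NaN in column 'b' replaced by 5.0):

df_.replace({float("nan"): 5}, subset=['b']).show()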
CodePudding user response:
Based on the documentation of df.withColumn:
Returns a new DataFrame by adding a column or replacing the existing column that has the same name.
The column expression must be an expression over this DataFrame; attempting to add a column from some other DataFrame will raise an error.
So when you do df_.select("b").replace({float("nan"):5}).b, this creates a different DataFrame with a different attribute id for column b (since df_.select returns a new DataFrame). That attribute id does not exist in the original DataFrame, which is exactly what the AnalysisException reports: b#1083 is missing from a#930L, b#931, c#932.
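You can see this by inspecting the plan of the intermediate DataFrame itself (the exact #numbers vary per session, but the idea matches the error above):

# The select/replace produces a plan in which b is aliased to a fresh attribute id
# (b#1083 in the error above); that id is not among df_'s output attributes
# a#930L, b#931, c#932, so withColumn on df_ cannot resolve it.
df_.select("b").replace({float("nan"): 5}).explain(extended=True)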
You should instead use replace with the subset argument, which builds the replacement expression over the same DataFrame, so it resolves against the original column attributes:
new_df = df_.replace({float("nan"):5},subset='b')
new_df.explain()
== Physical Plan ==
*(1) Project [a#2131L, CASE WHEN (b#2132 = NaN) THEN 5.0 ELSE b#2132 END AS b#2351, c#2133]
+- *(1) Scan ExistingRDD[a#2131L,b#2132,c#2133]
Note how the output attribute id assigned to b changes on each call (b#2351 above vs b#2378 below), while the input attributes from df_ stay the same:
df1 = df_
df1.replace({float("nan"):5},subset='b').explain()
== Physical Plan ==
*(1) Project [a#2131L, CASE WHEN (b#2132 = NaN) THEN 5.0 ELSE b#2132 END AS b#2378, c#2133]
+- *(1) Scan ExistingRDD[a#2131L,b#2132,c#2133]
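As a side note, if you prefer to keep the withColumn form, you can build the replacement expression directly over df_ with pyspark.sql.functions; a sketch using nanvl or when/isnan, both of which reference df_'s own b column and therefore resolve fine:

from pyspark.sql import functions as F

# nanvl returns the first column when it is not NaN, otherwise the second value
df_.withColumn("b", F.nanvl(F.col("b"), F.lit(5.0))).show()

# equivalent with an explicit when/otherwise on isnan
df_.withColumn("b", F.when(F.isnan(F.col("b")), F.lit(5.0)).otherwise(F.col("b"))).show()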