What is this error on spark-submit by HDFS HA yarn


Here is my error log:

$ /spark-submit --master yarn --deploy-mode cluster pi.py
...
2021-12-23 01:31:04,330 INFO retry.RetryInvocationHandler: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby. Visit https://s.apache.org/sbnn-error
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1954)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1442)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1895)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:860)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:526)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking ClientNamenodeProtocolTranslatorPB.setPermission over master/172.17.0.2:8020. Trying to failover immediately.
...

Why do I get this error?

NOTE. The Spark master runs on 'master', so the spark-submit command is run on 'master'.

NOTE. The Spark workers run on 'worker1', 'worker2', and 'worker3'.

NOTE. The ResourceManager runs on 'master' and 'master2'.

ADD. When the above error is logged, master2's DFSZKFailoverController has disappeared from the jps output.

ADD. When the above error is logged, master's NameNode has disappeared from the jps output.

CodePudding user response:

This happens when the HDFS client can only reach a NameNode that is in standby state. The "Operation category WRITE is not supported in state standby" message means the setPermission call landed on a NameNode that is not the active one, and the RetryInvocationHandler line shows the client retrying and failing over. Your ADD notes point to the likely cause: the NameNode on master and the DFSZKFailoverController on master2 have died, so automatic failover cannot promote an active NameNode and every retry keeps hitting a standby.

If HA is configured correctly, the HDFS client handles the StandbyException by failing over to the other NameNode in the HA pair and then retrying the operation; your log's "Trying to failover immediately" shows exactly that.
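
You can verify the client-side HA configuration by querying the relevant properties with hdfs getconf. A minimal sketch, assuming your nameservice is called 'mycluster' with NameNode IDs 'nn1' and 'nn2' (substitute the values from your own hdfs-site.xml):

$ hdfs getconf -confKey dfs.nameservices
mycluster
$ hdfs getconf -confKey dfs.ha.namenodes.mycluster
nn1,nn2
$ hdfs getconf -confKey dfs.client.failover.proxy.provider.mycluster
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

If the failover proxy provider key is missing or wrong, the client cannot fail over between the two NameNodes.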

As a test, replace the HA nameservice URI with the active NameNode's address and check whether you still get the same error. If the error disappears, HA is not configured properly on the client side; see the example below.
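
For example, assuming the NameNode IDs are 'nn1' and 'nn2' (check dfs.ha.namenodes.<nameservice> for the real ones) and that master2 hosts the second NameNode, you can ask each NameNode for its HA state and then address one directly:

$ hdfs haadmin -getServiceState nn1
standby
$ hdfs haadmin -getServiceState nn2
active
$ hdfs dfs -ls hdfs://master2:8020/

If the direct URI works while the nameservice URI fails, client-side failover is misconfigured. In your case, though, the jps output suggests no NameNode is active at all, which -getServiceState would show immediately.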
