Spark reading HBase fails with "Cannot create a record reader because of a previous error"

Time:09-27

When using Spark to read from HBase, the job always fails with a "Cannot create a record reader because of a previous error" exception:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, zdwlhadoop1): java.io.IOException: Cannot create a record reader because of a previous error. Please, check the previous logs lines from the task's full log for more details.
	at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:163)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:131)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:104)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:66)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
	at org.apache.spark.scheduler.Task.run(Task.scala:64)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: The input format instance has not been properly initialized. Ensure you call initializeTable either in your constructor or initialize method
	at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getTable(TableInputFormatBase.java:389)
	at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:158)
	... 11 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1203)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1191)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1191)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Any help from the experts would be appreciated!

CodePudding user response:

The local disk space may be full.

CodePudding user response:

Please post the complete log. The message itself says the reader failed "because of a previous error", so the real cause is an earlier error further up in the task log.
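
For reference, the "Caused by" line means TableInputFormat was never given a table to read: this typically happens when the table name is missing from the configuration passed to newAPIHadoopRDD, or when HBase/ZooKeeper cannot be reached while the input format initializes. Below is a minimal Scala sketch of the usual setup; the table name "my_table" and the ZooKeeper settings are placeholders, not taken from your job.

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object HBaseReadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hbase-read-sketch"))

    // TableInputFormat picks up its table from the job configuration.
    // If INPUT_TABLE is missing, or ZooKeeper/HBase is unreachable, createRecordReader
    // later fails with "The input format instance has not been properly initialized".
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "zdwlhadoop1")       // placeholder quorum
    hbaseConf.set("hbase.zookeeper.property.clientPort", "2181") // placeholder port
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table")      // placeholder table name

    // Read the table as an RDD of (row key, Result) pairs.
    val rdd = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    println(s"row count = ${rdd.count()}")
    sc.stop()
  }
}

If the table name is set and the table exists, the earlier suggestions still apply: check local disk space and read the executor's full task log for the first error, since the record reader only reports a failure that happened before it.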