EsHadoopException: Could not write all entries [185/122112] (maybe ES was overloaded?)


Spark throws an error when writing data to Elasticsearch:

EsSpark.saveToEs(result, "userprofile/users", Map("es.mapping.id" -> "uid"))


The error message is:

org.elasticsearch.hadoop.EsHadoopException: Could not write all entries [3/1024] (maybe ES was overloaded?) Bailing out...
    at org.elasticsearch.hadoop.rest.RestRepository.flush(RestRepository.java:250)
    at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:201)
    at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:163)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:49)
    at org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1.apply(EsSpark.scala:84)
    at org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1.apply(EsSpark.scala:84)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

The Spark RDD being written contains about 50 million rows, and the ES cluster has only two nodes.

CodePudding user response:

Increase the bulk-write retry settings so the job backs off instead of bailing out when ES rejects entries:

val conf = new SparkConf()
conf.set("es.nodes", elasticsearch_nodes)
// Default is 3 retries; a negative value (-1) retries indefinitely (use with care)
conf.set("es.batch.write.retry.count", "10")
// Default wait between retries is 10s; it can be increased as appropriate
conf.set("es.batch.write.retry.wait", "60s")
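For context, here is a minimal end-to-end sketch of how these retry settings plug into the original saveToEs call. The app name, node addresses, and the sample `result` RDD are placeholders standing in for the poster's own values; the rest follows the elasticsearch-spark API used in the question.

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark.rdd.EsSpark

object UserProfileWriter {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("userprofile-to-es")
    // Comma-separated ES nodes (placeholder addresses)
    conf.set("es.nodes", "es-node1:9200,es-node2:9200")
    // Retry rejected bulk entries harder instead of bailing out
    conf.set("es.batch.write.retry.count", "10")
    conf.set("es.batch.write.retry.wait", "60s")

    val sc = new SparkContext(conf)
    // Stand-in for the poster's RDD of user documents
    val result = sc.parallelize(Seq(Map("uid" -> "u1", "age" -> 30)))
    // Use each document's "uid" field as the ES _id, as in the question
    EsSpark.saveToEs(result, "userprofile/users", Map("es.mapping.id" -> "uid"))
  }
}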