Spark throws an error when writing data to Elasticsearch
The write call is:

EsSpark.saveToEs(result, "userprofile/users", Map("es.mapping.id" -> "uid"))

It fails with:

org.elasticsearch.hadoop.EsHadoopException: Could not write all entries [3/1024] (maybe ES was overloaded?). Bailing out...
at org.elasticsearch.hadoop.rest.RestRepository.flush(RestRepository.java:250)
at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:201)
at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:163)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:49)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1.apply(EsSpark.scala:84)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1.apply(EsSpark.scala:84)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

The RDD being written from Spark contains 50 million rows, and the ES cluster has only two nodes.
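For reference, this is a throttled variant of the same call that I could try, a sketch using the standard elasticsearch-hadoop bulk/retry settings (the concrete values here are illustrative, not tested):

import org.elasticsearch.spark.rdd.EsSpark

// Smaller bulk requests and more patient retries, so rejected documents
// get retried instead of the job bailing out.
val esConf = Map(
  "es.mapping.id"              -> "uid",
  "es.batch.size.entries"      -> "500",  // docs per bulk request (default 1000)
  "es.batch.size.bytes"        -> "1mb",  // bytes per bulk request (default 1mb)
  "es.batch.write.retry.count" -> "10",   // retries on rejected docs (default 3)
  "es.batch.write.retry.wait"  -> "30s"   // pause between retries (default 10s)
)

// Fewer partitions means fewer concurrent bulk writers
// hammering the two-node cluster at once.
EsSpark.saveToEs(result.repartition(8), "userprofile/users", esConf)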