WARN StaticHostProvider: No IP address found for server:  master:2181
java.net.UnknownHostException:  master
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1295)
at java.net.InetAddress.getAllByName0(InetAddress.java:1248)
at java.net.InetAddress.getAllByName(InetAddress.java:1164)
at java.net.InetAddress.getAllByName(InetAddress.java:1098)
at org.apache.zookeeper.client.StaticHostProvider.resolveAndShuffle(StaticHostProvider.java:98)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:128)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:136)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:171)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:145)
at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1664)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:104)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:886)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:642)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:427)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:420)
at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:298)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:239)
at project.utils.HBaseUtils.<init>(HBaseUtils.java:25)
at project.utils.HBaseUtils.getInstance(HBaseUtils.java:35)
at project.dao.CourseClickCountDAO$.save(CourseClickCountDAO.scala:24)
at project.spark.ImoocStatStreamingApp$$anonfun$main$4$$anonfun$apply$1.apply(ImoocStatStreamingApp.scala:59)
at project.spark.ImoocStatStreamingApp$$anonfun$main$4$$anonfun$apply$1.apply(ImoocStatStreamingApp.scala:54)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
18/05/05 13:01:17 INFO ClientCnxn: Opening socket connection to server core/172.21.0.11:2181. Will not attempt to authenticate using SASL (unknown error)

Solutions »

  1.   

    I am using an HDP-integrated environment, with Flume feeding Kafka and messages produced into Spark Streaming. I receive messages with `val messages = KafkaUtils.createStream(ssc, zkQuorum, groupId, topicMap)`. When I then submit the job on the server with spark-submit, it keeps throwing this error.
      

  2.   

    WARN StaticHostProvider: No IP address found for server:  master:2181
    java.net.UnknownHostException:  master
    Check your /etc/hosts: have you configured the hostname and IP of every node in the cluster?
      

  3.   

    172.21.0.11  core
    172.21.0.13  master
    That is how it is configured, and it looks fine; the whole HDP cluster runs normally. Here is what puzzles me: this project uses Spark Streaming to consume from Kafka, and it already receives messages through ZooKeeper on port 2181, resolving the IP without any problem on the receiving side. So why does resolution fail when writing to HBase? I am connecting to the HBase database through a Java utility class.
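One thing worth ruling out: `foreachPartition` runs on the executor nodes, so the lookup happens on whichever machine the task lands on, not on the driver that received the Kafka messages. A minimal JDK-only check (the class name and the default hostname below are placeholders, not part of the original project) can confirm whether `master` resolves from a given node:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical diagnostic class: run it on each node that executes tasks
// to confirm the ZooKeeper quorum hostname actually resolves there.
public class HostResolveCheck {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "master";
        try {
            for (InetAddress addr : InetAddress.getAllByName(host)) {
                System.out.println(host + " -> " + addr.getHostAddress());
            }
        } catch (UnknownHostException e) {
            // Same failure path the ZooKeeper client hits in the stack trace above
            System.out.println("No IP address found for: " + host);
        }
    }
}
```

Run `java HostResolveCheck master` on every worker node; if any node prints the "No IP address found" line, that node's /etc/hosts (or DNS) is missing the entry, even though the driver's is fine.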
      


  5.   

    I am not sure whether my Java connection utility class is written correctly.
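For comparison, a connection utility along these lines is usually enough. This is only a sketch against the HBase 1.x client API (the class name and quorum value are illustrative, not the asker's actual HBaseUtils), and it requires the HBase client jars on the classpath:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Illustrative singleton in the spirit of the HBaseUtils in the stack trace.
public class HBaseUtilsSketch {
    private static Connection connection;

    private HBaseUtilsSketch() {}

    public static synchronized Connection getConnection() throws IOException {
        if (connection == null || connection.isClosed()) {
            Configuration conf = HBaseConfiguration.create();
            // Setting the quorum explicitly avoids depending on whichever
            // hbase-site.xml happens to be on the executor's classpath.
            conf.set("hbase.zookeeper.quorum", "master");            // no stray spaces
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            connection = ConnectionFactory.createConnection(conf);
        }
        return connection;
    }
}
```

One detail worth checking in your own class: the log above prints `server:  master:2181` with two spaces, which can indicate a leading space in the quorum string, in which case ZooKeeper would try to resolve ` master` rather than `master`. Calling `trim()` on any hostname read from configuration rules this out.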
      

  6.   

    WARN StaticHostProvider: No IP address found for server:  master:2181
    The hostname cannot be resolved. Check the configuration of all the environments involved: Kafka, HBase, Hadoop, Spark, and the host you submit the job from.
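On the HBase client side this usually comes down to the ZooKeeper quorum setting. A typical hbase-site.xml fragment looks like the following (the hostname here is assumed to match the cluster above, and every node that runs tasks must be able to resolve it):

```xml
<!-- hbase-site.xml: quorum hostnames must resolve on every node that runs tasks -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```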
      

  7.   

    There is a write-up at https://www.cnblogs.com/asura7969/p/8483917.html that might help you.
      

  8.   

    You probably have not configured your hosts file.