First, the error:
[root@master spark-2.0.1-bin-hadoop2.7]# ./sbin/start-all.sh
org.apache.spark.deploy.master.Master running as process 3530. Stop it first.
slaver1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out
slaver2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver2.out
slaver1: failed to launch org.apache.spark.deploy.worker.Worker:
slaver1: at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:693)
slaver1: at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
slaver1: full log in /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out
slaver2: failed to launch org.apache.spark.deploy.worker.Worker:
slaver2: at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:693)
slaver2: at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
slaver2: full log in /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver2.out
My spark-env.sh is as follows:
export JAVA_HOME=/usr/java/jdk1.8.0_101
export SCALA_HOME=/usr/scala/scala-2.11.8
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=0.5g
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.3
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export SPARK_WORKER_CORES=1
export SPARK_MASTER_PORT=7077
export SPARK_LOCAL_DIRS=/usr/spark/spark-2.0.1
# set ipython start
PYSPARK_DRIVER_PYTHON=ipython

When I run ./sbin/start-all.sh for Hadoop, every node starts and runs normally.
But when I run ./sbin/start-all.sh for Spark, it fails with the errors above.
My three VMs each have 2 GB of RAM, and the OS is Red Hat 5. After starting Hadoop, the page at 192.168.183.70:50070 shows the following: [screenshot missing]
Any pointers on what to do would be much appreciated.
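For reference, the full worker launch logs referred to in the output above could be pulled from each slave with something like this (hostnames and log paths are taken verbatim from the start-all.sh output; adjust if yours differ):

```shell
# Tail the worker launch log on each slave over ssh.
# Hostnames and log paths come from the start-all.sh output above.
for host in slaver1 slaver2; do
  echo "==== $host ===="
  ssh "$host" tail -n 40 \
    "/usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-${host}.out"
done
```

The lines immediately above the `at org.apache.spark.deploy.worker.Worker$.main` frames are the ones that name the actual exception.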
Solutions »
You need to post the worker log itself before anyone can tell what the actual problem is.
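One guess worth checking even before the log (speculation until the worker log confirms it): the stack trace points at Worker startup argument parsing, and the spark-env.sh above sets SPARK_WORKER_MEMORY=0.5g. Spark's memory strings accept only integer values (e.g. 512m, 1g), so a fractional value like 0.5g fails to parse. The integer equivalent would be:

```shell
# spark-env.sh — Spark memory strings must be integers (e.g. 512m, 1g);
# a fractional value like 0.5g is rejected by the parser.
export SPARK_WORKER_MEMORY=512m
```

After changing it, also stop the already-running master (note the "running as process 3530. Stop it first." line) before rerunning ./sbin/start-all.sh.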