First, the error:
[root@master spark-2.0.1-bin-hadoop2.7]# ./sbin/start-all.sh 
org.apache.spark.deploy.master.Master running as process 3530.  Stop it first.
slaver1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out
slaver2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver2.out
slaver1: failed to launch org.apache.spark.deploy.worker.Worker:
slaver1:    at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:693)
slaver1:    at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
slaver1: full log in /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out
slaver2: failed to launch org.apache.spark.deploy.worker.Worker:
slaver2:    at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:693)
slaver2:    at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
slaver2: full log in /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver2.out
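The messages above only show the last two stack frames; the full exception should be in the worker log files they point to. For reference, something along these lines (log path copied from the output above) should dump the full trace, after stopping the Master that is still running as process 3530:

# stop any Spark daemons that are still running (the Master is reported as process 3530)
./sbin/stop-all.sh
# confirm no Master/Worker processes are left over
jps
# read the full worker log on the failing node (path taken from the message above)
ssh slaver1 tail -n 100 /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out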
My spark-env.sh is as follows:
export JAVA_HOME=/usr/java/jdk1.8.0_101
export SCALA_HOME=/usr/scala/scala-2.11.8
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=0.5g
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.3
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export SPARK_WORKER_CORES=1
export SPARK_MASTER_PORT=7077
export SPARK_LOCAL_DIRS=/usr/spark/spark-2.0.1
#set ipython start
PYSPARK_DRIVER_PYTHON=ipython
When I run ./sbin/start-all.sh for Hadoop, every node comes up fine, but when I run ./sbin/start-all.sh for Spark, it fails with the error above.
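To see the worker's exception directly in the terminal instead of digging through the log file, I understand a single worker can also be launched by hand on one of the slave nodes, roughly like this (master URL assumed to be spark://master:7077, taken from SPARK_MASTER_IP and SPARK_MASTER_PORT in the config above):

# run on slaver1: start one worker in the foreground so the failure prints to the console
cd /usr/spark/spark-2.0.1-bin-hadoop2.7
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077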
Each of my three virtual machines has 2 GB of memory, and the OS is Red Hat 5. When I start Hadoop, the page at 192.168.183.70:50070 loads normally.
Could someone please advise what I should do?