Steps:
1, spark-env.sh
Add the following:
![](https://images2015.cnblogs.com/blog/800044/201512/800044-20151216175331146-1016527870.png)
HADOOP_CONF_DIR=/root/------ makes Spark read resources from HDFS; if you want to use local resources instead, comment this line out.
![](https://images2015.cnblogs.com/blog/800044/201512/800044-20151216175332318-64163528.png)
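The screenshot carries the actual values; as a minimal sketch, assuming a standalone setup on Spark 1.3.1, conf/spark-env.sh usually holds entries like these (every path, hostname, and size below is a hypothetical example, including the HADOOP_CONF_DIR path that the original truncates):

export JAVA_HOME=/usr/java/jdk1.7.0_79                # JDK Spark runs on (hypothetical path)
export SPARK_MASTER_IP=master                         # host the Master binds to (hypothetical hostname)
export SPARK_MASTER_PORT=7077                         # default standalone Master port
export SPARK_WORKER_CORES=2                           # cores each Worker offers (example value)
export SPARK_WORKER_MEMORY=2g                         # memory each Worker offers (example value)
export HADOOP_CONF_DIR=/root/soft/hadoop/etc/hadoop   # hypothetical path; use HDFS resources, comment out for local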
2,slaves
![](https://images2015.cnblogs.com/blog/800044/201512/800044-20151216175333271-670781940.png)
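conf/slaves simply lists the machines that should each run a Worker, one hostname per line; the names below are placeholders, not the hosts in the screenshot:

worker1
worker2
worker3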
3, spark-defaults.conf
![](https://images2015.cnblogs.com/blog/800044/201512/800044-20151216175334677-1641949866.png)
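As a sketch of what conf/spark-defaults.conf typically sets for a standalone cluster (the master URL and HDFS path are assumptions, not values from the screenshot):

spark.master            spark://master:7077             # hypothetical Master host
spark.eventLog.enabled  true                            # keep event logs
spark.eventLog.dir      hdfs://master:9000/spark-logs   # hypothetical HDFS directory, must already exist
spark.executor.memory   1g                              # example executor size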
-----------------------------------------------------------------------------------------------------------------
Startup:
![](https://images2015.cnblogs.com/blog/800044/201512/800044-20151216175339834-1148405158.png)
cd /root/soft/spark-1.3.1
sbin/start-master.sh    # start the Master
sbin/start-slaves.sh    # start the Workers
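To verify the daemons came up, check the JVM processes and try attaching a shell; the master URL below assumes the hypothetical hostname used in the sketches above:

jps                                              # should list Master (and Worker on the slave nodes)
bin/spark-shell --master spark://master:7077     # hypothetical master URL; the web UI is at http://master:8080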