Spark's common deployment modes
Local, Standalone, and YARN
Disable the firewall: systemctl stop firewalld.service
Restart the network service: systemctl restart network.service
Configure a static IP
Edit /etc/hosts
192.168.232.133 cent-1
192.168.232.134 cent-2
192.168.232.135 cent-3
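The "static IP" step above varies by distribution; on CentOS 7 it usually means editing the interface file under /etc/sysconfig/network-scripts/. A minimal sketch for cent-1 follows, where the interface name ens33, the gateway, and the DNS address are assumptions to adapt to your network:

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-ens33 on cent-1
# (interface name, GATEWAY and DNS1 are assumptions; IPADDR matches /etc/hosts)
TYPE=Ethernet
BOOTPROTO=static          # static instead of dhcp
NAME=ens33
DEVICE=ens33
ONBOOT=yes                # bring the interface up at boot
IPADDR=192.168.232.133    # cent-1's address from /etc/hosts
NETMASK=255.255.255.0
GATEWAY=192.168.232.2     # assumed gateway for this subnet
DNS1=192.168.232.2        # assumed DNS server
```

After editing, `systemctl restart network.service` (as above) applies the change.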
Passwordless SSH login was covered in the earlier Hadoop setup post.
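As a quick recap (a minimal sketch; the earlier Hadoop post is authoritative), passwordless login means generating a key pair on cent-1 and copying the public key to each worker. The snippet below only assembles and prints the commands as a dry run, so you can review them before executing:

```shell
# Dry-run sketch: compose the passwordless-SSH setup commands for review.
# Hostnames cent-2/cent-3 come from /etc/hosts above; nothing is executed here.
SSH_CMDS="ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"
for host in cent-2 cent-3; do
  SSH_CMDS="$SSH_CMDS
ssh-copy-id $host"
done
echo "$SSH_CMDS"   # paste/run these lines on cent-1 once they look right
```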
1. Go into the conf directory
Configure the worker nodes (the slaves file)
cp slaves.template slaves
vim slaves
cent-2
cent-3
Modify spark-env.sh
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
export JAVA_HOME=/usr/local/java/jdk1.8.0_181
export SCALA_HOME=/opt/apps/scala-2.11.8
export HADOOP_HOME=/opt/apps/hadoop-2.7.2
export HADOOP_CONF_DIR=/opt/apps/hadoop-2.7.2/etc/hadoop
export SPARK_CONF_DIR=/opt/apps/spark-2.3.1/conf
export SPARK_EXECUTOR_MEMORY=2g
export SPARK_DRIVER_MEMORY=2g
export SPARK_MASTER_HOST=cent-1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=2g
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=cent-1:2181,cent-2:2181,cent-3:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_CLASSPATH=/opt/apps/hbase-1.1.1/lib/*
export SPARK_DIST_CLASSPATH=$(/opt/apps/hadoop-2.7.2/bin/hadoop classpath):$(/opt/apps/hbase-1.1.1/bin/hbase classpath)
Sync the configured files from the master node to the other worker nodes.
Start the cluster
cd into Spark's sbin directory
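Syncing is typically done with scp (or rsync). This sketch prints the scp commands that would push the conf directory from cent-1 to each worker, as a dry run to review first:

```shell
# Dry-run sketch: build the scp commands that copy the configured conf
# directory to each worker; run the printed lines from cent-1 once reviewed.
SPARK_CONF=/opt/apps/spark-2.3.1/conf   # same path as SPARK_CONF_DIR above
SYNC_CMDS=""
for host in cent-2 cent-3; do
  SYNC_CMDS="$SYNC_CMDS
scp -r $SPARK_CONF/ $host:$SPARK_CONF/"
done
echo "$SYNC_CMDS"
```

This assumes Spark is installed at the same path on every node, which the Standalone scripts expect.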
./start-all.sh
Visit the web UI at http://cent-1:8080
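To confirm the cluster came up, jps on the master should list a Master process and jps on each worker a Worker process, and the web UI should answer on port 8080. A small dry-run helper that assembles those checks without executing them:

```shell
# Dry-run sketch: assemble the post-start health checks: jps on every node,
# then the master web UI. Run the printed lines from cent-1 after reviewing.
CHECKS=""
for host in cent-1 cent-2 cent-3; do
  CHECKS="$CHECKS
ssh $host jps"
done
CHECKS="$CHECKS
curl -s http://cent-1:8080"
echo "$CHECKS"
```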