Main topics: Scala, Spark
Lines outside the dashed separators are commands to run; lines between the ---- markers are the content to write inside the file being edited.
Video:
Manually Building a Big Data Environment [4]: Java, Hadoop, Scala, Spark
[Spark]

mkdir -p /usr/scala

tar -zxvf /opt/soft/scala-2.12.8.tgz -C /usr/scala

vi /etc/profile

---------------------------

#scala

export SCALA_HOME=/usr/scala/scala-2.12.8

export PATH=$SCALA_HOME/bin:$PATH

---------------------------

source /etc/profile

scp -r /usr/scala root@slave1:/usr/

scp -r /usr/scala root@slave2:/usr/

mkdir -p /usr/spark

tar -zxvf /opt/soft/spark-2.4.0-bin-hadoop2.7.tgz -C /usr/spark/

cd /usr/spark/spark-2.4.0-bin-hadoop2.7/conf

conf]# cp spark-env.sh.template spark-env.sh

conf]# leafpad spark-env.sh

---------------------------------------------------------

export SPARK_MASTER_IP=master

export SCALA_HOME=/usr/scala/scala-2.12.8

export SPARK_WORKER_MEMORY=8g

export JAVA_HOME=/usr/java/jdk1.8.0_201

export HADOOP_HOME=/usr/hadoop/hadoop-2.7.3

export HADOOP_CONF_DIR=/usr/hadoop/hadoop-2.7.3/etc/hadoop

---------------------------------------------------------

conf]# cp slaves.template slaves

conf]# vi slaves

----------

slave1

slave2

----------

vim /etc/profile

---------------------------------------------------------

#spark

export SPARK_HOME=/usr/spark/spark-2.4.0-bin-hadoop2.7

export PATH=$SPARK_HOME/bin:$PATH

---------------------------------------------------------

source /etc/profile

scp -r /usr/spark root@slave1:/usr/

scp -r /usr/spark root@slave2:/usr/

/usr/hadoop/hadoop-2.7.3/sbin/start-all.sh

jps

------> master: Jps, NameNode, SecondaryNameNode, ResourceManager

------> slave1/slave2: DataNode, NodeManager, Jps

/usr/spark/spark-2.4.0-bin-hadoop2.7/sbin/start-all.sh

Visit the web UI at http://masterip:8080

spark-shell

pyspark
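To confirm the standalone cluster actually accepts work, a minimal Scala smoke test can be typed into spark-shell. This is a sketch, not part of the original walkthrough: it assumes the master is reachable at the default standalone URL spark://master:7077 (port 7077 is Spark's default), and the numbers are purely illustrative.

spark-shell --master spark://master:7077

---------------------------------------------------------

// `sc` is provided by spark-shell; no imports are needed.
// Spread a small range over 4 partitions so both workers get tasks.
val rdd = sc.parallelize(1 to 1000, 4)

// Aggregate back on the driver; expected output: 500500.0
println(rdd.sum())

// Group by remainder to force a shuffle across the workers.
rdd.map(_ % 10).countByValue().foreach { case (k, v) => println(s"$k -> $v") }

---------------------------------------------------------

While the shell is running, the application should also show up under "Running Applications" on the master web UI (http://masterip:8080), which doubles as a check that the workers registered.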