1. Directory: /opt/hadoop/etc/hadoop
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mip:9000</value>
  </property>
</configuration>
mip: on the master node, mip is the node's own IP; on every worker node, mip is the master node's IP.
9000: both the master and the worker nodes are configured with port 9000.
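Since mip is only a placeholder, it has to be replaced with the real address in each node's config files. A minimal sketch, assuming the master's IP is 192.168.153.146 (the address used for the web UI below); the variable name and file list are illustrative, not from the original notes:

```shell
# Substitute the mip placeholder with the master's IP in the two files
# that reference it (per the notes, mip resolves to the master's IP on
# every node, so the same substitution works cluster-wide).
MASTER_IP=192.168.153.146
sed -i "s/mip/${MASTER_IP}/g" \
    /opt/hadoop/etc/hadoop/core-site.xml \
    /opt/hadoop/etc/hadoop/yarn-site.xml
```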
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>file:///data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/dn</value>
  </property>
</configuration>
dfs.nameservices: in a fully distributed cluster, this value must be the same on every node.
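To confirm the value actually in effect on a given node, Hadoop's getconf tool can be used; a sketch, assuming Hadoop's bin directory is on the PATH:

```shell
# Print the effective value on this node; it should be identical on
# every host in the cluster.
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.replication
```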
mapred-site.xml
<configuration>
  <property>
    <!-- Run MapReduce on YARN -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <!-- Address of the ResourceManager -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>mip</value>
  </property>
  <!-- How reducers fetch data -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///data/hadoop/yarn/nm</value>
  </property>
</configuration>
Create the directories
mkdir -p /data/hadoop/hdfs/nn
mkdir -p /data/hadoop/hdfs/dn
mkdir -p /data/hadoop/hdfs/snn
mkdir -p /data/hadoop/yarn/nm
Be sure to set the permissions: chmod -R 777 /data (wide open, but sufficient for a test cluster).
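Before the daemons are started for the first time, the NameNode metadata directory (the nn directory created above) has to be formatted once on the master; a sketch, assuming hdfs is on the PATH:

```shell
# Run once, on the master only; re-running this erases all HDFS metadata.
hdfs namenode -format
```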
Start HDFS: ./hadoop-daemon.sh start namenode
Start YARN: ./yarn-daemon.sh start resourcemanager
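The two commands above only start the master-side daemons. On each worker node, the corresponding worker daemons need to be started as well; a sketch using the same per-daemon Hadoop 2.x scripts as above:

```shell
# On every worker node:
./hadoop-daemon.sh start datanode
./yarn-daemon.sh start nodemanager

# On any node, list the running Hadoop JVMs to verify startup
# (master should show NameNode/ResourceManager, workers DataNode/NodeManager):
jps
```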
NameNode web UI: http://192.168.153.146:50070
If the page fails to connect: open My Computer → Manage → Services, find the Windows Firewall service, and stop it.
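Before disabling the firewall, it can help to check from a shell whether the port is reachable at all; a sketch using curl against the address above (50070 is the Hadoop 2.x NameNode web UI port):

```shell
# Print only the HTTP status code: 200 means the UI is reachable,
# while a timeout suggests a firewall or network problem.
curl -m 5 -s -o /dev/null -w "%{http_code}\n" http://192.168.153.146:50070
```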