1. Extract hadoop-1.0.3-bin.tar.gz to the target directory.
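For example (a minimal sketch assuming /usr/local as the target; any path works as long as it matches the paths used in the configs below):
tar -xzf hadoop-1.0.3-bin.tar.gz -C /usr/local
ln -s /usr/local/hadoop-1.0.3 /usr/local/hadoop   # optional symlink so the /usr/local/hadoop paths below resolve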
2. Install the Java environment (refer to the separate documentation).
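A quick sanity check that the JDK is on the PATH:
java -version   # should print the installed JDK version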
3. Set up passwordless SSH login.
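A common way to set this up for the local account (a sketch; assumes OpenSSH with sshd running):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa          # key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys                  # sshd refuses overly permissive auth files
ssh localhost                                     # should now log in without a password prompt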
4. Modify the following four files under conf/:
hadoop-env.sh:
export JAVA_HOME=/usr/local/jdk.....    # point this at your actual JDK install directory
core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
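Formatting creates the dfs.name.dir directories, but it is worth making sure the paths exist and are writable by the user running Hadoop (a sketch using the /usr/local/hadoop prefix from the values above):
mkdir -p /usr/local/hadoop/datalog1 /usr/local/hadoop/datalog2
mkdir -p /usr/local/hadoop/data1 /usr/local/hadoop/data2
chown -R $(whoami) /usr/local/hadoop              # the daemons must be able to write here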
Open the conf/masters file and add the hostname of the machine that will act as the SecondaryNameNode. In a single-machine setup, localhost is all that is needed.
Open the conf/slaves file and add the slave hostnames, one per line. In a single-machine setup, localhost is enough here as well (see the sketch below).
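Equivalently, from the Hadoop root directory:
echo localhost > conf/masters    # SecondaryNameNode host
echo localhost > conf/slaves     # slave hosts, one per line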
5. Format HDFS and start the daemons:
./bin/hadoop namenode -format    # one-time format of the HDFS filesystem
./bin/start-all.sh               # start-all.sh lives in bin/, not the Hadoop root
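If everything came up, jps (shipped with the JDK) should list the five Hadoop 1.x daemons, and the default web UIs give a second check:
jps   # expect NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker
# NameNode web UI:   http://localhost:50070
# JobTracker web UI: http://localhost:50030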
6. Reference: http://weixiaolu.iteye.com/blog/1401931