Table of Contents
1. Foreword
2. Installing Dependencies
2.1. Installing ProtocolBuffer
2.2. Installing CMake
2.3. Installing JDK
2.4. Installing Maven
3. Compiling the Hadoop Source Code
Appendix 1: Building Without Internet Access
Appendix 2: Build Environment
Appendix 3: Version Information
Appendix 4: Common Errors
Appendix 5: Related Documents
1. Foreword
The Hadoop 2.4.0 source tree includes a BUILDING.txt file that describes how to compile the source code on Linux and Windows. This article largely follows the instructions in BUILDING.txt and distills the procedure.
The first build requires Internet access: Hadoop's build pulls in a great many dependencies, and without network access it is hard to resolve every build problem one by one. Builds after the first no longer need to download anything.
2. Installing Dependencies
Before compiling the Hadoop 2.4.0 source code, install the following dependencies:
1) JDK 1.6 or newer (this article uses JDK 1.7; do not install JDK 1.8, which is incompatible with Hadoop 2.4.0 and produces many errors when compiling the source)
2) Maven 3.0 or newer
3) ProtocolBuffer 2.5.0
4) CMake 2.6 or newer
5) FindBugs 1.3.9, optional (not installed for this article's build)
Once they are installed, set the environment variables. You can edit either /etc/profile or ~/.profile, adding the following:
export JAVA_HOME=/root/jdk
export CLASSPATH=$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
export CMAKE_HOME=/root/cmake
export PATH=$CMAKE_HOME/bin:$PATH
export PROTOC_HOME=/root/protobuf
export PATH=$PROTOC_HOME/bin:$PATH
export MAVEN_HOME=/root/maven
export PATH=$MAVEN_HOME/bin:$PATH
This article installs everything as the root user under /root, but a non-root user and a different directory work just as well.
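The new variables only take effect for new login shells. To apply them to the current shell right away and sanity-check them, something like this works:
source /etc/profile   # or: source ~/.profile, whichever file was edited
echo $JAVA_HOME $CMAKE_HOME $PROTOC_HOME $MAVEN_HOME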
2.1. Installing ProtocolBuffer
The standard automake-style configure/make/make install procedure (a quick verification sketch follows the steps):
1) cd /root
2) tar xzf protobuf-2.5.0.tar.gz
3) cd protobuf-2.5.0
4) ./configure --prefix=/root/protobuf
5) make
6) make install
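Building ProtocolBuffer (and CMake below) from source needs a working C/C++ toolchain. If ./configure or make fails for lack of a compiler, install the toolchain first; a minimal sketch for a yum-based system such as the CentOS 5.8 host in Appendix 2 (package names assumed, adjust for your distribution), followed by a version check:
yum install -y gcc gcc-c++ make       # assumed package names for CentOS
/root/protobuf/bin/protoc --version   # should print: libprotoc 2.5.0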
2.2. Installing CMake
1) cd /root
2) tar xzf cmake-2.8.12.2.tar.gz
3) cd cmake-2.8.12.2
4) ./bootstrap --prefix=/root/cmake
5) make
6) make install
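A quick check that the freshly built CMake is the one found on PATH (assuming the exports from section 2 are in effect):
cmake --version   # should report cmake version 2.8.12.2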
2.3. Installing JDK
1) cd /root
2) tar xzf jdk-7u55-linux-x64.gz
3) ln -s jdk1.7.0_55 jdk
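Because JAVA_HOME points at /root/jdk, the symlink must be created in /root itself, not inside the extracted directory. A quick sanity check:
ls -ld /root/jdk              # should show: /root/jdk -> jdk1.7.0_55
/root/jdk/bin/java -version   # should report java version "1.7.0_55"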
2.4. Installing Maven
1) cd /root
2) tar xzf apache-maven-3.0.5-bin.tar.gz
3) ln -s apache-maven-3.0.5 maven
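mvn -version conveniently confirms both the Maven version and the JDK that Maven has picked up:
mvn -version   # should report Apache Maven 3.0.5 and Java version: 1.7.0_55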
3. Compiling the Hadoop Source Code
With the preparation above complete, start the build of the Hadoop source code with: mvn package -Pdist -DskipTests -Dtar. Once more: do not use JDK 1.8.
To build the native libraries (Native Libraries) as well, use: mvn package -Pdist,native -DskipTests -Dtar. If C/C++ programs need to access HDFS and the like, the native build is required to generate the corresponding library files. You can also build just the native libraries with: mvn package -Pnative -DskipTests -Dtar.
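After a native build, one way to confirm that the native libraries were actually produced is to inspect the lib/native directory of the generated distribution (path as produced by the dist profile):
ls hadoop-dist/target/hadoop-2.4.0/lib/native/
file hadoop-dist/target/hadoop-2.4.0/lib/native/libhadoop.so*   # should identify 64-bit ELF shared objects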
Other related build commands, as described in BUILDING.txt:
1) mvn package -Pdist -DskipTests -Dtar (binary distribution, without native code or documentation)
2) mvn package -Pdist,native,docs,src -DskipTests -Dtar (binary and source distributions, with native code and documentation)
3) mvn package -Psrc -DskipTests (source distribution only)
4) mvn package -Pdist,native,docs -DskipTests -Dtar (binary distribution with native code and documentation)
5) mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site (a local staging copy of the project website)
After a successful build, the jar files are placed under the various target subdirectories; a find command run from the top of the Hadoop source tree will locate them, as shown below.
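For example, run from the top of the source tree (the directory used throughout this article):
cd /root/hadoop-2.4.0-src
find . -name "*.jar" -path "*/target/*" | head -20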
A successful build also produces the Hadoop binary package hadoop-2.4.0.tar.gz, placed under the hadoop-dist/target subdirectory of the source tree:
main:
     [exec] $ tar cf hadoop-2.4.0.tar hadoop-2.4.0
     [exec] $ gzip -f hadoop-2.4.0.tar
     [exec]
     [exec] Hadoop dist tar available at: /root/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0.tar.gz
     [exec]
[INFO] Executed tasks
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /root/hadoop-2.4.0-src/hadoop-dist/target/hadoop-dist-2.4.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [4.647s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [5.352s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [7.239s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.424s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [2.918s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [6.261s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [5.321s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [5.953s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [3.783s]
[INFO] Apache Hadoop Common .............................. SUCCESS [1:54.010s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [9.721s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.048s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [4:15.270s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [6:18.553s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [16.237s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [6.543s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.036s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.051s]
[INFO] hadoop-yarn-api ................................... SUCCESS [1:35.227s]
[INFO] hadoop-yarn-common ................................ SUCCESS [43.216s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.055s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [16.476s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [19.942s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [4.926s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [9.804s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [23.320s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [1.208s]
[INFO] hadoop-yarn-client ................................ SUCCESS [9.177s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.113s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [4.106s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [3.265s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.056s]
[INFO] hadoop-yarn-project ............................... SUCCESS [5.552s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.096s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [37.231s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [27.135s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [4.886s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [17.876s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [14.140s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [11.305s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [3.083s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [9.855s]
[INFO] hadoop-mapreduce .................................. SUCCESS [5.110s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [7.778s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [12.973s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [3.265s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [11.060s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [7.412s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [4.221s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [4.771s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [0.032s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [8.030s]
[INFO] Apache Hadoop Client .............................. SUCCESS [7.730s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.158s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [7.485s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [6.912s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.029s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [40.425s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 21:57.892s
[INFO] Finished at: Mon Apr 21 14:33:22 CST 2014
[INFO] Final Memory: 88M/243M
[INFO] ------------------------------------------------------------------------
Appendix 1: Building Without Internet Access
Building Hadoop 2.4.0 without Internet access is a very complicated undertaking; it was achievable with earlier versions of Hadoop, but for 2.4.0 it is rather difficult.
There is a workaround, though: find a machine that can reach the Internet, complete one successful build there, then pack up the source directory and copy it to the machine without network access. Take care to keep the directory path identical on both machines and to run the same build command.
Why must the directories be identical? Suppose the build was done under /root/hadoop-2.4.0-src on the connected machine. Enter /root/hadoop-2.4.0-src and run: find . -name "*.xml" | xargs grep "/root/". As the listing below shows, "/root/" has been written into many generated XML files; this is the root cause of the renewed downloading when the path changes. Alternatively, these paths can be replaced with the target machine's actual directory (see the sketch after the listing), which also allows an offline build.
find . -name "*.xml" | xargs grep "/root/"
./hadoop-tools/hadoop-datajoin/target/antrun/build-main.xml:    <delete dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-datajoin/target/test-dir"/>
./hadoop-tools/hadoop-datajoin/target/antrun/build-main.xml:    <mkdir dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-datajoin/target/test-dir"/>
./hadoop-tools/hadoop-datajoin/target/antrun/build-main.xml:    <mkdir dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-datajoin/target/log"/>
./hadoop-tools/hadoop-extras/target/antrun/build-main.xml:    <delete dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-extras/target/test-dir"/>
./hadoop-tools/hadoop-extras/target/antrun/build-main.xml:    <mkdir dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-extras/target/test-dir"/>
./hadoop-tools/hadoop-extras/target/antrun/build-main.xml:    <mkdir dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-extras/target/log"/>
./hadoop-tools/hadoop-gridmix/target/antrun/build-main.xml:    <delete dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-gridmix/target/test-dir"/>
./hadoop-tools/hadoop-gridmix/target/antrun/build-main.xml:    <mkdir dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-gridmix/target/test-dir"/>
./hadoop-tools/hadoop-gridmix/target/antrun/build-main.xml:    <mkdir dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-gridmix/target/log"/>
./hadoop-tools/hadoop-openstack/target/antrun/build-main.xml:    <mkdir dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-openstack/target/test-dir"/>
./hadoop-tools/hadoop-openstack/target/antrun/build-main.xml:    <mkdir dir="/root/hadoop-2.4.0-src/hadoop-tools/hadoop-openstack/target/test-dir"/>
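If the offline machine cannot use the same directory, the recorded paths can be rewritten instead. A sketch, assuming the source tree lives under /data/hadoop-2.4.0-src on the target machine (that path is only an example):
find . -name "*.xml" -path "*/target/*" -print0 \
  | xargs -0 sed -i 's|/root/hadoop-2.4.0-src|/data/hadoop-2.4.0-src|g'
Maven's offline switch (mvn -o package ...) can additionally keep the build from even attempting to reach the network.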
Appendix 2: Build Environment
The entire process was carried out on a 64-bit Aliyun host with a single 2.30GHz core and 1GB of RAM:
[root@AY140408105805619186Z hadoop-2.4.0-src]# uname -a
Linux AY140408105805619186Z 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@AY140408105805619186Z ~]# cat /etc/redhat-release
CentOS release 5.8 (Final)
Appendix 3: Version Information
Name             | Version  | Package                       | Notes
---------------- | -------- | ----------------------------- | ------------------------------
Maven            | 3.0.5    | apache-maven-3.0.5-bin.tar.gz | Maven 3.2.1 may cause problems
CMake            | 2.8.12.2 | cmake-2.8.12.2.tar.gz         |
JDK              | 1.7.0    | jdk-7u55-linux-x64.gz         | JDK 1.8.0 cannot be used
Protocol Buffers | 2.5.0    | protobuf-2.5.0.tar.gz         |
Hadoop           | 2.4.0    | hadoop-2.4.0-src.tar.gz       |
Appendix 4: Common Errors
1) unexpected end tag: </ul>
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-annotations: MavenReportException: Error while creating archive:
[ERROR] Exit code: 1 - /root/hadoop-2.4.0-src/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java:27: error: unexpected end tag: </ul>
[ERROR] * </ul>
[ERROR] ^
[ERROR]
[ERROR] Command line was: /root/jdk1.8.0/jre/../bin/javadoc @options @packages
The cause is the Javadoc comments in InterfaceStability.java: JDK 1.8's javadoc checks the HTML in comments far more strictly, and this error appears only when compiling with JDK 1.8.
The fix is to switch back to JDK 1.7. Deleting the </ul> line makes this particular error go away, but similar problems keep surfacing afterwards, so do not compile Hadoop 2.4.0 with JDK 1.8.
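If you do hit this error, first confirm which JDK the build is actually using, then point JAVA_HOME back at JDK 1.7 (paths as set up in section 2):
which javadoc   # reveals whose javadoc is on PATH
java -version   # must report 1.7.0_55 here, not 1.8
export JAVA_HOME=/root/jdk         # /root/jdk is the symlink to jdk1.7.0_55 from section 2.3
export PATH=$JAVA_HOME/bin:$PATH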
Appendix 5: Related Documents
"HBase 0.98.0 Distributed Installation Guide"
"Hive 0.12.0 Installation Guide"
"ZooKeeper 3.4.6 Distributed Installation Guide"
"Hadoop 2.3.0 Source Code Reverse Engineering"
"Compiling Hadoop 2.4.0 on Linux"
"Accumulo 1.5.1 Installation Guide"
"Drill 1.0.0 Installation Guide"
"Shark 0.9.1 Installation Guide"
For more, follow the author's technical blog: http://aquester.cublog.cn.