• [Hadoop] A Detailed Walkthrough of Installing Sqoop


    Sqoop is a tool for transferring data between Hadoop and relational databases: it can import data from a relational database (e.g. MySQL, Oracle, Postgres) into HDFS, and it can also export data from HDFS back into a relational database.
    Sqoop official release: http://apache.dataguru.cn/sqoop/1.4.2/
    Sqoop CDH release: http://archive.cloudera.com/cdh/3/sqoop-1.2.0-CDH3B4.tar.gz
    Hadoop CDH release: http://archive.cloudera.com/cdh/3/hadoop-0.20.2-CDH3B4.tar.gz

    Hadoop-0.20.2 was installed earlier. The official Sqoop release does not support that Hadoop version, but the CDH3 build (download link above) does. For testing convenience, you can copy the required jars into sqoop-1.2.0-CDH3B4/lib and keep running Hadoop-0.20.2.


    Sqoop version: sqoop-1.2.0-CDH3B4
    Hadoop version: 0.20.2
    MySQL version: 5.6.11

    1) Extract the Sqoop tarball

    [hadoop@node01 ~]$ tar -xzvf sqoop-1.2.0-CDH3B4.tar.gz
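
    Later steps invoke sqoop directly from the hadoop user's home directory, so it is convenient to put Sqoop's bin directory on the PATH first (a sketch assuming bash and the directory layout used in this article):

    [hadoop@node01 ~]$ echo 'export SQOOP_HOME=/home/hadoop/sqoop-1.2.0-CDH3B4' >> ~/.bash_profile
    [hadoop@node01 ~]$ echo 'export PATH=$PATH:$SQOOP_HOME/bin' >> ~/.bash_profile
    [hadoop@node01 ~]$ source ~/.bash_profile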

    2) sqoop-1.2.0-CDH3B4 depends on hadoop-core-0.20.2-CDH3B4.jar, so download hadoop-0.20.2-CDH3B4.tar.gz, extract it, and copy hadoop-0.20.2-CDH3B4/hadoop-core-0.20.2-CDH3B4.jar into sqoop-1.2.0-CDH3B4/lib.


    [hadoop@node01 ~]$ cp hadoop-core-0.20.2-CDH3B4.jar sqoop-1.2.0-CDH3B4/lib
    [hadoop@node01 ~]$ ls -l sqoop-1.2.0-CDH3B4/lib/hadoop-core-0.20.2-CDH3B4.jar
    -rw-r--r--. 1 hadoop root 3452461 May  9 05:40 sqoop-1.2.0-CDH3B4/lib/hadoop-core-0.20.2-CDH3B4.jar


    3) Sqoop also depends on mysql-connector-java-*.jar when importing from MySQL, so download mysql-connector-java-*.jar and copy it into sqoop-1.2.0-CDH3B4/lib as well.

    [hadoop@node01 ~]$ cp mysql-connector-java-5.1.24-bin.jar sqoop-1.2.0-CDH3B4/lib
    [hadoop@node01 ~]$ ls -l sqoop-1.2.0-CDH3B4/lib/mysql-connector-java-5.1.24-bin.jar
    -rw-r--r--. 1 hadoop root 846263 May  9 05:43 sqoop-1.2.0-CDH3B4/lib/mysql-connector-java-5.1.24-bin.jar


    4) Edit Sqoop's configure-sqoop script and comment out the HBase and ZooKeeper checks in their entirety, including the echo and exit lines, as shown below (unless you plan to use HBase or other such components on top of Hadoop); otherwise the script will fail these checks and refuse to run.

    [hadoop@node01 bin]$ pwd
    /home/hadoop/sqoop-1.2.0-CDH3B4/bin
    [hadoop@node01 bin]$ vi configure-sqoop


    #if [ -z "${HBASE_HOME}" ]; then
    #  HBASE_HOME=/usr/lib/hbase
    #fi
    #if [ -z "${ZOOKEEPER_HOME}" ]; then
    #  ZOOKEEPER_HOME=/usr/lib/zookeeper
    #fi


    #if [ ! -d "${HBASE_HOME}" ]; then
    #  echo "Error: $HBASE_HOME does not exist!"
    #  echo 'Please set $HBASE_HOME to the root of your HBase installation.'
    #  exit 1
    #fi
    #if [ ! -d "${ZOOKEEPER_HOME}" ]; then
    #  echo "Error: $ZOOKEEPER_HOME does not exist!"
    #  echo 'Please set $ZOOKEEPER_HOME to the root of your ZooKeeper installation.'
    #  exit 1
    #fi
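
    Alternatively, since the original checks only test whether those directories exist, you could leave configure-sqoop untouched and point both variables at any existing directory before running sqoop (a workaround sketch; /tmp is just a placeholder):

    [hadoop@node01 bin]$ export HBASE_HOME=/tmp
    [hadoop@node01 bin]$ export ZOOKEEPER_HOME=/tmp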


    5) Start Hadoop
    [hadoop@node01 bin]$ start-all.sh
    [hadoop@node01 bin]$ jps
    2732 Jps
    2478 NameNode
    2665 JobTracker
    2600 SecondaryNameNode
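
    DataNode and TaskTracker presumably run on the worker nodes in this setup, which is why they do not show up in jps on node01. Before importing, a quick sanity check that HDFS is up (standard Hadoop 0.20.2 commands):

    [hadoop@node01 bin]$ hadoop dfsadmin -report
    [hadoop@node01 bin]$ hadoop dfs -ls /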


    6) Import data from MySQL into HDFS

    (1) Create a test database sqooptest in MySQL
    [hadoop@node01 ~]$ mysql -u root -p
    mysql> create database sqooptest;
    Query OK, 1 row affected (0.01 sec)


    (2) Create a dedicated sqoop user
    mysql> create user 'sqoop' identified by 'sqoop';
    Query OK, 0 rows affected (0.00 sec)

    mysql> grant all privileges on *.* to 'sqoop' with grant option;
    Query OK, 0 rows affected (0.00 sec)

    mysql> flush privileges;
    Query OK, 0 rows affected (0.00 sec)
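
    Granting all privileges on *.* with grant option is convenient on a throwaway test box, but it is far broader than Sqoop needs; a narrower grant limited to the test database would look like this (a sketch, using the database created in the previous step):

    mysql> grant all privileges on sqooptest.* to 'sqoop'@'%';
    mysql> flush privileges;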


    (3) Generate some test data
    mysql> use sqooptest;
    Database changed
    mysql> create table tb1 as select table_schema,table_name,table_type from information_schema.TABLES;
    Query OK, 154 rows affected (0.28 sec)
    Records: 154  Duplicates: 0  Warnings: 0
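
    A quick count confirms the test data is in place; it should report the same 154 rows:

    mysql> select count(*) from tb1;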


    (4) Test the connection between Sqoop and MySQL
    [hadoop@node01 ~]$ sqoop list-databases --connect jdbc:mysql://node01:3306/ --username sqoop --password sqoop
    13/05/09 06:15:01 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
    13/05/09 06:15:01 INFO manager.MySQLManager: Executing SQL statement: SHOW DATABASES
    information_schema
    hive
    mysql
    performance_schema
    sqooptest
    test
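
    As the warning in the output suggests, -P can replace --password so that Sqoop prompts for the password instead of leaving it in the shell history:

    [hadoop@node01 ~]$ sqoop list-databases --connect jdbc:mysql://node01:3306/ --username sqoop -P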


    (5) Import the data from MySQL into HDFS
    [hadoop@node01 ~]$ sqoop import --connect jdbc:mysql://node01:3306/sqooptest --username sqoop --password sqoop --table tb1 -m 1
    13/05/09 06:16:39 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
    13/05/09 06:16:39 INFO tool.CodeGenTool: Beginning code generation
    13/05/09 06:16:39 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `tb1` AS t LIMIT 1
    13/05/09 06:16:39 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `tb1` AS t LIMIT 1
    13/05/09 06:16:39 INFO orm.CompilationManager: HADOOP_HOME is /home/hadoop/hadoop-0.20.2/bin/..
    13/05/09 06:16:39 INFO orm.CompilationManager: Found hadoop core jar at: /home/hadoop/hadoop-0.20.2/bin/../hadoop-0.20.2-core.jar
    13/05/09 06:16:42 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/4175ce59fd53eb3de75875cfd3bd450b/tb1.jar
    13/05/09 06:16:42 WARN manager.MySQLManager: It looks like you are importing from mysql.
    13/05/09 06:16:42 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
    13/05/09 06:16:42 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
    13/05/09 06:16:42 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
    13/05/09 06:16:42 INFO mapreduce.ImportJobBase: Beginning import of tb1
    13/05/09 06:16:43 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `tb1` AS t LIMIT 1
    13/05/09 06:16:45 INFO mapred.JobClient: Running job: job_201305090600_0001
    13/05/09 06:16:46 INFO mapred.JobClient:  map 0% reduce 0%
    13/05/09 06:17:01 INFO mapred.JobClient:  map 100% reduce 0%
    13/05/09 06:17:03 INFO mapred.JobClient: Job complete: job_201305090600_0001
    13/05/09 06:17:03 INFO mapred.JobClient: Counters: 5
    13/05/09 06:17:03 INFO mapred.JobClient:   Job Counters
    13/05/09 06:17:03 INFO mapred.JobClient:     Launched map tasks=1
    13/05/09 06:17:03 INFO mapred.JobClient:   FileSystemCounters
    13/05/09 06:17:03 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=7072
    13/05/09 06:17:03 INFO mapred.JobClient:   Map-Reduce Framework
    13/05/09 06:17:03 INFO mapred.JobClient:     Map input records=154
    13/05/09 06:17:03 INFO mapred.JobClient:     Spilled Records=0
    13/05/09 06:17:03 INFO mapred.JobClient:     Map output records=154
    13/05/09 06:17:03 INFO mapreduce.ImportJobBase: Transferred 6.9062 KB in 19.9871 seconds (353.8277 bytes/sec)
    13/05/09 06:17:03 INFO mapreduce.ImportJobBase: Retrieved 154 records.
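
    The -m 1 flag runs the import with a single map task. tb1 was created with CREATE TABLE ... AS SELECT, so it has no primary key, and with more than one mapper Sqoop would need an explicit --split-by column to partition the rows, roughly like this (an untested sketch; splits work best on an evenly distributed numeric key, which tb1 lacks):

    [hadoop@node01 ~]$ sqoop import --connect jdbc:mysql://node01:3306/sqooptest --username sqoop -P --table tb1 -m 2 --split-by table_name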


    (6) View the newly imported data on HDFS
    [hadoop@node01 ~]$ hadoop dfs -ls tb1
    Found 2 items
    drwxr-xr-x   - hadoop supergroup          0 2013-05-09 06:16 /user/hadoop/tb1/_logs
    -rw-r--r--   2 hadoop supergroup       7072 2013-05-09 06:16 /user/hadoop/tb1/part-m-00000
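
    To look at the imported rows themselves (Sqoop writes comma-delimited text files by default):

    [hadoop@node01 ~]$ hadoop dfs -cat tb1/part-m-00000 | head -3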
