【Sqoop Learning Part 2】Sqoop Usage


    Environment
      sqoop-1.4.6

    I. Basic commands
    1. The help command

    [root@node101 ~]# sqoop help
    Warning: /usr/local/sqoop-1.4.6/../hbase does not exist! HBase imports will fail.
    Please set $HBASE_HOME to the root of your HBase installation.
    Warning: /usr/local/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
    Please set $HCAT_HOME to the root of your HCatalog installation.
    Warning: /usr/local/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
    Please set $ACCUMULO_HOME to the root of your Accumulo installation.
    Warning: /usr/local/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
    Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
    19/07/02 10:29:51 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
    usage: sqoop COMMAND [ARGS]
    
    Available commands:
    codegen            Generate code to interact with database records
    create-hive-table  Import a table definition into Hive
    eval               Evaluate a SQL statement and display the results
    export             Export an HDFS directory to a database table
    help               List available commands
    import             Import a table from a database to HDFS
    import-all-tables  Import tables from a database to HDFS
    import-mainframe   Import datasets from a mainframe server to HDFS
    job                Work with saved jobs
    list-databases     List available databases on a server
    list-tables        List available tables in a database
    merge              Merge results of incremental imports
    metastore          Run a standalone Sqoop metastore
    version            Display version information
    
    See 'sqoop help COMMAND' for information on a specific command.

    To view the help for a specific command, e.g. import:

    [root@node101 ~]# sqoop help import 
    Warning: /usr/local/sqoop-1.4.6/../hbase does not exist! HBase imports will fail.
    Please set $HBASE_HOME to the root of your HBase installation.
    Warning: /usr/local/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
    Please set $HCAT_HOME to the root of your HCatalog installation.
    Warning: /usr/local/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
    Please set $ACCUMULO_HOME to the root of your Accumulo installation.
    Warning: /usr/local/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
    Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
    19/07/02 10:30:14 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
    usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]
    
    Common arguments:
    --connect <jdbc-uri>                          Specify JDBC connect string
    --connection-manager <class-name>             Specify connection manager class name
    --connection-param-file <properties-file>     Specify connection parameters file
    --driver <class-name>                         Manually specify JDBC driver class to use
    --hadoop-home <hdir>                          Override $HADOOP_MAPRED_HOME_ARG
    --hadoop-mapred-home <dir>                    Override $HADOOP_MAPRED_HOME_ARG
    --help                                        Print usage instructions
    -P                                            Read password from console
    --password <password>                         Set authentication password
    --password-alias <password-alias>             Credential provider password alias
    --password-file <password-file>               Set authentication password file path
    --relaxed-isolation                           Use read-uncommitted isolation for imports
    --skip-dist-cache                             Skip copying jars to distributed cache
    --username <username>                         Set authentication username
    --verbose                                     Print more information while working
    
    Import control arguments:
    --append                                      Imports data in append mode
    --as-avrodatafile                             Imports data to Avro data files
    --as-parquetfile                              Imports data to Parquet files
    --as-sequencefile                             Imports data to SequenceFiles
    --as-textfile                                 Imports data as plain text (default)
    --autoreset-to-one-mapper                     Reset the number of mappers to one mapper if no split key available
    --boundary-query <statement>                  Set boundary query for retrieving max and min value of the primary key
    --columns <col,col,col...>                    Columns to import from table
    --compression-codec <codec>                   Compression codec to use for import
    --delete-target-dir                           Imports data in delete mode
    --direct                                      Use direct import fast path
    --direct-split-size <n>                       Split the input stream every 'n' bytes when importing in direct mode
    -e,--query <statement>                        Import results of SQL 'statement'
    --fetch-size <n>                              Set number 'n' of rows to fetch from the database when more rows are needed
    --inline-lob-limit <n>                        Set the maximum size for an inline LOB
    -m,--num-mappers <n>                          Use 'n' map tasks to import in parallel
    --mapreduce-job-name <name>                   Set name for generated mapreduce job
    --merge-key <column>                          Key column to use to join results
    --split-by <column-name>                      Column of the table used to split work units
    --table <table-name>                          Table to read
    --target-dir <dir>                            HDFS plain table destination
    --validate                                    Validate the copy using the configured validator
    --validation-failurehandler <validation-failurehandler>  Fully qualified class name for ValidationFailureHandler
    --validation-threshold <validation-threshold>             Fully qualified class name for ValidationThreshold
    --validator <validator>                       Fully qualified class name for the Validator
    --warehouse-dir <dir>                         HDFS parent for table destination
    --where <where clause>                        WHERE clause to use during import
    -z,--compress                                 Enable compression
    
    Incremental import arguments:
    --check-column <column>       Source column to check for incremental change
    --incremental <import-type>   Define an incremental import of type 'append' or 'lastmodified'
    --last-value <value>          Last imported value in the incremental check column
    
    Output line formatting arguments:
    --enclosed-by <char>               Sets a required field enclosing character
    --escaped-by <char>                Sets the escape character
    --fields-terminated-by <char>      Sets the field separator character
    --lines-terminated-by <char>       Sets the end-of-line character
    --mysql-delimiters                 Uses MySQL's default delimiter set: fields: ,  lines: \n  escaped-by: \  optionally-enclosed-by: '
    --optionally-enclosed-by <char>    Sets a field enclosing character
    
    Input parsing arguments:
    --input-enclosed-by <char>               Sets a required field encloser
    --input-escaped-by <char>                Sets the input escape character
    --input-fields-terminated-by <char>      Sets the input field separator
    --input-lines-terminated-by <char>       Sets the input end-of-line char
    --input-optionally-enclosed-by <char>    Sets a field enclosing character
    
    Hive arguments:
    --create-hive-table                          Fail if the target hive table exists
    --hive-database <database-name>              Sets the database name to use when importing to hive
    --hive-delims-replacement <arg>              Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields with user-defined string
    --hive-drop-import-delims                    Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields
    --hive-home <dir>                            Override $HIVE_HOME
    --hive-import                                Import tables into Hive (Uses Hive's default delimiters if none are set.)
    --hive-overwrite                             Overwrite existing data in the Hive table
    --hive-partition-key <partition-key>         Sets the partition key to use when importing to hive
    --hive-partition-value <partition-value>     Sets the partition value to use when importing to hive
    --hive-table <table-name>                    Sets the table name to use when importing to hive
    --map-column-hive <arg>                      Override mapping for specific column to hive types.
    
    HBase arguments:
    --column-family <family>    Sets the target column family for the import
    --hbase-bulkload            Enables HBase bulk loading
    --hbase-create-table        If specified, create missing HBase tables
    --hbase-row-key <col>       Specifies which input column to use as the row key
    --hbase-table <table>       Import to <table> in HBase
    
    HCatalog arguments:
    --hcatalog-database <arg>                        HCatalog database name
    --hcatalog-home <hdir>                           Override $HCAT_HOME
    --hcatalog-partition-keys <partition-key>        Sets the partition keys to use when importing to hive
    --hcatalog-partition-values <partition-value>    Sets the partition values to use when importing to hive
    --hcatalog-table <arg>                           HCatalog table name
    --hive-home <dir>                                Override $HIVE_HOME
    --hive-partition-key <partition-key>             Sets the partition key to use when importing to hive
    --hive-partition-value <partition-value>         Sets the partition value to use when importing to hive
    --map-column-hive <arg>                          Override mapping for specific column to hive types.
    
    HCatalog import specific options:
    --create-hcatalog-table            Create HCatalog before import
    --hcatalog-storage-stanza <arg>    HCatalog storage stanza for table creation
    
    Accumulo arguments:
    --accumulo-batch-size <size>          Batch size in bytes
    --accumulo-column-family <family>     Sets the target column family for the import
    --accumulo-create-table               If specified, create missing Accumulo tables
    --accumulo-instance <instance>        Accumulo instance name.
    --accumulo-max-latency <latency>      Max write latency in milliseconds
    --accumulo-password <password>        Accumulo password.
    --accumulo-row-key <col>              Specifies which input column to use as the row key
    --accumulo-table <table>              Import to <table> in Accumulo
    --accumulo-user <user>                Accumulo user name.
    --accumulo-visibility <vis>           Visibility token to be applied to all rows imported
    --accumulo-zookeepers <zookeepers>    Comma-separated list of zookeepers (host:port)
    
    Code generation arguments:
    --bindir <dir>                        Output directory for compiled objects
    --class-name <name>                   Sets the generated class name. This overrides --package-name. When combined with --jar-file, sets the input class.
    --input-null-non-string <null-str>    Input null non-string representation
    --input-null-string <null-str>        Input null string representation
    --jar-file <file>                     Disable code generation; use specified jar
    --map-column-java <arg>               Override mapping for specific columns to java types
    --null-non-string <null-str>          Null non-string representation
    --null-string <null-str>              Null string representation
    --outdir <dir>                        Output directory for generated code
    --package-name <name>                 Put auto-generated classes in this package
    
    Generic Hadoop command-line arguments:
    (must preceed any tool-specific arguments)
    Generic options supported are
    -conf <configuration file> specify an application configuration file
    -D <property=value> use value for given property
    -fs <local|namenode:port> specify a namenode
    -jt <local|resourcemanager:port> specify a ResourceManager
    -files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
    -libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
    -archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
    
    The general command line syntax is
    bin/hadoop command [genericOptions] [commandOptions]
    
    
    At minimum, you must specify --connect and --table
    Arguments to mysqldump and other subprograms may be supplied
    after a '--' on the command line.
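
    The incremental import arguments listed above combine with an ordinary import as follows. A minimal sketch, not from the original post; the check column, --last-value, and target directory are assumed for illustration.

    # Incremental append: each run imports only rows whose help_keyword_id
    # is greater than --last-value (values here are assumed)
    sqoop import \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --table help_keyword \
    --check-column help_keyword_id \
    --incremental append \
    --last-value 500 \
    --target-dir /sqoop/mysql/help_keyword_incr \
    -m 1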

    2. List the databases on the MySQL server

    [root@node101 ~]# sqoop list-databases --connect jdbc:mysql://node102:3306/ --username root --password 123456
    Warning: /usr/local/sqoop-1.4.6/../hbase does not exist! HBase imports will fail.
    Please set $HBASE_HOME to the root of your HBase installation.
    Warning: /usr/local/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
    Please set $HCAT_HOME to the root of your HCatalog installation.
    Warning: /usr/local/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
    Please set $ACCUMULO_HOME to the root of your Accumulo installation.
    Warning: /usr/local/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
    Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
    19/07/02 10:26:47 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
    19/07/02 10:26:47 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
    19/07/02 10:26:47 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
    information_schema
    mysql
    performance_schema
    sys
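
    The WARN line in the log above notes that putting the password on the command line is insecure. Two alternatives, sketched here: -P prompts for the password on the console, and --password-file reads it from a protected file (the path below is an assumed example; the file may also live in HDFS).

    # Prompt for the password interactively
    sqoop list-databases --connect jdbc:mysql://node102:3306/ --username root -P

    # Or read it from a file only the owner can read; echo -n avoids a
    # trailing newline, which Sqoop would treat as part of the password
    echo -n '123456' > /root/.mysql.pwd
    chmod 400 /root/.mysql.pwd
    sqoop list-databases --connect jdbc:mysql://node102:3306/ --username root --password-file file:///root/.mysql.pwd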

    3. List the tables in a given MySQL database:

    [root@node101 ~]# sqoop list-tables --connect jdbc:mysql://node102:3306/mysql --username root --password 123456
    Warning: /usr/local/sqoop-1.4.6/../hbase does not exist! HBase imports will fail.
    Please set $HBASE_HOME to the root of your HBase installation.
    Warning: /usr/local/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
    Please set $HCAT_HOME to the root of your HCatalog installation.
    Warning: /usr/local/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
    Please set $ACCUMULO_HOME to the root of your Accumulo installation.
    Warning: /usr/local/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
    Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
    19/07/02 10:29:05 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
    19/07/02 10:29:05 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
    19/07/02 10:29:05 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
    columns_priv
    db
    engine_cost
    event
    func
    general_log
    gtid_executed
    help_category
    help_keyword
    help_relation
    help_topic
    innodb_index_stats
    innodb_table_stats
    ndb_binlog_index
    plugin
    proc
    procs_priv
    proxies_priv
    server_cost
    servers
    slave_master_info
    slave_relay_log_info
    slave_worker_info
    slow_log
    tables_priv
    time_zone
    time_zone_leap_second
    time_zone_name
    time_zone_transition
    time_zone_transition_type
    user
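
    Besides listing databases and tables, the eval tool from the command list above runs a single SQL statement and prints the result, which is handy for checking connectivity before an import. A sketch, not from the original post:

    # Run an ad-hoc query against MySQL and print the result
    sqoop eval \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --query "SELECT COUNT(*) FROM help_keyword"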

    II. Import

    Importing data from MySQL into HDFS
    (1) Basic import: import the data of the help_keyword table in the mysql database into HDFS

    sqoop import \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --table help_keyword \
    -m 1

    # -m sets the number of parallel map tasks
    If no target directory is specified, the default import directory is /user/<username>/<table-name>, here /user/root/help_keyword.

    [root@node101 ~]# sqoop import \
    > --connect jdbc:mysql://node102:3306/mysql \
    > --username root \
    > --password 123456 \
    > --table help_keyword \
    > -m 1
    Warning: /usr/local/sqoop-1.4.6/../hbase does not exist! HBase imports will fail.
    Please set $HBASE_HOME to the root of your HBase installation.
    Warning: /usr/local/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
    Please set $HCAT_HOME to the root of your HCatalog installation.
    Warning: /usr/local/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
    Please set $ACCUMULO_HOME to the root of your Accumulo installation.
    Warning: /usr/local/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
    Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
    19/07/02 11:06:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
    19/07/02 11:06:38 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
    19/07/02 11:06:39 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
    19/07/02 11:06:39 INFO tool.CodeGenTool: Beginning code generation
    19/07/02 11:06:39 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
    19/07/02 11:06:39 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
    19/07/02 11:06:39 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop-2.6.5
    Note: /tmp/sqoop-root/compile/7f309ccd42353a370234a9552dace7e3/help_keyword.java uses or overrides a deprecated API.
    Note: Recompile with -Xlint:deprecation for details.
    19/07/02 11:06:40 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/7f309ccd42353a370234a9552dace7e3/help_keyword.jar
    19/07/02 11:06:40 WARN manager.MySQLManager: It looks like you are importing from mysql.
    19/07/02 11:06:40 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
    19/07/02 11:06:40 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
    19/07/02 11:06:40 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
    19/07/02 11:06:40 INFO mapreduce.ImportJobBase: Beginning import of help_keyword
    19/07/02 11:06:41 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
    19/07/02 11:06:41 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
    19/07/02 11:06:41 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
    19/07/02 11:06:41 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
    19/07/02 11:06:42 INFO db.DBInputFormat: Using read commited transaction isolation
    19/07/02 11:06:42 INFO mapreduce.JobSubmitter: number of splits:1
    19/07/02 11:06:42 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1498908125_0001
    19/07/02 11:06:43 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803189/parquet-hadoop-1.4.1.jar <- /root/parquet-hadoop-1.4.1.jar
    19/07/02 11:06:43 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/parquet-hadoop-1.4.1.jar as file:/opt/hadoop/mapred/local/1562036803189/parquet-hadoop-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803190/commons-logging-1.1.1.jar <- /root/commons-logging-1.1.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/commons-logging-1.1.1.jar as file:/opt/hadoop/mapred/local/1562036803190/commons-logging-1.1.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803191/parquet-jackson-1.4.1.jar <- /root/parquet-jackson-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/parquet-jackson-1.4.1.jar as file:/opt/hadoop/mapred/local/1562036803191/parquet-jackson-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803192/avro-mapred-1.7.5-hadoop2.jar <- /root/avro-mapred-1.7.5-hadoop2.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/avro-mapred-1.7.5-hadoop2.jar as file:/opt/hadoop/mapred/local/1562036803192/avro-mapred-1.7.5-hadoop2.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803193/jackson-mapper-asl-1.9.13.jar <- /root/jackson-mapper-asl-1.9.13.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/jackson-mapper-asl-1.9.13.jar as file:/opt/hadoop/mapred/local/1562036803193/jackson-mapper-asl-1.9.13.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803194/parquet-encoding-1.4.1.jar <- /root/parquet-encoding-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/parquet-encoding-1.4.1.jar as file:/opt/hadoop/mapred/local/1562036803194/parquet-encoding-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803195/parquet-avro-1.4.1.jar <- /root/parquet-avro-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/parquet-avro-1.4.1.jar as file:/opt/hadoop/mapred/local/1562036803195/parquet-avro-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803196/kite-data-core-1.0.0.jar <- /root/kite-data-core-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/kite-data-core-1.0.0.jar as file:/opt/hadoop/mapred/local/1562036803196/kite-data-core-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803197/sqoop-1.4.6.jar <- /root/sqoop-1.4.6.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/sqoop-1.4.6.jar as file:/opt/hadoop/mapred/local/1562036803197/sqoop-1.4.6.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803198/parquet-column-1.4.1.jar <- /root/parquet-column-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/parquet-column-1.4.1.jar as file:/opt/hadoop/mapred/local/1562036803198/parquet-column-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803199/parquet-generator-1.4.1.jar <- /root/parquet-generator-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/parquet-generator-1.4.1.jar as file:/opt/hadoop/mapred/local/1562036803199/parquet-generator-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803200/xz-1.0.jar <- /root/xz-1.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/xz-1.0.jar as file:/opt/hadoop/mapred/local/1562036803200/xz-1.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803201/paranamer-2.3.jar <- /root/paranamer-2.3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/paranamer-2.3.jar as file:/opt/hadoop/mapred/local/1562036803201/paranamer-2.3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803202/jackson-core-asl-1.9.13.jar <- /root/jackson-core-asl-1.9.13.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/jackson-core-asl-1.9.13.jar as file:/opt/hadoop/mapred/local/1562036803202/jackson-core-asl-1.9.13.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803203/snappy-java-1.0.5.jar <- /root/snappy-java-1.0.5.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/snappy-java-1.0.5.jar as file:/opt/hadoop/mapred/local/1562036803203/snappy-java-1.0.5.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803204/jackson-core-2.3.1.jar <- /root/jackson-core-2.3.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/jackson-core-2.3.1.jar as file:/opt/hadoop/mapred/local/1562036803204/jackson-core-2.3.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803205/parquet-common-1.4.1.jar <- /root/parquet-common-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/parquet-common-1.4.1.jar as file:/opt/hadoop/mapred/local/1562036803205/parquet-common-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803206/commons-io-1.4.jar <- /root/commons-io-1.4.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/commons-io-1.4.jar as file:/opt/hadoop/mapred/local/1562036803206/commons-io-1.4.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803207/mysql-connector-java-5.1.26-bin.jar <- /root/mysql-connector-java-5.1.26-bin.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/mysql-connector-java-5.1.26-bin.jar as file:/opt/hadoop/mapred/local/1562036803207/mysql-connector-java-5.1.26-bin.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803208/hsqldb-1.8.0.10.jar <- /root/hsqldb-1.8.0.10.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/hsqldb-1.8.0.10.jar as file:/opt/hadoop/mapred/local/1562036803208/hsqldb-1.8.0.10.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803209/parquet-format-2.0.0.jar <- /root/parquet-format-2.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/parquet-format-2.0.0.jar as file:/opt/hadoop/mapred/local/1562036803209/parquet-format-2.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803210/slf4j-api-1.6.1.jar <- /root/slf4j-api-1.6.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/slf4j-api-1.6.1.jar as file:/opt/hadoop/mapred/local/1562036803210/slf4j-api-1.6.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803211/commons-jexl-2.1.1.jar <- /root/commons-jexl-2.1.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/commons-jexl-2.1.1.jar as file:/opt/hadoop/mapred/local/1562036803211/commons-jexl-2.1.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803212/jackson-annotations-2.3.0.jar <- /root/jackson-annotations-2.3.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/jackson-annotations-2.3.0.jar as file:/opt/hadoop/mapred/local/1562036803212/jackson-annotations-2.3.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803213/kite-hadoop-compatibility-1.0.0.jar <- /root/kite-hadoop-compatibility-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/kite-hadoop-compatibility-1.0.0.jar as file:/opt/hadoop/mapred/local/1562036803213/kite-hadoop-compatibility-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803214/ant-eclipse-1.0-jvm1.2.jar <- /root/ant-eclipse-1.0-jvm1.2.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/ant-eclipse-1.0-jvm1.2.jar as file:/opt/hadoop/mapred/local/1562036803214/ant-eclipse-1.0-jvm1.2.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803215/opencsv-2.3.jar <- /root/opencsv-2.3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/opencsv-2.3.jar as file:/opt/hadoop/mapred/local/1562036803215/opencsv-2.3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803216/jackson-databind-2.3.1.jar <- /root/jackson-databind-2.3.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/jackson-databind-2.3.1.jar as file:/opt/hadoop/mapred/local/1562036803216/jackson-databind-2.3.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803217/ant-contrib-1.0b3.jar <- /root/ant-contrib-1.0b3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/ant-contrib-1.0b3.jar as file:/opt/hadoop/mapred/local/1562036803217/ant-contrib-1.0b3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803218/avro-1.7.5.jar <- /root/avro-1.7.5.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/avro-1.7.5.jar as file:/opt/hadoop/mapred/local/1562036803218/avro-1.7.5.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803219/kite-data-hive-1.0.0.jar <- /root/kite-data-hive-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/kite-data-hive-1.0.0.jar as file:/opt/hadoop/mapred/local/1562036803219/kite-data-hive-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803220/kite-data-mapreduce-1.0.0.jar <- /root/kite-data-mapreduce-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/kite-data-mapreduce-1.0.0.jar as file:/opt/hadoop/mapred/local/1562036803220/kite-data-mapreduce-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803221/commons-compress-1.4.1.jar <- /root/commons-compress-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/commons-compress-1.4.1.jar as file:/opt/hadoop/mapred/local/1562036803221/commons-compress-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Creating symlink: /opt/hadoop/mapred/local/1562036803222/commons-codec-1.4.jar <- /root/commons-codec-1.4.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: Localized file:/usr/local/sqoop-1.4.6/lib/commons-codec-1.4.jar as file:/opt/hadoop/mapred/local/1562036803222/commons-codec-1.4.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803189/parquet-hadoop-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803190/commons-logging-1.1.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803191/parquet-jackson-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803192/avro-mapred-1.7.5-hadoop2.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803193/jackson-mapper-asl-1.9.13.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803194/parquet-encoding-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803195/parquet-avro-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803196/kite-data-core-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803197/sqoop-1.4.6.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803198/parquet-column-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803199/parquet-generator-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803200/xz-1.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803201/paranamer-2.3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803202/jackson-core-asl-1.9.13.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803203/snappy-java-1.0.5.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803204/jackson-core-2.3.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803205/parquet-common-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803206/commons-io-1.4.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803207/mysql-connector-java-5.1.26-bin.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803208/hsqldb-1.8.0.10.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803209/parquet-format-2.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803210/slf4j-api-1.6.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803211/commons-jexl-2.1.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803212/jackson-annotations-2.3.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803213/kite-hadoop-compatibility-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803214/ant-eclipse-1.0-jvm1.2.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803215/opencsv-2.3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803216/jackson-databind-2.3.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803217/ant-contrib-1.0b3.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803218/avro-1.7.5.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803219/kite-data-hive-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803220/kite-data-mapreduce-1.0.0.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803221/commons-compress-1.4.1.jar
    19/07/02 11:06:44 INFO mapred.LocalDistributedCacheManager: file:/opt/hadoop/mapred/local/1562036803222/commons-codec-1.4.jar
    19/07/02 11:06:44 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
    19/07/02 11:06:44 INFO mapreduce.Job: Running job: job_local1498908125_0001
    19/07/02 11:06:44 INFO mapred.LocalJobRunner: OutputCommitter set in config null
    19/07/02 11:06:44 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    19/07/02 11:06:45 INFO mapred.LocalJobRunner: Waiting for map tasks
    19/07/02 11:06:45 INFO mapred.LocalJobRunner: Starting task: attempt_local1498908125_0001_m_000000_0
    19/07/02 11:06:45 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
    19/07/02 11:06:45 INFO db.DBInputFormat: Using read commited transaction isolation
    19/07/02 11:06:45 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
    19/07/02 11:06:45 INFO db.DBRecordReader: Working on split: 1=1 AND 1=1
    19/07/02 11:06:45 INFO db.DBRecordReader: Executing query: SELECT `help_keyword_id`, `name` FROM `help_keyword` AS `help_keyword` WHERE ( 1=1 ) AND ( 1=1 )
    19/07/02 11:06:45 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
    19/07/02 11:06:45 INFO mapred.LocalJobRunner: 
    19/07/02 11:06:45 INFO mapreduce.Job: Job job_local1498908125_0001 running in uber mode : false
    19/07/02 11:06:45 INFO mapreduce.Job: map 0% reduce 0%
    19/07/02 11:06:46 INFO mapred.Task: Task:attempt_local1498908125_0001_m_000000_0 is done. And is in the process of committing
    19/07/02 11:06:46 INFO mapred.LocalJobRunner: 
    19/07/02 11:06:46 INFO mapred.Task: Task attempt_local1498908125_0001_m_000000_0 is allowed to commit now
    19/07/02 11:06:46 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1498908125_0001_m_000000_0' to hdfs://node101:8020/user/root/help_keyword/_temporary/0/task_local1498908125_0001_m_000000
    19/07/02 11:06:46 INFO mapred.LocalJobRunner: map
    19/07/02 11:06:46 INFO mapred.Task: Task 'attempt_local1498908125_0001_m_000000_0' done.
    19/07/02 11:06:46 INFO mapred.LocalJobRunner: Finishing task: attempt_local1498908125_0001_m_000000_0
    19/07/02 11:06:46 INFO mapred.LocalJobRunner: map task executor complete.
    19/07/02 11:06:46 INFO mapreduce.Job: map 100% reduce 0%
    19/07/02 11:06:46 INFO mapreduce.Job: Job job_local1498908125_0001 completed successfully
    19/07/02 11:06:46 INFO mapreduce.Job: Counters: 23
    File System Counters
        FILE: Number of bytes read=17916102
        FILE: Number of bytes written=18328858
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=0
        HDFS: Number of bytes written=9748
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Map-Reduce Framework
        Map input records=699
        Map output records=699
        Input split bytes=87
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=7
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=63778816
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=9748
    19/07/02 11:06:46 INFO mapreduce.ImportJobBase: Transferred 9.5195 KB in 5.2139 seconds (1.8258 KB/sec)
    19/07/02 11:06:46 INFO mapreduce.ImportJobBase: Retrieved 699 records.

    View the imported data:

    [root@node101 ~]# hdfs dfs -cat /user/root/help_keyword/part-m-00000
    0,(JSON
    1,->
    2,->>
    3,<>
    4,ACCOUNT
    5,ACTION
    6,ADD
    7,AES_DECRYPT
    .
    .
    .
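
    Re-running the same import fails once /user/root/help_keyword exists. The --delete-target-dir flag from the help output above removes the directory first; a sketch, not from the original post:

    # Re-import, deleting the existing target directory beforehand
    sqoop import \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --table help_keyword \
    --delete-target-dir \
    -m 1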


    (2) Import with a specified field delimiter and target path

    sqoop import \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --table help_keyword \
    --target-dir /sqoop/mysql/my_help_keyword1 \
    --fields-terminated-by '\t' \
    -m 4

    # 4 parallel map tasks; output lands in part-m-00000 .. part-m-00003

    View the imported data:

    [root@node101 ~]# hdfs dfs -cat /sqoop/mysql/my_help_keyword1/part-m-00003
    524    SQLSTATE
    525    SQL_AFTER_GTIDS
    526    SQL_AFTER_MTS_GAPS
    527    SQL_BEFORE_GTIDS
    528    SQL_BIG_RESULT
    529    SQL_BUFFER_RESULT
    530    SQL_CACHE

    (3) Import with a WHERE condition

    sqoop import \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --table help_keyword \
    --where "name='STRING'" \
    --target-dir /sqoop/mysql/my_help_keyword2 \
    -m 1

    View the imported data:

    [root@node101 ~]# hdfs dfs -cat /sqoop/mysql/my_help_keyword2/part-m-00000
    553,STRING

    (4) Import specific columns

    sqoop import \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --columns "name" \
    --table help_keyword \
    --where "name='STRING'" \
    --target-dir /sqoop/mysql/my_help_keyword3 \
    -m 1

    View the imported data:

    [root@node101 ~]# hdfs dfs -cat /sqoop/mysql/my_help_keyword3/part-m-00000
    STRING

    (5) Import with a custom SQL query

    sqoop import \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --query 'select help_keyword_id,name from mysql.help_keyword where $CONDITIONS and name = "STRING"' \
    --split-by help_keyword_id \
    --fields-terminated-by '\t' \
    --target-dir /sqoop/mysql/my_help_keyword4 \
    -m 4

    View the imported data:

    [root@node101 ~]# hdfs dfs -cat /sqoop/mysql/my_help_keyword4/part-m-00000
    553    STRING

    Note:
    When importing data into HDFS with a custom SQL statement, as above:
    1. Quoting: either wrap the query in single quotes and use double quotes inside it, in which case the $ in $CONDITIONS needs no escaping; or wrap the query in double quotes and use single quotes inside it, in which case the $ must be escaped as \$CONDITIONS.
    2. The custom SQL statement must contain WHERE $CONDITIONS.
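
    For example, the same query with the outer quotes swapped looks like this; it is equivalent to the command above, with the $ escaped as \$CONDITIONS (the target directory my_help_keyword5 is a fresh, assumed name):

    sqoop import \
    --connect jdbc:mysql://node102:3306/mysql \
    --username root \
    --password 123456 \
    --query "select help_keyword_id,name from mysql.help_keyword where \$CONDITIONS and name = 'STRING'" \
    --split-by help_keyword_id \
    --fields-terminated-by '\t' \
    --target-dir /sqoop/mysql/my_help_keyword5 \
    -m 4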


    III. Export

    Exporting HDFS data to MySQL

    sqoop export \
    --connect jdbc:mysql://node102:3306/hdfs \
    --username root \
    --password 123456 \
    --table help_keyword \
    --input-fields-terminated-by "\t" \
    --export-dir /sqoop/mysql/my_help_keyword1 \
    -m 1
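
    Note that sqoop export does not create the target table; it must already exist in MySQL with a compatible schema. A sketch of preparing it, assuming the hdfs database and column types that mirror mysql.help_keyword:

    # Create the target database and table before running the export
    # (schema is an assumption based on mysql.help_keyword)
    mysql -h node102 -uroot -p123456 -e "
    CREATE DATABASE IF NOT EXISTS hdfs;
    CREATE TABLE IF NOT EXISTS hdfs.help_keyword (
      help_keyword_id INT NOT NULL PRIMARY KEY,
      name CHAR(64) NOT NULL
    );"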


