    Presto Single-Node / Cluster Installation Notes

    I. Environment

    1. JDK requirement: 1.8.0_92+ (the JDK used later in these notes is 1.8.0_131)
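
    A quick check of the installed JDK on each machine before proceeding:

    java -version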

    II. Installation Steps

    1. Download the latest release from the official site:

      https://prestodb.io/docs/current/installation/deployment.html

    2. Configuration (see the reference below):

      http://prestodb-china.com/docs/current/installation/deployment.html

      For now, configure etc/node.properties with: node.data-dir=/opt/dtwave/presto/data (create the data directory first), as sketched below.
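
      A fuller etc/node.properties for the single-node setup might look like the following sketch; node.environment reuses the value from the cluster section later in these notes, and node.id is only an illustrative placeholder (any identifier unique per machine works):

      node.environment=dtwave
      node.id=presto-single-1
      node.data-dir=/opt/dtwave/presto/data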

      Start the service: bin/launcher start

      Note: remove task.max-memory=1GB from etc/config.properties first; otherwise startup fails with the following error:

    ERROR com.facebook.presto.server.PrestoServer Unable to create injector, see the following errors:
    
    1) Error: Invalid configuration property node.environment: is malformed (for class io.airlift.node.NodeConfig.environment)
    
    2) Error: Defunct property 'task.max-memory' (class [class com.facebook.presto.execution.TaskManagerConfig]) cannot be configured.
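
    With the defunct property removed, a working single-node etc/config.properties might look like the following sketch (port 8086 matches the rest of these notes; the memory values are placeholder assumptions, not recommendations):

    coordinator=true
    node-scheduler.include-coordinator=true
    http-server.http.port=8086
    query.max-memory=2GB
    query.max-memory-per-node=1GB
    discovery-server.enabled=true
    discovery.uri=http://localhost:8086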
    
    

    3. Check the log directory:

    /opt/dtwave/presto/data/var/log
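
    The directory typically contains launcher.log, server.log and http-request.log; to follow the server log while diagnosing startup issues:

    tail -f /opt/dtwave/presto/data/var/log/server.log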

    4. Check in a browser:

    http://ip:8086/
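
    If no browser is handy on the machine, the same check works from the shell; the /v1/info endpoint returns basic server status as JSON (replace ip with the coordinator address):

    curl http://ip:8086/v1/info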

    5. Download the CLI client into the /opt/dtwave/presto/bin directory:

    https://prestodb.io/docs/current/installation/cli.html
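
    The executable jar can also be fetched directly with wget; the Maven Central path below is an assumption based on the 0.180 release used in these notes, so check the docs page above for the exact link:

    cd /opt/dtwave/presto/bin
    wget https://repo1.maven.org/maven2/com/facebook/presto/presto-cli/0.180/presto-cli-0.180-executable.jar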

    Rename it to presto-cli and make it executable:

    $ mv presto-cli-0.180-executable.jar presto-cli

    $ chmod +x presto-cli

    Add the environment variables to /etc/profile:

    export PRESTO_HOME=/opt/dtwave/presto

    export PATH=$PATH:$PRESTO_HOME/bin

    source /etc/profile
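
    A quick check that the CLI is now on the PATH:

    which presto-cli
    presto-cli --version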

    6. Configure the Hive connector: create hive.properties in the etc/catalog directory with the following content:

    connector.name=hive-hadoop2
    hive.metastore.uri=thrift://master106:9083
    hive.config.resources=/opt/dtwave/hadoop/etc/hadoop/core-site.xml,/opt/dtwave/hadoop/etc/hadoop/hdfs-site.xml
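
    Catalog files are only read at server startup, so restart Presto after adding hive.properties; the new catalog should then be visible (a sketch using the CLI's non-interactive --execute mode):

    cd /opt/dtwave/presto
    bin/launcher restart
    presto-cli --server localhost:8086 --execute 'show catalogs;'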
    

    7. Connect to Hive:

    presto-cli --server outer_ip:8086 --catalog hive --schema default

    presto-cli --server inner_ip:8086 --catalog hive --schema default

    Or, without specifying a schema:

    presto-cli --server ip:8086 --catalog hive

    presto> describe hive.dtwave_dev.bas_user_info;
     Column |  Type   | Extra | Comment 
    --------+---------+-------+---------
     id     | integer |       |         
     name   | varchar |       |         
     age    | integer |       |         
    (3 rows)
    
    Query 20170707_171515_00005_cxg9d, FINISHED, 1 node
    Splits: 18 total, 18 done (100.00%)
    0:01 [3 rows, 216B] [4 rows/s, 320B/s]
    
    presto> select * from hive.dtwave_dev.bas_user_info limit 10;
     id | name  | age 
    ----+-------+-----
      1 | zhang |  23 
      2 | san   |  24 
      3 | li    |  35 
    

    =============================

    III. Cluster Installation

    Copy the install directory from the first machine (where Presto is already set up) to the other nodes with scp:

    scp -r presto-server-0.180 hadoop@node47:/opt/dtwave/
    scp -r presto-server-0.180 hadoop@node50:/opt/dtwave/
    scp -r presto-server-0.180 hadoop@node172:/opt/dtwave/
    
    scp -r jdk1.8.0_131 hadoop@node47:/opt/dtwave/
    scp -r jdk1.8.0_131 hadoop@node50:/opt/dtwave/
    scp -r jdk1.8.0_131 hadoop@node172:/opt/dtwave/
    
    

    Create a symlink:

    ln -s presto-server-0.180 presto
    

    Delete all old files under the presto/data directory.

    Configure the environment variables:

    export PRESTO_HOME=/opt/dtwave/presto
    export PATH=$PATH:$PRESTO_HOME/bin
    
    source /etc/profile
    

    Remove the old JDK symlink:

    rm -rf jdk
    

    Create a symlink to switch to the new JDK:

    ln -s jdk1.8.0_131 jdk
    

    3.1 Configuration changes for cluster mode

    Use Node172 as the coordinator, and Node50 and Node47 as worker nodes. Restart the service after changing the configuration.

    3.1.1 Coordinator node configuration

    Node172 configuration

    vim etc/config.properties (the defunct task.max-memory=1GB is omitted, as noted earlier):

    coordinator=true
    node-scheduler.include-coordinator=false
    http-server.http.port=8086
    discovery-server.enabled=true
    discovery.uri=http://node172:8086

    vim etc/node.properties

    node.environment=dtwave
    node.id=172
    node.data-dir=/opt/dtwave/presto/data

    3.1.2 Worker node configuration (note: coordinator=false)

    Node50

    config.properties:

    coordinator=false
    http-server.http.port=8086
    discovery-server.enabled=true
    discovery.uri=http://node172:8086

    node.properties:

    node.environment=dtwave
    node.id=50
    node.data-dir=/opt/dtwave/presto/data
    

    Node47

    config.properties:

    coordinator=false
    http-server.http.port=8086
    discovery-server.enabled=true
    discovery.uri=http://node172:8086

    node.properties:

    node.environment=dtwave
    node.id=47
    node.data-dir=/opt/dtwave/presto/data
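
    Once all three nodes are configured, restart Presto on each of them; a small sketch doing it over ssh from one machine (assumes passwordless ssh for the hadoop user):

    for host in node172 node50 node47; do
        ssh hadoop@$host '/opt/dtwave/presto/bin/launcher restart'
    done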
    

    3.2 CLI remote connection test (server set to node172):

    presto-cli --server node172:8086 --catalog hive --schema default
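
    To confirm that the workers have registered with the coordinator, query the built-in system catalog (a sketch; the same information is visible in the web UI on port 8086):

    presto-cli --server node172:8086 --execute 'select * from system.runtime.nodes;'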
    