• Kafka SASL_PLAINTEXT Authentication: Server Side


    SASL_PLAINTEXT user authentication for a Kafka cluster

    Environment

    Versions:

    OS:         CentOS 7.3
    Java:       jdk1.8.0_162
    zookeeper:  zookeeper-3.4.10.tar.gz
    kafka:      kafka_2.11-1.0.2.tgz

    JARs required for authentication:

    kafka-clients-0.10.0.1.jar    
    lz4-1.3.0.jar    
    slf4j-api-1.7.21.jar   
    slf4j-log4j12-1.7.21.jar   
    snappy-java-1.1.2.6.jar    

    Cluster hosts:

    192.168.1.86 dphd-192-168-1-86       
    192.168.1.87 dphd-192-168-1-87          
    192.168.1.88 dphd-192-168-1-88 

    1) Install JDK 1.8

     1.1) vim /etc/profile   # configure environment variables
        export JAVA_HOME=/usr/local/jdk1.8.0_162                
        export JRE_HOME=/usr/local/jdk1.8.0_162/jre   
        export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH    
        export CLASSPATH=::$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    
        [root@host-10-200-86-163 ~]# sh /nas/nas_log_pbs/auto_install/centos_7/tomcat_install.sh
    
    
           Arguments:
                1) sh tomcat_install.sh install_jdk7     (--- install JDK 7 ---)
                2) sh tomcat_install.sh install_jdk8     (--- install JDK 8 ---)
                3) sh tomcat_install.sh install_tomcat7  (--- install Tomcat 7 ---)
                4) sh tomcat_install.sh install_tomcat8  (--- install Tomcat 8 ---)
    
        [root@host-10-200-86-163 ~]# sh /nas/nas_log_pbs/auto_install/centos_7/tomcat_install.sh  install_jdk8
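After the install script finishes, it is worth a quick sanity check that the JDK from the profile above is actually picked up (paths as configured in /etc/profile):

```shell
# reload the environment variables set in /etc/profile
source /etc/profile

# both should reflect jdk1.8.0_162 as configured above
java -version
echo $JAVA_HOME
```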

    2) Install ZooKeeper

    2.1 Apply the following configuration on all three nodes; the only per-node difference is the contents of the myid file
    # install to the target directory
    [root@dphd-192-168-1-86 src]# cd /usr/local/src
    [root@dphd-192-168-1-86 src]# tar zxpf zookeeper-3.4.10.tar.gz
    [root@dphd-192-168-1-86 src]# mv zookeeper-3.4.10 /usr/local/zookeeper
    # configuration files
    [root@dphd-192-168-1-86 src]# mkdir -p /zk_data/zk1
    [root@dphd-192-168-1-86 src]# echo "1" >>/zk_data/zk1/myid
    [root@dphd-192-168-1-86 src]# mkdir -p /usr/local/zookeeper/logs
    [root@dphd-192-168-1-86 src]# cat /usr/local/zookeeper/conf/zoo.cfg 
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/zk_data/zk1
    dataLogDir=/usr/local/zookeeper/logs
    clientPort=2181
    server.1=192.168.1.86:3181:4181
    server.2=192.168.1.87:3182:4182
    server.3=192.168.1.88:3183:4183
    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider  
    requireClientAuthScheme=sasl  
    jaasLoginRenew=3600000
    [root@dphd-192-168-1-86 src]#
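On the other two nodes only the myid value changes, matching the server.2 and server.3 entries in zoo.cfg:

```shell
# on 192.168.1.87 (server.2)
mkdir -p /zk_data/zk1 && echo "2" > /zk_data/zk1/myid

# on 192.168.1.88 (server.3)
mkdir -p /zk_data/zk1 && echo "3" > /zk_data/zk1/myid
```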

    3) Install Kafka

    3.1 Install as follows on all three nodes; the per-node differences are broker.id and advertised.listeners
    # install
    [root@dphd-192-168-1-86 src]# cd /usr/local/src/
    [root@dphd-192-168-1-86 src]# tar zxpf kafka_2.11-1.0.2.tgz
    [root@dphd-192-168-1-86 src]# mv kafka_2.11-1.0.2 ../kafka
    [root@dphd-192-168-1-86 src]# mkdir -p /opt/kafka-logs
    # configuration file
    [root@dphd-192-168-1-86 src]# cat /usr/local/kafka/config/server.properties 
    broker.id=0
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
    security.inter.broker.protocol=SASL_PLAINTEXT
    sasl.mechanism.inter.broker.protocol=PLAIN
    sasl.enabled.mechanisms=PLAIN
    super.users=User:admin 
    listeners=SASL_PLAINTEXT://0.0.0.0:9092
    advertised.listeners=SASL_PLAINTEXT://dphd-192-168-1-86:9092
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    log.dirs=/opt/kafka-logs
    num.partitions=1
    num.recovery.threads.per.data.dir=1
    log.cleanup.policy=delete
    log.retention.hours=72
    log.segment.bytes=1073741824
    zookeeper.connect=dphd-192-168-1-86:2181,dphd-192-168-1-87:2181,dphd-192-168-1-88:2181
    delete.topic.enable=true
    zookeeper.connection.timeout.ms=60000
    [root@dphd-192-168-1-86 src]# 
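The per-node values mentioned in 3.1 can be generated with a small sketch (the hostname pattern is assumed from the host list above):

```shell
# print the broker.id and advertised.listeners for each of the three nodes
for i in 0 1 2; do
  host="dphd-192-168-1-$((86 + i))"
  echo "# node $host"
  echo "broker.id=$i"
  echo "advertised.listeners=SASL_PLAINTEXT://$host:9092"
done
```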

    4) ZooKeeper SASL_PLAINTEXT authentication

    4.1 Configure SASL on the ZooKeeper cluster (change all three nodes)
    Add the parameters below to /usr/local/zookeeper/conf/zoo.cfg (already done in step 2.1 above):

    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider  
    requireClientAuthScheme=sasl  
    jaasLoginRenew=3600000

    4.2 Create the JAAS file (on all three nodes)
    This file defines the username and password used to connect to the ZooKeeper servers.

    [root@dphd-192-168-1-86 conf]# cat /usr/local/zookeeper/conf/jaas.conf 
    Server {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_admin="admin";
    };    

    Note:
    I named the file /usr/local/zookeeper/conf/jaas.conf and placed it under the deployment's conf/ directory. The file specifies the login module class (org.apache.kafka.common.security.plain.PlainLoginModule); as the package name shows, this class lives in the Kafka namespace, so the Kafka plugin JARs must be added to ZooKeeper.

    4.3 Add the Kafka authentication plugin to ZooKeeper
    ZooKeeper's authentication mechanism is plugin-based, and any plugin that supports JAAS will do. Since Kafka connects to ZooKeeper anyway, we use Kafka's own authentication plugin directly. The plugin class is part of kafka-clients (a Maven artifact); just add the required JARs to ZooKeeper's startup classpath. The kafka-clients-0.10.0.1 JAR and its dependencies are:

    kafka-clients-0.10.0.1.jar
    lz4-1.3.0.jar
    slf4j-api-1.7.21.jar
    slf4j-log4j12-1.7.21.jar
    snappy-java-1.1.2.6.jar

    4.4 ZooKeeper must load the JAAS file and the JARs at startup, which requires the following configuration:
    JARs to load:       /usr/local/zookeeper/conf/jar/*.jar
    JAAS file to load:  /usr/local/zookeeper/conf/jaas.conf

    [root@dphd-192-168-1-86 conf]# cat /usr/local/zookeeper/bin/zkEnv.sh
    # add the following to the startup script above
    for i in /usr/local/zookeeper/conf/jar/*.jar; do  
        CLASSPATH="$i:$CLASSPATH"  
    done  
    SERVER_JVMFLAGS=" -Djava.security.auth.login.config=/usr/local/zookeeper/conf/jaas.conf" 

    4.5 Start all nodes. Start the Quorum process on every ZooKeeper node, watch the ZooKeeper logs to confirm that all nodes run stably, then use bin/zkCli.sh to connect to each node and verify connectivity.
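For reference, starting and checking the ensemble with the standard scripts looks like this:

```shell
# start ZooKeeper on each of the three nodes
/usr/local/zookeeper/bin/zkServer.sh start

# one node should report Mode: leader, the other two Mode: follower
/usr/local/zookeeper/bin/zkServer.sh status

# connect a client to one node to verify it answers
/usr/local/zookeeper/bin/zkCli.sh -server 192.168.1.86:2181
```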

    5) Kafka SASL_PLAINTEXT authentication

    5.1 Create the JAAS configuration file

    [root@dphd-192-168-1-86 kafka]# cat /usr/local/kafka/kafka_cluster_jaas.conf
    KafkaServer {  
      org.apache.kafka.common.security.plain.PlainLoginModule required  
        username="admin"  
        password="admin"  
        user_admin="admin"  
        user_producer="prod"  
        user_consumer="cons";  
    };  
    
    Client {  
      org.apache.kafka.common.security.plain.PlainLoginModule required  
        username="admin"  
        password="admin";  
    };
    [root@dphd-192-168-1-86 kafka]# 

    5.2 Configure the JVM options to load the JAAS file

    [root@dphd-192-168-1-86 kafka]# cat /usr/local/kafka/bin/kafka-run-class.sh 
    # If Cygwin is detected, classpath is converted to Windows format.
    (( CYGWIN )) && CLASSPATH=$(cygpath --path --mixed "${CLASSPATH}")
    # 1) add this line to load the JAAS configuration file
    KAFKA_SASL_OPTS='-Djava.security.auth.login.config=/usr/local/kafka/kafka_cluster_jaas.conf'
    
    # Launch mode
    # $KAFKA_SASL_OPTS is referenced in the launch commands below
    if [ "x$DAEMON_MODE" = "xtrue" ]; then
    nohup $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_SASL_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
    else
    exec $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_SASL_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@"
    fi
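Since kafka-run-class.sh already appends $KAFKA_OPTS to both launch commands above, an alternative sketch is to pass the JAAS setting through KAFKA_OPTS without patching the script, then start the broker as usual:

```shell
# pass the JAAS config via KAFKA_OPTS instead of editing kafka-run-class.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/kafka_cluster_jaas.conf"
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
```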

    5.3 For testing, create two configuration files: producer.config and writer_jaas.conf

    [root@dphd-192-168-1-86 kafka]# cat /usr/local/kafka/producer.config 
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    [root@dphd-192-168-1-86 kafka]# cat /usr/local/kafka/writer_jaas.conf 
    KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin";
    };
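Before the producer/consumer test in 5.4, the test topic can be created explicitly (in Kafka 1.0 the topic tool still talks to ZooKeeper; the partition and replication counts here are assumptions for this three-broker cluster):

```shell
# create the test topic, replicated across all three brokers
/usr/local/kafka/bin/kafka-topics.sh --zookeeper dphd-192-168-1-86:2181 \
  --create --topic dada --partitions 1 --replication-factor 3
```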

    5.4 Test with Kafka's bundled shell scripts, as follows

    [root@dphd-192-168-1-86 kafka]# cp /usr/local/kafka/bin/kafka-console-producer.sh /usr/local/kafka/bin/writer-kafka-console-producer.sh
    # modify the producer script as follows:
    [root@dphd-192-168-1-86 kafka]# cat /usr/local/kafka/bin/writer-kafka-console-producer.sh
    #!/bin/bash
    if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
        export KAFKA_HEAP_OPTS="-Xmx512M"
    fi
    exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=/usr/local/kafka/writer_jaas.conf kafka.tools.ConsoleProducer "$@"
    
    # modify the consumer script as follows:
    [root@dphd-192-168-1-86 kafka]# cp /usr/local/kafka/bin/kafka-console-consumer.sh /usr/local/kafka/bin/reader-kafka-console-consumer.sh
    [root@dphd-192-168-1-87 ~]# cat /usr/local/kafka/bin/reader-kafka-console-consumer.sh
    #!/bin/bash 
    if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
        export KAFKA_HEAP_OPTS="-Xmx512M"
    fi
    exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=/usr/local/kafka/writer_jaas.conf kafka.tools.ConsoleConsumer "$@"
    
    # grant the authorized user Read/Write access to the topic
    [root@dphd-192-168-1-86 kafka]#/usr/local/kafka/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=dphd-192-168-1-88:2181 --add --allow-principal User:admin --operation Read --operation Write --topic sean-security
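The JAAS file in 5.1 also defines producer and consumer users; testing with those accounts instead of the admin super user requires granting them their own ACLs, for example:

```shell
# allow the producer user to write to the topic
/usr/local/kafka/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=dphd-192-168-1-88:2181 \
  --add --allow-principal User:producer --operation Write --topic sean-security

# allow the consumer user to read the topic from any consumer group
/usr/local/kafka/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=dphd-192-168-1-88:2181 \
  --add --allow-principal User:consumer --operation Read --topic sean-security --group '*'
```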
    
    
    # the producer sends data
    [root@dphd-192-168-1-86 kafka]# /usr/local/kafka/bin/writer-kafka-console-producer.sh --broker-list dphd-192-168-1-86:9092 --topic dada --producer.config /usr/local/kafka/producer.config
    >aaaa
    >test
    
    # the consumer receives the data
    [root@dphd-192-168-1-87 kafka]# /usr/local/kafka/bin/reader-kafka-console-consumer.sh --bootstrap-server dphd-192-168-1-86:9092 --topic dada --from-beginning --consumer.config /usr/local/kafka/producer.config
    aaaa
    test
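To confirm which ACLs are in effect, list them with the same tool:

```shell
# list the ACLs stored in ZooKeeper for the topic
/usr/local/kafka/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=dphd-192-168-1-88:2181 \
  --list --topic sean-security
```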
  • Original article: https://www.cnblogs.com/chenandy/p/11846802.html