• SaltStack syndic: installation, configuration, and use


    What does salt-syndic actually do? If you know zabbix proxy, it is easy to understand. "Syndic" means something like a delegate; honestly, if it were named salt-proxy it would be even clearer. It is a proxy layer, much like zabbix proxy: it sits between the master and the minions so the two no longer need to talk to each other directly, only to the syndic. That keeps the architecture clean when you deploy across data centers. Incidentally, a zabbix proxy and a salt-syndic fit nicely on the same machine.

    In this walkthrough, node2 acts as a proxy for node3, putting node3 under the control of node1 (the master).

    Configuration on node1 (the master):

    [root@linux-node1 ~]# grep "^[a-zA-Z]" /etc/salt/master
    default_include: master.d/*.conf
    file_roots:
    order_masters: True                        # the change: allow lower-level masters (syndics) under this one
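
    A step the transcript does not show, but that is needed for the order_masters change to take effect: restart the master.

    [root@linux-node1 ~]# systemctl restart salt-master.service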

    Installation and configuration on node2 (the syndic):

    [root@linux-node2 salt]# yum install salt-syndic -y
    [root@linux-node2 salt]# cd /etc/salt/
    [root@linux-node2 salt]# grep "^[a-zA-Z]" proxy
    master: 192.168.56.11                                    # in the proxy file
    [root@linux-node2 salt]# grep "^[a-zA-Z]" master
    syndic_master: 192.168.56.11                            # in the master file: node2's upstream master
    [root@linux-node2 salt]# systemctl start salt-master.service
    [root@linux-node2 salt]# systemctl start salt-syndic.service
    [root@linux-node2 salt]# netstat -tpln
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      998/sshd            
    tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      6013/python         
    tcp        0      0 0.0.0.0:4506            0.0.0.0:*               LISTEN      6019/python         
    tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
    tcp6       0      0 :::22                   :::*                    LISTEN      998/sshd      
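
    4505/4506 listening means node2's own master side is up. As an extra sanity check of my own (not part of the original steps), you can also confirm the syndic has actually connected out to node1:

    [root@linux-node2 salt]# netstat -tpn | grep 4506        # expect an ESTABLISHED connection to 192.168.56.11:4506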

    Install a regular minion on node3:

    [root@linux-node3 salt]# yum install salt-minion -y
    [root@linux-node3 salt]# cd /etc/salt/
    [root@linux-node3 salt]# grep "^[a-zA-Z]" minion
    master: 192.168.56.12                                # node3 only needs to know node2; it never learns that node1 exists
    [root@linux-node3 salt]# systemctl start salt-minion
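
    If the key never shows up on node2, the minion log on node3 is the first place to look (the packaged default path):

    [root@linux-node3 salt]# tail -f /var/log/salt/minion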

    Then back on node2 (the syndic):

    [root@linux-node2 salt]# salt-key -L
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    linux-node3.example.com
    Rejected Keys:
    [root@linux-node2 salt]# salt-key -A                # accept the key
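
    A side note on salt-key: -A accepts every pending key at once; to accept just this one minion non-interactively, something like this also works:

    [root@linux-node2 salt]# salt-key -a linux-node3.example.com -y        # -a matches by name, -y skips the prompt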

    Finally, back on node1 (the master):

    [root@linux-node1 ~]# salt-key -L                    # linux-node3.example.com is nowhere to be seen
    Accepted Keys:
    linux-node1.example.com
    linux-node2.example.com
    Denied Keys:
    Unaccepted Keys:
    Rejected Keys:
    [root@linux-node1 ~]# salt '*' test.ping
    linux-node2.example.com:
        True
    linux-node1.example.com:
        True
    linux-node3.example.com:                        # and yet it answers
        True
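
    This makes sense: node1 only holds the keys of what connects to it directly (node1's and node2's minions), while node3's key lives on node2, and node3's answers are relayed up through the syndic. If you want the top master to enumerate every minion it can actually reach, the manage runner does that by pinging rather than by reading the local key store (my addition, not in the original run):

    [root@linux-node1 ~]# salt-run manage.status        # the "up" list should include linux-node3.example.com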

    Same-tier and multi-tier syndic setups are configured the same way; just be clear about which upstream master each syndic reports to. A sketch follows.
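
    A concrete sketch of a middle tier (hypothetical layout): a syndic that itself has syndics below it needs both settings in its /etc/salt/master, since it is simultaneously somebody's syndic and somebody's upper master:

    syndic_master: 192.168.56.11        # its own upstream master
    order_masters: True                 # because there are lower-level masters under it as well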

    Some common questions:

    1. Can minion ids be duplicated under a syndic? Concretely: what happens if node3's id is changed to node2's and a command is then run from the top master?

    First we need to change a minion id; remember the procedure for that?

    [root@linux-node3 salt]# systemctl stop salt-minion.service    # stop the minion
    [root@linux-node2 salt]# salt-key -L            # note this is on node2 (the syndic): node2 is the master in node3's eyes and holds node3's key, so delete it here
    Accepted Keys:
    linux-node3.example.com
    Denied Keys:
    Unaccepted Keys:
    Rejected Keys:
    [root@linux-node2 salt]# salt-key -d linux-node3.example.com
    The following keys are going to be deleted:
    Accepted Keys:
    linux-node3.example.com
    Proceed? [N/y] y
    Key for minion linux-node3.example.com deleted.
    [root@linux-node3 salt]# rm -fr /etc/salt/pki/minion/            # remove everything under /etc/salt/pki/minion on the minion
    [root@linux-node3 salt]# grep "^[a-zA-Z]" /etc/salt/minion        # set the new id
    master: 192.168.56.12
    id: linux-node2.example.com                                # deliberately reuse an existing id for the test
    [root@linux-node3 salt]# systemctl start salt-minion.service    # start the minion again
    [root@linux-node2 salt]# salt-key -L                        # back on node2, accept the key for the new id
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    linux-node2.example.com
    Rejected Keys:
    [root@linux-node2 salt]# salt-key -A
    The following keys are going to be accepted:
    Unaccepted Keys:
    linux-node2.example.com
    Proceed? [n/Y] Y
    Key for minion linux-node2.example.com accepted.
    [root@linux-node2 salt]# salt '*' test.ping                    # quick test
    linux-node2.example.com:
        True

    Finally, verify the experiment back on node1 (the master):

    [root@linux-node1 ~]# salt '*' test.ping
    linux-node2.example.com:
        True
    linux-node1.example.com:
        True
    linux-node2.example.com:
        True

    And there it is: wtf, linux-node2.example.com shows up twice. We saw this coming, but in real use there is no way to tell which machine is which, so duplicating ids is still a bad idea even behind a syndic. Keep ids distinct; the simplest way is to set sensible hostnames, which makes every machine unique and even lets you skip setting id at all (I have since changed node3's id back). A sketch follows.
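
    A minimal sketch of that hostname-based approach (my suggestion, not from the original run): with no explicit id set, the minion falls back to the machine's FQDN, so distinct hostnames give distinct ids for free:

    [root@linux-node3 ~]# hostnamectl set-hostname linux-node3.example.com
    [root@linux-node3 ~]# sed -i '/^id:/d' /etc/salt/minion            # drop the explicit id; the FQDN becomes the id
    [root@linux-node3 ~]# systemctl restart salt-minion.service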

    2. Remote execution works; does this architecture affect running state files?

    [root@linux-node1 base]# pwd                    # we define the top file on the master
    /srv/salt/base
    [root@linux-node1 base]# cat top.sls                 # it just ships a file to everyone
    base:
      '*':
        - known-hosts.known-hosts
    [root@linux-node1 base]# cat known-hosts/known-hosts.sls 
    known-hosts:
      file.managed:
        - name: /root/.ssh/known_hosts
        - source: salt://known-hosts/templates/known-hosts
        - clean: True
    [root@linux-node1 base]# salt '*' state.highstate
    linux-node3.example.com:
    ----------
              ID: states
        Function: no.None
          Result: False
         Comment: No Top file or master_tops data matches found.
         Changes:   

    Summary for linux-node3.example.com
    ------------
    Succeeded: 0
    Failed:    1
    ------------
    Total states run:     1
    Total run time:   0.000 ms
    linux-node2.example.com:
    ----------
              ID: known-hosts
        Function: file.managed
            Name: /root/.ssh/known_hosts
          Result: True
         Comment: File /root/.ssh/known_hosts updated
         Started: 11:15:35.210699
        Duration: 37.978 ms
         Changes:   
                  ----------
                  diff:
                      New file
                  mode:
                      0644

    Summary for linux-node2.example.com
    ------------
    Succeeded: 1 (changed=1)
    Failed:    0
    ------------
    Total states run:     1
    Total run time:  37.978 ms
    linux-node1.example.com:
    ----------
              ID: known-hosts
        Function: file.managed
            Name: /root/.ssh/known_hosts
          Result: True
         Comment: File /root/.ssh/known_hosts is in the correct state
         Started: 11:15:35.226119
        Duration: 51.202 ms
         Changes:   

    Summary for linux-node1.example.com
    ------------
    Succeeded: 1
    Failed:    0
    ------------
    Total states run:     1
    Total run time:  51.202 ms
    ERROR: Minions returned with non-zero exit code
    Plainly, node3 errored while node1 and node2 ran fine (easy to understand). node3's error, "No Top file or master_tops data matches found", says exactly that: no matching top file was found. The simple deduction is that node3's master is node2, and node2 has no top file yet. So let's write a different top on node2 and test again.
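
    A handy way to debug this class of problem (my addition, not in the original run): state.show_top asks a minion which top data it actually resolves, so you can see whose top file it is obeying:

    [root@linux-node1 base]# salt 'linux-node3.example.com' state.show_top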
    [root@linux-node2 base]# pwd
    /srv/salt/base
    [root@linux-node2 base]# cat top.sls                     # even simpler: just ls /root
    base:
      '*':
        - cmd.cmd
    [root@linux-node2 base]# cat cmd/cmd.sls 
    cmd:
      cmd.run:
        - name: ls /root
    Good. Back on the master, test again; I have omitted the normal output from node1 and node2.
    [root@linux-node1 base]# salt '*' state.highstate
    linux-node3.example.com:
    ----------
              ID: cmd
        Function: cmd.run
            Name: ls /root
          Result: True
         Comment: Command "ls /root" run
         Started: 11:24:42.752326
        Duration: 11.944 ms
         Changes:   
                  ----------
                  pid:
                      5095
                  retcode:
                      0
                  stderr:
                  stdout:
                      lvs.sh

    Summary for linux-node3.example.com
    ------------
    Succeeded: 1 (changed=1)
    Failed:    0
    ------------
    Total states run:     1
    Total run time:  11.944 ms
    We can already see what is going on. Let's change the master's top file once more and test again.
    [root@linux-node1 base]# cat top.sls 
    base:
      'linux-node3.example.com':                        # now targets node3 only
        - known-hosts.known-hosts
    [root@linux-node1 base]# salt '*' state.highstate
    linux-node3.example.com:
    ----------
              ID: cmd
        Function: cmd.run
            Name: ls /root
          Result: True
         Comment: Command "ls /root" run
         Started: 11:28:20.792475
        Duration: 8.686 ms
         Changes:   
                  ----------
                  pid:
                      5283
                  retcode:
                      0
                  stderr:
                  stdout:
                      lvs.sh

    Summary for linux-node3.example.com
    ------------
    Succeeded: 1 (changed=1)
    Failed:    0
    ------------
    Total states run:     1
    Total run time:   8.686 ms
    linux-node2.example.com:
    ----------
              ID: states
        Function: no.None
          Result: False
         Comment: No Top file or master_tops data matches found.
         Changes:   

    Summary for linux-node2.example.com
    ------------
    Succeeded: 0
    Failed:    1
    ------------
    Total states run:     1
    Total run time:   0.000 ms
    linux-node1.example.com:
    ----------
              ID: states
        Function: no.None
          Result: False
         Comment: No Top file or master_tops data matches found.
         Changes:   

    Summary for linux-node1.example.com
    ------------
    Succeeded: 0
    Failed:    1
    ------------
    Total states run:     1
    Total run time:   0.000 ms
    ERROR: Minions returned with non-zero exit code
    This time node1 and node2 hit the earlier error, while node3 executed the top defined on node2. Time for Beifang's summary!

    Beifang's summary:
    Each minion looks up and executes the top file defined on its own master: node1 and node2 follow the top on the master (node1), while node3 follows the one on the syndic (node2).

    "No Top file or master_tops data matches found" appears because every run here was salt '*' state.highstate, which tells every machine to look up its top file and act on whatever matches. The first time, node3 failed because the top file it obeys lives on the syndic, and at that point the syndic had no top file, so nothing matched. The second time, I narrowed the master's top from '*' down to node3 alone, with nothing for node1 and node2; they received the instruction, went looking for a match in their top, found none, and reported the same error node3 did before.

    If that is still hard to digest, there is a standard practice that makes the problem disappear: copy the master's file roots to every syndic, so that every operation is consistent, exactly as if there were no proxy layer. A minimal example follows.
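
    A minimal way to do that copy, assuming the default /srv/salt layout on both sides (a sketch, not from the original run):

    [root@linux-node1 ~]# rsync -av /srv/salt/ root@192.168.56.12:/srv/salt/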

    3. Top files are a hassle. What happens if I just run an sls file directly?

    [root@linux-node1 base]# salt '*' state.sls known-hosts.known-hosts
    linux-node3.example.com:
        Data failed to compile:
    ----------
        No matching sls found for 'known-hosts.known-hosts' in env 'base'
    linux-node2.example.com:
    ----------
              ID: known-hosts
        Function: file.managed
            Name: /root/.ssh/known_hosts
          Result: True
         Comment: File /root/.ssh/known_hosts is in the correct state
         Started: 11:46:03.968021
        Duration: 870.596 ms
         Changes:   

    Summary for linux-node2.example.com
    ------------
    Succeeded: 1
    Failed:    0
    ------------
    Total states run:     1
    Total run time: 870.596 ms
    linux-node1.example.com:
    ----------
              ID: known-hosts
        Function: file.managed
            Name: /root/.ssh/known_hosts
          Result: True
         Comment: File /root/.ssh/known_hosts is in the correct state
         Started: 11:46:05.003462
        Duration: 42.02 ms
         Changes:   

    Summary for linux-node1.example.com
    ------------
    Succeeded: 1
    Failed:    0
    ------------
    Total states run:     1
    Total run time:  42.020 ms
    ERROR: Minions returned with non-zero exit code
    node3 errors again, with "No matching sls found for 'known-hosts.known-hosts' in env 'base'". By now no verification is needed to know what happened, so I will just repeat the rule:

    Each minion looks up and executes the sls defined on its own master: node1 and node2 use the master's (node1's), while node3 uses the syndic's (node2's).

    So if you defined a known-hosts sls on the syndic that performed some other operation, node3 would follow that one instead. Nobody should run things in such a tangled way. Hence: keep every syndic's file tree identical to the master's!
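
    An alternative to copying files around (my suggestion, and the repository URL is a placeholder): point every master at the same git repository with gitfs, set in each master's /etc/salt/master, so the file trees cannot drift apart:

    fileserver_backend:
      - gitfs
    gitfs_remotes:
      - https://example.com/salt-states.git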