https://www.cnblogs.com/zhoujinyi/p/6023839.html
Background:
For an introduction to Fabric, see the official site. In short, it is a Python-based library/tool for batch server management: Fabric uses SSH (via the paramiko library) to run tasks, upload, and download across many servers at once. Before using Fabric I handled these needs directly with Python's paramiko module; after comparing the two, Fabric turned out to be much more powerful than raw paramiko. See the official documentation for detailed usage. Below is a template for a remote-operations class wrapped around paramiko (apt-get install python-paramiko):
#!/usr/bin/python
# -*- encoding: utf-8 -*-
import paramiko
import sys

reload(sys)
sys.setdefaultencoding('utf8')

class Remote_Ops():
    def __init__(self, hostname, ssh_port, username='', password=''):
        self.hostname = hostname
        self.ssh_port = ssh_port
        self.username = username
        self.password = password

    # log in with a password and run a command
    def ssh_connect_exec(self, cmd):
        try:
            ssh_key = paramiko.SSHClient()
            ssh_key.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh_key.connect(hostname=self.hostname, port=self.ssh_port, username=self.username,
                            password=self.password, timeout=10)
            # paramiko.util.log_to_file('syslogin.log')
        except Exception, e:
            print('Connect Error:ssh %s@%s: %s' % (self.username, self.hostname, e))
            exit()
        stdin, stdout, stderr = ssh_key.exec_command(cmd, get_pty=True)
        # feed the password to sudo
        stdin.write(self.password + '\n')
        stdin.flush()
        err_list = stderr.readlines()
        if len(err_list) > 0:
            print 'ERROR:' + err_list[0]
            exit()
        # print stdout.read()
        for item in stdout.readlines()[2:]:
            print item.strip()
        ssh_key.close()

    # log in with an SSH key and run a command
    def ssh_connect_keyfile_exec(self, file_name, cmd):
        try:
            ssh_key = paramiko.SSHClient()
            ssh_key.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh_key.connect(hostname=self.hostname, port=self.ssh_port, key_filename=file_name, timeout=10)
            # paramiko.util.log_to_file('syslogin.log')
        except Exception, e:
            print e
            exit()
        stdin, stdout, stderr = ssh_key.exec_command(cmd)
        err_list = stderr.readlines()
        if len(err_list) > 0:
            print 'ERROR:' + err_list[0]
            exit()
        for item in stdout.readlines():
            print item.strip()
        ssh_key.close()

if __name__ == '__main__':
    # password login:
    test = Remote_Ops('10.211.55.11', 22, 'zjy', 'zhoujinyi')
    test.ssh_connect_exec('sudo ls -lh /var/lib/mysql/')
    # ssh key login (needs to run as root):
    file_name = '/var/root/.ssh/id_rsa'
    test1 = Remote_Ops('10.211.55.11', 22)
    test1.ssh_connect_keyfile_exec(file_name, 'apt-get update')
For more on paramiko, see the official documentation and the posts python运维之paramiko and python远程连接paramiko 模块. This article covers how to use Fabric.
Notes:
1. Installation
$ pip install fabric
or
$ sudo apt-get install fabric
2. Options (fab -h)
~$ fab -h
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...   # command-line usage

Options:
  -h, --help            show this help message and exit
  -d NAME, --display=NAME
                        print detailed info about command NAME
  -F FORMAT, --list-format=FORMAT
                        formats --list, choices: short, normal, nested
  -I, --initial-password-prompt
                        Force password prompt up-front
  --initial-sudo-password-prompt
                        Force sudo password prompt up-front
  -l, --list            print list of possible commands and exit   # list the task functions that can be run
  --set=KEY=VALUE,...   comma separated KEY=VALUE pairs to set Fab env vars
  --shortlist           alias for -F short --list
  -V, --version         show program's version number and exit
  -a, --no_agent        don't use the running SSH agent
  -A, --forward-agent   forward local agent to remote end
  --abort-on-prompts    abort instead of prompting (for password, host, etc)   # abort whenever a prompt would appear
  -c PATH, --config=PATH
                        specify location of config file to use
  --colorize-errors     Color error output
  -D, --disable-known-hosts
                        do not load user known_hosts file
  -e, --eagerly-disconnect
                        disconnect from hosts as soon as possible
  -f PATH, --fabfile=PATH
                        python module file to import, e.g. '../other.py'   # file for fab to run, defaults to fabfile.py
  -g HOST, --gateway=HOST
                        gateway host to connect through   # bastion (jump) host address
  --gss-auth            Use GSS-API authentication
  --gss-deleg           Delegate GSS-API client credentials or not
  --gss-kex             Perform GSS-API Key Exchange and user authentication
  --hide=LEVELS         comma-separated list of output levels to hide
  -H HOSTS, --hosts=HOSTS
                        comma-separated list of hosts to operate on
  -i PATH               path to SSH private key file. May be repeated.
  -k, --no-keys         don't load private key files from ~/.ssh/
  --keepalive=N         enables a keepalive every N seconds
  --linewise            print line-by-line instead of byte-by-byte
  -n M, --connection-attempts=M
                        make M attempts to connect before giving up
  --no-pty              do not use pseudo-terminal in run/sudo
  -p PASSWORD, --password=PASSWORD
                        password for use with authentication and/or sudo   # remote login password, also used for sudo
  -P, --parallel        default to parallel execution method   # run tasks in parallel by default
  --port=PORT           SSH connection port   # default SSH port
  -r, --reject-unknown-hosts
                        reject unknown hosts   # reject hosts not in known_hosts
  --sudo-password=SUDO_PASSWORD
                        password for use with sudo only
  --system-known-hosts=SYSTEM_KNOWN_HOSTS
                        load system known_hosts file before reading user known_hosts
  -R ROLES, --roles=ROLES
                        comma-separated list of roles to operate on   # roles group hosts so different task functions can target them
  -s SHELL, --shell=SHELL
                        specify a new shell, defaults to '/bin/bash -l -c'
  --show=LEVELS         comma-separated list of output levels to show
  --skip-bad-hosts      skip over hosts that can't be reached
  --skip-unknown-tasks  skip over unknown tasks
  --ssh-config-path=PATH
                        Path to SSH config file
  -t N, --timeout=N     set connection timeout to N seconds
  -T N, --command-timeout=N
                        set remote command timeout to N seconds
  -u USER, --user=USER  username to use when connecting to remote hosts
  -w, --warn-only       warn, instead of abort, when commands fail
  -x HOSTS, --exclude-hosts=HOSTS
                        comma-separated list of hosts to exclude
  -z INT, --pool-size=INT
                        number of concurrent processes to use in parallel mode
Usage
①: Command-line interface
~$ fab -u zjy -p zhoujinyi -H 10.211.55.9,10.211.55.11 -- 'ls -lh /tmp/'
Result:
[10.211.55.9] Executing task '<remainder>'
[10.211.55.9] run: ls -lh /tmp/
[10.211.55.9] out: 总用量 16K
[10.211.55.9] out: -rw-rw-r-- 1 zjy zjy 853 11月 10 18:42 change_pwd.py
[10.211.55.9] out:
[10.211.55.11] Executing task '<remainder>'
[10.211.55.11] run: ls -lh /tmp/
[10.211.55.11] out: 总用量 12K
[10.211.55.11] out: -rw-rw-r-- 1 zjy zjy 2.4K 11月 10 18:29 remote_ops.py
[10.211.55.11] out:
Running ad-hoc commands from the command line is not recommended; it is more convenient and safer to put everything in a script file, as shown in the sketch below and in section ②.
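For reference, the one-liner above could be written as a fabfile task roughly like this (a minimal sketch; the task name list_tmp is just a placeholder, and the hosts/credentials are the ones from the command line):

from fabric.api import env, run, task

env.hosts = ['10.211.55.9', '10.211.55.11']   # same hosts as -H above
env.user = 'zjy'                              # same as -u
env.password = 'zhoujinyi'                    # same as -p

@task
def list_tmp():
    # same command as the ad-hoc one-liner above
    run('ls -lh /tmp/')

Saved as fabfile.py, it would be run with fab list_tmp (or fab -f <file>.py list_tmp if stored under another name).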
②: Script
Note: by default Fabric looks for a file named fabfile.py, e.g.:
#!/usr/bin/python
from fabric.api import local, lcd

def lsfab():
    with lcd('/tmp/'):
        local('ls')
Result:
[localhost] local: ls
com.apple.launchd.ESurbfatee   com.apple.launchd.ykbSkcZdfZ   mykey.txt   parallels_crash_dumps

Done.
If the tasks live in another file, specify it with -f:
~$ mv fabfile.py ttt.py
~$ fab -f ttt.py lsfab
③: Arguments
A task function can take arguments:
#!/usr/bin/python
from fabric.api import *

def hello(name, age):
    print "hello,%s,%s" % (name, age)
Run the task with arguments:
~$ fab hello:zhoujy,123
hello,zhoujy,123

Done.
④: Module overview
1: from fabric.api import *
local      # run a local command, e.g. local('uname -s')
lcd        # change the local directory, e.g. lcd('/home')
cd         # change the remote directory, e.g. cd('/var/logs')
run        # run a remote command, e.g. run('free -m')
sudo       # run a remote command via sudo, e.g. sudo('/etc/init.d/httpd start')
put        # upload a local file to the remote host, e.g. put('/home/user.info','/data/user.info')
get        # download a file from the remote host, e.g. get('/data/user.info','/home/user.info')
prompt     # read user input, e.g. prompt('please input user password:')
confirm    # ask for confirmation, e.g. confirm('Test failed,Continue[Y/N]?')
reboot     # reboot the remote host, e.g. reboot()
@task      # decorator marking a function as callable by fab; undecorated functions are invisible to fab and can hold pure business logic
@runs_once # decorator marking a function to run only once, no matter how many hosts are targeted
@roles()   # run against the specified role group, defined in env.roledefs
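To tie these together, here is a minimal sketch of a fabfile using several of the calls above (the host, password, and file paths are placeholders; note that confirm actually lives in fabric.contrib.console, as the later examples show):

from fabric.api import env, task, cd, run, sudo, put, get, prompt
from fabric.contrib.console import confirm

env.hosts = ['zjy@10.211.55.11:22']   # placeholder host
env.password = 'zhoujinyi'            # placeholder password

@task
def demo():
    with cd('/tmp/'):                          # switch remote directory
        run('free -m')                         # plain remote command
        sudo('/etc/init.d/mysql status')       # remote command via sudo
    put('/home/user.info', '/data/user.info')  # upload
    get('/data/user.info', '/home/user.info')  # download
    note = prompt('please input a note:')      # read user input
    if not confirm('Continue?'):               # ask for confirmation
        return
    print note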
2: from fabric.colors import *
print blue(text)
print cyan(text)
print green(text)
print magenta(text)
print red(text)
print white(text)
print yellow(text)

3:
...
(to be filled in as needed ...)
Basic examples
Example 1: local operations
from fabric.api import *

def lsfab():
    with lcd('/tmp/'):   # change the local directory
        local('ls')      # run a local command

def host_name():
    local('uname -s')    # run a local command
Use -l to list the runnable task functions:
~$ fab -f fab_ops.py -l
Available commands:

    host_name
    lsfab
Either of the two task functions above can be run:
~$ fab -f fab_ops.py host_name
[localhost] local: uname -s
Darwin

Done.
Combine the two functions and expose only a single entry-point task (@task):
from fabric.api import *

def lsfab():
    with lcd('/tmp/'):
        local('ls')

def host_name():
    local('uname -s')

@task
def go():
    lsfab()
    host_name()
Run it:
~$ fab -f fab_ops.py go
[localhost] local: ls
com.apple.launchd.ESurbfatee   com.apple.launchd.ykbSkcZdfZ   parallels_crash_dumps
[localhost] local: uname -s
Darwin

Done.
Example 2: remote operations and env variables
from fabric.api import *

# remote servers
env.hosts = [
    '10.211.55.9',
    '10.211.55.11'
]
# port
env.port = '22'
# user
env.user = 'zjy'
# password; all remote servers share the same one
env.password = 'zhoujinyi'

def lsfab():
    with cd('/tmp/'):   # change the remote directory
        run('ls')       # run a remote command

def host_name():
    run('uname -s')

@task
def go():
    lsfab()
    host_name()
The output shows the task being executed, each command, and its result.
~$ fab -f fab_ops.py go
[10.211.55.9] Executing task 'go'
[10.211.55.9] run: ls
[10.211.55.9] out: 1  2  3
[10.211.55.9] out:
[10.211.55.9] run: uname -s
[10.211.55.9] out: Linux
[10.211.55.9] out:
[10.211.55.11] Executing task 'go'
[10.211.55.11] run: ls
[10.211.55.11] out: a  b  c  mongodb-27017.sock
[10.211.55.11] out:
[10.211.55.11] run: uname -s
[10.211.55.11] out: Linux
[10.211.55.11] out:

Done.
Disconnecting from 10.211.55.11... done.
Disconnecting from 10.211.55.9... done.
What if the remote machines have different passwords? Use env.passwords instead of env.password; the keys are in user@host:port form and the values are the passwords:
env.passwords = {
    'zjy@10.211.55.9:22' : 'zjy',
    'zjy@10.211.55.11:22': 'zhoujinyi',
}
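For the lookup to match, the hosts should also be listed in the same user@host:port form as the dictionary keys (Example 4 below does exactly this). A minimal sketch, reusing the hosts and passwords above:

from fabric.api import env, run, task

env.hosts = ['zjy@10.211.55.9:22', 'zjy@10.211.55.11:22']   # user@host:port, matching the keys below
env.passwords = {
    'zjy@10.211.55.9:22' : 'zjy',
    'zjy@10.211.55.11:22': 'zhoujinyi',
}

@task
def uptime():
    # each host's password is picked from env.passwords automatically
    run('uptime')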
Example 3: how do you make different groups of servers run different operations, e.g. DB servers and web servers each doing their own work? Define role groups with env.roledefs, then use execute() to run different tasks against different roles.
#!/usr/bin/python
# -*- encoding: utf-8 -*-
from fabric.api import *

# Define roles; servers that perform the same operations go in one group.
# Users and ports differ between servers, so specify user, IP and port in the role definition.
env.roledefs = {
    'dbserver': ['zjy@10.211.55.9:22', 'zjy@10.211.55.11:22'],
    'webserver': ['zhoujy@192.168.200.25:221'],
}
# passwords, used when the servers have different passwords; keys are user@host:port
env.passwords = {
    'zjy@10.211.55.9:22' : 'zjy',
    'zjy@10.211.55.11:22': 'zhoujinyi',
    'zhoujy@192.168.200.25:221': '123456',
}

@task                # entry point
@roles('dbserver')   # role decorator
def get_memory():
    run('free -m')

@task
@roles('webserver')
def mkfile_task():
    with cd('/home/zhoujy/'):
        run('touch xxxx.log')
Result: get_memory runs on the hosts in dbserver, while mkfile_task runs on the hosts in webserver.
With env.roledefs and env.passwords supplying the usernames, ports and passwords, the two tasks can again be combined behind a single @task entry point. Because they are bound to different roles, calling them from one task requires execute(); this lets a single task operate on several groups of remote machines.
... ...

@task
def go():
    execute(get_memory)
    execute(mkfile_task)
Result:
[zjy@10.211.55.9:22] Executing task 'get_memory'
[zjy@10.211.55.9:22] run: free -m
[zjy@10.211.55.9:22] out:              total       used       free     shared    buffers     cached
[zjy@10.211.55.9:22] out: Mem:           990        749        241          0         17        136
[zjy@10.211.55.9:22] out: -/+ buffers/cache:        595        395
[zjy@10.211.55.9:22] out: Swap:         1021          0       1021
[zjy@10.211.55.9:22] out:
[zjy@10.211.55.11:22] Executing task 'get_memory'
[zjy@10.211.55.11:22] run: free -m
[zjy@10.211.55.11:22] out:              total       used       free     shared    buffers     cached
[zjy@10.211.55.11:22] out: Mem:          3949        943       3006          0         13        230
[zjy@10.211.55.11:22] out: -/+ buffers/cache:        699       3249
[zjy@10.211.55.11:22] out: Swap:         1021          0       1021
[zjy@10.211.55.11:22] out:
[zhoujy@192.168.200.25:221] Executing task 'mkfile_task'
[zhoujy@192.168.200.25:221] run: touch fabric.log

Done.
Disconnecting from zhoujy@192.168.200.25:221... done.
Disconnecting from zjy@10.211.55.11... done.
Disconnecting from zjy@10.211.55.9... done.
Example 4: running on multiple servers in parallel with @parallel
#!/usr/bin/python
# -*- encoding: utf-8 -*-
from fabric.api import *
# If the servers use different ports or usernames, env.port/env.user cannot be set globally;
# instead put user@IP:port directly into env.hosts.
env.hosts = ['zjy@10.211.55.9:22', 'zjy@10.211.55.11:22', 'zhoujy@192.168.200.25:221']
# passwords, used when the servers have different passwords; keys are user@host:port
env.passwords = {
    'zjy@10.211.55.9:22' : 'zjy',
    'zjy@10.211.55.11:22': 'zhoujinyi',
    'zhoujy@192.168.200.25:221': '123456',
}

@task       # entry point
@parallel
def get_memory():
    run('free -m')
Result: the task runs on all hosts at the same time, which speeds up execution.
[zjy@10.211.55.9:22] Executing task 'get_memory'
[zjy@10.211.55.11:22] Executing task 'get_memory'
[zhoujy@192.168.200.25:221] Executing task 'get_memory'
[zhoujy@192.168.200.25:221] run: free -m
[zjy@10.211.55.11:22] run: free -m
[zjy@10.211.55.9:22] run: free -m
[zjy@10.211.55.9:22] out:              total       used       free     shared    buffers     cached
[zjy@10.211.55.9:22] out: Mem:           990        749        240          0         17        136
[zjy@10.211.55.9:22] out: -/+ buffers/cache:        595        395
[zjy@10.211.55.9:22] out: Swap:         1021          0       1021
[zjy@10.211.55.9:22] out:
[zjy@10.211.55.11:22] out:              total       used       free     shared    buffers     cached
[zjy@10.211.55.11:22] out: Mem:          3949        943       3006          0         13        230
[zjy@10.211.55.11:22] out: -/+ buffers/cache:        699       3249
[zjy@10.211.55.11:22] out: Swap:         1021          0       1021
[zjy@10.211.55.11:22] out:
[zhoujy@192.168.200.25:221] out:              total       used       free     shared    buffers     cached
[zhoujy@192.168.200.25:221] out: Mem:          3993       2747       1245          0        464       1271
[zhoujy@192.168.200.25:221] out: -/+ buffers/cache:       1011       2981
[zhoujy@192.168.200.25:221] out: Swap:            0          0          0
[zhoujy@192.168.200.25:221] out:

Done.
Example 5: controlling output levels with settings(hide(...)):
#!/usr/bin/python
# -*- encoding: utf-8 -*-
from fabric.api import *
from fabric.colors import *

env.hosts = ['zjy@10.211.55.9:22', 'zjy@10.211.55.11:22', 'zhoujy@192.168.200.25:221']
# passwords, used when the servers have different passwords; keys are user@host:port
env.passwords = {
    'zjy@10.211.55.9:22' : 'zjy',
    'zjy@10.211.55.11:22': 'zhoujinyi',
    'zhoujy@192.168.200.25:221': '123456',
}

@runs_once   # run only once, so the prompt is not repeated for every host
def put_file():
    # read the file name from user input
    return prompt("Enter the file to upload (absolute path), defaults to a file in the current directory:", default="")

@task
def put_task():
    local_path = put_file()
    remote_path = "/tmp/"
    put(local_path, remote_path)   # upload
    # output-level settings: hide the listed categories
    with settings(hide('running', 'stdout', 'stderr', 'warnings', 'everything')):
        result = put(local_path, remote_path)
        print yellow("%s upload finished...") % env.host
With hiding:
~$ fab -f zz.py put_task
[zjy@10.211.55.9:22] Executing task 'put_task'
Enter the file to upload (absolute path), defaults to a file in the current directory:  /Users/jinyizhou/fabric_script/single_fab_ops.py
[zjy@10.211.55.9:22] put: /Users/jinyizhou/fabric_script/single_fab_ops.py -> /tmp/single_fab_ops.py
10.211.55.9 upload finished...
10.211.55.11 upload finished...
192.168.200.25 upload finished...

Done.
Disconnecting from zhoujy@192.168.200.25:221... done.
Disconnecting from zjy@10.211.55.11... done.
Disconnecting from zjy@10.211.55.9... done.
Without hiding:
~$ fab -f zz.py put_task
[zjy@10.211.55.9:22] Executing task 'put_task'
Enter the file to upload (absolute path), defaults to a file in the current directory:  /Users/jinyizhou/fabric_script/single_fab_ops.py
[zjy@10.211.55.9:22] put: /Users/jinyizhou/fabric_script/single_fab_ops.py -> /tmp/single_fab_ops.py
[zjy@10.211.55.9:22] put: /Users/jinyizhou/fabric_script/single_fab_ops.py -> /tmp/single_fab_ops.py
10.211.55.9 upload finished...
[zjy@10.211.55.11:22] Executing task 'put_task'
[zjy@10.211.55.11:22] put: /Users/jinyizhou/fabric_script/single_fab_ops.py -> /tmp/single_fab_ops.py
[zjy@10.211.55.11:22] put: /Users/jinyizhou/fabric_script/single_fab_ops.py -> /tmp/single_fab_ops.py
10.211.55.11 upload finished...
[zhoujy@192.168.200.25:221] Executing task 'put_task'
[zhoujy@192.168.200.25:221] put: /Users/jinyizhou/fabric_script/single_fab_ops.py -> /tmp/single_fab_ops.py
[zhoujy@192.168.200.25:221] put: /Users/jinyizhou/fabric_script/single_fab_ops.py -> /tmp/single_fab_ops.py
192.168.200.25 upload finished...

Done.
Disconnecting from zhoujy@192.168.200.25:221... done.
Disconnecting from zjy@10.211.55.11... done.
Disconnecting from zjy@10.211.55.9... done.
Example 6: error handling and capturing:
Error handling: settings(warn_only=True) continues past a failed command, while settings(warn_only=False) aborts; False is the default.
from fabric.api import *

def lsfab():
    with lcd('/tmp/'):
        local('lsa')    # nonexistent command: the error aborts the run
    print "xxxxxxx"

def lsfab1():
    with settings(warn_only=True):   # nonexistent command: the error is reported but execution continues
        with lcd('/tmp/'):
            local('lsa')
    print "xxxxxx"
Result:
~$ fab -f zz.py lsfab
[localhost] local: lsa
/bin/sh: lsa: command not found

Fatal error: local() encountered an error (return code 127) while executing 'lsa'

Aborting.

~$ fab -f zz.py lsfab1
[localhost] local: lsa
/bin/sh: lsa: command not found

Warning: local() encountered an error (return code 127) while executing 'lsa'

xxxxxx

Done.
Error capturing: with if result.failed: abort(), the task exits as soon as a failure is caught; with if result.failed and not confirm(""), you are asked whether to abort or continue.
#!/usr/bin/python
# -*- encoding: utf-8 -*-
from fabric.colors import *
from fabric.contrib.console import confirm
from fabric.api import *

def lsfab():
    with settings(warn_only=True):
        with lcd('/Users/jinyizhou/uuii/'):
            result = local('lsn', capture=True)   # use capture=True to get the result of local()
            if result.failed:                     # even with warn_only=True, abort once the failure is caught
                abort(red("error..."))
            print 'xxxxx'

def lsfab1():
    with settings(warn_only=True):
        with lcd('/Users/jinyizhou/uuii/'):
            result = local('lsn', capture=True)
            if result.failed and not confirm("failed. Continue?"):   # even with warn_only=True, ask whether to abort or continue
                abort(red("error..."))
            print 'xxxxx'
Result:
~$ fab -f zz.py lsfab
[localhost] local: lsn
/bin/sh: lsn: command not found

Warning: local() encountered an error (return code 127) while executing 'lsn'

Fatal error: error...

Aborting.

~$ fab -f zz.py lsfab1
[localhost] local: lsn
/bin/sh: lsn: command not found

Warning: local() encountered an error (return code 127) while executing 'lsn'

failed. Continue? [Y/n] Y
xxxxx

Done.
Example 7: connecting to remote machines through a gateway (jump host) with env.gateway
#!/usr/bin/python
# -*- encoding: utf-8 -*-
from fabric.api import *
from fabric.colors import *
from fabric.contrib.console import confirm

# gateway (jump host); its port and password differ from the other machines, so use the user@ip:port format
env.gateway = 'zjy@10.211.55.9:22'
# target servers
env.hosts = ['zjy@10.211.55.11:22', 'zhoujy@192.168.200.25:221']
env.passwords = {
    'zjy@10.211.55.9:22' : 'zjy',
    'zjy@10.211.55.11:22': 'zhoujinyi',
    'zhoujy@192.168.200.25:221': '123456',
}
# connection timeout
env.timeout = 30
# remote command timeout
env.command_timeout = 30

@runs_once   # run only once, so the prompt is not repeated for every host
def put_file():
    return prompt("Enter the file to upload (absolute path), defaults to a file in the current directory:", default="")

@task
def put_task():
    local_path = put_file()
    remote_path = "/tmp/"
    put(local_path, remote_path)
    # output-level settings: hide the listed categories
    with settings(hide('running', 'stdout', 'stderr', 'everything'), warn_only=True):
        result = put(local_path, remote_path)
        print yellow("%s upload finished...") % env.host
        if result.failed and not confirm("put file failed,Continue[Y/N]?"):
            abort("Aborting file put task!")
Result: the local script uploads files to the remote machines through the gateway; the local machine needs no direct connectivity to the remote machines.
~$ fab -f zz.py put_task
[zjy@10.211.55.11:22] Executing task 'put_task'
Enter the file to upload (absolute path), defaults to a file in the current directory:  /Users/jinyizhou/fabric_script/gateway.txt
[zjy@10.211.55.11:22] put: /Users/jinyizhou/fabric_script/gateway.txt -> /tmp/gateway.txt
10.211.55.11 upload finished...
192.168.200.25 upload finished...

Done.
Disconnecting from zhoujy@192.168.200.25:221... done.
Disconnecting from zjy@10.211.55.11... done.
Disconnecting from zjy@10.211.55.9... done.    # the gateway is disconnected last
Example 8: logging in with an SSH key
#!/usr/bin/python
# -*- encoding: utf-8 -*-
from fabric.api import *
from fabric.colors import *
from fabric.contrib.console import confirm

# gateway (jump host); its port and password differ from the other machines, so use the user@ip:port format
env.gateway = 'zjy@10.211.55.9:22'
# target servers
env.hosts = ['zhoujy@192.168.200.25:221', 'zhoujy@192.168.200.102:222']
env.passwords = {
    'zhoujy@192.168.200.25:221': '123456',
    'zhoujy@192.168.200.102:222': '123456',
}
#env.use_ssh_config = True
# log in to the gateway with an SSH key
env.key_filename = ['~/.ssh/id_rsa']
# connection timeout
env.timeout = 30
# remote command timeout
env.command_timeout = 30

@runs_once   # run only once, so the prompt is not repeated for every host
def put_file():
    return prompt("Enter the file to upload (absolute path):", default="/home/zjy/ttxx.py")

@task
def put_task():
    local_path = put_file()
    remote_path = "/tmp/"
    put(local_path, remote_path)
    # output-level settings: hide the listed categories
    with settings(hide('running', 'stdout', 'stderr', 'everything'), warn_only=True):
        result = put(local_path, remote_path)
        print yellow("%s upload finished...") % env.host
        if result.failed and not confirm("put file failed,Continue[Y/N]?"):
            abort("Aborting file put task!")
Result: the machine running the script logs in to the GATEWAY with an SSH key, and the GATEWAY then logs in to the target machines with username/password (I have not verified whether the targets could also be reached via SSH key here).
[zhoujy@192.168.200.25:221] Executing task 'put_task'
Enter the file to upload (absolute path): [/home/zjy/ttxx.py]
[zhoujy@192.168.200.25:221] put: /home/zjy/ttxx.py -> /tmp/ttxx.py
192.168.200.25 upload finished...
192.168.200.102 upload finished...

Done.
Disconnecting from zhoujy@192.168.200.25:221... done.
Disconnecting from zhoujy@192.168.200.102:222... done.
Disconnecting from 10.211.55.9... done.    # finally disconnect from the GATEWAY
How to set up SSH key-based login is described in the MySQL MHA 搭建&测试 post; it is worth a look.
Applications
Application 1: batch-changing server passwords with task arguments
#!/usr/bin/python
# -*- coding: utf-8 -*-
from fabric.api import *
import socket
import paramiko
from fabric.colors import *

env.user = 'zjy'
env.password = 'zhoujinyi'
env.hosts = ['10.211.55.11', '10.211.55.9']

def isup(host):
    print 'connecting host: %s' % host
    timeout = socket.getdefaulttimeout()
    socket.setdefaulttimeout(1)
    up = True
    try:
        paramiko.Transport((host, 22))
    except Exception, e:
        up = False
        print '%s down, %s' % (host, e)
    finally:
        socket.setdefaulttimeout(timeout)
        return up

@task
@parallel
def passwd(user, password):
    with settings(hide('running', 'stdout', 'stderr', 'everything'), warn_only=True):
        if isup(env.host):
            # feed the new password to passwd twice, separated by a newline
            sudo("echo -e '%s\n%s' | passwd %s" % (password, password, user))
            print yellow("%s password updated...") % env.host
Result:
~$ fab -f change_pwd.py passwd:zjy,zjy
[10.211.55.11] Executing task 'passwd'
[10.211.55.9] Executing task 'passwd'
connecting host: 10.211.55.9
connecting host: 10.211.55.11
10.211.55.9 password updated...
10.211.55.11 password updated...

Done.
Application 2: remotely starting, stopping and upgrading MySQL, with the related MHA switchover/startup handled automatically.
#!/usr/bin/python
# -*- encoding: utf-8 -*-
#---------------------------------------------------------------------
# Purpose: automated MHA switchover
# Author:  zhoujy
# Created: 2016-11-07
# Update:  2016-11-07
#---------------------------------------------------------------------
from fabric.api import *
from fabric.context_managers import *
from fabric.contrib.console import confirm
from fabric.colors import *
import os
import re
import getpass
import socket
import paramiko

# IP (master/slave) -> project name
MySQL_IP = {
    '192.168.1.7':'ABD',
    '192.168.1.8':'ABD',
    '192.168.1.10':'DEW',
    '192.168.1.5':'DEW',
    '192.168.1.7':'YTE',
    '192.168.1.8':'YTE',
    '192.168.1.5':'POQ',
    '192.168.1.6':'POQ',
    '192.168.1.91':'QWE',
    '192.168.1.72':'QWE',
}

Run_IP = raw_input("Enter the IP to operate on:")
Run_IP_Port = Run_IP + ':45678'

# log in with an SSH key
env.use_ssh_config = True
env.key_filename = ['/root/.ssh/id_rsa']

# replication account password
Rep_Password = "123456"

env.roledefs = {
    'mha_manager': ['192.168.1.19:45678'],
    'db_server': [Run_IP_Port],
}

# MHA log file names
mha_log = {
    'ABD':'ABD_manager.log',
    'DEW':'DEW_manager.log',
    'YTE':'YTE_manager.log',
    'POQ':'POQ_manager.log',
    'QWE':'QWE_manager.log',
}

# MHA config file names
mha_cfg = {
    'ABD':'ABD_mha.cnf',
    'DEW':'DEW_mha.cnf',
    'YTE':'YTE_mha.cnf',
    'POQ':'POQ_mha.cnf',
    'QWE':'QWE_mha.cnf',
}

# MySQL package names by version
mysql_version = {
    '5.6':'percona-server-server-5.6',
    '5.7':'percona-server-server-5.7',
}

program_name = MySQL_IP.get(Run_IP)
logfile_name = mha_log.get(program_name)
cfgfile_name = mha_cfg.get(program_name)

env.timeout = 30
env.command_timeout = 100

pat = re.compile('(.*:)(.*)')
version_pat = re.compile('(.*)(is already the newest version*)')

is_slave = 0
is_alive = 0
is_slave_status = 0
is_newest_version = 0

# refresh the package lists
@task
#@roles('db_all_server')
@roles('db_server')
def update_list_task():
    try:
        with settings(hide('running', 'stdout', 'stderr', 'everything'), warn_only=False):
            sudo("apt-get update")
            print yellow("%s apt-get update finished...") % env.host
    except SystemExit:
        print red("%s apt-get update failed! Please check...") % env.host
        exit()

# check replication delay
#@task
#@roles('db_server')
def get_delaytime():
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        result = local("mysql --login-path=remote_mha --host=%s -e 'show slave status\G' | grep 'Seconds_Behind_Master' " % Run_IP, capture=True)
        if result.failed:
            print red("%s get_delaytime check failed, please investigate!" % Run_IP)
            return -1
        else:
            delaytime = int(result.split(':')[1])
            return delaytime

# check whether the instance is a slave
#@task
#@roles('db_server')
def get_slave():
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        result = local("mysql --login-path=remote_mha --host=%s -e \"show global status like 'Slave_running'\" " % Run_IP, capture=True)
        if result.failed:
            print red("%s get_slave check failed, please investigate!" % Run_IP)
            return -1
        else:
            is_slave = result.split(' ')[2]
            if is_slave == 'ON':
                return 1
            elif is_slave == 'OFF':
                return 0

# check the MySQL version
#@task
#@roles('db_server')
def get_mysql_version():
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        result = local("mysql --login-path=remote_mha --host=%s -e \"show global variables like 'version'\" " % Run_IP, capture=True)
        if result.failed:
            print red("%s get_mysql_version check failed, please investigate!" % Run_IP)
            return -1
        else:
            version = result.split(' ')[2][:3]
            return version

# check whether replication is configured
#@task
#@roles('db_server')
def get_mysql_status():
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        result = local("mysql --login-path=remote_mha --host=%s -e \"show slave status\G \" " % Run_IP, capture=True)
        if result.failed:
            print red("%s get_mysql_status check failed, please investigate!" % Run_IP)
            return -1
        else:
            items = result
            if items:
                return 1
            else:
                return 0

# check whether MySQL is alive
#@task
#@roles('db_server')
def check_mysql_alive():
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        result = local("mysql --login-path=remote_mha --host=%s -e \"show status like 'Uptime' \" " % Run_IP, capture=True)
        if result.failed:
            print red("%s check_mysql_alive check failed, please investigate!" % Run_IP)
            return -1
        else:
            is_alive = result.split(" ")[2]
            if is_alive:
                return 1
            else:
                return 0

# start MySQL
#@task
@roles('db_server')
def start_mysql():
    if Run_IP in MySQL_IP.keys():
        print green("Running start_mysql for project %s ...") % MySQL_IP[Run_IP]
    else:
        print red("IP not found, exiting...")
        exit()
    with settings(hide('running', 'stdout', 'stderr', 'everything'), warn_only=False):
        global is_alive
        global is_slave_status
        is_alive = check_mysql_alive()
        if is_alive > 0:
            is_slave_status = get_mysql_status()
            print red("%s MySQL is already running, cannot start again, exiting...") % env.host
            exit()
        else:
            result = run("/etc/init.d/mysql start")
            print yellow("%s MySQL started...") % env.host
            is_slave_status = get_mysql_status()
            if is_slave_status:
                local("mysql --login-path=remote_mha --host=%s -e 'start slave' " % Run_IP)
                print yellow("%s replication started...") % env.host
            else:
                print red("%s needs a change master operation...") % env.host

# stop MySQL
#@task
@roles('db_server')
def stop_mysql():
    if Run_IP in MySQL_IP.keys():
        print green("Running stop_mysql for project %s ...") % MySQL_IP[Run_IP]
    else:
        print red("IP not found, exiting...")
        exit()
#    try:
    with settings(hide('running', 'stdout', 'stderr', 'everything'), warn_only=True):
        global is_slave
        is_slave = get_slave()
        if is_slave == 1:
            print green("Stopping slave %s of project %s") % (Run_IP, MySQL_IP[Run_IP])
            delaytime = get_delaytime()
            if delaytime > 0:
                print yellow("The slave is %s seconds behind the master!") % delaytime
                is_continue = raw_input("Shut down? [Y/N]:")
                if is_continue.upper() == 'Y':
                    result = run("/etc/init.d/mysql stop")
                    print yellow("%s MySQL stopped...") % env.host
                else:
                    print red("Exiting...")
                    exit()
            else:
                result = run("/etc/init.d/mysql stop")
                print yellow("%s MySQL stopped...") % env.host
        elif is_slave == 0:
            print green("Stopping master %s of project %s") % (Run_IP, MySQL_IP[Run_IP])
            is_continue = raw_input("Shut down? [Y/N]:")
            if is_continue.upper() == 'Y':
                result = run("/etc/init.d/mysql stop")
                print yellow("%s MySQL stopped...") % env.host
            else:
                print red("Exiting...")
                exit()
        else:
            print red("Failed to stop MySQL! Exiting...")
            exit()
#    except SystemExit:
#        print red("%s failed to stop MySQL! Please check...") % env.host

# upgrade MySQL
#@task
@roles('db_server')
def update_mysql():
    if Run_IP in MySQL_IP.keys():
        print blue("Running update_mysql for project %s ...") % MySQL_IP[Run_IP]
    else:
        print red("IP not found, exiting...")
        exit()
#    try:
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=False):
        version = get_mysql_version()
        global is_slave
        global is_newest_version
        is_slave = get_slave()
        if is_slave == 1:
            print green("%s is a slave of project %s") % (Run_IP, MySQL_IP[Run_IP])
            delaytime = get_delaytime()
            if delaytime > 0:
                print yellow("The slave is %s seconds behind the master!") % delaytime
                is_continue = raw_input("Upgrade? [Y/N]:")
                if is_continue.upper() == 'Y':
                    result = sudo("apt-get -y install %s" % mysql_version[version])
                    if version_pat.search(result):
                        is_newest_version = 1
                        print yellow("%s already at the newest version, nothing to install...") % env.host
                    else:
                        print yellow("%s upgrade installed...") % env.host
                else:
                    print red("Exiting...")
                    exit()
            else:
                result = sudo("apt-get -y install %s" % mysql_version[version])
                if version_pat.search(result):
                    is_newest_version = 1
                    print yellow("%s already at the newest version, nothing to install...") % env.host
                else:
                    print yellow("%s upgrade installed...") % env.host
        elif is_slave == 0:
            print green("%s is a master of project %s") % (Run_IP, MySQL_IP[Run_IP])
            result = sudo("apt-get -y install %s " % mysql_version[version])
            if version_pat.search(result):
                is_newest_version = 1
                print yellow("%s already at the newest version, nothing to install...") % env.host
            else:
                print yellow("%s upgrade installed...") % env.host
        else:
            print red("Upgrade error, exiting...")
            exit()
#    except SystemExit:
#        print red("%s upgrade failed! Please check...") % env.host
#        exit()

# clean the MHA log
#@task
#@roles('mha_manager')
def clean_log():
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        with lcd("/usr/local/masterha"):
            result = local("echo 'clean log ...' > %s" % logfile_name, capture=True)
            if result.failed:
                abort(red("Failed to clean the MHA log! Exiting..."))
            print green("MHA log cleaned...")

# read the MHA log
#@task
#@roles('mha_manager')
def get_mha_log():
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        with lcd("/usr/local/masterha"):
            # local() combined with capture=True
            stdout = local("grep 'CHANGE MASTER' %s" % logfile_name, capture=True)
            if stdout.failed:
                abort(red("No CHANGE MASTER found in the MHA log, exiting..."))
            log_info = pat.search(stdout).group(2)
            print green("Found change master statement in the MHA log: ") + yellow('%s') % log_info
            return log_info

# start MHA
#@task
#@roles('mha_manager')
def start_mha():
    with lcd("/usr/local/masterha"):
        # requires supervisor
        local("supervisorctl start %s " % program_name)
        with settings(hide('running', 'stdout', 'stderr'), warn_only=True):
            result = local("ps -ef | grep master* | grep -v color | awk '{print $9,$10,$11}'| grep '%s'" % cfgfile_name, capture=True)
            if result.failed:
                abort(red("Failed to start MHA! Exiting..."))
            print green("MHA started...")

# change master
#@task
@roles('mha_manager')
def change_master():
    mha_log_info = get_mha_log()
    change_info = mha_log_info.replace('xxx', Rep_Password)
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        result = local("mysql --login-path=remote_mha --host=%s -e \" %s \" " % (Run_IP, change_info), capture=True)
        if result.failed:
            abort(red("%s Change Master failed! Please check...") % env.host)
        else:
            print green("%s Change Master finished...") % env.host
    with settings(hide('running', 'warnings', 'stdout', 'stderr'), warn_only=True):
        result = local("mysql --login-path=remote_mha --host=%s -e 'start slave' " % Run_IP, capture=True)
        if result.failed:
            abort(red("%s Start Slave failed! Please check first...") % env.host)
        else:
            print green("%s Start Slave finished...") % env.host
            clean_log()
            is_run_mha = raw_input("Start MHA? [Y/N]")
            if is_run_mha.upper() == 'Y':
                start_mha()
            else:
                print red("Exiting...")
                exit()

# entry point: upgrade
@task
def update_task():
    execute(update_mysql)
    global is_slave
    global is_newest_version
    if not is_newest_version:
        if not is_slave:
            is_continue = raw_input("Run change master? [Y/N]:")
            if is_continue.upper() == 'Y':
                execute(change_master)
            else:
                exit()
        else:
            print green("%s is a slave upgrade, no change master needed, exiting...") % Run_IP
            exit()
    else:
        print green("%s is already at the newest version, exiting...") % Run_IP
        exit()

# entry point: stop
@task
def stop_task():
    execute(stop_mysql)

# entry point: start
@task
def start_task():
    execute(start_mysql)
    global is_slave_status
    if not is_slave_status:
        is_continue = raw_input("Run change master? [Y/N]:")
        if is_continue.upper() == 'Y':
            execute(change_master)
        else:
            exit()
    else:
        print green("%s is a slave start, no change master needed, exiting...") % Run_IP
        exit()
The script runs on Ubuntu and handles Percona Server 5.6/5.7 operations plus the MHA switchover/startup work. Note that passwordless MySQL login relies on mysql_config_editor, and starting MHA remotely as a daemon relies on the Supervisor process-control program.
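As a rough sanity check before running the script, something like the following sketch could verify both prerequisites; the login-path name remote_mha and the default program name are assumptions taken from the script above, not required values:

from fabric.api import local, settings, hide, task

@task
def check_prereqs(program_name='ABD'):
    with settings(hide('running'), warn_only=True):
        # the login path must already exist, e.g. created beforehand with:
        #   mysql_config_editor set --login-path=remote_mha --host=<ip> --user=<user> --password
        login = local("mysql --login-path=remote_mha -e 'select 1'", capture=True)
        print "login-path remote_mha usable:", not login.failed
        # Supervisor must already manage the MHA manager program that start_mha() starts
        status = local("supervisorctl status %s" % program_name, capture=True)
        print "supervisor status for %s:" % program_name, status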
Summary:
The examples and applications above cover Fabric's most common usage. For more advanced features, see the official documentation (there is plenty more there; dig in when you need it). If you run into related problems, feel free to leave a comment and we can discuss and learn together.