Download:
https://github.com/qingduyu/roe
1. Basic requirements
1.1 Recommended system configuration
Production: 8+ CPU cores (Python is quite CPU-hungry), 16 GB+ RAM (data transfer volume is large), 350 GB+ disk
Testing: 4+ CPU cores, 8 GB+ RAM, 50 GB+ disk
1.2 Base software requirements
Python 2.7 (built from source), MySQL 5.6+ (installation is not covered here for now), Redis (local single-node instance for now, no password, no tutorial; a minimal install sketch follows the yum packages below)
1.3 Packages installed via yum
yum install epel-release
yum install sshpass nmap supervisor
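A minimal sketch of the local single-node Redis mentioned in 1.2, assuming CentOS 7 with the EPEL repository enabled above and systemd; adjust to your own environment:
yum install redis
systemctl start redis
systemctl enable redis
redis-cli ping    # should answer PONG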
1.4 Generate the local SSH key
ssh-keygen    # press Enter through all prompts
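The generated key is normally pushed to the hosts you want to manage. A hypothetical example using the sshpass package installed above (the host, user and password are placeholders, not values from this project):
sshpass -p 'RemotePassword' ssh-copy-id -o StrictHostKeyChecking=no root@192.168.1.10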
2. Python 2.7 virtual environment (decided to include it after all)
2.1 Build Python 2.7
Install the build dependencies first:
yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gcc make
Download Python 2.7.15 yourself and build it:
cd Python-2.7.15
./configure --prefix=/usr/local/python2.7
make -j 4
make install
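A quick sanity check that the interpreter landed in the expected prefix:
/usr/local/python2.7/bin/python -V    # should print Python 2.7.15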
Install pip:
wget https://bootstrap.pypa.io/get-pip.py
/usr/local/python2.7/bin/python get-pip.py
Set up the Python virtual environment:
/usr/local/python2.7/bin/pip install virtualenv
mkdir /data/python-env
/usr/local/python2.7/bin/virtualenv -p /usr/local/python2.7/bin/python2.7 --distribute /data/python-env/
source /data/python-env/bin/activate
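With the virtualenv activated, confirm that python and pip now resolve inside /data/python-env:
which python    # should be /data/python-env/bin/python
python -V       # should print Python 2.7.15
pip -V          # pip should report the /data/python-env path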
3. MySQL configuration
The database connection settings can be found in roeops/settings.py
create database roeops;
create user roeops identified by 'roeops123';
grant all on roeops.* to roeops;
Import roeops.sql (see the sketch below).
Note: the dump already contains an initialized admin user. Do not run Django's makemigrations again unless you know exactly what you are doing.
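A minimal import sketch, assuming the database, user and password created above and that roeops.sql sits in the project root:
mysql -uroeops -proeops123 roeops < roeops.sql
# make sure roeops/settings.py points at the same database, user and password
grep -n -A 10 "DATABASES" roeops/settings.py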
4. Configure and start roeops (this is the key step)
Everything below must be run inside the Python virtual environment.
By default roeops is deployed under
/data/PycharmProject/roeops
Adjust the directory as needed, then install the dependencies with pip:
pip install -r requirements.txt
Start roeops:
nohup python manage.py runserver 0.0.0.0:80 &
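A quick check that the dev server is answering (binding to port 80 normally requires root; nohup.out is written to the directory you started from):
curl -I http://127.0.0.1/
tail -n 50 nohup.out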
5. Configure Celery (this is required; adjust the directories to your own setup)
Append the following to the end of /etc/supervisord.conf (vim /etc/supervisord.conf):
; three workers, one per queue
[program:celery-worker-default]
command=/data/python-env/bin/python manage.py celery worker --loglevel=info -E -Q default
directory=/data/PycharmProject/roeops
stdout_logfile=/data/PycharmProject/roeops/logs/celery-worker-default.log
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
numprocs=1

[program:celery-worker-ansible]
command=/data/python-env/bin/python manage.py celery worker --loglevel=info -E -Q ansible
directory=/data/PycharmProject/roeops
stdout_logfile=/data/PycharmProject/roeops/logs/celery-worker-ansible.log
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
numprocs=1

[program:celery-worker-database]
command=/data/python-env/bin/python manage.py celery worker --loglevel=info -E -Q database
directory=/data/PycharmProject/roeops
stdout_logfile=/data/PycharmProject/roeops/logs/celery-worker-database.log
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
numprocs=1

; Celery tasks are normally triggered by a producer, but some need to fire on their own;
; the Beat process handles this, automatically triggering scheduled/periodic tasks.
[program:celery-beat]
command=/data/python-env/bin/python manage.py celery beat
directory=/data/PycharmProject/roeops
stdout_logfile=/data/PycharmProject/roeops/logs/celery-beat.log
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
numprocs=1

; takes snapshots of events, i.e. monitors task activity; default is one per second, adjustable
[program:celery-cam]
command=/data/python-env/bin/python manage.py celerycam --frequency=0.5
directory=/data/PycharmProject/roeops
stdout_logfile=/data/PycharmProject/roeops/logs/celery-celerycam.log
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
numprocs=1
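The stdout_logfile paths above must exist before supervisord starts the programs; assuming the default deployment directory:
mkdir -p /data/PycharmProject/roeops/logs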
# Start Celery under supervisord
/usr/bin/supervisord -c /etc/supervisord.conf
supervisorctl status    # everything should be RUNNING; if a program did not start, check its log file, the error will be there
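To troubleshoot a failing worker, tail its log file as configured above, e.g.:
tail -n 50 /data/PycharmProject/roeops/logs/celery-worker-default.log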
# Managing supervisord
supervisorctl update                 # apply changes after editing the config file
supervisorctl reload                 # reload after changing tasks in the Python code, otherwise the task changes will not take effect
supervisorctl start program_name     # start a single program (program_name = program name from the config file)
supervisorctl                        # interactive shell; lists the supervised processes
supervisorctl stop program_name      # stop a single program (program_name = program name from the config file)
supervisorctl restart program_name   # restart a single program (program_name = program name from the config file)
supervisorctl stop all               # stop all programs