E-commerce System Deployment Document
Preface
Next week I am leaving my job for good and still have nothing lined up. Rough.
System tuning parameters
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 60000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 500000
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.ip_local_port_range = 1024 65535
vm.swappiness = 0
vm.max_map_count = 262144
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
# note: net.core.somaxconn is also set further up with a different value; the later entry wins, so keep only one of the two
net.core.somaxconn = 32767
vm.overcommit_memory = 1
echo never > /sys/kernel/mm/transparent_hugepage/enabled   # also add this line to /etc/rc.local so it persists across reboots
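A minimal sketch of applying the settings above, assuming the parameters are collected in /etc/sysctl.conf (standard CentOS 7 paths):
sysctl -p                       # load the kernel parameters
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
chmod +x /etc/rc.local          # rc.local must be executable for the THP setting to apply at boot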
1. Elasticsearch deployment
1.1 Download the required files (on all 3 machines)
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.0.rpm
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.5.0/elasticsearch-analysis-ik-5.5.0.zip
1.2 Install Elasticsearch and the IK analysis plugin (on all 3 machines)
yum install elasticsearch-5.5.0.rpm -y
# After installing Elasticsearch, copy the IK plugin archive elasticsearch-analysis-ik-5.5.0.zip to /usr/share/elasticsearch/plugins and run the following:
cd /usr/share/elasticsearch/plugins
unzip elasticsearch-analysis-ik-5.5.0.zip -d ik && rm -f elasticsearch-analysis-ik-5.5.0.zip
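To confirm the plugin was unpacked correctly, the plugin listing tool that ships with the RPM can be used; it should list the ik directory created above:
/usr/share/elasticsearch/bin/elasticsearch-plugin list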
1.3 Configure Elasticsearch
If Java was installed via yum, nothing needs to be set here; if Java was installed manually into a custom location, JAVA_HOME must be set, otherwise Elasticsearch fails to start with a "could not find java" error.
Below is /etc/sysconfig/elasticsearch; only the JAVA_HOME line is changed:
################################
# Elasticsearch
################################
# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch
# Elasticsearch Java path
# changed here: point JAVA_HOME at the custom JDK install
JAVA_HOME=/usr/local/java/jdk1.8.0_152
# Elasticsearch configuration directory
CONF_DIR=/etc/elasticsearch
# Elasticsearch data directory
#DATA_DIR=/var/lib/elasticsearch
# Elasticsearch logs directory
#LOG_DIR=/var/log/elasticsearch
# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch
# Additional Java OPTS
#ES_JAVA_OPTS=
# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true
################################
# Elasticsearch service
################################
# SysV init.d
#
# When executing the init script, this user will be used to run the elasticsearch service.
# The default value is 'elasticsearch' and is declared in the init.d file.
# Note that this setting is only used by the init script. If changed, make sure that
# the configured user can read and write into the data, work, plugins and log directories.
# For systemd service, the user is usually configured in file /usr/lib/systemd/system/elasticsearch.service
#ES_USER=elasticsearch
#ES_GROUP=elasticsearch
# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5
################################
# System properties
################################
# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65536
# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using Systemd, the LimitMEMLOCK property must be set
# in /usr/lib/systemd/system/elasticsearch.service
#MAX_LOCKED_MEMORY=unlimited
# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144
JVM configuration
Edit /etc/elasticsearch/jvm.options as shown below. Because these 3 machines also host the Redis cluster, the Elasticsearch heap is limited to 16 GB:
-Xms16g
-Xmx16g
elasticsearch.yml configuration
Append the following settings to /etc/elasticsearch/elasticsearch.yml; on the other 2 servers only the node.name and network.host fields differ.
The /data/elasticsearch and /var/log/elasticsearch directories must exist beforehand and be owned by the elasticsearch user (elasticsearch:elasticsearch).
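A sketch of that directory preparation, to run on each node (/var/log/elasticsearch may already exist from the RPM install):
mkdir -p /data/elasticsearch /var/log/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch /var/log/elasticsearch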
## add on ser5-167 (10.80.5.167)
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
cluster.name: shop-system
node.name: ser5-167.tech-idc.net
node.master: true
node.data: true
network.host: 10.80.5.167
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.80.5.167", "10.80.5.168","10.80.5.169"]
discovery.zen.minimum_master_nodes: 2
## add on ser5-168 (10.80.5.168)
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
cluster.name: shop-system
node.name: ser5-168.tech-idc.net
node.master: true
node.data: true
network.host: 10.80.5.168
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.80.5.167", "10.80.5.168","10.80.5.169"]
discovery.zen.minimum_master_nodes: 2
## add on ser5-169 (10.80.5.169)
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
cluster.name: shop-system
node.name: ser5-169.tech-idc.net
node.master: true
node.data: true
network.host: 10.80.5.169
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.80.5.167", "10.80.5.168","10.80.5.169"]
discovery.zen.minimum_master_nodes: 2
1.4 Start Elasticsearch and enable it at boot
systemctl start elasticsearch
systemctl enable elasticsearch
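Once all three nodes are started, cluster health can be checked over the HTTP API from any node (a quick sanity check; expect number_of_nodes to be 3):
curl 'http://10.80.5.167:9200/_cluster/health?pretty'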
2. Redis Cluster deployment
2.1 Download the required files
wget http://download.redis.io/releases/redis-3.2.11.tar.gz   # on all machines
wget https://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.1.tar.gz   # only needed on one machine
2.2 Install Redis (run on all machines)
tar xvf redis-3.2.11.tar.gz -C /usr/local
cd /usr/local/redis-3.2.11 && make && make install
Verify after installation:
[root@ser5-167 elasticsearch]# redis-cli --version
redis-cli 3.2.11
2.3 Create directories
Ports 7379 and 7380 are used for the Redis instances.
mkdir -p /data/redis/{7379,7380}/{conf,data}   # on 10.80.5.167
mkdir -p /data/redis/{7379,7380}/{conf,data}   # on 10.80.5.168
mkdir -p /data/redis/{7379,7380}/{conf,data}   # on 10.80.5.169
mkdir -p /var/log/redis && mkdir -p /var/run/redis   # on all 3 servers
2.4 Configure redis.conf
Each instance gets its own file at /data/redis/<port>/conf/redis.conf (the path used when starting the instances in 2.5).
# 10.80.5.167, port 7379
daemonize yes
pidfile "/var/run/redis/redis-7379.pid"
dir "/data/redis/7379/data"
port 7379
tcp-backlog 511
tcp-keepalive 60
bind 10.80.5.167
loglevel notice
logfile "/var/log/redis/redis-7379.log"
databases 16
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "node-7379.conf"
appendonly yes
appendfilename "appendonly-7379.aof"
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 80
auto-aof-rewrite-min-size 64mb
===================================
# 10.80.5.167, port 7380
daemonize yes
pidfile "/var/run/redis/redis-7380.pid"
dir "/data/redis/7380/data"
port 7380
tcp-backlog 511
tcp-keepalive 60
bind 10.80.5.167
loglevel notice
logfile "/var/log/redis/redis-7380.log"
databases 16
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "node-7380.conf"
appendonly yes
appendfilename "appendonly-7380.aof"
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 80
auto-aof-rewrite-min-size 64mb
================================
# 10.80.5.168, port 7379
daemonize yes
pidfile "/var/run/redis/redis-7379.pid"
dir "/data/redis/7379/data"
port 7379
tcp-backlog 511
tcp-keepalive 60
bind 10.80.5.168
loglevel notice
logfile "/var/log/redis/redis-7379.log"
databases 16
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "node-7379.conf"
appendonly yes
appendfilename "appendonly-7379.aof"
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 80
auto-aof-rewrite-min-size 64mb
======================================
# 10.80.5.168, port 7380
daemonize yes
pidfile "/var/run/redis/redis-7380.pid"
dir "/data/redis/7380/data"
port 7380
tcp-backlog 511
tcp-keepalive 60
bind 10.80.5.168
loglevel notice
logfile "/var/log/redis/redis-7380.log"
databases 16
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "node-7380.conf"
appendonly yes
appendfilename "appendonly-7380.aof"
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 80
auto-aof-rewrite-min-size 64mb
=========================================
# 10.80.5.169, port 7379
daemonize yes
pidfile "/var/run/redis/redis-7379.pid"
dir "/data/redis/7379/data"
port 7379
tcp-backlog 511
tcp-keepalive 60
bind 10.80.5.169
loglevel notice
logfile "/var/log/redis/redis-7379.log"
databases 16
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "node-7379.conf"
appendonly yes
appendfilename "appendonly-7379.aof"
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 80
auto-aof-rewrite-min-size 64mb
====================================
# 10.80.5.169, port 7380
daemonize yes
pidfile "/var/run/redis/redis-7380.pid"
dir "/data/redis/7380/data"
port 7380
tcp-backlog 511
tcp-keepalive 60
bind 10.80.5.169
loglevel notice
logfile "/var/log/redis/redis-7380.log"
databases 16
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "node-7380.conf"
appendonly yes
appendfilename "appendonly-7380.aof"
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 80
auto-aof-rewrite-min-size 64mb
2.5 Confirm all Redis instances are up
# run on 10.80.5.167
/usr/local/bin/redis-server /data/redis/7379/conf/redis.conf
/usr/local/bin/redis-server /data/redis/7380/conf/redis.conf
# run on 10.80.5.168
/usr/local/bin/redis-server /data/redis/7379/conf/redis.conf
/usr/local/bin/redis-server /data/redis/7380/conf/redis.conf
# run on 10.80.5.169
/usr/local/bin/redis-server /data/redis/7379/conf/redis.conf
/usr/local/bin/redis-server /data/redis/7380/conf/redis.conf
ps -ef|grep redis
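Besides checking the process list, each instance should answer a PING (shown here for 10.80.5.167; adjust the address on the other machines):
redis-cli -h 10.80.5.167 -p 7379 ping
redis-cli -h 10.80.5.167 -p 7380 ping
# each command should return PONG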
2.6 Install Ruby
tar xvf ruby-2.3.1.tar.gz
cd ruby-2.3.1 && ./configure --prefix=/usr/local/ruby && make && make install
# after a successful build, copy the binaries into /usr/local/bin/
cd /usr/local/ruby/
cp bin/ruby /usr/local/bin
cp bin/gem /usr/local/bin
# verify the installation
ruby --version
# install the redis rubygem (required by redis-trib.rb)
gem install redis
# copy redis-trib.rb to /usr/local/bin
cp /usr/local/redis-3.2.11/src/redis-trib.rb /usr/local/bin
cp /usr/local/bin/gem /bin/
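As a quick check that the tooling is in place: gem list redis should show the installed gem, and running redis-trib.rb with no arguments should print its usage text:
gem list redis
redis-trib.rb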
2.7 Initialize the cluster
Because the cluster spans only 3 machines, the slave nodes are assigned manually; otherwise a master and its slave could end up on the same machine, and if that machine went down the cluster would be lost. So the cluster is first created with no replicas:
redis-trib.rb create --replicas 0 10.80.5.167:7379 10.80.5.168:7379 10.80.5.169:7379
Then connect to one of the nodes with redis-cli -h 10.80.5.167 -p 7379 and run cluster nodes to find the node IDs of the three masters. The master/slave assignment is:
10.80.5.167:7379(master) 10.80.5.168:7380(slave)
10.80.5.168:7379(master) 10.80.5.169:7380(slave)
10.80.5.169:7379(master) 10.80.5.167:7380(slave)
Then run the following three commands in order (the master IDs below come from this particular cluster; use the IDs reported by cluster nodes):
redis-trib.rb add-node --slave --master-id ea5a83279cffe4c6b7ee7532903978601ff56fd81 10.80.5.168:7380 10.80.5.167:7379
redis-trib.rb add-node --slave --master-id 16efffee642caa4c3e16c137e0bdbe3b637e120dc 10.80.5.169:7380 10.80.5.168:7379
redis-trib.rb add-node --slave --master-id 413eddde18e9d27dd62c69bc413c3c63eb05867b0 10.80.5.167:7380 10.80.5.169:7379
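After the slaves are added, the layout can be verified from any node (cluster_state should be ok and cluster_known_nodes should be 6):
redis-trib.rb check 10.80.5.167:7379
redis-cli -h 10.80.5.167 -p 7379 cluster info
redis-cli -h 10.80.5.167 -p 7379 cluster nodes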
2.8 Pitfalls
At this point the Redis cluster is up. One pitfall: when installing the redis rubygem, version 3.3.0 was installed at first and the cluster could not be initialized; installing the latest version with a plain gem install redis fixed it.
With a cluster built on only 3 machines, redis-trib.rb may put a master and its slave on the same machine when it assigns replicas automatically, which is why the assignment is done by hand here; I have not found a way to make the automatic assignment spread them across machines.
3. Redis cluster password setup
Method 1: add the following to redis.conf on every node in the cluster:
masterauth passwd123
requirepass passwd123
Note: this method requires restarting every node.
Method 2: connect to each instance and set the password at runtime:
./redis-cli -c -p 7379 -h 10.80.5.167
config set masterauth passwd123
config set requirepass passwd123
auth passwd123
config rewrite
Then set the password on each of the remaining nodes the same way.
Note: every node must use the same password, otherwise Redirected requests will fail. Method 2 is the recommended approach: config rewrite writes the password back into redis.conf, and no restart is needed.
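A minimal sketch of applying method 2 to all six instances in one pass (assumes the passwd123 password used above and that every node is reachable from where the loop runs):
for host in 10.80.5.167 10.80.5.168 10.80.5.169; do
  for port in 7379 7380; do
    redis-cli -h $host -p $port config set masterauth passwd123
    redis-cli -h $host -p $port config set requirepass passwd123
    # once requirepass is set, new connections must authenticate
    redis-cli -h $host -p $port -a passwd123 config rewrite
  done
done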
After setting the password with method 2, running ./redis-trib.rb check 10.80.5.167:7379 may still report [ERR] Sorry, can't connect to node 10.80.5.167:7379, because the Ruby Redis client used by redis-trib.rb has no password configured.
If you need to run redis-trib.rb commands after a password has been set:
Workaround: edit /usr/local/ruby/lib/ruby/gems/2.3.0/gems/redis-4.1.3/lib/redis/client.rb and set the password in the client defaults:
class Client
  DEFAULTS = {
    :url => lambda { ENV["REDIS_URL"] },
    :scheme => "redis",
    :host => "127.0.0.1",
    :port => 6379,
    :path => nil,
    :timeout => 5.0,
    :password => "passwd123",   # set this to the cluster password
    :db => 0,
    :driver => nil,
    :id => nil,
    :tcp_keepalive => 0,
    :reconnect_attempts => 1,
    :inherit_socket => false
  }
Note: the path to client.rb can be located with find: find / -name 'client.rb'