

    Notes on Docker Commands for Installing Common Services

    Installing MySQL with Docker

    First, you may need to switch the Docker registry to an Aliyun mirror.

    Edit the configuration file at /etc/docker/daemon.json and change it to:

    {
      "registry-mirrors": ["https://ubih6qcd.mirror.aliyuncs.com"]
    }
    

    Then reload the configuration and restart Docker:

    sudo systemctl daemon-reload
    sudo systemctl restart docker
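
    To confirm the mirror is active, something like this can be checked (a sketch):

    docker info | grep -A 1 'Registry Mirrors'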
    

    Pull the MySQL 5.7 image:

    docker pull mysql:5.7
    

    Starting without mounting data

    Start the MySQL container, setting the root password, default character set, and time zone:

    docker run -d --name mysql57 -p 3306:3306 -e TZ=Asia/Shanghai -e MYSQL_ROOT_PASSWORD=@LovelyLM mysql:5.7 --character-set-server=utf8mb4
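
    To verify those settings took effect, a quick check (a sketch, using the container name and password from the command above):

    docker exec -it mysql57 mysql -uroot -p'@LovelyLM' -e "SHOW VARIABLES LIKE 'character_set_server'; SELECT NOW();"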
    

    Starting with data mounted on the host

    If we want to mount the data files, log files, and configuration files on the host, we first need to create the corresponding directories there. I like to keep everything under /home/docker/mysql/mysql57, with separate directories for the configuration, data, and log files. The directory structure looks like this:

    /home/docker/mysql/mysql57
    ├── conf
    ├── data
    └── log

    Then create an empty file named my.cnf in the conf directory; this is MySQL's default configuration file, so from now on we can edit it directly on the host.
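
    Both the directories and the empty config file can be created in one go (a sketch matching the layout above):

    mkdir -p /home/docker/mysql/mysql57/{conf,data,log}
    touch /home/docker/mysql/mysql57/conf/my.cnf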

    Start the MySQL container with the data mounted on the host.

    I've also set the container to restart automatically here (--restart=always):

    docker run -d --name mysql57 -p 3306:3306 --restart=always -v /home/docker/mysql/mysql57/data:/var/lib/mysql -v /home/docker/mysql/mysql57/conf:/etc/mysql -v /home/docker/mysql/mysql57/log:/var/log/mysql -e TZ=Asia/Shanghai -e MYSQL_ROOT_PASSWORD=@LovelyLM mysql:5.7 --character-set-server=utf8mb4
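
    Once it's up, the mounts can be sanity-checked from the host; the data directory should now contain MySQL's files (a sketch):

    docker logs mysql57 | tail -n 5        # look for "ready for connections"
    ls /home/docker/mysql/mysql57/data     # ibdata1, mysql/, performance_schema/, ...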
    

    Installing canal with Docker

    To make canal easier to configure, it's generally recommended to mount its configuration files on the host.

    First, pull the latest canal image:

    docker pull canal/canal-server

    Start the service first (this temporary container only exists so we can copy the default configuration out of it):

    docker run --name canal -d canal/canal-server

    I created the following directory structure for canal on the host:

    /home/docker/canal
    ├── conf
    └── log
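
    These can be created with (a sketch matching the layout above):

    mkdir -p /home/docker/canal/{conf,log}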

    canal has two configuration files, canal.properties and instance.properties, and we need to copy both to the host. We can also copy the log files along with them.

    In the paths below, the left side is inside the container and the right side is on the host:

    docker cp canal:/home/admin/canal-server/conf/canal.properties /home/docker/canal/conf
    docker cp canal:/home/admin/canal-server/conf/example/instance.properties /home/docker/canal/conf
    docker cp canal:/home/admin/canal-server/logs/canal/canal.log /home/docker/canal/log
    docker cp canal:/home/admin/canal-server/logs/canal/canal_stdout.log /home/docker/canal/log
    

    After copying, the host conf directory will contain the two configuration files, and the log directory will contain canal.log and canal_stdout.log.

    Here are the default contents of the two files:

    canal.properties

    #################################################
    ######### 		common argument		#############
    #################################################
    # tcp bind ip
    canal.ip =
    # register ip to zookeeper
    canal.register.ip =
    canal.port = 11111
    canal.metrics.pull.port = 11112
    # canal instance user/passwd
    # canal.user = canal
    # canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458
    
    # canal admin config
    #canal.admin.manager = 127.0.0.1:8089
    canal.admin.port = 11110
    canal.admin.user = admin
    canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
    # admin auto register
    #canal.admin.register.auto = true
    #canal.admin.register.cluster =
    #canal.admin.register.name =
    
    canal.zkServers =
    # flush data to zk
    canal.zookeeper.flush.period = 1000
    canal.withoutNetty = false
    # tcp, kafka, rocketMQ, rabbitMQ
    canal.serverMode = tcp
    # flush meta cursor/parse position to file
    canal.file.data.dir = ${canal.conf.dir}
    canal.file.flush.period = 1000
    ## memory store RingBuffer size, should be Math.pow(2,n)
    canal.instance.memory.buffer.size = 16384
    ## memory store RingBuffer used memory unit size , default 1kb
    canal.instance.memory.buffer.memunit = 1024 
    ## meory store gets mode used MEMSIZE or ITEMSIZE
    canal.instance.memory.batch.mode = MEMSIZE
    canal.instance.memory.rawEntry = true
    
    ## detecing config
    canal.instance.detecting.enable = false
    #canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
    canal.instance.detecting.sql = select 1
    canal.instance.detecting.interval.time = 3
    canal.instance.detecting.retry.threshold = 3
    canal.instance.detecting.heartbeatHaEnable = false
    
    # support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
    canal.instance.transaction.size =  1024
    # mysql fallback connected to new master should fallback times
    canal.instance.fallbackIntervalInSeconds = 60
    
    # network config
    canal.instance.network.receiveBufferSize = 16384
    canal.instance.network.sendBufferSize = 16384
    canal.instance.network.soTimeout = 30
    
    # binlog filter config
    canal.instance.filter.druid.ddl = true
    canal.instance.filter.query.dcl = false
    canal.instance.filter.query.dml = false
    canal.instance.filter.query.ddl = false
    canal.instance.filter.table.error = false
    canal.instance.filter.rows = false
    canal.instance.filter.transaction.entry = false
    canal.instance.filter.dml.insert = false
    canal.instance.filter.dml.update = false
    canal.instance.filter.dml.delete = false
    
    # binlog format/image check
    canal.instance.binlog.format = ROW,STATEMENT,MIXED 
    canal.instance.binlog.image = FULL,MINIMAL,NOBLOB
    
    # binlog ddl isolation
    canal.instance.get.ddl.isolation = false
    
    # parallel parser config
    canal.instance.parser.parallel = true
    ## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
    #canal.instance.parser.parallelThreadSize = 16
    ## disruptor ringbuffer size, must be power of 2
    canal.instance.parser.parallelBufferSize = 256
    
    # table meta tsdb info
    canal.instance.tsdb.enable = true
    canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
    canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
    canal.instance.tsdb.dbUsername = canal
    canal.instance.tsdb.dbPassword = canal
    # dump snapshot interval, default 24 hour
    canal.instance.tsdb.snapshot.interval = 24
    # purge snapshot expire , default 360 hour(15 days)
    canal.instance.tsdb.snapshot.expire = 360
    
    #################################################
    ######### 		destinations		#############
    #################################################
    canal.destinations = example
    # conf root dir
    canal.conf.dir = ../conf
    # auto scan instance dir add/remove and start/stop instance
    canal.auto.scan = true
    canal.auto.scan.interval = 5
    # set this value to 'true' means that when binlog pos not found, skip to latest.
    # WARN: pls keep 'false' in production env, or if you know what you want.
    canal.auto.reset.latest.pos.mode = false
    
    canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
    #canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml
    
    canal.instance.global.mode = spring
    canal.instance.global.lazy = false
    canal.instance.global.manager.address = ${canal.admin.manager}
    #canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
    canal.instance.global.spring.xml = classpath:spring/file-instance.xml
    #canal.instance.global.spring.xml = classpath:spring/default-instance.xml
    
    ##################################################
    ######### 	      MQ Properties      #############
    ##################################################
    # aliyun ak/sk , support rds/mq
    canal.aliyun.accessKey =
    canal.aliyun.secretKey =
    canal.aliyun.uid=
    
    canal.mq.flatMessage = true
    canal.mq.canalBatchSize = 50
    canal.mq.canalGetTimeout = 100
    # Set this value to "cloud", if you want open message trace feature in aliyun.
    canal.mq.accessChannel = local
    
    canal.mq.database.hash = true
    canal.mq.send.thread.size = 30
    canal.mq.build.thread.size = 8
    
    ##################################################
    ######### 		     Kafka 		     #############
    ##################################################
    kafka.bootstrap.servers = 127.0.0.1:9092
    kafka.acks = all
    kafka.compression.type = none
    kafka.batch.size = 16384
    kafka.linger.ms = 1
    kafka.max.request.size = 1048576
    kafka.buffer.memory = 33554432
    kafka.max.in.flight.requests.per.connection = 1
    kafka.retries = 0
    
    kafka.kerberos.enable = false
    kafka.kerberos.krb5.file = "../conf/kerberos/krb5.conf"
    kafka.kerberos.jaas.file = "../conf/kerberos/jaas.conf"
    
    ##################################################
    ######### 		    RocketMQ	     #############
    ##################################################
    rocketmq.producer.group = test
    rocketmq.enable.message.trace = false
    rocketmq.customized.trace.topic =
    rocketmq.namespace =
    rocketmq.namesrv.addr = 127.0.0.1:9876
    rocketmq.retry.times.when.send.failed = 0
    rocketmq.vip.channel.enabled = false
    rocketmq.tag = 
    
    ##################################################
    ######### 		    RabbitMQ	     #############
    ##################################################
    rabbitmq.host =
    rabbitmq.virtual.host =
    rabbitmq.exchange =
    rabbitmq.username =
    rabbitmq.password =
    rabbitmq.deliveryMode =
    

    instance.properties

    #################################################
    ## mysql serverId , v1.0.26+ will autoGen
    # canal.instance.mysql.slaveId=0
    
    # enable gtid use true/false
    canal.instance.gtidon=false
    
    # position info
    canal.instance.master.address=127.0.0.1:3306
    canal.instance.master.journal.name=
    canal.instance.master.position=
    canal.instance.master.timestamp=
    canal.instance.master.gtid=
    
    # rds oss binlog
    canal.instance.rds.accesskey=
    canal.instance.rds.secretkey=
    canal.instance.rds.instanceId=
    
    # table meta tsdb info
    canal.instance.tsdb.enable=true
    #canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
    #canal.instance.tsdb.dbUsername=canal
    #canal.instance.tsdb.dbPassword=canal
    
    #canal.instance.standby.address =
    #canal.instance.standby.journal.name =
    #canal.instance.standby.position =
    #canal.instance.standby.timestamp =
    #canal.instance.standby.gtid=
    
    # username/password
    canal.instance.dbUsername=canal
    canal.instance.dbPassword=canal
    canal.instance.connectionCharset = UTF-8
    # enable druid Decrypt database password
    canal.instance.enableDruid=false
    #canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==
    
    # table regex
    canal.instance.filter.regex=.*\\..*
    # table black regex
    canal.instance.filter.black.regex=mysql\\.slave_.*
    # table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
    #canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
    # table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
    #canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch
    
    # mq config
    canal.mq.topic=example
    # dynamic topic route by schema or table regex
    #canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
    canal.mq.partition=0
    # hash partition config
    #canal.mq.partitionsNum=3
    #canal.mq.partitionHash=test.table:id^name,.*\\..*
    #canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
    #################################################
    
    

    Stop and remove the container we started earlier:

    docker stop canal
    docker rm canal
    

    Restart canal, this time with the configuration files mounted.

    I've set canal to restart automatically here as well (--restart=always). Note that with -v the host path goes on the left and the container path on the right:

    docker run --name canal -p 11111:11111 -d --restart=always -v /home/docker/canal/conf/instance.properties:/home/admin/canal-server/conf/example/instance.properties -v /home/docker/canal/conf/canal.properties:/home/admin/canal-server/conf/canal.properties -v /home/docker/canal/log:/home/admin/canal-server/logs/canal canal/canal-server
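
    With the log directory mounted, startup can be checked straight from the host (a sketch):

    tail -n 20 /home/docker/canal/log/canal.log    # or: docker logs canal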
    

    Installing RabbitMQ with Docker (the variant that ships with the management UI)

    First, pull the latest image:

    docker pull rabbitmq:management

    As before, we'll mount the data, so create the directory on the host:

    /home/docker/rabbitmq
    └── data

    Start with the data directory mounted:

    docker run -d --name rabbitmq -p 5671:5671 -p 5672:5672 -p 15672:15672 -p 15671:15671 -p 25672:25672 -p 4369:4369 --restart=always -v /home/docker/rabbitmq/data:/var/lib/rabbitmq rabbitmq:management
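
    The management UI then listens on port 15672. The image's default guest/guest account only accepts logins from localhost, so for remote access it's easier to create a user first (a sketch; the username and password here are illustrative):

    docker exec rabbitmq rabbitmqctl add_user admin '@LovelyLM'
    docker exec rabbitmq rabbitmqctl set_user_tags admin administrator
    docker exec rabbitmq rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"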
    

    Installing Redis with Docker (standalone)

    For a standalone Redis, the official lightweight Alpine image is recommended:

    docker pull redis:alpine3.16

    Start the Redis container and set a password.

    Setting a password is strongly recommended; an unprotected Redis exposed on a cloud server is very likely to be attacked:

    docker run -d --name redis --restart=always -p 6379:6379 redis:alpine3.16 --requirepass "@LovelyLM"
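
    A quick connectivity check (a sketch, using the password set above):

    docker exec -it redis redis-cli -a '@LovelyLM' ping    # expect: PONG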
    

    Extra: wiring canal up to MySQL and RabbitMQ

    MySQL's binlog needs to be enabled first.
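
    Since the conf directory is mounted on the host, the binlog can be enabled by appending to my.cnf and restarting the container (a sketch; the server-id value is an arbitrary assumption). canal also needs a MySQL account to read the binlog; the canal/canal credentials below match the defaults in instance.properties shown above:

    # enable ROW-format binlog in the mounted config, then restart MySQL
    printf '[mysqld]\nlog-bin=mysql-bin\nbinlog-format=ROW\nserver-id=1\n' >> /home/docker/mysql/mysql57/conf/my.cnf
    docker restart mysql57
    # confirm the binlog is on
    docker exec -it mysql57 mysql -uroot -p'@LovelyLM' -e "SHOW VARIABLES LIKE 'log_bin';"
    # create the account canal will connect with (matches instance.properties defaults)
    docker exec -it mysql57 mysql -uroot -p'@LovelyLM' -e "CREATE USER 'canal'@'%' IDENTIFIED BY 'canal'; GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';"

    Also note that inside the canal container 127.0.0.1 refers to the container itself, so canal.instance.master.address in instance.properties should point to the MySQL host's actual IP; to deliver messages to RabbitMQ, canal.serverMode in canal.properties would be switched to rabbitMQ and the rabbitmq.* entries filled in.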
