• Docker Summary


    Table of Contents

    1 Docker Installation and Deployment

    1.1 Introduction to Docker

    Docker is an application platform first open-sourced in 2013, written in Go (a language developed by Google); it is an open-source PaaS offering (Platform as a Service). The company behind it was originally named dotCloud; after open-source Docker became hugely popular, it renamed itself Docker Inc., headquartered in San Francisco, California. Docker is implemented on top of Linux kernel features. It initially used LXC (short for LinuX Container), the container technology natively supported by Linux that provides lightweight virtualization; Docker can be said to have grown out of LXC (from 0.1.5, 2013-04-17, it provided a high-level wrapper over LXC and a standardized way to configure it). By contrast, the virtualization technology KVM (Kernel-based Virtual Machine) is implemented as kernel modules. Docker later switched to running containers with runc, a runtime it developed and open-sourced itself (from 1.11.0, 2016-04-13).

    1.1.1 Linux Namespace Technology

    Namespaces are a low-level Linux concept implemented in the kernel: several different types of namespaces are provided at the kernel level. All Docker containers run under the same Docker daemon and share the host's kernel, and each container runs in the host's user space. Every container needs a VM-like, mutually isolated runtime environment, but container technology builds that environment for a given service inside a process, isolating resources such as the filesystem, network, and process space while protecting the host kernel from interference by other processes.
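    As a quick generic illustration (not taken from the text above), every process's namespace memberships are visible under /proc, one entry per namespace type:

```shell
# List the namespaces the current shell belongs to; each entry names a
# namespace type (mnt, pid, net, uts, ipc, user, ...) plus its inode id.
ls -l /proc/self/ns
```

    Inside a container these inode numbers differ from the host's for each unshared namespace type, which is exactly the isolation described above.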

    Isolation type  Purpose  clone(2) flag  Kernel version
    MNT Namespace  isolates mount points and filesystems  CLONE_NEWNS  Linux 2.4.19
    IPC Namespace (Inter-Process Communication)  isolates inter-process communication  CLONE_NEWIPC  Linux 2.6.19
    UTS Namespace (UNIX Timesharing System)  isolates the hostname  CLONE_NEWUTS  Linux 2.6.19
    PID Namespace (Process Identification)  isolates process IDs  CLONE_NEWPID  Linux 2.6.24
    Net Namespace (network)  isolates the network stack  CLONE_NEWNET  Linux 2.6.29
    User Namespace (user)  isolates users  CLONE_NEWUSER  Linux 3.8
    #MNT Namespace 
    Each container has its own root filesystem and user space, so services started inside the container use the container's own runtime environment.
    For example, on an Ubuntu host you can start a container with a CentOS environment and run an Nginx service inside it; that Nginx then runs against the CentOS system directories. The container cannot access the host's resources: the host uses chroot-style confinement to lock the container into a designated runtime directory.
    Example: #:containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/`container ID` -address /var/run/docker/containerd/containerd.sock -containerd-binary /opt/kube/bin/containerd -runtime-root /var/run/docker/runtime-runc
    
    
    #IPC Namespace
    Scopes inter-process communication to a single container: different processes inside one container may share data (memory, caches, and so on), but a container cannot reach across into another container's data.
    
    #UTS Namespace
    The UTS namespace (UNIX Timesharing System; the structure also carries the kernel name, version, and underlying architecture) is used for system identification and contains the hostname and domainname. It gives each container its own hostname, independent of the host system and of the other containers on it.
    root@harbor:~# docker exec -it 2b9314b71794 bash
    nginx [ / ]$ ls
    bin  boot  dev  etc  home  lib  lib64  media  mnt  proc  root  run  sbin  srv  sys  tmp  usr  var
    nginx [ / ]$ cat /etc/issue
    Welcome to Photon 4.0 (\m) - Kernel \r (\l)
    nginx [ / ]$ uname -a  # the host's kernel
    Linux 2b9314b71794 4.15.0-152-generic #159-Ubuntu SMP Fri Jul 16 16:28:09 UTC 2021 x86_64
    nginx [ / ]$ hostname  # the container's own hostname
    2b9314b71794
    
    #PID Namespace
    On a Linux system there is one process with PID 1 (init/systemd) that is the parent of all other processes; likewise, each container needs its own parent process to manage its children. The PID namespace keeps the process trees of different containers isolated from one another.
    root@harbor:~# docker exec -it 2b9314b71794 bash  # inside, the parent process is "nginx: master process nginx -g"
    nginx [ / ]$ ps -ef
    UID             PID   PPID C STIME TTY          TIME CMD
    nginx             1      0 0 48:05 ?        00:00:00 nginx: master process nginx -g daemon off;
    nginx             6      1 0 48:05 ?        00:00:00 nginx: worker process 
    nginx             7      1 0 48:05 ?        00:00:00 nginx: worker process
    nginx           346      0 0 10:09 pts/0    00:00:00 bash
    nginx           445    346 0 15:46 pts/0    00:00:00 ps -ef
    root@harbor:~# ps -ef |grep docker # locate the host-side processes for the container ID
    root        379      1  0 07:47 ?        00:00:04 /opt/kube/bin/dockerd
    root        549    379  0 07:47 ?        00:00:02 containerd --config /var/run/docker/containerd/containerd.toml --log-level warn
    root       1152    379  0 07:48 ?        00:00:00 /opt/kube/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.18.0.3 -container-port 8080 #docker-proxy handles access to the container's published port
    root       1164    379  0 07:48 ?        00:00:00 /opt/kube/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 1514 -container-ip 172.18.0.9 -container-port 10514
    root       1171    549  0 07:48 ?        00:00:00 containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c -address /var/run/docker/containerd/containerd.sock -containerd-binary /opt/kube/bin/containerd -runtime-root /var/run/docker/runtime-runc
    root       1192    549  0 07:48 ?        00:00:00 containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/89e6038bf99fb14a7c586b1317b8eb4e08ec2dcbff4d5055d596a3ac80cfed9c -address /var/run/docker/containerd/containerd.sock -containerd-binary /opt/kube/bin/containerd -runtime-root /var/run/docker/runtime-runc
    root       1202    549  0 07:48 ?        00:00:00 containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/5dbda7bfb4f52dc6abd5d16f9f311585426c2d091eda339c3c19d1efdf7896fa -address /var/run/docker/containerd/containerd.sock -containerd-binary /opt/kube/bin/containerd -runtime-root /var/run/docker/runtime-runc
    root@harbor:~# ps -ef |grep 1171
    root       1171    549  0 07:48 ?        00:00:00 containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c -address /var/run/docker/containerd/containerd.sock -containerd-binary /opt/kube/bin/containerd -runtime-root /var/run/docker/runtime-runc
    10000      1411   1171  0 07:48 ?        00:00:00 nginx: master process nginx -g daemon off;
    root      33661  30705  0 08:20 pts/1    00:00:00 grep --color=auto 1171
    root@harbor:~# ps -ef |grep 1411
    10000      1411   1171  0 07:48 ?        00:00:00 nginx: master process nginx -g daemon off;
    10000      1610   1411  0 07:48 ?        00:00:00 nginx: worker process
    10000      1611   1411  0 07:48 ?        00:00:00 nginx: worker process
    
    #Net Namespace
    Each container, much like a virtual machine, has its own network interfaces, listening ports, and TCP/IP stack. Docker uses a network namespace and a `vethX` interface so that each container gets its own IP address on a bridge, normally docker0. docker0 is in essence a Linux virtual bridge; a bridge is a data-link-layer (OSI layer 2) device that segments the network by MAC address and forwards frames between segments.
    
    #User Namespace
    The User namespace allows containers on the same host to create users with the same username, UID, and GID, while limiting each user's scope to its own container. Container A and container B can both have an account with the same username/ID, but that user is only valid inside its own container and cannot access the other container's filesystem: the containers remain isolated from each other.
    

    1.1.2 Linux control groups

    If a container is given no resource limits, the host lets it consume an unbounded amount of memory; a buggy program can keep allocating until it exhausts the host's RAM. To avoid this, the host should place limits on the resources a container may use, such as CPU and memory. Linux Cgroups (Linux Control Groups) exist primarily to cap the resources a process group can consume, including CPU, memory, disk, and network bandwidth. They can also set process priorities and suspend and resume processes.

    1.1.2.1 Viewing the system cgroups

    root@harbor:~/harbor# ll /sys/fs/cgroup/
    total 0
    drwxr-xr-x 15 root root 380 Sep  5 07:47 ./
    drwxr-xr-x  9 root root   0 Sep  5 07:47 ../
    dr-xr-xr-x  5 root root   0 Sep  5 07:47 blkio/  #blkio: block-device IO limits
    lrwxrwxrwx  1 root root  11 Sep  5 07:47 cpu -> cpu,cpuacct/  #schedules CPU access for cgroup tasks
    dr-xr-xr-x  5 root root   0 Sep  5 07:47 cpu,cpuacct/  
    lrwxrwxrwx  1 root root  11 Sep  5 07:47 cpuacct -> cpu,cpuacct/ #generates CPU usage reports for cgroup tasks
    dr-xr-xr-x  3 root root   0 Sep  5 07:47 cpuset/ #on multi-core CPUs, assigns dedicated CPUs and memory nodes to cgroup tasks
    dr-xr-xr-x  5 root root   0 Sep  5 07:47 devices/  #allows or denies device access for cgroup tasks
    dr-xr-xr-x  3 root root   0 Sep  5 07:47 freezer/  #suspends and resumes cgroup tasks
    dr-xr-xr-x  3 root root   0 Sep  5 07:47 hugetlb/
    dr-xr-xr-x  5 root root   0 Sep  5 07:47 memory/ #sets per-cgroup memory limits and generates memory usage reports
    lrwxrwxrwx  1 root root  16 Sep  5 07:47 net_cls -> net_cls,net_prio/ #tags network packets for use by cgroups
    dr-xr-xr-x  3 root root   0 Sep  5 07:47 net_cls,net_prio/
    lrwxrwxrwx  1 root root  16 Sep  5 07:47 net_prio -> net_cls,net_prio/
    dr-xr-xr-x  3 root root   0 Sep  5 07:47 perf_event/  #adds per-group monitoring and tracing
    dr-xr-xr-x  5 root root   0 Sep  5 07:47 pids/
    dr-xr-xr-x  2 root root   0 Sep  5 07:47 rdma/
    dr-xr-xr-x  6 root root   0 Sep  5 07:47 systemd/
    dr-xr-xr-x  5 root root   0 Sep  5 07:47 unified/
    

    1.1.2.2 Viewing a container's resource limits

    Docker now supports nearly all cgroup controllers, and can limit a container's use of resources including network, devices, CPU, and memory.

    By default, when Docker starts a container it creates a directory (a group) named after the container ID under each resource directory below /sys/fs/cgroup:

    /sys/fs/cgroup/cpu/docker/03dd196f415276375f754d51ce29b418b170bd92d88c5e420d6901c32f93dc14
    
    root@harbor:/var/lib/docker# cd /sys/fs/cgroup/
    root@harbor:/sys/fs/cgroup# ls
    blkio  cpu,cpuacct  cpuset   freezer  memory   net_cls,net_prio  perf_event  rdma     unified
    cpu    cpuacct      devices  hugetlb  net_cls  net_prio          pids        systemd
    root@harbor:/sys/fs/cgroup# find ./* -iname 2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./blkio/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./cpu,cpuacct/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./cpuset/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./devices/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./freezer/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./hugetlb/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./memory/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./net_cls,net_prio/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./perf_event/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./pids/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    ./systemd/docker/2b9314b71794481593200549b38493ca5101324a753929a608bbc1fb8c3cb78c
    

    1.1.2.3 cgroups-related options of the docker run command

    block IO:
          --blkio-weight value          Block IO (relative weight), between 10 and 1000
          --blkio-weight-device value   Block IO weight (relative device weight) (default [])
          --cgroup-parent string        Optional parent cgroup for the container
    CPU:
          --cpu-percent int             CPU percent (Windows only)
          --cpu-period int              Limit CPU CFS (Completely Fair Scheduler) period
          --cpu-quota int               Limit CPU CFS (Completely Fair Scheduler) quota
      -c, --cpu-shares int              CPU shares (relative weight)
          --cpuset-cpus string          CPUs in which to allow execution (0-3, 0,1)
          --cpuset-mems string          MEMs in which to allow execution (0-3, 0,1)
    Device:    
          --device value                Add a host device to the container (default [])
          --device-read-bps value       Limit read rate (bytes per second) from a device (default [])
          --device-read-iops value      Limit read rate (IO per second) from a device (default [])
          --device-write-bps value      Limit write rate (bytes per second) to a device (default [])
          --device-write-iops value     Limit write rate (IO per second) to a device (default [])
    Memory:      
          --kernel-memory string        Kernel memory limit
      -m, --memory string               Memory limit
          --memory-reservation string   Memory soft limit
          --memory-swap string          Swap limit equal to memory plus swap: '-1' to enable unlimited swap
          --memory-swappiness int       Tune container memory swappiness (0 to 100) (default -1)
    

    1.1.3 Verifying cgroups

    1.1.3.1 Controlling a container's CPU weight

    By default every Docker container has a CPU share of 1024. A single container's share is meaningless in isolation; the weighting effect only shows up when several containers run and compete for CPU at the same time.

    For example, if containers A and B have CPU shares of 1000 and 500, then when CPU time slices are allocated, container A gets twice as many chances as container B. The actual allocation, however, depends on the state of the host and of the other containers at the time, so A is not guaranteed its slices. If A's processes stay idle, B can get more CPU time than A; in the extreme case where the host runs only one container, that container can monopolize the host's CPUs even with a share of just 50.

    cgroups only take effect when the resources allocated to containers are scarce, i.e. when container resource usage actually has to be arbitrated, so a container's CPU allocation cannot be determined from its share value alone. The --cpu-shares parameter sets the container's CPU priority. To demonstrate, start two containers and compare their CPU usage percentages.
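    The expected split under full contention can be sketched with a little arithmetic (a generic illustration, not output captured from the experiment below):

```shell
# Under full CPU contention, each container's share of CPU time is
# shares_i / sum(all shares). With weights 1024 and 512:
awk 'BEGIN { printf "cpu1024 gets %.0f%%, cpu512 gets %.0f%%\n",
             100*1024/(1024+512), 100*512/(1024+512) }'
# → cpu1024 gets 67%, cpu512 gets 33%
```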

    Create two containers with different weights:

    #docker run -itd --name cpu1024 --cpu-shares 1024 registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1 stress -c 10
    
    #docker run -itd --name cpu512 --cpu-shares 512 registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1 stress -c 10
    

    Compare the %CPU split:

    root@harbor2:~# docker exec -it cpu1024 bash
    top - 02:27:46 up  2:39,  0 users,  load average: 19.86, 13.08, 5.86
    Tasks:  13 total,  11 running,   2 sleeping,   0 stopped,   0 zombie
    %Cpu0  : 99.7 us,  0.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    %Cpu1  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    %Cpu2  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    KiB Mem :  4015804 total,  2211948 free,   261200 used,  1542656 buff/cache
    KiB Swap:  2097148 total,  2097148 free,        0 used.  3512012 avail Mem 
    
       PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                            
         7 root      20   0    7324    100      0 R  19.9  0.0   0:51.90 stress                             
         8 root      20   0    7324    100      0 R  19.9  0.0   0:51.87 stress                             
         9 root      20   0    7324    100      0 R  19.9  0.0   0:51.84 stress                             
        10 root      20   0    7324    100      0 R  19.9  0.0   0:51.86 stress                             
        11 root      20   0    7324    100      0 R  19.9  0.0   0:51.86 stress                             
        12 root      20   0    7324    100      0 R  19.9  0.0   0:51.90 stress  
        
    ------------------------------------------------
    root@harbor2:~# docker exec -it cpu512 bash
    top - 02:28:40 up  2:40,  0 users,  load average: 19.94, 14.25, 6.67
    Tasks:  13 total,  11 running,   2 sleeping,   0 stopped,   0 zombie
    %Cpu(s):100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    KiB Mem :  4015804 total,  2210800 free,   262280 used,  1542724 buff/cache
    KiB Swap:  2097148 total,  2097148 free,        0 used.  3511016 avail Mem 
    
       PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                            
         6 root      20   0    7324     96      0 R  10.0  0.0   1:08.04 stress                             
         7 root      20   0    7324     96      0 R  10.0  0.0   1:08.11 stress                             
         8 root      20   0    7324     96      0 R  10.0  0.0   1:08.02 stress                             
         9 root      20   0    7324     96      0 R  10.0  0.0   1:08.40 stress                             
        10 root      20   0    7324     96      0 R  10.0  0.0   1:07.82 stress                             
        11 root      20   0    7324     96      0 R  10.0  0.0   1:07.89 stress 
    

    After entering cpu512 and cpu1024 you can see the %CPU ratio is roughly 1:2, matching the --cpu-shares values we set.

    1.1.3.2 Controlling CPU cores

    On servers with multi-core CPUs, Docker can also control which cores a container runs on, via the --cpuset-cpus parameter. This is especially useful on multi-CPU servers, allowing containers that need high-performance computing to be pinned for optimal performance.

    root@harbor2:~# docker run -itd --name cpu1 --cpuset-cpus 0-1 registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1
    326cacc9423fefe5f942441f449f8f22c9319ce59d28c18ca6fbb4651d8263c5
    root@harbor2:~# docker exec -it cpu1 bash
    [root@326cacc9423f /]# cat /sys/fs/cgroup/cpuset/cpuset.cpus
    0-1
    
    //The following command shows the binding between the container's processes and the CPU cores, confirming the pinning
    root@harbor2:~# docker exec -it cpu1 taskset -c -p 1
    pid 1's current affinity list: 0,1    # PID 1, the container's first process, is bound to the specified CPUs
    

    1.1.3.3 Combining CPU quota parameters

    docker run -itd --name cpu2 --cpuset-cpus 1 --cpu-shares 512 registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1 stress -c 1
    
    docker run -itd --name cpu3 --cpuset-cpus 3 --cpu-shares 1024 registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1 stress -c 1
    

    1.1.3.4 Memory quotas

    • As with an operating system, the memory a container can use consists of two parts: physical memory and swap.
      -m or --memory sets the memory limit, e.g. -m 300M; --memory-swap sets the limit for memory plus swap.
    • In the example below, the container may use at most 300M of memory; since --memory-swap equals the memory limit, no additional swap is available (memory+swap is capped at 300M):
    docker run -itd --name mem1 -m 300M --memory-swap=300M registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1 stress -m 3 --vm-bytes 300M
    #spawns 3 stress workers, each trying to allocate 300M of memory
    root@harbor2:~# docker exec -it 0504806aa832 bash
    #in top you will see the ~300M of allowed memory bounce between the 3 workers as they are killed and restarted
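    The relationship between the two flags can be sketched as simple arithmetic (generic values, not tied to the command above): --memory-swap is the total of memory plus swap, so the swap actually available is the difference.

```shell
# With -m 200M and --memory-swap=300M, the swap available to the
# container is (memory-swap minus memory).
mem=200; memswap=300
echo "swap available: $((memswap - mem))M"
# → swap available: 100M
```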
    

    1.1.3.5 Limiting Block IO

    By default all containers get equal disk read/write bandwidth; the --blkio-weight parameter changes a container's block-IO priority.

    docker run -itd --name blokio1 --blkio-weight 600 registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1 
    
    docker run -itd --name blokio2 --blkio-weight 300 registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1
    
    docker exec -it blokio2 bash
    cat /sys/fs/cgroup/blkio/blkio.weight
    300
    

    1.1.3.6 Limiting bps and iops

    bps is bytes per second, the amount of data read or written per second. iops is IO operations per second.
    The following parameters control a container's bps and iops:

    --device-read-bps: limit the read bps of a device.
    --device-write-bps: limit the write bps of a device.
    --device-read-iops: limit the read iops of a device.
    --device-write-iops: limit the write iops of a device.
    

    The example below limits the container's write rate to /dev/sda to 5 MB/s:

    docker run -itd --name bps1 --device-write-bps /dev/sda:5MB registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1
    
    docker exec -it bps1 bash
    dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct
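    A rough sanity check of how long that dd should take under the cap (simple arithmetic, assuming the 5 MB/s device limit is the bottleneck):

```shell
# 1024 MB written through a 5 MB/s device cap takes about 1024/5 seconds.
awk 'BEGIN { printf "expected: ~%d s\n", 1024/5 }'
# → expected: ~204 s
```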
    

    1.2 Installing Docker and basic commands

    1.2.1 Online installation

    If you have installed Docker before, remove it first:

    sudo apt-get remove docker docker-engine docker.io
    

    First install the dependencies:

    sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
    

    Trust Docker's GPG public key, picking the URL that matches your distribution (the second line is the concrete Ubuntu form):

    curl -fsSL https://download.docker.com/linux/[ubuntu|debian]/gpg | sudo apt-key add -
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    

    Add the package repository:

    echo 'deb [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu bionic stable' >> /etc/apt/sources.list
    

    Finally, install:

    sudo apt-get update
    sudo apt-get install docker-ce
    

    Verify:

    root@harbor2:~# docker run -it --rm nginx bash
    Unable to find image 'nginx:latest' locally
    latest: Pulling from library/nginx
    a330b6cecb98: Pull complete 
    5ef80e6f29b5: Pull complete 
    f699b0db74e3: Pull complete 
    0f701a34c55e: Pull complete 
    3229dce7b89c: Pull complete 
    ddb78cb2d047: Pull complete 
    Digest: sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e
    Status: Downloaded newer image for nginx:latest
    root@d8840d0d3a84:/# exit 
    exit
    

    1.2.2 Offline installation

    cat >> limits.conf << EOF
    *             soft    core            unlimited
    *             hard    core            unlimited
    *	      soft    nproc           1000000
    *             hard    nproc           1000000
    *             soft    nofile          1000000
    *             hard    nofile          1000000
    *             soft    memlock         32000
    *             hard    memlock         32000
    *             soft    msgqueue        8192000
    *             hard    msgqueue        8192000
    EOF
    #If installing for a non-root user, duplicate these lines with * replaced by the username
    cat >> sysctl.conf << EOF
    net.ipv4.ip_forward=1
    vm.max_map_count=262144
    kernel.pid_max=4194303
    fs.file-max=1000000
    net.ipv4.tcp_max_tw_buckets=6000
    net.netfilter.nf_conntrack_max=2097152
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    vm.swappiness=0
    EOF
    
    cat >> containerd.service << EOF
    [Unit]
    Description=containerd container runtime
    Documentation=https://containerd.io
    After=network.target local-fs.target
    [Service]
    ExecStartPre=-/sbin/modprobe overlay
    ExecStart=/usr/bin/containerd
    Type=notify
    Delegate=yes
    KillMode=process
    Restart=always
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNPROC=infinity
    LimitCORE=infinity
    LimitNOFILE=1048576
    # Comment TasksMax if your systemd version does not supports it.
    # Only systemd 226 and above support this version.
    TasksMax=infinity
    [Install]
    WantedBy=multi-user.target
    EOF
    
    cat >> docker.service << EOF
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    BindsTo=containerd.service
    After=network-online.target firewalld.service containerd.service
    Wants=network-online.target
    Requires=docker.socket
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    ExecReload=/bin/kill -s HUP $MAINPID
    TimeoutSec=0
    RestartSec=2
    Restart=always
    # Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
    # Both the old, and new location are accepted by systemd 229 and up, so using the old location
    # to make them work for either version of systemd.
    StartLimitBurst=3
    # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
    # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
    # this option work for either version of systemd.
    StartLimitInterval=60s
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    # Comment TasksMax if your systemd version does not support it.
    # Only systemd 226 and above support this option.
    TasksMax=infinity
    # set delegate yes so that systemd does not reset the cgroups of docker containers
    Delegate=yes
    # kill only the docker process, not all processes in the cgroup
    KillMode=process
    [Install]
    WantedBy=multi-user.target
    EOF
    
    cat >> docker.socket << EOF
    [Unit]
    Description=Docker Socket for the API
    PartOf=docker.service
    [Socket]
    ListenStream=/var/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker
    [Install]
    WantedBy=sockets.target
    EOF
    
    
    #!/bin/bash
    DIR=`pwd`
    PACKAGE_NAME="docker-19.03.15.tgz"
    DOCKER_FILE=${DIR}/${PACKAGE_NAME}
    install_docker(){
      grep "Ubuntu" /etc/issue &> /dev/null
      if [ $? -eq 0 ];then
        /bin/echo  "The current system is `cat /etc/issue`; starting system initialization, docker-compose setup and Docker installation" && sleep 1
        cp ${DIR}/limits.conf /etc/security/limits.conf
        cp ${DIR}/sysctl.conf /etc/sysctl.conf
        
        /bin/tar xvf ${DOCKER_FILE}
        cp docker/*  /usr/bin 
        cp containerd.service /lib/systemd/system/containerd.service
        cp docker.service  /lib/systemd/system/docker.service
        cp docker.socket /lib/systemd/system/docker.socket
        cp ${DIR}/docker-compose-Linux-x86_64_1.24.1 /usr/bin/docker-compose
        ulimit -n 1000000 
        /bin/su - [ops user] -c "ulimit -n 1000000"
        /bin/echo "Docker installation complete!" && sleep 1 
        systemctl  enable containerd.service && systemctl  restart containerd.service
        systemctl  enable docker.service && systemctl  restart docker.service
        systemctl  enable docker.socket && systemctl  restart docker.socket 
      fi
    };install_docker
    
    

    1.3 Docker image acceleration

    Pulling images hosted abroad can be slow inside China, so you can add registry mirrors to the Docker configuration file to speed up image downloads.

    #cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": [
        "https://docker.mirrors.ustc.edu.cn",
        "http://hub-mirror.c.163.com",
        "https://mirror.baidubce.com"
      ],
      "max-concurrent-downloads": 10,
      "log-driver": "json-file",
      "log-level": "warn",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
        },
      "data-root": "/var/lib/docker"
    }
    EOF
    
    root@harbor2:~# systemctl restart docker
    root@harbor2:~# docker info
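    Because a malformed daemon.json prevents dockerd from starting, it can be worth validating the file before restarting; a minimal sketch, assuming python3 is available on the host:

```shell
# Check that the file is well-formed JSON before restarting the daemon;
# a non-zero exit status means the file has a syntax error.
python3 -m json.tool /etc/docker/daemon.json > /dev/null && echo "daemon.json OK"
```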
    

    1.4 Basic Docker commands

    [root@linux-docker ~]# docker ps
    [root@linux-docker ~]# docker ps -a
    
    [root@docker-server1 ~]# docker rm 11445b3a84d3
    [root@docker-server1 ~]# docker rm -f 11445b3a84d3
    
    [root@docker-server1 ~]# docker run -P docker.io/nginx
    [root@docker-server1 ~]# docker run -p 81:80 --name nginx-test-port1 nginx
    [root@docker-server1 ~]# docker run -p 192.168.10.205:83:80/udp --name nginx-test-port4 nginx
    
    [root@docker-server1 ~]# docker logs -f nginx-test-port3 #follow the log output
    
    [root@docker-server1 ~]# docker port nginx-test-port5
    
    [root@docker-server1 ~]# docker run -t -i --name test-centos2 docker.io/centos /bin/bash
    
    [root@linux-docker opt]# docker run -it --rm --name nginx-delete-test docker.io/nginx
    
    [root@docker-server1 ~]# docker run -d centos /usr/bin/tail -f '/etc/hosts'
    
    [root@docker-server1 ~]# docker stop f821d0cd5a99 
    [root@docker-server1 ~]# docker start f821d0cd5a99
    
    [root@docker-server1 ~]# docker exec -it f821d0cd5a99 bash
    
    [root@docker-server1 ~]# docker ps -a |awk '{print $1}' |xargs docker rm -f
    [root@docker-server1 ~]# docker ps -a -q |xargs docker kill
    [root@docker-server1 ~]# docker rm -f `docker ps -aq -f status=exited`
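    One caveat with these pipelines: if no containers match, xargs may still invoke docker with an empty argument list. GNU xargs's -r flag (not used in the commands above) skips the command entirely on empty input:

```shell
# With -r (GNU xargs), the command is not run at all when stdin is empty,
# so e.g. "docker rm -f" never fires with zero container IDs.
printf '' | xargs -r echo "would remove:" | wc -l
# → 0
```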
    

    2 Docker Image Management

    2.1 Image management commands

    // list all local images
    docker images
    // remove dangling images
    docker image prune
    // remove a given image
    docker rmi <image name or ID>
    
    // tag an image
    docker tag <image ID> xxx:xxx
    
    // save an image to a file; ship the file to someone else and they can load it
    docker save <image ID> > xxx.tar
    // load an image from a file
    docker load < xxx.tar
    

    2.2 Building Images

    2.2.1 Manual build

    root@harbor2:~# docker run --name centos-yum-nginx --rm -it centos:7 bash
    #install the following inside the container as a test
        1  yum install -y wget bash-com* epel-rel*
        2  yum install -y nginx
        3  vim /etc/nginx/nginx.conf
        4  yum install -y vim
        5  touch 123.txt
    root@harbor2:~# docker commit -a "xxx@qq.com" -m "nginx yum v1" --change="EXPOSE 80 443" centos-yum-nginx centos-nginx:v1
    sha256:020e268ddccd5707e82e7cc593be4e78951503351bbde91e2d72e0ffddad61a1  
    
    root@harbor2:~# docker run -it --rm centos-nginx:v1 bash
    [root@020e268ddccd /]# ls
    123.txt            bin  etc   lib    media  opt   root  sbin  sys  usr
    anaconda-post.log  dev  home  lib64  mnt    proc  run   srv   tmp  var
    [root@020e268ddccd /]# exit
    

    2.2.2 Building with a Dockerfile

    Commonly used Dockerfile instructions:

    FROM  base the image on another image: the new image is built by running additional steps on top of it
    MAINTAINER xx.xxx xxx@qq.com  information about the image maintainer
    RUN  execute a command during the build
    WORKDIR  set the working directory, like cd; a Dockerfile usually ends by switching to the most-used directory, so that docker exec drops the user straight into it and saves a cd
    COPY  copy files from the build context into the image; files can also be copied from another image
    ADD  add files from the build context into the image; archives in *.tar.gz|tgz format are automatically unpacked into the target directory in the image
    ENV  set environment variables in the image
    CMD  the command the container runs on start; only one CMD may appear in a Dockerfile
    ENTRYPOINT  the command or script the container runs on start. Since only one CMD is allowed, to run several commands at startup you can use an entrypoint instead: add a .sh file with multiple commands to the image, and make sure it is executable, typically via RUN chmod u+x xxx.sh. A single-command exec-form example: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/webapps/api.jar", "--spring.profiles.active=test"]
    EXPOSE  document the ports the image listens on; optional, but with it docker inspect shows which ports the image exposes
    VOLUME  declare one or more directories as data volumes for the container. A running container should avoid writing to its storage layer; applications such as databases that keep dynamic data should store it in a volume. To guard against users forgetting to mount the dynamic-data directory at run time, the Dockerfile can declare those directories as anonymous volumes in advance; then even if the user mounts nothing, the application still runs without writing large amounts of data into the container's storage layer.
    For example, declaring /data as a volume means /data is automatically mounted as an anonymous volume at run time, and nothing written to /data lands in the container's storage layer, keeping that layer stateless. The run-time mount can of course override this:
    docker run -d -v mydata:/data xxxx mounts the named volume mydata at /data, replacing the anonymous-volume mount defined in the Dockerfile
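    How CMD and ENTRYPOINT interact can be seen in a minimal sketch (a hypothetical image, not one from this document): ENTRYPOINT is the fixed executable, and CMD supplies default arguments that any extra arguments to docker run replace.

```dockerfile
FROM alpine:3.15
# ENTRYPOINT is always run; CMD provides its default arguments.
ENTRYPOINT ["echo"]
CMD ["hello from the default CMD"]
# docker run <img>            runs: echo "hello from the default CMD"
# docker run <img> overridden runs: echo "overridden"
```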
    

    Building a Spring Boot image

    FROM registry.cn-hangzhou.aliyuncs.com/haozheyu/jdk:ora8u201-alpine3.9-glibc2.29
    
    
    ADD target/x-admin-1.0-SNAPSHOT.jar /x-admin-1.0-SNAPSHOT.jar
    EXPOSE 8080
    
    # run the jar when the container starts
    ENTRYPOINT ["java", "-jar","/x-admin-1.0-SNAPSHOT.jar"]
    # the image maintainer
    MAINTAINER xxx@qq.com
    

    Building an Nginx image

    The Dockerfile for the Nginx image

    FROM centos:7
    
    MAINTAINER xxx<xxx@qq.com>
    
    # Install packages
    RUN yum -y update && yum -y install gcc gdb strace gcc-c++ autoconf libjpeg libjpeg-devel libpng libpng-devel freetype freetype-devel libxml2 libxml2-devel zlib zlib-devel glibc glibc-devel glib2 glib2-devel bzip2 bzip2-devel ncurses ncurses-devel curl curl-devel e2fsprogs patch e2fsprogs-devel krb5-devel libidn libidn-devel openldap-devel nss_ldap openldap-clients openldap-servers libevent-devel libevent uuid-devel uuid openssl openssl-devel pcre pcre-devel
    
    # Create the run-as user
    RUN groupadd www
    RUN useradd -g www www -s /bin/false
    
    # Define the Nginx version
    ENV VERSION 1.14.2
    
    # Download and unpack the source
    RUN mkdir -p /usr/local/src/
    ADD http://nginx.org/download/nginx-$VERSION.tar.gz /usr/local/src
    RUN tar -xvf /usr/local/src/nginx-$VERSION.tar.gz -C /usr/local/src/
    
    # Create the install directory
    ENV NGINX_HOME /usr/local/nginx
    RUN mkdir -p $NGINX_HOME
    RUN chown -R www:www $NGINX_HOME
    
    # Enter the source directory
    WORKDIR /usr/local/src/nginx-$VERSION
    
    # Configure, compile and install (note the \ line continuations)
    RUN ./configure \
    	--user=www \
    	--group=www \
    	--prefix=$NGINX_HOME \
    	--with-http_ssl_module \
    	--with-http_realip_module \
    	--with-http_gzip_static_module \
    	--with-http_stub_status_module
    RUN make
    RUN make install
    
    # Back up the default Nginx configuration file
    RUN mv $NGINX_HOME/conf/nginx.conf $NGINX_HOME/conf/nginx.conf.default
    
    # Set environment variables
    ENV PATH $PATH:$NGINX_HOME/sbin
    
    # Create the web-app directory
    ENV WEB_APP /usr/share/nginx/html
    RUN mkdir -p $WEB_APP
    
    # Set the default working directory
    WORKDIR $WEB_APP
    
    # Expose ports
    EXPOSE 80
    EXPOSE 443
    
    # Clean up the tarball and extracted source
    RUN rm -rf /usr/local/src/nginx*
    
    CMD $NGINX_HOME/sbin/nginx -g 'daemon off;' -c $NGINX_HOME/conf/nginx.conf
    

    Build the Nginx image

    # Build the Nginx image
    # docker build -f docker-file -t centos-nginx:1.14.2 .
    

    Managing the Nginx image with Docker-Compose

    version: "3.5"
    
    services:
      nginx:
        image: centos-nginx:1.14.2
        container_name: nginx-1.14.2
        privileged: false
        ports:
          - 80:80
          - 443:443
        volumes:
           - '/container/nginx/wwwroot:/usr/share/nginx/html'
           - '/container/nginx/logs:/usr/local/nginx/logs'
           - '/container/nginx/nginx.conf:/usr/local/nginx/conf/nginx.conf'
    
    # The above is the contents of docker-compose.yml; adjust the volumes section to your environment
    # Note: in the /container/nginx/nginx.conf configuration file, the root path must be changed by hand to /usr/share/nginx/html
    

    Create and start the Nginx container

    # Create and start the container
    # docker-compose up -d
    
    # Check the containers' status
    # docker-compose ps
    

    Building a Tengine image

    FROM centos:7
    
    MAINTAINER xxx<xxx@qq.com>
    
    # Install packages
    RUN yum -y update && yum -y install vim tree htop tmux net-tools telnet wget curl supervisor autoconf git gcc gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel
    
    # Create the run-as user
    RUN groupadd tengine
    RUN useradd -g tengine tengine
    
    # Define the Tengine version
    ENV VERSION 2.2.3
    
    # Download and unpack the source
    RUN mkdir -p /usr/local/src/
    ADD http://tengine.taobao.org/download/tengine-$VERSION.tar.gz /usr/local/src
    RUN tar -xvf /usr/local/src/tengine-$VERSION.tar.gz -C /usr/local/src/
    
    # Create the install directory
    ENV TENGINE_HOME /usr/local/tengine
    RUN mkdir -p $TENGINE_HOME
    
    # Enter the source directory
    WORKDIR /usr/local/src/tengine-$VERSION
    
    # Configure, compile and install (note the \ line continuations)
    RUN ./configure \
    	--user=tengine \
    	--group=tengine \
    	--prefix=$TENGINE_HOME \
    	--with-http_ssl_module \
    	--with-http_realip_module \
    	--with-http_concat_module \
    	--with-http_gzip_static_module \
    	--with-http_stub_status_module \
    	--with-http_upstream_consistent_hash_module
    RUN make
    RUN make install
    
    # Back up the default Tengine configuration file
    RUN mv $TENGINE_HOME/conf/nginx.conf $TENGINE_HOME/conf/nginx.conf.default
    
    # Set environment variables
    ENV PATH $PATH:$TENGINE_HOME/sbin
    
    # Create the web-app directory
    ENV WEB_APP /usr/share/tengine/html
    RUN mkdir -p $WEB_APP
    
    # Set the default working directory
    WORKDIR $WEB_APP
    
    # Expose ports
    EXPOSE 80
    EXPOSE 443
    
    # Clean up the tarball and extracted source
    RUN rm -rf /usr/local/src/tengine*
    
    CMD $TENGINE_HOME/sbin/nginx -g 'daemon off;' -c $TENGINE_HOME/conf/nginx.conf
    

    Build the Tengine image

    # Build the Tengine image
    # docker build -f docker-file -t centos-tengine:2.2.3 .
    

    Managing the Tengine image with Docker-Compose

    version: "3.5"
    
    services:
      tengine:
        image: centos-tengine:2.2.3
    container_name: tengine
        restart: always
        privileged: false
        ports:
          - 80:80
          - 443:443
        volumes:
           - '/container/tengine/wwwroot/:/usr/share/tengine/html'
           - '/container/tengine/logs:/usr/local/tengine/logs'
           - '/container/tengine/nginx.conf:/usr/local/tengine/conf/nginx.conf'
    
    # The above is the content of docker-compose.yml; adjust the volumes section to your environment
    # Note: in the /container/tengine/nginx.conf config file, the root path must be changed manually to /usr/share/tengine/html
    

    Create and start the Tengine container

    # Create and start the container
    # docker-compose up -d
    
    # Check the container's running state
    # docker-compose ps
    

    Building a Tomcat container

    FROM registry.cn-hangzhou.aliyuncs.com/haozheyu/jdk:ora8u201-alpine3.9-glibc2.29
    #env 
    ENV TZ "Asia/Shanghai" 
    ENV LANG en_US.UTF-8 
    ENV TERM xterm 
    ENV TOMCAT_MAJOR_VERSION 8 
    ENV TOMCAT_MINOR_VERSION 8.5.45 
    ENV CATALINA_HOME /apps/tomcat 
    ENV APP_DIR ${CATALINA_HOME}/webapps 
    
    #tomcat 
    RUN mkdir /apps 
    ADD apache-tomcat-8.5.45.tar.gz /apps 
    RUN ln -sv /apps/apache-tomcat-8.5.45 /apps/tomcat
    

    docker build -t tomcat-bash:v8.5.45 .

    3 Docker data management

    Docker images are built in layers, and image layers are read-only. A container started from an image adds a read-write filesystem layer on top, and anything the user writes is stored in that layer. To keep data written inside a container permanently, it must be saved to a designated directory on the host. Docker currently offers two data types:

    First, data volumes (data volume): a data volume works like a mounted disk, exposing a host directory or file directly inside the container.

    Second, data volume containers (data volume container): a data volume container mounts host directories into one dedicated container, and other containers then read and write the host's data through that container.
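    The two types can be illustrated with a short command sketch (a minimal example; the container names web and datastore and the host paths are placeholders, not from the text above):

    ```shell
    # Data volume: bind-mount a host directory straight into a container
    docker run -d --name web -v /data/web:/usr/share/nginx/html nginx

    # Data volume container: one container declares the mounts...
    docker create --name datastore -v /data/share:/shared busybox
    # ...and other containers reuse them via --volumes-from
    docker run -it --rm --volumes-from datastore busybox ls /shared
    ```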

    # docker inspect 89e6038bf99f #view details of the container with the given ID
            "GraphDriver": {
                "Data": {
                    "LowerDir": "/var/lib/docker/overlay2/119f71309ce1be75aab7760dd04bcb7c02ad6b41725c10d3bb6ed590a945359f-init/diff:/var/lib/docker/overlay2/eb1a488d3821fa84d0fbf871891404c5c6cd68046775225a9f652b1591fd9a2c/diff:/var/lib/docker/overlay2/f8bacdf8595bab5330b8a07d83d823722e817f6219808088ce5ab873371e403d/diff",
                    "MergedDir": "/var/lib/docker/overlay2/119f71309ce1be75aab7760dd04bcb7c02ad6b41725c10d3bb6ed590a945359f/merged",
                    "UpperDir": "/var/lib/docker/overlay2/119f71309ce1be75aab7760dd04bcb7c02ad6b41725c10d3bb6ed590a945359f/diff",
                    "WorkDir": "/var/lib/docker/overlay2/119f71309ce1be75aab7760dd04bcb7c02ad6b41725c10d3bb6ed590a945359f/work"
                },
                "Name": "overlay2"
            },
            
    LowerDir: the image layers (the image itself, read-only)
    UpperDir: the container's top layer (read-write)
    MergedDir: the container's filesystem; a union filesystem (Union FS) merges lowerdir and upperdir into the view the container uses
    WorkDir: the container's working directory on the host
    

    A data volume is simply a directory or file on the host that can be mounted directly into a container.

    3.1 Start test containers to verify

    Open two terminals

    docker run -it --rm --name test2 -v /tmp:/tmp 53e7a2d8aadd bash
    docker run -it --rm --name test1 -v /tmp:/tmp 53e7a2d8aadd bash
    [root@2efd6b2984d1 /]# cd /tmp/
    [root@2efd6b2984d1 tmp]# touch mytest2
    #in the other container
    [root@81d6a015cd7c /]# cd /tmp/
    #the host's /tmp has the same file
    [root@81d6a015cd7c tmp]# ls
    mytest2
    root@harbor2:/tmp# ls
    mytest2
    

    3.2 Characteristics of data volumes

    1. A data volume is a host directory or file and can be shared among multiple containers.
    2. Changes made to a data volume on the host are immediately visible in every container using it.
    3. Data in a volume persists even after the containers using it are deleted.
    4. Writes inside a container do not affect the image itself.
    

    3.3 Typical use cases for data volumes

    1. Log output
    2. Static web pages
    3. Application config files
    4. Sharing directories or files among containers
    

    3.4 Image management

    This section mainly covers the disk space consumed by containers. The longer containers run, the more space they take up. When a server runs many containers, the disk may suddenly fill up, and you need to quickly identify which container is using too much space and reclaim it.

    Use docker system df to see how much disk space Docker is using.

    In the TYPE column, Images is the size of images, Containers is the space used by containers, and Local Volumes is the size of mounted data volumes.

    • If images are large, run docker images to check whether any are no longer needed and can be deleted.
    • If data volumes are large, the mounted directories may contain many files, such as log or data files produced by software running in the containers.

    Use docker system df -v for more detail, including the space used by each image and each container.

    Use docker ps --size to see the space used by running containers.

    If the disk holding the Docker root directory really cannot be freed up, attach a new disk: stop the Docker service with service docker stop, move the files under the current Docker root directory (by default /var/lib/docker) to the new disk, add or modify data-root in /etc/docker/daemon.json to point at the new disk, then start the service again with service docker start. That completes the migration of the Docker directory.
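    The migration just described can be written down as a short runbook (a sketch; /new-disk/docker is a placeholder mount point for the new disk):

    ```shell
    # Stop Docker before touching its root directory
    service docker stop

    # Copy the current root (default /var/lib/docker) to the new disk,
    # preserving permissions, hard links, ACLs and xattrs
    rsync -aHAX /var/lib/docker/ /new-disk/docker/

    # Point data-root at the new location in /etc/docker/daemon.json
    # (merge with any existing keys rather than overwriting the file):
    #   "data-root": "/new-disk/docker"

    service docker start
    # Verify images and containers are intact before deleting the old directory
    ```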

    3.5 Shrinking images

    Images should be as small as possible, containing just the programs they need; only then is the build minimal.

    A Docker image is stacked up layer by layer from the build steps, on top of Linux's built-in overlay filesystem. If step 1 adds a file to the image and step 2 deletes it, the file's size is still counted in the image size. If steps 1 and 2 are merged into a single step, the image size no longer includes that file, like this:

    RUN <add file to the image> \
      && <delete the file>
    

    The \ above escapes the newline; && chains the two commands.
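    As a concrete, hypothetical version of the step above (the URL and paths are placeholders, not from the text), fetching, building, and cleaning up in a single RUN keeps the sources out of every layer:

    ```dockerfile
    # One layer: fetch, build, install, and remove the sources in a single RUN,
    # so the tarball and build tree never land in any image layer
    RUN wget -O /tmp/app.tar.gz http://example.com/app.tar.gz \
        && tar -xzf /tmp/app.tar.gz -C /tmp \
        && make -C /tmp/app install \
        && rm -rf /tmp/app.tar.gz /tmp/app
    ```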

    Docker provides a command to inspect an image's build history:

    docker history <image-name-or-ID>
    
    root@harbor2:/tmp# docker history c0fa8ada5a84
    IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
    c0fa8ada5a84        10 months ago       /bin/sh -c #(nop)  CMD ["/usr/local/openrest…   0B                  
    62bbb0b2a328        10 months ago       /bin/sh -c #(nop)  EXPOSE 80                    0B                  
    e228ad0a338b        10 months ago       /bin/sh -c #(nop) WORKDIR /                     0B                  
    b109c44fb8bb        10 months ago       /bin/sh -c #(nop) ADD file:c26c0723c8896ccb9…   857B                
    4ed6730165c6        10 months ago       /bin/sh -c make && make install                 95.1MB              
    de61b50377e0        10 months ago       /bin/sh -c ./configure --add-module=/usr/loc…   65.2MB              
    ddbc24e63d21        10 months ago       /bin/sh -c rm -rf ./Makefile                    0B                  
    49631835d627        10 months ago       /bin/sh -c #(nop) WORKDIR /usr/local/src/ope…   0B                  
    6e16d85978fd        10 months ago       /bin/sh -c unzip nginx-module-vts.zip           2.58MB              
    f393685b4b14        10 months ago       /bin/sh -c #(nop) WORKDIR /usr/local/src        0B                  
    f6d9e27571d4        10 months ago       /bin/sh -c #(nop) ADD file:9bb161e63fe3864a4…   1.49MB              
    1548553d9fc8        10 months ago       /bin/sh -c #(nop) ADD file:3d10850fb8b3e933a…   27.1MB              
    e8b57fcbd31d        10 months ago       /bin/sh -c apt-get install unzip make gcc li…   210MB               
    bc016c01c294        10 months ago       /bin/sh -c apt-get update                       26.2MB              
    d820629463da        10 months ago       /bin/sh -c echo 'deb http://mirrors.aliyun.c…   351B                
    5991db817e28        10 months ago       /bin/sh -c echo 'deb http://mirrors.aliyun.c…   260B                
    1f911ec5d70c        10 months ago       /bin/sh -c echo 'deb http://mirrors.aliyun.c…   171B                
    65e875ff00b0        10 months ago       /bin/sh -c echo 'deb http://mirrors.aliyun.c…   81B                 
    90088fba500c        10 months ago       /bin/sh -c #(nop) WORKDIR /etc/apt              0B                  
    fab5e942c505        10 months ago       /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B                  
    <missing>           10 months ago       /bin/sh -c mkdir -p /run/systemd && echo 'do…   7B                  
    <missing>           10 months ago       /bin/sh -c set -xe   && echo '#!/bin/sh' > /…   745B                
    <missing>           10 months ago       /bin/sh -c rm -rf /var/lib/apt/lists/*          0B                  
    <missing>           10 months ago       /bin/sh -c #(nop) ADD file:513ae777bc4042f84…   126MB
    

    This command shows how much space each build step of the given image consumed.

    3.6 Multi-stage builds

    Docker also supports multi-stage builds. Take a Java image as an example: Maven is needed to compile the sources into a .jar, but at runtime the container only needs a JRE. Maven is used only at build time, so if it can be excluded from the final image, the resulting Java image is much smaller:

    # First compile the .jar on a maven image
    FROM maven:3-jdk-8 AS builder
    COPY ./src/ /usr/src/java/
    WORKDIR /usr/src/java/
    RUN mvn clean package
    
    # Then run Java on an openjdk image, copying the compiled .jar from the stage above
    FROM openjdk:8u102-jre
    COPY --from=builder /usr/src/java/target/xxx.jar /usr/local/jar/xxx.jar
    CMD ["java", "-jar", "/usr/local/jar/xxx.jar"]
    

    4 Using the Harbor registry

    Harbor is an enterprise-class Registry server for storing and distributing Docker images, open-sourced by VMware. It extends the open-source Docker Distribution with features enterprises need, such as security, identity, and management. As an enterprise private Registry, Harbor offers better performance and security and makes it more efficient to move images between build and runtime environments. Harbor supports replicating image resources across multiple Registry nodes, keeping all images in the private Registry so that data and intellectual property stay under control inside the company network. Harbor also provides advanced security features such as user management, access control, and activity auditing.

    VMware's open-source project list: https://vmware.github.io/harbor/cn/

    Harbor on GitHub: https://github.com/vmware/harbor

    Harbor website: https://goharbor.io/

    Downloads: https://github.com/vmware/harbor/releases

    Install docs: https://goharbor.io/docs/2.3.0/install-config/

    4.1 Deploying Harbor

    4.1.1 Machine environment

    Node     hostname       host IP
    harbor   reg.local.com  192.168.43.131

    4.1.2 Set the hostname

    [root@base1 ~]# hostnamectl set-hostname harbor --static
    

    4.1.3 Network configuration

    [root@base1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
    BOOTPROTO="static" #change dhcp to static
    ONBOOT="yes" #bring this config up at boot
    IPADDR=192.168.43.131 #static IP
    GATEWAY=192.168.43.1 #default gateway
    NETMASK=255.255.255.0 #netmask
    DNS1=114.114.114.114 #DNS
    DNS2=8.8.8.8 #DNS
    
    # reboot
    

    4.1.4 Check the hostname

    hostname
    

    4.1.5 Add the ip/hostname mapping on every node

    echo "192.168.43.131 reg.local.com" >> /etc/hosts
    

    4.1.6 Install dependencies (note: every machine needs these)

    yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc
    

    4.1.7 Deploying Docker

    4.1.7.1 Install Docker

    yum install -y yum-utils device-mapper-persistent-data lvm2
    
    #Next add a stable repo; the repo config is saved to /etc/yum.repos.d/docker-ce.repo
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    #Update the Docker-related yum packages and install Docker CE
    yum update -y && yum install docker-ce
    

    4.1.7.2 Configure the Docker daemon file

    #Create the /etc/docker directory
    mkdir /etc/docker
    #Write the daemon.json file
    cat > /etc/docker/daemon.json <<EOF
    {
    "exec-opts":["native.cgroupdriver=systemd"],
    "log-driver":"json-file",
    "log-opts":{"max-size":"100m"}
    }
    EOF
    #Note: watch out for encoding problems; if an error occurs, journalctl -amu docker will reveal it
    #Create a directory for docker service drop-in config files
    mkdir -p /etc/systemd/system/docker.service.d
    

    4.1.7.3 Restart the Docker service

    systemctl daemon-reload && systemctl restart docker && systemctl enable docker
    

    4.1.8 Install Compose

    Open github.com, search for compose, find docker/compose, and open its releases page (URL: github.com/docker/comp…

    Copy the two commands provided for the version you need and run them on the first Docker server:

    #Download docker-compose online; installing Harbor requires docker-compose
    #Copy the command from the release page
    curl -L https://github.com/docker/compose/releases/download/1.27.4/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    
    #Make the command executable
    chmod u+x /usr/local/bin/docker-compose  
    #Check the version
    docker-compose --version
    docker-compose version 1.24.1, build 4667896b
    

    4.1.9 Installing Harbor

    4.1.9.1 Download and configure Harbor

    #Download Harbor
    wget https://github.com/goharbor/harbor/releases/download/v2.1.2/harbor-offline-installer-v2.1.2.tgz
    #Unpack the archive to the target directory
    tar zxf harbor-offline-installer-v2.1.2.tgz -C /usr/local
    #Change into the unpacked directory
    cd /usr/local/harbor/
    #Rename the config file, then edit it
    mv harbor.yml.tmpl harbor.yml
    
    vim harbor.yml
    

    Edit the harbor.yml config file

    The points to change are marked #TODO

    # Configuration file of Harbor
    
    # The IP address or hostname to access admin UI and registry service.
    # DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
    hostname: reg.local.com  #TODO (or use the IP 192.168.43.131)
    
    # http related config
    # http: #TODO
      # port for http, default is 80. If https enabled, this port will redirect to https port
      # port: 80#TODO
    
    # https related config
    https:
      # https port for harbor, default is 443
      port: 443
      # The path of cert and key files for nginx
      certificate: /data/cert/reg.local.com.crt  #TODO
      private_key: /data/cert/reg.local.com.key  #TODO
    
    # # Uncomment following will enable tls communication between all harbor components
    # internal_tls:
    #   # set enabled to true means internal tls is enabled
    #   enabled: true
    #   # put your cert and key files on dir
    #   dir: /etc/harbor/tls/internal
    
    # Uncomment external_url if you want to enable external proxy
    # And when it enabled the hostname will no longer used
    # external_url: https://reg.mydomain.com:8433
    
    # The initial password of Harbor admin
    # It only works in first time to install harbor
    # Remember Change the admin password from UI after launching Harbor.
    harbor_admin_password: Harbor12345
    
    # Harbor DB configuration
    database:
      # The password for the root user of Harbor DB. Change this before any production use.
      password: root123
      # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
      max_idle_conns: 50
      # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
      # Note: the default number of connections is 1024 for postgres of harbor.
      max_open_conns: 1000
    
    # The default data volume
    data_volume: /data
    
    # Harbor Storage settings by default is using /data dir on local filesystem
    # Uncomment storage_service setting If you want to using external storage
    # storage_service:
    #   # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
    #   # of registry's and chart repository's containers.  This is usually needed when the user hosts a internal storage with self signed certificate.
    #   ca_bundle:
    
    #   # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
    #   # for more info about this configuration please refer https://docs.docker.com/registry/configuration/
    #   filesystem:
    #     maxthreads: 100
    #   # set disable to true when you want to disable registry redirect
    #   redirect:
    #     disabled: false
    
    # Clair configuration
    clair:
      # The interval of clair updaters, the unit is hour, set to 0 to disable the updaters.
      updaters_interval: 12
    
    # Trivy configuration
    #
    # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
    # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
    # in the local file system. In addition, the database contains the update timestamp so Trivy can detect whether it
    # should download a newer version from the Internet or use the cached one. Currently, the database is updated every
    # 12 hours and published as a new release to GitHub.
    trivy:
      # ignoreUnfixed The flag to display only fixed vulnerabilities
      ignore_unfixed: false
      # skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
      #
      # You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
      # If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
      # `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
      skip_update: false
      #
      # insecure The flag to skip verifying registry certificate
      insecure: false
      # github_token The GitHub access token to download Trivy DB
      #
      # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
      # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
      # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
      # https://developer.github.com/v3/#rate-limiting
      #
      # You can create a GitHub token by following the instructions in
      # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
      #
      # github_token: xxx
    
    jobservice:
      # Maximum number of job workers in job service
      max_job_workers: 10
    
    notification:
      # Maximum retry count for webhook job
      webhook_job_max_retry: 10
    
    chart:
      # Change the value of absolute_url to enabled can enable absolute url in chart
      absolute_url: disabled
    
    # Log configurations
    log:
      # options are debug, info, warning, error, fatal
      level: info
      # configs for logs in local storage
      local:
        # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
        rotate_count: 50
        # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
        # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
        # are all valid.
        rotate_size: 200M
        # The directory on your host that store log
        location: /var/log/harbor
    
      # Uncomment following lines to enable external syslog endpoint.
      # external_endpoint:
      #   # protocol used to transmit log to external endpoint, options is tcp or udp
      #   protocol: tcp
      #   # The host of external endpoint
      #   host: localhost
      #   # Port of external endpoint
      #   port: 5140
    
    #This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
    _version: 2.0.0
    
    # Uncomment external_database if using external database.
    # external_database:
    #   harbor:
    #     host: harbor_db_host
    #     port: harbor_db_port
    #     db_name: harbor_db_name
    #     username: harbor_db_username
    #     password: harbor_db_password
    #     ssl_mode: disable
    #     max_idle_conns: 2
    #     max_open_conns: 0
    #   clair:
    #     host: clair_db_host
    #     port: clair_db_port
    #     db_name: clair_db_name
    #     username: clair_db_username
    #     password: clair_db_password
    #     ssl_mode: disable
    #   notary_signer:
    #     host: notary_signer_db_host
    #     port: notary_signer_db_port
    #     db_name: notary_signer_db_name
    #     username: notary_signer_db_username
    #     password: notary_signer_db_password
    #     ssl_mode: disable
    #   notary_server:
    #     host: notary_server_db_host
    #     port: notary_server_db_port
    #     db_name: notary_server_db_name
    #     username: notary_server_db_username
    #     password: notary_server_db_password
    #     ssl_mode: disable
    
    # Uncomment external_redis if using external Redis server
    # external_redis:
    #   # support redis, redis+sentinel
    #   # host for redis: <host_redis>:<port_redis>
    #   # host for redis+sentinel:
    #   #  <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
    #   host: redis:6379
    #   password:
    #   # sentinel_master_set must be set to support redis+sentinel
    #   #sentinel_master_set:
    #   # db_index 0 is for core, it's unchangeable
    #   registry_db_index: 1
    #   jobservice_db_index: 2
    #   chartmuseum_db_index: 3
    #   clair_db_index: 4
    #   trivy_db_index: 5
    #   idle_timeout_seconds: 30
    
    # Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
    # uaa:
    #   ca_file: /path/to/ca
    
    # Global proxy
    # Config http proxy for components, e.g. http://my.proxy.com:3128
    # Components doesn't need to connect to each others via http proxy.
    # Remove component from `components` array if want disable proxy
    # for it. If you want use proxy for replication, MUST enable proxy
    # for core and jobservice, and set `http_proxy` and `https_proxy`.
    # Add domain to the `no_proxy` field, when you want disable proxy
    # for some special registry.
    proxy:
      http_proxy:
      https_proxy:
      no_proxy:
      components:
        - core
        - jobservice
        - clair
        - trivy
    

    4.1.9.2 Generate certificates

    One-shot script, create_cert.sh

    #!/bin/bash
    
    # Generate the certificates in this directory, ready for harbor.yml to use
    mkdir -p /data/cert
    cd /data/cert
    
    openssl genrsa -out ca.key 4096
    openssl req -x509 -new -nodes -sha512 -days 3650 -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=reg.local.com" -key ca.key -out ca.crt
    openssl genrsa -out reg.local.com.key 4096
    openssl req -sha512 -new -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=reg.local.com" -key reg.local.com.key -out reg.local.com.csr
    
    cat > v3.ext <<-EOF
    authorityKeyIdentifier=keyid,issuer
    basicConstraints=CA:FALSE
    keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names
    
    [alt_names]
    DNS.1=reg.local.com
    DNS.2=harbor
    DNS.3=ks-allinone
    EOF
    
    openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial -in reg.local.com.csr -out reg.local.com.crt
        
    openssl x509 -inform PEM -in reg.local.com.crt -out reg.local.com.cert
    
    cp reg.local.com.crt /etc/pki/ca-trust/source/anchors/reg.local.com.crt 
    update-ca-trust
    

    Run the script to generate the certificates

    chmod 755 create_cert.sh
    ./create_cert.sh
    

    4.1.9.3 Install

    #Run the bundled install script; when it finishes, Harbor is reachable from a browser
    ./install.sh
    ...
    [Step 5]: starting Harbor ...
    Creating network "harbor_harbor" with the default driver
    Creating harbor-log ... done
    Creating harbor-db     ... done
    Creating registry      ... done
    Creating registryctl   ... done
    Creating redis         ... done
    Creating harbor-portal ... done
    Creating harbor-core   ... done
    Creating nginx             ... done
    Creating harbor-jobservice ... done
    ✔ ----Harbor has been installed and started successfully.----
    

    4.1.9.4 Update the daemon.json file

    cat > /etc/docker/daemon.json <<EOF
    {
    "exec-opts":["native.cgroupdriver=systemd"],
    "log-driver":"json-file","log-opts":{"max-size":"100m"},
    "registry-mirrors":["https://pee6w651.mirror.aliyuncs.com"],
    "insecure-registries": ["https://reg.local.com"]
    }
    EOF
    
    #Make sure port 80 is listening
    netstat -antp | grep 80 
    
    #Restart docker
    systemctl daemon-reload && systemctl restart docker
    
    #Restart all Harbor containers
    cd /usr/local/harbor
    docker-compose stop && docker-compose start
    Stopping harbor-jobservice ... done
    Stopping nginx             ... done
    Stopping harbor-core       ... done
    Stopping harbor-portal     ... done
    Stopping redis             ... done
    Stopping registryctl       ... done
    Stopping registry          ... done
    Stopping harbor-db         ... done
    Stopping harbor-log        ... done
    Starting log         ... done
    Starting registry    ... done
    Starting registryctl ... done
    Starting postgresql  ... done
    Starting portal      ... done
    Starting redis       ... done
    Starting core        ... done
    Starting jobservice  ... done
    Starting proxy       ... done
    

    Log in to verify Harbor (admin/Harbor12345)

    4.2 Pushing and pulling images with Harbor

    Note: if Harbor is configured for https, the local docker client needs no extra setup to access it
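    When clients should trust the self-signed certificate generated earlier without marking the registry insecure, one common approach is Docker's per-registry certificate directory; a sketch, assuming the CA files from /data/cert on the Harbor host:

    ```shell
    # On each client node: trust the self-signed CA for this registry only
    mkdir -p /etc/docker/certs.d/reg.local.com
    scp root@192.168.43.131:/data/cert/ca.crt /etc/docker/certs.d/reg.local.com/ca.crt

    # certs.d is read per request, so no docker restart is needed
    docker login reg.local.com
    ```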

    4.2.1 Add the private registry on each node

    # cat /etc/docker/daemon.json 
    {
      "registry-mirrors": [
        "https://docker.mirrors.ustc.edu.cn",
        "http://hub-mirror.c.163.com",
        "https://mirror.baidubce.com"
      ],
      "max-concurrent-downloads": 10,
      "log-driver": "json-file",
      "log-level": "warn",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
        },
      "data-root": "/var/lib/docker",
      "insecure-registries": [
        "xx.xx.xx.xx",
        "xx.xx.xx.xx"
      ]
    }
    
    or-------------------------------------------
    # cat /etc/systemd/system/multi-user.target.wants/docker.service
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry=xx.xx.xx.xx
    

    4.2.2 Verify that login works

    # docker login 192.168.43.131 --username=admin --password=Harbor12345
    WARNING! Using --password via the CLI is insecure. Use --password-stdin.
    WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded
    

    4.2.3 Tag and push an image

    # docker tag centos-nginx:v1 192.168.43.131/library/centos-nginx:v1
    # docker push 192.168.43.131/library/centos-nginx:v1
    

    4.3 Harbor high availability

    Harbor supports policy-based Docker image replication, similar to MySQL master-slave replication. It can keep images in sync across different data centers and environments.

    4.3.1 Prepare the environment

    192.168.43.66 harbor1 admin Harbor12345
    192.168.43.67 harbor2 admin Harbor12345

    4.3.2 Add registries and replication rules

    #192.168.43.66
    #1 Administration - Registries - New Endpoint
    Name: 67
    Endpoint URL: http://192.168.43.67
    Access ID: admin
    Access Secret: Harbor12345
    #2 Administration - Replications - New Replication Rule
    Name: push-67
    Replication mode: push-based
    Source resource filter: all
    Destination registry: 67-http://192.168.43.67
    Trigger mode: event based
    
    #192.168.43.67
    #1 Administration - Registries - New Endpoint
    Name: 66
    Endpoint URL: http://192.168.43.66
    Access ID: admin
    Access Secret: Harbor12345
    #2 Administration - Replications - New Replication Rule
    Name: push-66
    Replication mode: push-based
    Source resource filter: all
    Destination registry: 66-http://192.168.43.66
    Trigger mode: event based
    

    4.3.3 Verify that replication works

    #1 Push an image to the goapp project on 66
    root@harbor2:~# docker tag registry.cn-hangzhou.aliyuncs.com/haozheyu/centos-stress:v1 192.168.43.66/goapp/centos-stress:11
    root@harbor2:~# docker push 192.168.43.66/goapp/centos-stress:11
    The push refers to repository [192.168.43.66/goapp/centos-stress]
    b2afbca5d0cd: Pushed 
    174f56854903: Mounted from goapp/centos 
    11: digest: sha256:fac58e4d667483e16f2be89dce8cc8112e5aec8e79ba76d7cd83889c600446a9 size: 741
    
    #2 Pull the replicated centos-stress:11 image from 67
    root@harbor:~/harbor# docker pull 192.168.43.67/goapp/centos-stress:11
    11: Pulling from goapp/centos-stress
    2d473b07cdd5: Pull complete 
    97002bbdba0c: Pull complete 
    Digest: sha256:fac58e4d667483e16f2be89dce8cc8112e5aec8e79ba76d7cd83889c600446a9
    Status: Downloaded newer image for 192.168.43.67/goapp/centos-stress:11
    

    4.3.4 Configure a proxy for Harbor

    VIP used for the proxy: 192.168.43.250

    4.3.4.1 Configure keepalived

    #harbor1 harbor2 
    apt install -y keepalived haproxy
    #harbor1 
    cat >> /etc/keepalived/keepalived.conf << EOF
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 1
        priority 100
        advert_int 3
        unicast_src_ip 192.168.43.66
        unicast_peer {
        192.168.43.67
        }
        authentication {
        auth_type PASS
        auth_pass 123abc
        }
        virtual_ipaddress {
        192.168.43.250 dev ens33 label ens33:1
        }
    }
    EOF
    
    #harbor2
    cat >> /etc/keepalived/keepalived.conf << EOF
    vrrp_instance VI_1 {
    state BACKUP
        interface ens33
        virtual_router_id 1
        priority 100
        advert_int 3
        unicast_src_ip 192.168.43.67
        unicast_peer {
        192.168.43.66
        }
        authentication {
        auth_type PASS
        auth_pass 123abc
        }
        virtual_ipaddress {
        192.168.43.250 dev ens33 label ens33:1
        }
    }
    EOF
    
    #harbor1 harbor2: start the service
    root@harbor:~/harbor# systemctl start keepalived.service
    root@harbor:~/harbor# ip addr show ens33
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 00:0c:29:2d:30:86 brd ff:ff:ff:ff:ff:ff
        inet 192.168.43.66/24 brd 192.168.43.255 scope global ens33
           valid_lft forever preferred_lft forever
        inet `192.168.43.250/32` scope global ens33:1
           valid_lft forever preferred_lft forever
        inet6 240e:418:400:8f6:20c:29ff:fe2d:3086/64 scope global dynamic mngtmpaddr noprefixroute 
           valid_lft 3124sec preferred_lft 3124sec
        inet6 fe80::20c:29ff:fe2d:3086/64 scope link 
           valid_lft forever preferred_lft forever
           
    #The VIP is now up
    

    4.3.4.2 Configure haproxy

    #harbor1 harbor2: append to /etc/haproxy/haproxy.cfg
    listen harbor_80
        bind 192.168.43.250:8180
        mode tcp
        balance source
        server 192.168.43.66 192.168.43.66:80 check inter 2000 fall 3 rise 5
        server 192.168.43.67 192.168.43.67:80 check inter 2000 fall 3 rise 5
    #harbor1 harbor2: start the service
    systemctl start haproxy.service
    

    4.3.4.3 Verify the configuration

    root@harbor:~/harbor# cat /etc/docker/daemon.json 
    {
      "registry-mirrors": [
        "https://docker.mirrors.ustc.edu.cn",
        "http://hub-mirror.c.163.com"
      ],
      "max-concurrent-downloads": 10,
      "log-driver": "json-file",
      "log-level": "warn",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
        },
      "data-root": "/var/lib/docker",
        "insecure-registries": [
        "192.168.43.66",
        "192.168.43.67",
        "http://192.168.43.250:8180"
    		      ]
    }
    
    root@harbor:~/harbor# systemctl restart docker
    root@harbor:~/harbor# docker login 192.168.43.250:8180 --username=admin --password=Harbor12345
    WARNING! Using --password via the CLI is insecure. Use --password-stdin.
    WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded
    
    root@harbor:~/harbor# docker pull 192.168.43.250:8180/goapp/centos-stress:11
    11: Pulling from goapp/centos-stress
    Digest: sha256:fac58e4d667483e16f2be89dce8cc8112e5aec8e79ba76d7cd83889c600446a9
    Status: Downloaded newer image for 192.168.43.250:8180/goapp/centos-stress:11
    192.168.43.250:8180/goapp/centos-stress:11
    root@harbor:~/harbor# docker images
    REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
    192.168.43.250:8180/goapp/centos-stress   11                  dc9ee26d0c43        8 hours ago         377MB
    192.168.43.67/goapp/centos-stress         11                  dc9ee26d0c43        8 hours ago         377MB
    

    4.3.5 Walking through the Harbor config file

    hostname: 192.168.43.66 #IP or domain name
    
    http:
      port: 80
    https:
      port: 443
      certificate: /your/certificate/path
      private_key: /your/private/key/path
    harbor_admin_password: Harbor12345
    
    database:  #PostgreSQL connection settings
      password: root123
      max_idle_conns: 100
      max_open_conns: 900
      
    data_volume: /data  #data storage path; shared storage can be used here, which is another way to achieve high availability
    
    # 12 hours and published as a new release to GitHub.
    trivy:
      ignore_unfixed: false
      skip_update: false
      #
      # insecure The flag to skip verifying registry certificate
      insecure: false
    
    jobservice:
      # Maximum number of job workers in job service
      max_job_workers: 10
    
    notification:
      # Maximum retry count for webhook job
      webhook_job_max_retry: 10
    
    chart:
      # Change the value of absolute_url to enabled can enable absolute url in chart
      absolute_url: disabled
    
    # Log configurations
    log:
      # options are debug, info, warning, error, fatal
      level: info
      # configs for logs in local storage
      local:
        # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
        rotate_count: 50
        # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
        # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
        # are all valid.
        rotate_size: 200M
        # The directory on your host that store log
        location: /var/log/harbor
    
      # Uncomment following lines to enable external syslog endpoint.
      # external_endpoint:
      #   # protocol used to transmit log to external endpoint, options is tcp or udp
      #   protocol: tcp
      #   # The host of external endpoint
      #   host: localhost
      #   # Port of external endpoint
      #   port: 5140
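    The `rotate_size` suffixes above follow logrotate conventions, where k/M/G are commonly treated as binary (1024-based) multiples. As a quick sanity check, GNU coreutils' `numfmt` converts such IEC-style sizes to plain bytes:

    ```shell
    # Convert an IEC-style size like harbor's rotate_size=200M to bytes.
    # numfmt ships with GNU coreutils; K/M/G here are 1024-based multiples.
    numfmt --from=iec 200M   # 209715200 (= 200 * 1024^2)
    ```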
    
    #This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
    _version: 2.3.0
    
    # Using an external database
    # Uncomment external_database if using external database.
    # external_database:
    #   harbor:
    #     host: harbor_db_host
    #     port: harbor_db_port
    #     db_name: harbor_db_name
    #     username: harbor_db_username
    #     password: harbor_db_password
    #     ssl_mode: disable
    #     max_idle_conns: 2
    #     max_open_conns: 0
    #   notary_signer:
    #     host: notary_signer_db_host
    #     port: notary_signer_db_port
    #     db_name: notary_signer_db_name
    #     username: notary_signer_db_username
    #     password: notary_signer_db_password
    #     ssl_mode: disable
    #   notary_server:
    #     host: notary_server_db_host
    #     port: notary_server_db_port
    #     db_name: notary_server_db_name
    #     username: notary_server_db_username
    #     password: notary_server_db_password
    #     ssl_mode: disable
    
    # Using an external Redis
    # Uncomment external_redis if using external Redis server
    # external_redis:
    #   # support redis, redis+sentinel
    #   # host for redis: <host_redis>:<port_redis>
    #   # host for redis+sentinel:
    #   #  <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
    #   host: redis:6379
    #   password:
    #   # sentinel_master_set must be set to support redis+sentinel
    #   #sentinel_master_set:
    #   # db_index 0 is for core, it's unchangeable
    #   registry_db_index: 1
    #   jobservice_db_index: 2
    #   chartmuseum_db_index: 3
    #   trivy_db_index: 5
    #   idle_timeout_seconds: 30
    
    # Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
    # uaa:
    #   ca_file: /path/to/ca
    
    # Global proxy
    proxy:
      http_proxy:
      https_proxy:
      no_proxy:
      components:
        - core
        - jobservice
        - trivy
    
    # Metrics endpoint
    # metric:
    #   enabled: false
    #   port: 9090
    #   path: /metrics
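    Edits to harbor.yml do not take effect on running containers by themselves: the installer's `prepare` script has to re-render the per-component configs, after which the stack is recreated with docker-compose. A guarded sketch, assuming Harbor was installed under `/usr/local/harbor` (adjust the path to your setup):

    ```shell
    HARBOR_HOME=/usr/local/harbor            # assumed install dir; adjust as needed
    if [ -f "$HARBOR_HOME/harbor.yml" ]; then
      cd "$HARBOR_HOME"
      ./prepare                              # re-render component configs from harbor.yml
      docker-compose down                    # stop and remove the current containers
      docker-compose up -d                   # recreate them with the new settings
    else
      echo "harbor.yml not found under $HARBOR_HOME"
    fi
    ```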
    
  • Original article: https://www.cnblogs.com/haozheyu/p/15230377.html