    Docker Container Network Configuration

    Creating namespaces in the Linux kernel

    The ip netns command

    The ip netns command can be used to perform all kinds of operations on Network Namespaces. It comes from the iproute package, which most systems install by default; if yours does not have it, install it yourself.

    Note: ip netns requires sudo privileges when it modifies network configuration.

    Run ip netns help to view the command's usage:

    [root@localhost ~]# ip netns help
    Usage:	ip netns list                    // list namespaces
    	ip netns add NAME                    // create a namespace
    	ip netns attach NAME PID             // bind NAME to the netns of process PID
    	ip netns set NAME NETNSID            // assign an ID to a namespace
    	ip [-all] netns delete [NAME]        // delete a namespace (or all of them)
    	ip netns identify [PID]              // show which namespace a process is in
    	ip netns pids NAME                   // list PIDs running in a namespace
    	ip [-all] netns exec [NAME] cmd ...  // run a command inside a namespace
    	ip netns monitor                     // watch namespace add/delete events
    	ip netns list-id                     // list namespace IDs
    NETNSID := auto | POSITIVE-INT
    

    By default a Linux system has no named Network Namespaces, so ip netns list prints nothing.

    Creating a Network Namespace

    Create a namespace named ns1:

    // list existing namespaces
    [root@localhost ~]# ip netns list
    
    // add ns1
    [root@localhost ~]# ip netns add ns1
    
    // verify the namespace was created
    [root@localhost ~]# ip netns list
    ns1
    

    Newly created Network Namespaces appear under /var/run/netns/. If a namespace with the same name already exists, the command fails with Cannot create namespace file "/var/run/netns/ns1": File exists.

    // inspect the directory
    [root@localhost ~]#  ls /var/run/netns/
    ns1
    
    // creating ns1 again fails
    [root@localhost ~]# ip netns add ns1
    Cannot create namespace file "/var/run/netns/ns1": File exists
    

    Each Network Namespace has its own independent network resources: interfaces, routing table, ARP table, iptables rules, and so on.
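
    To confirm this isolation, you can compare the namespace's view of the network stack with the host's. A minimal sketch (the exact output depends on your distribution):

    # routing table inside ns1 (empty until interfaces are configured)
    ip netns exec ns1 ip route show
    
    # neighbour/ARP table inside ns1
    ip netns exec ns1 ip neigh show
    
    # iptables rules inside ns1, independent of the host's rules
    ip netns exec ns1 iptables -nvL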

    Deleting a Network Namespace

    Just as we can create namespaces, we can also delete them.

    // create a namespace named ns3
    [root@localhost ~]# ip netns list
    ns2 (id: 2)
    ns1 (id: 1)
    [root@localhost ~]# ip netns add ns3
    [root@localhost ~]# ip netns list
    ns3
    ns2 (id: 2)
    ns1 (id: 1)
    
    // delete the unneeded namespace ns3 with the delete subcommand
    [root@localhost ~]# ip netns delete ns3
    [root@localhost ~]# ip netns list
    ns2 (id: 2)
    ns1 (id: 1)
    

    The commands above are all it takes to delete a namespace.
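
    As the help output earlier shows, delete also accepts an -all flag for clearing every named namespace at once; a minimal sketch (use with care):

    # delete all named network namespaces in one shot
    ip -all netns delete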

    Operating inside a Network Namespace

    The ip command provides the ip netns exec subcommand for running commands inside a given Network Namespace.

    View the interface information of the newly created Network Namespace:

    // run ip a inside ns1
    [root@localhost ~]# ip netns exec ns1 ip a
    1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    

    As shown, a new Network Namespace gets a lo loopback interface by default, but the interface starts out down. Trying to ping the loopback address at this point reports Network is unreachable:

    [root@localhost ~]# ip netns exec ns1 ping 127.0.0.1
    connect: Network is unreachable
    

    Bring the lo interface up with the following command:

    // bring up the lo loopback interface in ns1
    [root@localhost ~]# ip netns exec ns1 ip link set lo up
    
    // verify it is up
    [root@localhost ~]# ip netns exec ns1 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
           
    // test that it can now be pinged
    [root@localhost ~]# ip netns exec ns1 ping 127.0.0.1
    PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
    64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
    64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.124 ms
    ^C
    --- 127.0.0.1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 59ms
    rtt min/avg/max/mdev = 0.035/0.079/0.124/0.045 ms
    
           
    // bring the lo interface in ns1 back down
    [root@localhost ~]# ip netns exec ns1 ip link set lo down
    [root@localhost ~]# ip netns exec ns1 ip a
    1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    

    Moving devices between namespaces

    Devices (such as a veth) can be moved between Network Namespaces. Because a device can belong to only one Network Namespace at a time, it is no longer visible in the original namespace after the move.

    veth devices are movable; many other device types (lo, vxlan, ppp, bridge, and so on) cannot be moved between namespaces.

    veth pair

    A veth pair (Virtual Ethernet Pair) is a pair of connected ports: every packet that enters one end comes out the other, and vice versa. veth pairs were introduced so that different Network Namespaces can talk to each other directly; with one, two Network Namespaces can be wired straight together.
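
    A note on naming: ip link add type veth, used below, auto-names the two ends veth0 and veth1; the ends can also be named explicitly. A minimal sketch (the interface names here are arbitrary):

    # create a veth pair with explicit names for both ends
    ip link add veth-a type veth peer name veth-b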

    Creating a veth pair

    // view the current interfaces
    [root@localhost ~]# ip link show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:0c:29:0b:34:02 brd ff:ff:ff:ff:ff:ff
    3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
        link/ether 02:42:ee:28:39:8e brd ff:ff:ff:ff:ff:ff
    57: veth680c986@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
        link/ether c2:9a:18:0b:c7:95 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    
    // add a veth pair
    [root@localhost ~]# ip link add type veth
    
    // view the interfaces again: a new pair, veth0 and veth1, has appeared
    [root@localhost ~]# ip link show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:0c:29:0b:34:02 brd ff:ff:ff:ff:ff:ff
    3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
        link/ether 02:42:ee:28:39:8e brd ff:ff:ff:ff:ff:ff
    57: veth680c986@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
        link/ether c2:9a:18:0b:c7:95 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    58: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether a2:b8:ed:51:d8:31 brd ff:ff:ff:ff:ff:ff
    59: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether 92:fe:21:ca:c5:25 brd ff:ff:ff:ff:ff:ff
    

    As shown, the system now has a new veth pair connecting the virtual interfaces veth0 and veth1, and both ends are still down.

    Communicating between Network Namespaces

    Next we use a veth pair to let two different Network Namespaces communicate. We already created the Network Namespace ns1; now create another one named ns2.

    // add ns2
    [root@localhost ~]# ip netns add ns2
    
    // list namespaces
    [root@localhost ~]# ip netns list
    ns2
    ns1 (id: 1)
    
    // bring up the lo loopback interface in ns2
    [root@localhost ~]# ip netns exec ns2 ip link set lo up
    

    Then move veth0 into ns1 and veth1 into ns2:

    // move veth0 into ns1
    [root@localhost ~]# ip link set veth0 netns ns1
    [root@localhost ~]# ip netns exec ns1 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    58: veth0@if59: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether a2:b8:ed:51:d8:31 brd ff:ff:ff:ff:ff:ff link-netns ns2
        
    // move veth1 into ns2
    [root@localhost ~]# ip link set veth1 netns ns2
    [root@localhost ~]# ip netns exec ns2 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    59: veth1@if58: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 92:fe:21:ca:c5:25 brd ff:ff:ff:ff:ff:ff link-netns ns1
    
    // list the host's interfaces: the pair we just added is gone
    [root@localhost ~]# ip link show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:0c:29:0b:34:02 brd ff:ff:ff:ff:ff:ff
    3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
        link/ether 02:42:ee:28:39:8e brd ff:ff:ff:ff:ff:ff
    57: veth680c986@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
        link/ether c2:9a:18:0b:c7:95 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    
    

    Now assign an IP address to each end of the veth pair and bring both ends up:

    # configure veth0 in ns1 and enable it
    // assign the address 1.1.1.1/8
    [root@localhost ~]# ip netns exec ns1 ip addr add 1.1.1.1/8 dev veth0
    
    // check the address
    [root@localhost ~]# ip netns exec ns1 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    58: veth0@if59: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether a2:b8:ed:51:d8:31 brd ff:ff:ff:ff:ff:ff link-netns ns2
        inet 1.1.1.1/8 scope global veth0
           valid_lft forever preferred_lft forever
    
    // bring veth0 up
    [root@localhost ~]# ip netns exec ns1 ip link set veth0 up
    
    // verify: the address is in place
    [root@localhost ~]# ip netns exec ns1 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    58: veth0@if59: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
        link/ether a2:b8:ed:51:d8:31 brd ff:ff:ff:ff:ff:ff link-netns ns2
        inet 1.1.1.1/8 scope global veth0
           valid_lft forever preferred_lft forever
           
    # configure veth1 in ns2 and enable it
    // assign the address 1.1.1.2/8
    [root@localhost ~]# ip netns exec ns2 ip addr add 1.1.1.2/8 dev veth1
    
    // check the address
    [root@localhost ~]# ip netns exec ns2 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    59: veth1@if58: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 92:fe:21:ca:c5:25 brd ff:ff:ff:ff:ff:ff link-netns ns1
        inet 1.1.1.2/8 scope global veth1
           valid_lft forever preferred_lft forever
    
    // bring veth1 up
    [root@localhost ~]# ip netns exec ns2 ip link set veth1 up
    
    // verify: the address is in place
    [root@localhost ~]# ip netns exec ns2 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    59: veth1@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 92:fe:21:ca:c5:25 brd ff:ff:ff:ff:ff:ff link-netns ns1
        inet 1.1.1.2/8 scope global veth1
           valid_lft forever preferred_lft forever
        inet6 fe80::90fe:21ff:feca:c525/64 scope link 
           valid_lft forever preferred_lft forever
           
    # test whether ns1 and ns2 can ping each other
    // ping ns2 from ns1
    [root@localhost ~]# ip netns exec ns1 ping 1.1.1.2
    PING 1.1.1.2 (1.1.1.2) 56(84) bytes of data.
    64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.085 ms
    64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.092 ms
    ^C
    --- 1.1.1.2 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 43ms
    rtt min/avg/max/mdev = 0.085/0.088/0.092/0.010 ms
    
    // ping ns1 from ns2
    [root@localhost ~]# ip netns exec ns2 ping 1.1.1.1
    PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
    64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.089 ms
    64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=0.104 ms
    ^C
    --- 1.1.1.1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 37ms
    rtt min/avg/max/mdev = 0.089/0.096/0.104/0.012 ms
    

    As the output above shows, the veth pair is up and each veth device has its own IP address.

    The veth pair thus successfully provides network connectivity between two different Network Namespaces.
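
    For reference, the entire procedure condenses into a handful of commands; the following sketch simply repeats the steps above in one place:

    # create both namespaces
    ip netns add ns1
    ip netns add ns2
    
    # create a veth pair and move one end into each namespace
    ip link add type veth
    ip link set veth0 netns ns1
    ip link set veth1 netns ns2
    
    # address and enable both ends
    ip netns exec ns1 ip addr add 1.1.1.1/8 dev veth0
    ip netns exec ns1 ip link set veth0 up
    ip netns exec ns2 ip addr add 1.1.1.2/8 dev veth1
    ip netns exec ns2 ip link set veth1 up
    
    # verify connectivity in both directions
    ip netns exec ns1 ping -c 2 1.1.1.2
    ip netns exec ns2 ping -c 2 1.1.1.1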

    Renaming a veth device

    Rename the interface veth0 in ns1 to eth0:

    // shut the interface down
    [root@localhost ~]# ip netns exec ns1 ip link set veth0 down
    
    // rename it to eth0
    [root@localhost ~]# ip netns exec ns1 ip link set veth0 name eth0
    
    // bring eth0 up
    [root@localhost ~]# ip netns exec ns1 ip link set eth0 up
    
    // verify: the rename succeeded
    [root@localhost ~]# ip netns exec ns1 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether a2:b8:ed:51:d8:31 brd ff:ff:ff:ff:ff:ff link-netns ns2
        inet 1.1.1.1/8 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::a0b8:edff:fe51:d831/64 scope link 
           valid_lft forever preferred_lft forever
    

    Note: an interface must be down before it can be renamed.

    Scenario 1

    If we add another veth pair, move one end (veth0) into the namespace ns1, and leave the other end on the host, can the two sides ping each other?

    // add a new veth pair
    [root@localhost ~]# ip link add type veth
    [root@localhost ~]# ip link show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 00:0c:29:0b:34:02 brd ff:ff:ff:ff:ff:ff
    3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
        link/ether 02:42:ee:28:39:8e brd ff:ff:ff:ff:ff:ff
    57: veth680c986@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
        link/ether c2:9a:18:0b:c7:95 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    64: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether c2:93:a1:e6:c8:81 brd ff:ff:ff:ff:ff:ff
    65: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether ba:8e:9a:ae:0a:30 brd ff:ff:ff:ff:ff:ff
    
    // move veth0 into the ns1 namespace and configure an IP address
    [root@localhost ~]# ip link set veth0 netns ns1
    [root@localhost ~]# ip netns exec ns1 ip link set veth0 up
    [root@localhost ~]# ip netns exec ns1 ip addr add 192.168.100.1/24 dev veth0
    [root@localhost ~]# ip netns exec ns1 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether a2:b8:ed:51:d8:31 brd ff:ff:ff:ff:ff:ff link-netns ns2
        inet 1.1.1.1/8 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::a0b8:edff:fe51:d831/64 scope link 
           valid_lft forever preferred_lft forever
    64: veth0@if65: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
        link/ether c2:93:a1:e6:c8:81 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 192.168.100.1/24 scope global veth0
           valid_lft forever preferred_lft forever
    
    // bring up veth1, the end left on the host
    [root@localhost ~]# ip link set veth1 up
    
    // assign an IP address to veth1
    [root@localhost ~]# ip addr add 192.168.100.2/24 dev veth1
    [root@localhost ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:0c:29:0b:34:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.110.20/24 brd 192.168.110.255 scope global noprefixroute ens160
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe0b:3402/64 scope link 
           valid_lft forever preferred_lft forever
    3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
        link/ether 02:42:ee:28:39:8e brd ff:ff:ff:ff:ff:ff
    57: veth680c986@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
        link/ether c2:9a:18:0b:c7:95 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::c09a:18ff:fe0b:c795/64 scope link 
           valid_lft forever preferred_lft forever
    65: veth1@if64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether ba:8e:9a:ae:0a:30 brd ff:ff:ff:ff:ff:ff link-netns ns1
        inet 192.168.100.2/24 scope global veth1
           valid_lft forever preferred_lft forever
        inet6 fe80::b88e:9aff:feae:a30/64 scope link 
           valid_lft forever preferred_lft forever
    
    // test whether the host can ping into ns1
    [root@localhost ~]# ping 192.168.100.1
    PING 192.168.100.1 (192.168.100.1) 56(84) bytes of data.
    64 bytes from 192.168.100.1: icmp_seq=1 ttl=64 time=0.061 ms
    64 bytes from 192.168.100.1: icmp_seq=2 ttl=64 time=0.034 ms
    ^C
    --- 192.168.100.1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 58ms
    rtt min/avg/max/mdev = 0.034/0.047/0.061/0.015 ms
    

    As the output shows, the two sides can indeed ping each other.

    Note: when you delete one end of a veth pair (for example the end inside a namespace), the peer interface is deleted along with it.
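
    For example (a small sketch, following the setup above where veth0 lives in ns1 and veth1 on the host): deleting the end inside ns1 also removes the peer from the host.

    # delete the namespace end of the pair...
    ip netns exec ns1 ip link delete veth0
    
    # ...and the host end is gone as well ("Device does not exist")
    ip link show veth1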

    Configuring the four network modes

    Configuring bridge mode

    bridge is the default network mode when a container is created; starting a container without the --network option has exactly the same effect as --network bridge.

    // without any option
    [root@localhost ~]# docker run -it --rm busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    66: eth0@if67: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # exit
    # with --rm, the container is not kept after you exit
    
    // explicitly specify --network bridge
    [root@localhost ~]# docker run -it --rm --network bridge busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    68: eth0@if69: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # 
    
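    To see how Docker has configured this default bridge (its subnet, gateway, and the attached containers), inspect the network from the host; a minimal sketch, output omitted here:

    # list Docker's networks, then inspect the default bridge
    docker network ls
    docker network inspect bridge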

    Configuring none mode

    // use the none network mode
    [root@localhost ~]# docker run -it --rm --network none busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    / # 
    
    // the effect is the same: Docker has done automatically what we did by hand when creating our own namespace
    [root@localhost ~]# ip netns add n1
    [root@localhost ~]# ip netns exec n1 ip link set lo up
    [root@localhost ~]# ip netns exec n1 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    

    Configuring container mode

    Start the first container:

    [root@localhost ~]# docker run -it --rm --name ldz1 busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    70: eth0@if71: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # 
    

    Start the second container:

    # open a new terminal
    [root@localhost ~]# docker run -it --rm --name ldz2 busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    72: eth0@if73: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # ping 172.17.0.3
    PING 172.17.0.3 (172.17.0.3): 56 data bytes
    64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.109 ms
    64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.124 ms
    ^C
    --- 172.17.0.3 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.109/0.116/0.124 ms
    / # exit
    

    The container ldz2 gets the IP address 172.17.0.4, which differs from the first container's, so the two are not sharing a network. If we change how the second container is started, ldz2 can be made to use the same IP as ldz1, that is, they share the IP while keeping separate filesystems.

    [root@localhost ~]# docker run -it --rm --name ldz2 --network container:ldz1 busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    70: eth0@if71: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / #
    

    Now create a directory inside the ldz1 container:

    [root@localhost ~]# docker run -it --rm --name ldz1 busybox
    / # ls
    bin   dev   etc   home  proc  root  sys   tmp   usr   var
    / # mkdir /data
    / # 
    

    The directory is not visible from ldz2, because the filesystem is not shared:

    [root@localhost ~]# docker run -it --rm --name ldz2 --network container:ldz1 busybox
    / # ls
    bin   dev   etc   home  proc  root  sys   tmp   usr   var
    / # 
    

    Next, deploy a site inside the ldz2 container:

    / # echo 'hello leidazhuang' > /tmp/index.html
    / # netstat -antl
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       
    / # httpd -h /tmp/
    / # netstat -antl
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       
    tcp        0      0 :::80                   :::*                    LISTEN      
    

    From ldz1, access the site via the loopback address:

    / # wget -O - 127.0.0.1
    Connecting to 127.0.0.1 (127.0.0.1:80)
    writing to stdout
    hello leidazhuang
    -                    100% |********************************************|    18  0:00:00 ETA
    written to stdout
    / # 
    

    As you can see, the content served is the site we deployed in ldz2.

    So in container mode, two containers relate to each other much like two different processes on one host.
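
    You can confirm the sharing from the host with docker inspect; a minimal sketch (ldz2's network mode should read container:<id of ldz1>):

    # ldz2's network mode points at ldz1's network namespace
    docker inspect -f '{{ .HostConfig.NetworkMode }}' ldz2
    
    # ldz1 owns the shared IP configuration
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' ldz1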

    Scenario 1

    If we now deploy a site in the ldz1 container as well and fetch the page again, what content do we get?

    # inside the ldz1 container
    / # echo 'hello ldz2' > /tmp/index.html
    / # wget -O - 127.0.0.1
    Connecting to 127.0.0.1 (127.0.0.1:80)
    writing to stdout
    hello leidazhuang
    -                    100% |********************************************|    18  0:00:00 ETA
    written to stdout
    / # 
    

    As shown, we still get the site deployed in ldz2: port 80 in the shared network namespace is owned by the httpd process running in ldz2, which serves its own /tmp. This mode shares only the network, not the filesystem.

    Configuring host mode

    Specify host mode directly when starting the container:

    [root@localhost ~]# docker run -it --rm --network host busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
        link/ether 00:0c:29:0b:34:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.110.20/24 brd 192.168.110.255 scope global ens160
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue 
        link/ether 02:42:33:b7:b1:70 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    
    

    If we now start an http site in this container, we can access it directly via the host's IP, straight from a browser.

    # write an index.html
    / # echo 'hello leidazhuang' > /tmp/index.html
    / # httpd -h /tmp
    / # netstat -antl
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      
    tcp        0    248 192.168.110.20:22       192.168.110.1:53185     ESTABLISHED 
    tcp        0      0 :::22                   :::*                    LISTEN      
    tcp        0      0 :::80                   :::*                    LISTEN
    [root@localhost ~]# ss -antl
    State       Recv-Q      Send-Q           Local Address:Port           Peer Address:Port     
    LISTEN      0           128                    0.0.0.0:22                  0.0.0.0:*        
    LISTEN      0           128                       [::]:22                     [::]:*        
    LISTEN      0           9                            *:80                        *:*      
    # access it from the host
    [root@localhost ~]# curl 192.168.110.20
    hello leidazhuang
    

    Scenario 1

    Could we create another host-mode container at this point and deploy an httpd site in it too?

    [root@localhost ~]# docker run -it --rm --network host busybox
    / # echo 'hello world' > /tmp/index.html
    / # httpd -h /tmp
    httpd: bind: Address already in use
    

    As shown, we cannot: the bind fails because the port is already in use.

    Common container operations

    Viewing a container's hostname

    # a container's hostname defaults to its container ID
    [root@localhost ~]# docker run -it --rm busybox
    / # hostname
    0704d26789ff
    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS     NAMES
    0704d26789ff   busybox   "sh"      57 seconds ago   Up 55 seconds             relaxed_pasca
    
    // the hostname cannot be changed from inside the container: the process lacks the required privilege
    / # hostname leidazhuang
    hostname: sethostname: Operation not permitted
    

    Injecting a hostname when the container starts

    // use the --hostname option
    [root@localhost ~]# docker run -it --rm --hostname leidazhuang busybox
    / # hostname
    leidazhuang
    
    // the container ID does not change because of the hostname
    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND   CREATED              STATUS              PORTS     NAMES
    305b19fe4adf   busybox   "sh"      About a minute ago   Up About a minute             quizzical_dubinsky
    
    // injecting a hostname automatically creates a hostname-to-IP mapping
    # cat /etc/hosts
    127.0.0.1	localhost
    ::1	localhost ip6-localhost ip6-loopback
    fe00::0	ip6-localnet
    ff00::0	ip6-mcastprefix
    ff02::1	ip6-allnodes
    ff02::2	ip6-allrouters
    172.17.0.2	leidazhuang
    
    // the mapped hostname can be pinged
    / # ping leidazhuang
    PING leidazhuang (172.17.0.2): 56 data bytes
    64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.033 ms
    64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.055 ms
    ^C
    --- leidazhuang ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.033/0.044/0.055 ms
    
    
    // DNS is also configured automatically from the host's DNS
    / # cat /etc/resolv.conf 
    # Generated by NetworkManager
    nameserver 114.114.114.114
    

    Manually specifying the DNS a container should use

    [root@localhost ~]# docker run -it --rm --hostname leidazhuang --dns 114.114.114.114 busybox 
    / # cat /etc/resolv.conf 
    nameserver 114.114.114.114
    / # ping baidu.com
    PING baidu.com (220.181.38.148): 56 data bytes
    64 bytes from 220.181.38.148: seq=0 ttl=127 time=32.096 ms
    64 bytes from 220.181.38.148: seq=1 ttl=127 time=27.224 ms
    64 bytes from 220.181.38.148: seq=2 ttl=127 time=26.965 ms
    ^C
    --- baidu.com ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 26.965/28.761/32.096 ms
    

    Manually injecting hostname-to-IP mappings into /etc/hosts

    # start a new container
    [root@localhost ~]# docker run -it --rm busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # hostname
    44af3428823c
    
    # add a mapping for the new container's IP with --add-host
    [root@localhost ~]# docker run -it --rm --hostname leidazhuang --dns 114.114.114.114 --add-host leidazhuang2:172.17.0.3 busybox
    / # cat /etc/hosts
    127.0.0.1	localhost
    ::1	localhost ip6-localhost ip6-loopback
    fe00::0	ip6-localnet
    ff00::0	ip6-mcastprefix
    ff02::1	ip6-allnodes
    ff02::2	ip6-allrouters
    172.17.0.3	leidazhuang2
    172.17.0.2	leidazhuang
    / # ping leidazhuang
    PING leidazhuang (172.17.0.2): 56 data bytes
    64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.114 ms
    64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.084 ms
    ^C
    --- leidazhuang ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.084/0.099/0.114 ms
    / # ping leidazhuang2
    PING leidazhuang2 (172.17.0.3): 56 data bytes
    64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.090 ms
    64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.407 ms
    ^C
    --- leidazhuang2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.090/0.248/0.407 ms
    

    Exposing container ports

    docker run has a -p option that maps an application port inside the container to a port on the host, so that external hosts can reach the containerized application through that host port.

    -p can be used multiple times; the ports it exposes must be ports the container is actually listening on.

    The -p option takes the following forms:

    • -p <containerPort>
      • map the given container port to a dynamic port on all host addresses
    • -p <hostPort>:<containerPort>
      • map the container port to the given host port
    • -p <ip>::<containerPort>
      • map the given container port to a dynamic port on the given host IP
    • -p <ip>:<hostPort>:<containerPort>
      • map the given container port to the given port on the given host IP

    Demonstrations follow:

    1. Map the container port to a dynamic port on all host addresses
    [root@localhost ~]# docker run -itd -p 80 httpd
    4b660f72ea09808161727eb290c713e2073369c681b30405c41d382e78913191
    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND              CREATED         STATUS         PORTS                   NAMES
    4b660f72ea09   httpd     "httpd-foreground"   7 seconds ago   Up 6 seconds   0.0.0.0:49154->80/tcp   jovial_lalande
    [root@localhost ~]# ss -antl
    State      Recv-Q      Send-Q           Local Address:Port            Peer Address:Port     
    LISTEN     0           128                    0.0.0.0:22                   0.0.0.0:*        
    LISTEN     0           128                    0.0.0.0:49154                0.0.0.0:*        
    LISTEN     0           128                       [::]:22                      [::]:*        
    [root@localhost ~]# curl 172.17.0.2
    <html><body><h1>It works!</h1></body></html>
    

    Accessible through dynamic port 49154.

    2. Map the container port to a specified host port
    [root@localhost ~]# docker run -itd --rm -p 99:80 httpd
    5d65f4e06d6884676ce2e05d5c95758ea58ba6a33b241d564456114b72b78d46
    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND              CREATED         STATUS         PORTS                   NAMES
    5d65f4e06d68   httpd     "httpd-foreground"   5 seconds ago   Up 3 seconds   0.0.0.0:99->80/tcp      quizzical_dubinsky
    4b660f72ea09   httpd     "httpd-foreground"   7 minutes ago   Up 7 minutes   0.0.0.0:49154->80/tcp   jovial_lalande
    [root@localhost ~]# curl 172.17.0.3
    <html><body><h1>It works!</h1></body></html>
    

    Accessible through port 99.

    3. Map the container port to a dynamic port on a specified host IP
    [root@localhost ~]# docker run -d --rm -p 192.168.110.20::80 httpd
    31803c393fcdfb9dd9ebfb4df42391d015fc38f243075b6e97db8baca730d88d
    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                          NAMES
    31803c393fcd   httpd     "httpd-foreground"   3 seconds ago    Up 2 seconds    192.168.110.20:49155->80/tcp   practical_matsumoto
    5d65f4e06d68   httpd     "httpd-foreground"   7 minutes ago    Up 6 minutes    0.0.0.0:99->80/tcp             quizzical_dubinsky
    4b660f72ea09   httpd     "httpd-foreground"   14 minutes ago   Up 14 minutes   0.0.0.0:49154->80/tcp          jovial_lalande
    [root@localhost ~]# curl 172.17.0.4
    <html><body><h1>It works!</h1></body></html>
    

    Accessible through the random port 49155.

    4. Map the container port to a specified port on a specified host IP
    [root@localhost ~]# docker run -d --rm -p 192.168.110.20:80:80 httpd
    f27ed4c52fefb59d1e7e00e98a44e5d8e46d3e46794e468c9a043acca0fae5f5
    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                          NAMES
    f27ed4c52fef   httpd     "httpd-foreground"   4 seconds ago    Up 3 seconds    192.168.110.20:80->80/tcp      admiring_dijkstra
    31803c393fcd   httpd     "httpd-foreground"   3 minutes ago    Up 3 minutes    192.168.110.20:49155->80/tcp   practical_matsumoto
    5d65f4e06d68   httpd     "httpd-foreground"   10 minutes ago   Up 10 minutes   0.0.0.0:99->80/tcp             quizzical_dubinsky
    4b660f72ea09   httpd     "httpd-foreground"   17 minutes ago   Up 17 minutes   0.0.0.0:49154->80/tcp          jovial_lalande
    [root@localhost ~]# curl 172.17.0.5
    <html><body><h1>It works!</h1></body></html>
    

    Accessible through port 80.

    A dynamic port is simply a random available port; the resulting mapping can be checked with the docker port command.

    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                          NAMES
    f27ed4c52fef   httpd     "httpd-foreground"   2 minutes ago    Up 2 minutes    192.168.110.20:80->80/tcp      admiring_dijkstra
    31803c393fcd   httpd     "httpd-foreground"   6 minutes ago    Up 6 minutes    192.168.110.20:49155->80/tcp   practical_matsumoto
    5d65f4e06d68   httpd     "httpd-foreground"   13 minutes ago   Up 13 minutes   0.0.0.0:99->80/tcp             quizzical_dubinsky
    4b660f72ea09   httpd     "httpd-foreground"   20 minutes ago   Up 20 minutes   0.0.0.0:49154->80/tcp          jovial_lalande
    [root@localhost ~]# docker port f27ed4c52fef
    80/tcp -> 192.168.110.20:80
    
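    docker port can also query a single container port; a small sketch using the container ID from the session above:

    # show only the mapping for container port 80/tcp
    docker port f27ed4c52fef 80/tcp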

    Viewing iptables firewall details

    iptables is the Linux firewall. Its commonly used tables are filter, nat, and mangle, and each table contains several chains (the nat table shown below, for example, has PREROUTING, INPUT, OUTPUT, and POSTROUTING).

    Docker creates iptables rules automatically when a container is created and removes them automatically when the container is deleted.

    In iptables -t nat -nvL, the -t option selects the table to list (nat here), and:

    • -nvL is actually three flags combined, equivalent to -n -v -L
    • -n: do not resolve host or port names; show everything numerically
    • -v: verbose output
    • -L: list the rules

    // -t expects a table name; giving -nvL as the table name is an error
    [root@localhost ~]# iptables -t -nvL
    iptables v1.8.4 (nf_tables): table '-nvL' does not exist
    Perhaps iptables or your kernel needs to be upgraded.
    [root@localhost ~]# iptables -t nat -nvL
    Chain PREROUTING (policy ACCEPT 96 packets, 7983 bytes)
     pkts bytes target     prot opt in     out     source               destination         
       46  2432 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
    
    Chain INPUT (policy ACCEPT 4 packets, 248 bytes)
     pkts bytes target     prot opt in     out     source               destination         
    
    Chain POSTROUTING (policy ACCEPT 200 packets, 13419 bytes)
     pkts bytes target     prot opt in     out     source               destination         
        3   194 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
        0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:80
        0     0 MASQUERADE  tcp  --  *      *       172.17.0.3           172.17.0.3           tcp dpt:80
        0     0 MASQUERADE  tcp  --  *      *       172.17.0.4           172.17.0.4           tcp dpt:80
        0     0 MASQUERADE  tcp  --  *      *       172.17.0.5           172.17.0.5           tcp dpt:80
    
    Chain OUTPUT (policy ACCEPT 187 packets, 12711 bytes)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
    
    Chain DOCKER (2 references)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
        4   208 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:49154 to:172.17.0.2:80
        2   104 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:99 to:172.17.0.3:80
        2   104 DNAT       tcp  --  !docker0 *       0.0.0.0/0            192.168.110.20       tcp dpt:49155 to:172.17.0.4:80
        4   208 DNAT       tcp  --  !docker0 *       0.0.0.0/0            192.168.110.20       tcp dpt:80 to:172.17.0.5:80
    
    # all containers have now been stopped
    // the rules that were created automatically are gone
    [root@localhost ~]# iptables -t nat -nvL
    Chain PREROUTING (policy ACCEPT 100 packets, 8446 bytes)
     pkts bytes target     prot opt in     out     source               destination         
       46  2432 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
    
    Chain INPUT (policy ACCEPT 4 packets, 248 bytes)
     pkts bytes target     prot opt in     out     source               destination         
    
    Chain POSTROUTING (policy ACCEPT 200 packets, 13419 bytes)
     pkts bytes target     prot opt in     out     source               destination         
        3   194 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
    
    Chain OUTPUT (policy ACCEPT 187 packets, 12711 bytes)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
    
    Chain DOCKER (2 references)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0 
    

    Stopping containers in bulk

    // docker ps -q prints only the IDs of running containers
    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                          NAMES
    f27ed4c52fef   httpd     "httpd-foreground"   17 minutes ago   Up 17 minutes   192.168.110.20:80->80/tcp      admiring_dijkstra
    31803c393fcd   httpd     "httpd-foreground"   20 minutes ago   Up 20 minutes   192.168.110.20:49155->80/tcp   practical_matsumoto
    5d65f4e06d68   httpd     "httpd-foreground"   27 minutes ago   Up 27 minutes   0.0.0.0:99->80/tcp             quizzical_dubinsky
    4b660f72ea09   httpd     "httpd-foreground"   34 minutes ago   Up 34 minutes   0.0.0.0:49154->80/tcp          jovial_lalande
    [root@localhost ~]# docker ps -q
    f27ed4c52fef
    31803c393fcd
    5d65f4e06d68
    4b660f72ea09
    
    // use $(docker ps -q) command substitution
    [root@localhost ~]# docker stop $(docker ps -q)
    f27ed4c52fef
    31803c393fcd
    5d65f4e06d68
    4b660f72ea09
    [root@localhost ~]# docker ps
    CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
    
    // the same trick can be used to remove containers in bulk
    [root@localhost ~]# docker rm -f $(docker ps -aq)
    f27ed4c52fef
    31803c393fcd
    5d65f4e06d68
    4b660f72ea09
    
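    docker ps also supports filters, which lets the substitution target only certain containers; a minimal sketch that removes only already-exited containers:

    # remove only containers that have exited
    docker rm $(docker ps -aq --filter status=exited)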

    Customizing the docker0 bridge

    (For the full set of options, see the official documentation.)

    Customizing the docker0 bridge's network properties is done by editing the /etc/docker/daemon.json configuration file.

    // edit the daemon.json file (it must be valid JSON: no trailing comma)
    [root@localhost ~]# vim /etc/docker/daemon.json 
    {
        "bip": "192.168.80.1/24"
    }
    
    // reload unit files and restart docker
    [root@localhost ~]# systemctl daemon-reload
    [root@localhost ~]# systemctl restart docker
    
    // the docker0 bridge now carries the new address
    [root@localhost ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:0c:29:0b:34:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.110.20/24 brd 192.168.110.255 scope global noprefixroute ens160
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:33:b7:b1:70 brd ff:ff:ff:ff:ff:ff
        inet 192.168.80.1/24 brd 192.168.80.255 scope global docker0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:33ff:feb7:b170/64 scope link 
           valid_lft forever preferred_lft forever
    
    // the first container created now gets the IP 192.168.80.2
    [root@localhost ~]# docker run -it --rm busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:50:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.80.2/24 brd 192.168.80.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.80.1    0.0.0.0         UG    0      0        0 eth0
    192.168.80.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
    

    The key option is bip (bridge IP); it sets the docker0 bridge's own IP address, and the other values are derived from it.
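
    daemon.json accepts further keys alongside bip if you need finer control; a hedged sketch with a few commonly used options (the values here are illustrative, not recommendations):

    {
        "bip": "192.168.80.1/24",
        "fixed-cidr": "192.168.80.128/25",
        "mtu": 1500,
        "dns": ["114.114.114.114", "8.8.8.8"]
    }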

    Connecting to Docker remotely

    The dockerd daemon uses a client/server design; by default it listens only on a Unix socket (/var/run/docker.sock). To use a TCP socket as well, modify the /lib/systemd/system/docker.service unit file as shown below, then restart the docker service:

    // edit the service unit file
    [root@localhost ~]# vim /lib/systemd/system/docker.service 
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    # change the ExecStart line to the following
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///run/docker.sock
    ExecReload=/bin/kill -s HUP $MAINPID
    TimeoutSec=0
    RestartSec=2
    Restart=always
    
    // reload unit files and restart
    [root@localhost ~]# systemctl daemon-reload
    [root@localhost ~]# systemctl restart docker
    
    // on the client, pass the -H|--host option to docker to specify which host's containers to control
    [root@localhost ~]# docker -H 192.168.110.20:2375 ps -a
    CONTAINER ID   IMAGE     COMMAND              CREATED        STATUS                    PORTS     NAMES
    4b660f72ea09   httpd     "httpd-foreground"   12 hours ago   Exited (0) 12 hours ago             jovial_lalande
    
    // check the host's routing table
    [root@localhost ~]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.110.2   0.0.0.0         UG    100    0        0 ens160
    192.168.80.0    0.0.0.0         255.255.255.0   U     0      0        0 docker0
    192.168.110.0   0.0.0.0         255.255.255.0   U     100    0        0 ens160
    
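    Note that exposing the TCP socket without TLS is insecure and should be restricted to trusted networks. Also, instead of repeating -H on every command, the client honors the DOCKER_HOST environment variable; a minimal sketch:

    # point the docker client at the remote daemon for this shell session
    export DOCKER_HOST="tcp://192.168.110.20:2375"
    docker ps -a
    
    # revert to the local daemon
    unset DOCKER_HOST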

    Creating a custom bridge with Docker

    Create an additional custom bridge, distinct from docker0.

    // create a new bridge network named br1
    [root@localhost ~]#  docker network create -d bridge --subnet "192.168.1.0/24" --gateway "192.168.1.1" br1
    0e437808d37149cd94a0ed8c38f14b1d9526f0e11bfd98228878067bf7ce7d92
    
    // list the docker networks
    [root@localhost ~]# docker network ls
    NETWORK ID     NAME      DRIVER    SCOPE
    0e437808d371   br1       bridge    local
    9e7c35d4af45   bridge    bridge    local
    fe4268dcfb12   host      host      local
    726e1028cedf   none      null      local
    

    Create container b1 using the newly created network:

    [root@localhost ~]# docker run -it --rm --name b1 --network br1 busybox
    / # ip link show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
           valid_lft forever preferred_lft forever
    

    Then create another container that uses the default bridge:

    [root@localhost ~]# docker run -it --rm --name b2 busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    29: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:50:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.80.2/24 brd 192.168.80.255 scope global eth0
           valid_lft forever preferred_lft forever
    

    Question

    Can c1 and c2 communicate with each other at this point? If not, how can we make them?

    Create container c1 in the default bridge mode:

    [root@localhost ~]# docker run -it --rm --name c1 busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:50:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.80.2/24 brd 192.168.80.255 scope global eth0
           valid_lft forever preferred_lft forever
    

    Then create container c2 on the br1 bridge:

    [root@localhost ~]# docker run -it --rm --name c2 --network br1 busybox
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    33: eth0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
           valid_lft forever preferred_lft forever
    

    Check the networks:

    [root@localhost ~]# docker network ls
    NETWORK ID     NAME      DRIVER    SCOPE
    0e437808d371   br1       bridge    local
    9e7c35d4af45   bridge    bridge    local
    fe4268dcfb12   host      host      local
    726e1028cedf   none      null      local
    

    Attach the br1 bridge to container c1

    // connect the br1 network to the c1 container
    [root@localhost ~]# docker network connect br1 c1
    
    # success: a new interface appears in c1
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:50:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.80.2/24 brd 192.168.80.255 scope global eth0
           valid_lft forever preferred_lft forever
    35: eth1@if36: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:01:03 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.3/24 brd 192.168.1.255 scope global eth1
           valid_lft forever preferred_lft forever
    

    Attach the default bridge to container c2

    // connect the bridge network to the c2 container
    [root@localhost ~]# docker network connect bridge c2
    
    # success: a new interface appears in c2
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    33: eth0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
           valid_lft forever preferred_lft forever
    37: eth1@if38: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:c0:a8:50:03 brd ff:ff:ff:ff:ff:ff
        inet 192.168.80.3/24 brd 192.168.80.255 scope global eth1
           valid_lft forever preferred_lft forever
    

    Try communicating: ping from each container

    # ping c2's br1 address (192.168.1.2) from c1
    / # ping 192.168.1.2
    PING 192.168.1.2 (192.168.1.2): 56 data bytes
    64 bytes from 192.168.1.2: seq=0 ttl=64 time=0.129 ms
    64 bytes from 192.168.1.2: seq=1 ttl=64 time=0.173 ms
    ^C
    --- 192.168.1.2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.129/0.151/0.173 ms
    
    # ping c1's bridge address (192.168.80.2) from c2
    / # ping 192.168.80.2
    PING 192.168.80.2 (192.168.80.2): 56 data bytes
    64 bytes from 192.168.80.2: seq=0 ttl=64 time=0.132 ms
    64 bytes from 192.168.80.2: seq=1 ttl=64 time=0.134 ms
    ^C
    --- 192.168.80.2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.132/0.133/0.134 ms
    

    Communication succeeds; the full procedure is shown above.
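
    To undo the cross-bridge wiring, docker network also provides a disconnect subcommand, and an unused custom network can be removed; a minimal sketch:

    # detach the extra interfaces added above
    docker network disconnect br1 c1
    docker network disconnect bridge c2
    
    # remove the custom bridge once no container is using it
    docker network rm br1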
