• Everything Docker



    >>>>Dockerfile
    ARG
    ARG <name>[=<default value>]
    The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag. If a user specifies a build argument that was not defined in the Dockerfile, the build outputs a warning.
    Do not use build-time variables to pass secrets such as GitHub keys or user credentials. Build-time variable values are visible to any user of the image via the docker history command.
    Refer to the “build images with BuildKit” section to learn about secure ways to use secrets when building images.
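    A minimal sketch of ARG in practice (the APP_VERSION argument and image name are hypothetical):

```dockerfile
FROM alpine:3.18
# Build argument with a default; override with --build-arg at build time
ARG APP_VERSION=1.0.0
RUN echo "building version ${APP_VERSION}"
```

    Built with, for example: docker build --build-arg APP_VERSION=2.0.0 -t myapp . — an unset --build-arg falls back to the default declared in the Dockerfile.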

    ENV
    The environment variables set using ENV will persist when a container is run from the resulting image. You can view the values using docker inspect, and change them using docker run --env <key>=<value>.
    If a variable is only needed during the build, and not in the final image, consider setting a value for a single command instead, or using ARG, which is not persisted in the final image: ARG DEBIAN_FRONTEND=noninteractive
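    A short sketch contrasting the two approaches (APP_HOME is a hypothetical variable):

```dockerfile
FROM ubuntu:22.04
# ENV persists into every container run from the image
ENV APP_HOME=/srv/app
# A per-command variable applies to this RUN step only and is not persisted
RUN DEBIAN_FRONTEND=noninteractive apt-get update
```

    At run time, a persisted ENV value can still be overridden: docker run --env APP_HOME=/tmp myimage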

    WORKDIR
    WORKDIR /path/to/workdir
    The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
    The WORKDIR instruction can resolve environment variables previously set using ENV. You can only use environment variables explicitly set in the Dockerfile.
    For example:
    ENV DIRPATH=/path
    WORKDIR $DIRPATH/$DIRNAME
    RUN pwd
    The output of the final pwd command in this Dockerfile would be /path/$DIRNAME, since DIRNAME is never set with ENV in the Dockerfile.
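    Relative WORKDIR paths are resolved against the previous WORKDIR, so successive instructions accumulate:

```dockerfile
WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd
```

    The output of the final pwd command here is /a/b/c.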

    ADD (vs COPY)
    ADD [--chown=<user>:<group>] <src>... <dest>
    ADD [--chown=<user>:<group>] ["<src>",... "<dest>"]
    The latter form is required for paths containing whitespace.
    The ADD instruction copies new files, directories, or remote file URLs from <src> and adds them to the filesystem of the image at the path <dest>.
    The <dest> is an absolute path, or a path relative to WORKDIR, into which the source will be copied inside the destination container.
    ADD node_modules /myCompany/node_modules
    ADD dist /myCompany/dist
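    The practical difference from COPY: ADD additionally auto-extracts local tar archives and can fetch remote URLs, while COPY only copies local files and is therefore more transparent and generally preferred when those extras aren't needed. A sketch (file names are hypothetical):

```dockerfile
# COPY for plain files and directories (generally preferred)
COPY dist /myCompany/dist
# ADD auto-extracts a recognized local tar archive into the destination
ADD rootfs.tar.gz /
# ADD can also fetch a remote URL (remote files are not auto-extracted)
ADD https://example.com/big.tar.gz /tmp/
```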

    RUN
    RUN <command> (shell form, the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows)
    RUN ["executable", "param1", "param2"] (exec form)
    The RUN instruction will execute a command in a new layer on top of the current image and commit the results. The resulting image will be used for the next step in the Dockerfile.
    Layering RUN instructions and generating commits conforms to the core concepts of Docker where commits are cheap and containers can be created from any point in an image's history, much like source control.
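    Because each RUN commits a separate layer, related commands are commonly chained with && so that cleanup happens in the same layer and the apt cache is never stale:

```dockerfile
FROM ubuntu:22.04
# One layer: update, install, and clean up together
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```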

    VOLUME
    The docker run command initializes the newly created volume with any data that exists at the specified location within the base image. For example, consider the following Dockerfile snippet:
    FROM ubuntu
    RUN mkdir /myvol
    RUN echo "hello world" > /myvol/greeting
    VOLUME /myvol
    This Dockerfile results in an image that causes docker run to create a new mountpoint at /myvol (image/container mountpoint) and copy the greeting file into the newly created volume on the host.
    The VOLUME instruction does not support specifying a host-dir parameter. You can’t mount a host directory from within the Dockerfile. You can only specify the host directory when you run the container. This is to preserve image portability, since a given host directory (host mountpoint) can’t be guaranteed to be available on all hosts. The host directory (the mountpoint) is, by its nature, host-dependent.

    Start a container with a volume
    If you start a container with a volume that does not yet exist, Docker creates the volume for you.
    docker run -d --name devtest --mount type=volume,source=my-vol,destination=/myvol nginx:latest
    docker run -d --name devtest --volume my-vol:/myvol nginx:latest
    docker inspect devtest
    "Mounts": [
        {
            "Type": "volume",
            "Name": "my-vol",
            "Source": "/var/lib/docker/volumes/my-vol/_data",
            "Destination": "/myvol"
        }
    ],

    -v or --volume consists of three fields, separated by colon characters (:). The fields must be in the correct order:
    In the case of named volumes, the first field is the name of the volume, and is unique on a given host machine.
    For anonymous volumes, the first field is omitted.
    In the case of bind mounts, the first field is a path on the host machine.
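    The three cases above, sketched with -v (host path and names are hypothetical; an optional third field such as ro sets mount options):

```shell
# Named volume: first field is the volume name
docker run -d -v my-vol:/app nginx:latest
# Anonymous volume: first field omitted
docker run -d -v /app nginx:latest
# Bind mount: first field is an absolute path on the host; ro = read-only
docker run -d -v /srv/site:/app:ro nginx:latest
```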

    You can't use Docker CLI commands to directly manage bind mounts, but you can use them to manage volumes outside the scope of any container:
    docker volume create my-vol
    docker volume inspect my-vol
    "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",

    CMD
    CMD ["executable","param1","param2"] (exec form, this is the preferred form)
    CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
    CMD command param1 param2 (shell form)
    These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
    If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the string array format.
    Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ].
    When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.
    Do not confuse RUN with CMD. RUN runs a command and commits the result at build time; CMD does not execute at build time.

    ENTRYPOINT
    The exec form: ENTRYPOINT ["executable", "param1", "param2"]
    The shell form: ENTRYPOINT command param1 param2
    Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.
    You can use the exec form of ENTRYPOINT to set fairly stable default commands and arguments and then use either form of CMD to set additional defaults that are more likely to be changed.
    CMD should be used as a way of defining default arguments for ENTRYPOINT or for executing ad-hoc commands in a container.
    If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
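    The interaction between ENTRYPOINT and CMD, sketched with the top example from the Dockerfile reference:

```dockerfile
FROM ubuntu:22.04
ENTRYPOINT ["top", "-b"]
CMD ["-c"]
```

    docker run <image> executes top -b -c; docker run <image> -H replaces the CMD defaults and executes top -b -H.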


    EXPOSE
    EXPOSE <port> [<port>/<protocol>...]
    The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. You can specify whether the port listens on TCP or UDP, and the default is TCP if the protocol is not specified.
    The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.

    By default, EXPOSE assumes TCP. You can also specify UDP:
    EXPOSE 80/udp
    To expose on both TCP and UDP, include two lines:
    EXPOSE 80/tcp
    EXPOSE 80/udp
    In this case, if you use -P with docker run, the port is exposed once for TCP and once for UDP. Remember that -P uses an ephemeral high-order port on the host, so the port will not be the same for TCP and UDP.
    Regardless of the EXPOSE settings, you can override them at runtime by using the -p flag. For example
    docker run -p 80:80/tcp -p 80:80/udp …

    To set up port redirection on the host system, see the documentation for the -P flag. The docker network command supports creating networks for communication among containers without the need to expose or publish specific ports, because containers connected to the same network can communicate with each other over any port.


    >>>>Networks
    Bridge Networks
    The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other.
    Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.
    The default bridge network is considered a legacy detail of Docker and is not recommended for production use. Configuring it is a manual operation, and it has technical shortcomings.
    User-defined bridge networks are best when you need multiple containers to communicate on the same Docker host.
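    A minimal sketch of a user-defined bridge (network and container names are hypothetical); containers on the same user-defined bridge can resolve each other by name:

```shell
docker network create my-net
docker run -d --name web --network my-net nginx:latest
# "web" resolves via Docker's embedded DNS on the user-defined bridge
docker run -it --rm --network my-net alpine ping -c 1 web
```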

    Overlay Networks
    Overlay networks are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
    When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
    an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
    a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
    Containers can be connected to more than one network at a time. Containers can only communicate across networks they are each connected to.
    The ingress network is created without the --attachable flag, which means that only swarm services can use it, and not standalone containers. You can connect standalone containers to user-defined overlay networks which are created with the --attachable flag. This gives standalone containers running on different Docker daemons the ability to communicate without the need to set up routing on the individual Docker daemon hosts.
    Container discovery
    For most situations, you should connect to the service name, which is load-balanced and handled by all containers (“tasks”) backing the service. To get a list of all tasks backing the service, do a DNS lookup for tasks.<service-name>.
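    A sketch of both points above (network and service names are hypothetical; the first command must run on a swarm manager):

```shell
# Create an attachable overlay network so standalone containers can join it
docker network create -d overlay --attachable my-overlay
# A standalone container on any swarm node can now attach to it
docker run -it --rm --network my-overlay alpine sh
# Inside a container on the network, list the tasks backing service "web"
nslookup tasks.web
```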

    Host Networks
    Host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
    If you use the host network mode for a container, that container’s network stack is not isolated from the Docker host (the container shares the host’s networking namespace), and the container does not get its own IP-address allocated. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application is available on port 80 on the host’s IP address.
    Note: Given that the container does not have its own IP address when using host mode networking, port mapping does not take effect, and the -p, --publish, -P, and --publish-all options are ignored, producing a warning instead:
    WARNING: Published ports are discarded when using host network mode
    Host mode networking can be useful to optimize performance, as it does not require network address translation (NAT).
    The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.


    >>>> Networking from the container's point of view
    Published ports
    The type of network a container uses, whether it is a bridge, an overlay, a macvlan network, or a custom network plugin, is transparent from within the container. From the container's point of view, it has a network interface with an IP address, a gateway, a routing table, DNS services, and other networking details (assuming the container is not using the none network driver).
    By default, when you create or run a container using docker create or docker run, the container does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers not connected to the container's network, use the --publish or -p flag. This creates a firewall rule on the host which maps a container port to a port on the Docker host, making it reachable from outside.
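    Some common publish forms (the host address 192.168.1.100 is a hypothetical example):

```shell
# Map TCP port 80 in the container to port 8080 on the Docker host
docker run -d -p 8080:80 nginx:latest
# Bind the mapping to a specific host interface only
docker run -d -p 192.168.1.100:8080:80 nginx:latest
# Publish a UDP port
docker run -d -p 8080:80/udp nginx:latest
```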

    IP address and hostname
    By default, the container is assigned an IP address for every Docker network it connects to. The IP address is assigned from the pool assigned to the network, so the Docker daemon effectively acts as a DHCP server for each container. Each network also has a default subnet mask and gateway.
    When you connect an existing container to a different network using docker network connect, you can use the --ip or --ip6 flags on that command to specify the container’s IP address on the additional network.
    In the same way, a container's hostname defaults to the container's ID in Docker. You can override the hostname using --hostname. When connecting to an existing network using docker network connect, you can use the --alias flag to specify an additional network alias for the container on that network.

    DNS services
    By default, a container inherits the DNS settings of the host, as defined in the /etc/resolv.conf configuration file. Containers that use the default bridge network get a copy of this file, whereas containers that use a custom network use Docker’s embedded DNS server, which forwards external DNS lookups to the DNS servers configured on the host.
    The --dns flag sets the IP address of a DNS server. To specify multiple DNS servers, use multiple --dns flags. If the container cannot reach any of the IP addresses you specify, Google's public DNS server 8.8.8.8 is added so that your container can resolve internet domains.


    >>>>Application Data in Docker
    By default, all files created inside a container are stored on a writable container layer. This means that the data doesn't persist when the container no longer exists, it can be difficult to get the data out of the container if another process needs it, and writing to this layer is slower than writing to a volume.
    Advanced Types: volume, bind, tmpfs
    Volumes are stored in a part of the host filesystem managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
    Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the host or a Docker container can modify them at any time.
    tmpfs mounts are stored in the host’s memory only, and are never written to the host's filesystem.
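    The three mount types, sketched with the --mount flag (volume and host-path names are hypothetical; tmpfs is Linux-only):

```shell
# Volume: managed by Docker under /var/lib/docker/volumes/
docker run -d --mount type=volume,source=my-vol,target=/app nginx:latest
# Bind mount: any path on the host
docker run -d --mount type=bind,source=/srv/site,target=/app nginx:latest
# tmpfs: kept in the host's memory only, never written to disk
docker run -d --mount type=tmpfs,target=/app nginx:latest
```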

    Consider a situation where your image starts a lightweight web server. You could use NGINX as a base image, copy in your website’s source files, and package that into another image. Each time your website changed, you'd need to update the image and redeploy all of the containers serving your website. A better solution is to store the website in a shared volume which is attached to each of your web server containers. To update the website, you just update the shared volume.
    Multiple containers can use the same volume at the same time. For example, one container writes data while others read it.
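    The shared-volume web-server scenario above can be sketched as follows (volume and container names are hypothetical):

```shell
docker volume create website
# Two web servers serve the same content, mounted read-only
docker run -d --name web1 -v website:/usr/share/nginx/html:ro nginx:latest
docker run -d --name web2 -v website:/usr/share/nginx/html:ro nginx:latest
# Updating files in the "website" volume updates both containers at once
```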


    >>>>>docker run
    Detached (-d)
    To start a container in detached mode, you use -d=true or just -d option. By design, containers started in detached mode exit when the root process used to run the container exits. If you use -d with --rm, the container is removed when it exits or when the daemon exits, whichever happens first.
    To reattach to a detached container, use the docker attach command.

    Do not pass a service x start command to a detached container. For example, this command attempts to start the nginx service:
    $ docker run -d -p 80:80 my_image service nginx start
    This succeeds in starting the nginx service inside the container. However, it breaks the detached-container paradigm: the root process (service nginx start) returns, and the detached container stops as designed. As a result, the nginx service is started but cannot be used.

    Instead, to start a process such as the nginx web server do the following:
    $ docker run -d -p 80:80 my_image nginx -g 'daemon off;'

    To do input/output with a detached container use network connections or shared volumes. These are required because the container is no longer listening to the command line where docker run was run.

  • Original source: https://www.cnblogs.com/andycja/p/15955653.html