If pulling an image fails during installation with an error like:
error pulling image configuration: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/e1/e174261beaec7ef8b8fc7e3df6b62b87e442f3451d408e7fe4525b151a061ebd/data?verify=1659234882-n%2FDgaxLyXimg47lGxgCwJ8mM9dg%3D: dial tcp [2606:4700::6812:7a19]:443: connect: network is unreachable
Fix: edit the system file /etc/resolv.conf (vi /etc/resolv.conf), add the line nameserver 8.8.8.8, save, then restart Docker with systemctl restart docker.
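A minimal sketch of the whole sequence (assuming 8.8.8.8 is reachable from the host; the image name is only an example):

echo "nameserver 8.8.8.8" >> /etc/resolv.conf   # or edit the file with vi
systemctl restart docker
docker pull nginx:1.21.6                        # retry the pull to confirm the fix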
=============================================
Install nginx:
version: '3.0'
services:
  nginx:
    image: nginx:1.21.6
    restart: always
    container_name: nginx
    volumes:
      - nginx_html:/usr/share/nginx/html
      - nginx_config:/etc/nginx
      - nginx_log:/var/log/nginx
    ports:
      - "80:80"
    networks:
      - mongo_log
networks:
  mongo_log:
volumes:
  nginx_html:
  nginx_config:
  nginx_log:
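A quick way to bring this up and verify it (assuming the snippet is saved as docker-compose.yml in the current directory):

docker-compose up -d
docker-compose ps          # the nginx container should show "Up"
curl http://localhost:80   # should return the default nginx welcome page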
Install mongodb:
version: '3.0'
services:
  mongo:
    image: mongo:4.4.13
    restart: always
    container_name: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 8Dqx9rT8AyZwnYuV
    volumes:
      - mongo_db:/data/db
      - mongo_init:/usr/local/src/init
    ports:
      - "27017:27017"
    networks:
      - mongo_log
networks:
  mongo_log:
volumes:
  mongo_db:
  mongo_init:
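To check that the container accepts the root credentials above, a sketch using the mongo shell that ships inside the mongo:4.4 image:

docker exec -it mongo mongo -u root -p 8Dqx9rT8AyZwnYuV --authenticationDatabase admin --eval "db.runCommand({ ping: 1 })"   # expects { "ok" : 1 }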
Install redis:
version: '3.0'
services:
  redis:
    image: redis:6.2.6
    restart: always
    container_name: redis
    networks:
      - mongo_log
networks:
  mongo_log:
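A quick health check (redis-cli is included in the official redis image):

docker exec -it redis redis-cli ping   # expected reply: PONG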
Install zookeeper:
version: '3.0'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    restart: always
    container_name: zookeeper
    ports:
      - "2181:2181"
    networks:
      - mongo_log
networks:
  mongo_log:
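To confirm zookeeper is answering on the mapped port, one option is the four-letter "stat" command (a sketch, assuming nc is installed on the host and the four-letter words are enabled, which is the default in ZooKeeper 3.4.x):

echo stat | nc 127.0.0.1 2181   # prints server stats if zookeeper is healthy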
Install kafka:
version: '3.0'
services:
  kafka:
    image: wurstmeister/kafka
    restart: always
    container_name: kafka
    volumes:
      - etc_localtime:/etc/localtime
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.8.120   # change to your own host IP
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: 120
      KAFKA_MESSAGE_MAX_BYTES: 10000000
      KAFKA_REPLICA_FETCH_MAX_BYTES: 10000000
      KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS: 60000
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DELETE_RETENTION_MS: 1000
    networks:
      - mongo_log
networks:
  mongo_log:
volumes:
  etc_localtime:
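To smoke-test the broker, create and list a topic from inside the container (a sketch; the kafka-topics.sh script is assumed to be on the PATH in the wurstmeister/kafka image, and the topic name is only an example):

docker exec -it kafka kafka-topics.sh --create --topic test-log --partitions 3 --replication-factor 1 --zookeeper zookeeper:2181
docker exec -it kafka kafka-topics.sh --list --zookeeper zookeeper:2181   # should list test-log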
Install kafka-manager:
version: '3.0'
services:
  kafka-manager:
    image: sheepkiller/kafka-manager
    restart: always
    container_name: kafka-manager
    environment:
      ZK_HOSTS: 192.168.8.120   # change to your own zookeeper host IP
    ports:
      - "9001:9000"
    networks:
      - mongo_log
networks:
  mongo_log:
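Once it is up, open http://<host-ip>:9001 in a browser and add a cluster that points at the ZK_HOSTS address above. A quick reachability check from the host (a sketch):

curl -I http://127.0.0.1:9001   # should return an HTTP response once kafka-manager has started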
Install elasticsearch: the important part of elasticsearch is the configuration files under /usr/share/elasticsearch/config, which need the following changes:
① In the elasticsearch.yml config file:
network.host: change this to the host IP or 0.0.0.0 (otherwise a foreground start reports: BindTransportException[Failed to bind to [9300-9400]])
② Set vm.max_map_count:
sysctl -w vm.max_map_count=262144 (this is not changed inside any elasticsearch file, it is an OS-level setting; it is lost on reboot and has to be run again — see the persistence sketch below)
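To make the setting survive a reboot, it can also be written to /etc/sysctl.conf (a common approach, assuming root access):

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p   # reload; should print vm.max_map_count = 262144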
version: '3.0'
services:
  elasticsearch:
    image: daocloud.io/library/elasticsearch:6.5.4
    restart: always
    container_name: elasticsearch
    volumes:
      - elasticsearch:/usr/share/elasticsearch
    ports:
      - 9200:9200   # external (HTTP) port
      - 9300:9300   # internal (transport) port; map it too, otherwise Spring Boot integration cannot reach it
    environment:
      ES_JAVA_OPTS: -Xms256m -Xmx256m
    networks:
      - mongo_log
networks:
  mongo_log:
volumes:
  elasticsearch:
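After the container starts (it can take a minute), elasticsearch can be checked over the mapped HTTP port:

curl http://127.0.0.1:9200              # node/cluster info as JSON
curl http://127.0.0.1:9200/_cat/health  # one-line cluster health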
Install kibana: kibana's main config files live in /usr/share/kibana/config; once that directory is mapped out, edit kibana.yml:
server.port: 5601
server.name: kibana
server.host: 0.0.0.0
version: '3.0'
services:
  kibana:
    image: daocloud.io/library/kibana:6.5.4
    restart: always
    container_name: kibana
    volumes:
      - kibana:/usr/share/kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_URL=http://192.168.17.150:9200   # replace with the IP where your elasticsearch runs
    depends_on:
      - elasticsearch
    networks:
      - mongo_log
networks:
  mongo_log:
volumes:
  kibana:
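Once kibana has finished starting (the first boot can take a while), browse to http://<host-ip>:5601, or check its status API from the host:

curl http://127.0.0.1:5601/api/status   # JSON status once kibana is up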
Install logstash:
version: '3.0'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.15.2
    restart: always
    container_name: logstash
    volumes:
      - logstash:/usr/share/logstash
    ports:
      - 9600:9600
    depends_on:
      - elasticsearch
    networks:
      - mongo_log
networks:
  mongo_log:
volumes:
  logstash:
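Logstash exposes its monitoring API on the mapped 9600 port, which is a simple way to confirm the container is alive (pipeline definitions themselves still live under /usr/share/logstash inside the mapped volume):

curl http://127.0.0.1:9600                   # node info as JSON
curl http://127.0.0.1:9600/_node/pipelines   # currently configured pipelines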