

Deploying a production environment with docker-compose

The following is for our company's product, which needs a full environment set up at the customer's site. The operating system is CentOS 7.9.

Required component services

Required services and versions:
MySQL 5.6/5.7 (some MySQL 5.6-specific settings need to be disabled)
Redis (version of your choice)
Elasticsearch 7.17.2
Kibana 7.17.2
RocketMQ 4.8.0
Nginx (version of your choice)
Nacos (version of your choice)


On the server, first install docker and docker-compose
[root@docker-compose ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@docker-compose ~]# mv docker-ce.repo /etc/yum.repos.d/
[root@docker-compose ~]# yum -y install docker-ce
[root@docker-compose ~]# systemctl start docker && systemctl enable docker
[root@docker-compose ~]# docker -v
Docker version 20.10.16, build aa7e414


Configure a domestic registry mirror

[root@docker-compose ~]# vi /etc/docker/daemon.json
{ "registry-mirrors": ["https://cq20bk8v.mirror.aliyuncs.com"] }

Installing docker-compose

Install via pip3
[root@docker-compose ~]# yum install epel-release
[root@docker-compose ~]# yum install python3-pip 
[root@docker-compose ~]# pip3 install --upgrade pip 
[root@docker-compose ~]# pip3 install docker-compose 

You may hit this error here: ModuleNotFoundError: No module named 'setuptools_rust'
Fix: pip3 install -U pip setuptools
 
 
[root@docker-compose ~]# docker-compose --version


Binary installation

Download the docker-compose binary from GitHub and install it

  • Download the latest docker-compose binary
  • If GitHub is too slow, you can download it from DaoCloud

http://get.daocloud.io/

Docker Compose is hosted on GitHub, and access to it can be unstable.

You can also install Docker Compose quickly by running the following command.

curl -L https://get.daocloud.io/docker/compose/releases/download/v2.5.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

You can change the version in the URL to install whichever version you need.

Packing the images into tar files

I exported these images from our original server; if you don't have them, pull them again.

docker save newmq > your-storage-path/filename.tar

docker save mysql:5.6 >  /tar/mysql.5.6.tar
docker save redis:2.8 >  /tar/redis.2.8.tar
docker save elasticsearch:7.17.2  >  /tar/elasticsearch.7.17.2.tar
docker save kibana:7.17.2  >  /tar/kibana.7.17.2.tar
docker save rocketmqinc/rocketmq  >  /tar/rocketmq.tar
docker save nginx:1.21.6   >  /tar/nginx.1.21.6.tar
docker save nacos/nacos-server:2.0.3   >  /tar/nacos.2.0.3.tar
Upload the image tar packages
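
Before loading, the tar files have to be copied to the customer's server, for example with scp (the target host and path here are placeholders to adapt):

scp /tar/*.tar root@customer-server:/tar/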

To load them on the target machine, I wrote a small script

vim docker-load.sh

chmod +x docker-load.sh 
#!/bin/bash
docker load < /tar/elasticsearch.7.17.2.tar
docker load < /tar/kibana.7.17.2.tar
docker load < /tar/mysql.5.6.tar
docker load < /tar/nacos.2.0.3.tar
docker load < /tar/nginx.1.21.6.tar
docker load < /tar/redis.2.8.tar
docker load < /tar/rocketmq.tar
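
Run the script, then confirm with docker images that every image listed above was loaded:

./docker-load.sh
docker images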


Deploying the MySQL service

We could actually put all components' yml into a single file and start everything together; more on that later.

Create the directory
mkdir -p /usr/local/docker/mysql5.6
vim docker-compose.yml
Write the yaml file
version: "3"

services:
  mysql:
    image: mysql:5.6
    restart: always  
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=Bimuyu2022
    volumes:
      - "/data/container/mysql/data:/var/lib/mysql"
      - "/data/container/mysql/init:/docker-entrypoint-initdb.d"


Start
docker-compose  up -d


Create a MySQL database that Nacos will connect to later
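
A minimal sketch of creating that database from the compose directory (mysql is the service name defined above; the Nacos table schema still has to be imported into it, using the nacos-mysql.sql script shipped with the Nacos release):

docker-compose exec -T mysql mysql -uroot -pBimuyu2022 -e "CREATE DATABASE IF NOT EXISTS nacos DEFAULT CHARACTER SET utf8mb4;"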


Deploying the Redis service

Create the directory
mkdir -p /usr/local/docker/redis
vim docker-compose.yml
Write the yaml file
version: "3"

services:
  redis:
    image: redis:2.8
    restart: always
    ports:
      - 6379:6379
    volumes:
      - "/data/container/redis/data:/data"


Start
docker-compose  up -d
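
A quick sanity check from the compose directory (redis is the service name defined above):

docker-compose exec redis redis-cli ping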


Deploying the Elasticsearch service

Create the directory
mkdir -p /usr/local/docker/es
vim docker-compose.yml
Write the yaml file
version: "3.0"

services:
  elasticsearch:
    image: elasticsearch:7.17.2
    restart: always
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ports:
      - 9200:9200
    volumes:
      - "./data:/usr/share/elasticsearch/data"
      - "./plugins:/usr/share/elasticsearch/plugins"
      - "./config/jvm.options.d:/usr/share/elasticsearch/config/jvm.options.d"

  kibana:
    image: kibana:7.17.2
    restart: always
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
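
Elasticsearch in Docker usually needs some host-side preparation before the first start; a sketch (the paths match the volumes above, and uid 1000 is the user the official image runs as):

sysctl -w vm.max_map_count=262144        # persist in /etc/sysctl.conf if needed
mkdir -p ./data ./plugins ./config/jvm.options.d
chown -R 1000:1000 ./data ./plugins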


Start
docker-compose  up -d


Access Kibana (port 5601 as mapped above)

Deploying the RocketMQ service

Create the directory
mkdir -p /usr/local/docker/rocketmq
vim docker-compose.yml
Write the yaml file

In vim, pasting code that contains comments can mangle the indentation. The fix:

First put vim into paste mode by running :set paste, then enter insert mode and paste.

version: '3'
services:
  rocketmq-namesrv:
    image: apache/rocketmq:4.8.0
    container_name: rocketmq-namesrv
    restart: always
    ports:
      - 9876:9876
    volumes:
    # docker-compose.yml lives at /usr/local/ibimfish/rocketmq
    # ./namesrv/logs is the host path (relative to docker-compose.yml), /home/rocketmq/logs is the path inside the container
      - ./namesrv/logs:/home/rocketmq/logs
      - ./namesrv/store:/home/rocketmq/store
    environment:
      JAVA_OPT_EXT: "-Duser.home=/home/rocketmq -Xms512M -Xmx512M -Xmn128m"
    command: ["sh","mqnamesrv"]
    networks:
      rocketmq_net:
        aliases:
          - rocketmq-namesrv


  rocketmq-broker:
    image: apache/rocketmq:4.8.0
    container_name: rocketmq-broker
    restart: always
    ports:
      - 10909:10909
      - 10911:10911
    volumes:
      - ./broker/logs:/home/rocketmq/logs
      - ./broker/store:/home/rocketmq/store
      - ./broker/conf/broker.conf:/etc/rocketmq/broker.conf
    environment:
      JAVA_OPT_EXT: "-Duser.home=/home/rocketmq -Xms512M -Xmx512M -Xmn128m"
      # path inside the container
    command: ["sh","mqbroker","-c","/etc/rocketmq/broker.conf","-n","rocketmq-namesrv:9876","autoCreateTopicEnable=true"]
    depends_on:
      - rocketmq-namesrv
    networks:
      rocketmq_net:
        aliases:
          - rocketmq-broker


  rocketmq-console:
    image: styletang/rocketmq-console-ng
    container_name: rocketmq-console
    restart: always
    ports:
      - 8180:8080
    environment:
      JAVA_OPTS: "-Drocketmq.namesrv.addr=rocketmq-namesrv:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false"
    depends_on:
      - rocketmq-namesrv
    networks:
      rocketmq_net:
        aliases:
          - rocketmq-console

networks:
  rocketmq_net:
    name: rocketmq_net
    driver: bridge
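
The compose file above mounts ./broker/conf/broker.conf, so create it before starting; a minimal sketch (all values are assumptions to adapt, in particular brokerIP1, which should be the host IP that producers and consumers will use to reach the broker):

mkdir -p ./broker/conf
cat > ./broker/conf/broker.conf <<'EOF'
brokerClusterName = DefaultCluster
brokerName = broker-a
brokerId = 0
deleteWhen = 04
fileReservedTime = 48
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
brokerIP1 = 192.168.2.205
EOF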

Start
docker-compose  up -d


Deploying the Nginx service

Create the directory
mkdir -p /usr/local/docker/nginx
vim docker-compose.yml
Write the yaml file
version: "3.0"

services:
  nginx:
    image: nginx:1.21.6
    restart: always
    #network_mode: "host"
    ports:
      - 80:80
      - 443:443
    volumes:
      - "./nginx/html:/usr/share/nginx/html"
      - "./nginx/conf.d:/etc/nginx/conf.d"
      - "./nginx/ssl:/etc/nginx/ssl"
      - "./nginx/logs:/var/log/nginx"
Start
docker-compose  up -d


Deploying the Nacos service

Create the directory
mkdir -p /usr/local/docker/nacos
vim docker-compose.yml
Write the yaml file
version: '3'

services:
  nacos:
    # build is the folder containing the Dockerfile; if you don't use build, you can use image: nacos/nacos-server:1.4.3
    #build:
      #context: ../docker-file/nacos
    image: nacos/nacos-server:2.0.3
    container_name: nacos
    restart: always
    volumes:
      # log mount
      - ./logs:/home/nacos/logs
      # configuration file mount
      - ./application.properties:/home/nacos/conf/application.properties
    ports:
      - "8848:8848"
    environment:
      - PREFER_HOST_MODE=ip
      - MODE=standalone
      - SPRING_DATASOURCE_PLATFORM=mysql
      # MySQL connection settings
      - MYSQL_MASTER_SERVICE_HOST=192.168.2.205
      - MYSQL_MASTER_SERVICE_PORT=3306
      - MYSQL_MASTER_SERVICE_USER=root
      - MYSQL_MASTER_SERVICE_PASSWORD=Bimuyu2022
      # memory settings
      - JVM_XMS=512m
      - JVM_MMS=320m


Create the Nacos application.properties file

Just copy the content below, changing the database ip to your server's address

server.servlet.contextPath=/nacos
server.port=8848
spring.datasource.platform=mysql
# number of databases
db.num=1
db.url.0=jdbc:mysql://ip:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user=your-account
db.password=your-password
 
 
# Period for generating sync tasks, in milliseconds
# nacos.naming.distro.taskDispatchPeriod=200
 
### Data count of batch sync task:
# Number of keys per batch in a sync task
# nacos.naming.distro.batchSyncKeyCount=1000
### Retry delay in milliseconds if sync task failed:
# Retry interval for failed sync tasks, in milliseconds
# nacos.naming.distro.syncRetryDelay=5000
 
### If enable data warmup. If set to false, the server would accept request without local data preparation:
# Whether to warm up data when the server starts
# nacos.naming.data.warmup=true
 
### If enable the instance auto expiration, kind like of health check of instance:
# Whether to automatically remove expired ephemeral instances
# nacos.naming.expireInstance=true
# Whether to automatically clean up services with no online instances
nacos.naming.empty-service.auto-clean=true
# Initial cleanup delay, in milliseconds
nacos.naming.empty-service.clean.initial-delay-ms=50000
# Cleanup interval, in milliseconds
nacos.naming.empty-service.clean.period-time-ms=30000
 
 
#*************** CMDB Module Related Configurations ***************#
### The interval to dump external CMDB in seconds:
# Interval for a full dump of the external CMDB, in seconds
# nacos.cmdb.dumpTaskInterval=3600
### The interval of polling data change event in seconds:
# Polling interval for change events, in seconds
# nacos.cmdb.eventTaskInterval=10
 
### The interval of loading labels in seconds:
# Polling interval for the label set, in seconds
# nacos.cmdb.labelTaskInterval=300
 
### If turn on data loading task:
# Whether to enable CMDB data loading
# nacos.cmdb.loadDataAtStart=false
 
 
#*************** Metrics Related Configurations ***************#
### Metrics for prometheus
# Monitoring endpoints exposed for Prometheus
#management.endpoints.web.exposure.include=*
 
### Metrics for elastic search
# Whether to export metrics to Elasticsearch
management.metrics.export.elastic.enabled=false
# Elasticsearch address
#management.metrics.export.elastic.host=http://localhost:9200
 
### Metrics for influx
# Whether to export metrics to InfluxDB (a time-series database)
management.metrics.export.influx.enabled=false
# Database name
#management.metrics.export.influx.db=springboot
# Database address
#management.metrics.export.influx.uri=http://localhost:8086
# Whether to auto-create the database
#management.metrics.export.influx.auto-create-db=true
# Write consistency for each point
#management.metrics.export.influx.consistency=one
# Whether to GZIP-compress metric batches published to Influx
#management.metrics.export.influx.compressed=true
 
 
#*************** Access Log Related Configurations ***************#
### If turn on the access log:
# Whether to write the access log
server.tomcat.accesslog.enabled=true
 
### The access log pattern:
# Access log format
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i
 
### The directory of access log:
# Access log storage directory
server.tomcat.basedir=
 
 
#*************** Access Control Related Configurations ***************#
### If enable spring security, this option is deprecated in 1.2.0:
# Enable Spring Security access control
#spring.security.enabled=false
 
### The ignore urls of auth, is deprecated in 1.2.0:
# URLs exempted from auth checks
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**
 
### The auth system to use, currently only 'nacos' is supported:
# Auth system type
nacos.core.auth.system.type=nacos
 
### If turn on auth system:
# Whether to enable auth
nacos.core.auth.enabled=false
 
### The token expiration in seconds:
# Token expiration time, in seconds
nacos.core.auth.default.token.expire.seconds=36000
 
### The default token:
# Default token secret key
nacos.core.auth.default.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789
 
### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
# Delay before updated auth information takes effect
nacos.core.auth.caching.enabled=true
 
 
#*************** Istio Related Configurations ***************#
### If turn on the MCP server:
# Whether to enable the MCP server
nacos.istio.mcp.server.enabled=false
 
 
 
###*************** Add from 1.3.0 ***************###
 
 
#*************** Core Related Configurations ***************#
 
### set the WorkerID manually
# Snowflake worker ID used for data primary keys
# nacos.core.snowflake.worker-id=
 
### Member-MetaData
# nacos.core.member.meta.site=
# nacos.core.member.meta.adweight=
# nacos.core.member.meta.weight=
 
### MemberLookup
### Addressing pattern category, If set, the priority is highest
# Addressing (member lookup) mode
# nacos.core.member.lookup.type=[file,address-server,discovery]
## Set the cluster list with a configuration file or command-line argument
# Set the cluster member list via a configuration file or command-line argument
# nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
## for DiscoveryMemberLookup
# If you want to use cluster node self-discovery, turn this parameter on
# Automatic member discovery
# nacos.member.discovery=false
## for AddressServerMemberLookup
# Maximum number of retries to query the address server upon initialization
# Maximum retries when querying the address server at initialization
# nacos.core.address-server.retry=5
 
#*************** JRaft Related Configurations ***************#
 
### Sets the Raft cluster election timeout, default value is 5 second
# Election timeout
# nacos.core.protocol.raft.data.election_timeout_ms=5000
### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
# Interval between periodic cluster data snapshots
# nacos.core.protocol.raft.data.snapshot_interval_secs=30
### Requested retries, default value is 1
# Number of retries for failed requests
# nacos.core.protocol.raft.data.request_failoverRetries=1
### raft internal worker threads
# Number of worker threads
# nacos.core.protocol.raft.data.core_thread_num=8
### Number of threads required for raft business request processing
# Number of threads for client service requests
# nacos.core.protocol.raft.data.cli_service_thread_num=4
### raft linear read strategy, defaults to index
# Linearizable read strategy
# nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
### rpc request timeout, default 5 seconds
# RPC request timeout
# nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000


Start
docker-compose  up -d


The log shows a successful start


Access
http://ip:8848/nacos/#/login

Account: nacos
Password: nacos


A problem I ran into: Nacos didn't come up and the container log reported errors

Check the container log

docker logs -f c6a7


Problem two


Check the Nacos configuration file
vim application.properties


Deploying the backend jar services

I'm running three backend jar packages here. Development provides the corresponding jars, each jar is written into a Dockerfile, and then a yml file is written to start the container.

Deploying keeper (Java)

Create the directory
mkdir java
[root@210 java]# pwd
/usr/local/docker/java
mkdir keeper     # put the jar package here


Write the Dockerfile
FROM java:8
# author
MAINTAINER aike
# mount directory
VOLUME /home/keeper
# copy the jar file into the image
ADD keeper-provider.jar keeper.jar
# command to start the service
ENTRYPOINT ["java","-jar","keeper.jar","--spring.profiles.active=dev"]
# exposed port
EXPOSE 8080


docker build the image
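
Build the image from the keeper directory, tagging it with the name the compose file below uses:

docker build -t keeper-huarun:0802 .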


Write the docker-compose.yml file
version: '3'
services:
  keeper:
    image: keeper-huarun:0802
    container_name: keeper
    restart: always
    network_mode: "host"
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 8080:8080
    volumes:
      - ./logs:/logs


Start
docker-compose up -d

I hit an error here:
Because the service uses network_mode: "host" and the image already exposes the port, the ports mapping is not needed here and can be removed from the yml.


Success

Deploying lane (Java)

Create the directory
mkdir java
[root@210 java]# pwd
/usr/local/docker/java
mkdir lane    # put the jar package here


Write the Dockerfile
FROM java:8
# author
MAINTAINER aike
# mount directory
VOLUME /home/lane
# copy the jar file into the image
ADD lane-provider.jar lane.jar
# command to start the service
ENTRYPOINT ["java","-jar","lane.jar","--spring.profiles.active=dev"]
# exposed port
EXPOSE 8082


docker build the image
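
As with keeper, build the image with the tag the compose file below references:

docker build -t lane-huarun:0802 .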


Write the docker-compose.yml file
version: '3'
services:
  lane:
    image: lane-huarun:0802
    container_name: lane
    restart: always
    network_mode: "host"
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 8082:8082
    volumes:
      - ./logs:/logs


Start
docker-compose up -d


Deploying pond (Java)

Create the directory
mkdir java
[root@210 java]# pwd
/usr/local/docker/java
mkdir pond   # put the jar package here


Write the Dockerfile
FROM java:8
# author
MAINTAINER qingyangzi
# mount directory
VOLUME /home/pond
# copy the jar file into the image
ADD pond-provider.jar pond.jar
# command to start the service
ENTRYPOINT ["java","-jar","pond.jar","--spring.profiles.active=dev"]
# exposed port
EXPOSE 8081


docker build the image
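
Same again: build with the tag the compose file below references:

docker build -t pond-huarun:0802 .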


Write the docker-compose.yml file
version: '3'
services:
  pond:
    image: pond-huarun:0802
    container_name: pond
    restart: always
    network_mode: "host"
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 8081:8081
    volumes:
      - ./logs:/logs


Start
docker-compose up -d


Future work: optimizing deployment for offline environments and adding more externally mapped directories and files. If you have a better approach, please message me. Writing this up wasn't easy, so please give it a like.