
【Redis】Deploying Cluster Mode with Docker Compose

By 站长

Master-replica replication solves Redis's performance problem, and Sentinel mode solves its availability problem, but as data volume grows, a single Redis node still runs into capacity limits. Redis therefore introduced cluster mode, which lets you run multiple masters and shard the data across them.

Docker images for Redis 6.x and later support cluster mode. Since Redis requires a cluster to have at least three master nodes, this exercise builds a three-master, three-replica cluster.

Caveats

Redis Cluster does not support NATed (port-forwarded) networks, so pay attention to network settings when building the cluster with Docker. Building it on a Docker subnet works: the cluster comes up and is usable, but only from inside the subnet; the host machine cannot use it. For example:

# First, create "hello" on the node mapped to 6380
zohar@ZOHAR-LAPTOP:~$ redis-cli -c -p 6380
127.0.0.1:6380> ping
PONG
127.0.0.1:6380> set hello world
OK
127.0.0.1:6380> exit

# Then, fetch "hello" from the node mapped to 6381
zohar@ZOHAR-LAPTOP:~$ redis-cli -c -p 6381
127.0.0.1:6381> ping
PONG
127.0.0.1:6381> get hello
-> Redirected to slot [866] located at 172.26.0.101:6379
>
# the client hangs

Because the key hello was already written via 6380, when we query it from 6381, that node redirects our client to the node that owns the key. But since the cluster was defined on an internal network, the redirect points at the internal IP and port, which is unreachable from outside, so the client is stuck.

So when building a Redis cluster with Docker, it is best to use the host's network directly, i.e. "host" mode. Because host networking is problematic on WSL2-based Docker, I ran the host-mode setup on a bare-metal Debian machine and the bridge-mode setup on WSL2.

Host network mode

Directory structure

In cluster mode, master and replica roles are decided automatically, so all machines are referred to simply as nodes.

cluster-host/
├── docker-compose.yml
├── node1
│   ├── data
│   └── redis.conf
├── node2
│   ├── data
│   └── redis.conf
├── node3
│   ├── data
│   └── redis.conf
├── node4
│   ├── data
│   └── redis.conf
├── node5
│   ├── data
│   └── redis.conf
└── node6
    ├── data
    └── redis.conf
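
The layout above can be created in one step, for example:

```shell
# Create the per-node directories shown in the tree above.
for i in 1 2 3 4 5 6; do
    mkdir -p "cluster-host/node$i/data"
done
```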

Compose File

version: "3"

services:
  node1:
    image: redis
    container_name: redis-cluster-node-1
    network_mode: "host"
    volumes:
      - "./node1/redis.conf:/etc/redis.conf"
      - "./node1/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always

  node2:
    image: redis
    container_name: redis-cluster-node-2
    network_mode: "host"
    volumes:
      - "./node2/redis.conf:/etc/redis.conf"
      - "./node2/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always

  node3:
    image: redis
    container_name: redis-cluster-node-3
    network_mode: "host"
    volumes:
      - "./node3/redis.conf:/etc/redis.conf"
      - "./node3/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always

  node4:
    image: redis
    container_name: redis-cluster-node-4
    network_mode: "host"
    volumes:
      - "./node4/redis.conf:/etc/redis.conf"
      - "./node4/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always

  node5:
    image: redis
    container_name: redis-cluster-node-5
    network_mode: "host"
    volumes:
      - "./node5/redis.conf:/etc/redis.conf"
      - "./node5/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always

  node6:
    image: redis
    container_name: redis-cluster-node-6
    network_mode: "host"
    volumes:
      - "./node6/redis.conf:/etc/redis.conf"
      - "./node6/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always

Node configuration

All nodes share the same configuration except for port and cluster-announce-bus-port:

port 6371
protected-mode no
daemonize no

################################ REDIS CLUSTER  ###############################

cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
# cluster-announce-ip 127.0.0.1
# cluster-announce-port 6380
cluster-announce-bus-port 16371

Across the six nodes, the ports used here are 6371-6376 and the bus ports are 16371-16376.

The file named by cluster-config-file is generated automatically; we don't write it ourselves, we only choose its file name.
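
Rather than editing six files by hand, the per-node configs can be generated with a small loop (a sketch; only port and cluster-announce-bus-port vary, as noted above):

```shell
# Generate node1..node6/redis.conf; only port (6371-6376) and
# cluster-announce-bus-port (16371-16376) differ between nodes.
for i in 1 2 3 4 5 6; do
    port=$((6370 + i))
    mkdir -p "node$i/data"
    cat > "node$i/redis.conf" <<EOF
port $port
protected-mode no
daemonize no

cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
cluster-announce-bus-port $((10000 + port))
EOF
done
```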

Starting the services and creating the cluster

Start all services

docker-compose up -d
Creating redis-cluster-node-6 ... done
Creating redis-cluster-node-1 ... done
Creating redis-cluster-node-3 ... done
Creating redis-cluster-node-2 ... done
Creating redis-cluster-node-4 ... done
Creating redis-cluster-node-5 ... done

Check that the nodes are running

docker ps
CONTAINER ID   IMAGE  COMMAND                  CREATED          STATUS          PORTS  NAMES
4db611346310   redis  "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes          redis-cluster-node-5
0f98fa0209af   redis  "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes          redis-cluster-node-2
59415143e37d   redis  "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes          redis-cluster-node-4
0a8856052297   redis  "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes          redis-cluster-node-3
30f965ef01b3   redis  "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes          redis-cluster-node-1
151f70be6dc6   redis  "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes          redis-cluster-node-6

At this point, a nodes.conf file has been generated under each node:

tree node1
node1
├── data
│   ├── dump.rdb
│   └── nodes.conf
└── redis.conf

cat node1/data/nodes.conf
bcaaee5d3304868c74f55868713b517f29db08f0 :0@0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

As you can see, each node's nodes.conf records its own information.

Create the cluster

Because we are using the host network, the nodes can be addressed simply as localhost plus port. --cluster-replicas sets the number of replicas per master.
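
The arithmetic behind the layout: redis-cli splits the node list into groups of (replicas + 1), so six nodes with one replica per master yields three masters. A quick sanity check:

```shell
# masters = nodes / (replicas + 1); here 6 / (1 + 1) = 3.
nodes=6
replicas=1
masters=$((nodes / (replicas + 1)))
echo "$masters masters, $((nodes - masters)) replicas"
```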

The cluster is created with redis-cli, which means you need a Redis client on the host. If you don't have one installed, no problem: start a redis container with a shared volume and copy everything under the container's /usr/local/bin/ into the volume; the host then has the full set of Redis tools, including redis-cli.

redis-cli --cluster create 127.0.0.1:6371 127.0.0.1:6372 127.0.0.1:6373 127.0.0.1:6374 127.0.0.1:6375 127.0.0.1:6376 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6375 to 127.0.0.1:6371
Adding replica 127.0.0.1:6376 to 127.0.0.1:6372
Adding replica 127.0.0.1:6374 to 127.0.0.1:6373
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 2a84fbf4c63ad7e0fda88783c41aadf87a403463 127.0.0.1:6371
   slots:[0-5460] (5461 slots) master
M: 8457fba97e70cffa25e42ce3fe8099081ea244f6 127.0.0.1:6372
   slots:[5461-10922] (5462 slots) master
M: 7644693f18b72312e2d9d6e0ad472a98ab3d33be 127.0.0.1:6373
   slots:[10923-16383] (5461 slots) master
S: afbc31f6bcbb3e07925ebe694221a185f4f47d17 127.0.0.1:6374
   replicates 7644693f18b72312e2d9d6e0ad472a98ab3d33be
S: c4a5391101de8b046d4494399f29ec479e4ab4b0 127.0.0.1:6375
   replicates 2a84fbf4c63ad7e0fda88783c41aadf87a403463
S: cbed04ea790b849dd9dc66b27d64b69c98cd8db7 127.0.0.1:6376
   replicates 8457fba97e70cffa25e42ce3fe8099081ea244f6
Can I set the above configuration? (type 'yes' to accept):

redis-cli asks whether this layout is acceptable; type yes and the cluster is set up accordingly:

Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 127.0.0.1:6371)
M: 2a84fbf4c63ad7e0fda88783c41aadf87a403463 127.0.0.1:6371
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 7644693f18b72312e2d9d6e0ad472a98ab3d33be 127.0.0.1:6373
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 8457fba97e70cffa25e42ce3fe8099081ea244f6 127.0.0.1:6372
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: c4a5391101de8b046d4494399f29ec479e4ab4b0 127.0.0.1:6375
   slots: (0 slots) slave
   replicates 2a84fbf4c63ad7e0fda88783c41aadf87a403463
S: afbc31f6bcbb3e07925ebe694221a185f4f47d17 127.0.0.1:6374
   slots: (0 slots) slave
   replicates 7644693f18b72312e2d9d6e0ad472a98ab3d33be
S: cbed04ea790b849dd9dc66b27d64b69c98cd8db7 127.0.0.1:6376
   slots: (0 slots) slave
   replicates 8457fba97e70cffa25e42ce3fe8099081ea244f6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

You can now see that each node's nodes.conf has changed, with the other nodes' information added:

cat node1/data/nodes.conf
e1ba5da93c357ec7d98c7b96455773525f1f8890 127.0.0.1:6375@16375 slave bcaaee5d3304868c74f55868713b517f29db08f0 0 1629280926614 1 connected
9f2decedb68110c12e5e4f320b560438c39915e6 127.0.0.1:6376@16376 slave af0bb933d8d155f4a18c0d99a969bbd2239c8ec0 0 1629280925611 2 connected
bcaaee5d3304868c74f55868713b517f29db08f0 127.0.0.1:6371@16371 myself,master - 0 1629280924000 1 connected 0-5460
0985adaae5b03bc13039fd2af7bcdecb29a12bac 127.0.0.1:6374@16374 slave e7eb0cb150332c13f993014c93db99e63aaf5935 0 1629280926000 3 connected
af0bb933d8d155f4a18c0d99a969bbd2239c8ec0 127.0.0.1:6372@16372 master - 0 1629280924607 2 connected 5461-10922
e7eb0cb150332c13f993014c93db99e63aaf5935 127.0.0.1:6373@16373 master - 0 1629280927617 3 connected 10923-16383
vars currentEpoch 6 lastVoteEpoch 0

Verify the cluster

After connecting, cluster info shows the cluster's configuration and cluster nodes shows the node list:

zohar@VM-4-2-debian:~$ redis-cli -c -p 6372
127.0.0.1:6372> cluster nodes
bcaaee5d3304868c74f55868713b517f29db08f0 127.0.0.1:6371@16371 master - 0 1629283228000 1 connected 0-5460
e1ba5da93c357ec7d98c7b96455773525f1f8890 127.0.0.1:6375@16375 slave bcaaee5d3304868c74f55868713b517f29db08f0 0 1629283228868 1 connected
9f2decedb68110c12e5e4f320b560438c39915e6 127.0.0.1:6376@16376 slave af0bb933d8d155f4a18c0d99a969bbd2239c8ec0 0 1629283229873 2 connected
0985adaae5b03bc13039fd2af7bcdecb29a12bac 127.0.0.1:6374@16374 slave e7eb0cb150332c13f993014c93db99e63aaf5935 0 1629283229000 3 connected
af0bb933d8d155f4a18c0d99a969bbd2239c8ec0 127.0.0.1:6372@16372 myself,master - 0 1629283226000 2 connected 5461-10922
e7eb0cb150332c13f993014c93db99e63aaf5935 127.0.0.1:6373@16373 master - 0 1629283226000 3 connected 10923-16383
127.0.0.1:6372>

Read/write test

127.0.0.1:6372> set Hello World
-> Redirected to slot [3030] located at 127.0.0.1:6371
OK
127.0.0.1:6371> get Hello
"World"
127.0.0.1:6371> set test jump
-> Redirected to slot [6918] located at 127.0.0.1:6372
OK
127.0.0.1:6372> get test
"jump"

As you can see, the slot that Hello hashes to lives on 6371, so the client is redirected to 6371 for the write; the slot that test hashes to lives on 6372, so the client is redirected to 6372.

Stopping a master node

docker stop redis-cluster-node-1
redis-cluster-node-1

The surviving nodes log the following:

# log from one of the surviving nodes
1:M 18 Aug 2021 10:42:46.364 * FAIL message received from e7eb0cb150332c13f993014c93db99e63aaf5935 about bcaaee5d3304868c74f55868713b517f29db08f0
1:M 18 Aug 2021 10:42:46.365 # Cluster state changed: fail
1:M 18 Aug 2021 10:42:47.070 # Failover auth granted to e1ba5da93c357ec7d98c7b96455773525f1f8890 for epoch 7
1:M 18 Aug 2021 10:42:47.112 # Cluster state changed: ok

Connect to another node and query the node list:

127.0.0.1:6372> cluster nodes
bcaaee5d3304868c74f55868713b517f29db08f0 127.0.0.1:6371@16371 master,fail - 1629283349279 1629283346000 1 disconnected
e1ba5da93c357ec7d98c7b96455773525f1f8890 127.0.0.1:6375@16375 master - 0 1629283432687 7 connected 0-5460
9f2decedb68110c12e5e4f320b560438c39915e6 127.0.0.1:6376@16376 slave af0bb933d8d155f4a18c0d99a969bbd2239c8ec0 0 1629283432000 2 connected
0985adaae5b03bc13039fd2af7bcdecb29a12bac 127.0.0.1:6374@16374 slave e7eb0cb150332c13f993014c93db99e63aaf5935 0 1629283431682 3 connected
af0bb933d8d155f4a18c0d99a969bbd2239c8ec0 127.0.0.1:6372@16372 myself,master - 0 1629283433000 2 connected 5461-10922
e7eb0cb150332c13f993014c93db99e63aaf5935 127.0.0.1:6373@16373 master - 0 1629283433691 3 connected 10923-16383

As you can see, node 6375 has been promoted to master.
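
The promotion can also be spotted mechanically by filtering the flags field of cluster nodes output. A small sketch (nodes.txt here is a shortened sample mirroring the output above; in practice you would save real output via redis-cli -c -p 6372 cluster nodes > nodes.txt):

```shell
# Print each node's address and role, marking failed masters.
cat > nodes.txt <<'EOF'
bcaaee5d 127.0.0.1:6371@16371 master,fail - 1629283349279 1629283346000 1 disconnected
e1ba5da9 127.0.0.1:6375@16375 master - 0 1629283432687 7 connected 0-5460
9f2deced 127.0.0.1:6376@16376 slave af0bb933 0 1629283432000 2 connected
EOF
# Field 2 is addr@bus-port, field 3 the flags; strip the bus port
# and collapse any flags containing "fail" to FAIL.
awk '{ split($2, a, "@"); role = $3; if (role ~ /fail/) role = "FAIL"; print a[1], role }' nodes.txt
```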

Bridge network mode

Directory structure

Identical to the host-mode layout.

cluster-subnet/
├── docker-compose.yml
├── node1
│   ├── data
│   └── redis.conf
├── node2
│   ├── data
│   └── redis.conf
├── node3
│   ├── data
│   └── redis.conf
├── node4
│   ├── data
│   └── redis.conf
├── node5
│   ├── data
│   └── redis.conf
└── node6
    ├── data
    └── redis.conf

Compose File

A subnet named redis-cluster is defined:

version: "3"

networks:
  redis-cluster:
    driver: bridge
    ipam:
      config:
        - subnet: 172.26.0.0/24

services:
  node1:
    image: redis
    container_name: redis-cluster-node-1
    ports:
      - "6371:6379"
    volumes:
      - "./node1/redis.conf:/etc/redis.conf"
      - "./node1/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always
    networks:
      redis-cluster:
        ipv4_address: 172.26.0.101

  node2:
    image: redis
    container_name: redis-cluster-node-2
    ports:
      - "6372:6379"
    volumes:
      - "./node2/redis.conf:/etc/redis.conf"
      - "./node2/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always
    networks:
      redis-cluster:
        ipv4_address: 172.26.0.102

  node3:
    image: redis
    container_name: redis-cluster-node-3
    ports:
      - "6373:6379"
    volumes:
      - "./node3/redis.conf:/etc/redis.conf"
      - "./node3/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always
    networks:
      redis-cluster:
        ipv4_address: 172.26.0.103

  node4:
    image: redis
    container_name: redis-cluster-node-4
    ports:
      - "6374:6379"
    volumes:
      - "./node4/redis.conf:/etc/redis.conf"
      - "./node4/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always
    networks:
      redis-cluster:
        ipv4_address: 172.26.0.104

  node5:
    image: redis
    container_name: redis-cluster-node-5
    ports:
      - "6375:6379"
    volumes:
      - "./node5/redis.conf:/etc/redis.conf"
      - "./node5/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always
    networks:
      redis-cluster:
        ipv4_address: 172.26.0.105

  node6:
    image: redis
    container_name: redis-cluster-node-6
    ports:
      - "6376:6379"
    volumes:
      - "./node6/redis.conf:/etc/redis.conf"
      - "./node6/data:/data"
    command: ["redis-server", "/etc/redis.conf"]
    restart: always
    networks:
      redis-cluster:
        ipv4_address: 172.26.0.106

Node configuration

All nodes use exactly the same configuration; nothing needs to change per node:

port 6379
protected-mode no
daemonize no

################################ REDIS CLUSTER  ###############################

cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
# cluster-announce-ip 127.0.0.1
# cluster-announce-port 6380
cluster-announce-bus-port 16379

Redis provides cluster-announce-ip and cluster-announce-port to let a cluster work behind a forwarded network, but my attempts with them were unsuccessful; I'll investigate another time.
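
For reference, such a setup would in principle announce the host-side mapping per node, something like this untested sketch for node1 (the IP is a placeholder for the Docker host's address, and the bus port would presumably also need a host mapping such as "16371:16379" in the compose file; as noted, this did not work in my environment):

```conf
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
# announce the host-side mapping instead of the container address
cluster-announce-ip 192.168.1.10
cluster-announce-port 6371
cluster-announce-bus-port 16371
```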

Starting the services and creating the cluster

Start all services

docker-compose up -d
Creating redis-cluster-node-5 ... done
Creating redis-cluster-node-6 ... done
Creating redis-cluster-node-3 ... done
Creating redis-cluster-node-4 ... done
Creating redis-cluster-node-2 ... done
Creating redis-cluster-node-1 ... done

Enter one of the containers and create the cluster:

docker exec -it redis-cluster-node-1 /bin/bash
root@53026834c158:/data# redis-cli --cluster create 172.26.0.101:6379 172.26.0.102:6379 172.26.0.103:6379 172.26.0.104:6379 172.26.0.105:6379 172.26.0.106:6379 --cluster-replicas 1 
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.26.0.105:6379 to 172.26.0.101:6379
Adding replica 172.26.0.106:6379 to 172.26.0.102:6379
Adding replica 172.26.0.104:6379 to 172.26.0.103:6379
M: 5857fffad0e75e119e86fffef8b606900a0b886a 172.26.0.101:6379
   slots:[0-5460] (5461 slots) master
M: 98d5586036314b23a0992f8db2ab21238ebc13d9 172.26.0.102:6379
   slots:[5461-10922] (5462 slots) master
M: 0371736e14b0d7ce8e7e5c0685b8a2a75d757822 172.26.0.103:6379
   slots:[10923-16383] (5461 slots) master
S: ca417fac5e3417ac906f1b5a4d854eae3751820e 172.26.0.104:6379
   replicates 0371736e14b0d7ce8e7e5c0685b8a2a75d757822
S: d619095c3bbe18ed027c538f49aeaafbdbe327a9 172.26.0.105:6379
   replicates 5857fffad0e75e119e86fffef8b606900a0b886a
S: 6a081bce8790e2a616c2b352da95ad877b8057f7 172.26.0.106:6379
   replicates 98d5586036314b23a0992f8db2ab21238ebc13d9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.26.0.101:6379)
M: 5857fffad0e75e119e86fffef8b606900a0b886a 172.26.0.101:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d619095c3bbe18ed027c538f49aeaafbdbe327a9 172.26.0.105:6379
   slots: (0 slots) slave
   replicates 5857fffad0e75e119e86fffef8b606900a0b886a
M: 0371736e14b0d7ce8e7e5c0685b8a2a75d757822 172.26.0.103:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: ca417fac5e3417ac906f1b5a4d854eae3751820e 172.26.0.104:6379
   slots: (0 slots) slave
   replicates 0371736e14b0d7ce8e7e5c0685b8a2a75d757822
S: 6a081bce8790e2a616c2b352da95ad877b8057f7 172.26.0.106:6379
   slots: (0 slots) slave
   replicates 98d5586036314b23a0992f8db2ab21238ebc13d9
M: 98d5586036314b23a0992f8db2ab21238ebc13d9 172.26.0.102:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Query the node list:

root@53026834c158:/data# redis-cli -c
127.0.0.1:6379> cluster nodes
d619095c3bbe18ed027c538f49aeaafbdbe327a9 172.26.0.105:6379@16379 slave 5857fffad0e75e119e86fffef8b606900a0b886a 0 1629284311000 1 connected
5857fffad0e75e119e86fffef8b606900a0b886a 172.26.0.101:6379@16379 myself,master - 0 1629284312000 1 connected 0-5460
0371736e14b0d7ce8e7e5c0685b8a2a75d757822 172.26.0.103:6379@16379 master - 0 1629284313881 3 connected 10923-16383
ca417fac5e3417ac906f1b5a4d854eae3751820e 172.26.0.104:6379@16379 slave 0371736e14b0d7ce8e7e5c0685b8a2a75d757822 0 1629284312000 3 connected
6a081bce8790e2a616c2b352da95ad877b8057f7 172.26.0.106:6379@16379 slave 98d5586036314b23a0992f8db2ab21238ebc13d9 0 1629284314885 2 connected
98d5586036314b23a0992f8db2ab21238ebc13d9 172.26.0.102:6379@16379 master - 0 1629284313000 2 connected 5461-10922

Verify the cluster

  • From inside the container network

    127.0.0.1:6379> set hello world
    OK
    127.0.0.1:6379> set test redsi
    -> Redirected to slot [6918] located at 172.26.0.102:6379
    OK
    
  • From the host

    redis-cli -c -p 6371
    127.0.0.1:6371> get hello
    "world"
    127.0.0.1:6371> get test
    -> Redirected to slot [6918] located at 172.26.0.102:6379
    
    # hangs
    

    The client cannot follow the redirect, so the cluster is unusable from the host.

Reposted from: https://juejin.cn/post/6997723668155482149