K8s in Practice: Deploying Standalone Redis and Redis Cluster
Redis is an open-source, BSD-licensed, non-relational (NoSQL) database written in C, released in 2009 by the Italian developer Salvatore Sanfilippo. It is an in-memory store and currently one of the most popular key-value databases; in effect it offers memory shared remotely over the network as a service. memcache provides similar functionality, but compared with memcache, redis adds easy scaling, high performance, and data persistence. Its main use cases are: session sharing, commonly used in web clusters to share sessions across multiple tomcat or PHP web servers; message queues, such as log buffering for ELK and publish/subscribe for some applications; counters, commonly used for access leaderboards, product view counts, and other count-related statistics; and caching, commonly used for query results, e-commerce product information, news content, and so on. Unlike memcache, redis supports persistence: it can save in-memory data to disk, and after a redis service or server restart it can restore the data into memory from the backup file and continue serving.
1.3 Building the redis image
Redis data (mainly the RDB snapshots) lives on the storage system, so even if the redis pod dies the data is not lost: when standalone redis is deployed on k8s and the pod dies, k8s rebuilds the pod, mounts the same PVC into it, and redis loads the snapshot, so a pod failure does not lose data.
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# ll
total 1784
drwxr-xr-x  2 root root    4096 Jun  5 15:22 ./
drwxr-xr-x 11 root root    4096 Aug  9  2022 ../
-rw-r--r--  1 root root     717 Jun  5 15:20 Dockerfile
-rwxr-xr-x  1 root root     235 Jun  5 15:21 build-command.sh*
-rw-r--r--  1 root root 1740967 Jun 22  2021 redis-4.0.14.tar.gz
-rw-r--r--  1 root root   58783 Jun 22  2021 redis.conf
-rwxr-xr-x  1 root root      84 Jun  5 15:21 run_redis.sh*
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat Dockerfile
#Redis Image
# import the custom centos base image
FROM harbor.ik8s.cc/baseimages/magedu-centos-base:7.9.2009

# add the redis source tarball to /usr/local/src
ADD redis-4.0.14.tar.gz /usr/local/src
# compile and install redis
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server /usr/sbin/ && mkdir -pv /data/redis-data
# add the redis config file
ADD redis.conf /usr/local/redis/redis.conf
# expose the redis service port
EXPOSE 6379

#ADD run_redis.sh /usr/local/redis/run_redis.sh
#CMD ["/usr/local/redis/run_redis.sh"]

# add the startup script
ADD run_redis.sh /usr/local/redis/entrypoint.sh
# start redis
ENTRYPOINT ["/usr/local/redis/entrypoint.sh"]
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat build-command.sh
#!/bin/bash
TAG=$1
#docker build -t harbor.ik8s.cc/magedu/redis:${TAG} .
#sleep 3
#docker push harbor.ik8s.cc/magedu/redis:${TAG}
nerdctl build -t harbor.ik8s.cc/magedu/redis:${TAG} .
nerdctl push harbor.ik8s.cc/magedu/redis:${TAG}
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat run_redis.sh
#!/bin/bash
# redis start command
/usr/sbin/redis-server /usr/local/redis/redis.conf
# use tail -f as the foreground process that keeps the pod alive
tail -f /etc/hosts
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# grep -v "^#\|^$" redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis#
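With these files in place, the image is built and pushed by passing a tag to the build script, and a quick run confirms the container starts and answers on 6379. This is a sketch: the v4.0.14 tag matches the deployment manifest used later, the test node is a placeholder, and 123456 is the requirepass value from redis.conf above.

root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# bash build-command.sh v4.0.14
# run the image as a disposable container on any container host (placeholder node)
root@k8s-node01:~# nerdctl run -d --name redis-test -p 6379:6379 harbor.ik8s.cc/magedu/redis:v4.0.14
# connect from a remote host; a PONG reply means the server is up
root@harbor:~# redis-cli -h <node-ip> -p 6379 -a 123456 ping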
1.3.1 Verify the redis image was pushed to harbor
1.4 Testing the redis image
1.4.1 Run the redis image as a container and verify it works
1.4.2 Connect to redis remotely and verify the connection works
Being able to run the redis image as a container, and to read and write data from a remote host, shows that the redis image we built is good.
1.5 Creating the PV and PVC
1.5.1 Prepare the redis data directory on the NFS server
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis-datadir-1
mkdir: created directory '/data/k8sdata/magedu/redis-datadir-1'
root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver *(rw,no_root_squash)
/data/k8sdata/mysite *(rw,no_root_squash)

/data/k8sdata/magedu/images *(rw,no_root_squash)
/data/k8sdata/magedu/static *(rw,no_root_squash)

/data/k8sdata/magedu/zookeeper-datadir-1 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-2 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-3 *(rw,no_root_squash)

/data/k8sdata/magedu/redis-datadir-1 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [16]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
1.5.2 Create the PV
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/magedu/redis-datadir-1
    server: 192.168.0.42
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
1.5.3 Create the PVC
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
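Applying both manifests and checking that the claim binds might look like this (a sketch; the PVC must show Bound before the deployment below can mount it):

root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# kubectl apply -f redis-persistentvolume.yaml
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# kubectl apply -f redis-persistentvolumeclaim.yaml
# STATUS should read Bound for both objects
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# kubectl get pv redis-datadir-pv-1
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# kubectl get pvc redis-datadir-pvc-1 -n magedu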
1.6 Deploying the redis service
root@k8s-master01:~/k8s-data/yaml/magedu/redis# cat redis.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.ik8s.cc/magedu/redis:v4.0.14
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/data/redis-data/"
              name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 36379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
root@k8s-master01:~/k8s-data/yaml/magedu/redis#
1.6.1 Adjust the NodePort port range
The error above says our service port is out of range: nodePort 36379 falls outside the service port range specified when the k8s cluster was initialized (the Kubernetes default is 30000-32767).
1.6.2 Reload kube-apiserver.service and restart kube-apiserver
Edit /etc/systemd/system/kube-apiserver.service and change the value of its --service-node-port-range option; the other two master nodes need the same change.
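The relevant fragment of the unit file would look roughly like this (a sketch; the exact ExecStart line depends on how the cluster was installed, and the widened range 30000-65000 is an assumption; any range covering 36379 works):

# /etc/systemd/system/kube-apiserver.service (excerpt)
ExecStart=/usr/local/bin/kube-apiserver \
  ...other options... \
  --service-node-port-range=30000-65000 \
  ...other options...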
root@k8s-master01:~# systemctl daemon-reload
root@k8s-master01:~# systemctl restart kube-apiserver.service
root@k8s-master01:~#
Deploy redis again.
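Redeploying is just re-applying the manifest; once the pod is Running, the check in section 1.7 below reads and writes through the NodePort. A sketch, with a placeholder node IP and the requirepass value from redis.conf:

root@k8s-master01:~/k8s-data/yaml/magedu/redis# kubectl apply -f redis.yaml
root@k8s-master01:~/k8s-data/yaml/magedu/redis# kubectl get pods -n magedu -o wide
# write and read back a key through any node's port 36379
root@harbor:~# redis-cli -h <node-ip> -p 36379 -a 123456 set k1 v1
root@harbor:~# redis-cli -h <node-ip> -p 36379 -a 123456 get k1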
1.7 Verify redis reads and writes
1.7.1 Connect to port 36379 on any k8s node and test reading and writing redis data
1.8 Verify whether data survives a redis pod rebuild
1.8.1 Check whether the redis snapshot file lands on the backing store
root@harbor:~# ll /data/k8sdata/magedu/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Jun  5 16:29 ./
drwxr-xr-x 8 root root 4096 Jun  5 15:53 ../
-rw-r--r-- 1 root root  116 Jun  5 16:29 dump.rdb
root@harbor:~#
1.8.2 Delete the redis pod and wait for k8s to rebuild it
1.8.3 Verify the data in the rebuilt redis pod
We can see that after we wrote data into redis, redis detected the key changes within the configured save window and took a snapshot; because the redis data directory is an NFS volume mounted through the PV/PVC, the snapshot file shows up in the corresponding NFS directory.
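For reference, the delete-and-rebuild step of 1.8.2 can be driven like this (a sketch; the deployment pod name carries a generated suffix, shown here as a placeholder):

# find the current pod name, delete it, then watch the replacement come up
root@k8s-master01:~# kubectl get pods -n magedu | grep deploy-devops-redis
root@k8s-master01:~# kubectl delete pod deploy-devops-redis-<suffix> -n magedu
root@k8s-master01:~# kubectl get pods -n magedu -w
# once Running, the rebuilt pod has loaded dump.rdb from the NFS-backed PVC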
We can see that the rebuilt redis pod still holds the previous pod's data, which shows that k8s mounted the previous pod's PVC during the rebuild.
2 Deploying redis cluster on k8s
2.1 PV/PVC and the Redis Cluster StatefulSet
2.2 Creating the PVs
2.2.1 Prepare the redis cluster data directories on NFS
A redis cluster is a bit more involved than standalone redis. We again use PV/PVC to keep the cluster's data on the storage system, but unlike standalone redis, redis cluster runs every incoming key through CRC16 and takes the result modulo 16384; the resulting number is the key's slot. The 16384 slots are divided evenly among all the cluster's master nodes, so each master stores one part of the cluster's data. That raises a problem: if a master goes down, the data in its slots becomes unavailable. To avoid a single point of failure on the masters, each master gets a dedicated slave that backs it up; if the master goes down, its slave takes over and keeps serving the cluster, giving the redis cluster masters high availability. As shown in the figure above, we use a 3-master/3-slave redis cluster: redis-0, 1, and 2 are masters, and redis-3, 4, and 5 are their respective slaves, backing up their masters' data. All six pods keep their data on the storage system through the k8s PV/PVC.
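Once the cluster is up (section 2.5), this slot mapping can be observed directly: CLUSTER KEYSLOT returns CRC16(key) mod 16384 for any key name (a sketch, using a pod name from the StatefulSet created below):

root@k8s-master01:~# kubectl exec -it redis-0 -n magedu -- redis-cli cluster keyslot key1
# returns an integer in 0..16383; the master that owns that slot stores the key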
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis{0,1,2,3,4,5}
mkdir: created directory '/data/k8sdata/magedu/redis0'
mkdir: created directory '/data/k8sdata/magedu/redis1'
mkdir: created directory '/data/k8sdata/magedu/redis2'
mkdir: created directory '/data/k8sdata/magedu/redis3'
mkdir: created directory '/data/k8sdata/magedu/redis4'
mkdir: created directory '/data/k8sdata/magedu/redis5'
root@harbor:~# tail -6 /etc/exports
/data/k8sdata/magedu/redis0 *(rw,no_root_squash)
/data/k8sdata/magedu/redis1 *(rw,no_root_squash)
/data/k8sdata/magedu/redis2 *(rw,no_root_squash)
/data/k8sdata/magedu/redis3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis4 *(rw,no_root_squash)
/data/k8sdata/magedu/redis5 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [16]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [18]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis0".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [19]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [20]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis2".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [21]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis3".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [22]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis4".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [23]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis5".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/magedu/redis5
exporting *:/data/k8sdata/magedu/redis4
exporting *:/data/k8sdata/magedu/redis3
exporting *:/data/k8sdata/magedu/redis2
exporting *:/data/k8sdata/magedu/redis1
exporting *:/data/k8sdata/magedu/redis0
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
2.2.2 Create the PVs
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat pv/redis-cluster-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis0

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis1

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis2

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis3

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis4

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis5
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3 Deploying the redis cluster
2.3.1 Create a configmap from the redis.conf file
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.2 Create the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl create cm redis-conf --from-file=./redis.conf -n magedu
configmap/redis-conf created
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl get cm -n magedu
NAME               DATA   AGE
kube-root-ca.crt   1      35h
redis-conf         1      6s
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.3 Verify the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl describe cm redis-conf -n magedu
Name:         redis-conf
Namespace:    magedu
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

BinaryData
====

Events:  <none>
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.4 Deploy the redis cluster
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: magedu
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis
    port: 6379
  clusterIP: None

---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
  namespace: magedu
  labels:
    app: redis
spec:
  type: NodePort
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis-access
    protocol: TCP
    port: 6379
    targetPort: 6379
    nodePort: 36379

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: magedu
spec:
  serviceName: redis
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis:4.0.14
        command:
          - "redis-server"
        args:
          - "/etc/redis/redis.conf"
          - "--protected-mode"
          - "no"
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
        - containerPort: 16379
          name: cluster
          protocol: TCP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
        - name: data
          mountPath: /var/lib/redis
      volumes:
      - name: conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
  volumeClaimTemplates:
  - metadata:
      name: data
      namespace: magedu
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
The manifest above uses a StatefulSet controller to create 6 pod replicas, each using the config file from the configmap as its redis configuration. The volumeClaimTemplates let each pod automatically bind a PV by creating its own PVC in the magedu namespace: as long as k8s has free PVs, each pod gets a PVC created from the template. Alternatively we could provision PVCs automatically with a storage class, or create them ahead of time; in general, with a StatefulSet, the volumeClaimTemplates approach is used so the pods create their PVCs automatically (provided k8s has enough PVs available).
Apply the manifest to deploy the redis cluster.
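A sketch of the apply step; the StatefulSet creates the pods in order, named redis-0 through redis-5, each bound to its own PVC named data-redis-<ordinal>:

root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl apply -f redis.yaml
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl get pods -n magedu -o wide
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl get pvc -n magedu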
2.4 Initializing the redis cluster
2.4.1 Create a temporary container on k8s and install the redis cluster initialization tool
Pods created by a StatefulSet are named <statefulset-name>-<ordinal>, and the PVCs created from the volumeClaimTemplates are named <template-name>-<pod-name>, i.e. <template-name>-<statefulset-name>-<ordinal>.
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl run -it ubuntu1804 --image=ubuntu:18.04 --restart=Never -n magedu bash
If you don't see a command prompt, try pressing enter.
root@ubuntu1804:/#
root@ubuntu1804:/# apt update
# install the necessary tools
root@ubuntu1804:/# apt install python2.7 python-pip redis-tools dnsutils iputils-ping net-tools
# upgrade pip
root@ubuntu1804:/# pip install --upgrade pip
# install redis-trib, the redis cluster initialization tool, with pip
root@ubuntu1804:/# pip install redis-trib==0.5.1
root@ubuntu1804:/#
2.4.2 Initialize the redis cluster
root@ubuntu1804:/# redis-trib.py create \
  `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
  `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
  `dig +short redis-2.redis.magedu.svc.cluster.local`:6379
2.4.3 Assign a slave to each master
Assign redis-3 as the slave of redis-0.
Because pods created by a StatefulSet have stable names, we can initialize the redis cluster simply by resolving the pod names to their IP addresses. On traditional VMs or physical machines we could use IP addresses directly since those are fixed; on k8s, pod IPs are not stable.
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-3.redis.magedu.svc.cluster.local`:6379
Assign redis-4 as the slave of redis-1.
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-4.redis.magedu.svc.cluster.local`:6379
Assign redis-5 as the slave of redis-2.
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-2.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-5.redis.magedu.svc.cluster.local`:6379
2.5 Verifying redis cluster status
2.5.1 Exec into any redis cluster pod and check the cluster info
2.5.2 Check the cluster nodes
2.5.3 Check the current node's info
The cluster node listing records each master's node ID and each slave's node ID; a slave entry carries the ID of its master, indicating which master's data that slave replicates.
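These checks can be run from any cluster pod, since redis-cli ships in the redis:4.0.14 image (a sketch):

# overall health: cluster_state should be ok and cluster_known_nodes should be 6
root@k8s-master01:~# kubectl exec -it redis-0 -n magedu -- redis-cli cluster info
# topology: each slave line carries the node id of the master it replicates
root@k8s-master01:~# kubectl exec -it redis-0 -n magedu -- redis-cli cluster nodes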
127.0.0.1:6379> info
# Server
redis_version:4.0.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:165c932261a105d7
redis_mode:cluster
os:Linux 5.15.0-73-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:1
run_id:aa8ef00d843b4f622374dbb643cf27cdbd4d5ba3
tcp_port:6379
uptime_in_seconds:4303
uptime_in_days:0
hz:10
lru_clock:8272053
executable:/data/redis-server
config_file:/etc/redis/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:2642336
used_memory_human:2.52M
used_memory_rss:5353472
used_memory_rss_human:5.11M
used_memory_peak:2682248
used_memory_peak_human:2.56M
used_memory_peak_perc:98.51%
used_memory_overhead:2559936
used_memory_startup:1444856
used_memory_dataset:82400
used_memory_dataset_perc:6.88%
total_system_memory:16740012032
total_system_memory_human:15.59G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:2.03
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1685992849
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:245760
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
aof_current_size:0
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0

# Stats
total_connections_received:7
total_commands_processed:17223
instantaneous_ops_per_sec:1
total_net_input_bytes:1530962
total_net_output_bytes:108793
instantaneous_input_kbps:0.04
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:1
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:853
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:1
slave0:ip=10.200.155.175,port=6379,state=online,offset=1120,lag=1
master_replid:60381a28fee40b44c409e53eeef49215a9d3b0ff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1120
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1120

# CPU
used_cpu_sys:12.50
used_cpu_user:7.51
used_cpu_sys_children:0.01
used_cpu_user_children:0.00

# Cluster
cluster_enabled:1

# Keyspace
127.0.0.1:6379>
2.5.4 Verify redis cluster reads and writes
2.5.4.1 Connect to the redis cluster manually and read/write data
2.5.4.2 Use a python script to connect to the redis cluster and read/write data
When connecting to a cluster master manually, one caveat is that when a key's CRC16 hash modulo 16384 maps to a slot that does not live on the current node, redis replies with a redirection telling us where that key should be written. From the screenshots above we can see the redis cluster reads and writes data normally.
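redis-cli's -c flag enables cluster mode, which follows MOVED redirections automatically instead of surfacing them as errors (a sketch):

# without -c, a write whose slot lives elsewhere returns a MOVED error;
# with -c, the client hops to the owning master and retries
root@k8s-master01:~# kubectl exec -it redis-0 -n magedu -- redis-cli -c set key1 value1
root@k8s-master01:~# kubectl exec -it redis-0 -n magedu -- redis-cli -c get key1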
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis-client-test.py
#!/usr/bin/env python
#coding:utf-8
#Author:Zhang ShiJie
#python 2.7/3.8
#pip install redis-py-cluster

import sys, time
from rediscluster import RedisCluster

def init_redis():
    startup_nodes = [
        {'host': '192.168.0.34', 'port': 36379},
        {'host': '192.168.0.35', 'port': 36379},
        {'host': '192.168.0.36', 'port': 36379},
        {'host': '192.168.0.34', 'port': 36379},
        {'host': '192.168.0.35', 'port': 36379},
        {'host': '192.168.0.36', 'port': 36379},
    ]
    try:
        conn = RedisCluster(startup_nodes=startup_nodes,
                            # remember to set the password here if one is required
                            decode_responses=True,
                            password='')
        print("connected successfully!", conn)
        #conn.set("key-cluster","value-cluster")
        for i in range(100):
            conn.set("key%s" % i, "value%s" % i)
            time.sleep(0.1)
            data = conn.get("key%s" % i)
            print(data)
        #return conn
    except Exception as e:
        print("connect error ", str(e))
        sys.exit(1)

init_redis()
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
Run the script to write data into the redis cluster.
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# python redis-client-test.py
Traceback (most recent call last):
  File "/root/k8s-data/yaml/magedu/redis-cluster/redis-client-test.py", line 8, in <module>
    from rediscluster import RedisCluster
ModuleNotFoundError: No module named 'rediscluster'
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
The error says the rediscluster module cannot be found; the fix is simply to install the redis-py-cluster module with pip.
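The fix as a sketch (the PyPI package is named redis-py-cluster, while the module it installs is named rediscluster):

root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# pip install redis-py-cluster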
Install the redis-py-cluster module.
Run the script to connect to the redis cluster and read/write data.
Connect to the redis pods and verify the data was written correctly.
From the screenshots above we can see that each of the three redis cluster master pods holds a portion of the keys rather than all of them, which shows that the python script wrote the data into the redis cluster correctly.
Verify whether data can be read normally from a slave node.
The screenshot above shows that data cannot be read from a slave node.
Read the data from the slave's corresponding master node instead.
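This matches standard Redis Cluster behavior: by default a replica redirects reads to the master that owns the slot (a sketch; redis-3 is a slave at this point):

# a read against a slave returns a MOVED redirection to the owning master
root@k8s-master01:~# kubectl exec -it redis-3 -n magedu -- redis-cli get key1
# with -c the client follows the redirection and reads from the master
root@k8s-master01:~# kubectl exec -it redis-3 -n magedu -- redis-cli -c get key1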
The verification above shows that in a redis cluster only the masters can read and write data; a slave only backs up its master's data and does not serve reads or writes.
2.6 Verifying redis cluster high availability
2.6.1 On a k8s node, push the redis:4.0.14 image to the local harbor
Retag the image.
root@k8s-node01:~# nerdctl tag redis:4.0.14 harbor.ik8s.cc/redis-cluster/redis:4.0.14
Push the redis image to the local harbor.
root@k8s-node01:~# nerdctl push harbor.ik8s.cc/redis-cluster/redis:4.0.14
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.list.v2+json, sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625)
WARN[0000] skipping verifying HTTPS certs for "harbor.ik8s.cc"
index-sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625:    done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:5bd4fe08813b057df2ae55003a75c39d80a4aea9f1a0fbc0fbd7024edf555786: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:191c4017dcdd3370f871a4c6e7e1d55c7d9abed2bebf3005fb3e7d12161262b8:   done |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.4 s  total: 8.5 Ki (6.1 KiB/s)
root@k8s-node01:~#
2.6.2 Change the image and imagePullPolicy in the redis cluster manifest
2.6.3 Re-apply the redis cluster manifest
Switching the image to the local harbor copy and adjusting the pull policy is what makes it easy to test the redis cluster's high availability.
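The two fields changed in the StatefulSet pod template would look like this (a sketch; imagePullPolicy: Always forces a pull from harbor whenever a pod is rebuilt, which is what lets us simulate a registry outage below):

      containers:
      - name: redis
        image: harbor.ik8s.cc/redis-cluster/redis:4.0.14
        imagePullPolicy: Always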
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl apply -f redis.yaml
service/redis unchanged
service/redis-access unchanged
statefulset.apps/redis configured
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
Verify all pods are Running, then verify the cluster state and the cluster relationships. This amounts to a rolling update of the redis cluster; the cluster relationships survive because the cluster configuration (nodes.conf) is kept on the remote storage.
2.6.4 Stop the local harbor, delete a redis master pod, and see whether its slave is promoted to master
Stop the harbor service.
Unlike before, redis-0 is now the slave and redis-3 the master. The screenshots also show that after a redis cluster pod is rebuilt on k8s (and its IP address changes), the cluster relationships stay intact: each master/slave pair only ever swaps roles between its own two pods, and that is exactly what gives us high availability.
root@harbor:~# systemctl stop harbor
Delete redis-3 and see whether redis-0 is promoted to master.
We can see that after redis-3 is deleted (the equivalent of a master going down), its slave is promoted to master.
2.6.5 Restore the harbor service and see whether redis-3, once recovered, is again the slave of redis-0
Restore the harbor service.
Verify that the redis-3 pod recovers.
Verify redis-3's master/slave relationship. After deleting redis-3 again, the pod is rebuilt normally and ends up in the Running state.
We can see that once redis-3 recovers, it automatically rejoins the cluster as the slave of redis-0.