MongoDB Installation

Selection

High-availability modes

Reference: https://cloud.tencent.com/developer/article/1026185

Reference: 全面剖析 MongoDB 高可用架构 (an in-depth analysis of MongoDB high-availability architecture)

Like Redis, MongoDB has three modes: master/slave replication, replica set (comparable to Redis Sentinel mode), and sharded cluster.

Community Edition vs Enterprise Edition

Reference: https://hovermind.com/mongodb/comparison-of-editions.html

Criteria                                           Community Edition   Enterprise Edition
Basic features (server, database engine, tools)    YES                 YES
Replication & sharding                             YES                 YES
In-memory storage engine                           NO                  YES
Auditing                                           NO                  YES
Kerberos authentication                            NO                  YES
LDAP proxy authentication and LDAP authorization   NO                  YES
Encryption at rest                                 NO                  YES
Subscription / license                             Free                $6k ~ $13k per year

More Details:

  • https://developpaper.com/comparison-table-of-mongodb-community-version-and-enterprise-version
  • https://www.percona.com/software/mongodb/feature-comparison
  • http://s3.amazonaws.com/info-mongodb-com/MongoDB_Enterprise_Advanced_Datasheet.pdf

Comparison with Atlas

  • MongoDB Atlas (also known as MongoDB Cloud) is the cloud-hosted version of MongoDB Enterprise Advanced
  • It is "MongoDB Enterprise Advanced" offered as DBaaS (Database as a Service) in the cloud
  • Atlas -> all features of "MongoDB Enterprise Advanced", plus much more

Helm charts

The Helm charts provided by MongoDB itself mostly target the Enterprise edition and the Atlas cloud offering, though there is also a community edition. Reference: https://github.com/mongodb/helm-charts and https://mongodb.github.io/helm-charts/

Helm charts maintained by Bitnami

Reference: https://artifacthub.io/packages/helm/bitnami/mongodb — it currently appears to support only the standalone and replicaset modes.

Reference: https://artifacthub.io/packages/helm/bitnami/mongodb-sharded — for the sharded cluster mode, Bitnami provides a corresponding separate chart.

For now, the charts provided by Bitnami are chosen.

MongoDB database administration

Creating a database and a user

Reference: https://cloud.tencent.com/developer/article/1545011

Creating a database

  1. Create the database
use <dbname>;

At this point the database exists, but by default it is not listed; you need to insert a document first:

db.test.insert({'test': 'test'})

Then run show dbs and the database will appear.

  2. Create a user with read/write access
db.createUser(
   {
     user: "<username>",
     pwd: "<password>",
     roles: [ "readWrite" ]
   }
);

This adds a user with the readWrite role to the current database.

Note that the user must be created in the right database: first run use <dbname> to switch to the target database, then run db.createUser.

Roles

MongoDB access control is role-based.

Built-in roles

Reference: https://www.jianshu.com/p/62736bff7e2e

Reference: https://docs.mongodb.com/manual/tutorial/enable-authentication/

Reference: http://www.runoob.com/mongodb/mongodb-window-install.html

Reference: https://www.cnblogs.com/zxtceq/p/7690977.html

Reference: https://www.mongodb.com/docs/manual/reference/built-in-roles/

Database user roles
  • read: read-only access to data
  • readWrite: read and write access to data
Database administration roles
  • dbAdmin: perform administrative operations on the current db
  • dbOwner: perform any operation on the current db
  • userAdmin: manage users on the current db
Backup and restore roles
  • backup
  • restore
Cross-database roles
  • readAnyDatabase: read access on all databases
  • readWriteAnyDatabase: read and write access on all databases
  • userAdminAnyDatabase: manage users on all databases
  • dbAdminAnyDatabase: administrative access on all databases
Cluster administration roles
  • clusterAdmin: the greatest cluster-management access
  • clusterManager: manage and monitor the cluster
  • clusterMonitor: monitor the cluster
  • hostManager: manage and monitor servers
Superuser roles
  • root: superuser access

Custom roles

Built-in roles only control what a user may do at the database level. Administrators can create custom roles to control operations at the collection level, i.e. to allow a user to perform only specific actions on specific collections within the current database.
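For illustration, a collection-level role definition is just a document passed to db.createRole in mongosh. The sketch below only builds and prints that document; the database, collection, and role names here are made up, not from this deployment:

```javascript
// Hypothetical role: read/write limited to the "orders" collection of "mydb".
// In mongosh you would run:  use mydb  then  db.createRole(roleSpec)
const roleSpec = {
  role: "ordersReadWrite",
  privileges: [
    {
      resource: { db: "mydb", collection: "orders" },
      actions: ["find", "insert", "update", "remove"]
    }
  ],
  roles: []  // no inherited roles
};
console.log(JSON.stringify(roleSpec.privileges[0].resource));
// -> {"db":"mydb","collection":"orders"}
```

An empty `collection` string would instead scope the privileges to every collection in the database, which is what the built-in roles effectively do.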

Deployment

First deploy the standalone and replicaset modes, then try the sharded cluster mode.

Standalone-mode MongoDB deployment

Reference: https://artifacthub.io/packages/helm/bitnami/mongodb#architecture

Create a dedicated namespace and make it the default namespace for the current context:

kubectl create ns mongodb
kubectl config set-context --current --namespace mongodb
kubectl get all

First download the chart and take a look:

helm fetch oci://registry-1.docker.io/bitnamicharts/mongodb --untar
cd mongodb

Copy values.yaml and edit the copy, keeping only the settings that need to be overridden:

cp values.yaml my-override-values.yaml
[root@jingmin-kube-archlinux mongodb]# vim my-override-values.yaml

The settings to override:

[root@jingmin-kube-archlinux mongodb]# cat ./my-override-values.yaml 
## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
##

## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
## @param global.namespaceOverride Override the namespace for resource deployed by the chart, but can itself be overridden by the local namespaceOverride
##
global:
  storageClass: ""

## @param clusterDomain Default Kubernetes cluster domain
##
clusterDomain: cluster.local


## @section MongoDB(®) parameters
##


## @param architecture MongoDB(®) architecture (`standalone` or `replicaset`)
##
architecture: standalone
## @param useStatefulSet Set to true to use a StatefulSet instead of a Deployment (only when `architecture=standalone`)
##
useStatefulSet: false
## MongoDB(®) Authentication parameters
##
auth:
  ## @param auth.enabled Enable authentication
  ## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
  ##
  enabled: true
  ## @param auth.rootUser MongoDB(®) root user
  ##
  rootUser: root
  ## @param auth.rootPassword MongoDB(®) root password
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#setting-the-root-user-and-password-on-first-run
  ##
  rootPassword: "Mongo12345"
  ## MongoDB(®) custom users and databases
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#creating-a-user-and-database-on-first-run
  ## @param auth.usernames List of custom users to be created during the initialization
  ## @param auth.passwords List of passwords for the custom users set at `auth.usernames`
  ## @param auth.databases List of custom databases to be created during the initialization
  ##
  usernames: ["test"]
  passwords: ["Test12345"]
  databases: ["test"]
tls:
  ## @param tls.enabled Enable MongoDB(®) TLS support between nodes in the cluster as well as between mongo clients and nodes
  ##
  enabled: false

## @param enableIPv6 Switch to enable/disable IPv6 on MongoDB(®)
## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-ipv6
##
enableIPv6: false

## @section MongoDB(®) statefulset parameters
##

## @param replicaCount Number of MongoDB(®) nodes (only when `architecture=replicaset`)
## Ignored when mongodb.architecture=standalone
##
replicaCount: 2

## @section Traffic exposure parameters
##

## Service parameters
##
service:
  ## @param service.type Kubernetes Service type (only for standalone architecture)
  ##
  type: ClusterIP
  ## @param service.portName MongoDB(®) service port name (only for standalone architecture)
  ##
  portName: mongodb
  ## @param service.ports.mongodb MongoDB(®) service port.
  ##
  ports:
    mongodb: 27017
  ## @param service.nodePorts.mongodb Port to bind to for NodePort and LoadBalancer service types (only for standalone architecture)
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePorts:
    mongodb: ""
  ## @param service.externalTrafficPolicy service external traffic policy (only for standalone architecture)
  ## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
  ##
  externalTrafficPolicy: Local

## @section Persistence parameters
##

## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  ## @param persistence.enabled Enable MongoDB(®) data persistence using PVC
  ##
  enabled: true
  ## @param persistence.storageClass PVC Storage Class for MongoDB(®) data volume
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner.
  ##
  storageClass: ""
  ## @param persistence.accessModes PV Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## @param persistence.mountPath Path to mount the volume at
  ## MongoDB(&reg;) images.
  ##
  mountPath: /bitnami/mongodb



## @section Arbiter parameters
##

arbiter:
  ## @param arbiter.enabled Enable deploying the arbiter
  ##   https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
  ##
  enabled: true

## @section Hidden Node parameters
##

hidden:
  ## @param hidden.enabled Enable deploying the hidden nodes
  ##   https://docs.mongodb.com/manual/tutorial/configure-a-hidden-replica-set-member/
  ##
  enabled: false

In fact, the only changes here are the root administrator password and a new test database with a test user and password.

Only the settings that looked important were kept, to make the key items easier to review.

Install the chart

[root@jingmin-kube-archlinux k8s]# helm install mongodb -f ./mongodb/my-override-values.yaml ./mongodb/
NAME: mongodb
LAST DEPLOYED: Sun Sep  3 15:18:59 2023
NAMESPACE: mongodb
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 13.18.1
APP VERSION: 6.0.9

** Please be patient while the chart is being deployed **

MongoDB&reg; can be accessed on the following DNS name(s) and ports from within your cluster:

    mongodb.mongodb.svc.cluster.local

To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d)

To get the password for "test" run:

    export MONGODB_PASSWORD=$(kubectl get secret --namespace mongodb mongodb -o jsonpath="{.data.mongodb-passwords}" | base64 -d | awk -F',' '{print $1}')

To connect to your database, create a MongoDB&reg; client container:

    kubectl run --namespace mongodb mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:6.0.9-debian-11-r5 --command -- bash

Then, run the following command:
    mongosh admin --host "mongodb" --authenticationDatabase admin -u $MONGODB_ROOT_USER -p $MONGODB_ROOT_PASSWORD

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace mongodb svc/mongodb 27017:27017 &
    mongosh --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

The NOTES list commands for connecting to MongoDB from inside and outside the cluster.

Confirm that the pod and the other resources are up:

[root@jingmin-kube-archlinux k8s]# kubectl get all,cm,secrets,cr,pvc
NAME                           READY   STATUS    RESTARTS   AGE
pod/mongodb-7965bcb79f-7zcfs   1/1     Running   0          4m43s

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/mongodb   ClusterIP   172.31.2.8   <none>        27017/TCP   4m43s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb   1/1     1            1           4m43s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-7965bcb79f   1         1         1       4m43s

NAME                               DATA   AGE
configmap/kube-root-ca.crt         1      7m9s
configmap/mongodb-common-scripts   3      4m43s

NAME                                   TYPE                 DATA   AGE
secret/mongodb                         Opaque               2      4m43s
secret/sh.helm.release.v1.mongodb.v1   helm.sh/release.v1   1      4m43s

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mongodb   Bound    pvc-f0e7c279-b898-4998-8017-899bc74c3078   8Gi        RWO            nfs-storage    4m43s

Confirm that the root administrator account and the test account on the test database have been created:

[root@jingmin-kube-archlinux k8s]# export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
[root@jingmin-kube-archlinux k8s]# echo $MONGODB_ROOT_PASSWORD
Mongo12345
[root@jingmin-kube-archlinux k8s]# export MONGODB_PASSWORD=$(kubectl get secret --namespace mongodb mongodb -o jsonpath="{.data.mongodb-passwords}" | base64 -d | awk -F',' '{print $1}')
[root@jingmin-kube-archlinux k8s]# echo $MONGODB_PASSWORD
Test12345
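The two export commands above only base64-decode fields of the chart's Secret (and, for auth.passwords, take the first entry of a comma-separated list). The same transforms can be demonstrated on inline sample data instead of a live Secret:

```shell
# Secrets store values base64-encoded; decoding round-trips the original value.
encoded=$(printf 'Mongo12345' | base64)
printf '%s' "$encoded" | base64 -d
# -> Mongo12345

# auth.passwords is stored comma-separated (one entry per user in auth.usernames);
# awk -F',' picks the first user's password.
printf 'Test12345,AnotherPass' | awk -F',' '{print $1}'
# -> Test12345
```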

Following the NOTES above, connect to MongoDB from within the cluster (note that $MONGODB_ROOT_USER is not set inside the client pod, which causes the first failed attempt in the transcript):

[root@jingmin-kube-archlinux k8s]# kubectl run --namespace mongodb mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:6.0.9-debian-11-r5 --command -- bash
If you don't see a command prompt, try pressing enter.
I have no name!@mongodb-client:/$ mongosh admin --host "mongodb" --authenticationDatabase admin -u $MONGODB_ROOT_USER -p $MONGODB_ROOT_PASSWORD
MongoshInvalidInputError: [COMMON-10001] Invalid connection information: Password specified but no username provided (did you mean '--port' instead of '-p'?)
I have no name!@mongodb-client:/$ mongosh admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
Current Mongosh Log ID: 64f437c8deb1aca2377ad1d1
Connecting to:      mongodb://<credentials>@mongodb:27017/admin?directConnection=true&authSource=admin&appName=mongosh+1.10.6
Using MongoDB:      6.0.9
Using Mongosh:      1.10.6

For mongosh info see: https://docs.mongodb.com/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

------
   The server generated these startup warnings when booting
   2023-09-03T07:20:32.381+00:00: You are running on a NUMA machine. We suggest launching mongod like this to avoid performance problems: numactl --interleave=all mongod [other options]
   2023-09-03T07:20:32.381+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
   2023-09-03T07:20:32.381+00:00: vm.max_map_count is too low
------

admin> show dbs
admin   100.00 KiB
config   12.00 KiB
local    72.00 KiB
admin> use admin
already on db admin

Exit the root session and log in to the test database with the test account:

admin> exit
I have no name!@mongodb-client:/$ mongosh admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
Current Mongosh Log ID: 64f4393c6fbaf2cb79630047
Connecting to:      mongodb://<credentials>@mongodb:27017/admin?directConnection=true&authSource=admin&appName=mongosh+1.10.6
Using MongoDB:      6.0.9
Using Mongosh:      1.10.6

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

------
   The server generated these startup warnings when booting
   2023-09-03T07:20:32.381+00:00: You are running on a NUMA machine. We suggest launching mongod like this to avoid performance problems: numactl --interleave=all mongod [other options]
   2023-09-03T07:20:32.381+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
   2023-09-03T07:20:32.381+00:00: vm.max_map_count is too low
------

admin> exit
I have no name!@mongodb-client:/$ mongosh admin --host "mongodb" --authenticationDatabase test -u test -p Test12345             
Current Mongosh Log ID: 64f4395d77b1eb90e4a70038
Connecting to:      mongodb://<credentials>@mongodb:27017/admin?directConnection=true&authSource=test&appName=mongosh+1.10.6
Using MongoDB:      6.0.9
Using Mongosh:      1.10.6

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

admin> show dbs

admin> function add(){var i = 0;for(;i<20;i++){db.persons.insert({"name":"wang"+i})}}
[Function: add]
admin> use test
switched to db test
test> show dbs

test> add()
DeprecationWarning: Collection.insert() is deprecated. Use insertOne, insertMany, or bulkWrite.

test> db.persons.find()
[
  { _id: ObjectId("64f439d277b1eb90e4a70039"), name: 'wang0' },
  { _id: ObjectId("64f439d277b1eb90e4a7003a"), name: 'wang1' },
  { _id: ObjectId("64f439d277b1eb90e4a7003b"), name: 'wang2' },
  { _id: ObjectId("64f439d277b1eb90e4a7003c"), name: 'wang3' },
  { _id: ObjectId("64f439d277b1eb90e4a7003d"), name: 'wang4' },
  { _id: ObjectId("64f439d277b1eb90e4a7003e"), name: 'wang5' },
  { _id: ObjectId("64f439d277b1eb90e4a7003f"), name: 'wang6' },
  { _id: ObjectId("64f439d277b1eb90e4a70040"), name: 'wang7' },
  { _id: ObjectId("64f439d277b1eb90e4a70041"), name: 'wang8' },
  { _id: ObjectId("64f439d277b1eb90e4a70042"), name: 'wang9' },
  { _id: ObjectId("64f439d277b1eb90e4a70043"), name: 'wang10' },
  { _id: ObjectId("64f439d277b1eb90e4a70044"), name: 'wang11' },
  { _id: ObjectId("64f439d277b1eb90e4a70045"), name: 'wang12' },
  { _id: ObjectId("64f439d277b1eb90e4a70046"), name: 'wang13' },
  { _id: ObjectId("64f439d277b1eb90e4a70047"), name: 'wang14' },
  { _id: ObjectId("64f439d277b1eb90e4a70048"), name: 'wang15' },
  { _id: ObjectId("64f439d277b1eb90e4a70049"), name: 'wang16' },
  { _id: ObjectId("64f439d277b1eb90e4a7004a"), name: 'wang17' },
  { _id: ObjectId("64f439d277b1eb90e4a7004b"), name: 'wang18' },
  { _id: ObjectId("64f439d277b1eb90e4a7004c"), name: 'wang19' }
]
Type "it" for more

test> show dbs
test  40.00 KiB

Only after data has been inserted does the test database actually become visible.

In summary, in-cluster access to MongoDB works.
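As the DeprecationWarning in the transcript notes, Collection.insert() is deprecated. The same seed data can be built once and written with insertMany(); the snippet below only constructs the documents (the insertMany call itself needs a live mongosh session):

```javascript
// Build the same 20 documents that the add() helper inserted one by one.
const docs = [];
for (let i = 0; i < 20; i++) {
  docs.push({ name: "wang" + i });
}
// In mongosh:  db.persons.insertMany(docs)  — one round trip, not deprecated
console.log(docs.length, docs[0].name, docs[19].name);
// -> 20 wang0 wang19
```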

Try connecting from the local machine: enable port forwarding on the cluster (for debugging only):

kubectl port-forward --namespace mongodb svc/mongodb 27017:27017

Connecting with Navicat on the local machine, both the root and the test accounts work.

In summary, access to MongoDB from outside the cluster also works.

Replicaset-mode MongoDB deployment

Reference: https://artifacthub.io/packages/helm/bitnami/mongodb#architecture

Create a dedicated namespace and make it the default namespace for the current context:

kubectl create ns mongodb-replicaset
kubectl config set-context --current --namespace mongodb-replicaset
kubectl get all

First download the chart (it is the same chart as in the previous section, so this step can be skipped):

helm fetch oci://registry-1.docker.io/bitnamicharts/mongodb --untar
cd mongodb

Here, copy the chart directory and modify the settings that need overriding:

cp -rdp ./mongodb ./mongodb-replicaset
cd ./mongodb-replicaset

Copy values.yaml and edit the copy, keeping only the settings that need to be overridden:

mv my-override-values.yaml my-override-values.yaml_bak
cp values.yaml my-override-values.yaml
vim my-override-values.yaml

The settings to override:

[root@jingmin-kube-archlinux mongodb-replicaset]# cat my-override-values.yaml
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
## @param global.namespaceOverride Override the namespace for resource deployed by the chart, but can itself be overridden by the local namespaceOverride
##
global:
  storageClass: ""

## @section Common parameters
##

## @param clusterDomain Default Kubernetes cluster domain
##
clusterDomain: cluster.local


## @section MongoDB(&reg;) parameters
##

## @param architecture MongoDB(&reg;) architecture (`standalone` or `replicaset`)
##
architecture: replicaset
## MongoDB(&reg;) Authentication parameters
##
auth:
  ## @param auth.enabled Enable authentication
  ## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
  ##
  enabled: true
  ## @param auth.rootUser MongoDB(&reg;) root user
  ##
  rootUser: root
  ## @param auth.rootPassword MongoDB(&reg;) root password
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#setting-the-root-user-and-password-on-first-run
  ##
  rootPassword: "Mongo12345"
  ## MongoDB(&reg;) custom users and databases
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#creating-a-user-and-database-on-first-run
  ## @param auth.usernames List of custom users to be created during the initialization
  ## @param auth.passwords List of passwords for the custom users set at `auth.usernames`
  ## @param auth.databases List of custom databases to be created during the initialization
  ##
  usernames: ["test"]
  passwords: ["Test12345"]
  databases: ["test"]
  ## @param auth.replicaSetKey Key used for authentication in the replicaset (only when `architecture=replicaset`)
  ##
  replicaSetKey: ""
tls:
  ## @param tls.enabled Enable MongoDB(&reg;) TLS support between nodes in the cluster as well as between mongo clients and nodes
  ##
  enabled: false

## @param replicaSetName Name of the replica set (only when `architecture=replicaset`)
## Ignored when mongodb.architecture=standalone
##
replicaSetName: rs0
## @param enableIPv6 Switch to enable/disable IPv6 on MongoDB(&reg;)
## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-ipv6
##
enableIPv6: false
## @param directoryPerDB Switch to enable/disable DirectoryPerDB on MongoDB(&reg;)
## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-directoryperdb
##
directoryPerDB: false

## @section MongoDB(&reg;) statefulset parameters
##

## @param replicaCount Number of MongoDB(&reg;) nodes (only when `architecture=replicaset`)
## Ignored when mongodb.architecture=standalone
##
replicaCount: 2
## @param updateStrategy.type Strategy to use to replace existing MongoDB(&reg;) pods. When architecture=standalone and useStatefulSet=false,
## this parameter will be applied on a deployment object. In other case it will be applied on a statefulset object
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
## Example:
## updateStrategy:
##  type: RollingUpdate
##  rollingUpdate:
##    maxSurge: 25%
##    maxUnavailable: 25%
##
## @param containerPorts.mongodb MongoDB(&reg;) container port
##
containerPorts:
  mongodb: 27017

## @section Traffic exposure parameters
##

## Service parameters
##
service:
  ## @param service.type Kubernetes Service type (only for standalone architecture)
  ##
  type: ClusterIP
  ## @param service.portName MongoDB(&reg;) service port name (only for standalone architecture)
  ##
  portName: mongodb
  ## @param service.ports.mongodb MongoDB(&reg;) service port.
  ##
  ports:
    mongodb: 27017
  ## @param service.nodePorts.mongodb Port to bind to for NodePort and LoadBalancer service types (only for standalone architecture)
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePorts:
    mongodb: ""

## @section Persistence parameters
##

## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  ## @param persistence.enabled Enable MongoDB(&reg;) data persistence using PVC
  ##
  enabled: true
  ## @param persistence.storageClass PVC Storage Class for MongoDB(&reg;) data volume
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner.
  ##
  storageClass: ""
  ## @param persistence.mountPath Path to mount the volume at
  ## MongoDB(&reg;) images.
  ##
  mountPath: /bitnami/mongodb
  ## @param persistence.subPath Subdirectory of the volume to mount at
  ## and one PV for multiple services.
  ##
  subPath: ""



## @section Arbiter parameters
##

arbiter:
  ## @param arbiter.enabled Enable deploying the arbiter
  ##   https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
  ##
  enabled: true

## @section Hidden Node parameters
##

hidden:
  ## @param hidden.enabled Enable deploying the hidden nodes
  ##   https://docs.mongodb.com/manual/tutorial/configure-a-hidden-replica-set-member/
  ##
  enabled: false

Here architecture was changed to replicaset.

The root administrator password was changed, and a test database with a test user and password was created.

Only the settings that looked important were kept, to make the key items easier to review.

Install the chart

[root@jingmin-kube-archlinux mongodb-replicaset]# cd ..
[root@jingmin-kube-archlinux k8s]# helm install mongodb-replicaset -f ./mongodb-replicaset/my-override-values.yaml ./mongodb-replicaset/
NAME: mongodb-replicaset
LAST DEPLOYED: Sun Sep  3 18:14:31 2023
NAMESPACE: mongodb-replicaset
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 13.18.1
APP VERSION: 6.0.9

** Please be patient while the chart is being deployed **

MongoDB&reg; can be accessed on the following DNS name(s) and ports from within your cluster:

    mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017
    mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017

To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb-replicaset mongodb-replicaset -o jsonpath="{.data.mongodb-root-password}" | base64 -d)

To get the password for "test" run:

    export MONGODB_PASSWORD=$(kubectl get secret --namespace mongodb-replicaset mongodb-replicaset -o jsonpath="{.data.mongodb-passwords}" | base64 -d | awk -F',' '{print $1}')

To connect to your database, create a MongoDB&reg; client container:

    kubectl run --namespace mongodb-replicaset mongodb-replicaset-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:6.0.9-debian-11-r5 --command -- bash

Then, run the following command:
    mongosh admin --host "mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017" --authenticationDatabase admin -u $MONGODB_ROOT_USER -p $MONGODB_ROOT_PASSWORD

The NOTES show how to access the MongoDB replica set from within the cluster.
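An application would typically use a single replica-set connection URI built from the member addresses in the NOTES above; replicaSet=rs0 matches the replicaSetName override, and the credentials are the test user created by the chart. The snippet below is only string assembly, not a live connection:

```shell
# Assemble a replica-set connection URI from the two headless-service members.
H0="mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017"
H1="mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017"
echo "mongodb://test:Test12345@$H0,$H1/test?authSource=test&replicaSet=rs0"
```

With replicaSet in the URI, the driver discovers the current primary itself, so a client keeps working across failovers.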

First check whether the pods and other resources are healthy.

[root@jingmin-kube-archlinux k8s]# kubectl get all,cm,secrets,cr,pvc
NAME                               READY   STATUS             RESTARTS        AGE
pod/mongodb-replicaset-0           1/1     Running            0               14m
pod/mongodb-replicaset-1           0/1     CrashLoopBackOff   7 (2m57s ago)   14m
pod/mongodb-replicaset-arbiter-0   1/1     Running            0               14m

NAME                                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/mongodb-replicaset-arbiter-headless   ClusterIP   None         <none>        27017/TCP   14m
service/mongodb-replicaset-headless           ClusterIP   None         <none>        27017/TCP   14m

NAME                                          READY   AGE
statefulset.apps/mongodb-replicaset           1/2     14m
statefulset.apps/mongodb-replicaset-arbiter   1/1     14m

NAME                                          DATA   AGE
configmap/kube-root-ca.crt                    1      42m
configmap/mongodb-replicaset-common-scripts   3      14m
configmap/mongodb-replicaset-scripts          2      14m

NAME                                              TYPE                 DATA   AGE
secret/mongodb-replicaset                         Opaque               3      14m
secret/sh.helm.release.v1.mongodb-replicaset.v1   helm.sh/release.v1   1      14m

NAME                                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/datadir-mongodb-replicaset-0   Bound    pvc-9d14a18a-be3e-485f-8add-e4bb1580471e   8Gi        RWO            nfs-storage    14m
persistentvolumeclaim/datadir-mongodb-replicaset-1   Bound    pvc-a07d2c31-ae36-4188-8513-872da5087285   8Gi        RWO            nfs-storage    14m

One pod keeps failing to start.

Pod logs:

[root@jingmin-kube-archlinux k8s]# kubectl logs pods/mongodb-replicaset-1
mongodb 10:26:24.15 INFO  ==> Advertised Hostname: mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local
mongodb 10:26:24.15 INFO  ==> Advertised Port: 27017
mongodb 10:26:24.15 INFO  ==> Data dir empty, checking if the replica set already exists
mongodb 10:26:25.05 INFO  ==> Detected existing primary: mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017
mongodb 10:26:25.05 INFO  ==> Current primary is different from this node. Configuring the node as replica of mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017
mongodb 10:26:25.06 
mongodb 10:26:25.06 Welcome to the Bitnami mongodb container
mongodb 10:26:25.06 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 10:26:25.07 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 10:26:25.07 
mongodb 10:26:25.07 INFO  ==> ** Starting MongoDB setup **
mongodb 10:26:25.08 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 10:26:25.38 INFO  ==> Initializing MongoDB...
mongodb 10:26:25.40 INFO  ==> Writing keyfile for replica set authentication...
mongodb 10:26:25.41 INFO  ==> Deploying MongoDB from scratch...
/opt/bitnami/scripts/libos.sh: line 346:    69 Illegal instruction     (core dumped) "$@" > /dev/null 2>&1

A web search turns up this issue, which mentions that MongoDB 5.0+ requires a CPU with the AVX instruction set: https://github.com/bitnami/charts/issues/12834 . The official production notes say the same: https://www.mongodb.com/docs/manual/administration/production-notes/#platform-support-notes . For a list of CPUs that support AVX, see: https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX

[root@jingmin-kube-archlinux k8s]# kubectl get pods -o wide
NAME                           READY   STATUS             RESTARTS       AGE   IP             NODE                     NOMINATED NODE   READINESS GATES
mongodb-replicaset-0           1/1     Running            0              18m   172.30.1.177   jingmin-kube-archlinux   <none>           <none>
mongodb-replicaset-1           0/1     CrashLoopBackOff   8 (111s ago)   18m   172.30.0.78    jingmin-kube-master1     <none>           <none>
mongodb-replicaset-arbiter-0   1/1     Running            0              18m   172.30.1.176   jingmin-kube-archlinux   <none>           <none>

The pod on node jingmin-kube-master1 is the one failing.

Confirm that node jingmin-kube-archlinux supports the AVX instructions:

[root@jingmin-kube-archlinux k8s]# grep avx /proc/cpuinfo
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
...

Node jingmin-kube-master1 does not support AVX:

[root@jingmin-kube-master1 ~]# grep avx /proc/cpuinfo
[root@jingmin-kube-master1 ~]# 
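The two grep checks above can be generalized into a small helper that answers the AVX question for any cpuinfo flags line. The flags strings below are shortened samples for demonstration, not real cpuinfo output:

```shell
# Prints "yes" if the standalone flag "avx" appears in a space-separated
# flags string (note: "avx2" alone does not imply plain AVX here).
check_avx() {
  case " $1 " in
    *" avx "*) echo "yes" ;;
    *)         echo "no"  ;;
  esac
}

check_avx "fpu vme sse sse2 avx avx2"   # -> yes
check_avx "fpu vme sse sse2"            # -> no
# On a real node: check_avx "$(grep -m1 '^flags' /proc/cpuinfo)"
```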

That explains it: jingmin-kube-master1 is a soft-router host whose CPU is an Intel N5105, not a mainstream server CPU.

The pods need a nodeSelector or node-affinity adjustment.

Reference: https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/
Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/
Reference: https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/labels/
Reference: https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity

[root@jingmin-kube-master1 ~]# kubectl get nodes --show-labels
NAME                     STATUS   ROLES           AGE   VERSION   LABELS
jingmin-kube-archlinux   Ready    <none>          13d   v1.27.4   beta.kubernetes.io/arch=amd64,...
jingmin-kube-master1     Ready    control-plane   13d   v1.27.4   beta.kubernetes.io/arch=amd64,...
[root@jingmin-kube-master1 ~]# kubectl label nodes jingmin-kube-master1 avx=false
node/jingmin-kube-master1 labeled
[root@jingmin-kube-master1 ~]# kubectl get nodes --show-labels
NAME                     STATUS   ROLES           AGE   VERSION   LABELS
jingmin-kube-archlinux   Ready    <none>          13d   v1.27.4   beta.kubernetes.io/arch=amd64,...
jingmin-kube-master1     Ready    control-plane   13d   v1.27.4   avx=false,beta.kubernetes.io/arch=amd64,...

This adds an avx=false label to the node that lacks AVX support.
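With the label in place, the chart's affinity value (which is passed through to the pod spec) can keep MongoDB pods off that node. A minimal sketch of the override fragment, assuming the avx=false label from the step above (for node affinity, NotIn also matches nodes that do not carry the avx key at all):

```yaml
## Sketch only: require nodes whose avx label is not "false".
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: avx
              operator: NotIn
              values: ["false"]
```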

Next, adjust the Helm override file to add the node-affinity settings:

[root@jingmin-kube-archlinux k8s]# vim ./mongodb-replicaset/values.yaml 
[root@jingmin-kube-archlinux k8s]# vim ./mongodb-replicaset/my-override-values.yaml
[root@jingmin-kube-archlinux k8s]# cat ./mongodb-replicaset/my-override-values.yaml
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
## @param global.namespaceOverride Override the namespace for resource deployed by the chart, but can itself be overridden by the local namespaceOverride
##
global:
  storageClass: ""

## @section Common parameters
##

## @param clusterDomain Default Kubernetes cluster domain
##
clusterDomain: cluster.local


## @section MongoDB(&reg;) parameters
##

## @param architecture MongoDB(&reg;) architecture (`standalone` or `replicaset`)
##
architecture: replicaset
## MongoDB(&reg;) Authentication parameters
##
auth:
  ## @param auth.enabled Enable authentication
  ## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
  ##
  enabled: true
  ## @param auth.rootUser MongoDB(&reg;) root user
  ##
  rootUser: root
  ## @param auth.rootPassword MongoDB(&reg;) root password
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#setting-the-root-user-and-password-on-first-run
  ##
  rootPassword: "Mongo12345"
  ## MongoDB(&reg;) custom users and databases
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#creating-a-user-and-database-on-first-run
  ## @param auth.usernames List of custom users to be created during the initialization
  ## @param auth.passwords List of passwords for the custom users set at `auth.usernames`
  ## @param auth.databases List of custom databases to be created during the initialization
  ##
  usernames: ["test"]
  passwords: ["Test12345"]
  databases: ["test"]
  ## @param auth.replicaSetKey Key used for authentication in the replicaset (only when `architecture=replicaset`)
  ##
  replicaSetKey: ""
tls:
  ## @param tls.enabled Enable MongoDB(&reg;) TLS support between nodes in the cluster as well as between mongo clients and nodes
  ##
  enabled: false

## @param replicaSetName Name of the replica set (only when `architecture=replicaset`)
## Ignored when mongodb.architecture=standalone
##
replicaSetName: rs0
## @param enableIPv6 Switch to enable/disable IPv6 on MongoDB(&reg;)
## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-ipv6
##
enableIPv6: false
## @param directoryPerDB Switch to enable/disable DirectoryPerDB on MongoDB(&reg;)
## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-directoryperdb
##
directoryPerDB: false

## @section MongoDB(&reg;) statefulset parameters
##

## @param replicaCount Number of MongoDB(&reg;) nodes (only when `architecture=replicaset`)
## Ignored when mongodb.architecture=standalone
##
replicaCount: 2
## @param affinity MongoDB(&reg;) Affinity for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: avx
          operator: NotIn
          values:
          - "false"
## @param updateStrategy.type Strategy to use to replace existing MongoDB(&reg;) pods. When architecture=standalone and useStatefulSet=false,
## this parameter will be applied on a deployment object. In other case it will be applied on a statefulset object
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
## Example:
## updateStrategy:
##  type: RollingUpdate
##  rollingUpdate:
##    maxSurge: 25%
##    maxUnavailable: 25%
##
## @param containerPorts.mongodb MongoDB(&reg;) container port
##
containerPorts:
  mongodb: 27017

## @section Traffic exposure parameters
##

## Service parameters
##
service:
  ## @param service.type Kubernetes Service type (only for standalone architecture)
  ##
  type: ClusterIP
  ## @param service.portName MongoDB(&reg;) service port name (only for standalone architecture)
  ##
  portName: mongodb
  ## @param service.ports.mongodb MongoDB(&reg;) service port.
  ##
  ports:
    mongodb: 27017
  ## @param service.nodePorts.mongodb Port to bind to for NodePort and LoadBalancer service types (only for standalone architecture)
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePorts:
    mongodb: ""

## @section Persistence parameters
##

## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  ## @param persistence.enabled Enable MongoDB(&reg;) data persistence using PVC
  ##
  enabled: true
  ## @param persistence.storageClass PVC Storage Class for MongoDB(&reg;) data volume
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner.
  ##
  storageClass: ""
  ## @param persistence.mountPath Path to mount the volume at
  ## MongoDB(&reg;) images.
  ##
  mountPath: /bitnami/mongodb
  ## @param persistence.subPath Subdirectory of the volume to mount at
  ## and one PV for multiple services.
  ##
  subPath: ""



## @section Arbiter parameters
##

arbiter:
  ## @param arbiter.enabled Enable deploying the arbiter
  ##   https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
  ##
  enabled: true

## @section Hidden Node parameters
##

hidden:
  ## @param hidden.enabled Enable deploying the hidden nodes
  ##   https://docs.mongodb.com/manual/tutorial/configure-a-hidden-replica-set-member/
  ##
  enabled: false

注意,上面的配置中新增了如下几行:

## @param affinity MongoDB(&reg;) Affinity for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: avx
          operator: NotIn
          values:
          - "false"

然后更新下helm安装好的release

[root@jingmin-kube-archlinux k8s]# helm upgrade mongodb-replicaset -f ./mongodb-replicaset/my-override-values.yaml ./mongodb-replicaset/
Release "mongodb-replicaset" has been upgraded. Happy Helming!
NAME: mongodb-replicaset
LAST DEPLOYED: Sun Sep  3 19:26:02 2023
NAMESPACE: mongodb-replicaset
STATUS: deployed
REVISION: 6
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 13.18.1
APP VERSION: 6.0.9

** Please be patient while the chart is being deployed **

MongoDB&reg; can be accessed on the following DNS name(s) and ports from within your cluster:

    mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017
    mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017

To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb-replicaset mongodb-replicaset -o jsonpath="{.data.mongodb-root-password}" | base64 -d)

To get the password for "test" run:

    export MONGODB_PASSWORD=$(kubectl get secret --namespace mongodb-replicaset mongodb-replicaset -o jsonpath="{.data.mongodb-passwords}" | base64 -d | awk -F',' '{print $1}')

To connect to your database, create a MongoDB&reg; client container:

    kubectl run --namespace mongodb-replicaset mongodb-replicaset-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:6.0.9-debian-11-r5 --command -- bash

Then, run the following command:
    mongosh admin --host "mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017" --authenticationDatabase admin -u $MONGODB_ROOT_USER -p $MONGODB_ROOT_PASSWORD

更新成功了。(因为之前有几次因语法错误导致升级失败,所以这里显示REVISION: 6)
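helm每次upgrade都会产生一个新的REVISION,可以用helm history查看历史、helm rollback回滚。下面的示例用一段模拟的history输出演示如何取最新的REVISION号;实际环境执行注释中的命令即可:

```shell
# 实际集群中(示意):
#   helm history mongodb-replicaset      # 查看各个 REVISION
#   helm rollback mongodb-replicaset 5   # 回滚到指定 REVISION
# 本地模拟 history 输出,用 awk 取最后一行的 REVISION 号:
printf '%s\n' \
  'REVISION UPDATED STATUS CHART APP_VERSION DESCRIPTION' \
  '5 ... superseded mongodb-13.18.1 6.0.9 Upgrade_complete' \
  '6 ... deployed mongodb-13.18.1 6.0.9 Upgrade_complete' \
  | awk 'NR>1 {rev=$1} END {print rev}'
# 输出: 6
```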

看下有没有起来

[root@jingmin-kube-archlinux k8s]# kubectl get all -o wide
NAME                               READY   STATUS             RESTARTS        AGE   IP             NODE                     NOMINATED NODE   READINESS GATES
pod/mongodb-replicaset-0           1/1     Running            0               77m   172.30.1.177   jingmin-kube-archlinux   <none>           <none>
pod/mongodb-replicaset-1           0/1     CrashLoopBackOff   19 (4m5s ago)   77m   172.30.0.78    jingmin-kube-master1     <none>           <none>
pod/mongodb-replicaset-arbiter-0   1/1     Running            0               77m   172.30.1.176   jingmin-kube-archlinux   <none>           <none>

NAME                                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE   SELECTOR
service/mongodb-replicaset-arbiter-headless   ClusterIP   None         <none>        27017/TCP   77m   app.kubernetes.io/component=arbiter,app.kubernetes.io/instance=mongodb-replicaset,app.kubernetes.io/name=mongodb
service/mongodb-replicaset-headless           ClusterIP   None         <none>        27017/TCP   77m   app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb-replicaset,app.kubernetes.io/name=mongodb

NAME                                          READY   AGE   CONTAINERS        IMAGES
statefulset.apps/mongodb-replicaset           1/2     77m   mongodb           docker.io/bitnami/mongodb:6.0.9-debian-11-r5
statefulset.apps/mongodb-replicaset-arbiter   1/1     77m   mongodb-arbiter   docker.io/bitnami/mongodb:6.0.9-debian-11-r5

pod/mongodb-replicaset-1暂时还没起来,仍然在jingmin-kube-master1这个节点上。

直接删下pod,让k8s重建pod重新调度。

[root@jingmin-kube-archlinux k8s]# kubectl delete pod mongodb-replicaset-1
pod "mongodb-replicaset-1" deleted
[root@jingmin-kube-archlinux k8s]# kubectl get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/mongodb-replicaset-0           1/1     Running   0          16s
pod/mongodb-replicaset-1           1/1     Running   0          44s
pod/mongodb-replicaset-arbiter-0   1/1     Running   0          79m

NAME                                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/mongodb-replicaset-arbiter-headless   ClusterIP   None         <none>        27017/TCP   79m
service/mongodb-replicaset-headless           ClusterIP   None         <none>        27017/TCP   79m

NAME                                          READY   AGE
statefulset.apps/mongodb-replicaset           2/2     79m
statefulset.apps/mongodb-replicaset-arbiter   1/1     79m

看了下,已经都正常了。(这里可能需要一点时间,可多执行几次该命令刷新状态)
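还可以确认pod spec里确实带上了nodeAffinity。下面的示例模拟了 kubectl get pod -o jsonpath='{.spec.affinity.nodeAffinity}' 的输出并做简单检查;实际环境直接执行注释里的命令:

```shell
# 实际集群中(示意):
#   kubectl get pod mongodb-replicaset-0 -o jsonpath='{.spec.affinity.nodeAffinity}'
# 本地模拟该 jsonpath 的输出,检查其中是否包含 avx 这个 key:
affinity='{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"avx","operator":"NotIn","values":["false"]}]}]}}'
case "$affinity" in
  *'"key":"avx"'*) echo "nodeAffinity已包含avx规则" ;;
  *)               echo "未找到avx规则" ;;
esac
# 输出: nodeAffinity已包含avx规则
```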

看下密码是不是自己前面设置的

[root@jingmin-kube-archlinux k8s]# export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb-replicaset mongodb-replicaset -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
[root@jingmin-kube-archlinux k8s]# echo $MONGODB_ROOT_PASSWORD
Mongo12345
[root@jingmin-kube-archlinux k8s]# export MONGODB_PASSWORD=$(kubectl get secret --namespace mongodb-replicaset mongodb-replicaset -o jsonpath="{.data.mongodb-passwords}" | base64 -d | awk -F',' '{print $1}')
[root@jingmin-kube-archlinux k8s]# echo $MONGODB_PASSWORD
Test12345

密码没问题。
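顺便说明一下上面两条命令的原理:secret里的值是base64编码的,而多个自定义用户的密码以逗号分隔存放在mongodb-passwords这一个key里,所以解码后还要用awk按逗号取第一个字段。本地可以这样模拟验证这条解码管道:

```shell
# 模拟 secret 中存放的值(实际值来自 kubectl get secret ... -o jsonpath)
encoded=$(printf 'Test12345' | base64)
# 与上文命令相同的解码管道:base64 解码后按逗号取第一个密码
decoded=$(printf '%s' "$encoded" | base64 -d | awk -F',' '{print $1}')
echo "$decoded"
# 输出: Test12345
```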

试下集群内连接mongodb

[root@jingmin-kube-archlinux k8s]# kubectl run --namespace mongodb-replicaset mongodb-replicaset-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:6.0.9-debian-11-r5 --command -- bash
If you don't see a command prompt, try pressing enter.
I have no name!@mongodb-replicaset-client:/$ mongosh admin --host "mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
Current Mongosh Log ID: 64f4722e15de22acdca72615
Connecting to:      mongodb://<credentials>@mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017/admin?authSource=admin&appName=mongosh+1.10.6
Using MongoDB:      6.0.9
Using Mongosh:      1.10.6

For mongosh info see: https://docs.mongodb.com/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

------
   The server generated these startup warnings when booting
   2023-09-03T11:33:39.306+00:00: You are running on a NUMA machine. We suggest launching mongod like this to avoid performance problems: numactl --interleave=all mongod [other options]
   2023-09-03T11:33:39.306+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
   2023-09-03T11:33:39.306+00:00: vm.max_map_count is too low
------

rs0 [primary] admin> show dbs
admin   172.00 KiB
config  216.00 KiB
local   460.00 KiB
rs0 [primary] admin> show users
[
  {
    _id: 'admin.root',
    userId: new UUID("6c9801f0-6317-4bb1-b8f4-0795ce8abfe8"),
    user: 'root',
    db: 'admin',
    roles: [ { role: 'root', db: 'admin' } ],
    mechanisms: [ 'SCRAM-SHA-1', 'SCRAM-SHA-256' ]
  }
]

rs0 [primary] admin> exit
I have no name!@mongodb-replicaset-client:/$ mongosh admin --host "mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017" --authenticationDatabase admin -u test -p Test12345              
Current Mongosh Log ID: 64f472823d16b77f0d9be239
Connecting to:      mongodb://<credentials>@mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017/admin?authSource=admin&appName=mongosh+1.10.6
MongoServerError: Authentication failed.
I have no name!@mongodb-replicaset-client:/$ mongosh admin --host "mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017" --authenticationDatabase test -u test -p Test12345
Current Mongosh Log ID: 64f4728d1831398eb218961c
Connecting to:      mongodb://<credentials>@mongodb-replicaset-0.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local:27017/admin?authSource=test&appName=mongosh+1.10.6
Using MongoDB:      6.0.9
Using Mongosh:      1.10.6

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

rs0 [primary] admin> show dbs

rs0 [primary] admin> db.persons.insertOne({userId: "001", name: "wang1"})
MongoServerError: not authorized on admin to execute command { insert: "persons", documents: [ { userId: "001", name: "wang1", _id: ObjectId('64f472fd1831398eb218961d') } ], ordered: true, lsid: { id: UUID("ebd17071-528d-4851-a54e-e9f860fd23dd") }, txnNumber: 1, $clusterTime: { clusterTime: Timestamp(1693741740, 1), signature: { hash: BinData(0, 1CB8C685C68B556C5AB86BD435ED53FE3E53B4C7), keyId: 7274541101720010758 } }, $db: "admin" }
rs0 [primary] admin> use test
switched to db test
rs0 [primary] test> db.persons.insertOne({userId: "001", name: "wang1"})
{
  acknowledged: true,
  insertedId: ObjectId("64f473131831398eb218961e")
}
rs0 [primary] test> db.persons.find()
[
  {
    _id: ObjectId("64f473131831398eb218961e"),
    userId: '001',
    name: 'wang1'
  }
]
rs0 [primary] test> db.persons.insertOne({userId: "001", name: "wang2", "gender": 1})
{
  acknowledged: true,
  insertedId: ObjectId("64f473481831398eb218961f")
}
rs0 [primary] test> db.persons.find()
[
  {
    _id: ObjectId("64f473131831398eb218961e"),
    userId: '001',
    name: 'wang1'
  },
  {
    _id: ObjectId("64f473481831398eb218961f"),
    userId: '001',
    name: 'wang2',
    gender: 1
  }
]
rs0 [primary] test> exit
I have no name!@mongodb-replicaset-client:/$ exit
exit
pod "mongodb-replicaset-client" deleted

集群内使用没有问题。
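应用内通常不用逐个节点手动连接,而是使用一条带replicaSet参数的标准MongoDB URI,driver会自动发现primary。下面按statefulset的命名规则拼出这条连接串(主机名、用户名、密码均取自上文部署,仅作示意):

```shell
# headless service 的域名(来自上文 helm 部署输出的 NOTES)
svc="mongodb-replicaset-headless.mongodb-replicaset.svc.cluster.local"
hosts=""
for i in 0 1; do   # replicaCount: 2,两个数据节点
  hosts="${hosts:+$hosts,}mongodb-replicaset-$i.$svc:27017"
done
# test 用户建在 test 库,所以 authSource=test;副本集名 rs0 来自 values 配置
echo "mongodb://test:Test12345@$hosts/test?replicaSet=rs0&authSource=test"
```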

在本地连接k8s,开启端口转发(仅用于调试)

PS C:\WINDOWS\system32> kubectl port-forward mongodb-replicaset-0 27017:27017
Forwarding from 127.0.0.1:27017 -> 27017
Forwarding from [::1]:27017 -> 27017
Handling connection for 27017
Handling connection for 27017
Handling connection for 27017

然后本地用navicat连接 localhost:27017, 读写正常。

PS C:\WINDOWS\system32> kubectl port-forward mongodb-replicaset-1 27017:27017
Forwarding from 127.0.0.1:27017 -> 27017
Forwarding from [::1]:27017 -> 27017
Handling connection for 27017
Handling connection for 27017
Handling connection for 27017
Handling connection for 27017
Handling connection for 27017

然后本地用navicat连接 localhost:27017, 读正常,写异常。

db.persons.insertOne({userId:"004", name: "wang4", gender:1})
> [Error] not master

因为当前连接节点不是master节点。

db.isMaster().ismaster

结果是false

综上,集群外访问时,要选定primary(master)节点连接。(由于故障转移的原因,primary可能会变化)
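可以在连接前先判断节点是否为primary。新版本中isMaster已被hello命令取代,mongosh里对应db.hello().isWritablePrimary。下面模拟对 mongosh --quiet --eval 输出的判断逻辑(连接集群的部分仅作示意):

```shell
# 实际可这样取值(示意):
#   is_primary=$(mongosh --quiet --host <节点>:27017 -u root -p "$MONGODB_ROOT_PASSWORD" \
#     --authenticationDatabase admin --eval 'db.hello().isWritablePrimary')
is_primary='false'   # 这里假设拿到的输出是 false
if [ "$is_primary" = "true" ]; then
  echo "primary,可读写"
else
  echo "secondary,只读"
fi
# 输出: secondary,只读
```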

cluster模式mongo部署安装

参考: https://artifacthub.io/packages/helm/bitnami/mongodb-sharded

参考: https://artifacthub.io/packages/helm/bitnami/mongodb-sharded#sharding

This chart deploys a sharded cluster by default. Some characteristics of this chart are:(此chart默认部署一个分片集群,其特点如下:)

  • It allows HA by enabling replication on the shards and the config servers. The mongos instances can be scaled horizontally as well.(通过在分片和配置服务器上启用复制来实现高可用,mongos实例也可以水平扩展。)
  • The number of secondary and arbiter nodes can be scaled out independently.(secondary节点和arbiter(仲裁)节点的数量可以独立扩展。)

单独建个命名空间,设为当前操作的默认命名空间

[root@jingmin-kube-archlinux mongodb-sharded]# kubectl create ns mongodb-sharded
namespace/mongodb-sharded created
[root@jingmin-kube-archlinux mongodb-sharded]# kubectl config set-context --current --namespace mongodb-sharded
Context "kubernetes-admin@kubernetes" modified.

老规矩,先把chart下载下来看下。

helm pull oci://registry-1.docker.io/bitnamicharts/mongodb-sharded --untar
cd mongodb-sharded/

复制一份默认的values.yaml配置,在副本中保留需要调整的内容,用于之后覆盖默认配置。

[root@jingmin-kube-archlinux mongodb-sharded]# cp ./values.yaml ./my-override-values.yaml
[root@jingmin-kube-archlinux mongodb-sharded]# vim ./my-override-values.yaml 
[root@jingmin-kube-archlinux mongodb-sharded]# cat ./my-override-values.yaml 
## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
##

## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global storage class for dynamic provisioning
##
global:
  storageClass: ""

## @section Common parameters
##



## MongoDB(&reg;) Authentication parameters
##
auth:
  ## @param auth.enabled Enable authentication
  ## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
  ##
  enabled: true
  ## @param auth.rootUser MongoDB(&reg;) root user
  ##
  rootUser: root
  ## @param auth.rootPassword MongoDB(&reg;) root password
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#setting-the-root-user-and-password-on-first-run
  ##
  rootPassword: "Mongo12345"
  ## @param auth.replicaSetKey Key used for authentication in the replicaset (only when `architecture=replicaset`)
  ##
  replicaSetKey: ""


## @param shards Number of shards to be created
## ref: https://docs.mongodb.com/manual/core/sharded-cluster-shards/
##
shards: 2
## Properties for all of the pods in the cluster (shards, config servers and mongos)
##
common:
  ## @param common.mongodbEnableIPv6 Switch to enable/disable IPv6 on MongoDB&reg;
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-ipv6
  ##
  mongodbEnableIPv6: false
  ## @param common.mongodbDirectoryPerDB Switch to enable/disable DirectoryPerDB on MongoDB&reg;
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-directoryperdb
  ##
  mongodbDirectoryPerDB: false
  ## @param common.containerPorts.mongodb MongoDB container port
  ##
  containerPorts:
    mongodb: 27017

## Kubernetes service type
## ref: https://kubernetes.io/docs/concepts/services-networking/service/
##
service:
  type: ClusterIP
  ## @param service.externalTrafficPolicy External traffic policy
  ## Enable client source IP preservation
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  ##
  externalTrafficPolicy: Cluster
  ## @param service.ports.mongodb MongoDB&reg; service port
  ##
  ports:
    mongodb: 27017
  ## @param service.nodePorts.mongodb Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePorts:
    mongodb: ""
## @section Config Server parameters
##

## Config Server replica set properties
## ref: https://docs.mongodb.com/manual/core/sharded-cluster-config-servers/
##
configsvr:
  ## @param configsvr.replicaCount Number of nodes in the replica set (the first node will be primary)
  ##
  replicaCount: 1
  ## @param configsvr.affinity Config Server Affinity for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## Note: configsvr.podAffinityPreset, configsvr.podAntiAffinityPreset, and configsvr.nodeAffinityPreset will be ignored when it's set
  ##
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: avx
            operator: NotIn
            values:
            - "false"
  ## Enable persistence using Persistent Volume Claims
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param configsvr.persistence.enabled Use a PVC to persist data
    ##
    enabled: true
    ## @param configsvr.persistence.mountPath Path to mount the volume at
    ## MongoDB&reg; images.
    ##
    mountPath: /bitnami/mongodb
    ## @param configsvr.persistence.subPath Subdirectory of the volume to mount at (evaluated as a template)
    ## Useful in dev environments and one PV for multiple services.
    ##
    subPath: ""
    ## @param configsvr.persistence.storageClass Storage class of backing PVC
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""

## @section Mongos parameters
##

## Mongos properties
## ref: https://docs.mongodb.com/manual/reference/program/mongos/#bin.mongos
##
mongos:
  ## @param mongos.replicaCount Number of replicas
  ##
  replicaCount: 1
  ## @param mongos.affinity Mongos Affinity for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## Note: mongos.podAffinityPreset, mongos.podAntiAffinityPreset, and mongos.nodeAffinityPreset will be ignored when it's set
  ##
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: avx
            operator: NotIn
            values:
            - "false"
  ## @param mongos.nodeSelector Mongos Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  ## @param mongos.useStatefulSet Use StatefulSet instead of Deployment
  ##
  useStatefulSet: false
  ## When using a statefulset, you can enable one service per replica
  ## This is useful when exposing the mongos through load balancers to make sure clients
  ## connect to the same mongos and therefore can follow their cursors
  ##
  servicePerReplica:
    ## @param mongos.servicePerReplica.enabled Create one service per mongos replica (must be used with statefulset)
    ##
    enabled: false

## @section Shard configuration: Data node parameters
##

## Shard replica set properties
## ref: https://docs.mongodb.com/manual/replication/index.html
##
shardsvr:
  ## Properties for data nodes (primary and secondary)
  ##
  dataNode:
    ## @param shardsvr.dataNode.replicaCount Number of nodes in each shard replica set (the first node will be primary)
    ##
    replicaCount: 1
    ## @param shardsvr.dataNode.affinity Data nodes Affinity for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ## You can set dataNodeLoopId (or any other parameter) by setting the below code block under this 'affinity' section:
    ## affinity:
    ##   matchLabels:
    ##     shard: "{{ .dataNodeLoopId }}"
    ##
    ## Note: shardsvr.dataNode.podAffinityPreset, shardsvr.dataNode.podAntiAffinityPreset, and shardsvr.dataNode.nodeAffinityPreset will be ignored when it's set
    ##
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: avx
              operator: NotIn
              values:
              - "false"

  ## @section Shard configuration: Persistence parameters
  ##

  ## Enable persistence using Persistent Volume Claims
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param shardsvr.persistence.enabled Use a PVC to persist data
    ##
    enabled: true
    ## @param shardsvr.persistence.mountPath The path the volume will be mounted at, useful when using different MongoDB&reg; images.
    ##
    mountPath: /bitnami/mongodb
    ## @param shardsvr.persistence.subPath Subdirectory of the volume to mount at (evaluated as a template)
    ## Useful in development environments and one PV for multiple services.
    ##
    subPath: ""
    ## @param shardsvr.persistence.storageClass Storage class of backing PVC
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""

  ## @section Shard configuration: Arbiter parameters
  ##

  ## Properties for arbiter nodes
  ## ref: https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
  ##
  arbiter:
    ## @param shardsvr.arbiter.replicaCount Number of arbiters in each shard replica set (the first node will be primary)
    ##
    replicaCount: 0
    ## @param shardsvr.arbiter.affinity Arbiter's Affinity for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ## You can set arbiterLoopId (or any other parameter) by setting the below code block under this 'affinity' section:
    ## affinity:
    ##   matchLabels:
    ##     shard: "{{ .arbiterLoopId }}"
    ##
    ## Note: shardsvr.arbiter.podAffinityPreset, shardsvr.arbiter.podAntiAffinityPreset, and shardsvr.arbiter.nodeAffinityPreset will be ignored when it's set
    ##
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: avx
              operator: NotIn
              values:
              - "false"

使用默认的storageclass

这里我设置了root密码。

然后设置了各处的节点affinity。

与前一个部署一样,我的k8s集群中有个节点是N5105的CPU,不支持avx指令,而mongodb 5.0之后会用到这个指令,这里把那个k8s节点排除掉。

前面的章节中已在不支持avx指令的那个节点上设置过标签,这里重新贴一下过程。

[root@jingmin-kube-master1 ~]# kubectl get nodes --show-labels
NAME                     STATUS   ROLES           AGE   VERSION   LABELS
jingmin-kube-archlinux   Ready    <none>          13d   v1.27.4   beta.kubernetes.io/arch=amd64,...
jingmin-kube-master1     Ready    control-plane   13d   v1.27.4   beta.kubernetes.io/arch=amd64,...
[root@jingmin-kube-master1 ~]# kubectl label nodes jingmin-kube-master1 avx=false
node/jingmin-kube-master1 labeled
[root@jingmin-kube-master1 ~]# kubectl get nodes --show-labels
NAME                     STATUS   ROLES           AGE   VERSION   LABELS
jingmin-kube-archlinux   Ready    <none>          13d   v1.27.4   beta.kubernetes.io/arch=amd64,...
jingmin-kube-master1     Ready    control-plane   13d   v1.27.4   avx=false,beta.kubernetes.io/arch=amd64,...

这里给不支持avx指令的那个节点增加了个avx=false的标签。

然后部署安装

[root@jingmin-kube-archlinux mongodb-sharded]# cd ..
[root@jingmin-kube-archlinux k8s]# helm install mongodb-sharded -f ./mongodb-sharded/my-override-values.yaml ./mongodb-sharded/
NAME: mongodb-sharded
LAST DEPLOYED: Sun Sep  3 23:37:55 2023
NAMESPACE: mongodb-sharded
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb-sharded
CHART VERSION: 6.6.2
APP VERSION: 6.0.9

** Please be patient while the chart is being deployed **

The MongoDB&reg; Sharded cluster can be accessed via the Mongos instances in port 27017 on the following DNS name from within your cluster:

    mongodb-sharded.mongodb-sharded.svc.cluster.local

To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb-sharded mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 -d)

To connect to your database run the following command:

    kubectl run --namespace mongodb-sharded mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:6.0.9-debian-11-r0 --command -- mongosh admin --host mongodb-sharded --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace mongodb-sharded svc/mongodb-sharded 27017:27017 &
    mongosh --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

这里有一些部署成功后,在集群内连接mongodb集群的说明。

看下pod等资源有没有都起来

[root@jingmin-kube-archlinux k8s]# kubectl get all,cm,secrets,cr,pvc
NAME                                          READY   STATUS    RESTARTS   AGE
pod/mongodb-sharded-configsvr-0               1/1     Running   0          2m34s
pod/mongodb-sharded-mongos-647498488f-2hqrj   1/1     Running   0          2m34s
pod/mongodb-sharded-shard0-data-0             1/1     Running   0          2m34s
pod/mongodb-sharded-shard1-data-0             1/1     Running   0          2m34s

NAME                               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/mongodb-sharded            ClusterIP   172.31.8.135   <none>        27017/TCP   2m34s
service/mongodb-sharded-headless   ClusterIP   None           <none>        27017/TCP   2m34s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-sharded-mongos   1/1     1            1           2m34s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-sharded-mongos-647498488f   1         1         1       2m34s

NAME                                           READY   AGE
statefulset.apps/mongodb-sharded-configsvr     1/1     2m34s
statefulset.apps/mongodb-sharded-shard0-data   1/1     2m34s
statefulset.apps/mongodb-sharded-shard1-data   1/1     2m34s

NAME                                              DATA   AGE
configmap/kube-root-ca.crt                        1      46m
configmap/mongodb-sharded-replicaset-entrypoint   1      2m34s

NAME                                           TYPE                 DATA   AGE
secret/mongodb-sharded                         Opaque               2      2m34s
secret/sh.helm.release.v1.mongodb-sharded.v1   helm.sh/release.v1   1      2m34s

NAME                                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/datadir-mongodb-sharded-configsvr-0     Bound    pvc-b20e88ae-5c93-489a-9015-3e0376cf0c92   8Gi        RWO            nfs-storage    2m34s
persistentvolumeclaim/datadir-mongodb-sharded-shard0-data-0   Bound    pvc-0df9293f-9585-41c3-9f87-7571e5d71347   8Gi        RWO            nfs-storage    2m34s
persistentvolumeclaim/datadir-mongodb-sharded-shard1-data-0   Bound    pvc-a3ba215f-0d13-4fa6-be96-ef964a3c49c9   8Gi        RWO            nfs-storage    2m34s

Everything is up, but with the default configuration each shard has only a single instance and no extra replicas.

Next, scale up the replica count of each shard:

[root@jingmin-kube-archlinux k8s]# kubectl scale statefulset mongodb-sharded-shard0-data --replicas 5
statefulset.apps/mongodb-sharded-shard0-data scaled
[root@jingmin-kube-archlinux k8s]# kubectl scale statefulset mongodb-sharded-shard1-data --replicas 5
statefulset.apps/mongodb-sharded-shard1-data scaled

After waiting a while, each shard reaches 5 replicas:
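Note that `kubectl scale` is transient here: the next `helm upgrade` will reconcile the StatefulSets back to whatever the chart values say. To make the count stick, it is safer to set it in an override file instead (parameter path as exposed by the bitnami chart's values.yaml):

```yaml
# my-override-values.yaml (fragment): pin 5 data nodes per shard replica set,
# so helm upgrades do not revert a manual kubectl scale.
shardsvr:
  dataNode:
    replicaCount: 5
```

This fragment is then applied with the same `helm upgrade -f` invocation used later in this section.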

[root@jingmin-kube-archlinux k8s]# kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/mongodb-sharded-configsvr-0               1/1     Running   0          15m
pod/mongodb-sharded-mongos-647498488f-2hqrj   1/1     Running   0          15m
pod/mongodb-sharded-shard0-data-0             1/1     Running   0          15m
pod/mongodb-sharded-shard0-data-1             1/1     Running   0          8m44s
pod/mongodb-sharded-shard0-data-2             1/1     Running   0          8m11s
pod/mongodb-sharded-shard0-data-3             1/1     Running   0          7m38s
pod/mongodb-sharded-shard0-data-4             1/1     Running   0          6m35s
pod/mongodb-sharded-shard1-data-0             1/1     Running   0          15m
pod/mongodb-sharded-shard1-data-1             1/1     Running   0          8m21s
pod/mongodb-sharded-shard1-data-2             1/1     Running   0          7m47s
pod/mongodb-sharded-shard1-data-3             1/1     Running   0          6m44s
pod/mongodb-sharded-shard1-data-4             1/1     Running   0          5m41s

NAME                               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/mongodb-sharded            ClusterIP   172.31.8.135   <none>        27017/TCP   15m
service/mongodb-sharded-headless   ClusterIP   None           <none>        27017/TCP   15m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-sharded-mongos   1/1     1            1           15m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-sharded-mongos-647498488f   1         1         1       15m

NAME                                           READY   AGE
statefulset.apps/mongodb-sharded-configsvr     1/1     15m
statefulset.apps/mongodb-sharded-shard0-data   5/5     15m
statefulset.apps/mongodb-sharded-shard1-data   5/5     15m

The config server and the mongos entry instances should likewise be scaled out to more replicas for high availability.

... (omitted)

Confirm that the root password is the one set earlier:

[root@jingmin-kube-archlinux k8s]# export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb-sharded mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
[root@jingmin-kube-archlinux k8s]# echo $MONGODB_ROOT_PASSWORD
Mongo12345

Looks good.
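As a quick sanity check that needs no cluster access, the Secret's base64 round-trip can be reproduced locally (password value taken from this walkthrough):

```shell
# Encode the plaintext password the way Kubernetes stores it in the Secret's data field...
encoded=$(printf '%s' 'Mongo12345' | base64)
echo "$encoded"                      # TW9uZ28xMjM0NQ==
# ...and decode it back, mirroring the `base64 -d` step in the export command above.
printf '%s' "$encoded" | base64 -d   # Mongo12345
```

Using `printf '%s'` instead of `echo` avoids encoding a trailing newline into the value.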

Try connecting to the mongodb-sharded cluster from inside the k8s cluster:

[root@jingmin-kube-archlinux k8s]# kubectl run --namespace mongodb-sharded mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:6.0.9-debian-11-r0 --command -- mongosh admin --host mongodb-sharded --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
If you don't see a command prompt, try pressing enter.
Using MongoDB:          6.0.9
Using Mongosh:          1.10.6

For mongosh info see: https://docs.mongodb.com/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

[direct: mongos] admin> show dbs
admin   172.00 KiB
config    1.81 MiB
[direct: mongos] admin> use test
switched to db test
[direct: mongos] test> show dbs
admin   172.00 KiB
config    1.81 MiB
[direct: mongos] test> db.persons.insertOne({userId: "001", name: "li1"});
{
  acknowledged: true,
  insertedId: ObjectId("64f4b01b0caefda221d3b80c")
}
[direct: mongos] test> show dbs
admin   172.00 KiB
config    1.83 MiB
test      8.00 KiB
[direct: mongos] test> db.createUser({user: "test", pwd: "Test12345", roles: ["dbOwner"]});
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1693757543, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("2cef09a8c1a110ef6278149e23fc1f163f803669", "hex"), 0),
      keyId: Long("7274624780567838745")
    }
  },
  operationTime: Timestamp({ t: 1693757543, i: 1 })
}
[direct: mongos] test> exit
pod "mongodb-sharded-client" deleted
[root@jingmin-kube-archlinux k8s]# kubectl run --namespace mongodb-sharded mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:6.0.9-debian-11-r0 --command -- mongosh admin --host mongodb-sharded --authenticationDatabase test -u test -p Test12345
If you don't see a command prompt, try pressing enter.
[direct: mongos] admin> show dbs
test  40.00 KiB
[direct: mongos] admin> db.persons.find()
MongoServerError: not authorized on admin to execute command { find: "persons", filter: {}, lsid: { id: UUID("c91c3f64-7d25-4cde-8542-e71f8f259899") }, $clusterTime: { clusterTime: Timestamp(1693757611, 2), signature: { hash: BinData(0, 57B7C6B3D2CE1B6CEBCF731241D178BFEC3C1757), keyId: 7274624780567838745 } }, $db: "admin" }
[direct: mongos] admin> use test;
switched to db test
[direct: mongos] test> db.persons.find()
[
  {
    _id: ObjectId("64f4b01b0caefda221d3b80c"),
    userId: '001',
    name: 'li1'
  }
]
[direct: mongos] test> db.persons.insertOne({userId: "002", name: "li2"});
{
  acknowledged: true,
  insertedId: ObjectId("64f4b0ee98fab1da75079da7")
}
[direct: mongos] test> db.persons.find();
[
  {
    _id: ObjectId("64f4b01b0caefda221d3b80c"),
    userId: '001',
    name: 'li1'
  },
  {
    _id: ObjectId("64f4b0ee98fab1da75079da7"),
    userId: '002',
    name: 'li2'
  }
]
[direct: mongos] test> exit
pod "mongodb-sharded-client" deleted

Here we logged in as root and created the test database and the test account.

Then we logged back in as test and verified reads and writes.

In summary, accessing the mongodb-sharded cluster from inside the k8s cluster works fine.
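One caveat: creating the test database and inserting documents through mongos does not shard anything by itself; by default test.persons lives entirely on a single primary shard. A hedged sketch of actually sharding it from the mongos session (assuming a hashed shard key on userId, run as a sufficiently privileged user such as root):

```javascript
// Run inside mongosh connected to a mongos instance.
sh.enableSharding("test")                                 // mark the database as sharding-enabled
sh.shardCollection("test.persons", { userId: "hashed" })  // hashed key spreads documents across shards
sh.status()                                               // inspect shard/chunk distribution
```

A hashed key gives even write distribution at the cost of efficient range queries on userId; a ranged shard key would be the alternative.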

Next, with kubectl configured on the local machine, temporarily open a port forward (debugging only):

[root@jingmin-kube-archlinux k8s]# kubectl port-forward svc/mongodb-sharded 27017:27017
Forwarding from 127.0.0.1:27017 -> 27017
Forwarding from [::1]:27017 -> 27017
Handling connection for 27017

Local reads and writes also worked fine.

In summary, accessing the mongodb-sharded cluster from outside the k8s cluster works as well.

By default there are 2 shards (shard0 and shard1).

To adjust the number of shards, let's try upgrading the helm release and see whether doing so resets (wipes) the data...

[root@jingmin-kube-archlinux k8s]# vim mongodb-sharded/my-override-values.yaml 
[root@jingmin-kube-archlinux k8s]# cat mongodb-sharded/my-override-values.yaml 
## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
##

## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global storage class for dynamic provisioning
##
global:
  storageClass: ""

## @section Common parameters
##



## MongoDB(&reg;) Authentication parameters
##
auth:
  ## @param auth.enabled Enable authentication
  ## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
  ##
  enabled: true
  ## @param auth.rootUser MongoDB(&reg;) root user
  ##
  rootUser: root
  ## @param auth.rootPassword MongoDB(&reg;) root password
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#setting-the-root-user-and-password-on-first-run
  ##
  rootPassword: "Mongo12345"
  ## @param auth.replicaSetKey Key used for authentication in the replicaset (only when `architecture=replicaset`)
  ##
  replicaSetKey: ""


## @param shards Number of shards to be created
## ref: https://docs.mongodb.com/manual/core/sharded-cluster-shards/
##
shards: 6
## Properties for all of the pods in the cluster (shards, config servers and mongos)
##
common:
  ## @param common.mongodbEnableIPv6 Switch to enable/disable IPv6 on MongoDB&reg;
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-ipv6
  ##
  mongodbEnableIPv6: false
  ## @param common.mongodbDirectoryPerDB Switch to enable/disable DirectoryPerDB on MongoDB&reg;
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#enablingdisabling-directoryperdb
  ##
  mongodbDirectoryPerDB: false
  ## @param common.containerPorts.mongodb MongoDB container port
  ##
  containerPorts:
    mongodb: 27017

## Kubernetes service type
## ref: https://kubernetes.io/docs/concepts/services-networking/service/
##
service:
  type: ClusterIP
  ## @param service.externalTrafficPolicy External traffic policy
  ## Enable client source IP preservation
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  ##
  externalTrafficPolicy: Cluster
  ## @param service.ports.mongodb MongoDB&reg; service port
  ##
  ports:
    mongodb: 27017
  ## @param service.nodePorts.mongodb Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePorts:
    mongodb: ""
## @section Config Server parameters
##

## Config Server replica set properties
## ref: https://docs.mongodb.com/manual/core/sharded-cluster-config-servers/
##
configsvr:
  ## @param configsvr.replicaCount Number of nodes in the replica set (the first node will be primary)
  ##
  replicaCount: 4
  ## @param configsvr.affinity Config Server Affinity for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## Note: configsvr.podAffinityPreset, configsvr.podAntiAffinityPreset, and configsvr.nodeAffinityPreset will be ignored when it's set
  ##
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: avx
            operator: NotIn
            values:
            - "false"
  ## Enable persistence using Persistent Volume Claims
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param configsvr.persistence.enabled Use a PVC to persist data
    ##
    enabled: true
    ## @param configsvr.persistence.mountPath Path to mount the volume at
    ## MongoDB&reg; images.
    ##
    mountPath: /bitnami/mongodb
    ## @param configsvr.persistence.subPath Subdirectory of the volume to mount at (evaluated as a template)
    ## Useful in dev environments and one PV for multiple services.
    ##
    subPath: ""
    ## @param configsvr.persistence.storageClass Storage class of backing PVC
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""

## @section Mongos parameters
##

## Mongos properties
## ref: https://docs.mongodb.com/manual/reference/program/mongos/#bin.mongos
##
mongos:
  ## @param mongos.replicaCount Number of replicas
  ##
  replicaCount: 4
  ## @param mongos.affinity Mongos Affinity for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## Note: mongos.podAffinityPreset, mongos.podAntiAffinityPreset, and mongos.nodeAffinityPreset will be ignored when it's set
  ##
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: avx
            operator: NotIn
            values:
            - "false"
  ## @param mongos.nodeSelector Mongos Node labels for pod assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  ## @param mongos.useStatefulSet Use StatefulSet instead of Deployment
  ##
  useStatefulSet: false
  ## When using a statefulset, you can enable one service per replica
  ## This is useful when exposing the mongos through load balancers to make sure clients
  ## connect to the same mongos and therefore can follow their cursors
  ##
  servicePerReplica:
    ## @param mongos.servicePerReplica.enabled Create one service per mongos replica (must be used with statefulset)
    ##
    enabled: false

## @section Shard configuration: Data node parameters
##

## Shard replica set properties
## ref: https://docs.mongodb.com/manual/replication/index.html
##
shardsvr:
  ## Properties for data nodes (primary and secondary)
  ##
  dataNode:
    ## @param shardsvr.dataNode.replicaCount Number of nodes in each shard replica set (the first node will be primary)
    ##
    replicaCount: 4
    ## @param shardsvr.dataNode.affinity Data nodes Affinity for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ## You can set dataNodeLoopId (or any other parameter) by setting the below code block under this 'affinity' section:
    ## affinity:
    ##   matchLabels:
    ##     shard: "{{ .dataNodeLoopId }}"
    ##
    ## Note: shardsvr.dataNode.podAffinityPreset, shardsvr.dataNode.podAntiAffinityPreset, and shardsvr.dataNode.nodeAffinityPreset will be ignored when it's set
    ##
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: avx
              operator: NotIn
              values:
              - "false"

  ## @section Shard configuration: Persistence parameters
  ##

  ## Enable persistence using Persistent Volume Claims
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param shardsvr.persistence.enabled Use a PVC to persist data
    ##
    enabled: true
    ## @param shardsvr.persistence.mountPath The path the volume will be mounted at, useful when using different MongoDB&reg; images.
    ##
    mountPath: /bitnami/mongodb
    ## @param shardsvr.persistence.subPath Subdirectory of the volume to mount at (evaluated as a template)
    ## Useful in development environments and one PV for multiple services.
    ##
    subPath: ""
    ## @param shardsvr.persistence.storageClass Storage class of backing PVC
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""

  ## @section Shard configuration: Arbiter parameters
  ##

  ## Properties for arbiter nodes
  ## ref: https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
  ##
  arbiter:
    ## @param shardsvr.arbiter.replicaCount Number of arbiters in each shard replica set (the first node will be primary)
    ##
    replicaCount: 4
    ## @param shardsvr.arbiter.affinity Arbiter's Affinity for pod assignment
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ## You can set arbiterLoopId (or any other parameter) by setting the below code block under this 'affinity' section:
    ## affinity:
    ##   matchLabels:
    ##     shard: "{{ .arbiterLoopId }}"
    ##
    ## Note: shardsvr.arbiter.podAffinityPreset, shardsvr.arbiter.podAntiAffinityPreset, and shardsvr.arbiter.nodeAffinityPreset will be ignored when it's set
    ##
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: avx
              operator: NotIn
              values:
              - "false"

Here shards was raised to 6, and each component's replicaCount was set to 4.

Upgrade the helm release:

helm upgrade mongodb-sharded -f ./mongodb-sharded/my-override-values.yaml ./mongodb-sharded/

The machine froze...

Checking, memory was exhausted: the 64 GB of RAM (the machine also runs some other services) was maxed out. I/O looked saturated too: top showed wa at 30+, and iostat -m showed MB_read/s around 0.56. The desktop kept warning "Filesystem is not responding", and kubectl describe on the failing pods showed API timeouts as well.

Let's scale the configuration back down:

shards was reduced to 3, keeping each component's replicaCount at 4.
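Resource exhaustion like the above can also be mitigated by capping per-pod resources in the override file, so a scale-up cannot starve the node. The bitnami charts expose a resources block per component; a hedged fragment (parameter paths assumed from the chart layout, and the request/limit values are only illustrative; verify both against the chart's values.yaml):

```yaml
# Fragment: bound memory/CPU per component so mongod cannot consume the whole host.
shardsvr:
  dataNode:
    resources:
      requests:
        cpu: 250m
        memory: 512Mi
      limits:
        memory: 1Gi
mongos:
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      memory: 512Mi
```

Note that WiredTiger sizes its cache from the detected memory, so limits should leave headroom above the configured cache size.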

[root@jingmin-kube-archlinux k8s]# vim ./mongodb-sharded/my-override-values.yaml 
[root@jingmin-kube-archlinux k8s]# helm upgrade mongodb-sharded -f ./mongodb-sharded/my-override-values.yaml ./mongodb-sharded/
Release "mongodb-sharded" has been upgraded. Happy Helming!
NAME: mongodb-sharded
LAST DEPLOYED: Mon Sep  4 01:29:13 2023
NAMESPACE: mongodb-sharded
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
CHART NAME: mongodb-sharded
CHART VERSION: 6.6.2
APP VERSION: 6.0.9

** Please be patient while the chart is being deployed **

The MongoDB&reg; Sharded cluster can be accessed via the Mongos instances in port 27017 on the following DNS name from within your cluster:

    mongodb-sharded.mongodb-sharded.svc.cluster.local

To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb-sharded mongodb-sharded -o jsonpath="{.data.mongodb-root-password}" | base64 -d)

To connect to your database run the following command:

    kubectl run --namespace mongodb-sharded mongodb-sharded-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb-sharded:6.0.9-debian-11-r0 --command -- mongosh admin --host mongodb-sharded --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace mongodb-sharded svc/mongodb-sharded 27017:27017 &
    mongosh --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

Check whether the pods and other resources are all up:

[root@jingmin-kube-archlinux k8s]# kubectl get all
NAME                                          READY   STATUS    RESTARTS        AGE
pod/mongodb-sharded-configsvr-0               1/1     Running   0               43m
pod/mongodb-sharded-configsvr-1               1/1     Running   0               42m
pod/mongodb-sharded-configsvr-2               1/1     Running   0               42m
pod/mongodb-sharded-configsvr-3               1/1     Running   0               42m
pod/mongodb-sharded-mongos-647498488f-9x9nx   1/1     Running   10 (3m8s ago)   45m
pod/mongodb-sharded-mongos-647498488f-cvxfm   1/1     Running   10 (98s ago)    45m
pod/mongodb-sharded-mongos-647498488f-mb7ff   1/1     Running   10 (98s ago)    45m
pod/mongodb-sharded-mongos-647498488f-sk7zj   1/1     Running   10 (98s ago)    45m
pod/mongodb-sharded-shard0-arbiter-0          1/1     Running   1 (21m ago)     43m
pod/mongodb-sharded-shard0-arbiter-1          1/1     Running   0               42m
pod/mongodb-sharded-shard0-arbiter-2          1/1     Running   2 (22m ago)     42m
pod/mongodb-sharded-shard0-arbiter-3          1/1     Running   1 (15m ago)     41m
pod/mongodb-sharded-shard0-data-0             1/1     Running   0               43m
pod/mongodb-sharded-shard0-data-1             1/1     Running   0               42m
pod/mongodb-sharded-shard0-data-2             1/1     Running   0               41m
pod/mongodb-sharded-shard0-data-3             1/1     Running   0               40m
pod/mongodb-sharded-shard1-arbiter-0          1/1     Running   3 (10m ago)     43m
pod/mongodb-sharded-shard1-arbiter-1          1/1     Running   3 (3m16s ago)   42m
pod/mongodb-sharded-shard1-arbiter-2          1/1     Running   3 (3m16s ago)   42m
pod/mongodb-sharded-shard1-arbiter-3          1/1     Running   0               41m
pod/mongodb-sharded-shard1-data-0             1/1     Running   0               43m
pod/mongodb-sharded-shard1-data-1             1/1     Running   0               42m
pod/mongodb-sharded-shard1-data-2             1/1     Running   0               41m
pod/mongodb-sharded-shard1-data-3             1/1     Running   0               41m
pod/mongodb-sharded-shard2-arbiter-0          1/1     Running   5 (7m52s ago)   43m
pod/mongodb-sharded-shard2-arbiter-1          1/1     Running   0               42m
pod/mongodb-sharded-shard2-arbiter-2          1/1     Running   1 (14m ago)     42m
pod/mongodb-sharded-shard2-arbiter-3          1/1     Running   2 (10m ago)     41m
pod/mongodb-sharded-shard2-data-0             1/1     Running   0               43m
pod/mongodb-sharded-shard2-data-1             1/1     Running   1 (14m ago)     42m
pod/mongodb-sharded-shard2-data-2             1/1     Running   0               41m
pod/mongodb-sharded-shard2-data-3             1/1     Running   4               41m

NAME                               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/mongodb-sharded            ClusterIP   172.31.8.135   <none>        27017/TCP   113m
service/mongodb-sharded-headless   ClusterIP   None           <none>        27017/TCP   113m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-sharded-mongos   4/4     4            4           113m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-sharded-mongos-647498488f   4         4         4       113m

NAME                                              READY   AGE
statefulset.apps/mongodb-sharded-configsvr        4/4     113m
statefulset.apps/mongodb-sharded-shard0-arbiter   4/4     52m
statefulset.apps/mongodb-sharded-shard0-data      4/4     113m
statefulset.apps/mongodb-sharded-shard1-arbiter   4/4     52m
statefulset.apps/mongodb-sharded-shard1-data      4/4     113m
statefulset.apps/mongodb-sharded-shard2-arbiter   4/4     52m
statefulset.apps/mongodb-sharded-shard2-data      4/4     52m

Everything appears to be up.

top shows the wa value back down around 0.x.

Try accessing the mongodb-sharded cluster from outside k8s again (port forwarding; for development/testing only, never production):

[root@jingmin-kube-archlinux k8s]# kubectl port-forward svc/mongodb-sharded 27017:27017
Forwarding from 127.0.0.1:27017 -> 27017
Forwarding from [::1]:27017 -> 27017

Connecting locally with Navicat to localhost:27017 shows the data written earlier is still there.

In summary, adjusting the number of shards and each component's replicaCount via helm upgrade works fine, and the existing data survives the upgrade.

