Nacos Installation

Official docs: https://nacos.io/zh-cn/docs/use-nacos-with-kubernetes.html

Official docs: https://github.com/nacos-group/nacos-k8s

Official docs: https://github.com/nacos-group/nacos-k8s/blob/master/README-CN.md

Create a dedicated namespace in k8s to hold the Nacos services.

# Create the namespace
kubectl create ns nacos
# Make nacos the default namespace of the current context (subsequent commands run in the nacos namespace)
kubectl config set-context --current --namespace nacos
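
Optionally, confirm that the current context now points at the nacos namespace:

kubectl config view --minify --output 'jsonpath={..namespace}'; echo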

Following the official docs, clone the repo:

git clone https://github.com/nacos-group/nacos-k8s.git

Change into the working directory:

cd nacos-k8s

If your K8S namespace is not default, run the following script before deploying the RBAC objects (it rewrites the namespace references in the default config):

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/nfs/rbac.yaml
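
A quick sanity check that the namespace fields were rewritten as expected:

grep "namespace:" ./deploy/nfs/rbac.yaml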

Create the RBAC roles:

kubectl create -f deploy/nfs/rbac.yaml

Create the NFS StorageClass:

kubectl create -f deploy/nfs/class.yaml

Create the ServiceAccount and deploy the NFS-Client Provisioner:

kubectl create -f deploy/nfs/deployment.yaml

(I adjusted the yaml to match the location of my own NFS export, and added the no_root_squash option to /etc/exports on the NFS server.)
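
For reference, a minimal /etc/exports entry on the NFS server (192.168.1.7) could look like the sketch below; the export path matches the one used in deployment.yaml further down, while the allowed subnet is an assumption about the home LAN and should be adjusted:

# /etc/exports on 192.168.1.7 -- rw + no_root_squash so pods running as root can write
/ole/data/nfs/no_root_squash 192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

# reload the export table without restarting the NFS server
exportfs -ra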

Verify that the NFS provisioner was deployed successfully:

kubectl get pod -l app=nfs-client-provisioner
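
To go a step further, you can check that dynamic provisioning actually works by creating a throwaway PVC against the managed-nfs-storage class (the claim name test-claim is just for illustration):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
# the claim should become Bound within a few seconds
kubectl get pvc test-claim
# clean up afterwards
kubectl delete pvc test-claim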

Deploy the database:

kubectl create -f deploy/mysql/mysql-nfs.yaml

Verify the database is up:

kubectl get pod

Run the database initialization SQL (skip this step!)

Just a note for the record. Skip this step and continue with deploying Nacos; the schema is initialized automatically (after deploying Nacos I checked with kubectl exec -it mysql-0 -- /bin/bash, and it had indeed already been initialized). There is no need to run it by hand, and a manually executed script may not even match your Nacos version. The initialization SQL lives at https://github.com/alibaba/nacos/blob/develop/distribution/conf/mysql-schema.sql
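
Once Nacos is up, a quick way to confirm the schema really was created is to list the tables using the credentials from mysql-nfs.yaml (database nacos_devtest, user nacos / password nacos; the pod name mysql-0 comes from the single-replica StatefulSet):

kubectl exec -it mysql-0 -- mysql -unacos -pnacos nacos_devtest -e "show tables;"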

Create Nacos:

kubectl create -f deploy/nacos/nacos-pvc-nfs.yaml

Verify that the Nacos pods started successfully:

kubectl get pod -l app=nacos

(My cluster has only one master and one worker node, and the master carries a NoSchedule taint, so only one pod instance starts; the remaining pods stay Pending, which does not affect usage.)
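
Before wiring up the ingress, the console can be sanity-checked locally with a temporary port-forward to one of the pods (Ctrl-C to stop it):

kubectl port-forward pod/nacos-0 8848:8848
# in another terminal; the console answers under the /nacos context path
curl -I http://localhost:8848/nacos/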

Copy deploy/nacos/nacos-no-pvc-ingress.yaml as a template, take just the ingress part of it, and turn it into an ingress config file. Adjust it to your environment.

[root@jingmin-kube-archlinux nacos-k8s]# cat deploy/nacos/nacos-ingress.yaml 
# ------------------- App Ingress ------------------- #
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nacos-headless
  annotations:
    nginx.ingress.kubernetes.io/app-root: /nacos
spec:
  ingressClassName: nginx
  rules:
  - host: nacos.ole12138.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: 
            name: nacos-headless
            port:
              name: server
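
Apply the ingress (assuming the listing above is saved as deploy/nacos/nacos-ingress.yaml) and check that it is admitted:

kubectl apply -f deploy/nacos/nacos-ingress.yaml
kubectl get ingress nacos-headless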

I had already installed the nginx ingress controller, so there are a few nginx-ingress-specific settings in it.

For example spec.ingressClassName: nginx and metadata.annotations.nginx.ingress.kubernetes.io/app-root: /nacos. The latter tells the ingress that the application's context path is /nacos, so it no longer has to appear in the URL you type. Without this annotation you would have to visit http://nacos.ole12138.cn/nacos; with it, http://nacos.ole12138.cn is enough.

In addition, nacos.ole12138.cn has to be resolved (or forwarded) to the cluster's ingress address (a LoadBalancer provided by MetalLB).
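
To find the address to point DNS at, look up the LoadBalancer IP of the ingress controller; the namespace and service name below are the common ingress-nginx defaults and may differ in your installation. You can also hit the ingress directly by pinning the hostname to that IP with curl:

kubectl get svc -n ingress-nginx ingress-nginx-controller
# replace 192.168.1.100 with the EXTERNAL-IP reported above
curl -I --resolve nacos.ole12138.cn:80:192.168.1.100 http://nacos.ole12138.cn/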

Bare-bones home private cloud

Alibaba Cloud ECS forwarding configuration (this server has a public IP, and DNS resolves ole12138.cn to it)

nginx configuration

[root@iZbp10a4jqmwddltilchn0Z ~]# cat /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

}

stream {
    include /etc/nginx/tcp.d/*.conf;
}

Then the configuration in the included subdirectory:

[root@iZbp10a4jqmwddltilchn0Z ~]# cat /etc/nginx/tcp.d/80_443.conf 
upstream ole12138_top_10080 {
    hash $remote_addr consistent;
    server ole12138.top:10080 max_fails=3 fail_timeout=10s;
}

server {
    listen 80;
    proxy_connect_timeout 20s;
    #proxy_timeout 5m;
    proxy_pass ole12138_top_10080;
}


upstream ole12138_top_10443 {
    hash $remote_addr consistent;
    server ole12138.top:10443 max_fails=3 fail_timeout=10s;
}

server {
    listen 443;
    proxy_connect_timeout 20s;
    #proxy_timeout 5m;
    proxy_pass ole12138_top_10443;
}

Home network gateway (Linux soft router) configuration

nginx configuration

(Only the configuration in the included subdirectory is shown here.)

[root@jingmin-kube-master1 ~]# cat /etc/nginx/tcp.d/10080_10443.conf 
upstream 192.168.1.100_80 {
    hash $remote_addr consistent;
    server 192.168.1.100:80 max_fails=3 fail_timeout=10s;
}

server {
    listen 10080;
    proxy_connect_timeout 20s;
    #proxy_timeout 5m;
    proxy_pass 192.168.1.100_80;
}


upstream 192.168.1.100_443 {
    hash $remote_addr consistent;
    server 192.168.1.100:443 max_fails=3 fail_timeout=10s;
}

server {
    listen 10443;
    proxy_connect_timeout 20s;
    #proxy_timeout 5m;
    proxy_pass 192.168.1.100_443;
}

With the above configuration, public requests for nacos.ole12138.cn resolve via DNS to the Alibaba Cloud ECS server; the ECS server forwards them to the home network gateway at ole12138.top:10080/10443; the home gateway then forwards them to 192.168.1.100:80/443, which is the ingress service address of the k8s cluster inside the home network.
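
Once DNS has propagated, the whole chain can be checked from any machine on the public internet; with the app-root annotation in place, the root URL should answer with a redirect to /nacos:

curl -I http://nacos.ole12138.cn/
# expect a 302 with "Location: .../nacos", then check the console itself
curl -I http://nacos.ole12138.cn/nacos/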

The reason for routing through the Alibaba Cloud server: the home network does have a public IP, and DDNS can bind it to a domain name, but China Telecom blocks ports 80/443 on residential broadband by default.

Appendix

Create the RBAC roles

[root@jingmin-kube-archlinux nacos-k8s]# cat deploy/nfs/rbac.yaml 
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nacos
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nacos
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

NFS StorageClass

[root@jingmin-kube-archlinux nacos-k8s]# cat deploy/nfs/class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "true"

Create the ServiceAccount and deploy the NFS-Client Provisioner

[root@jingmin-kube-archlinux nacos-k8s]# cat deploy/nfs/deployment.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: quay.io/external_storage/nfs-client-provisioner:latest
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              #value: 172.17.79.3
              value: 192.168.1.7
            - name: NFS_PATH
              #value: /data/nfs-share
              #value: /ole/data/nfs/public
              value: /ole/data/nfs/no_root_squash
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.7
            #path: /data/nfs-share
            path: /ole/data/nfs/no_root_squash

Deploy the database

[root@jingmin-kube-archlinux nacos-k8s]# cat deploy/mysql/mysql-nfs.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mysql
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: nacos/nacos-mysql:5.7
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: MYSQL_DATABASE
          value: "nacos_devtest"
        - name: MYSQL_USER
          value: "nacos"
        - name: MYSQL_PASSWORD
          value: "nacos"
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-nacos-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-nacos-pv-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysql

Create Nacos

[root@jingmin-kube-archlinux nacos-k8s]# cat deploy/nacos/nacos-pvc-nfs.yaml 
# Please read the Wiki article first: https://github.com/nacos-group/nacos-k8s/wiki/%E4%BD%BF%E7%94%A8peerfinder%E6%89%A9%E5%AE%B9%E6%8F%92%E4%BB%B6
---
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  labels:
    app: nacos
spec:
  publishNotReadyAddresses: true 
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    ## election port kept for compatibility with 1.4.x
    - port: 7848
      name: old-raft-rpc
      targetPort: 7848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
data:
  mysql.host: "mysql"
  mysql.db.name: "nacos_devtest"
  mysql.port: "3306"
  mysql.user: "nacos"
  mysql.password: "nacos"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
spec:
  podManagementPolicy: Parallel
  serviceName: nacos-headless
  #replicas: 3
  replicas: 2
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:1.1
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /home/nacos/plugins/peer-finder
              name: data
              subPath: peer-finder
      containers:
        - name: nacos
          imagePullPolicy: Always
          image: nacos/nacos-server:latest
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client-port
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: DOMAIN_NAME
              value: "cluster.local"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.host
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            - name: SPRING_DATASOURCE_PLATFORM
              value: "mysql"
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
          volumeMounts:
            - name: data
              mountPath: /home/nacos/plugins/peer-finder
              subPath: peer-finder
            - name: data
              mountPath: /home/nacos/data
              subPath: data
            - name: data
              mountPath: /home/nacos/logs
              subPath: logs
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 20Gi
  selector:
    matchLabels:
      app: nacos
