Nextcloud Installation

Reference: https://www.jianshu.com/p/a0b6fa67af03

Reference: https://nextcloud.com/include/

Reference: https://github.com/nextcloud/server

Reference: https://artifacthub.io/packages/helm/nextcloud/nextcloud

I previously ran Nextcloud with Docker, but that setup was a single point of failure, and storage was also a problem. Now I am migrating it to Kubernetes.

Installing Nextcloud with Helm

Create a namespace:

kubectl create ns nextcloud

Make it the default namespace for the current context:

kubectl config set-context --current --namespace nextcloud

Add the Helm repository and fetch the chart:

helm repo add nextcloud https://nextcloud.github.io/helm/

helm search repo nextcloud

helm pull nextcloud/nextcloud

helm pull nextcloud/nextcloud --untar

cp nextcloud/values.yaml my-override-values.yaml

Edit my-override-values.yaml, keeping only the values that differ from the chart defaults.

# cat my-override-values.yaml 
ingress:
  enabled: true
  # className: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 4G
    kubernetes.io/tls-acme: "true"
    #cert-manager.io/cluster-issuer: letsencrypt-prod
    cert-manager.io/issuer: letsencrypt-prod
    # Keep this in sync with the README.md:
    nginx.ingress.kubernetes.io/server-snippet: |-
      server_tokens off;
      proxy_hide_header X-Powered-By;
      rewrite ^/.well-known/webfinger /index.php/.well-known/webfinger last;
      rewrite ^/.well-known/nodeinfo /index.php/.well-known/nodeinfo last;
      rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
      rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
      location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
      }
      location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
      }
      location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
        deny all;
      }
  tls:
     - secretName: nextcloud-tls
       hosts:
         - nextcloud.ole12138.cn
  labels: {}
  path: /
  pathType: Prefix
phpClientHttpsFix:
  enabled: true
  protocol: https

nextcloud:
  host: nextcloud.ole12138.cn
  username: admin
  password: w784319947
internalDatabase:
  enabled: false
  name: nextcloud


##
## MariaDB chart configuration
## ref: https://github.com/bitnami/charts/tree/main/bitnami/mariadb
##
mariadb:
  ## Whether to deploy a MariaDB server from the Bitnami MariaDB Helm chart
  ## to satisfy the application's database requirements. To deploy the bundled
  ## MariaDB, set this to true (and set internalDatabase.enabled to false).
  ## To use an ALREADY DEPLOYED MariaDB database, set this to false and
  ## configure the externalDatabase parameters.
  enabled: true
  auth:
    database: nextcloud
    username: nextcloud
    password: w784319947
  architecture: standalone

  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  primary:
    persistence:
      enabled: true
      accessMode: ReadWriteOnce
      size: 8Gi

##
## Redis chart configuration
## for more options see https://github.com/bitnami/charts/tree/main/bitnami/redis
##

redis:
  enabled: true
  auth:
    enabled: true
    password: 'changeme'

service:
  type: ClusterIP
  port: 8080
  loadBalancerIP: ""
  nodePort: nil

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  # Nextcloud Data (/var/www/html)
  enabled: true
  annotations: {}
  ## nextcloud data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  accessMode: ReadWriteOnce
  size: 30Gi

  ## Use an additional pvc for the data directory rather than a subpath of the default PVC
  ## Useful to store data on a different storageClass (e.g. on slower disks)
  nextcloudData:
    enabled: false
    subPath:
    annotations: {}
    # storageClass: "-"
    # existingClaim:
    accessMode: ReadWriteOnce
    size: 8Gi


## Enable pod autoscaling using HorizontalPodAutoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
##
hpa:
  enabled: false
  cputhreshold: 60
  minPods: 1
  maxPods: 10

## Prometheus Exporter / Metrics
##
metrics:
  enabled: true

  replicaCount: 1
  # The metrics exporter needs to know whether you serve Nextcloud over http or https
  https: false
  # Use API token if set, otherwise fall back to password authentication
  # https://github.com/xperimental/nextcloud-exporter#token-authentication
  # Currently you still need to set the token manually in your nextcloud install
  token: ""
  timeout: 5s
  # if set to true, exporter skips certificate verification of Nextcloud server.
  tlsSkipVerify: false

  image:
    repository: xperimental/nextcloud-exporter
    tag: 0.6.2
    pullPolicy: IfNotPresent
    # pullSecrets:
    #   - myRegistrKeySecretName

  ## Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  # resources: {}

  ## Metrics exporter pod Annotation and Labels
  # podAnnotations: {}

  # podLabels: {}

  service:
    type: ClusterIP
    ## Use serviceLoadBalancerIP to request a specific static IP,
    ## otherwise leave blank
    # loadBalancerIP:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9205"
    labels: {}

  ## Prometheus Operator ServiceMonitor configuration
  ##
  serviceMonitor:
    ## @param metrics.serviceMonitor.enabled Create ServiceMonitor Resource for scraping metrics using PrometheusOperator
    ##
    enabled: false

    ## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
    ##
    namespace: ""

    ## @param metrics.serviceMonitor.namespaceSelector The selector of the namespace where the target service is located (defaults to the release namespace)
    namespaceSelector:

    ## @param metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus.
    ##
    jobLabel: ""

    ## @param metrics.serviceMonitor.interval Interval at which metrics should be scraped
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
    ##
    interval: 30s

    ## @param metrics.serviceMonitor.scrapeTimeout Specify the timeout after which the scrape is ended
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
    ##
    scrapeTimeout: ""

    ## @param metrics.serviceMonitor.labels Extra labels for the ServiceMonitor
    ##
    labels: {}

The overrides mainly enable/disable a few features.

Note: I forgot to change the Redis password here; it is still changeme.
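To correct this afterwards, the override can be updated and the release upgraded. A sketch (the new password is a placeholder, not a recommended value):

```yaml
# my-override-values.yaml (excerpt) -- replace the leftover default
redis:
  enabled: true
  auth:
    enabled: true
    password: 'pick-a-strong-password'   # placeholder value
```

followed by `helm upgrade nextcloud -f ./my-override-values.yaml nextcloud/nextcloud`. Note that Nextcloud may also keep the old Redis password in its generated config.php, so the application pods may need attention after the change.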

Attempt the deployment:

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# helm install nextcloud -f ./my-override-values.yaml nextcloud/nextcloud
Error: INSTALLATION FAILED: 1 error occurred:
        * admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: nginx.ingress.kubernetes.io/server-snippet annotation cannot be used. Snippet directives are disabled by the Ingress administrator

The deployment failed: an admission webhook rejected the Ingress. It looks like an ingress-nginx controller restriction.

Allow snippet annotations in ingress-nginx

Nextcloud's Ingress needs a server-snippet annotation, but the current ingress-nginx deployment disables snippet annotations by default, so that setting has to be turned on.

Reference: https://github.com/nextcloud/helm/issues/188

Reference: https://github.com/kubernetes/ingress-nginx/issues/7837

Set allow-snippet-annotations to true in your ingress-nginx ConfigMap, depending on how you deployed ingress-nginx:

Static deploy files: edit the ConfigMap for ingress-nginx after deployment:

kubectl edit configmap -n ingress-nginx ingress-nginx-controller

Add directive:

data:
  allow-snippet-annotations: "true"

More information on the ConfigMap is available in the ingress-nginx documentation.

Deploying via Helm: set controller.allowSnippetAnnotations to true in values.yaml, or pass it on the command line:

helm install [RELEASE_NAME] --set controller.allowSnippetAnnotations=true ingress-nginx/ingress-nginx

Alternatively:

root@wangjm-B550M-K-1:~/k8s/helm/ingress-nginx# helm search repo ingress-nginx
root@wangjm-B550M-K-1:~/k8s/helm/ingress-nginx# helm pull ingress-nginx/ingress-nginx --untar


root@wangjm-B550M-K-1:~/k8s/helm/ingress-nginx# cp ingress-nginx/values.yaml my-override-values.yaml
root@wangjm-B550M-K-1:~/k8s/helm/ingress-nginx# ls
ingress-nginx  ingress-nginx-4.10.1.tgz  my-override-values.yaml
root@wangjm-B550M-K-1:~/k8s/helm/ingress-nginx# vim my-override-values.yaml 
root@wangjm-B550M-K-1:~/k8s/helm/ingress-nginx# cat my-override-values.yaml 
controller:
  # -- This configuration defines if Ingress Controller should allow users to set
  # their own *-snippet annotations, otherwise this is forbidden / dropped
  # when users add those annotations.
  # Global snippets in ConfigMap are still respected
  allowSnippetAnnotations: true
root@wangjm-B550M-K-1:~/k8s/helm/ingress-nginx# kubectl config set-context --current --namespace ingress-nginx
Context "kubernetes-admin@kubernetes" modified.
root@wangjm-B550M-K-1:~/k8s/helm/ingress-nginx# helm upgrade ingress-nginx -f ./my-override-values.yaml ingress-nginx/ingress-nginx
Release "ingress-nginx" has been upgraded. Happy Helming!
NAME: ingress-nginx
LAST DEPLOYED: Fri May 10 00:18:06 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

With that, the ingress-nginx controller upgrade is complete.

Redeploy Nextcloud

First, at the cloud provider, add a DNS record for nextcloud.ole12138.cn pointing to the jump host there.

Then nginx on that host forwards the traffic (through several hops) until it reaches the internal Kubernetes ingress address, 192.168.1.100.
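The forwarding on the jump host can be done with an nginx stream (TCP) proxy, so TLS is passed through and terminated at the in-cluster ingress, which also lets the HTTP-01 challenge reach the cluster. A minimal sketch, assuming nginx is built with the stream module and 192.168.1.100 is reachable from the jump host (e.g. over a VPN); the exact topology here is an assumption:

```nginx
# /etc/nginx/nginx.conf (excerpt) on the jump host -- hypothetical
stream {
    server {
        listen 80;
        proxy_pass 192.168.1.100:80;   # plain HTTP, incl. ACME HTTP-01 challenges
    }
    server {
        listen 443;
        proxy_pass 192.168.1.100:443;  # TLS passthrough; the certificate stays in the cluster
    }
}
```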

Try deploying Nextcloud again:

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# helm list
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS  CHART           APP VERSION
nextcloud       nextcloud       1               2024-05-09 23:49:47.556521307 +0800 CST failed  nextcloud-4.6.8 29.0.0     
root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# helm upgrade nextcloud -f ./my-override-values.yaml nextcloud/nextcloud
Release "nextcloud" has been upgraded. Happy Helming!
NAME: nextcloud
LAST DEPLOYED: Fri May 10 00:26:32 2024
NAMESPACE: nextcloud
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
1. Get the nextcloud URL by running:

  export POD_NAME=$(kubectl get pods --namespace nextcloud -l "app.kubernetes.io/name=nextcloud" -o jsonpath="{.items[0].metadata.name}")
  echo http://127.0.0.1:8080/
  kubectl port-forward --namespace nextcloud $POD_NAME 8080:80

2. Get your nextcloud login credentials by running:

  echo User:     admin
  echo Password: $(kubectl get secret --namespace nextcloud nextcloud -o jsonpath="{.data.nextcloud-password}" | base64 --decode)

It looks like Nextcloud is now deployed to Kubernetes.

Recall the hostname and TLS settings for the Ingress in my-override-values.yaml:

ingress:
  tls:
     - secretName: nextcloud-tls
       hosts:
         - nextcloud.ole12138.cn
  #...
nextcloud:
  host: nextcloud.ole12138.cn
  #...

Check the resources:

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# kubectl get ingress
NAME        CLASS   HOSTS                   ADDRESS         PORTS     AGE
nextcloud   nginx   nextcloud.ole12138.cn   192.168.1.100   80, 443   3m54s
root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# kubectl get issur
error: the server doesn't have a resource type "issur"
root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# kubectl get issuer
No resources found in nextcloud namespace.
root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# kubectl get issuer,challenge,certificate
NAME                                        READY   SECRET          AGE
certificate.cert-manager.io/nextcloud-tls   False   nextcloud-tls   4m37s

Deploy cert-manager Issuers

The cert-manager certificate was not issued automatically because there is no Issuer in the current namespace.

# cat staging-issuer.yaml 
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: 784319947@qq.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
# cat production-issuer.yaml 
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: 784319947@qq.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx

Apply them in the current namespace:

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud/cert# kubectl apply -f ./staging-issuer.yaml 
issuer.cert-manager.io/letsencrypt-staging created
root@wangjm-B550M-K-1:~/k8s/helm/nextcloud/cert# kubectl apply -f production-issuer.yaml 
issuer.cert-manager.io/letsencrypt-prod created
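Incidentally, a cluster-scoped ClusterIssuer would avoid repeating this in every namespace; the Ingress annotation would then be cert-manager.io/cluster-issuer instead. A sketch based on the production Issuer above:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer        # cluster-scoped, usable from any namespace
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: 784319947@qq.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```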

Check again:

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# kubectl get issuer,challenge,certificate,cr
NAME                                         READY   AGE
issuer.cert-manager.io/letsencrypt-prod      True    18m
issuer.cert-manager.io/letsencrypt-staging   True    19m

NAME                                        READY   SECRET          AGE
certificate.cert-manager.io/nextcloud-tls   False   nextcloud-tls   16m

NAME                                                 APPROVED   DENIED   READY   ISSUER             REQUESTOR                                         AGE
certificaterequest.cert-manager.io/nextcloud-tls-1   True                False   letsencrypt-prod   system:serviceaccount:cert-manager:cert-manager   16m

Certificate issuance still has a problem:

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# kubectl describe certificaterequest.cert-manager.io/nextcloud-tls-1
Name:         nextcloud-tls-1
Namespace:    nextcloud
Labels:       app.kubernetes.io/component=app
              app.kubernetes.io/instance=nextcloud
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=nextcloud
              helm.sh/chart=nextcloud-4.6.8
Annotations:  cert-manager.io/certificate-name: nextcloud-tls
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: nextcloud-tls-v6zjr
API Version:  cert-manager.io/v1
Kind:         CertificateRequest
Metadata:
  Creation Timestamp:  2024-05-09T16:36:26Z
  Generation:          1
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Certificate
    Name:                  nextcloud-tls
    UID:                   602e3f00-4001-4d4f-8c5b-79cccb53b548
  Resource Version:        444902
  UID:                     06db4f35-272d-4683-8a3f-91a2ca378581
Spec:
  Extra:
    authentication.kubernetes.io/credential-id:
      JTI=b799a1cf-ac6d-4bd8-bbd2-8a8576f81b72
    authentication.kubernetes.io/node-name:
      wangjm-b550m-k-3
    authentication.kubernetes.io/node-uid:
      bdd89e46-94ea-45ea-9b4d-8f02f4200dc9
    authentication.kubernetes.io/pod-name:
      cert-manager-788887dcf6-4z84w
    authentication.kubernetes.io/pod-uid:
      e0f8084b-3b60-4c41-9eba-96ddd3e585ad
  Groups:
    system:serviceaccounts
    system:serviceaccounts:cert-manager
    system:authenticated
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   letsencrypt-prod
  Request:  LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2l6Q0NBWE1DQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1BSApTSmpXQ2dSTGxrNzNCK0NEZ3lMMlNuck9ybnYzSHZTOUFBcnh3aVI1djFHOUlBSHh5NEZEZnhJakRwcVJ2V2J5CkUwNlZOMU9rZ3pwNWRQSFpGWWpBemFlQ1dad3FaemRZYnNjYVM3aG5JcjI2WUNkcXJ0M0VadWpYWWZkZHpGUXgKSitQVStXWEsrTnBLYjhjQXQ4WGhrM0loWHkyZXkzTWlQR2RCNG4rYXliNmdSdXY4bDVjejhjTHlXMEh3dk9vZAo5eDJZNnRPM1dDUUY3RlNWQStkSzdpS2Z4MmhLY29jUGFWNkVraU5vUGlTWjZYVjNlYkJZZEh6cU15ZXhtUzRjCk05WGJMQk1yZUZFdHJNODBJRGFGY1gwZW50b3liL1Q4UlVIaDVjZ09kclJ6N3lmNnhTV2d6aktrYVNkZmMySWMKM3VBR09TUlpWK3F1b3Z6ckQxMENBd0VBQWFCR01FUUdDU3FHU0liM0RRRUpEakUzTURVd0l3WURWUjBSQVFILwpCQmt3RjRJVmJtVjRkR05zYjNWa0xtOXNaVEV5TVRNNExtTnVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBUUVBZklUSFpacjF2TEdidzg2Y3Bhdmo1dHg0cGVySThuY1pDejVlTENXM09WSVYKSFF6UXNoTjBYS24zcGVWQkgzeUg0OXVFNzQxdE5waUVNejJoVWl5bE1wTmhmT2xCYnZDZStZVkh3NGVDM3VpWQpIZWpodktQSmQvdFVYREQzS1dIWVZLK2VJRm5GV3RZQis0L1hJK3c0UmE1Uk5Mby9hTXNkUG45SGNxZm0yVm0xCmJnYm9zTVlJMTJvMjJOZWZBWW5hRGdjWmN6amxyZWY5amJZWjZUS0ViVmROZVNTL0tBbHlQVStIay8zR3ZSa2oKRHZlNFRXQ05ETVlGSzMrV3VZVktlaEhmSUUyOVhBY3pvalN5d3YrbllrT1B6enlZSkNKVG9idFNRU2dQTUQvcQo2cmpKU2tVRW9wZVpmWldWVWFBVTZ4cnZSUU90U1NUUGd1eHdpeElncHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K
  UID:      f31959c8-4ba2-4dbe-b70a-7c7a5ed04da4
  Usages:
    digital signature
    key encipherment
  Username:  system:serviceaccount:cert-manager:cert-manager
Status:
  Conditions:
    Last Transition Time:  2024-05-09T16:36:26Z
    Message:               Certificate request has been approved by cert-manager.io
    Reason:                cert-manager.io
    Status:                True
    Type:                  Approved
    Last Transition Time:  2024-05-09T16:36:26Z
    Message:               Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:
  Type    Reason              Age   From                                                Message
  ----    ------              ----  ----                                                -------
  Normal  WaitingForApproval  16m   cert-manager-certificaterequests-issuer-acme        Not signing CertificateRequest until it is Approved
  Normal  WaitingForApproval  16m   cert-manager-certificaterequests-issuer-venafi      Not signing CertificateRequest until it is Approved
  Normal  WaitingForApproval  16m   cert-manager-certificaterequests-issuer-selfsigned  Not signing CertificateRequest until it is Approved
  Normal  WaitingForApproval  16m   cert-manager-certificaterequests-issuer-vault       Not signing CertificateRequest until it is Approved
  Normal  WaitingForApproval  16m   cert-manager-certificaterequests-issuer-ca          Not signing CertificateRequest until it is Approved
  Normal  cert-manager.io     16m   cert-manager-certificaterequests-approver           Certificate request has been approved by cert-manager.io
  Normal  IssuerNotFound      16m   cert-manager-certificaterequests-issuer-selfsigned  Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found
  Normal  IssuerNotFound      16m   cert-manager-certificaterequests-issuer-vault       Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found
  Normal  IssuerNotFound      16m   cert-manager-certificaterequests-issuer-acme        Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found
  Normal  IssuerNotFound      16m   cert-manager-certificaterequests-issuer-ca          Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found
  Normal  IssuerNotFound      16m   cert-manager-certificaterequests-issuer-venafi      Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found

The issuer annotation in the Helm overrides was wrong: since these are namespaced Issuer resources, the annotation must be cert-manager.io/issuer, not cert-manager.io/cluster-issuer.
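The mapping is: cert-manager.io/issuer references a namespaced Issuer, while cert-manager.io/cluster-issuer references a cluster-scoped ClusterIssuer. The corrected annotation block, as an excerpt:

```yaml
ingress:
  annotations:
    # Issuer is namespaced; this must match an Issuer in the release namespace
    cert-manager.io/issuer: letsencrypt-prod
    # For a cluster-scoped issuer you would instead use:
    # cert-manager.io/cluster-issuer: letsencrypt-prod
```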

Note: the my-override-values.yaml shown earlier is already the corrected version.

Redeploy:

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# helm upgrade nextcloud -f ./my-override-values.yaml nextcloud/nextcloud
Release "nextcloud" has been upgraded. Happy Helming!
NAME: nextcloud
LAST DEPLOYED: Fri May 10 00:56:29 2024
NAMESPACE: nextcloud
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
1. Get the nextcloud URL by running:

  export POD_NAME=$(kubectl get pods --namespace nextcloud -l "app.kubernetes.io/name=nextcloud" -o jsonpath="{.items[0].metadata.name}")
  echo http://127.0.0.1:8080/
  kubectl port-forward --namespace nextcloud $POD_NAME 8080:80

2. Get your nextcloud login credentials by running:

  echo User:     admin
  echo Password: $(kubectl get secret --namespace nextcloud nextcloud -o jsonpath="{.data.nextcloud-password}" | base64 --decode)
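The credentials command above works because Kubernetes stores Secret data base64-encoded: the jsonpath expression extracts the field and `base64 --decode` recovers the plaintext. The mechanics can be reproduced locally with a dummy value (not the real password):

```shell
# Simulate decoding a Secret field locally (dummy value standing in
# for what kubectl would return from .data.nextcloud-password)
encoded=$(printf 'changeme' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"   # prints: changeme
```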

Check again:

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# kubectl get issuer,challenge,certificate,cr
NAME                                         READY   AGE
issuer.cert-manager.io/letsencrypt-prod      True    23m
issuer.cert-manager.io/letsencrypt-staging   True    23m

NAME                                                                 STATE     DOMAIN                  AGE
challenge.acme.cert-manager.io/nextcloud-tls-1-949004722-733452632   pending   nextcloud.ole12138.cn   15s

NAME                                        READY   SECRET          AGE
certificate.cert-manager.io/nextcloud-tls   False   nextcloud-tls   20m

NAME                                                 APPROVED   DENIED   READY   ISSUER             REQUESTOR                                         AGE
certificaterequest.cert-manager.io/nextcloud-tls-1   True                False   letsencrypt-prod   system:serviceaccount:cert-manager:cert-manager   19s

Log in on the web at nextcloud.ole12138.cn

There is an initialization step; use the application credentials configured earlier in my-override-values.yaml:

admin
w784319947

Create an administrator account:

wangjm
w784319947

The database host defaults to:

nextcloud-mariadb

Choose MySQL/MariaDB, since MariaDB was enabled in my-override-values.yaml.

You can also check the name of the MariaDB service deployed in Kubernetes (it should match):

root@wangjm-B550M-K-1:~/k8s/helm/nextcloud# kubectl get svc
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
cm-acme-http-solver-m9cw7   NodePort    172.31.1.127    <none>        8089:30528/TCP   14m
nextcloud                   ClusterIP   172.31.14.96    <none>        8080/TCP         81m
nextcloud-mariadb           ClusterIP   172.31.2.169    <none>        3306/TCP         81m
nextcloud-metrics           ClusterIP   172.31.9.59     <none>        9205/TCP         81m
nextcloud-redis-headless    ClusterIP   None            <none>        6379/TCP         81m
nextcloud-redis-master      ClusterIP   172.31.8.72     <none>        6379/TCP         81m
nextcloud-redis-replicas    ClusterIP   172.31.12.248   <none>        6379/TCP         81m
