Jenkins Installation

Reference: https://www.jenkins.io/doc/book/installing/kubernetes/

Reference: https://cloud.google.com/kubernetes-engine/docs/archive/jenkins-on-kubernetes-engine

Reference: https://www.jenkins.io/doc/book/managing/nodes/

Tool selection

For self-hosted CI/CD, the two most widely used options are Jenkins and GitLab CI. There are plenty of articles comparing the two.

GitLab CI is one component of the GitLab suite.

For now we use Jenkins. If it turns out not to be too much trouble, we may migrate to GitLab CI later.

Judging from what others report, Jenkins does not support high availability: multiple replicas just create multiple unrelated instances.

The controller's replica count should therefore be 1.

The following image describes the architecture for deploying Jenkins in a multi-node Kubernetes cluster.

[Figure: jenkins-kubernetes-architecture]

Notes on nodes, agents, and executors: https://www.jenkins.io/doc/book/managing/nodes/

Installation

Reference: https://www.jenkins.io/doc/book/installing/kubernetes/#install-jenkins-with-helm-v3

Reference: https://charts.jenkins.io

Reference: https://github.com/jenkinsci/helm-charts/blob/main/charts/jenkins/README.md

Reference: https://octopus.com/blog/jenkins-helm-install-guide

Reference: https://medium.com/@viewlearnshare/setting-up-jenkins-with-helm-on-a-kubernetes-cluster-5d10458e7596

Reference: https://anuja-kumari.medium.com/helm-chart-to-deploy-jenkins-in-kubernetes-c77d1e9955c4

Reference: https://cloud.tencent.com/developer/article/1807943

Add the Jenkins chart repository to Helm:

helm repo add jenkinsci https://charts.jenkins.io
helm repo update
helm search repo

First, download the Jenkins chart and take a look:

helm fetch jenkinsci/jenkins --untar

Copy the values file so the local modifications are kept separately:

[root@jingmin-kube-archlinux jenkins]# cp values.yaml my-override-values.yaml
[root@jingmin-kube-archlinux jenkins]# vim my-override-values.yaml
[root@jingmin-kube-archlinux jenkins]# cat my-override-values.yaml 

The following is the override configuration:

controller:
  # When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
  # If you disable the non-Jenkins identity store and instead use the Jenkins internal one,
  # you should revert controller.adminUser to your preferred admin user:
  adminUser: "admin"
  # adminPassword: <defaults to random>
  adminPassword: Jenkins12345
  admin:
    existingSecret: ""
    userKey: jenkins-admin-user
    passwordKey: jenkins-admin-password
  # For minikube, set this to NodePort, elsewhere use LoadBalancer
  # Use ClusterIP if your setup includes ingress controller
  serviceType: ClusterIP


  # Name of default cloud configuration.
  cloudName: "kubernetes"


  ingress:
    #enabled: false
    enabled: true
    # Override for the default paths that map requests to the backend
    paths: []
    # - backend:
    #     serviceName: ssl-redirect
    #     servicePort: use-annotation
    # - backend:
    #     serviceName: >-
    #       {{ template "jenkins.fullname" . }}
    #     # Don't use string here, use only integer value!
    #     servicePort: 8080
    # For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
    # For Kubernetes v1.19+, use 'networking.k8s.io/v1'
    #apiVersion: "extensions/v1beta1"
    apiVersion: "networking.k8s.io/v1"
    labels: {}
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Set this path to jenkinsUriPrefix above or use annotations to rewrite path
    # path: "/jenkins"
    # configures the hostname e.g. jenkins.example.com
    #hostName:
    hostName: jenkins.ole12138.cn
    #tls:
    # - secretName: jenkins.cluster.local
    #   hosts:
    #     - jenkins.cluster.local
    tls:
     - secretName: jenkins-ole12138-cn-tls
       hosts:
         - jenkins.ole12138.cn


agent:
  enabled: true
  resources:
    requests:
      cpu: "512m"
      memory: "512Mi"
    limits:
      cpu: "512m"
      memory: "512Mi"
  # You can define the volumes that you want to mount for this container
  # Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, PVC, Secret
  # Configure the attributes as they appear in the corresponding Java class for that type
  # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
  volumes: []
  # - type: ConfigMap
  #   configMapName: myconfigmap
  #   mountPath: /var/myapp/myconfigmap
  # - type: EmptyDir
  #   mountPath: /var/myapp/myemptydir
  #   memory: false
  # - type: HostPath
  #   hostPath: /var/lib/containers
  #   mountPath: /var/myapp/myhostpath
  # - type: Nfs
  #   mountPath: /var/myapp/mynfs
  #   readOnly: false
  #   serverAddress: "192.0.2.0"
  #   serverPath: /var/lib/containers
  # - type: PVC
  #   claimName: mypvc
  #   mountPath: /var/myapp/mypvc
  #   readOnly: false
  # - type: Secret
  #   defaultMode: "600"
  #   mountPath: /var/myapp/mysecret
  #   secretName: mysecret
  # Pod-wide environment, these vars are visible to any container in the agent pod

  # You can define the workspaceVolume that you want to mount for this container
  # Allowed types are: DynamicPVC, EmptyDir, HostPath, Nfs, PVC
  # Configure the attributes as they appear in the corresponding Java class for that type
  # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes/workspace
  workspaceVolume: {}
  ## DynamicPVC example
  # type: DynamicPVC
  # configMapName: myconfigmap
  ## EmptyDir example
  # type: EmptyDir
  # memory: false
  ## HostPath example
  # type: HostPath
  # hostPath: /var/lib/containers
  ## NFS example
  # type: Nfs
  # readOnly: false
  # serverAddress: "192.0.2.0"
  # serverPath: /var/lib/containers
  ## PVC example
  # type: PVC
  # claimName: mypvc
  # readOnly: false
  #
  # Pod-wide environment, these vars are visible to any container in the agent pod
  envVars: []
  # - name: PATH
  #   value: /usr/local/bin
  # Mount a secret as environment variable
  secretEnvVars: []
  # - key: PATH
  #   optional: false # default: false
  #   secretKey: MY-K8S-PATH
  #   secretName: my-k8s-secret
  nodeSelector: {}
  # Key Value selectors. Ex:
  # jenkins-agent: v1

  # Add additional containers to the agents.
  # Containers specified here are added to all agents. Set key empty to remove container from additional agents.
  additionalContainers: []
  #  - sideContainerName: dind
  #    image: docker
  #    tag: dind
  #    command: dockerd-entrypoint.sh
  #    args: ""
  #    privileged: true
  #    resources:
  #      requests:
  #        cpu: 500m
  #        memory: 1Gi
  #      limits:
  #        cpu: 1
  #        memory: 2Gi

  # Disable the default Jenkins Agent configuration.
  # Useful when configuring agents only with the podTemplates value, since the default podTemplate populated by values mentioned above will be excluded in the rendered template.
  disableDefaultAgent: false

  # Below is the implementation of custom pod templates for the default configured kubernetes cloud.
  # Add a key under podTemplates for each pod template. Each key (prior to | character) is just a label, and can be any value.
  # Keys are only used to give the pod template a meaningful name.  The only restriction is they may only contain RFC 1123 \ DNS label
  # characters: lowercase letters, numbers, and hyphens. Each pod template can contain multiple containers.
  # For this pod templates configuration to be loaded the following values must be set:
  # controller.JCasC.defaultConfig: true
  # Best reference is https://<jenkins_url>/configuration-as-code/reference#Cloud-kubernetes. The example below creates a python pod template.
  podTemplates: {}
  #  python: |
  #    - name: python
  #      label: jenkins-python
  #      serviceAccount: jenkins
  #      containers:
  #        - name: python
  #          image: python:3
  #          command: "/bin/sh -c"
  #          args: "cat"
  #          ttyEnabled: true
  #          privileged: true
  #          resourceRequestCpu: "400m"
  #          resourceRequestMemory: "512Mi"
  #          resourceLimitCpu: "1"
  #          resourceLimitMemory: "1024Mi"

# Here you can add additional agents
# They inherit all values from `agent` so you only need to specify values which differ
#additionalAgents: {}
additionalAgents:
  maven:
    podName: maven
    customJenkinsLabels: maven
    # An example of overriding the jnlp container
    # sideContainerName: jnlp
    image: jenkins/jnlp-agent-maven
    tag: latest
  python:
    podName: python
    customJenkinsLabels: python
    sideContainerName: python
    image: python
    tag: "3"
    command: "/bin/sh -c"
    args: "cat"
    TTYEnabled: true

persistence:
  enabled: true
  ## jenkins data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass:
  annotations: {}
  labels: {}
  accessMode: "ReadWriteOnce"
  size: "8Gi"

The main change here is setting the initial admin password directly.

The ingress configuration is also enabled.

Two additionalAgents were added under agent; everything else under agent is still the default configuration and is listed only to make later adjustments easier.

persistence will use the default storage class. A default StorageClass was already configured in another chapter, so the field can simply be left empty.
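If a specific class should ever be pinned instead of relying on the cluster default, the override would look like the sketch below (the class name is hypothetical and must exist in the cluster):

```yaml
persistence:
  enabled: true
  storageClass: "nfs-client"  # hypothetical StorageClass name; must exist in the cluster
  accessMode: "ReadWriteOnce"
  size: "8Gi"
```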

Create a jenkins namespace and set it as the default namespace for the current context:

[root@jingmin-kube-archlinux jenkins]# cd ..
[root@jingmin-kube-archlinux k8s]# kubectl create ns jenkins
[root@jingmin-kube-archlinux k8s]# kubectl config set-context --current --namespace jenkins

Install Jenkins, using the modified values above to override the defaults:

[root@jingmin-kube-archlinux k8s]# helm install jenkins -f ./jenkins/my-override-values.yaml ./jenkins
NAME: jenkins
LAST DEPLOYED: Sun Aug 27 19:56:57 2023
NAMESPACE: jenkins
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
  kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
2. Visit https://jenkins.ole12138.cn

3. Login with the password from step 1 and the username: admin
4. Configure security realm and authorization strategy
5. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: https://jenkins.ole12138.cn/configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine

For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/


NOTE: Consider using a custom image with pre-installed plugins
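Step 1 of the notes prints the decoded password directly. If you instead read the Secret with kubectl get secret and jsonpath, the value comes back base64-encoded and must be decoded; a minimal local illustration (the encoded string below is just the adminPassword configured earlier):

```shell
# Secret values retrieved via `kubectl get secret ... -o jsonpath=...` are
# base64-encoded; decoding this example yields the adminPassword set above.
encoded="SmVua2luczEyMzQ1"
echo "$encoded" | base64 -d   # prints: Jenkins12345
```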

Wait for the pods to come up; check their status and keep waiting if they are not ready yet:

kubectl get all
kubectl describe pods/jenkins-0

Check that the Ingress was created correctly (the ingress-nginx IngressClass was configured as the cluster default earlier, so the resource gets an ingressClassName: nginx entry):

[root@jingmin-kube-archlinux k8s]# kubectl get ingress
NAME      CLASS   HOSTS                 ADDRESS         PORTS     AGE
jenkins   nginx   jenkins.ole12138.cn   192.168.1.100   80, 443   13m
[root@jingmin-kube-archlinux k8s]# kubectl get ingress -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      meta.helm.sh/release-name: jenkins
      meta.helm.sh/release-namespace: jenkins
    creationTimestamp: "2023-08-27T11:56:58Z"
    generation: 1
    labels:
      app.kubernetes.io/component: jenkins-controller
      app.kubernetes.io/instance: jenkins
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: jenkins
      helm.sh/chart: jenkins-4.6.1
    name: jenkins
    namespace: jenkins
    resourceVersion: "1161614"
    uid: 0568a3c7-599b-4d14-99c4-b3347a121ee7
  spec:
    ingressClassName: nginx
    rules:
    - host: jenkins.ole12138.cn
      http:
        paths:
        - backend:
            service:
              name: jenkins
              port:
                number: 8080
          pathType: ImplementationSpecific
    tls:
    - hosts:
      - jenkins.ole12138.cn
      secretName: jenkins-ole12138-cn-tls
  status:
    loadBalancer:
      ingress:
      - ip: 192.168.1.100
kind: List
metadata:
  resourceVersion: ""

Configure DNS resolution at the DNS provider, and set up forwarding on the front-end nginx.
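A sketch of the front-end nginx forwarding, assuming plain TCP pass-through of port 443 to the ingress controller address shown above (adjust to your actual topology):

```nginx
# Assumption: an external nginx forwards TLS traffic unchanged to ingress-nginx
stream {
    server {
        listen 443;
        proxy_pass 192.168.1.100:443;  # ingress address from `kubectl get ingress`
    }
}
```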

Open https://jenkins.ole12138.cn in a browser: it warns that the connection is not secure. Inspecting the certificate shows it is the Kubernetes default self-signed one.

Configure the TLS certificate

Next, configure cert-manager issuers so the certificate is issued by Let's Encrypt instead.

cert-manager was set up in an earlier chapter; create staging and production issuers (backed by Let's Encrypt) in the current namespace.

Change the email address in them: it is used to register the ACME account and receives notifications, for example when a certificate is about to expire.

[root@jingmin-kube-archlinux issuer]# vim staging-issuer.yaml 
[root@jingmin-kube-archlinux issuer]# cat staging-issuer.yaml 
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: 784319947@qq.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx

Deploy the staging issuer:

kubectl create -f ./staging-issuer.yaml 

Create the production issuer in a similar way:

wget https://raw.githubusercontent.com/cert-manager/website/master/content/docs/tutorials/acme/example/production-issuer.yaml

Again, change the email address in it to your own:

[root@jingmin-kube-archlinux issuer]# vim production-issuer.yaml 
[root@jingmin-kube-archlinux issuer]# cat production-issuer.yaml 
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: 784319947@qq.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx

Deploy it into the current namespace:

kubectl create -f ./production-issuer.yaml

Both issuers answer challenges from Let's Encrypt via HTTP-01.

kubectl describe issuer

The description of each issuer should contain the line Message: The ACME account was registered with the ACME server.

Edit the Ingress to

add the cert-manager issuer annotation cert-manager.io/issuer: letsencrypt-staging,

and add the tls section with its hosts and secretName (the secretName can be any name; cert-manager generates the Secret automatically).

[root@jingmin-kube-archlinux jenkins]# kubectl edit ingress jenkins 
ingress.networking.k8s.io/jenkins edited
[root@jingmin-kube-archlinux jenkins]# kubectl get ingress jenkins -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-staging
    meta.helm.sh/release-name: jenkins
    meta.helm.sh/release-namespace: jenkins
  creationTimestamp: "2023-08-27T11:56:58Z"
  generation: 1
  labels:
    app.kubernetes.io/component: jenkins-controller
    app.kubernetes.io/instance: jenkins
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jenkins
    helm.sh/chart: jenkins-4.6.1
  name: jenkins
  namespace: jenkins
  resourceVersion: "1164378"
  uid: 0568a3c7-599b-4d14-99c4-b3347a121ee7
spec:
  ingressClassName: nginx
  rules:
  - host: jenkins.ole12138.cn
    http:
      paths:
      - backend:
          service:
            name: jenkins
            port:
              number: 8080
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - jenkins.ole12138.cn
    secretName: jenkins-ole12138-cn-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.1.100

Open the ingress address https://jenkins.ole12138.cn/ over HTTPS in a browser. There is still a warning; inspect the certificate and its issuer. Although the browser reports it as invalid, it is no longer the Kubernetes default fake certificate, which means it is the Let's Encrypt staging certificate.

Now edit the issuer annotation on the Ingress again and switch to the production issuer. Note the changed line: cert-manager.io/issuer: letsencrypt-prod

[root@jingmin-kube-archlinux jenkins]# kubectl edit ingress jenkins 
ingress.networking.k8s.io/jenkins edited
[root@jingmin-kube-archlinux jenkins]# kubectl get ingress jenkins -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    meta.helm.sh/release-name: jenkins
    meta.helm.sh/release-namespace: jenkins
  creationTimestamp: "2023-08-27T11:56:58Z"
  generation: 1
  labels:
    app.kubernetes.io/component: jenkins-controller
    app.kubernetes.io/instance: jenkins
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jenkins
    helm.sh/chart: jenkins-4.6.1
  name: jenkins
  namespace: jenkins
  resourceVersion: "1164671"
  uid: 0568a3c7-599b-4d14-99c4-b3347a121ee7
spec:
  ingressClassName: nginx
  rules:
  - host: jenkins.ole12138.cn
    http:
      paths:
      - backend:
          service:
            name: jenkins
            port:
              number: 8080
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - jenkins.ole12138.cn
    secretName: jenkins-ole12138-cn-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.1.100

Visit the Jenkins ingress address https://jenkins.ole12138.cn/ over HTTPS once more. If everything works, the page loads without any warning. Click the padlock icon in the address bar and inspect the certificate to confirm it was issued by Let's Encrypt.

Configuring a proxy for Jenkins

Reference: https://blog.csdn.net/luChenH/article/details/107693990

Manage Jenkins → Plugin Manager → Advanced → Proxy settings

Using Docker in Jenkins

Reference: https://rokpoto.com/jenkins-docker-in-docker-agent/

The Jenkins architecture consists of one controller and multiple agents.

  • The controller provides the control plane and the web UI.
  • Agents are the nodes that execute jobs. Different agents can carry different tool environments, such as Node.js, Python, Docker, or Maven, and different labels, so a suitable agent can be selected to run each job.

Jenkins supports manually added permanent nodes/agents as well as agents dynamically provisioned in Kubernetes or another cloud.

Installing (or attaching) tool environments such as Node.js, Python, Docker, or Maven on Jenkins agents makes them able to compile and build the corresponding kinds of software packages.

  • Manually added agents are usually physical machines; install the required software directly on them.

  • When a Kubernetes pod acts as the agent, images and parameters are configured through a podTemplate; Jenkins creates such pods on demand when executing jobs and deletes them when done.

Dynamically provisioned Docker agents in Kubernetes

Reference: https://plugins.jenkins.io/kubernetes/

Jenkins has a Kubernetes plugin.

The official Helm chart installs this plugin by default.

Dashboard → Manage Jenkins → Clouds → kubernetes.

The default configuration connects to the current cluster.

When Jenkins needs to execute a job, it creates a pod in Kubernetes from a template to act as the agent and run the job:

  • pods can be created from templates defined in the Helm chart values at install time,
  • or from an ad-hoc template supplied inline in the Jenkinsfile.
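A minimal sketch of the second option, with the pod template inlined in the Jenkinsfile (the image and container name here are illustrative, not from this setup):

```groovy
pipeline {
    agent {
        kubernetes {
            // Ad-hoc pod template carried by the Jenkinsfile itself
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: maven
      image: maven:3-eclipse-temurin-17  # illustrative build image
      command: ["sleep"]
      args: ["infinity"]
'''
            defaultContainer 'maven'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -version'
            }
        }
    }
}
```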

Docker has a client-server architecture, split into the Docker daemon and the Docker client.

Here we focus on building Docker images from an agent running inside Kubernetes.

A pod in Kubernetes usually runs on Docker itself (the outer Docker daemon and client; it may also be another OCI runtime), yet the pod needs its own Docker environment (an inner Docker daemon/client, to run docker pull, docker build, docker push, and so on). There are generally two ways to achieve this:

  • Mount the host machine's Docker socket, so the inner and outer clients share one daemon. Resource isolation and security are a concern, though.
  • Docker in Docker (DinD): the inner Docker runs its own separate daemon.
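For the first approach, the chart's agent.volumes setting (shown commented out in the values file above) can mount the node's Docker socket; a sketch, with the usual isolation caveats:

```yaml
# Sketch: share the node's Docker daemon with the agent.
# Builds then run with the node's Docker privileges; isolation is weak.
agent:
  volumes:
    - type: HostPath
      hostPath: /var/run/docker.sock
      mountPath: /var/run/docker.sock
```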

Here we look further at the Docker-in-Docker setup: when configuring the podTemplate/agent, use a docker:dind image running as a sidecar.

Main reference: https://rokpoto.com/jenkins-docker-in-docker-agent/

Secondary reference: https://www.chenshaowen.com/blog/how-to-use-docker-in-docker.html

Reference: https://github.com/w7089/jenkins-dind-demo/blob/main/values.yaml

Reference: https://github.com/docker-archive/classicswarm/issues/1259

Reference: https://github.com/docker/for-linux/issues/1313

Reference: https://github.com/docker-library/docker/blob/9728dce92752348ac2623bcf96436f1a89e15dd3/20.10/docker-entrypoint.sh#L15-L20

Reference: https://devops.stackexchange.com/questions/15418/running-builds-requiring-docker-daemon-in-jenkins-installed-using-helm-and-runni/17053

Still based on the Helm chart installed earlier.

The original Helm values are adjusted as follows.

[root@jingmin-kube-archlinux k8s]# cat jenkins/my-override-values.yaml 
controller:
  # When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
  # If you disable the non-Jenkins identity store and instead use the Jenkins internal one,
  # you should revert controller.adminUser to your preferred admin user:
  adminUser: "admin"
  # adminPassword: <defaults to random>
  adminPassword: Jenkins12345
  admin:
    existingSecret: ""
    userKey: jenkins-admin-user
    passwordKey: jenkins-admin-password
  # For minikube, set this to NodePort, elsewhere use LoadBalancer
  # Use ClusterIP if your setup includes ingress controller
  serviceType: ClusterIP


  # Name of default cloud configuration.
  cloudName: "kubernetes"


  ingress:
    #enabled: false
    enabled: true
    # Override for the default paths that map requests to the backend
    paths: []
    # - backend:
    #     serviceName: ssl-redirect
    #     servicePort: use-annotation
    # - backend:
    #     serviceName: >-
    #       {{ template "jenkins.fullname" . }}
    #     # Don't use string here, use only integer value!
    #     servicePort: 8080
    # For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
    # For Kubernetes v1.19+, use 'networking.k8s.io/v1'
    #apiVersion: "extensions/v1beta1"
    apiVersion: "networking.k8s.io/v1"
    labels: {}
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Set this path to jenkinsUriPrefix above or use annotations to rewrite path
    # path: "/jenkins"
    # configures the hostname e.g. jenkins.example.com
    #hostName:
    hostName: jenkins.ole12138.cn
    #tls:
    # - secretName: jenkins.cluster.local
    #   hosts:
    #     - jenkins.cluster.local
    tls:
     - secretName: jenkins-ole12138-cn-tls
       hosts:
         - jenkins.ole12138.cn


agent:
  enabled: true
  resources:
    requests:
      cpu: "512m"
      memory: "512Mi"
    limits:
      cpu: "512m"
      memory: "512Mi"
  # You can define the volumes that you want to mount for this container
  # Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, PVC, Secret
  # Configure the attributes as they appear in the corresponding Java class for that type
  # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
  volumes: []
  # - type: ConfigMap
  #   configMapName: myconfigmap
  #   mountPath: /var/myapp/myconfigmap
  # - type: EmptyDir
  #   mountPath: /var/myapp/myemptydir
  #   memory: false
  # - type: HostPath
  #   hostPath: /var/lib/containers
  #   mountPath: /var/myapp/myhostpath
  # - type: Nfs
  #   mountPath: /var/myapp/mynfs
  #   readOnly: false
  #   serverAddress: "192.0.2.0"
  #   serverPath: /var/lib/containers
  # - type: PVC
  #   claimName: mypvc
  #   mountPath: /var/myapp/mypvc
  #   readOnly: false
  # - type: Secret
  #   defaultMode: "600"
  #   mountPath: /var/myapp/mysecret
  #   secretName: mysecret
  # Pod-wide environment, these vars are visible to any container in the agent pod

  # You can define the workspaceVolume that you want to mount for this container
  # Allowed types are: DynamicPVC, EmptyDir, HostPath, Nfs, PVC
  # Configure the attributes as they appear in the corresponding Java class for that type
  # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes/workspace
  workspaceVolume: {}
  ## DynamicPVC example
  # type: DynamicPVC
  # configMapName: myconfigmap
  ## EmptyDir example
  # type: EmptyDir
  # memory: false
  ## HostPath example
  # type: HostPath
  # hostPath: /var/lib/containers
  ## NFS example
  # type: Nfs
  # readOnly: false
  # serverAddress: "192.0.2.0"
  # serverPath: /var/lib/containers
  ## PVC example
  # type: PVC
  # claimName: mypvc
  # readOnly: false
  #
  # Pod-wide environment, these vars are visible to any container in the agent pod
  envVars: []
  # - name: PATH
  #   value: /usr/local/bin
  # Mount a secret as environment variable
  secretEnvVars: []
  # - key: PATH
  #   optional: false # default: false
  #   secretKey: MY-K8S-PATH
  #   secretName: my-k8s-secret
  nodeSelector: {}
  # Key Value selectors. Ex:
  # jenkins-agent: v1

  # Add additional containers to the agents.
  # Containers specified here are added to all agents. Set key empty to remove container from additional agents.
  additionalContainers: []
  #  - sideContainerName: dind
  #    image: docker
  #    tag: dind
  #    command: dockerd-entrypoint.sh
  #    args: ""
  #    privileged: true
  #    resources:
  #      requests:
  #        cpu: 500m
  #        memory: 1Gi
  #      limits:
  #        cpu: 1
  #        memory: 2Gi

  # Disable the default Jenkins Agent configuration.
  # Useful when configuring agents only with the podTemplates value, since the default podTemplate populated by values mentioned above will be excluded in the rendered template.
  disableDefaultAgent: false

  # Below is the implementation of custom pod templates for the default configured kubernetes cloud.
  # Add a key under podTemplates for each pod template. Each key (prior to | character) is just a label, and can be any value.
  # Keys are only used to give the pod template a meaningful name.  The only restriction is they may only contain RFC 1123 \ DNS label
  # characters: lowercase letters, numbers, and hyphens. Each pod template can contain multiple containers.
  # For this pod templates configuration to be loaded the following values must be set:
  # controller.JCasC.defaultConfig: true
  # Best reference is https://<jenkins_url>/configuration-as-code/reference#Cloud-kubernetes. The example below creates a python pod template.
  podTemplates: {}
  #  python: |
  #    - name: python
  #      label: jenkins-python
  #      serviceAccount: jenkins
  #      containers:
  #        - name: python
  #          image: python:3
  #          command: "/bin/sh -c"
  #          args: "cat"
  #          ttyEnabled: true
  #          privileged: true
  #          resourceRequestCpu: "400m"
  #          resourceRequestMemory: "512Mi"
  #          resourceLimitCpu: "1"
  #          resourceLimitMemory: "1024Mi"

# Here you can add additional agents
# They inherit all values from `agent` so you only need to specify values which differ
#additionalAgents: {}
additionalAgents:
  maven:
    podName: maven
    customJenkinsLabels: maven
    # An example of overriding the jnlp container
    # sideContainerName: jnlp
    image: jenkins/jnlp-agent-maven
    tag: latest
  python:
    podName: python
    customJenkinsLabels: python
    sideContainerName: python
    image: python
    tag: "3"
    command: "/bin/sh -c"
    args: "cat"
    TTYEnabled: true
  dind:
    podName: dind-agent
    customJenkinsLabels: dind-agent
    image: docker.io/warrior7089/dind-client-jenkins-agent
    tag: latest
    envVars:
     - name: DOCKER_HOST
       value: "tcp://localhost:2375"
    alwaysPullImage: true
    yamlTemplate:  |- 
     spec: 
         containers:
           - name: dind-daemon 
             image: docker:20.10-dind
             securityContext: 
               privileged: true
             env: 
               - name: DOCKER_TLS_VERIFY
                 value: ""
               - name: DOCKER_TLS_CERTDIR
                 value: ""

persistence:
  enabled: true
  ## jenkins data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass:
  annotations: {}
  labels: {}
  accessMode: "ReadWriteOnce"
  size: "8Gi"

This adjusts the additionalAgents configuration, which effectively adds a few static podTemplates. For details see: https://rokpoto.com/jenkins-docker-in-docker-agent/

Then upgrade the Helm release:

helm upgrade jenkins -f ./jenkins/my-override-values.yaml ./jenkins

If the change does not take effect, delete the corresponding StatefulSet and upgrade the release again (rest assured, this does not wipe data; the PVC remains):

kubectl delete sts jenkins
helm upgrade jenkins -f ./jenkins/my-override-values.yaml ./jenkins

A Helm-installed Jenkins already has the Kubernetes plugin installed and enabled by default.

When writing a Jenkinsfile, choose a suitable podTemplate. (Note that the agent below references a static podTemplate by name.)

pipeline {
    agent {
        kubernetes {
            inheritFrom 'dind-agent'
        }
    }
    stages {
        stage('Checkout') {
            steps {
                echo "git checkout, project: ${projectName}  ${build_tag}"
                checkout([$class: 'GitSCM', branches: [[name: '$branchName']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'git_cred_id_xxx', url: 'https://xxx.git']]]);
            }
        }
        stage('Docker build') {
            steps {
                echo 'Building the Docker image'
                withCredentials([usernamePassword(credentialsId: 'harbor', passwordVariable: 'dockerPassword', usernameVariable: 'dockerUser')]) {
                    sh "docker login -u $dockerUser -p $dockerPassword xxx.io"
                    dir('java') {
                        sh "docker build --no-cache -t xxx.io/yyy/zzz:${build_tag} ."
                    }
                }
            }
        }
        stage('Docker push') {
            steps {
                echo 'Pushing the custom image'
                sh "docker push xxx.io/yyy/zzz:${build_tag}"
            }
        }
    }
}

The DinD MTU problem

DinD (Docker in Docker) can run into MTU problems.

The direct symptom: inside the inner Docker environment (a docker:dind image as the daemon and a docker:latest image as the client, combined into one pod to provide an isolated Docker environment), docker build may hang at some step while pulling an image or fetching a file.
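The hang typically happens when the inner dockerd comes up with Docker's default MTU of 1500 while the pod's own interface has a smaller one (overlay networks such as VXLAN often use 1450 or less), so large packets are dropped. A first diagnostic step, runnable in any of the containers, is to compare interface MTUs:

```shell
# List every network interface with its MTU; compare eth0 (pod network)
# against what the inner dockerd uses (default 1500).
for dev in /sys/class/net/*; do
  printf '%-10s mtu=%s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```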

First, reproduce it:

Reference: https://www.chenshaowen.com/blog/how-to-use-docker-in-docker.html#4-kubernetes-环境下的演示

Put a DinD pod into a separate namespace in the cluster.

Create a dind.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dind
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dind
  template:
    metadata:
      labels:
        app: dind
    spec:
      containers:
        - name: dockerd
          image: 'docker:dind'
          env:
            - name: DOCKER_TLS_CERTDIR
              value: ""
          securityContext:
            privileged: true
        - name: docker-cli
          image: 'docker:latest'
          env:
          - name: DOCKER_HOST
            value: tcp://127.0.0.1:2375
          command: ["/bin/sh"]
          args: ["-c", "sleep 86400;"]

Apply it:

kubectl apply -f ./dind.yaml

Check the current namespace:

[root@jingmin-kube-archlinux ~]# kubectl get all
NAME                         READY   STATUS      RESTARTS   AGE
pod/dind-56b746bdc5-hc77x    2/2     Running     0          158m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dind    1/1     1            1           158m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/dind-56b746bdc5    1         1         1       158m

First prepare a Dockerfile and the accompanying script:

[root@jingmin-kube-archlinux ~]# cd /tmp/tmp1
[root@jingmin-kube-archlinux tmp1]# tree
.
└── java
    ├── app.sh
    └── Dockerfile
[root@jingmin-kube-archlinux tmp1]# cat java/app.sh 
...
[root@jingmin-kube-archlinux tmp1]# cat java/Dockerfile 
FROM openjdk:8-jre-alpine

ENV TZ="Asia/Shanghai"

ADD  ./app.sh  /

RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&\
    apk add --update ttf-dejavu curl tzdata &&\
    rm -rf /var/cache/apk/* &&\
    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' > /etc/timezone &&\
    chmod 777  /app.sh
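The sed line in the Dockerfile only rewrites the apk mirror hostname; its effect can be checked locally on a sample repositories file:

```shell
# Apply the same substitution the Dockerfile uses to a sample repositories file
printf 'http://dl-cdn.alpinelinux.org/alpine/v3.9/main\n' > /tmp/repositories
sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /tmp/repositories
cat /tmp/repositories   # prints: http://mirrors.aliyun.com/alpine/v3.9/main
```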

Then copy the files into one of the containers in the pod:

[root@jingmin-kube-archlinux tmp1]# kubectl cp ./java/ -c docker-cli dind-56b746bdc5-hc77x:tmp

Enter the container in the pod and build:

[root@jingmin-kube-archlinux tmp1]# kubectl exec dind-56b746bdc5-hc77x -c docker-cli -it -- sh
/ # cd tmp/java
/tmp/java # ls
Dockerfile  app.sh
/tmp/java # docker build  --no-cache -t harbor.ole12138.cn/wy_spc/java:0.0.2 .
[+] Building 613.2s (7/7) FINISHED      docker:default
 => [internal] load .dockerignore    0.0s
 => => transferring context: 2B      0.0s
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 408B              0.0s
 => [internal] load metadata for docker.io/library/openjdk:8-jre-alpine                   3.2s
 => [internal] load build context    0.0s
 => => transferring context: 629B    0.0s
 => [1/3] FROM docker.io/library/openjdk:8-jre-alpine@sha256:f362b165b870ef129cbe730f29065ff37399c0aa8bcab3e44b51c302938c9193     8.6s
 => => resolve docker.io/library/openjdk:8-jre-alpine@sha256:f362b165b870ef129cbe730f29065ff37399c0aa8bcab3e44b51c302938c9193     0.0s
 => => sha256:b2ad93b079b1495488cc01375de799c402d45086015a120c105ea00e1be0fd52 947B / 947B    0.0s
 => => sha256:f7a292bbb70c4ce57f7704cc03eb09e299de9da19013b084f138154421918cb4 3.42kB / 3.42kB                                    0.0s
 => => sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 2.76MB / 2.76MB                                    2.2s
 => => sha256:f910a506b6cb1dbec766725d70356f695ae2bf2bea6224dbe8c7c6ad4f3664a2 238B / 238B    0.4s
 => => sha256:b6abafe80f63b02535fc111df2ed6b3c728469679ab654e03e482b6f347c9639 54.94MB / 54.94MB                                  7.2s
 => => sha256:f362b165b870ef129cbe730f29065ff37399c0aa8bcab3e44b51c302938c9193 1.64kB / 1.64kB                                    0.0s
 => => extracting sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10     0.1s
 => => extracting sha256:f910a506b6cb1dbec766725d70356f695ae2bf2bea6224dbe8c7c6ad4f3664a2     0.0s
 => => extracting sha256:b6abafe80f63b02535fc111df2ed6b3c728469679ab654e03e482b6f347c9639     1.2s
 => [2/3] ADD  ./app.sh  /           0.0s
 => ERROR [3/3] RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&    apk add --update ttf-dejavu curl tzdata &&    rm -rf /var/cache  601.3s
------      
 > [3/3] RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&    apk add --update ttf-dejavu curl tzdata &&    rm -rf /var/cache/apk/* &&    cp /usr/share/zoneinfo/Asia/Shanghai /etc/lcoaltime &&     echo 'Asia/Shanghai' > /etc/timezone &&    chmod 777  /app.sh:                 
0.224 fetch http://mirrors.aliyun.com/alpine/v3.9/main/x86_64/APKINDEX.tar.gz                  
300.4 ERROR: http://mirrors.aliyun.com/alpine/v3.9/main: network connection aborted
300.4 WARNING: Ignoring APKINDEX.2ac53f5f.tar.gz: No such file or directory
300.4 fetch http://mirrors.aliyun.com/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
601.3 ERROR: http://mirrors.aliyun.com/alpine/v3.9/community: network connection aborted
601.3 WARNING: Ignoring APKINDEX.96fa836e.tar.gz: No such file or directory
601.3 ERROR: unsatisfiable constraints:
601.3   curl (missing):
601.3     required by: world[curl]
601.3   ttf-dejavu (missing):
601.3     required by: world[ttf-dejavu]
601.3   tzdata (missing):
601.3     required by: world[tzdata]
------
Dockerfile:7
--------------------
   6 |     
   7 | >>> RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&\
   8 | >>>     apk add --update ttf-dejavu curl tzdata &&\
   9 | >>>     rm -rf /var/cache/apk/* &&\
  10 | >>>     cp /usr/share/zoneinfo/Asia/Shanghai /etc/lcoaltime &&\
  11 | >>>  echo 'Asia/Shanghai' > /etc/timezone &&\
  12 | >>>     chmod 777  /app.sh
--------------------
ERROR: failed to solve: process "/bin/sh -c sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories &&    apk add --update ttf-dejavu curl tzdata &&    rm -rf /var/cache/apk/* &&    cp /usr/share/zoneinfo/Asia/Shanghai /etc/lcoaltime &&\techo 'Asia/Shanghai' > /etc/timezone &&    chmod 777  /app.sh" did not complete successfully: exit code: 3

As shown above, the network connection timed out and the build failed.

At first I suspected DNS or some other cause, but ping worked fine:

/tmp/java # ping mirrors.aliyun.com
PING mirrors.aliyun.com (150.138.40.142): 56 data bytes
64 bytes from 150.138.40.142: seq=0 ttl=55 time=25.330 ms
64 bytes from 150.138.40.142: seq=1 ttl=55 time=25.230 ms
^C
--- mirrors.aliyun.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 25.230/25.280/25.330 ms

I then wondered whether the mirror now requires HTTPS and no longer serves HTTP, but building directly on the host (outside the nested Docker) worked without issue, so that was not it either.

Reference: https://abhijeet-kamble619.medium.com/fix-for-docker-inside-docker-networking-issue-on-kubernetes-5de64b2c42d4

Reference: https://github.com/docker-library/docker/issues/103

Primary reference: https://www.maoxuner.cn/post/2020/03/host-container-mtu/

Reference: https://www.v2ex.com/t/380044

Reference: https://mlohr.com/docker-mtu/

Secondary reference: https://blog.zespre.com/dind-mtu-size-matters.html

Reference: https://github.com/moby/moby/issues/36659

Reference: https://hub.docker.com/_/docker?tab=description

After much searching, I found that DinD (Docker in Docker) setups can suffer from MTU mismatch problems. Combined with an MTU issue I had previously hit (and fixed) on my home soft router, I was fairly confident this was the cause.

For example:

When the host's egress NIC has an MTU below 1500 (say 1450) while the outer docker0 bridge defaults to 1500, TCP connections can presumably still negotiate an appropriate MSS, so other containers and pods on the host work normally.

The MTU of containers inside the inner (DinD) Docker, however, is derived from the outer Docker's MTU with no negotiation: it is hard-coded at 1500. Hence the symptom above, where small transfers succeed but large ones fail.
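One way to confirm a path-MTU problem (a sketch; the 1450 value is illustrative, taken from the scenario described above) is to send non-fragmentable ICMP packets. The ICMP payload size for a given probe MTU is the MTU minus 28 bytes (20-byte IP header + 8-byte ICMP header):

```shell
# Payload size to probe a given MTU: subtract 20 (IP header) + 8 (ICMP header).
MTU=1450
PAYLOAD=$((MTU - 28))
echo "probe payload for MTU $MTU: $PAYLOAD bytes"

# With "do not fragment" set, a packet larger than the path MTU is rejected
# instead of being fragmented. Run inside the failing container, e.g.:
#   ping -c 2 -M do -s $PAYLOAD mirrors.aliyun.com          # should succeed
#   ping -c 2 -M do -s $((1500 - 28)) mirrors.aliyun.com    # fails if path MTU < 1500
```

If the larger probe fails while the smaller one succeeds, the path MTU is indeed below 1500, matching the small-transfer-succeeds/large-transfer-fails behavior seen above.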

So let's adjust the original Helm configuration:

[root@jingmin-kube-archlinux k8s]# cat jenkins/my-override-values.yaml 
controller:
  # When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
  # If you disable the non-Jenkins identity store and instead use the Jenkins internal one,
  # you should revert controller.adminUser to your preferred admin user:
  adminUser: "admin"
  # adminPassword: <defaults to random>
  adminPassword: Jenkins12345
  admin:
    existingSecret: ""
    userKey: jenkins-admin-user
    passwordKey: jenkins-admin-password
  # For minikube, set this to NodePort, elsewhere use LoadBalancer
  # Use ClusterIP if your setup includes ingress controller
  serviceType: ClusterIP


  # Name of default cloud configuration.
  cloudName: "kubernetes"


  ingress:
    #enabled: false
    enabled: true
    # Override for the default paths that map requests to the backend
    paths: []
    # - backend:
    #     serviceName: ssl-redirect
    #     servicePort: use-annotation
    # - backend:
    #     serviceName: >-
    #       {{ template "jenkins.fullname" . }}
    #     # Don't use string here, use only integer value!
    #     servicePort: 8080
    # For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
    # For Kubernetes v1.19+, use 'networking.k8s.io/v1'
    #apiVersion: "extensions/v1beta1"
    apiVersion: "networking.k8s.io/v1"
    labels: {}
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # Set this path to jenkinsUriPrefix above or use annotations to rewrite path
    # path: "/jenkins"
    # configures the hostname e.g. jenkins.example.com
    #hostName:
    hostName: jenkins.ole12138.cn
    #tls:
    # - secretName: jenkins.cluster.local
    #   hosts:
    #     - jenkins.cluster.local
    tls:
     - secretName: jenkins-ole12138-cn-tls
       hosts:
         - jenkins.ole12138.cn


agent:
  enabled: true
  resources:
    requests:
      cpu: "512m"
      memory: "512Mi"
    limits:
      cpu: "512m"
      memory: "512Mi"
  # You can define the volumes that you want to mount for this container
  # Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, PVC, Secret
  # Configure the attributes as they appear in the corresponding Java class for that type
  # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
  volumes: []
  # - type: ConfigMap
  #   configMapName: myconfigmap
  #   mountPath: /var/myapp/myconfigmap
  # - type: EmptyDir
  #   mountPath: /var/myapp/myemptydir
  #   memory: false
  # - type: HostPath
  #   hostPath: /var/lib/containers
  #   mountPath: /var/myapp/myhostpath
  # - type: Nfs
  #   mountPath: /var/myapp/mynfs
  #   readOnly: false
  #   serverAddress: "192.0.2.0"
  #   serverPath: /var/lib/containers
  # - type: PVC
  #   claimName: mypvc
  #   mountPath: /var/myapp/mypvc
  #   readOnly: false
  # - type: Secret
  #   defaultMode: "600"
  #   mountPath: /var/myapp/mysecret
  #   secretName: mysecret
  # Pod-wide environment, these vars are visible to any container in the agent pod

  # You can define the workspaceVolume that you want to mount for this container
  # Allowed types are: DynamicPVC, EmptyDir, HostPath, Nfs, PVC
  # Configure the attributes as they appear in the corresponding Java class for that type
  # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes/workspace
  workspaceVolume: {}
  ## DynamicPVC example
  # type: DynamicPVC
  # configMapName: myconfigmap
  ## EmptyDir example
  # type: EmptyDir
  # memory: false
  ## HostPath example
  # type: HostPath
  # hostPath: /var/lib/containers
  ## NFS example
  # type: Nfs
  # readOnly: false
  # serverAddress: "192.0.2.0"
  # serverPath: /var/lib/containers
  ## PVC example
  # type: PVC
  # claimName: mypvc
  # readOnly: false
  #
  # Pod-wide environment, these vars are visible to any container in the agent pod
  envVars: []
  # - name: PATH
  #   value: /usr/local/bin
  # Mount a secret as environment variable
  secretEnvVars: []
  # - key: PATH
  #   optional: false # default: false
  #   secretKey: MY-K8S-PATH
  #   secretName: my-k8s-secret
  nodeSelector: {}
  # Key Value selectors. Ex:
  # jenkins-agent: v1

  # Add additional containers to the agents.
  # Containers specified here are added to all agents. Set key empty to remove container from additional agents.
  additionalContainers: []
  #  - sideContainerName: dind
  #    image: docker
  #    tag: dind
  #    command: dockerd-entrypoint.sh
  #    args: ""
  #    privileged: true
  #    resources:
  #      requests:
  #        cpu: 500m
  #        memory: 1Gi
  #      limits:
  #        cpu: 1
  #        memory: 2Gi

  # Disable the default Jenkins Agent configuration.
  # Useful when configuring agents only with the podTemplates value, since the default podTemplate populated by values mentioned above will be excluded in the rendered template.
  disableDefaultAgent: false

  # Below is the implementation of custom pod templates for the default configured kubernetes cloud.
  # Add a key under podTemplates for each pod template. Each key (prior to | character) is just a label, and can be any value.
  # Keys are only used to give the pod template a meaningful name.  The only restriction is they may only contain RFC 1123 \ DNS label
  # characters: lowercase letters, numbers, and hyphens. Each pod template can contain multiple containers.
  # For this pod templates configuration to be loaded the following values must be set:
  # controller.JCasC.defaultConfig: true
  # Best reference is https://<jenkins_url>/configuration-as-code/reference#Cloud-kubernetes. The example below creates a python pod template.
  podTemplates: {}
  #  python: |
  #    - name: python
  #      label: jenkins-python
  #      serviceAccount: jenkins
  #      containers:
  #        - name: python
  #          image: python:3
  #          command: "/bin/sh -c"
  #          args: "cat"
  #          ttyEnabled: true
  #          privileged: true
  #          resourceRequestCpu: "400m"
  #          resourceRequestMemory: "512Mi"
  #          resourceLimitCpu: "1"
  #          resourceLimitMemory: "1024Mi"

# Here you can add additional agents
# They inherit all values from `agent` so you only need to specify values which differ
#additionalAgents: {}
additionalAgents:
  maven:
    podName: maven
    customJenkinsLabels: maven
    # An example of overriding the jnlp container
    # sideContainerName: jnlp
    image: jenkins/jnlp-agent-maven
    tag: latest
  python:
    podName: python
    customJenkinsLabels: python
    sideContainerName: python
    image: python
    tag: "3"
    command: "/bin/sh -c"
    args: "cat"
    TTYEnabled: true
  dind:
    podName: dind-agent
    customJenkinsLabels: dind-agent
    image: docker.io/warrior7089/dind-client-jenkins-agent
    tag: latest
    envVars:
     - name: DOCKER_HOST
       value: "tcp://localhost:2375"
    alwaysPullImage: true
    yamlTemplate: |-
      spec:
        containers:
          - name: dind-daemon
            #image: docker:20.10-dind
            image: docker:dind
            args: ["--mtu=1350"]
            securityContext:
              privileged: true
            env:
              - name: DOCKER_TLS_VERIFY
                value: ""
              - name: DOCKER_TLS_CERTDIR
                value: ""

persistence:
  enabled: true
  ## jenkins data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass:
  annotations: {}
  labels: {}
  accessMode: "ReadWriteOnce"
  size: "8Gi"

The only change here is to the dind template under `additionalAgents`: a single added line, `args: ["--mtu=1350"]`. The chosen MTU must be smaller than the smallest MTU anywhere along the path.
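An alternative (untested here, but based on dockerd's documented `mtu` option) is to set the MTU in the inner daemon's config file, `/etc/docker/daemon.json`, instead of on the command line:

```json
{
  "mtu": 1350
}
```

Note this only affects the default bridge network; for the dind sidecar, the `--mtu` argument shown above is simpler since it needs no extra file mounted into the container.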

Then upgrade the Helm release:

helm upgrade jenkins -f ./jenkins/my-override-values.yaml ./jenkins

If the change does not take effect, delete the corresponding StatefulSet and upgrade the release again (rest assured, this does not wipe the data; the PVC remains):

kubectl delete sts jenkins
helm upgrade jenkins -f ./jenkins/my-override-values.yaml ./jenkins

On `ENTRYPOINT` and `CMD` in Docker:

Reference: https://stackoverflow.com/questions/21553353/what-is-the-difference-between-cmd-and-entrypoint-in-a-dockerfile

Reference: https://www.bmc.com/blogs/docker-cmd-vs-entrypoint/

In Kubernetes, `command` and `args` correspond to Docker's `ENTRYPOINT` and `CMD` respectively, and are used to override the image's entrypoint and cmd.

Reference: https://www.jianshu.com/p/23350af92768

Reference: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
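A minimal sketch of this mapping (the pod and container names are illustrative): in the spec below, `command` replaces the image's `ENTRYPOINT` and `args` replaces its `CMD`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cmd-demo                       # illustrative name
spec:
  containers:
    - name: demo
      image: busybox
      command: ["/bin/sh", "-c"]       # overrides the image's ENTRYPOINT
      args: ["echo hello from args"]   # overrides the image's CMD
  restartPolicy: Never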

Using a physical machine as a Docker agent

Just install the agent software and a Docker environment on the machine.

Skipped; not tried.

Using the Docker plugin

Reference: https://plugins.jenkins.io/docker-plugin/

It provides the `docker` command, but a Docker daemon still has to be installed somewhere. Skipped; not tried.

Using the Docker Pipeline plugin

Reference: https://www.jenkins.io/doc/book/pipeline/docker/

Reference: https://www.jenkins.io/zh/doc/book/pipeline/docker/

Reference: https://plugins.jenkins.io/docker-workflow/

Skipped for now; I am still using native docker commands directly in the agent rather than this plugin.

Appendix: Jenkins pipeline syntax

For the syntax, see:

Reference: https://www.jenkins.io/doc/book/pipeline/

Reference: https://www.jenkins.io/doc/book/pipeline/#scripted-pipeline-fundamentals

Reference: https://www.jenkins.io/zh/doc/book/pipeline/syntax/#%E8%84%9A%E6%9C%AC%E5%8C%96%E6%B5%81%E6%B0%B4%E7%BA%BF
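For reference, a minimal declarative pipeline (an illustrative sketch, not part of this deployment) that targets the `dind-agent` label configured in the values above and builds the image from the earlier example:

```groovy
pipeline {
    agent { label 'dind-agent' }   // matches customJenkinsLabels in the dind agent template
    stages {
        stage('Build image') {
            steps {
                // DOCKER_HOST=tcp://localhost:2375 is set via envVars in that template,
                // so docker talks to the dind sidecar daemon.
                sh 'docker build -t harbor.ole12138.cn/wy_spc/java:0.0.2 .'
            }
        }
    }
}
```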

