Package and deploy the service-job-executor image
Back up the service-job-executor repository
Create a local branch
git checkout develop
git checkout -b jtest
Create a remote repository
Create a private repository named service-job-executor on Gitee
Push the branches
git remote add jingmin https://gitee.com/ole12138/service-job-executor.git
# set the upstream for the jtest branch and push
git push -u jingmin jtest
git push -u jingmin master:master
git push -u jingmin develop:develop
# push all (local) branches to the new repository
#git push --all jingmin
# push all remote-tracking branches to the new repository (this also creates a stray HEAD branch)
git push jingmin +refs/remotes/origin/*:refs/heads/*
# delete the remote HEAD branch
git push jingmin --delete HEAD
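The branch-mirroring steps above can be rehearsed offline. A minimal sketch using two throwaway local repositories in place of the Gitee remote (all paths are temporary; `git init -b` needs git >= 2.28; the HEAD deletion is written fully qualified here to avoid ambiguity):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
cd "$tmp"

# a stand-in "origin" with master and develop branches
git init -q -b master origin-repo
git -C origin-repo -c user.email=a@b -c user.name=t commit -q --allow-empty -m init
git -C origin-repo branch develop

# clone it, then add an empty bare repo as the new remote "jingmin"
git clone -q origin-repo work
git init -q --bare mirror.git
git -C work remote add jingmin "$tmp/mirror.git"

# push every origin/* tracking ref as a branch on the new remote;
# origin/HEAD comes along too, which is the stray HEAD branch mentioned above
git -C work push -q jingmin '+refs/remotes/origin/*:refs/heads/*'

# delete the stray HEAD branch
git -C work push -q jingmin --delete refs/heads/HEAD

git -C mirror.git for-each-ref --format='%(refname:short)' refs/heads
```

After the run, the mirror holds exactly the develop and master branches, with no leftover HEAD branch.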
Adjust the configuration
Adjust the Spring Cloud bootstrap.properties
# service name
spring.application.name=SERVICE-JOB-EXECUTOR
# port number
server.port=5017
#spring.cloud.nacos.server-addr=nacos.c253e0c129d8f453a82dfb1ae4ba19613.cn-shenzhen.alicontainer.com:80
spring.cloud.nacos.server-addr=nacos-headless.nacos.svc.cluster.local:8848
spring.cloud.nacos.discovery.group=woyun
spring.cloud.nacos.discovery.namespace=765fa359-2e1b-41f3-a4b2-17c3856764fe
spring.cloud.nacos.config.namespace=765fa359-2e1b-41f3-a4b2-17c3856764fe
spring.cloud.nacos.config.group=woyun
spring.cloud.nacos.config.shared-configs[0].data-id=application.properties
spring.cloud.nacos.config.shared-configs[0].group=woyun
spring.cloud.nacos.config.shared-configs[1].data-id=redis.properties
spring.cloud.nacos.config.shared-configs[1].group=woyun
# Each node needs a distinct workerId; datacenterId depends on the cluster. That is: different nodes of the same service must use different workerIds, and nodes of different services must not share a workerId either.
application.workerId=17
application.datacenterId=0
### xxl-job admin address list, such as "http://address" or "http://address01,http://address02"
### Admin (scheduler) root address [optional]: comma-separate multiple addresses when the admin is clustered. The executor uses this address for heartbeat registration and result callbacks; leave it empty to disable auto-registration.
#xxl.job.admin.addresses=http://k8sjob.woyunsoft.com/job-admin
xxl.job.admin.addresses=http://service-job-admin:5019/job-admin
#xxl.job.admin.addresses=http://127.0.0.1:5019/job-admin
#xxl.job.admin.addresses=
### xxl-job executor address
xxl.job.executor.appname=renewal-job-executor
xxl.job.executor.address=
xxl.job.executor.ip=
xxl.job.executor.port=9996
### xxl-job, access token
xxl.job.accessToken=
### xxl-job log path
xxl.job.executor.logpath=/mnt/applogs/jobhandler
### Executor log retention, in days [optional]: expired logs are cleaned up automatically; takes effect when the value is >= 3, otherwise (e.g. -1) auto-cleanup is disabled.
xxl.job.executor.logretentiondays=30
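The workerId/datacenterId pair above feeds a snowflake-style ID generator. A rough sketch of how the two values land in the generated ID, assuming the classic 64-bit layout (41-bit timestamp, 5-bit datacenterId, 5-bit workerId, 12-bit sequence) — the actual generator in this project may differ, and the timestamp here is a hypothetical value:

```shell
# Why nodes must not share a workerId: when timestamp, datacenterId and
# workerId are all equal, only the 12-bit sequence keeps IDs unique.
worker_id=17
datacenter_id=0
ts=1700000000000   # milliseconds since a custom epoch (hypothetical value)
seq=0

id=$(( (ts << 22) | (datacenter_id << 17) | (worker_id << 12) | seq ))
echo "$id"   # → 7130316800000069632

# recover the workerId from the low bits
echo $(( (id >> 12) & 31 ))   # → 17
```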
The main changes are the Nacos address and namespace.
Bump the xframework-parent version in the project pom to 2.0.2.
Point xxl.job.admin.addresses at the in-cluster (k8s) address.
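The Jenkins pipeline further down reads server.port back out of the compiled bootstrap.properties with a grep/cut pipeline; that extraction can be tried standalone (using a hypothetical temp file):

```shell
# Recreate a minimal bootstrap.properties, then pull out server.port the same
# way the pipeline does: grep the key, cut on '=', trim with xargs.
cat > /tmp/bootstrap.properties <<'EOF'
spring.application.name=SERVICE-JOB-EXECUTOR
server.port=5017
EOF

port=$(cat /tmp/bootstrap.properties | grep server.port | cut -d'=' -f2 | xargs echo -n)
echo "$port"   # → 5017
```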
Local debugging
telepresence connect
Run it locally
If everything works, commit the changes
git add *.properties
git commit -m "feat: jtest branch config"
Set up automated deployment
- Log in to Jenkins
- New Item, choose Pipeline, name it service-job-executor
- Configure the pipeline
General pipeline settings
Discard old builds, keep 2
This project is parameterized -> Choice Parameter -> Name: branchName, with a single choice: **/jtest
Pipeline settings
- Pipeline Definition: Pipeline script from SCM
- SCM: git
- repositories: https://gitee.com/ole12138/delopy-k8s.git
- credentials: xxx/xxx
- Branches to build:
*/jtest
- Additional Behaviours:
- Sparse Checkout paths:
service/jtest/Jenkinsfile
- Check out to a sub-directory: jenkins
- Script Path:
service/jtest/Jenkinsfile
Apply and save.
Build with Parameters: select the choice and start the build.
Configure Config File Provider in Jenkins
Unsurprisingly, the first build fails, because Jenkins is missing the k8s manifest config file (service-job-executor-k8s.yaml) used to deploy the image.
Dashboard -> Manage Jenkins -> Managed files (provided by the Config File Provider plugin) -> Add a new Config
Custom file
The id is auto-generated; here it is 870aa774-a8de-4d2d-86d0-37a19e207efa
Name: service-job-executor-k8s.yaml
Content
// name of the selected project (the Jenkins job name)
def projectName = env.JOB_NAME;
// image tag
def build_tag = env.BUILD_TAG;
// global k8s namespace
def namespace = 'jtest';
// default cpu/memory requests and limits
def requestCPU = 100;
def requestMemory = 400;
def limitCPU = 200;
int limitMemoryInt = 800;
if (projectName.indexOf("service-channel") > -1) {
requestCPU = 100;
requestMemory = 400;
limitCPU = 300;
limitMemoryInt = 800;
}
if (projectName.indexOf("wld-business-platform") > -1) {
requestCPU = 100;
requestMemory = 1024;
limitCPU = 500;
limitMemoryInt = 2048;
}
// if(projectName.indexOf("service-third-interface") > -1 || projectName.indexOf("service-provider") > -1 || projectName.indexOf("service-policy") > -1 || projectName.indexOf("service-job-executor") > -1 || projectName.indexOf("service-job-admin") > -1){
// requestCPU=500;
// requestMemory=1024;
// limitCPU=1000;
// limitMemoryInt=2048;
// }
int jvmInitHeapMemonyInt = limitMemoryInt * 0.75;
def jvmMaxHeapMemony = jvmInitHeapMemonyInt;
def config_ev = 'jtest';
def portName = projectName.split("-")[1].substring(0, 2);
def repoNamespace = 'wy_jtest';
requestCPU = requestCPU + 'm';
limitCPU = limitCPU + 'm';
requestMemory = requestMemory + 'Mi';
def limitMemory = limitMemoryInt + 'Mi';
def jvmInitHeapMemony = '-Xms' + jvmInitHeapMemonyInt + 'm';
jvmMaxHeapMemony = '-Xmx' + jvmMaxHeapMemony + 'm';
def serviceName = "";
if (projectName.indexOf("server-openapi-zuul") > -1) {
serviceName = "openapi-gateway";
}
if (projectName.indexOf("server-openapi-h5-zuul") > -1) {
serviceName = "openapi-h5-gateway";
}
if (projectName.indexOf("wld-service-zuul") > -1) {
serviceName = "wld-service-zuul";
}
if (projectName.indexOf("service-job-admin") > -1) {
serviceName = "service-job-admin";
}
if (projectName.indexOf("server-im-zuul") > -1) {
serviceName = "server-im-zuul";
}
if (projectName.indexOf("service-netty") > -1) {
serviceName = "service-netty";
}
if (projectName.indexOf("service-netty-scian") > -1) {
serviceName = "service-netty-scian";
}
pipeline {
agent {
kubernetes {
inheritFrom 'maven-dind-kubectl-agent'
}
}
//agent any
environment {
projectPort = 0
}
stages {
stage('git checkout') {
steps {
container("maven") {
echo "git checkout, project: ${projectName} selected branch: ${branchName} ${build_tag} $branchName "
checkout([$class: 'GitSCM', branches: [[name: '$branchName']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'gitee_wangjm', url: 'https://gitee.com/ole12138/${JOB_NAME}.git']]]);
}
}
}
stage('mvn build') {
steps {
container("maven") {
script {
if (projectName.indexOf("service-third-interface") > -1) {
withEnv(['MAVEN_OPTS=-Xms1024m -Xmx1024m']) {
/*withMaven(maven: 'maven',mavenSettingsConfig: 'a117f977-9853-43d7-828e-280fe2f1e0a5',jdk: 'jdk') {*/
withMaven(mavenSettingsConfig: 'a117f977-9853-43d7-828e-280fe2f1e0a5') {
sh 'mvn -X -U -T 2 clean package -Dmaven.test.skip=true '
}
}
} else {
/*withMaven(maven: 'maven',mavenSettingsConfig: 'a117f977-9853-43d7-828e-280fe2f1e0a5',jdk: 'jdk') {*/
withMaven(mavenSettingsConfig: 'a117f977-9853-43d7-828e-280fe2f1e0a5') {
sh 'mvn -X -U -T 2 clean package -Dmaven.test.skip=true '
}
}
}
sh "mkdir -p app && cd target && mv `ls *.jar|grep -v sources.jar|grep -v javadoc.jar` ../app/app.jar"
stash includes: 'app/app.jar', name: 'app_stash'
script {
projectPort = sh(returnStdout: true, script: "cat ./target/classes/bootstrap.properties | grep server.port|cut -d'=' -f2 | xargs echo -n ")
echo "project port: $projectPort"
}
echo "project port: ${projectPort} "
}
}
}
stage('docker build') {
steps {
container("jnlp") {
//sh "mkdir -p app"
unstash 'app_stash'
echo 'building the docker image'
// Jenkins Dashboard -> Credentials -> Global -> the username/password credential named "harbor"
withCredentials([usernamePassword(credentialsId: 'harbor', passwordVariable: 'dockerPassword', usernameVariable: 'dockerUser')]) {
echo "jvm settings => initial heap: ${jvmInitHeapMemony} max heap: ${jvmMaxHeapMemony} spring profile: ${config_ev} "
sh "docker login -u $dockerUser -p $dockerPassword harbor.ole12138.cn"
// sh "docker rmi harbor.ole12138.cn/wy_spc/${projectName}:${build_tag}"
// Jenkins Dashboard -> Managed Files -> the config file named "Dockerfile"
configFileProvider([configFile(fileId: '478d3df3-0a15-4c44-ba94-5c83c28cf56e', targetLocation: 'app/Dockerfile')]) {
dir('app') {
sh "docker build --pull --no-cache -t harbor.ole12138.cn/${repoNamespace}/${projectName}:${build_tag} --build-arg CONFIG_ENV=${config_ev} --build-arg INIT_HEAP_MOMERY=${jvmInitHeapMemony} --build-arg MAX_HEAP_MOMERY=${jvmMaxHeapMemony} --build-arg NACOS_NAMESPACE=765fa359-2e1b-41f3-a4b2-17c3856764fe --build-arg NACOS_SERVER=nacos-headless.nacos.svc.cluster.local:8848 . "
}
}
}
}
}
}
stage('docker push') {
steps {
container("jnlp") {
echo 'pushing the image to the private registry'
sh "docker push harbor.ole12138.cn/${repoNamespace}/${projectName}:${build_tag}"
// sh 'docker rmi $(docker images | grep "none" | awk "{print $3 }" ) '
}
}
}
// stage('confirm release to k8s') {
// steps {
// input "Release this service to k8s, OK?"
// }
// }
stage('k8s apply') {
steps {
container("kubectl") {
withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8sconfig', namespace: 'jtest', serverUrl: '') {
echo "dynamic resources => request cpu: ${requestCPU} request memory: ${requestMemory} limit cpu: ${limitCPU} limit memory: ${limitMemory} "
script {
if (projectName =~ "service-netty-scian") {
configFileProvider([configFile(fileId: '0bb0db39-e002-4bfb-a429-6955f2d68454', targetLocation: 'k8s.yaml')]) {
sh "sed -i 's/<NAME_SPACE>/${namespace}/' k8s.yaml"
sh "sed -i 's/<BUILD_TAG>/${build_tag}/' k8s.yaml"
sh "sed -i 's/<PROJECT_NAME>/${projectName}/' k8s.yaml"
sh "sed -i 's/<PORT_NAME>/${portName}/' k8s.yaml"
sh "sed -i 's/<REQUEST_CPU>/${requestCPU}/' k8s.yaml"
sh "sed -i 's/<REQUEST_MEMORY>/${requestMemory}/' k8s.yaml"
sh "sed -i 's/<LIMIT_CPU>/${limitCPU}/' k8s.yaml"
sh "sed -i 's/<LIMIT_MEMORY>/${limitMemory}/' k8s.yaml"
sh "sed -i 's/<repoNamespace>/${repoNamespace}/' k8s.yaml"
sh "sed -i 's/<projectPort>/${projectPort}/' k8s.yaml"
sh "kubectl apply -f k8s.yaml "
}
} else if (projectName =~ "service-netty") {
configFileProvider([configFile(fileId: 'bdcb33b5-37e0-43db-9dd6-da1378d2df18', targetLocation: 'service-netty-k8s.yaml')]) {
sh "sed -i 's/<NAME_SPACE>/${namespace}/' service-netty-k8s.yaml"
sh "sed -i 's/<BUILD_TAG>/${build_tag}/' service-netty-k8s.yaml"
sh "sed -i 's/<PROJECT_NAME>/${projectName}/' service-netty-k8s.yaml"
sh "sed -i 's/<PORT_NAME>/${portName}/' service-netty-k8s.yaml"
sh "sed -i 's/<REQUEST_CPU>/${requestCPU}/' service-netty-k8s.yaml"
sh "sed -i 's/<REQUEST_MEMORY>/${requestMemory}/' service-netty-k8s.yaml"
sh "sed -i 's/<LIMIT_CPU>/${limitCPU}/' service-netty-k8s.yaml"
sh "sed -i 's/<LIMIT_MEMORY>/${limitMemory}/' service-netty-k8s.yaml"
sh "sed -i 's/<repoNamespace>/${repoNamespace}/' service-netty-k8s.yaml"
sh "sed -i 's/<projectPort>/${projectPort}/' service-netty-k8s.yaml"
sh "sed -i 's/<SERVICE_NAME>/${serviceName}/' service-netty-k8s.yaml "
sh "kubectl apply -f service-netty-k8s.yaml "
}
} else if (projectName =~ "server-openapi-zuul" || projectName =~ "wld-service-zuul" || projectName =~ "server-im-zuul" || projectName =~ "server-openapi-h5-zuul") {
// Jenkins Dashboard -> Manage Jenkins -> Managed Files -> the config file named "service-k8s.yaml"
configFileProvider([configFile(fileId: 'cbb3823e-3c04-4834-9de4-15e0b15770d2', targetLocation: 'service-k8s.yaml')]) {
sh "sed -i 's/<NAME_SPACE>/${namespace}/' service-k8s.yaml"
sh "sed -i 's/<BUILD_TAG>/${build_tag}/' service-k8s.yaml"
sh "sed -i 's/<PROJECT_NAME>/${projectName}/' service-k8s.yaml"
sh "sed -i 's/<PORT_NAME>/${portName}/' service-k8s.yaml"
sh "sed -i 's/<REQUEST_CPU>/${requestCPU}/' service-k8s.yaml"
sh "sed -i 's/<REQUEST_MEMORY>/${requestMemory}/' service-k8s.yaml"
sh "sed -i 's/<LIMIT_CPU>/${limitCPU}/' service-k8s.yaml"
sh "sed -i 's/<LIMIT_MEMORY>/${limitMemory}/' service-k8s.yaml"
sh "sed -i 's/<repoNamespace>/${repoNamespace}/' service-k8s.yaml"
sh "sed -i 's/<projectPort>/${projectPort}/' service-k8s.yaml"
sh "sed -i 's/<SERVICE_NAME>/${serviceName}/' service-k8s.yaml "
sh "cat service-k8s.yaml"
sh "kubectl apply -f service-k8s.yaml "
}
} else if (projectName =~ "service-job-admin") {
// Jenkins Dashboard -> Manage Jenkins -> Managed Files -> the config file named "service-job-k8s.yaml"
configFileProvider([configFile(fileId: '6d21975b-1637-4fbd-bfaf-692f8d1a86f5', targetLocation: 'service-job-k8s.yaml')]) {
sh "sed -i 's/<NAME_SPACE>/${namespace}/' service-job-k8s.yaml"
sh "sed -i 's/<BUILD_TAG>/${build_tag}/' service-job-k8s.yaml"
sh "sed -i 's/<PROJECT_NAME>/${projectName}/' service-job-k8s.yaml"
sh "sed -i 's/<PORT_NAME>/${portName}/' service-job-k8s.yaml"
sh "sed -i 's/<REQUEST_CPU>/${requestCPU}/' service-job-k8s.yaml"
sh "sed -i 's/<REQUEST_MEMORY>/${requestMemory}/' service-job-k8s.yaml"
sh "sed -i 's/<LIMIT_CPU>/${limitCPU}/' service-job-k8s.yaml"
sh "sed -i 's/<LIMIT_MEMORY>/${limitMemory}/' service-job-k8s.yaml"
sh "sed -i 's/<repoNamespace>/${repoNamespace}/' service-job-k8s.yaml"
sh "sed -i 's/<projectPort>/${projectPort}/' service-job-k8s.yaml"
sh "sed -i 's/<SERVICE_NAME>/${serviceName}/' service-job-k8s.yaml "
sh "kubectl apply -f service-job-k8s.yaml"
}
} else if (projectName =~ "service-job-executor") {
configFileProvider([configFile(fileId: '870aa774-a8de-4d2d-86d0-37a19e207efa', targetLocation: 'service-job-exector-k8s.yaml')]) {
// Jenkins Dashboard -> Manage Jenkins -> Managed Files -> the config file named "service-job-executor-k8s.yaml"
sh "sed -i 's/<NAME_SPACE>/${namespace}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<BUILD_TAG>/${build_tag}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<PROJECT_NAME>/${projectName}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<PORT_NAME>/${portName}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<REQUEST_CPU>/${requestCPU}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<REQUEST_MEMORY>/${requestMemory}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<LIMIT_CPU>/${limitCPU}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<LIMIT_MEMORY>/${limitMemory}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<repoNamespace>/${repoNamespace}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<projectPort>/${projectPort}/' service-job-exector-k8s.yaml"
sh "sed -i 's/<SERVICE_NAME>/${serviceName}/' service-job-exector-k8s.yaml "
sh "kubectl apply -f service-job-exector-k8s.yaml"
}
} else if (projectName =~ "wld-business-platform") {
configFileProvider([configFile(fileId: '94c14f39-5b1a-49f4-923d-6658a077bc84', targetLocation: 'k8s-wld-v2.yaml')]) {
sh "sed -i 's/<NAME_SPACE>/${namespace}/' k8s-wld-v2.yaml"
sh "sed -i 's/<BUILD_TAG>/${build_tag}/' k8s-wld-v2.yaml"
sh "sed -i 's/<PROJECT_NAME>/${projectName}/' k8s-wld-v2.yaml"
sh "sed -i 's/<PORT_NAME>/${portName}/' k8s-wld-v2.yaml"
sh "sed -i 's/<REQUEST_CPU>/${requestCPU}/' k8s-wld-v2.yaml"
sh "sed -i 's/<REQUEST_MEMORY>/${requestMemory}/' k8s-wld-v2.yaml"
sh "sed -i 's/<LIMIT_CPU>/${limitCPU}/' k8s-wld-v2.yaml"
sh "sed -i 's/<LIMIT_MEMORY>/${limitMemory}/' k8s-wld-v2.yaml"
sh "sed -i 's/<repoNamespace>/${repoNamespace}/' k8s-wld-v2.yaml"
sh "sed -i 's/<projectPort>/${projectPort}/' k8s-wld-v2.yaml"
sh "sed -i 's/<SERVICE_NAME>/${serviceName}/' k8s-wld-v2.yaml"
sh "kubectl apply -f k8s-wld-v2.yaml"
}
} else {
configFileProvider([configFile(fileId: 'bad53479-629f-40c9-af73-713a09515afe', targetLocation: 'k8s.yaml')]) {
sh "sed -i 's/<NAME_SPACE>/${namespace}/' k8s.yaml"
sh "sed -i 's/<BUILD_TAG>/${build_tag}/' k8s.yaml"
sh "sed -i 's/<PROJECT_NAME>/${projectName}/' k8s.yaml"
sh "sed -i 's/<PORT_NAME>/${portName}/' k8s.yaml"
sh "sed -i 's/<REQUEST_CPU>/${requestCPU}/' k8s.yaml"
sh "sed -i 's/<REQUEST_MEMORY>/${requestMemory}/' k8s.yaml"
sh "sed -i 's/<LIMIT_CPU>/${limitCPU}/' k8s.yaml"
sh "sed -i 's/<LIMIT_MEMORY>/${limitMemory}/' k8s.yaml"
sh "sed -i 's/<repoNamespace>/${repoNamespace}/' k8s.yaml"
sh "sed -i 's/<projectPort>/${projectPort}/' k8s.yaml"
sh "kubectl apply -f k8s.yaml "
}
}
}
}
}
}
}
stage('pod status check') {
steps {
echo 'k8s apply success,start check pod status!'
script {
if (projectName =~ "server-im") {
configFileProvider([configFile(fileId: '24f98fe8-6bf1-4b4d-b85c-31f7cd1f0303', targetLocation: 'pod_check_tst.sh')]) {
sh "sed -i 's/<PROJECT_NAME>/${projectName}/' pod_check_tst.sh"
}
sh "sh pod_check_tst.sh"
}
}
}
}
}
post {
always {
echo 'This will always run'
}
success {
echo 'This will run only if successful'
// withCredentials([string(credentialsId: 'LTAI4GFbYp1f7femSKdVVqqJ', variable: 'secret')]) {
// sh "java -jar ../../lib/aliyun-docker-registry-api-1.0-SNAPSHOT.jar LTAI4GFbYp1f7femSKdVVqqJ $secret ${repoNamespace} ${projectName} 2"
// }
}
failure {
echo 'This will run only if failed'
}
unstable {
echo 'This will run only if the run was marked as unstable'
}
changed {
echo 'This will run only if the state of the Pipeline has changed'
echo 'For example, if the Pipeline was previously failing but is now successful'
}
}
}
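Each branch of the `k8s apply` stage renders a manifest template by sed-substituting `<PLACEHOLDER>` tokens before `kubectl apply`. A minimal offline sketch of that substitution step — the template and build tag here are made up, `kubectl apply` itself is omitted, and note that without the `/g` flag sed replaces only the first occurrence per line, so the templates rely on at most one placeholder of each kind per line:

```shell
# a tiny stand-in template with the same placeholder style
cat > /tmp/k8s.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <PROJECT_NAME>
  namespace: <NAME_SPACE>
spec:
  template:
    spec:
      containers:
        - name: <PROJECT_NAME>
          image: harbor.ole12138.cn/<repoNamespace>/<PROJECT_NAME>:<BUILD_TAG>
EOF

# one sed per placeholder, exactly as in the pipeline
sed -i 's/<NAME_SPACE>/jtest/' /tmp/k8s.yaml
sed -i 's/<PROJECT_NAME>/service-job-executor/' /tmp/k8s.yaml
sed -i 's/<repoNamespace>/wy_jtest/' /tmp/k8s.yaml
sed -i 's/<BUILD_TAG>/jenkins-service-job-executor-1/' /tmp/k8s.yaml

grep 'image:' /tmp/k8s.yaml
```

After rendering, no `<...>` tokens should remain before the file is handed to kubectl.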
The main change here is the Service type, switched to ClusterIP.
I also increased the initial delay of the pod health checks.
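Those two changes can be sketched as hypothetical fragments of that manifest (field names are standard Kubernetes; the concrete values are illustrative, not the project's actual file):

```yaml
# Service reachable only inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: service-job-executor
  namespace: jtest
spec:
  type: ClusterIP          # the type that was switched to
  ports:
    - port: 5017
      targetPort: 5017
---
# Deployment container fragment: a longer initial delay for the probes
readinessProbe:
  tcpSocket:
    port: 5017
  initialDelaySeconds: 60   # illustrative value
```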
Adjust the Jenkinsfile in the deploy-k8s project
In service/jtest/Jenkinsfile of the deploy-k8s project, change the fileId in the configFile call under the service-job-executor branch to the id of the config file added above.
Commit the modified service/jtest/Jenkinsfile in the deploy-k8s project.
Once the build succeeds, check in k8s whether the corresponding deployment and pods are running in the jtest namespace.
In Nacos, check whether the new service appears in the service list under the jtest namespace.