MetalLB installation
Reference: https://metallb.universe.tf/installation/
Reference: https://www.lixueduan.com/posts/cloudnative/01-metallb/
Scenario
Scenario: Kubernetes is deployed on a private network and we want to use LoadBalancer-type Services.
Related discussion: https://www.reddit.com/r/kubernetes/comments/wk38f9/networking_how_to_make_metallb_communicate_with/
Prerequisites
Kubernetes is already deployed on the private network.
Install Helm, the Kubernetes package manager (alternatively, MetalLB can be installed from the raw manifests: https://metallb.universe.tf/installation/#installation-by-manifest).
Install Helm
Helm works like a package manager, dedicated to Kubernetes components.
yum install helm
Adjust kube-proxy
Reference: https://metallb.universe.tf/installation/#preparation
kubectl edit configmap -n kube-system kube-proxy
Set the relevant fields (only mode and strictARP matter here; if kube-proxy is not running in ipvs mode, no change is needed):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
The actual ConfigMap is quite long; only the mode and strictARP fields need adjusting.
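If you prefer not to edit the ConfigMap by hand, the official preparation page also shows a non-interactive way to flip strictARP; a rough sketch (check the change before relying on it):
# pipe the edited ConfigMap back through kubectl apply
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system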
Check the corresponding pods:
kubectl -n kube-system get pod
Delete the kube-proxy-xxx pods; Kubernetes will recreate them automatically (in my case there are these two):
kubectl -n kube-system delete pod kube-proxy-q8tjx kube-proxy-v9rvk
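Alternatively (assuming the DaemonSet keeps its default name kube-proxy), you can restart all of its pods at once instead of deleting them by name:
kubectl -n kube-system rollout restart daemonset kube-proxy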
Check the pods again to confirm the kube-proxy instances have been recreated:
kubectl -n kube-system get pods
Install MetalLB with Helm
MetalLB on GitHub: https://github.com/metallb/metallb
Official MetalLB documentation: https://metallb.universe.tf/installation/#installation-with-helm
Download the MetalLB Helm chart
helm repo add metallb https://metallb.github.io/metallb
# I looked through the default values and nothing seems to need changing, so the official chart can be used as-is
helm pull metallb/metallb --untar
Create the metallb-system namespace
kubectl create ns metallb-system
Switch the current context's default namespace to metallb-system
kubectl config set-context --current --namespace metallb-system
Install the MetalLB components with Helm
helm install metallb metallb/metallb
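As a quick sanity check (not part of the official steps), confirm that the controller pod and one speaker pod per node are Running:
kubectl -n metallb-system get pods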
Create an IP address pool
Reference: https://metallb.universe.tf/configuration/#layer-2-configuration
[root@jingmin-kube-archlinux metallb-cr]# cat ./first-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.100-192.168.1.149
Apply it:
kubectl apply -f ./first-pool.yaml
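You can then verify that the pool resource exists (ipaddresspools is the plural resource name of the MetalLB CRD):
kubectl -n metallb-system get ipaddresspools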
Associate the IP pool with an advertisement
Associate the IP pool with an L2Advertisement (when an ARP request arrives for a LoadBalancer IP, the speaker answers directly with the node's MAC address).
[root@jingmin-kube-archlinux metallb-cr]# cat ./example-l2advertisement.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
Apply it:
kubectl apply -f ./example-l2advertisement.yaml
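Likewise, verify that the advertisement resource was created:
kubectl -n metallb-system get l2advertisements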
Q
In L2 mode, does port 7946 need to be reachable between all the nodes? Yes, the speaker uses this port (TCP and UDP) for memberlist.
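If the nodes run firewalld, the port can be opened roughly like this (an assumption about your setup; adjust to whatever firewall you actually use):
# run on every node
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --reload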
Create a public IP pool
Later I got a China Telecom leased line; the public IPs come in blocks of 4, and I ordered two blocks (8 IPs in total, 5 of them usable).
The subnet is 183.129.155.0/29: 183.129.155.0 is the network address, 183.129.155.1 is the telecom-side gateway, and 183.129.155.7 is the broadcast address, which leaves 5 usable IPs.
My router takes one of them, leaving 4 available.
Create the public IP pool:
# cat public-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: metallb-system
spec:
  addresses:
  - 183.129.155.3-183.129.155.6
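Apply it the same way as the first pool:
kubectl apply -f ./public-pool.yaml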
Associate the IP pool with an advertisement
As shown in the figure below, the cluster consists of a router (also a Linux box) plus 3 hosts, and only the router is exposed to the public internet.
IPs from the public pool must only be advertised on the router node; otherwise they would be unreachable from the public internet.

Reference: https://metallb.universe.tf/configuration/_advanced_l2_configuration/
Associate the IP pool with an L2Advertisement (when an ARP request arrives for a LoadBalancer IP, the speaker answers directly with the node's MAC address).
# cat ./public-l2advertisement.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public
  namespace: metallb-system
spec:
  ipAddressPools:
  - public-pool
  nodeSelectors:
  - matchLabels:
      kubernetes.io/hostname: jingmin-kube-master1
Note the nodeSelectors field here, which restricts the advertisement to the host the router runs on.
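As a quick sanity check, you can confirm that the kubernetes.io/hostname label really matches the node you expect:
kubectl get nodes --show-labels | grep jingmin-kube-master1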
Apply it:
kubectl apply -f ./public-l2advertisement.yaml
Example: requesting a specific IP
Reference: https://metallb.universe.tf/usage/#requesting-specific-ips
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.1.100
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
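After applying, the Service should show the requested address in its EXTERNAL-IP column (run this in whatever namespace the Service was created in):
kubectl get svc nginx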
Example: requesting an IP from a specific pool
Reference: https://metallb.universe.tf/usage/#requesting-specific-ips
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/address-pool: production-public-ips
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Example: sharing the same IP
Public IPs are expensive, and I only have 4 usable ones.
So consider having multiple LoadBalancer Services share the same IP.
Reference: https://metallb.universe.tf/usage/#ip-address-sharing
By default, Services do not share IP addresses. If you have a need to colocate services on a single IP, you can enable selective IP sharing by adding the metallb.universe.tf/allow-shared-ip annotation to services.
The value of the annotation is a "sharing key." Services can share an IP address under the following conditions:
- They both have the same sharing key.
- They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
- They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical).
apiVersion: v1
kind: Service
metadata:
  name: dns-service-tcp
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "key-to-share-1.2.3.4"
spec:
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
  ports:
  - name: dnstcp
    protocol: TCP
    port: 53
    targetPort: 53
  selector:
    app: dns
---
apiVersion: v1
kind: Service
metadata:
  name: dns-service-udp
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "key-to-share-1.2.3.4"
spec:
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
  ports:
  - name: dnsudp
    protocol: UDP
    port: 53
    targetPort: 53
  selector:
    app: dns
What I am currently using:
annotations:
  metallb.universe.tf/address-pool: public-pool
  metallb.universe.tf/allow-shared-ip: key-to-share-183.129.155.3
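For reference, a minimal sketch of a Service using both annotations together; the service name, port, and selector below are hypothetical, only the annotations match what I actually use:
apiVersion: v1
kind: Service
metadata:
  name: my-web                  # hypothetical name
  namespace: default
  annotations:
    metallb.universe.tf/address-pool: public-pool
    metallb.universe.tf/allow-shared-ip: key-to-share-183.129.155.3
spec:
  type: LoadBalancer
  ports:
  - port: 80                    # hypothetical port
    targetPort: 80
  selector:
    app: my-web                 # hypothetical selector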