Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?
Reprinted from: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
Recently, someone asked me what the difference between NodePorts, LoadBalancers, and Ingress were. They are all different ways to get external traffic into your cluster, and they all do it in different ways. Let’s take a look at how each of them work, and when you would use each.
*Note:* Everything here applies to Google Kubernetes Engine. If you are running on another cloud, on prem, with minikube, or something else, these will be slightly different. I’m also not going into deep technical details. If you are interested in learning more, the official documentation is a great resource!
ClusterIP
A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access.
The YAML for a ClusterIP service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
spec:
  selector:
    app: my-app
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
If you can’t access a ClusterIP service from the internet, why am I talking about it? Turns out you can access it using the Kubernetes proxy!
Thanks to the diagrams’ author for the illustrations (the credit link was lost in this reprint).
Start the Kubernetes proxy:
$ kubectl proxy --port=8080
Now, you can navigate through the Kubernetes API to access this service using this scheme:
http://localhost:8080/api/v1/proxy/namespaces/<NAMESPACE>/services/<SERVICE-NAME>:<PORT-NAME>/
So to access the service we defined above, you could use the following address:
http://localhost:8080/api/v1/proxy/namespaces/default/services/my-internal-service:http/
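To make the scheme concrete, the proxy address can be assembled from the namespace, service name, and port name. The values below are simply the ones from the example service above; substitute your own:

```shell
# Values from the example ClusterIP service above; substitute your own.
NAMESPACE=default
SERVICE=my-internal-service
PORT_NAME=http

URL="http://localhost:8080/api/v1/proxy/namespaces/${NAMESPACE}/services/${SERVICE}:${PORT_NAME}/"
echo "$URL"

# With `kubectl proxy --port=8080` running in another terminal,
# the service answers a plain HTTP request:
#   curl "$URL"
```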
When would you use this?
There are a few scenarios where you would use the Kubernetes proxy to access your services.
- Debugging your services, or connecting to them directly from your laptop for some reason
- Allowing internal traffic, displaying internal dashboards, etc.
Because this method requires you to run kubectl as an authenticated user, you should NOT use this to expose your service to the internet or use it for production services.
NodePort
A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.

This isn’t the most technically accurate diagram, but I think it illustrates the point of how a NodePort works
The YAML for a NodePort service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP
Basically, a NodePort service has two differences from a normal “ClusterIP” service. First, the type is “NodePort.” There is also an additional port called the nodePort that specifies which port to open on the nodes. If you don’t specify this port, Kubernetes will pick a random one. Most of the time you should let Kubernetes choose the port; there are many caveats to what ports are available for you to use.
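With the manifest above applied, the service is reachable on any node’s IP at the nodePort. The node IP below is just a documentation placeholder, not a value from the article:

```shell
# Placeholder node IP; in practice look one up with:
#   kubectl get nodes -o wide
NODE_IP=203.0.113.10
NODE_PORT=30036   # the nodePort from the manifest above

URL="http://${NODE_IP}:${NODE_PORT}/"
echo "$URL"

# Against a real cluster:
#   curl "$URL"
```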
When would you use this?
There are many downsides to this method:
- You can only have one service per port
- You can only use ports 30000–32767
- If your node/VM IP addresses change, you need to deal with that
For these reasons, I don’t recommend using this method in production to directly expose your service. If you are running a service that doesn’t have to be always available, or you are very cost sensitive, this method will work for you. A good example of such an application is a demo app or something temporary.
LoadBalancer
A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service.
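The article doesn’t show a manifest for this type, but as a sketch, a LoadBalancer service is just the ClusterIP example above with the type changed (the service name here is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  selector:
    app: my-app
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
```

Once the cloud provider has provisioned the load balancer, `kubectl get service my-loadbalancer-service` shows the assigned address in the EXTERNAL-IP column.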
When would you use this?
If you want to directly expose a service, this is the default method. All traffic on the port you specify will be forwarded to the service. There is no filtering, no routing, etc. This means you can send almost any kind of traffic to it, like HTTP, TCP, UDP, Websockets, gRPC, or whatever.
The big downside is that each service you expose with a LoadBalancer will get its own IP address, and you have to pay for a LoadBalancer per exposed service, which can get expensive!
Ingress
Unlike all the above examples, Ingress is actually NOT a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entrypoint into your cluster.
You can do a lot of different things with an Ingress, and there are many types of Ingress controllers that have different capabilities.
The default GKE ingress controller will spin up a HTTP(S) Load Balancer for you. This will let you do both path based and subdomain based routing to backend services. For example, you can send everything on foo.yourdomain.com to the foo service, and everything under the yourdomain.com/bar/ path to the bar service.
The YAML for an Ingress object on GKE with a L7 HTTP Load Balancer might look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: other
    servicePort: 8080
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - backend:
          serviceName: foo
          servicePort: 8080
  - host: mydomain.com
    http:
      paths:
      - path: /bar/*
        backend:
          serviceName: bar
          servicePort: 8080
When would you use this?
Ingress is probably the most powerful way to expose your services, but can also be the most complicated. There are many types of Ingress controllers, from the Google Cloud Load Balancer, Nginx, Contour, Istio, and more. There are also plugins for Ingress controllers, like the cert-manager, that can automatically provision SSL certificates for your services.
Ingress is the most useful if you want to expose multiple services under the same IP address, and these services all use the same L7 protocol (typically HTTP). You only pay for one load balancer if you are using the native GCP integration, and because Ingress is “smart” you can get a lot of features out of the box (like SSL, Auth, Routing, etc)