@@ -61,63 +61,6 @@ Note that the kube-proxy starts up in different modes, which are determined by i
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
For example, if your operating system doesn't allow you to run iptables commands,
the standard kernel kube-proxy implementation will not work.
- Likewise, if you have an operating system which doesn't support `netsh`,
- it will not run in Windows userspace mode.
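As a minimal sketch of how the mode is selected (the exact ConfigMap wiring is cluster-specific and not shown on this page), a `KubeProxyConfiguration` might look like:

```yaml
# Minimal sketch of a kube-proxy configuration that picks the proxy mode.
# These values are only fully validated once kube-proxy starts on a node.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"   # or "ipvs"; "userspace" is the deprecated legacy mode
```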
-
- ### User space proxy mode {#proxy-mode-userspace}
-
- {{< feature-state for_k8s_version="v1.23" state="deprecated" >}}
-
- This (legacy) mode uses iptables to install interception rules, and then performs
- traffic forwarding with the assistance of the kube-proxy tool.
- The kube-proxy watches the Kubernetes control plane for the addition, modification
- and removal of Service and EndpointSlice objects. For each Service, the kube-proxy
- opens a port (randomly chosen) on the local node. Any connections to this _proxy port_
- are proxied to one of the Service's backend Pods (as reported via
- EndpointSlices). The kube-proxy takes the `sessionAffinity` setting of the Service into
- account when deciding which backend Pod to use.
-
- The user-space proxy installs iptables rules which capture traffic to the
- Service's `clusterIP` (which is virtual) and `port`. Those rules redirect that traffic
- to the proxy port which proxies the backend Pod.
-
- By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
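For illustration, a sketch of a Service that opts into `sessionAffinity` (the name, selector, and ports here are assumptions, not from this page):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: image-processor          # hypothetical name
spec:
  selector:
    app: image-processor         # hypothetical Pod label
  ports:
  - port: 1234                   # port clients use on the virtual IP
    targetPort: 8080             # hypothetical container port
  # With ClientIP affinity, the proxy keeps sending a given client to the
  # same backend Pod instead of picking one round-robin per connection.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # affinity window (this is the API default)
```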
-
- {{< figure src="/images/docs/services-userspace-overview.svg" title="Services overview diagram for userspace proxy" class="diagram-medium" >}}
-
-
- #### Example {#packet-processing-userspace}
-
- As an example, consider the image processing application described [earlier](#example)
- in the page.
- When the backend Service is created, the Kubernetes control plane assigns a virtual
- IP address, for example 10.0.0.1. Assuming the Service port is 1234, the
- Service is observed by all of the kube-proxy instances in the cluster.
- When a proxy sees a new Service, it opens a new random port, establishes an
- iptables redirect from the virtual IP address to this new port, and starts accepting
- connections on it.
-
- When a client connects to the Service's virtual IP address, the iptables
- rule kicks in, and redirects the packets to the proxy's own port.
- The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
-
- This means that Service owners can choose any port they want without risk of
- collision. Clients can connect to an IP and port, without being aware
- of which Pods they are actually accessing.
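To make the no-collision point concrete, here is a sketch (both Service names and target ports are hypothetical) of two Services that both expose port 1234; each is reachable on its own virtual IP, so the ports never clash:

```yaml
# Two Services can use the same port because each gets its own clusterIP.
apiVersion: v1
kind: Service
metadata:
  name: image-api            # hypothetical
spec:
  selector:
    app: image-api
  ports:
  - port: 1234               # reachable as <clusterIP-of-image-api>:1234
    targetPort: 8080         # hypothetical backend port
---
apiVersion: v1
kind: Service
metadata:
  name: thumbnail-api        # hypothetical
spec:
  selector:
    app: thumbnail-api
  ports:
  - port: 1234               # same port, different virtual IP: no collision
    targetPort: 9090         # hypothetical backend port
```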
-
- #### Scaling challenges {#scaling-challenges-userspace}
-
- Using the userspace proxy for VIPs works at small to medium scale, but will
- not scale to very large clusters with thousands of Services. The
- [original design proposal for portals](https://github.com/kubernetes/kubernetes/issues/1107)
- has more details on this.
-
- Using the userspace proxy obscures the source IP address of a packet accessing
- a Service.
- This makes some kinds of network filtering (firewalling) impossible. The iptables
- proxy mode does not
- obscure in-cluster source IPs, but it does still impact clients coming through
- a load balancer or node-port.

### `iptables` proxy mode {#proxy-mode-iptables}
@@ -135,7 +78,7 @@ is handled by Linux netfilter without the need to switch between userspace and t
kernel space. This approach is also likely to be more reliable.

If kube-proxy is running in iptables mode and the first Pod that's selected
- does not respond, the connection fails. This is different from userspace
+ does not respond, the connection fails. This is different from the old `userspace`
mode: in that scenario, kube-proxy would detect that the connection to the first
Pod had failed and would automatically retry with a different backend Pod.
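Since iptables mode does not retry a dead backend, it helps to keep unhealthy Pods out of the EndpointSlice in the first place; a minimal readiness-probe sketch (the image, port, and `/healthz` path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-processor-backend        # hypothetical
  labels:
    app: image-processor               # matched by the Service selector
spec:
  containers:
  - name: app
    image: registry.example/image-processor:1.0   # hypothetical image
    ports:
    - containerPort: 8080
    # While this probe fails, the Pod is removed from the EndpointSlice,
    # so kube-proxy in iptables mode stops routing new connections to it.
    readinessProbe:
      httpGet:
        path: /healthz                 # hypothetical health endpoint
        port: 8080
      periodSeconds: 5
      failureThreshold: 3
```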
@@ -148,7 +91,8 @@ having traffic sent via kube-proxy to a Pod that's known to have failed.
#### Example {#packet-processing-iptables}

- Again, consider the image processing application described [earlier](#example).
+ As an example, consider the image processing application described [earlier](#example)
+ in the page.
When the backend Service is created, the Kubernetes control plane assigns a virtual
IP address, for example 10.0.0.1. For this example, assume that the
Service port is 1234.
@@ -162,10 +106,7 @@ endpoint rules redirect traffic (using destination NAT) to the backends.
When a client connects to the Service's virtual IP address, the iptables rule kicks in.
A backend is chosen (either based on session affinity or randomly) and packets are
- redirected to the backend. Unlike the userspace proxy, packets are never
- copied to userspace, the kube-proxy does not have to be running for the virtual
- IP address to work, and Nodes see traffic arriving from the unaltered client IP
- address.
+ redirected to the backend without rewriting the client IP address.

This same basic flow executes when traffic comes in through a node-port or
through a load-balancer, though in those cases the client IP address does get altered.
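Where the original client IP matters for node-port or load-balancer traffic, `externalTrafficPolicy: Local` avoids that rewrite by only routing external traffic to backends on the node that received it; a sketch (name, selector, and ports assumed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: image-processor-public   # hypothetical
spec:
  type: LoadBalancer
  selector:
    app: image-processor
  ports:
  - port: 1234
    targetPort: 8080             # hypothetical backend port
  # "Local" skips the extra SNAT hop, so backend Pods see the original
  # client source IP; the trade-off is no cross-node load spreading.
  externalTrafficPolicy: Local
```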