Kubernetes connects pods on different nodes in the cluster, and by default every pod can reach every other pod. In some scenarios, however, pods should not be able to communicate freely, and access control is required. How is that implemented?
Brief introduction
Kubernetes provides the NetworkPolicy feature, which supports network access control at the namespace and pod level. Namespaces and pods are selected by label, and the underlying enforcement is implemented with iptables. This article briefly describes how Kubernetes NetworkPolicy works with Calico.
Control Plane Data Flow
A network policy is a Kubernetes resource; before it takes effect it passes through a definition, storage, and configuration pipeline. The brief flow is:
- Create the NetworkPolicy resource through the kubectl client;
- calico-policy-controller watches NetworkPolicy resources and writes them into Calico's etcd database;
- calico-felix on each node fetches the policy from the etcd database and calls iptables to apply the corresponding configuration.
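To make the first two steps concrete, below is a minimal sketch in Go, using client-go, of the kind of watch loop a policy controller runs against the Kubernetes API. It is illustrative only, not calico-policy-controller's actual code: it uses the current networking/v1 client rather than the extensions/v1beta1 API of the article's era, and the event handling is reduced to a print statement where the real controller would write the translated policy into Calico's etcd.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the process runs inside the cluster, as calico-policy-controller does.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Watch NetworkPolicy objects in all namespaces.
	w, err := clientset.NetworkingV1().NetworkPolicies(metav1.NamespaceAll).
		Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// The real controller would translate each object into a Calico policy
	// and write it to Calico's etcd; here we only print the event type.
	for ev := range w.ResultChan() {
		fmt.Printf("networkpolicy event: %s\n", ev.Type)
	}
}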
Resource Configuration Templates
NetworkPolicy supports access control at the pod and namespace level; the following templates can be used as references.
Allowing access by pod label
We apply access control to all pods labeled role: backend in the namespace myns: only traffic from pods labeled role: frontend to TCP port 6379 is allowed; all other traffic is denied.
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-frontend
  namespace: myns
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
Allowing access by namespace label
We apply access control to all pods labeled role: frontend: only traffic from namespaces labeled user: bob to TCP port 443 is allowed; all other traffic is denied.
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-tcp-443
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - ports:
    - protocol: TCP
      port: 443
    from:
    - namespaceSelector:
        matchLabels:
          user: bob
NetworkPolicy Data Structure Definition
After reading the examples above, we should have a basic understanding of the NetworkPolicy resource object. Next, let's look at Kubernetes' definition of the interface:
type NetworkPolicy struct {
    TypeMeta
    ObjectMeta
    Spec NetworkPolicySpec
}

type NetworkPolicySpec struct {
    PodSelector unversioned.LabelSelector  `json:"podSelector"`
    Ingress     []NetworkPolicyIngressRule `json:"ingress,omitempty"`
}

type NetworkPolicyIngressRule struct {
    Ports *[]NetworkPolicyPort `json:"ports,omitempty"`
    From  *[]NetworkPolicyPeer `json:"from,omitempty"`
}

type NetworkPolicyPort struct {
    Protocol *api.Protocol       `json:"protocol,omitempty"`
    Port     *intstr.IntOrString `json:"port,omitempty"`
}

type NetworkPolicyPeer struct {
    PodSelector       *unversioned.LabelSelector `json:"podSelector,omitempty"`
    NamespaceSelector *unversioned.LabelSelector `json:"namespaceSelector,omitempty"`
}
In short, the resource specifies the "controlled-access pods" and the "admitted pods", which are configured through the spec's podSelector and the selectors under ingress.from respectively.
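As a concrete illustration of these two selectors, the sketch below builds the allow-frontend policy from the first template programmatically and prints it as YAML. It is an assumption-laden example: it uses today's networking/v1 Go types from k8s.io/api rather than the older unversioned types quoted above, and sigs.k8s.io/yaml for marshalling.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	tcp := corev1.ProtocolTCP
	port := intstr.FromInt(6379)

	np := networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-frontend", Namespace: "myns"},
		Spec: networkingv1.NetworkPolicySpec{
			// The "controlled-access pods": every pod labeled role=backend.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"role": "backend"},
			},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				// The "admitted pods": only pods labeled role=frontend ...
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"role": "frontend"},
					},
				}},
				// ... and only to TCP port 6379.
				Ports: []networkingv1.NetworkPolicyPort{{
					Protocol: &tcp, Port: &port,
				}},
			}},
		},
	}

	out, err := yaml.Marshal(np)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}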
Next, let's look at the implementation details of network policy with Kubernetes + Calico.
Test Versions
The component versions used in the test are as follows:
- Kubernetes
- master:v1.9.0
- node:v1.9.0
- Calico
- v2.5.0
- Calico-policy-controller
- quay.io/calico/kube-policy-controller:v0.7.0
Run Configuration
- On the Calico side, new resources in addition to the basic configuration:
  - ServiceAccount: calico-policy-controller
  - RBAC:
    - ClusterRole: calico-policy-controller
    - ClusterRoleBinding: calico-policy-controller
  - Deployment: calico-policy-controller
- On the Kubernetes side, the new NetworkPolicy resource.
Running State
On top of an otherwise normally working Kubernetes cluster, a new calico-policy-controller container is added; it mainly runs the controller process:
- calico-policy-controller:
Process:
/ # ps aux
PID   USER     TIME   COMMAND
    1 root       0:00 /pause
    7 root       0:00 /dist/controller
   13 root       0:12 /dist/controller
Port:
/ # netstat -apn | grep contr
tcp        0      0 10.138.102.219:45488    10.138.76.26:2379       ESTABLISHED 13/controller
tcp        0      0 10.138.102.219:44538    101.199.110.26:6443     ESTABLISHED 13/controller
As we can see, the controller process is running and has established two connections: one to port 6443, the Kubernetes api-server, and one to port 2379, Calico's etcd.
Packet Flow Through calico-felix's Policy Configuration
The diagram below shows Calico's packet-processing flow (found here). calico-felix on each node fetches the policy information from the etcd database and implements it with iptables; the centerpiece is the cali-pi-[policy] chain in the filter table.
Mark bits used during network policy packet processing:
- 0x2000000: whether the packet has matched a policy rule; 1 means it has.
Symbol explanation:
- cali-: prefix of Calico rule chains;
- wl: shorthand for workload endpoint;
- from-XXX: packets sent from XXX;
- to-XXX: packets sent to XXX;
- fw: shorthand for "from workload endpoint";
- tw: shorthand for "to workload endpoint";
- pi: shorthand for "policy inbound";
- po: shorthand for "policy outbound";
- pri: shorthand for "profile inbound";
- pro: shorthand for "profile outbound".
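Putting the shorthand together, the chain names that appear in the diagram and the dumps below can be composed mechanically from a prefix, a role, and an interface, policy, or profile name. A tiny illustrative snippet (the concrete names are taken from this article's own output):

package main

import "fmt"

func main() {
	iface := "cali1f79f9e08f2"       // host-side veth interface of a pod
	policy := "default.web-deny-all" // namespace-qualified policy name
	profile := "k8s-pod-network"     // Calico profile name

	fmt.Println("cali-fw-" + iface)    // filter chain for traffic from the workload
	fmt.Println("cali-tw-" + iface)    // filter chain for traffic to the workload
	fmt.Println("cali-pi-" + policy)   // policy inbound chain
	fmt.Println("cali-po-" + policy)   // policy outbound chain
	fmt.Println("cali-pri-" + profile) // profile inbound chain
	fmt.Println("cali-pro-" + profile) // profile outbound chain
}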
(Diagram: Calico iptables chain traversal. A received packet passes PREROUTING in the raw, mangle, and nat tables; traffic entering from a cali+ interface goes through cali-from-wl-dispatch and, after the routing decision, reaches the per-interface filter chains cali-fw-caliXXX (from workload) or cali-tw-caliXXX (to workload), which evaluate cali-po-[policy]/cali-pi-[policy] and then cali-pro-[profile]/cali-pri-[profile] before the packet is accepted, forwarded, or dropped. Host-bound and host-originated traffic passes cali-wl-to-host, cali-from-host-endpoint, and cali-to-host-endpoint in the filter table, and outgoing packets traverse cali-postrouting in the nat table.)
Below, we observe the corresponding iptables processing by sending traffic to a pod covered by a "deny all traffic" policy:
Before traffic enters
# iptables -nxvL cali-tw-cali1f79f9e08f2 -t filter
Chain cali-tw-cali1f79f9e08f2 (1 references)
    pkts  bytes target                        prot opt in  out  source     destination
       0      0 MARK                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:fthBuDq5I1oklYOL */ /* Start of policies */ MARK and 0xfdffffff
       0      0 cali-pi-default.web-deny-all  all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:Kp-Liqb4hWavW9dD */ mark match 0x0/0x2000000
       0      0 DROP                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:Qe6UBTrru3RfK2MB */ /* Drop if no policies passed packet */ mark match 0x0/0x2000000
After traffic enters
# iptables -nxvL cali-tw-cali1f79f9e08f2 -t filter
Chain cali-tw-cali1f79f9e08f2 (1 references)
    pkts  bytes target                        prot opt in  out  source     destination
       3    180 MARK                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:fthBuDq5I1oklYOL */ /* Start of policies */ MARK and 0xfdffffff
       3    180 cali-pi-default.web-deny-all  all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:Kp-Liqb4hWavW9dD */ mark match 0x0/0x2000000
       3    180 DROP                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:Qe6UBTrru3RfK2MB */ /* Drop if no policies passed packet */ mark match 0x0/0x2000000
As you can see, the DROP rule's pkts counter changed from 0 to 3. That is, the packets went through the MARK and cali-pi-default.web-deny-all targets, were never marked as accepted by any policy, and therefore fell through to DROP and were discarded.
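The mark-bit logic of this chain can be summarized in a few lines of code. The sketch below is only an illustration of the rules above, not Calico code: the "Start of policies" MARK rule clears bit 0x2000000, the empty cali-pi-default.web-deny-all chain never sets it back, and the final rule drops any packet that still lacks the bit.

package main

import "fmt"

const markPolicyPassed uint32 = 0x2000000

// caliPiWebDenyAll models the empty cali-pi-default.web-deny-all chain:
// it contains no rules, so it never marks a packet as accepted.
func caliPiWebDenyAll(mark uint32) uint32 { return mark }

func main() {
	mark := uint32(0xffffffff)

	mark &= 0xfdffffff // "Start of policies": clear bit 0x2000000
	mark = caliPiWebDenyAll(mark)

	if mark&markPolicyPassed == 0 {
		fmt.Println("DROP: no policies passed packet") // the rule whose pkts counter went 0 -> 3
	}
}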
Process Analysis Case
The following "deny all inbound traffic" test case walks through the overall process.
Model
- DENY all traffic to an application
View the web app's labels
A service named web has been created in the default namespace; its IP and labels are as follows:
# kubectl get service --all-namespaces | grep web
default   web   ClusterIP   192.168.82.141   <none>   80/TCP   1d
# kubectl get pod --all-namespaces -o wide --show-labels | grep web
default   web-667bdcb4d8-cpvbb   1/1   Running   0   1d   10.139.54.158   host30.add.bjdt.qihoo.net   app=web,pod-template-hash=2236876084
Configure Policy
First, view the Kubernetes resource through kubectl:
# kubectl get networkpolicy web-deny-all -o yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: web-deny-all
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
Next, view the corresponding Calico resources through calicoctl and etcdctl:
# calicoctl get policy default.web-deny-all -o yaml
- apiVersion: v1
  kind: policy
  metadata:
    name: default.web-deny-all
  spec:
    egress:
    - action: allow
      destination: {}
      source: {}
    order: 1000
    selector: calico/k8s_ns == 'default' && app == 'web'
# /home/test/etcdctl-wrapper-v2.sh get /calico/v1/policy/tier/default/policy/default.web-deny-all
{"outbound_rules": [{"action": "allow"}], "order": 1000, "inbound_rules": [], "selector": "calico/k8s_ns == 'default' && app == 'web'"}
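Note how the Kubernetes policy's namespace and podSelector are flattened into the single Calico selector string shown above. Below is a small sketch of that translation; the mapping is inferred from the output, and the helper is hypothetical rather than calico-policy-controller's actual code.

package main

import (
	"fmt"
	"sort"
	"strings"
)

// calicoSelector derives a Calico selector expression from a Kubernetes
// NetworkPolicy's namespace and podSelector.matchLabels.
func calicoSelector(namespace string, matchLabels map[string]string) string {
	parts := []string{fmt.Sprintf("calico/k8s_ns == '%s'", namespace)}
	keys := make([]string, 0, len(matchLabels))
	for k := range matchLabels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order for the expression
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s == '%s'", k, matchLabels[k]))
	}
	return strings.Join(parts, " && ")
}

func main() {
	fmt.Println(calicoSelector("default", map[string]string{"app": "web"}))
	// prints: calico/k8s_ns == 'default' && app == 'web'
}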
View Felix's logs when the network policy configuration is added and removed:
2018-02-11 11:13:22.029 [INFO][257] label_inheritance_index.go 203: Updating selector selID=Policy(name=default.api-allow)
2018-02-11 09:39:35.642 [INFO][257] label_inheritance_index.go 209: Deleting selector Policy(name=default.api-allow)
View the iptables rules on the node:
# iptables -nxvL cali-tw-cali96bc57f337a
Chain cali-tw-cali96bc57f337a (1 references)
    pkts  bytes target                        prot opt in  out  source     destination
       0      0 ACCEPT                        all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:osvcrqj8u46fxqej */ ctstate RELATED,ESTABLISHED
       0      0 DROP                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:nudtdcphcvic4flm */ ctstate INVALID
       2        MARK                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:qwgvpdfbxrygbhjv */ MARK and 0xfeffffff
       2        MARK                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:fnpchecllwo_kg1u */ /* Start of policies */ MARK and 0xfdffffff
       2        cali-pi-default.web-deny-all  all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:ibecyp2jurqbr2js */ mark match 0x0/0x2000000
       0      0 RETURN                        all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:dib1kwxuzz8dgrje */ /* Return if policy accepted */ mark match 0x1000000/0x1000000
       2        DROP                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:1o4pxupswz0zqjnr */ /* Drop if no policies passed packet */ mark match 0x0/0x2000000
       0      0 cali-pri-k8s-pod-network      all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:rb9gdlntqsxl3sen */
       0      0 RETURN                        all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:s2ldmknlgp_jspkk */ /* Return if profile accepted */ mark match 0x1000000/0x1000000
       0      0 DROP                          all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* cali:q8okjmm7e9tcfsqr */ /* Drop if no profiles matched */
Access the service from another pod
# kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -qO- --timeout=3 http://192.168.82.141:80
wget: download timed out
/ #
As you can see, access to the service's port 80 fails. Next, try pinging the corresponding pod:
# ping 10.139.54.158
PING 10.139.54.158 (10.139.54.158) 56(84) bytes of data.
^C
--- 10.139.54.158 ping statistics ---
45 packets transmitted, 0 received, 100% packet loss, time 44000ms
Pinging the pod also fails, which meets the "deny all inbound traffic" expectation.
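The same check can be scripted. Here is a minimal connectivity probe equivalent to the wget test above; it is a sketch rather than part of the article's test, and the service IP is simply the one shown earlier.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Attempt a TCP connection to the web service with a 3-second timeout,
	// mirroring `wget -qO- --timeout=3 http://192.168.82.141:80`.
	conn, err := net.DialTimeout("tcp", "192.168.82.141:80", 3*time.Second)
	if err != nil {
		fmt.Println("blocked as expected:", err) // deny-all policy in effect
		return
	}
	conn.Close()
	fmt.Println("unexpectedly reachable: policy not in effect")
}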
Summary
Kubernetes' NetworkPolicy implements access control and solves part of the network security problem. As of this writing, however, Kubernetes and Calico support for it is still incomplete; some features, such as egress, are still in progress. On the other hand, Calico configures a large number of iptables rules on every node, and as more control dimensions are added, operations and troubleshooting become harder. Users with network access control needs should therefore weigh these considerations before adopting it.
Resources:
- Securing Kubernetes Cluster Networking
- Github:ahmetb/kubernetes-network-policy-recipes
- NetworkPolicy API
- Principle, network mode and use of calico networks