"Kubernetes" k8s network isolation Scheme


Resources:

k8s Network Isolation Reference
OpenContrail is an open source network virtualization platform for the cloud. - kube-o-contrail: get your hands dirty with Kubernetes and OpenContrail
OpenContrail Architecture Document - Flying Eagle's Diary - NetEase Blog
OpenContrail Study (1) - wanjia19870902's Column - CSDN.net
Calico - Project Calico Documentation
Make the Most of Kubernetes' New Network Policy API - The New Stack
Project Calico | A Pure Layer 3 Approach to Virtual Networking
High-performance Network Policies in Kubernetes Clusters - GoodRain Cloud Help - SegmentFault
romana/romana: The Romana project - installation scripts, documentation, issue tracker and wiki. Start here.

Since the release of Kubernetes 1.3 in July, users have been able to define and enforce network policies in their clusters. These policies are firewall rules that specify which types of traffic are allowed to flow in and out. If required, Kubernetes can block all traffic that is not explicitly allowed. This article introduces the k8s network policy feature and tests its network performance.

Network Policy

K8s network policies are applied to groups of pods identified by common labels. Labels can then be used to mimic traditional segmented networks, which are often used to isolate the layers of a multi-tier application: for example, you might identify front-end and back-end pods by a specific "segment" label. Policies control the flow of traffic between these segments and even traffic from external sources.

Segmented traffic

What does this mean for application developers? Kubernetes has finally acquired the capability needed to provide defense in depth. Traffic can be segmented, and different parts of an application can be secured independently. For example, you can very easily protect each service with its own network policy: all the pods behind a service, as identified by its replication controller, carry a specific label, so the same label can be used to apply a policy to those pods, as sketched below.
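For instance, a minimal sketch of how such a shared label might look (the names, image, and label values here are illustrative assumptions, chosen to match the policy example shown further down):

apiVersion: v1
kind: ReplicationController
metadata:
  name: backend            # hypothetical name
spec:
  replicas: 2
  selector:
    role: backend          # all pods behind the service share this label
  template:
    metadata:
      labels:
        role: backend      # the same label a network policy can select on
    spec:
      containers:
      - name: web
        image: nginx       # illustrative image choice
        ports:
        - containerPort: 80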

Defense in depth has long been recommended as a best practice. On AWS and OpenStack, this kind of isolation between different parts or layers of an application is easily implemented by applying security groups to VMs.

For containers, however, this kind of isolation was not possible until network policy arrived. VXLAN overlays can provide simple network isolation, but application developers need finer-grained control over the traffic reaching their pods. As this simple example shows, a Kubernetes network policy can manage traffic based on source and destination, protocol, and port.

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: pol1
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: tcp
      port: 80
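For a policy like this to take effect with the 1.3-era beta API, ingress isolation first has to be switched on for the namespace; a sketch of the typical steps follows (the namespace name "myns" is an assumption for illustration):

# hypothetical namespace "myns": deny all ingress traffic by default
kubectl annotate namespace myns \
  "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"
# then allow only what the policy permits
kubectl create -f pol1.yaml   # the NetworkPolicy shown above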
Not all network backends support policies

Network policy is an exciting feature that the Kubernetes community has worked on for a long time. It does, however, require a network backend capable of enforcing policies. A simple routed network, or the commonly used flannel network plugin, for example, cannot apply network policies by itself.

Today only a few Kubernetes network components support policies: Romana, Calico, and Canal; Weave has indicated that support is coming soon. Red Hat's OpenShift also includes network policy features.

We chose Romana as the backend for these tests because it configures pods with natively routable IP addresses in a full L3 configuration. Network policies can therefore be applied directly by the host's Linux kernel using iptables rules. The result is a high-performance, easy-to-manage network.

Testing the performance impact of network policies

Once a network policy is applied, network packets need to be checked against the policy to verify that the traffic is allowed. But what is the performance penalty of applying a network policy to every packet? Can we use all the policy features without impacting application performance? We decided to find out by running some tests.

Before delving into these tests, it is worth mentioning that "performance" is a tricky thing to measure, especially for networks. Throughput (i.e., data transfer speed measured in Gbps) and latency (the time to complete a request) are common measures of network performance. Earlier articles comparing k8s network latency and k8s network options have already examined the performance impact of running overlay networks on throughput and latency. What we learned from those tests is that Kubernetes networking is generally quite fast, and servers have no trouble saturating a 1G link, with or without an overlay. It is only on 10G networks that you need to start thinking about the overhead of encapsulation.

This is because during a typical network performance benchmark there is no application logic for the host CPU to execute, leaving it free for whatever network processing is needed. For this reason we ran our tests in an operating range that saturated neither the link nor the CPU. This isolates the effect of processing network policy rules on the host. For these tests, we decided to measure latency, defined as the average time required to complete an HTTP request across a range of response sizes.

Test setup

Hardware

Both servers use Intel Core i5-5250U CPUs (2 cores, 2 threads per core) running at 1.60 GHz, with 16 GB of RAM and a 512 GB SSD.

    • NIC: Intel Ethernet Connection I218-V (rev 03)
    • Ubuntu 14.04.5
    • Kubernetes 1.3 (sample validated on v1.4.0-beta.5)
    • Romana v0.9.3.1
    • Client and server load test software

For the tests, we had a client pod send 2,000 HTTP requests to a server pod. The HTTP requests were sent at a rate that ensured neither the server nor the network was saturated. We also made sure that each request started a new TCP connection by disabling persistent connections (i.e., HTTP keep-alive). We ran each test with different response sizes and measured the average request duration (how long it takes to complete a request of that size). Finally, we repeated each set of measurements with different policy configurations.
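A minimal sketch of such a measurement loop, assuming a hypothetical server pod reachable at http://server-pod with an endpoint that returns a response of the desired size:

# send 2,000 requests, each on a fresh TCP connection (keep-alive disabled),
# and print the average time per request
for i in $(seq 1 2000); do
  curl -s -o /dev/null -H 'Connection: close' \
       -w '%{time_total}\n' http://server-pod/0.5k   # hypothetical endpoint
done | awk '{ sum += $1 } END { printf "avg: %.4f s\n", sum/NR }'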

Romana detects when a Kubernetes network policy is created, translates it into Romana's own policy format, and then applies it on all hosts. Currently, Kubernetes network policies apply only to ingress traffic; outgoing traffic is not affected.

First, we ran tests without any policies to establish a baseline. We then ran the tests again, increasing the number of policies for the test's network segment. The policies were of the common "allow traffic for a given protocol and port" form. To ensure that packets had to traverse all of the policies, we created a number of policies that would not match the packets, and finally a policy that would cause the packets to be accepted.
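One way such a set of policies might be generated (a sketch under the assumption that the test traffic uses port 80, so policies on other ports never match; the names and port numbers are illustrative):

# create 199 policies that the test traffic (port 80) never matches
for i in $(seq 1 199); do
  cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: nomatch-$i
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: tcp
      port: $((8000 + i))   # a port the test traffic never uses
EOF
done
# finally, one policy that accepts the test traffic on port 80
kubectl create -f pol1.yaml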

The following table shows the results (in milliseconds) for different request sizes and numbers of policies:

What we see here is that as the number of policies increases, the processing of network policies introduces only a very small delay, never exceeding 0.2 ms even after 200 policies have been applied. For all practical purposes, applying a network policy introduces no meaningful latency. It is also worth noting that doubling the response size from 0.5k to 1.0k has almost no effect. This is because for very small responses, the fixed overhead of creating a new connection dominates the overall response time (i.e., the same number of packets is transferred).

Note: the 0.5k and 1.0k lines overlap at roughly 0.8 ms.

Even as a percentage of baseline performance, the impact is still small. The following table shows that for the smallest response size, the worst-case latency remains at 7% or less, up to 200 policies. For larger response sizes, the penalty drops to about 1%.

Also of interest in these results is that as the number of policies increases, larger requests experience a smaller relative (i.e., percentage) performance degradation.

This is because when Romana installs iptables rules, it ensures that packets belonging to established connections are evaluated first. The full list of policies only needs to be traversed for the first packet of a connection. After that, the connection is considered "established," and its state is kept in a fast lookup table. Larger requests therefore deliver most of their packets via a quick lookup in the "established" table rather than a full traversal of all the rules. The performance of this iptables optimization is largely independent of the number of network policies.
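The general pattern looks roughly like this (a sketch of the common iptables idiom, not Romana's actual rule set; the chain name is an illustrative assumption):

# hypothetical chain holding one rule per network policy
iptables -N POLICY-RULES
# packets on already-established connections take a fast conntrack lookup first
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# only the first packet of each new connection traverses the policy rules
iptables -A FORWARD -j POLICY-RULES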

Such "flow tables" are common optimizations in network devices, and it seems that iptables using the same technology is quite effective.

It is also worth noting that in practice, even a fairly complex application will typically configure only a few dozen rules per segment. Likewise, common network optimization techniques such as WebSockets and persistent connections will improve the performance of network policies even further (especially for small request sizes), since connections stay open longer and can therefore benefit from the established-connection optimization.

These tests were performed using Romana as the backend policy provider; other network policy implementations may produce different results. However, the tests show that for almost every application deployment scenario, network policies can be applied with Romana as the network backend without any negative performance impact.

If you want to try it for yourself, we recommend Romana. In our GitHub repository you can find an easy-to-use installer, which works with AWS, Vagrant VMs, or any other servers.

Summary

As the feature introduction and test analysis above show, k8s can control traffic between applications at a fine granularity, and the network performance loss is within an acceptable range.

GoodRain Cloud Help currently runs k8s 1.2.x in production, a version without network policy control, so we implement access control through network plugins instead.

We are now performing performance and compatibility testing of k8s 1.3.x for the production environment and will then upgrade all Enterprise Edition clusters; the Community Edition will be upgraded on the 25th of the month following the Enterprise upgrade.

Next, I will test and analyze network interoperability, network isolation control, and the associated performance loss using Calico with k8s, and will share and discuss the results in future articles.

Original link: http://blog.kubernetes.io/2016/09/high-performance-network-policies-kubernetes.html

"Kubernetes" k8s network isolation Scheme
