TCP Send Series: Managing the Send Buffer (2)

TCP send buffer management takes place at two levels: the individual socket and the TCP layer as a whole.

The previous post covered send buffer management at the individual socket level; this one looks at send buffer management at the TCP layer.

Checking whether a send buffer request is valid at the TCP layer

sk_stream_memory_free() is called to check whether the size of the sock's send queue has reached the upper limit of its send buffer.

If the sock's send buffer limit has been reached, the sender goes to sleep and waits for the send buffer to become writable again.

This is the socket-level decision on whether send buffer can be allocated.

After sk_stream_alloc_skb() has been called to allocate send buffer, the request must also be validated at the TCP layer.

If it is not valid, the newly allocated skb is released with __kfree_skb(). A send buffer request therefore has to pass checks at both levels.
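To see where the two levels meet in the code, the sketch below outlines sk_stream_alloc_skb(). This is a simplified sketch rather than a verbatim copy of the kernel source (the function name carries a _sketch suffix to make that clear, and some bookkeeping is trimmed): the skb is allocated first, then sk_wmem_schedule(), the TCP-level check discussed next, decides whether charging skb->truesize is allowed, and the skb is released with __kfree_skb() if it is not.

[C]
/* Simplified sketch of sk_stream_alloc_skb(): allocate the skb at the
 * socket level, then ask the TCP layer whether the charge is allowed.
 */
static struct sk_buff *stream_alloc_skb_sketch(struct sock *sk, int size,
                                               gfp_t gfp)
{
    struct sk_buff *skb;

    skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp);
    if (skb) {
        /* TCP-level validity check of the send buffer request */
        if (sk_wmem_schedule(sk, skb->truesize)) {
            skb_reserve(skb, sk->sk_prot->max_header);
            return skb;
        }
        /* request rejected by the TCP layer: release the skb */
        __kfree_skb(skb);
    } else {
        /* the allocation itself failed: enter memory pressure and
         * shrink this sock's send buffer ceiling
         */
        sk->sk_prot->enter_memory_pressure(sk);
        sk_stream_moderate_sndbuf(sk);
    }
    return NULL;
}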

Whether a send buffer request is valid at the TCP layer depends on both the memory usage of the TCP layer as a whole and the send buffer usage of the socket itself. sk->sk_forward_alloc is the buffer quota pre-allocated to the sock, i.e. memory that the sock has been granted in advance.

After a new send buffer has been allocated, if sk->sk_forward_alloc < skb->truesize, the pre-allocated quota has been used up and sk_wmem_schedule() must be called to validate the request at the TCP layer. Otherwise, no further check is needed.

[C]
static inline bool sk_wmem_schedule(struct sock *sk, int size)
{
    /* TCP uses memory accounting, so this condition is false */
    if (!sk_has_account(sk))
        return true;

    /* If the requested size (skb->truesize) is no larger than the
     * pre-allocated but unused quota of this sock, no further check is
     * needed. Otherwise, the TCP layer must decide whether the send
     * buffer request is valid.
     */
    return size <= sk->sk_forward_alloc ||
        __sk_mem_schedule(sk, size, SK_MEM_SEND);
}

static inline bool sk_has_account(struct sock *sk)
{
    /* return true if protocol supports memory accounting */
    return !!sk->sk_prot->memory_allocated;
}

/* return minimum truesize of one skb containing X bytes of data */
#define SKB_TRUESIZE(X) ((X) +                                          \
                         SKB_DATA_ALIGN(sizeof(struct sk_buff)) +       \
                         SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))

__sk_mem_schedule() decides at the TCP layer whether the send buffer request is valid. If it is, the sock's pre-allocated quota sk->sk_forward_alloc and the total TCP layer memory usage tcp_memory_allocated are updated; the latter is counted in pages.
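As a concrete illustration of the two units involved, the small userspace program below shows how a request expressed in bytes is rounded up to pages and then charged to sk_forward_alloc and tcp_memory_allocated. The macro values mirror the kernel's definitions but are redefined here, assuming a 4 KiB page, so the example compiles outside the kernel.

[C]
#include <stdio.h>

/* Userspace illustration of the bytes-to-pages rounding performed by
 * sk_mem_pages(). These macros mirror the kernel definitions (assuming
 * PAGE_SIZE == 4096).
 */
#define SK_MEM_QUANTUM        4096
#define SK_MEM_QUANTUM_SHIFT  12

static int sk_mem_pages(int amt)
{
    return (amt + SK_MEM_QUANTUM - 1) >> SK_MEM_QUANTUM_SHIFT;
}

int main(void)
{
    int truesize = 4416;    /* a plausible skb->truesize for one segment */
    int pages = sk_mem_pages(truesize);

    printf("request of %d bytes is charged as %d page(s)\n",
           truesize, pages);
    printf("sk->sk_forward_alloc grows by %d bytes\n",
           pages * SK_MEM_QUANTUM);
    printf("tcp_memory_allocated grows by %d page(s)\n", pages);
    return 0;
}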

Q: under which circumstances is a send buffer request valid?

1. The TCP layer's memory usage is below the lower threshold sysctl_tcp_mem[0].

2. The sock's send buffer usage is below the minimum sysctl_tcp_wmem[0].

3. The TCP layer is not under memory pressure, i.e. its memory usage is below sysctl_tcp_mem[1].

4. The TCP layer is under memory pressure, but the memory used by the current socket is not excessive.

5. The TCP layer's memory usage exceeds the upper limit sysctl_tcp_mem[2], but after the send buffer ceiling has been lowered, the total size of the send queue already reaches that ceiling; the caller will then go to sleep and wait for buffer space anyway, so the request is still treated as valid.

In most cases, then, a send buffer request is valid, unless TCP memory usage has hit its limits.
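The thresholds referred to above can be inspected at run time. The minimal userspace example below reads them from their standard procfs locations (these paths are the usual sysctl locations, not something introduced by this article); remember that tcp_mem is expressed in pages while tcp_wmem is in bytes.

[C]
#include <stdio.h>

/* Print the three-value limits the checks above compare against. */
static void dump_limits(const char *path)
{
    char line[128];
    FILE *f = fopen(path, "r");

    if (f) {
        if (fgets(line, sizeof(line), f))
            printf("%-30s %s", path, line);
        fclose(f);
    }
}

int main(void)
{
    dump_limits("/proc/sys/net/ipv4/tcp_mem");  /* min, pressure, max (pages) */
    dump_limits("/proc/sys/net/ipv4/tcp_wmem"); /* min, default, max (bytes)  */
    return 0;
}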

Besides deciding whether the send buffer request is valid, __sk_mem_schedule() also does the following:

1. If TCP memory usage is below the lower threshold sysctl_tcp_mem[0], the TCP memory pressure flag tcp_memory_pressure is cleared.

2. If TCP memory usage exceeds the pressure threshold sysctl_tcp_mem[1], the TCP memory pressure flag tcp_memory_pressure is set to 1.

3. If TCP memory usage exceeds the upper limit sysctl_tcp_mem[2], the sock's send buffer ceiling sk->sk_sndbuf is reduced.

A return value of 1 means the send buffer request is valid; 0 means it is not.

[C]
/* increase sk_forward_alloc and memory_allocated
 * @sk:   socket
 * @size: memory size to allocate
 * @kind: allocation type
 *
 * If kind is SK_MEM_SEND, it means wmem allocation. Otherwise it means
 * rmem allocation. This function assumes that protocols which have
 * memory pressure use sk_wmem_queued as write buffer accounting.
 */
int __sk_mem_schedule(struct sock *sk, int size, int kind)
{
    struct proto *prot = sk->sk_prot; /* for TCP this is tcp_prot */
    int amt = sk_mem_pages(size);     /* convert size to pages, rounded up */
    long allocated;
    int parent_status = UNDER_LIMIT;

    /* update the sock's pre-allocated quota */
    sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;

    /* update the total TCP layer memory usage tcp_memory_allocated,
     * counted in pages
     */
    allocated = sk_memory_allocated_add(sk, amt, &parent_status);

    /* Under limit: TCP memory usage is below the lower threshold
     * sysctl_tcp_mem[0]
     */
    if (parent_status == UNDER_LIMIT &&
        allocated <= sk_prot_mem_limits(sk, 0)) {
        sk_leave_memory_pressure(sk); /* clear the tcp_memory_pressure flag */
        return 1;
    }

    /* Under pressure (we or our parents): if TCP memory usage exceeds the
     * pressure threshold sysctl_tcp_mem[1], set the TCP layer's memory
     * pressure flag tcp_memory_pressure to 1.
     */
    if ((parent_status > SOFT_LIMIT) ||
        allocated > sk_prot_mem_limits(sk, 1))
        sk_enter_memory_pressure(sk);

    /* Over hard limit (we or our parents): if TCP memory usage exceeds the
     * upper limit sysctl_tcp_mem[2], the sock's send buffer ceiling
     * sk->sk_sndbuf will be reduced.
     */
    if ((parent_status == OVER_LIMIT) ||
        (allocated > sk_prot_mem_limits(sk, 2)))
        goto suppress_allocation;

    /* guarantee minimum buffer size under pressure:
     * whether sending or receiving, make sure the sock can use at least
     * sysctl_tcp_{r,w}mem[0] of memory
     */
    if (kind == SK_MEM_RECV) {
        if (atomic_read(&sk->sk_rmem_alloc) < prot->sysctl_rmem[0])
            return 1;

    } else { /* SK_MEM_SEND */
        if (sk->sk_type == SOCK_STREAM) {
            if (sk->sk_wmem_queued < prot->sysctl_wmem[0])
                return 1;
        } else if (atomic_read(&sk->sk_wmem_alloc) <
                   prot->sysctl_wmem[0])
                return 1;
    }

    if (sk_has_memory_pressure(sk)) {
        int alloc;

        /* if TCP is not under memory pressure, the request is valid */
        if (!sk_under_memory_pressure(sk))
            return 1;

        /* number of sockets currently using TCP */
        alloc = sk_sockets_allocated_read_positive(sk);

        /* if the memory used by the current socket is not excessive,
         * the request is valid
         */
        if (sk_prot_mem_limits(sk, 2) > alloc *
            sk_mem_pages(sk->sk_wmem_queued +
                         atomic_read(&sk->sk_rmem_alloc) +
                         sk->sk_forward_alloc))
            return 1;
    }

suppress_allocation:

    if (kind == SK_MEM_SEND && sk->sk_type == SOCK_STREAM) {
        /* lower the sock's send buffer ceiling: sndbuf may not exceed half
         * of the total size of the send queue, but also may not drop below
         * twice the minimum packet truesize
         */
        sk_stream_moderate_sndbuf(sk);

        /* Fail only if socket is _under_ its sndbuf.
         * In this case we cannot block, so that we have to fail.
         */
        if (sk->sk_wmem_queued + size >= sk->sk_sndbuf)
            return 1;
    }

    trace_sock_exceed_buf_limit(sk, prot, allocated);

    /* Getting here means the send buffer request is invalid:
     * undo the accounting updates made above. (Alas. Undo changes.)
     */
    sk->sk_forward_alloc -= amt * SK_MEM_QUANTUM;
    sk_memory_allocated_sub(sk, amt);

    return 0;
}

/* convert a number of bytes (amt) to a number of pages, rounded up */
static inline int sk_mem_pages(int amt)
{
    return (amt + SK_MEM_QUANTUM - 1) >> SK_MEM_QUANTUM_SHIFT;
}

#define SK_MEM_QUANTUM ((int)PAGE_SIZE)

/* update and return the total TCP layer memory usage tcp_memory_allocated,
 * counted in pages
 */
static inline long
sk_memory_allocated_add(struct sock *sk, int amt, int *parent_status)
{
    struct proto *prot = sk->sk_prot;

    /* cgroup-related handling, omitted here */
    if (mem_cgroup_sockets_enabled && sk->sk_cgrp) {
        ...
    }

    return atomic_long_add_return(amt, prot->memory_allocated);
}
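For completeness, sk_stream_moderate_sndbuf(), which the over-limit path above relies on, behaves roughly as in the sketch below (reconstructed from memory of kernels of this era, so treat it as a sketch rather than the authoritative source): unless the application has pinned the buffer size with SO_SNDBUF, the send buffer ceiling is capped at half of what is already queued, but never drops below SOCK_MIN_SNDBUF.

[C]
/* Sketch of sk_stream_moderate_sndbuf(): shrink the send buffer ceiling
 * under memory pressure, respecting an application-set SO_SNDBUF and the
 * minimum SOCK_MIN_SNDBUF (roughly two minimum-truesize packets).
 */
static inline void stream_moderate_sndbuf_sketch(struct sock *sk)
{
    if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK)) {
        sk->sk_sndbuf = min(sk->sk_sndbuf, sk->sk_wmem_queued >> 1);
        sk->sk_sndbuf = max_t(u32, sk->sk_sndbuf, SOCK_MIN_SNDBUF);
    }
}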

