ROS data-taking thread analysis (4), without assembly: effects of the socket options SO_SNDBUF and SO_RCVBUF on bandwidth and CPU

Source: Internet
Author: User
Tags: CPU usage

Yesterday I used ROS, iperf, and nettest to measure the bandwidth and CPU usage of the CMM02NODE06 --> CMM02NODE01 link, sending packets of length 2 KB. The test results were:

          Bandwidth   Sender CPU   Receiver CPU
ROS       4.42 Gb/s   80%          100%
iperf     4.66 Gb/s   100%         70%
nettest   4.38 Gb/s   100%         71%

It turned out that iperf and nettest both drive the sender's CPU to 100% while the receiver uses only about 70%, whereas with ROS the receiver's CPU reaches 100% and the sender's sits at 80%. Why does the CPU utilization pattern of ROS differ from the other two tools?

Why does the CPU utilization at the ROS receiving end reach 100%? To eliminate the effects of ROS's polling, checking, buffering, and so on, the ROS data-taking path was simplified to a direct recv thread:

1. In SequentialInputHandler_directrecv.cpp, replace SequentialInputHandler in the requirement with SequentialInputHandler_directrecv; the main change is that readyToReadout and releaseFragment are commented out. See:

http://files.cnblogs.com/files/zengtx/ROS_SequentialInputHandler_directrecv.pdf

2. The inputFragment function in SequentialDataChannel.cpp is commented out and replaced with a simple version:

/*****************************************************/
unsigned int SequentialDataChannel::inputFragment(void)
/*****************************************************/
{
    if (!buffer) buffer = new char[m_robPageSize*100000*401];
    char buffer[2048*20];
    this->getNextFragment((unsigned int*) buffer, (unsigned long) m_robPageSize);
    return 0;
}

In this way, the data-taking model of the ROS receiver is simplified to a while loop that blocks in recv.

At this point the test results (msg_size = 2048) are: bandwidth 6.7 Gb/s, receiver CPU utilization still 100%, sender CPU utilization 83%.

                 Bandwidth   Sender CPU   Receiver CPU
ROS              4.42 Gb/s   80%          100%
ROS_directrecv   6.7 Gb/s    83%          100%
iperf            4.66 Gb/s   100%         70%
nettest          4.38 Gb/s   100%         71%

With the receiving thread simplified, the fraction of time spent in recv increases, which raises the sender's CPU usage somewhat. But why does the receiver's CPU still reach 100% before the sender's? And why is the bandwidth of ROS_directrecv higher than that of iperf and nettest?

Comparing the code of nettest and ROS revealed that the ROS code sets two extra socket options, SO_SNDBUF and SO_RCVBUF, both to 256*1024. Comment out these two setsockopt calls and test again.

                             Bandwidth   Sender CPU   Receiver CPU
ROS                          4.42 Gb/s   80%          100%
ROS_directrecv               6.7 Gb/s    83%          100%
ROS_directrecv_nosetsockopt  4.46 Gb/s   100%         70%
iperf                        4.66 Gb/s   100%         70%
nettest                      4.38 Gb/s   100%         71%

With the SO_SNDBUF and SO_RCVBUF settings removed, the measured bandwidth becomes smaller, the sender's CPU reaches 100%, the receiver's CPU is about 70%, and the results are close to those of iperf and nettest.

READ: http://bbs.csdn.net/topics/310236933

"SO_SNDBUF and SO_RCVBUF set the sizes of the system buffers, which directly affects how the system traverses the buffers when sending and receiving data. For example, if you actually send very little data but the receive buffer is very large, the system does more useless traversal, which certainly hurts efficiency."

When the packet size is small and the CPU is the bottleneck, the values of SO_SNDBUF and SO_RCVBUF affect system performance. Which values are optimal must be determined by concrete testing.
