Analyzing hardware bottlenecks that affect streaming media server performance (reposted)

Source: Internet
Author: User

As the basic functional unit that serves users, the streaming media server's performance directly determines the service capability of the streaming media system. The most important measures of a streaming media server are its output throughput and the number of concurrent requests it can support. Taking a streaming server that uses local hard disks as its storage medium as an example, we first briefly analyze its working process:
(1) The streaming media content is read from the hard disk and passes through the disk interface circuitry (SCSI, IDE), the PCI bus, and the system's internal bus into memory (crossing two conversion interfaces along the way: the hard disk controller card and the PCI controller).
(2) Before the streaming media file is sent to the network, the CPU processes the stream fragments in memory, for example copying, slicing, and packaging them according to the protocol.
(3) The packaged file content travels from memory through the system's internal bus, the PCI controller, and the PCI bus to the network card.
(4) The network card wraps the file content once more and sends it to the external network.
As the above analysis shows, four key hardware factors affect the performance of a streaming media server: CPU processing power, memory, disk read capability, and network throughput.
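To make the bottleneck picture concrete, here is a minimal back-of-envelope sketch in Python. The per-stage throughput figures and the 4 Mbit/s stream rate are purely hypothetical placeholders; the point is only that the slowest stage of the data path caps the number of concurrent streams:

```python
# Back-of-envelope bottleneck estimate for the four-stage data path above.
# All throughput figures are hypothetical placeholders, not measurements.

STREAM_BITRATE_MBPS = 4  # e.g. one 4 Mbit/s video stream (assumed)

# Effective throughput each stage can sustain, in Mbit/s (assumed values)
stage_throughput_mbps = {
    "disk read (SCSI/IDE -> memory)": 1600,
    "CPU copy/slice/packetize":       6000,
    "memory -> PCI bus -> NIC":       4000,
    "NIC -> external network":        1000,
}

def max_concurrent_streams(stages, bitrate):
    """The slowest stage in the pipeline limits the whole server."""
    bottleneck_stage = min(stages, key=stages.get)
    capacity = stages[bottleneck_stage] // bitrate
    return bottleneck_stage, capacity

stage, users = max_concurrent_streams(stage_throughput_mbps, STREAM_BITRATE_MBPS)
print(f"Bottleneck stage: {stage}")
print(f"Max concurrent {STREAM_BITRATE_MBPS} Mbit/s streams: {users}")
```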

   (1) CPU processing power
The CPU of a streaming media server mainly copies, segments, and packages content files according to the protocol; responds to and processes the user's various service requests (such as fast-forward, rewind, and seek); and maintains and retrieves server list information.
The required CPU processing power grows with the number of concurrent users and services to be supported: the more concurrent users there are, and the more widely dispersed the programs they request on demand, the higher the CPU processing requirement. During a live broadcast, or when users request the same file on demand, the server provides every user with the same content; it only needs to read one copy of the source and then replicate and distribute it. When users request different programs, the server must not only distribute content but also extract it from multiple program sources. The additional disk reads and read/write operations require more processes to be started, leave each process a smaller time slice, and add more process-switching overhead. As a result, a server with the same configuration can serve thousands of simultaneous users for live streaming but only hundreds for on-demand services.
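A rough sketch of why the same hardware serves far fewer on-demand users than live users: the live source is read once and only copied, while in the worst case every on-demand user needs an independent source read. The user count and bit rate below are assumed for illustration:

```python
# Compare how many independent source reads the server must sustain.
# The numbers are illustrative assumptions, not benchmarks.

users = 1000
stream_bitrate_mbps = 4

# Live broadcast: every user receives the same content,
# so the source is read once and only copied/distributed.
live_source_read_mbps = 1 * stream_bitrate_mbps

# Video on demand: in the worst case every user watches a different
# program at a different position, so each user needs its own read.
vod_source_read_mbps = users * stream_bitrate_mbps

print(f"Live: {live_source_read_mbps} Mbit/s of source reads for {users} users")
print(f"VOD:  {vod_source_read_mbps} Mbit/s of source reads for {users} users")
```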
In addition, the CPU load differs at different stages of a streaming media session. In the initial connection stage, besides the normal file copying, segmentation, and protocol packaging, there are more interaction requests to process; and to shorten the time the user waits for the client buffer to fill, some systems raise the file transfer rate for a short period, which leads to more file reading and processing work and consumes more resources than the steady streaming stage.
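To see why the start-up phase costs more than steady playback, consider the fast-start behaviour mentioned above: filling an N-second client buffer in a fraction of that time means sending at a multiple of the encoding rate for a short burst. A tiny sketch with assumed figures:

```python
# Fast-start burst: fill the client's playout buffer quickly,
# then drop back to the steady encoding rate. All figures are assumed.

encoding_rate_mbps = 4       # steady playback rate
client_buffer_seconds = 10   # buffer the client fills before playback starts
fill_time_seconds = 2        # how quickly the server tries to fill it

burst_rate_mbps = encoding_rate_mbps * client_buffer_seconds / fill_time_seconds
print(f"Steady rate: {encoding_rate_mbps} Mbit/s")
print(f"Burst rate for the first {fill_time_seconds}s: {burst_rate_mbps} Mbit/s "
      f"({burst_rate_mbps / encoding_rate_mbps:.0f}x the steady rate)")
```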
   (2) Ability to read raw data
The original streaming media files are mainly stored on local hard disks or on NAS or SAN storage devices. Regardless of how the data is stored, the ability to read the raw data files directly affects server performance, and the required read capability depends strongly on the type of service and the number of user requests. Live streaming places the lowest demand on data read rates: although it serves many users, it only needs to fetch one copy of the data from the source and replicate it. On-demand streaming, by contrast, must read a different data source for each user, placing far greater read pressure on the storage.
The ability to read raw data is the biggest performance bottleneck of a streaming media server, limited mainly by the speed of the storage devices. Two solutions are currently in common use. The first is to use RAID technology and disk arrays to improve hardware speed, but the price is high. The second is file caching: if several users request the same program, the data can be read from the cache instead of the storage device, reducing the storage load. The effect of this method is limited, however. As the number of on-demand requests grows, the programs users request become more and more dispersed and the proportion of cache hits gradually decreases; once the hit rate drops below a certain level, the cache can no longer relieve the read pressure on the storage devices.
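The diminishing return of file caching can be illustrated with a small simulation: requests drawn from a Zipf-like popularity distribution are served by a fixed-size LRU cache, and the hit rate falls as the catalogue users draw from grows larger and requests disperse. The catalogue sizes, cache size, and popularity skew are all assumptions made for illustration:

```python
import random
from collections import OrderedDict

# Illustrative simulation: a fixed-size LRU cache of whole programs.
# As users' requests disperse over a larger catalogue, the cache hit rate
# drops and the storage device absorbs more of the read load.

def hit_rate(n_programs, cache_slots=50, n_requests=20000, skew=1.0):
    # Zipf-like popularity: program 0 is requested the most.
    weights = [1.0 / (rank ** skew) for rank in range(1, n_programs + 1)]
    requests = random.choices(range(n_programs), weights=weights, k=n_requests)

    cache = OrderedDict()
    hits = 0
    for program in requests:
        if program in cache:
            hits += 1
            cache.move_to_end(program)      # refresh LRU position
        else:
            cache[program] = True
            if len(cache) > cache_slots:
                cache.popitem(last=False)   # evict least recently used
    return hits / n_requests

for catalogue in (100, 1000, 10000):
    print(f"{catalogue:>6} programs, 50 cached: hit rate ~ {hit_rate(catalogue):.0%}")
```

Under these assumptions the hit rate falls noticeably as the catalogue grows, mirroring the trend described above.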
   (3) Memory
In a streaming media server, memory can be divided into two parts by purpose. One part is used to process each user's streaming media requests and sessions. The average memory usage per user depends on the publishing type and encoding settings of the streaming media content, such as bit rate, packet size, and the number of audio and video streams. Based on user behavior, the number of target users for the service, how widely user requests are dispersed across content, and the publishing point type, you can estimate how much memory a streaming media server needs.
The other part is used to cache data files. When the server processes, sends, and reads data from the storage device, it buffers the content in memory. When memory is insufficient, paging occurs, and memory paging causes unpredictable delays. A large amount of physical memory minimizes the latency caused by paging, and more memory provides a larger file cache, reducing the impact of the storage read bottleneck and improving server performance.
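A minimal sketch of the memory budget described above, with all inputs (user count, bit rate, buffer depth, per-session overhead, total RAM) assumed for illustration: the per-session part scales with users and bit rate, and whatever physical memory remains can be devoted to the file cache:

```python
# Rough memory budget for a streaming server. All inputs are assumptions.

concurrent_users = 500
bitrate_mbps = 4             # average encoding rate per stream
buffer_seconds = 5           # per-session send buffer depth
per_session_overhead_mb = 2  # protocol/session bookkeeping per user

# Part 1: per-user session memory
per_user_buffer_mb = bitrate_mbps * buffer_seconds / 8   # Mbit -> MByte
session_memory_mb = concurrent_users * (per_user_buffer_mb + per_session_overhead_mb)

# Part 2: whatever is left of physical RAM goes to the file cache
physical_ram_mb = 32 * 1024
os_and_process_mb = 2 * 1024
file_cache_mb = physical_ram_mb - os_and_process_mb - session_memory_mb

print(f"Session memory: {session_memory_mb:,.0f} MB")
print(f"File cache:     {file_cache_mb:,.0f} MB")
```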
   (4) Network throughput
The service capability of the server's network interface affects data transmission; when network bandwidth is insufficient, sending and receiving data is delayed, which can interrupt user service. For on-demand programs, the server's network throughput depends only on the number of users and the encoding bit rate.
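Since on-demand throughput scales directly with the user count and the encoding rate, the required network interface capacity can be sized accordingly. The figures below, including the protocol-overhead factor, are assumptions for illustration:

```python
# Size the network interface for an on-demand service.
# User count, bit rate, and overhead factor are assumed values.

concurrent_users = 800
bitrate_mbps = 4
protocol_overhead = 1.1      # ~10% for packet headers etc. (assumption)

required_mbps = concurrent_users * bitrate_mbps * protocol_overhead
print(f"Required NIC throughput: {required_mbps:,.0f} Mbit/s "
      f"(~{required_mbps / 1000:.1f} Gbit/s)")
```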
For a streaming media server, any of these factors can prevent the server from functioning properly. Moreover, the various performance factors and processing stages are interrelated and influence one another. For example, when memory is insufficient, heavy memory paging occurs, which can quickly drive CPU usage to 100%; increasing the server's memory capacity reduces hard disk read operations. Therefore, when configuring a server, try to coordinate and match the performance of all components so that the system is balanced, while also taking the prices of the different components into account. For example, because storage devices are expensive, a server can be configured with a lower-end CPU and large memory to improve cost-effectiveness.
