FFmpeg and ffserver can be used together to build a real-time streaming media service, with the live data coming from a camera. Depending on encoding and network conditions, the picture the client sees always lags behind what the camera captures locally, and in the worst case the client sees no picture at all. What concerns us here is how FFmpeg and ffserver cooperate: understanding the relationship between them before digging into individual issues.

ffserver is started before FFmpeg. It is started with the -f parameter, which points at its configuration file. The configuration file describes the streams that will be sent to clients (encoding mode, frame rate, sample rate, and so on) and also declares feeds such as feed1.ffm. What is feed1.ffm? It can be understood as a buffer file; how it is used is described below.

When ffserver starts, it creates feed1.ffm. If you open feed1.ffm, you will find that content has already been written at the beginning: you can find the keyword FFM, followed by the configuration of the stream that will be delivered to clients. When feed1.ffm is later used as a buffer, this region is never overwritten, so we can treat it as the header of the feed1.ffm file.

After ffserver is up, FFmpeg is started. Its key parameter is the feed URL http://ip:8090/feed1.ffm, where ip is the address of the host running ffserver (use localhost if FFmpeg and ffserver run on the same machine). Once started, FFmpeg establishes a short-lived connection to ffserver and, over this first connection, obtains the configuration of the output stream that ffserver will send to clients; FFmpeg adopts this configuration as its own encoding settings. FFmpeg then disconnects and establishes a second, persistent connection, over which it sends the encoded data to ffserver. If you watch ffserver's output, you will see two HTTP 200 responses during this period, corresponding to these two connections.

FFmpeg reads data from the camera, encodes it according to the output stream's settings, and sends it to ffserver. When ffserver receives the data and there is no playback request on the network, it writes the data into the feed1.ffm buffer, adding some header information as it writes and splitting the data into blocks of 4096 bytes (each block also has a small structure of its own). Once feed1.ffm reaches the size specified in ffserver.conf, writing wraps around to just past the file header and overwrites the oldest data. When a playback request does arrive, ffserver reads data out of feed1.ffm and sends it to the client.

The above is a rough description of how FFmpeg and ffserver relate to each other in a real-time streaming media service. These observations come from reading fairly old FFmpeg code, and I do not know whether the architecture has changed since. I am throwing out a brick here in the hope of drawing jade: corrections and improvements are welcome.

To make the description concrete, a few sketches of the individual steps follow.
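First, the configuration file. The directives below follow documented ffserver.conf syntax, but the concrete values (port 8090, a 5M buffer, an FLV stream) are assumptions chosen for the example rather than anything from the original text:

    HTTPPort 8090
    HTTPBindAddress 0.0.0.0
    MaxClients 100

    # The feed: the buffer file that FFmpeg pushes encoded data into.
    <Feed feed1.ffm>
        File /tmp/feed1.ffm
        FileMaxSize 5M       # once this size is reached, old data is overwritten
    </Feed>

    # The stream served to clients. Its encoding parameters are what
    # FFmpeg fetches over the first (short-lived) connection.
    <Stream test.flv>
        Feed feed1.ffm
        Format flv
        VideoFrameRate 25
        VideoSize 640x480
        AudioSampleRate 44100
    </Stream>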
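The startup sequence then looks like this on a Linux machine with a V4L2 camera; the device path and the config path are placeholders:

    # 1. Start ffserver first, pointing it at its configuration file.
    ffserver -f /etc/ffserver.conf &

    # 2. Start FFmpeg with the feed URL as the output. FFmpeg first fetches
    #    the encoder settings from ffserver, then pushes encoded data to it.
    ffmpeg -f v4l2 -i /dev/video0 http://localhost:8090/feed1.ffm

    # 3. Any client can now request playback of the configured stream:
    ffplay http://localhost:8090/test.flv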
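At the HTTP level, the first of FFmpeg's two connections is an ordinary GET on the feed URL (the second is a POST carrying the encoded data). Assuming ffserver still accepts a plain GET on the feed the way the old code did, the beginning of the reply can be inspected by hand and should start with the FFM keyword mentioned above:

    # Peek at the first bytes of the feed resource (requires curl and xxd).
    curl -s http://localhost:8090/feed1.ffm | head -c 16 | xxd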
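About the header written at the start of feed1.ffm: in old FFmpeg sources, the FFM container begins with a 4-byte magic ("FFM1", later also "FFM2"), followed by a 32-bit big-endian packet size (4096 for ffserver feeds) and a 64-bit big-endian write index, and then the serialized stream configuration. The C sketch below dumps those leading fields; the layout beyond the magic is an assumption taken from that old code and may not match other FFmpeg versions:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Read n big-endian bytes from f into a 64-bit value. */
    static uint64_t read_be(FILE *f, int n) {
        uint64_t v = 0;
        for (int i = 0; i < n; i++)
            v = (v << 8) | (uint8_t)fgetc(f);
        return v;
    }

    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s feed1.ffm\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        char magic[5] = {0};
        if (fread(magic, 1, 4, f) != 4 || strncmp(magic, "FFM", 3) != 0) {
            fprintf(stderr, "not an FFM file\n");
            fclose(f);
            return 1;
        }
        /* Assumed layout, from old FFmpeg sources: magic, then a
         * be32 packet size, then a be64 write index. */
        uint32_t packet_size = (uint32_t)read_be(f, 4);
        uint64_t write_index = read_be(f, 8);

        printf("magic:       %s\n", magic);
        printf("packet size: %u bytes (ffserver uses 4096)\n", packet_size);
        printf("write index: %llu\n", (unsigned long long)write_index);
        fclose(f);
        return 0;
    }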
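Finally, the wrap-around behaviour: writing fixed 4096-byte blocks and, once the file reaches the size set in ffserver.conf, jumping back to just past the header. This is plain circular-buffer logic. The sketch below illustrates the idea only; it is not ffserver's actual code, and the fixed header size, file name, and limits are made up for the demo:

    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE  4096          /* each data block, as in the FFM format   */
    #define HEADER_SIZE 4096          /* assumed space reserved for the header   */
    #define FILE_MAX    (16 * 4096)   /* stand-in for FileMaxSize in the config  */

    /* Append one block to the feed file, wrapping back to just past the
     * header instead of growing beyond FILE_MAX. Returns the next offset. */
    static long feed_write_block(FILE *f, long offset, const unsigned char *block) {
        if (offset + BLOCK_SIZE > FILE_MAX)
            offset = HEADER_SIZE;     /* wrap: the header is never overwritten */
        fseek(f, offset, SEEK_SET);
        fwrite(block, 1, BLOCK_SIZE, f);
        return offset + BLOCK_SIZE;
    }

    int main(void) {
        FILE *f = fopen("feed1.ffm.demo", "w+b");
        if (!f) { perror("fopen"); return 1; }

        unsigned char header[HEADER_SIZE] = "FFM1";  /* fake header for the demo */
        fwrite(header, 1, HEADER_SIZE, f);

        unsigned char block[BLOCK_SIZE];
        long off = HEADER_SIZE;
        for (int i = 0; i < 40; i++) {   /* more blocks than fit: forces a wrap */
            memset(block, i, BLOCK_SIZE);
            off = feed_write_block(f, off, block);
        }
        fclose(f);
        return 0;
    }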