FFmpeg and ffserver can be used together to implement a real-time streaming media service, with the real-time data coming from a camera. Depending on the application and on network conditions, the image the client sees will always lag behind what the camera captures locally; in the worst case no image is visible at all (it falls even further behind). Here we are concerned with how FFmpeg and ffserver cooperate: first understand the relationship between them, then dig into individual issues.
ffserver is started before FFmpeg. When started, it takes the -f parameter to specify its configuration file. The configuration file describes the streams to be delivered to clients (encoding method, frame rate, sample rate, and so on), and it also declares feeds such as feed1.ffm. What is feed1.ffm? It can be understood as a buffer file; how it is used is described below. When ffserver starts, it creates feed1.ffm. If you open feed1.ffm, you will find that some content has already been written at its beginning: you can find the keyword FFM, followed by the configuration of the streams to be sent to clients. When feed1.ffm is later used for buffering, this information is never overwritten; think of it as the header of the feed1.ffm file.
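For reference, a minimal ffserver.conf along these lines might look as follows; the stream name, sizes and bitrates here are illustrative assumptions, not taken from any particular setup:

Port 8090
BindAddress 0.0.0.0
MaxClients 100
MaxBandwidth 1000

# The feed that FFmpeg pushes into; FileMaxSize bounds the buffer file.
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 5M
</Feed>

# One output stream built from the feed; clients request /test.flv.
<Stream test.flv>
Feed feed1.ffm
Format flv
VideoFrameRate 25
VideoSize 352x288
VideoBitRate 256
NoAudio
</Stream>

# The status page that appears later in this thread.
<Stream stat.html>
Format status
</Stream>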
After ffserver is up, FFmpeg is started. One key parameter is http://ip:8090/feed1.ffm, where ip is the address of the host running ffserver; if FFmpeg and ffserver run on the same machine, use localhost. Once started, FFmpeg first establishes a short-lived connection to ffserver. Through this first connection, FFmpeg obtains from ffserver the configuration of the stream to be output to clients, and adopts that configuration for its own encoding output. FFmpeg then disconnects and establishes a second, persistent connection to ffserver, over which it sends the encoded data. If you watch ffserver's output, you will see two HTTP requests answered with 200 during this period; these correspond to the two connections.
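In command-line form, the feed side might look like the sketch below; the capture device and input options are placeholder assumptions (a V4L2 camera on Linux), since the encoding parameters themselves are taken over from ffserver's stream configuration:

ffmpeg -f video4linux2 -i /dev/video0 http://localhost:8090/feed1.ffm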
After obtaining data from the camera, FFmpeg encodes it according to the encoding configuration of the output stream and then sends it to ffserver. Once the data is received by ffserver, if there is no playback request from the network, it writes the data into the feed1.ffm buffer, adding some header information and splitting the data into blocks of 4096 bytes each (each block also has its own internal structure). When feed1.ffm reaches the size specified in ffserver.conf, writing wraps around to the start of the file (skipping the header) and overwrites the old data. When a playback request does arrive, ffserver reads data out of feed1.ffm and sends it to the client.
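The wrap-around behaviour amounts to something like the following sketch; the constants and function are mine for illustration, not ffserver's actual internals (in particular I am assuming the header occupies the first 4096-byte block):

#include <stdio.h>

#define FFM_HEADER_SIZE 4096   /* assumed: the header fills the first block */
#define FFM_BLOCK_SIZE  4096   /* each buffered block is 4096 bytes */

/* Write one block into the feed file, wrapping past the header
   once the configured maximum file size is reached. */
static void feed_write_block(FILE *f, const unsigned char *block,
                             long *write_pos, long file_max_size)
{
    if (*write_pos + FFM_BLOCK_SIZE > file_max_size)
        *write_pos = FFM_HEADER_SIZE;   /* skip the header, overwrite old data */
    fseek(f, *write_pos, SEEK_SET);
    fwrite(block, 1, FFM_BLOCK_SIZE, f);
    *write_pos += FFM_BLOCK_SIZE;
}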
The rough description above shows the relationship between FFmpeg and ffserver in a real-time streaming media service. These notes come from reading the FFmpeg code quite a while ago (a very old version), and I do not know whether the architecture has changed since. Consider this a brick thrown out in the hope of attracting jade; corrections and additions are welcome.
Reply: Thank you very much for your brilliant analysis. I have been reading the ffserver source code recently and find it a bit overwhelming, so I have some questions I would like to ask.
I am using the H.264 encoder. After reading output_example.c, I found that the encoded data ends up in AVPacket form. I would like to ask: what data does ffserver put into RTP packets? Does it packetize the cached post-encoding data directly, or does it packetize the AVPacket data with RTP?
I hope you can give me some pointers.
Reply to Happy: I understand the cached data you mention to be the data in feed1.ffm, so that is where the data is retrieved from when it is sent. When the encoded data of a frame is buffered, it is split into 4096-byte blocks and stored in feed1.ffm. When sending, those blocks are reassembled to restore the encoded data into AVPacket->data, and only then is it sent to the client in whatever format is required, for example packed into RTP packets.
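As a rough illustration of that reassembly step (the layout and names are my guesses for the sketch, not ffserver's real code, and I ignore the per-block header mentioned above):

#include <stdlib.h>
#include <string.h>

#define FFM_BLOCK_SIZE 4096

/* Concatenate n buffered blocks back into one contiguous buffer,
   standing in for the data that would become AVPacket->data. */
static unsigned char *reassemble_packet(unsigned char **blocks, int n,
                                        int last_block_used, int *out_size)
{
    *out_size = (n - 1) * FFM_BLOCK_SIZE + last_block_used;
    unsigned char *data = malloc(*out_size);
    if (!data)
        return NULL;
    for (int i = 0; i < n; i++) {
        int len = (i == n - 1) ? last_block_used : FFM_BLOCK_SIZE;
        memcpy(data + i * FFM_BLOCK_SIZE, blocks[i], len);
    }
    return data;
}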
Reply: I am so glad to see your reply, thank you very much. But I still do not quite understand, so let me ask again.
The encoding function in FFmpeg should be:
out_size = avcodec_encode_video(AVCodecContext *avctx, uint8_t *buf, int buf_size, const AVFrame *pict);
At this point the encoded data is stored in buf. I would like to ask: should the data in buf consist of NAL units?
If so, then when I inspect the memory I find that only the very first bytes form a start code 0x000001, and everything after that is ordinary data. In other words, the encoded output seems to contain only one NAL unit; but my input pict is a whole frame of image (one frame captured from the camera), which I expected to be encoded into many NAL units. May I ask why this is? Is some encoding parameter not set?
I ask because I want to packetize each NAL unit separately. As things stand, every packet would carry a whole frame of data, which could be a big problem during transmission. Please advise!
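For reference, splitting the encoder output into NAL units can be done by scanning buf for Annex B start codes; below is a minimal sketch under that assumption (a 4-byte start code is just a zero byte in front of a 3-byte one, so matching 00 00 01 is enough here). If only one start code ever appears, the encoder is probably emitting the whole frame as a single slice; whether it can be told to produce several slices per frame depends on the encoder and its slice options:

#include <stdint.h>
#include <stdio.h>

/* Walk an Annex B byte stream and report each NAL unit found in it. */
static void split_nal_units(const uint8_t *buf, int size)
{
    int prev = -1;   /* start of the previous NAL unit's payload */
    for (int i = 0; i + 2 < size; i++) {
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
            if (prev >= 0)
                printf("NAL unit: offset %d, %d bytes\n", prev, i - prev);
            prev = i + 3;   /* payload begins right after the start code */
            i += 2;
        }
    }
    if (prev >= 0)
        printf("NAL unit: offset %d, %d bytes\n", prev, size - prev);
}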
Reply: Sorry, I am not quite sure about this one. You could ask the admin, or ask in a forum or group.
Reply: Thank you very much.
Reply: May I ask what is used as the receiver?
If the server side uses ffserver + FFmpeg, what should the receiving side use?
Besides VLC, does FFmpeg itself come with a receiver routine, in the way it comes with ffserver?
Reply: Ha, it seems ffplay can be used as the client?
Can ffplay receive RTP packets from the network, then decode them and display them through SDL?
Thank you, Wang Lile.
Reply to Ning: If you want ffplay to play the RTSP/RTP stream coming from ffserver, you may need to modify its code. I do not know whether the latest ffplay can play it directly, that is, something like ffplay rtsp://ip:5454/file; you can try it.
I have tried sending FFmpeg's encoded output directly to the client over RTP, without ffserver. On the server side it takes this form:
ffmpeg <parameters...> rtp://client_ip:client_port?localport=server_port
and the client receives it with ffplay in a similar form:
ffplay rtp://server_ip:server_port?localport=client_port
Reply: Ah, I see. Thank you very much, expert.
Reply: Hello!
I start the server with the ffserver command, then run:
ffmpeg -i bj.3gp -f flv http://localhost:8090/feed1.ffm
The page displays correctly in both RealPlayer and VLC, but the stream cannot be played normally. Do you know how to solve this?
ffserver uses the ffserver.conf that ships with the source code. Thank you!
Reply: What do you mean by the page displaying normally versus playing normally?
What URL does the client request?
Could it be that the 3gp had already finished converting before the client's request arrived?
Reply: Entering http://192.168.0.190:8090/stat.html in RealPlayer displays:

ffserver Status

Available Streams:
Path        Served Conns  Bytes  Format      Bit rate  Video kbits/s Codec  Audio kbits/s Codec  Feed
test1.mpg   1             63     mpeg        96        64 mpeg1video        32 mp2               feed1.ffm
test.asf    3960          3094K  asf_stream  320       256 msmpeg4          64 libmp3lame        feed1.ffm
stat.html   2             2142   -           -         -                    -                    -
index.html  0             0      -           -         -                    -                    -

Feed feed1.ffm:
Stream  Type   kbits/s  Codec       Parameters
0       audio  32       mp2         1 channel(s), 44100 Hz
1       video  64       mpeg1video  160x128, q=3-31, fps=3
2       audio  64       libmp3lame  1 channel(s), 22050 Hz
3       video  256      msmpeg4     352x240, q=3-31, fps=15

Connection Status:
Number of connections: 1 / 1000
Bandwidth in use: 0k / 1000k
#  File       IP            Proto     State              Target bits/sec  Actual bits/sec  Bytes transferred
1  stat.html  192.168.0.95  HTTP/1.1  HTTP_WAIT_REQUEST  0                0                0

Generated at Wed Apr 29 09:41:07 2009
However, neither test1.mpg nor test.asf can be played from that page.
When I click them in the playback list, Thunder even starts downloading test.asf or test1.mpg by default.
If I enter 192.168.0.190:8090/test.asf or 192.168.0.190:8090/test1.mpg in VLC, it just stays in the connecting state and the stream never opens.
In what order should the 3gp conversion and the client be started? Whether I convert first or open the client first, I cannot get the video to play.