Implementing Real-Time Video Transmission in a LAN Using VC++

    • Abstract: This paper proposes a general real-time video transmission solution for different kinds of local area networks. Based on the DivX codec, it presents the complete process of compressing video, packaging frames, sending and receiving them, and extracting frames at the receiver. The concrete implementation scheme, the core VC++ source code, and the transmission control policy together ensure high-quality real-time video transmission.
Keywords: client/server; real-time video transmission; DivX

Introduction

Real-time video transmission within a LAN is now widely used. Most local networks used for video transmission are wired LANs, because wired LAN technology is mature, transmission is fast, and stability is good. However, when the volume of video data is large or the wired network becomes unstable, data congestion can occur, and prolonged congestion leads to serious latency. When the working environment is not fixed and mobility is required, a wireless network has to be used, but a wireless network adapter becomes unstable as the environment changes, which significantly reduces video transmission quality and easily produces ghosting, jitter, and a corrupted picture. This article proposes a general real-time video transmission solution for different local networks. It performs secondary development on the Windows VFW SDK encapsulated by VC++ and uses the DivX codec; with the transmission policy established below, it effectively solves the ghosting, jitter, and picture corruption caused by local instability of the network.

Problems with real-time video transmission in LAN

To transmit video streams over a LAN effectively and with high quality, several technologies are required, including video compression and encoding and application-layer quality control.

Network bandwidth is limited, so video images must be compressed before transmission. MPEG-4 is widely used for real-time video transmission in network environments because it achieves a high compression ratio, offers flexible encoding and decoding complexity, supports object-based encoding that allows interaction between video and audio objects, and has strong fault tolerance. In this paper the DivX codec is used to encode and compress the video; in effect, DivX = MPEG-4 (video) + MP3 (audio).

For application-layer quality control, the RTP/RTCP protocols are commonly used to ensure low-latency, high-quality transmission of video streams over the network. RTP carries the audio and video payload as a stream, while RTCP controls the transmission of RTP packets: the client (receiver) feeds back the network status, and the server (sender) adjusts the capture rate and compression ratio accordingly. However, when the image capture rate is fixed and compression is done in software, adjusting the capture rate only causes captured data to be discarded uncompressed, and adjusting the encoder's compression ratio requires resetting the encoder parameters, restarting the encoder, and reconfiguring the corresponding decoder. This process takes too long to meet real-time requirements. Therefore, this article does not use RTP/RTCP; instead, the sender determines the network condition in real time and uses a "stop-and-wait" policy for real-time transmission.

Two transport protocols are available for network communication: TCP and UDP. UDP is generally more suitable for video transmission, but it provides no error detection or correction, so once the network becomes congested, a large number of datagrams are lost. DivX encodes and decodes in units of frames, which are divided into key frames and non-key frames. Because of the high compression ratio, a single erroneous bit in a frame can corrupt hundreds or even thousands of other bits, directly producing ghosting and a blurred picture, and image clarity is restored only when the next key frame arrives. Guaranteeing correct transmission over UDP would therefore require building a reliability protocol at the application layer, which cancels out UDP's advantages. For this reason, TCP is used for network communication. Combining VFW and streaming techniques with the "stop-and-wait" control policy solves the ghosting, jitter, and picture corruption that real-time video transmission in a LAN is prone to.

Real-Time Video Transmission implementation

To achieve real-time video transmission, the general idea is to send as little redundant information as possible and to always send the most recent video data.

Real-time video transmission in the LAN adopts the server/client mode and is implemented in VC++. Its workflow is shown in Figure 1.

Figure 1 Real-time video transmission workflow

Video capture uses AVICap to capture video images from the video capture card. Each captured bitmap video frame is compressed with the DivX encoder, the compressed data is transmitted in real time over the LAN with WinSock, the received data is handed to the DivX decoder for decompression, and finally the video is displayed.
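The article does not show how the WinSock connection itself is established. The sketch below is one possible setup, assuming (as the variable names sock and cliSock in the later code suggest) that the sender connects as the TCP client and the receiver accepts the connection as the server; the port number, the function names, and the omitted error handling are illustrative assumptions, not the author's code.

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")              // WinSock 2 import library

const unsigned short VIDEO_PORT = 6000;         // hypothetical port, chosen for illustration

// Receiver side (server): wait for the sender to connect.
SOCKET CreateReceiverSocket()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    SOCKET listenSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(VIDEO_PORT);
    bind(listenSock, (sockaddr*)&addr, sizeof(addr));
    listen(listenSock, 1);
    SOCKET cliSock = accept(listenSock, NULL, NULL);  // cliSock is then used by recv() in the receiving thread
    return cliSock;
}

// Sender side (client): connect to the receiver.
SOCKET CreateSenderSocket(const char* receiverIp)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    SOCKET sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = inet_addr(receiverIp);
    addr.sin_port        = htons(VIDEO_PORT);
    connect(sock, (sockaddr*)&addr, sizeof(addr));    // sock is then used by send() in the sending thread
    return sock;
}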

In VC++, VFW technology is used. The capture side registers a callback function with capSetCallbackOnFrame(). After the capture card acquires an image, the system automatically calls the callback, which compresses the frame with ICSeqCompressFrame() and then sends the compressed data to the receiver with WinSock. After receiving a complete frame, the receiver passes it to ICDecompress() for decompression and displays the image with SetDIBitsToDevice().
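As a rough illustration of this setup, the following sketch shows how the capture window might be created and the frame callback registered. Only capSetCallbackOnFrame() is taken from the article; the window size, device index, preview rate, and the InitCapture()/FrameCallback() names are assumptions, and the callback body itself (which compresses the frame and wakes the sending thread) is sketched in section 4.

#include <windows.h>
#include <vfw.h>                                   // AVICap and VCM declarations
#pragma comment(lib, "vfw32.lib")

// Frame callback registered below; its body is sketched in section 4.
LRESULT CALLBACK FrameCallback(HWND hWnd, LPVIDEOHDR lpVHdr);

// Create the capture window, connect the capture driver, and register the callback.
HWND InitCapture(HWND hwndParent)
{
    HWND hwndCap = capCreateCaptureWindow("capture", WS_CHILD | WS_VISIBLE,
                                          0, 0, 320, 240, hwndParent, 1);  // 320x240 assumed
    capDriverConnect(hwndCap, 0);                  // first capture device assumed
    capSetCallbackOnFrame(hwndCap, FrameCallback); // called once per captured frame
    capPreviewRate(hwndCap, 40);                   // roughly 25 frames per second assumed
    capPreview(hwndCap, TRUE);                     // start capturing frames
    return hwndCap;
}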

1. Video Frame Creation

The data captured from the video device is a bitmap video frame. After compression by the DivX encoder it becomes one frame of an MPEG-4 stream, and the DivX decoder also decompresses frame by frame. Therefore the video data stream is sent in units of frames. To make it easy for the receiver to extract a frame, each frame is constructed in the format shown in Figure 2.

Frame start mark | Frame size | Frame number | Frame type | Frame data

Figure 2 Video frame format

A complete frame consists of five fields with the following meanings. The frame start mark indicates the beginning of a frame; it occupies 4 bytes and is set to 0xFFFFFFFF. The frame size gives the size of the entire frame, all five fields included, and occupies 4 bytes. The frame number is the sequence number of the frame and occupies 4 bytes. The frame type indicates whether the frame is a key frame and occupies 1 byte. The frame data field stores the complete data of the compressed frame.
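One possible in-memory representation of this format is sketched below. The field sizes and the 0xFFFFFFFF start mark follow the description above, while the FrameHeader/BuildFrame names, the packing pragma, and the 1/0 encoding of the frame type are illustrative assumptions.

#include <cstring>

#pragma pack(push, 1)                   // the five fields are packed with no padding
struct FrameHeader {
    unsigned int  startMark;            // frame start mark, always 0xFFFFFFFF (4 bytes)
    int           frameSize;            // size of the whole frame, all five fields included (4 bytes)
    int           frameNumber;          // sequence number of the frame (4 bytes)
    unsigned char frameType;            // 1 = key frame, 0 = non-key frame (assumed encoding, 1 byte)
    // the compressed frame data follows immediately after the header
};
#pragma pack(pop)

// Build a complete frame in 'dest' from compressed data; returns the frame size.
// 'dest' must be large enough for sizeof(FrameHeader) + dataLen bytes.
int BuildFrame(char* dest, const void* data, int dataLen, int frameNumber, bool isKey)
{
    FrameHeader h;
    h.startMark   = 0xFFFFFFFF;
    h.frameSize   = sizeof(FrameHeader) + dataLen;
    h.frameNumber = frameNumber;
    h.frameType   = isKey ? 1 : 0;
    memcpy(dest, &h, sizeof(h));
    memcpy(dest + sizeof(h), data, dataLen);
    return h.frameSize;
}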

2. Transmission of Video Frames

For real-time video transmission, compressed data must be sent to the receiver continuously. A dedicated thread is created on the sender to send data while the main thread continues to capture and compress. The workflow of the sending thread is shown in Figure 3.

Figure 3 Sending thread workflow

Assume that the created thread is named sendThread; its core code is implemented as follows:

while (1)
{
    isOK = true;                            // ready for the next frame
    SuspendThread(sendThread);              // suspend itself until the callback wakes it
    isOK = false;                           // now busy sending
    int length = frameLength;               // length of the frame to be sent
    if (length < 50000) {                   // sanity check on the frame size
        int n = 0;
        int sendCount = 0;
        while (length > 0) {
            // imageBuf points to the packaged frame to be sent
            n = send(sock, (char*)imageBuf + sendCount, length, 0);
            if (n == SOCKET_ERROR)          // network error: abandon this frame
                break;
            length    -= n;
            sendCount += n;
        }
    }
}

The frame sent by this thread is a frame packaged according to the format described in the previous section. Sending in this way ensures that a frame, once started, reaches the receiver in its entirety.

Note that at the start of the thread, and again after each frame is sent, the thread suspends itself and waits to be woken from outside. This is done by the frame callback function: if the sending thread is ready (suspended), the callback compresses the newly captured image and then wakes the thread to send the compressed data; otherwise it simply returns and waits for the next callback invocation. This policy, called "stop-and-wait" here, is described in detail later.

3. Receiving Video Frames

The key task at the receiving end is to extract complete frames from the received byte stream. The method is to first locate the frame start mark in the stream, read the frame size from the bytes immediately following it, then read the rest of the frame from the receive buffer, and finally look for the start mark of the next frame. Figure 4 shows the workflow of the receiving thread.

Figure 4 Receiving thread workflow

Similarly, the receiver creates a dedicated thread to receive data. Assume the thread is named recThread; its core code is implemented as follows:

// RECV_BUF_SIZE is the assumed size of the receive buffer recBuf.
// serchStr (helper, not shown): scans the first n bytes of recBuf for the
// 0xFFFFFFFF start mark and returns its offset, or -1 if it is not found.
while (temp != SOCKET_ERROR)
{
    if (!isStart) {                          // still looking for the start of a frame
        if (endNum > 3)                      // endNum counts received but unprocessed bytes
            endNum = 0;
        // read data into the receive buffer, after any leftover bytes
        temp = recv(cliSock, (char*)(recBuf + endNum), RECV_BUF_SIZE - endNum, 0);
        if (temp == SOCKET_ERROR)
            break;
        startPos = serchStr(temp + endNum);  // look for the frame start mark
        if (startPos != -1) {
            isStart = true;
            endNum = temp + endNum - startPos - 4;             // bytes following the start mark
            memcpy(imageBuf, recBuf + startPos + 4, endNum);   // save the beginning of the frame
        }
        else {
            memcpy(recBuf, recBuf + temp + endNum - 3, 3);     // keep the last 3 bytes: the mark
            endNum = 3;                                        // may be split across two reads
        }
    }
    else {
        // make sure the 4-byte frame size field is available
        while (endNum < 4 && temp != SOCKET_ERROR) {
            temp = recv(cliSock, (char*)recBuf, RECV_BUF_SIZE, 0);
            if (temp == SOCKET_ERROR)
                break;
            memcpy(imageBuf + endNum, recBuf, temp);           // append to the frame buffer
            endNum += temp;
        }
        if (temp == SOCKET_ERROR)
            break;
        frameSize = *(int*)imageBuf;                   // total size of the frame
        if (frameSize < 500 || frameSize > 50000) {    // invalid size: discard and resynchronize
            isStart = false;
            endNum = 0;
            continue;
        }
        frameSize -= endNum + 4;                       // bytes of the frame still to be read
        while (frameSize > 0 && temp != SOCKET_ERROR) {   // read the rest of the frame
            temp = recv(cliSock, (char*)(imageBuf + endNum), frameSize, 0);
            if (temp == SOCKET_ERROR)
                break;
            endNum    += temp;
            frameSize -= temp;
        }
        if (frameSize <= 0) {                          // a complete frame has been received
            isStart = false;
            endNum = 0;
            Decompress();                              // check the data and call ICDecompress()
        }
    }
}

The program above leaves the complete frame (without the frame start mark) in imageBuf.
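The Decompress() routine itself is not listed in the article. The sketch below indicates how it might validate the frame and hand the payload to ICDecompress() and SetDIBitsToDevice(), as described earlier; the global variables, the bitmap headers, and the device-context handling are assumptions made for illustration.

#include <windows.h>
#include <vfw.h>

extern char              imageBuf[];       // complete frame, minus the start mark
extern HIC               g_hicDecompress;  // DivX decompressor opened and prepared elsewhere
extern BITMAPINFOHEADER  g_biIn, g_biOut;  // compressed and decompressed formats
extern char*             g_outBits;        // buffer for the decompressed bitmap
extern HWND              g_hwndDisplay;    // window in which the video is shown

void Decompress()
{
    // Layout of imageBuf: frame size (4) | frame number (4) | frame type (1) | frame data
    int frameSize = *(int*)imageBuf;
    if (frameSize < 500 || frameSize > 50000)      // same sanity check as the receiving thread
        return;
    char* frameData = imageBuf + 9;                // skip the size, number, and type fields
    g_biIn.biSizeImage = frameSize - 13;           // payload = frame size minus the 13 header bytes

    // Decompress one frame with the DivX (MPEG-4) decoder.
    if (ICDecompress(g_hicDecompress, 0, &g_biIn, frameData, &g_biOut, g_outBits) != ICERR_OK)
        return;

    // Display the decompressed bitmap.
    HDC hdc = GetDC(g_hwndDisplay);
    SetDIBitsToDevice(hdc, 0, 0, g_biOut.biWidth, g_biOut.biHeight,
                      0, 0, 0, g_biOut.biHeight,
                      g_outBits, (BITMAPINFO*)&g_biOut, DIB_RGB_COLORS);
    ReleaseDC(g_hwndDisplay, hdc);
}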

4. "Stop" control policy

If the LAN is fast and stable, real-time video transmission as described above achieves very good results without any control policy. In many cases, however, network anomalies sharply reduce the transmission rate, data backs up at the sender, and frames waiting to be sent cannot go out in time. In this situation a control policy is needed at the sender to preserve real-time behaviour.

In the sending code above, the variable isOK indicates whether the sender has finished sending the current frame: true means the sender is ready to send more data, false means it is still busy. The capture-and-compression side uses isOK as follows: if isOK is true, the newly captured image is compressed and the sending thread is woken to send the new frame; otherwise the frame is skipped and the next callback is awaited, until the network can accept data again (isOK becomes true). Video capture itself never stops; when the network is congested, frames are simply not compressed, and when the network recovers, compression and transmission resume. In other words, while the network is blocked, frames waiting to be sent are discarded so that the most recently compressed frame goes out as soon as the network recovers. Of course, once a frame has started to be sent, it is always sent completely.
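A minimal sketch of a frame callback implementing this policy follows. Only capSetCallbackOnFrame(), ICSeqCompressFrame(), the isOK flag, and the suspended sending thread come from the article; the global names, the COMPVARS initialization, the BuildFrame() helper from section 1, and the use of ResumeThread() as the counterpart of SuspendThread() are assumptions.

#include <windows.h>
#include <vfw.h>

extern COMPVARS      g_compVars;     // assumed initialized elsewhere (ICCompressorChoose()/ICSeqCompressFrameStart())
extern volatile bool isOK;           // true while the sending thread is suspended and ready
extern HANDLE        sendThread;     // handle of the sending thread
extern char          imageBuf[];     // sender-side buffer holding the packaged frame to send
extern int           frameLength;    // length of the packaged frame
extern int           frameNumber;    // running frame counter
int BuildFrame(char* dest, const void* data, int dataLen, int frameNumber, bool isKey);  // sketch from section 1

// Called by AVICap after every captured frame (registered with capSetCallbackOnFrame()).
LRESULT CALLBACK FrameCallback(HWND hWnd, LPVIDEOHDR lpVHdr)
{
    if (!isOK)                       // sending thread still busy or network congested:
        return (LRESULT)TRUE;        // skip this frame, do not even compress it

    BOOL isKey = FALSE;
    LONG size  = lpVHdr->dwBytesUsed;
    // Compress the captured bitmap with the DivX (MPEG-4) codec.
    LPVOID compressed = ICSeqCompressFrame(&g_compVars, 0, lpVHdr->lpData, &isKey, &size);
    if (compressed == NULL)
        return (LRESULT)TRUE;

    // Package the frame in the format of Figure 2, then wake the suspended sending thread.
    frameLength = BuildFrame(imageBuf, compressed, size, ++frameNumber, isKey != FALSE);
    ResumeThread(sendThread);
    return (LRESULT)TRUE;
}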

Real-time video transmission based on this "stop-and-wait" policy has only one drawback: when network quality is poor, moving targets on the receiver's screen jump instantaneously. In return, the policy guarantees that ghosting, jitter, and picture corruption do not occur.

Conclusion

The proposed real-time video transmission scheme was tested on a 100 M LAN, a 10 M LAN, and an 11 M wireless LAN. During each test a target was moved in front of the camera at the sender while the video display at the receiver was observed. Multiple tests were run on the different networks, each lasting 10 to 30 minutes and with the target's speed varied, and the data were summarized to obtain the statistics shown in Table 1.

Table 1 Test results on different LANs

Network | Fast motion | Normal motion | Slow motion
100 M LAN | Clear, smooth image | Clear, smooth image | Clear, smooth image
10 M LAN | Occasional pauses, frame drop rate about 1% | Clear image, motion smooth to the eye | Clear, smooth image
11 M wireless LAN | Frequent pauses, frame drop rate 5%-6% | Frequent pauses, frame drop rate 2%-3% | Occasional pauses, frame drop rate about 1%

Note: The 11 M wireless network adapter is connected to the PC through a USB interface; results would be better with a wireless adapter that does not go through USB.

The actual test results show that the scheme performs well. Apart from occasional instantaneous jumps of moving targets, the image remains clear, which eliminates the ghosting and jitter caused by poor network quality and meets the real-time transmission requirements of different local networks.
