3. Dynamic Allocation and Implementation of the Buffer Areas
Audio data is much smaller than video data, so the audio buffer and the video buffer are managed separately. Statistically, the RTP packets of a given media stream are similar in size: audio packets run to a few hundred bytes, while video packets are about 1.3 KB. The video buffer is divided into two parts: an active data zone and an idle (free) zone. When RTP data arrives, a memory block whose size is at least the received data plus the packet header is taken from the free zone; the data is copied into it, and the block is inserted into the active data zone, ordered by frame sequence number and packet sequence number, for the player to fetch. The player takes a frame from the active zone, decodes it, and plays it. The used block is not freed immediately; instead, it is inserted back into the idle zone in sorted order.
This memory-reuse scheme builds on the pointer array and the pointer queue introduced in the previous section. The pointer array manages the idle data blocks, which are inserted and looked up in order of size. The pointer queue manages the sorted data frames and emits data according to frame sequence number and packet sequence number. Here, one video frame is split across several RTP packets.
First, define CMyPtrArray m_FreeList to manage the ordered list of idle data blocks.
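The container class itself belongs to the previous section; the code below relies only on an interface along these lines (a sketch in the style of MFC's CPtrArray, not the actual declaration):

class CMyPtrArray
{
public:
    int   GetSize() const;                   // number of stored pointers
    void* GetAt(int nIndex) const;           // pointer stored at nIndex
    void  InsertAt(int nIndex, void* pPtr);  // insert, shifting later elements up
    void  RemoveAt(int nIndex);              // remove the slot, not the pointee
    void  RemoveAll();                       // drop all slots
};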
The following shows how to request a memory block from the free zone.
ppFrameBuf receives the pointer to the returned data block. FRAME_BUFFER is a structure holding the actual buffer length, a pointer to the buffer, and other custom members. nLen is the requested size.
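The structure itself is not listed in the text; a minimal sketch, assuming only the members the code below actually touches (the real definition would add further custom members, such as frame and packet sequence numbers):

typedef struct _FRAME_BUFFER
{
    DWORD nBufSize;     // usable length recorded at allocation time
    BYTE* pFrameData;   // the buffer itself
    // ... other custom members, e.g. frame/packet sequence numbers (assumed)
} FRAME_BUFFER;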
void GetOneFreeBuf(FRAME_BUFFER** ppFrameBuf, DWORD nLen)
{
    if (NULL == ppFrameBuf)
        return;
    FRAME_BUFFER* pFrameBuf = NULL;
    // Add mutex code here to protect m_FreeList against concurrent access.
    if (m_FreeList.GetSize() > 0)
    {
        // Binary search for a block that is large enough but wastes at most 100 bytes.
        int nStart = 0, nEnd = m_FreeList.GetSize();
        int nCur = (nStart + nEnd) / 2;
        FRAME_BUFFER* pTmpBuf = NULL;
        while (1)
        {
            pTmpBuf = (FRAME_BUFFER*)m_FreeList.GetAt(nCur);
            if ((pTmpBuf->nBufSize >= nLen) && ((pTmpBuf->nBufSize - nLen) <= 100))
                break;              // good fit found
            else if (pTmpBuf->nBufSize > nLen)
                nEnd = nCur;        // too large: search the lower half
            else
                nStart = nCur;      // too small: search the upper half
            nCur = (nStart + nEnd) / 2;
            if (nCur == nStart || nCur == nEnd)
            {
                // Interval exhausted: settle for the closest candidate.
                pTmpBuf = (FRAME_BUFFER*)m_FreeList.GetAt(nCur);
                break;
            }
        }
        if (nCur >= m_FreeList.GetSize())
            nCur = m_FreeList.GetSize() - 1;
        // If the candidate is still too small, try its larger right-hand neighbor.
        if (pTmpBuf->nBufSize < nLen && nCur < (m_FreeList.GetSize() - 1))
        {
            nCur++;
            pTmpBuf = (FRAME_BUFFER*)m_FreeList.GetAt(nCur);
        }
        if (pTmpBuf->nBufSize >= nLen)
        {
            pFrameBuf = pTmpBuf;
            // The block now belongs to the caller: take it off the free list.
            m_FreeList.RemoveAt(nCur);
        }
    }
    *ppFrameBuf = NULL;
    if (NULL == pFrameBuf)
    {
        // Nothing suitable in the free list: allocate a fresh block, with 100
        // spare bytes so it can later be reused for slightly larger packets.
        pFrameBuf = new FRAME_BUFFER;
        if (NULL == pFrameBuf)
        {
            return;
        }
        pFrameBuf->nBufSize = nLen;
        pFrameBuf->pFrameData = new BYTE[nLen + 100];
        if (NULL == pFrameBuf->pFrameData)
        {
            delete pFrameBuf;
            return;
        }
    }
    // TRACE("CStreamBuffer::GetOneFreeBuf, NeedSize:%d, ActualSize:%d\n", nLen, pFrameBuf->nBufSize);
    *ppFrameBuf = pFrameBuf;
}
The method for releasing a memory block back to the idle zone is as follows:
void ReleaseBuf(FRAME_BUFFER* pFrameBuf) // keep the free list sorted from small to large
{
    if (NULL == pFrameBuf || NULL == pFrameBuf->pFrameData || 0 == pFrameBuf->nBufSize)
        return;
    // Add mutex code here to protect m_FreeList against concurrent access.
    // Binary search for the first block whose size is >= the released block's
    // size; inserting just before it keeps the list in ascending order.
    int nStart = 0, nEnd = m_FreeList.GetSize();
    while (nStart < nEnd)
    {
        int nCur = (nStart + nEnd) / 2;
        FRAME_BUFFER* pTmpBuf = (FRAME_BUFFER*)m_FreeList.GetAt(nCur);
        if (pTmpBuf->nBufSize < pFrameBuf->nBufSize)
            nStart = nCur + 1;  // insertion point lies to the right
        else
            nEnd = nCur;        // insertion point is nCur or to its left
    }
    m_FreeList.InsertAt(nStart, pFrameBuf);
}
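To make the round trip concrete, here is a hedged usage sketch. OnRtpPacket and OnFramePlayed are hypothetical callbacks, and the active-zone bookkeeping is only indicated by comments; only GetOneFreeBuf and ReleaseBuf come from the listings above:

#include <string.h> // memcpy

// Receiving side: take a block from the idle zone and fill it with the packet.
void OnRtpPacket(const BYTE* pPacket, DWORD nPacketLen)
{
    FRAME_BUFFER* pBuf = NULL;
    GetOneFreeBuf(&pBuf, nPacketLen);
    if (NULL == pBuf)
        return; // out of memory: drop the packet
    memcpy(pBuf->pFrameData, pPacket, nPacketLen);
    // ... insert pBuf into the active data zone, ordered by frame and packet sequence number
}

// Playing side: once a frame has been decoded and rendered, its block goes
// back into the idle zone instead of being deleted.
void OnFramePlayed(FRAME_BUFFER* pBuf)
{
    ReleaseBuf(pBuf); // re-inserted into m_FreeList, sorted by size
}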
When the program exits, remember to free all of the allocated memory.
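A minimal cleanup sketch, assuming every idle block lives in m_FreeList and with FreeAllBufs as a hypothetical helper name (blocks still sitting in the active data zone must be walked and freed the same way):

void FreeAllBufs()
{
    // Add mutex code here as well.
    for (int i = 0; i < m_FreeList.GetSize(); i++)
    {
        FRAME_BUFFER* pBuf = (FRAME_BUFFER*)m_FreeList.GetAt(i);
        if (pBuf)
        {
            delete [] pBuf->pFrameData; // buffer allocated with new BYTE[]
            delete pBuf;                // the descriptor itself
        }
    }
    m_FreeList.RemoveAll();
}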
4. Buffer Design
With the technical foundation above, we can easily design our own buffer. Because audio data is an order of magnitude smaller than video data, an audio frame can be packed into a single RTP packet, whereas a video frame is packed into multiple RTP packets. Correspondingly, on the client side, a received audio RTP packet can be handed directly to the audio frame queue for the player to use, while received video RTP packets must first be assembled into a complete frame before the player can use them, as sketched below.
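As a rough illustration of that dispatch (every name here is hypothetical: OnMediaPacket, the two queues, and the assembler; a real implementation would typically use the RTP marker bit and timestamp to detect video frame boundaries):

void OnMediaPacket(FRAME_BUFFER* pBuf, BOOL bIsAudio)
{
    if (bIsAudio)
    {
        // One audio frame per RTP packet: hand it straight to the player's queue.
        m_AudioFrameQueue.Push(pBuf);
    }
    else
    {
        // One video frame spans several RTP packets: collect until complete.
        m_VideoAssembler.AddPacket(pBuf);
        if (m_VideoAssembler.IsFrameComplete())
            m_VideoFrameQueue.Push(m_VideoAssembler.TakeFrame());
    }
}

The buffer is organized as follows: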