Android recording data transmission

Today, let's take a look at where the recording data in Android comes from.

Let's start with AudioRecord.

The interface for obtaining recording data from AudioRecord is AudioRecord::read.
It first calls the obtainBuffer function to get the address of the recording data,
then uses memcpy to copy the recording data out.

So the data source is the obtainBuffer function.
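
Before digging into obtainBuffer, here is a minimal, self-contained sketch of that read pattern: obtain a region, memcpy it out, release what was consumed. obtainChunk and releaseChunk are hypothetical stand-ins for AudioRecord's obtainBuffer/releaseBuffer, and the static array stands in for the shared control-block data area; this is not the framework code itself.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Hypothetical stand-ins for the AudioRecord pieces described above:
// obtainChunk() plays the role of obtainBuffer(), handing back a pointer into
// the capture buffer, and releaseChunk() the role of releaseBuffer().
struct Chunk { int8_t* i8; size_t size; };

static int8_t gCapture[1024];   // pretend this holds captured samples
static size_t gReadPos = 0;

static bool obtainChunk(Chunk* c) {
    if (gReadPos >= sizeof(gCapture)) return false;          // nothing ready
    c->i8   = gCapture + gReadPos;
    c->size = std::min<size_t>(256, sizeof(gCapture) - gReadPos);
    return true;
}
static void releaseChunk(size_t bytesConsumed) { gReadPos += bytesConsumed; }

// The read() pattern: obtain a region, memcpy it out, release what was used.
static size_t readRecording(void* dst, size_t userSize) {
    size_t copied = 0;
    int8_t* out = static_cast<int8_t*>(dst);
    while (copied < userSize) {
        Chunk c;
        if (!obtainChunk(&c)) break;
        size_t bytes = std::min(c.size, userSize - copied);
        std::memcpy(out + copied, c.i8, bytes);   // copy samples to the caller
        copied += bytes;
        releaseChunk(bytes);                      // mark that many bytes consumed
    }
    return copied;
}

int main() {
    int8_t user[600];
    std::printf("read %zu bytes\n", readRecording(user, sizeof(user)));
}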

Let's take a look at the AudioRecord::obtainBuffer function.
Its main job is to fill in the audioBuffer passed to it.
audioBuffer is of type Buffer*.

Look at the Buffer class:

class Buffer
{
public:
    enum {
        MUTE = 0x00000001
    };
    uint32_t    flags;
    int         channelCount;
    int         format;
    size_t      frameCount;
    size_t      size;
    union {
        void*       raw;
        short*      i16;
        int8_t*     i8;
    };
};

The data is stored in this union:
union {
    void*       raw;
    short*      i16;
    int8_t*     i8;
};

The code in the AudioRecord::obtainBuffer function that assigns this member is:
audioBuffer->raw = (int8_t*)cblk->buffer(u);

The origins of cblk:
audio_track_cblk_t* cblk = mCblk;

mCblk is assigned in the AudioRecord::openRecord function:
mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
mCblk->buffers = (char*)mCblk + sizeof(audio_track_cblk_t); // the cblk header holds the control structure; the data area follows right after it

The implementation of audio_track_cblk_t::buffer:
void* audio_track_cblk_t::buffer(uint64_t offset) const
{
    return (int8_t*)this->buffers + (offset - userBase) * this->frameSize;
}

As we can see, the data is stored in the buffer managed by the audio_track_cblk_t struct.
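
To make that layout concrete, here is a small sketch under simplifying assumptions: a cut-down control block keeping only the three fields used by buffer() above (buffers, userBase, frameSize), allocated in one block with the data area placed right after the header, the same way openRecord sets mCblk->buffers. The real audio_track_cblk_t has many more members.

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <new>

// Simplified stand-in for audio_track_cblk_t.
struct cblk_t {
    void*    buffers;    // start of the data area, right after the header
    uint32_t userBase;   // frame index corresponding to the start of the area
    uint32_t frameSize;  // bytes per frame

    void* buffer(uint32_t offset) const {
        // Same arithmetic as audio_track_cblk_t::buffer():
        // skip (offset - userBase) frames from the start of the data area.
        return (int8_t*)buffers + (offset - userBase) * frameSize;
    }
};

int main() {
    const size_t dataBytes = 4096;
    // One allocation: control block header followed by the sample data,
    // mirroring "mCblk->buffers = (char*)mCblk + sizeof(audio_track_cblk_t)".
    void* shared = std::malloc(sizeof(cblk_t) + dataBytes);
    cblk_t* cblk = new (shared) cblk_t();   // placement new, as in the framework
    cblk->buffers   = (char*)cblk + sizeof(cblk_t);
    cblk->userBase  = 0;
    cblk->frameSize = 4;                    // e.g. 16-bit stereo

    // Frame 10 starts 10 * frameSize bytes into the data area.
    std::printf("offset of frame 10: %td bytes\n",
                (int8_t*)cblk->buffer(10) - (int8_t*)cblk->buffers);
    std::free(shared);
}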

Where is the data written to the structure audio_track_cblk_t?

In the AudioRecord::obtainBuffer function, when obtaining the buffer address, the audio_track_cblk_t::framesReady function is called to determine how much data is ready.

During playback, the audio_track_cblk_t::framesReady function is also called when data is read out of the audio_track_cblk_t,
and the audio_track_cblk_t::framesAvailable function is called when data is written into it.

The same must be true for recording.
That is to say, if we find where the audio_track_cblk_t::framesAvailable function is called, we find where data is written into the audio_track_cblk_t.
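
The idea behind framesReady/framesAvailable is a classic producer/consumer count over a circular buffer. A toy sketch of that idea (my own simplification, not the real lock-free cblk code), with a writer index server and a reader index user:

#include <cstdint>
#include <cstdio>

// Single-producer / single-consumer frame counter. 'user' is the reader
// position, 'server' the writer position, both counted in frames.
struct FrameCounter {
    uint32_t frameCount = 0;  // capacity of the circular buffer in frames
    uint32_t user   = 0;      // frames consumed so far (reader side)
    uint32_t server = 0;      // frames produced so far (writer side)

    // How many captured frames the reader may consume right now.
    uint32_t framesReady() const { return server - user; }

    // How much room the writer has before it would overwrite unread frames.
    uint32_t framesAvailable() const { return frameCount - framesReady(); }
};

int main() {
    FrameCounter c;
    c.frameCount = 1024;   // circular buffer holds 1024 frames
    c.server += 300;       // the writer (record thread) produced 300 frames
    c.user   += 100;       // the reader (AudioRecord::read) consumed 100 frames
    std::printf("ready=%u available=%u\n", c.framesReady(), c.framesAvailable());
    // prints: ready=200 available=824
}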

The recording-related place where audio_track_cblk_t::framesAvailable is called is the
AudioFlinger::RecordThread::RecordTrack::getNextBuffer function.

The AudioFlinger::RecordThread::RecordTrack::getNextBuffer function fills in the AudioBufferProvider::Buffer passed to it.

The AudioBufferProvider::Buffer struct type:
struct Buffer {
    union {
        void*       raw;
        short*      i16;
        int8_t*     i8;
    };
    size_t frameCount;
};

The data is stored in this union:
union {
    void*       raw;
    short*      i16;
    int8_t*     i8;
};

In the AudioFlinger::RecordThread::RecordTrack::getNextBuffer function:
buffer->raw = getBuffer(s, framesReq);

The AudioFlinger::ThreadBase::TrackBase::getBuffer function returns an int8_t pointer, bufferStart.
bufferStart is computed as follows:
int8_t* bufferStart = (int8_t*)mBuffer + (offset - cblk->serverBase) * cblk->frameSize;

mBuffer is assigned in the AudioFlinger::ThreadBase::TrackBase constructor:
mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
The origins of mCblk:
mCblk = static_cast<audio_track_cblk_t*>(mCblkMemory->pointer()); followed by placement construction: new(mCblk) audio_track_cblk_t();
We already went through this when studying audio playback, so it won't be repeated here.

So something must be calling AudioFlinger::RecordThread::RecordTrack::getNextBuffer to obtain a buffer and then writing the recording data into that buffer.

Searching for calls to the AudioFlinger::RecordThread::RecordTrack::getNextBuffer function, the recording-related caller is the
AudioFlinger::RecordThread::threadLoop function.

In the AudioFlinger::RecordThread::threadLoop function, AudioFlinger::RecordThread::RecordTrack::getNextBuffer is called to obtain the buffer.
The buffer address is then used to compute dst:
int8_t* dst = buffer.i8 + (buffer.frameCount - framesOut) * mActiveTrack->mCblk->frameSize;

Two cases need to be discussed.

-------------------- Case where resampling is not required - start --------------------
There are two places where data is written to dst:
while (framesIn--) {
    *dst16++ = *src16;
    *dst16++ = *src16++;
}
or:
while (framesIn--) {
    *dst16++ = (int16_t)(((int32_t)*src16 + (int32_t)*(src16 + 1)) >> 1);
    src16 += 2;
}
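
Those two loops are the channel conversions for the no-resampling case: the first duplicates a mono input sample into both output channels, the second averages a stereo pair down to one mono sample. A self-contained sketch with made-up test data (the helper names are mine, not the framework's):

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Mono capture, stereo track: each input sample is written twice.
static void monoToStereo(const int16_t* src16, int16_t* dst16, size_t framesIn) {
    while (framesIn--) {
        *dst16++ = *src16;      // left channel
        *dst16++ = *src16++;    // right channel (same sample)
    }
}

// Stereo capture, mono track: average each left/right pair.
static void stereoToMono(const int16_t* src16, int16_t* dst16, size_t framesIn) {
    while (framesIn--) {
        *dst16++ = (int16_t)(((int32_t)*src16 + (int32_t)*(src16 + 1)) >> 1);
        src16 += 2;
    }
}

int main() {
    int16_t mono[3] = {100, 200, 300};
    int16_t stereoOut[6];
    monoToStereo(mono, stereoOut, 3);     // -> 100 100 200 200 300 300

    int16_t stereo[4] = {100, 300, -50, 150};
    int16_t monoOut[2];
    stereoToMono(stereo, monoOut, 2);
    std::printf("%d %d\n", monoOut[0], monoOut[1]);   // prints: 200 50
}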

That is to say, the data source is src. Let's look at where src comes from:
int8_t* src = (int8_t*)mRsmpInBuffer + mRsmpInIndex * mFrameSize;

mRsmpInBuffer is allocated in the AudioFlinger::RecordThread::readInputParameters function:
mRsmpInBuffer = new int16_t[mFrameCount * mChannelCount];

This only creates a new buffer; to find the data source, we need to see where data is written into it.
The data is written into mRsmpInBuffer in the AudioFlinger::RecordThread::threadLoop function as well:
mBytesRead = mInput->read(mRsmpInBuffer, mInputBytes);
-------------------- Case where resampling is not required - end --------------------

-------------------- Case where resampling is required - start --------------------
Data is written to dst here:
while (framesOut--) {
    *dst++ = (int16_t)(((int32_t)*src + (int32_t)*(src + 1)) >> 1);
    src += 2;
}

Again, the data source is src. Let's look at where src comes from:
int16_t* src = (int16_t*)mRsmpOutBuffer;

mRsmpOutBuffer is also allocated in the AudioFlinger::RecordThread::readInputParameters function, alongside the mRsmpInBuffer allocation shown earlier.

Again, allocation alone tells us nothing about the data; we need to find where it is written. The data reaches mRsmpOutBuffer through the resampler, whose input buffer mRsmpInBuffer is filled in the AudioFlinger::RecordThread::getNextBuffer function:
mBytesRead = mInput->read(mRsmpInBuffer, mInputBytes);

Let's take a look at how the AudioFlinger::RecordThread::getNextBuffer function gets called.

First, the AudioFlinger::RecordThread::threadLoop function calls AudioResamplerOrder1::resample:
mResampler->resample(mRsmpOutBuffer, framesOut, this);
The AudioResamplerOrder1::resample function calls AudioResamplerOrder1::resampleMono16:
resampleMono16(out, outFrameCount, provider);
And the AudioResamplerOrder1::resampleMono16 function calls AudioFlinger::RecordThread::getNextBuffer:
provider->getNextBuffer(&mBuffer);

mResampler is assigned in the AudioFlinger::RecordThread::readInputParameters function:
mResampler = AudioResampler::create(16, channelCount, mReqSampleRate);

The AudioResampler::create function returns a different resampler based on the quality parameter:
switch (quality) {
default:
case LOW_QUALITY:
    LOGV("Create linear resampler");
    resampler = new AudioResamplerOrder1(bitDepth, inChannelCount, sampleRate);
    break;
case MED_QUALITY:
    LOGV("Create cubic resampler");
    resampler = new AudioResamplerCubic(bitDepth, inChannelCount, sampleRate);
    break;
case HIGH_QUALITY:
    LOGV("Create sinc resampler");
    resampler = new AudioResamplerSinc(bitDepth, inChannelCount, sampleRate);
    break;
}

Following the call chain into the AudioFlinger::RecordThread::getNextBuffer function, mRsmpOutBuffer is ultimately passed to the AudioResamplerOrder1::resampleMono16 function as the out parameter.
The places where data is finally written into mRsmpOutBuffer are in the AudioResamplerOrder1::resampleMono16 function:

// handle boundary case
while (inputIndex == 0) {
    // LOGE("boundary case\n");
    int32_t sample = Interp(mX0L, in[0], phaseFraction);
    out[outputIndex++] += vl * sample;
    out[outputIndex++] += vr * sample;
    Advance(&inputIndex, &phaseFraction, phaseIncrement);
    if (outputIndex == outputSampleCount)
        break;
}

or:

while (outputIndex < outputSampleCount && inputIndex < mBuffer.frameCount) {
    int32_t sample = Interp(in[inputIndex-1], in[inputIndex],
            phaseFraction);
    out[outputIndex++] += vl * sample;
    out[outputIndex++] += vr * sample;
    Advance(&inputIndex, &phaseFraction, phaseIncrement);
}

The origins of in:
int16_t* in = mBuffer.i16;

The mBuffer assignment:
provider->getNextBuffer(&mBuffer);
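
That provider->getNextBuffer(&mBuffer) call is the AudioBufferProvider pull model: the consumer (here the resampler) asks its provider (here the RecordThread) for more input whenever it runs dry, and releases the buffer once consumed. A simplified sketch of that pattern with a toy provider serving a fixed array; the interface here is reduced to a bool return for brevity, while the real framework interface returns a status_t.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Simplified version of the pull-model interface. Field names follow the
// AudioBufferProvider::Buffer struct shown earlier.
class BufferProvider {
public:
    struct Buffer {
        union {
            void*    raw;
            int16_t* i16;
            int8_t*  i8;
        };
        size_t frameCount;
    };
    virtual ~BufferProvider() {}
    virtual bool getNextBuffer(Buffer* buffer) = 0;   // hand out ready frames
    virtual void releaseBuffer(Buffer* buffer) = 0;   // caller is done with them
};

// Toy provider serving frames from a fixed array. It stands in for the
// RecordThread, which serves frames it has read from the input stream.
class ArrayProvider : public BufferProvider {
    int16_t mData[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    size_t  mPos = 0;
public:
    bool getNextBuffer(Buffer* buffer) override {
        if (mPos >= 8) return false;           // nothing left to hand out
        buffer->i16        = mData + mPos;
        buffer->frameCount = 8 - mPos;
        return true;
    }
    void releaseBuffer(Buffer* buffer) override {
        mPos += buffer->frameCount;            // caller reports frames consumed
        buffer->raw        = nullptr;
        buffer->frameCount = 0;
    }
};

int main() {
    ArrayProvider provider;
    BufferProvider::Buffer b;
    while (provider.getNextBuffer(&b)) {
        // Pretend the consumer (e.g. the resampler) only used 3 frames.
        b.frameCount = std::min<size_t>(3, b.frameCount);
        std::printf("consumed %zu frames starting at %d\n", b.frameCount, b.i16[0]);
        provider.releaseBuffer(&b);
    }
}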

Now, back to the call to the AudioFlinger::RecordThread::getNextBuffer function.

The AudioFlinger::RecordThread::getNextBuffer function first calls AudioStreamInALSA::read to obtain the data:
mBytesRead = mInput->read(mRsmpInBuffer, mInputBytes);

Then it assigns the data address to the AudioBufferProvider::Buffer that was passed in:
buffer->raw = mRsmpInBuffer + mRsmpInIndex * channelCount;

Next let's take a look at how resampling processes data.

int32_t sample = Interp(in[inputIndex-1], in[inputIndex],
        phaseFraction);

The source of phaseFraction:
uint32_t phaseFraction = mPhaseFraction;

phaseFraction is passed to the Advance function as a parameter:
Advance(&inputIndex, &phaseFraction, phaseIncrement);

The Advance function updates phaseFraction:
static inline void Advance(size_t* index, uint32_t* frac, uint32_t inc) {
    *frac += inc;
    *index += (size_t)(*frac >> kNumPhaseBits);
    *frac &= kPhaseMask;
}

Constant definitions:
// number of bits for phase fraction - 28 bits allows nearly 8x downsampling
static const int kNumPhaseBits = 28;

// phase mask for fraction
static const uint32_t kPhaseMask = (1LU << kNumPhaseBits) - 1;
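
As a quick worked example of this fixed-point phase (numbers chosen for illustration, not taken from the framework): with kNumPhaseBits = 28, the bits above bit 28 carry the integer step into the input and the low 28 bits are the fractional position between two input samples.

#include <cstddef>
#include <cstdint>
#include <cstdio>

static const int      kNumPhaseBits = 28;
static const uint32_t kPhaseMask    = (1LU << kNumPhaseBits) - 1;

// Same logic as Advance(): add the increment, move any overflow of the
// fractional part into the input index, keep only the fraction.
static inline void Advance(size_t* index, uint32_t* frac, uint32_t inc) {
    *frac += inc;
    *index += (size_t)(*frac >> kNumPhaseBits);
    *frac &= kPhaseMask;
}

int main() {
    // Resampling 48 kHz input to 44.1 kHz output: each output frame advances
    // the input position by 48000/44100 ~= 1.088 input frames, expressed in
    // 28-bit fixed point.
    uint32_t phaseIncrement =
            (uint32_t)(((uint64_t)48000 << kNumPhaseBits) / 44100);
    size_t   inputIndex    = 0;
    uint32_t phaseFraction = 0;

    for (int i = 0; i < 5; i++) {
        Advance(&inputIndex, &phaseFraction, phaseIncrement);
        std::printf("after %d outputs: inputIndex=%zu fraction=%.3f\n",
                    i + 1, inputIndex,
                    (double)phaseFraction / (1u << kNumPhaseBits));
    }
    // The input index advances by roughly 1.088 frames per output frame.
}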

Now let's look at the Interp function:
static inline int32_t Interp(int32_t x0, int32_t x1, uint32_t f) {
    return x0 + (((x1 - x0) * (int32_t)(f >> kPreInterpShift)) >> kNumInterpBits);
}

Constant definitions:
// number of bits used in interpolation multiply - 15 bits avoids overflow
static const int kNumInterpBits = 15;

// bits to shift the phase fraction down to avoid overflow
static const int kPreInterpShift = kNumPhaseBits - kNumInterpBits;
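
To make the arithmetic concrete, a worked example with made-up sample values: the 28-bit fraction is first shifted down to 15 bits (kPreInterpShift = 13), and the result is x0 plus that fraction of (x1 - x0), i.e. plain linear interpolation in fixed point.

#include <cstdint>
#include <cstdio>

static const int kNumPhaseBits   = 28;
static const int kNumInterpBits  = 15;
static const int kPreInterpShift = kNumPhaseBits - kNumInterpBits;

// Same arithmetic as Interp(): linear interpolation between x0 and x1,
// with the fraction f given in 28-bit fixed point.
static inline int32_t Interp(int32_t x0, int32_t x1, uint32_t f) {
    return x0 + (((x1 - x0) * (int32_t)(f >> kPreInterpShift)) >> kNumInterpBits);
}

int main() {
    int32_t x0 = 1000, x1 = 2000;

    // f = 0.25 in 28-bit fixed point -> one quarter of the way to x1.
    uint32_t quarter = 1u << (kNumPhaseBits - 2);
    std::printf("f=0.25 -> %d\n", Interp(x0, x1, quarter));   // prints 1250

    // f = 0.5 -> halfway between x0 and x1.
    uint32_t half = 1u << (kNumPhaseBits - 1);
    std::printf("f=0.50 -> %d\n", Interp(x0, x1, half));      // prints 1500
}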

Now let's look at vl and vr:
int32_t vl = mVolume[0];
int32_t vr = mVolume[1];

mVolume is assigned in the AudioResampler::setVolume function:
void AudioResampler::setVolume(int16_t left, int16_t right) {
    // TODO: implement anti-zipper filter
    mVolume[0] = left;
    mVolume[1] = right;
}

The AudioResampler::setVolume function is called in the AudioFlinger::RecordThread::readInputParameters function:
mResampler->setVolume(AudioMixer::UNITY_GAIN, AudioMixer::UNITY_GAIN);

Constant definition:
static const uint16_t UNITY_GAIN = 0x1000;
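
UNITY_GAIN = 0x1000 = 4096 is 1.0 in Q12 fixed point, so out[i] += vl * sample accumulates each 16-bit sample scaled by 4096 into a 32-bit value, and downstream code has to shift and clamp that back down to 16 bits. A tiny illustration of the convention (my own example, not the framework's post-processing):

#include <cstdint>
#include <cstdio>

// 0x1000 = 4096 = 1.0 in Q12 fixed point, as set by setVolume() above.
static const uint16_t UNITY_GAIN = 0x1000;

int main() {
    int16_t sample = -12345;
    int32_t vl  = UNITY_GAIN;
    int32_t acc = vl * (int32_t)sample;      // what "out[i] += vl * sample" adds
    // Shifting back by 12 bits recovers the original amplitude at unity gain.
    std::printf("acc=%d  acc>>12=%d\n", acc, acc >> 12);   // -50565120, -12345
}
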
-------------------- Case where resampling is required - end --------------------

So, whether resampling is needed or not, the data is ultimately obtained by calling the AudioStreamInALSA::read function.

The AudioStreamInALSA::read function calls either the snd_pcm_mmap_readi function or the snd_pcm_readi function from alsa-lib to obtain the data:
if (mHandle->mmap)
    n = snd_pcm_mmap_readi(mHandle->handle, buffer, frames);
else
    n = snd_pcm_readi(mHandle->handle, buffer, frames);
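
For reference, a minimal standalone alsa-lib capture sketch built around the same snd_pcm_readi call (the device name "default" and the parameters here are arbitrary choices for the example; this is not the AudioStreamInALSA code):

#include <alsa/asoundlib.h>
#include <cstdio>
#include <vector>

int main() {
    snd_pcm_t* pcm = nullptr;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0) {
        std::fprintf(stderr, "cannot open capture device\n");
        return 1;
    }
    // 16-bit little-endian, interleaved, mono, 44.1 kHz, 500 ms latency.
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       1, 44100, 1, 500000);

    const snd_pcm_uframes_t frames = 1024;
    std::vector<int16_t> buffer(frames);    // mono, 16-bit samples

    // Same call AudioStreamInALSA::read ends up in on the non-mmap path.
    snd_pcm_sframes_t n = snd_pcm_readi(pcm, buffer.data(), frames);
    std::printf("read %ld frames\n", (long)n);

    snd_pcm_close(pcm);
    return 0;
}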
