I recently looked into the playback latency problem in Android. While reading the code, I found that the latency function in the AudioTrack class carries the following comment:
/* Returns this track's latency in milliseconds.
 * This includes the latency due to AudioTrack buffer size, AudioMixer (if any)
 * and audio hardware driver.
 */
Powerful stuff. Just a few days ago I was still computing the latency myself from the buffer size, when all I needed was a single function call.
Let's take a look at the implementation of AudioTrack::latency():
uint32_t AudioTrack::latency() const
{
    return mLatency;
}
Nothing fancy; it simply returns a member variable.
So where is mLatency assigned?
In AudioTrack::createTrack(), mLatency gets its value:
mLatency = afLatency + (1000*mCblk->frameCount) / sampleRate;
Here, afLatency is the hardware latency.
(1000 * mCblk->frameCount) / sampleRate is the latency contributed by the AudioTrack buffer, computed from the audio_track_cblk_t buffer inside AudioTrack.
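To make the buffer term concrete, here is a small standalone sketch of the same arithmetic. The frame count and sample rate are made-up example values, not taken from the code above:

#include <stdint.h>
#include <stdio.h>

int main()
{
    // Hypothetical example values; in AudioTrack they come from
    // mCblk->frameCount and the track's sample rate.
    uint32_t frameCount = 4096;   // frames in the AudioTrack buffer
    uint32_t sampleRate = 44100;  // Hz

    // Same formula as in AudioTrack::createTrack():
    uint32_t bufferLatencyMs = (1000 * frameCount) / sampleRate;

    printf("buffer latency: %u ms\n", bufferLatencyMs); // prints 92
    return 0;
}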
afLatency also originates in AudioTrack::createTrack():
uint32_t afLatency;
if (AudioSystem::getOutputLatency(&afLatency, streamType) != NO_ERROR) {
    return NO_INIT;
}
AudioSystem::getOutputLatency() first looks up the output corresponding to the stream type, then fetches its output descriptor.
If the descriptor is found, the latency stored in it is used; otherwise the AudioFlinger service is obtained and the latency is queried from AudioFlinger.
The code is as follows:
status_t AudioSystem::getOutputLatency(uint32_t* latency, int streamType)
{
    OutputDescriptor *outputDesc;
    audio_io_handle_t output;

    if (streamType == DEFAULT) {
        streamType = MUSIC;
    }

    output = getOutput((stream_type)streamType);
    if (output == 0) {
        return PERMISSION_DENIED;
    }

    gLock.lock();
    outputDesc = AudioSystem::gOutputs.valueFor(output);
    if (outputDesc == 0) {
        gLock.unlock();
        const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
        if (af == 0) return PERMISSION_DENIED;
        *latency = af->latency(output);
    } else {
        *latency = outputDesc->latency;
        gLock.unlock();
    }

    LOGV("getOutputLatency() streamType %d, output %d, latency %d", streamType, output, *latency);

    return NO_ERROR;
}
Let's follow the AudioFlinger path first.
AudioFlinger::latency() finds the PlaybackThread corresponding to the output and returns that thread's latency:
return thread->latency();
Now look at AudioFlinger::PlaybackThread::latency():
uint32_t AudioFlinger::PlaybackThread::latency() const
{
    if (mOutput) {
        return mOutput->latency();
    } else {
        return 0;
    }
}
In my project, mOutput is actually an AudioStreamOutALSA.
The AudioStreamOutALSA::latency() function:
#define USEC_TO_MSEC(x) ((x + 999) / 1000)

uint32_t AudioStreamOutALSA::latency() const
{
    // Converts microseconds to milliseconds.
    // Android wants latency in milliseconds.
    return USEC_TO_MSEC(mHandle->latency);
}
mHandle is assigned in the constructor of the parent class ALSAStreamOps,
using the handle parameter passed to the AudioStreamOutALSA constructor.
The AudioStreamOutALSA object is created in AudioHardwareALSA::openOutputStream():
out = new AudioStreamOutALSA(this, &(*it));
The iterator it is assigned as:
ALSAHandleList::iterator it = mDeviceList.begin();
mDeviceList is populated in the AudioHardwareALSA constructor:
mALSADevice->init(mALSADevice, mDeviceList);
The init function here is actually s_init:
static status_t s_init(alsa_device_t *module, ALSAHandleList &list)
{
    LOGD("Initializing devices for IMX51 ALSA module");

    list.clear();

    for (size_t i = 0; i < ARRAY_SIZE(_defaults); i++) {
        _defaults[i].module = module;
        list.push_back(_defaults[i]);
    }

    return NO_ERROR;
}
The definition of _defaults:
static alsa_handle_t _defaults[] = {
    {
        module      : 0,
        devices     : IMX51_OUT_DEFAULT,
        curDev      : 0,
        curMode     : 0,
        handle      : 0,
        format      : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
        channels    : 2,
        sampleRate  : DEFAULT_SAMPLE_RATE,
        latency     : 200000, // Desired Delay in usec
        bufferSize  : 6144,   // Desired Number of samples
        modPrivate  : (void *)&setDefaultControls,
    },
    {
        module      : 0,
        devices     : IMX51_IN_DEFAULT,
        curDev      : 0,
        curMode     : 0,
        handle      : 0,
        format      : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
        channels    : 2,
        sampleRate  : DEFAULT_SAMPLE_RATE,
        latency     : 250000, // Desired Delay in usec
        bufferSize  : 6144,   // Desired Number of samples
        modPrivate  : (void *)&setDefaultControls,
    },
};
The latency is specified here:
latency : 200000, // Desired Delay in usec
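Plugging this default into the USEC_TO_MSEC macro shown earlier gives the hardware latency in milliseconds. A quick standalone sanity check (not code from the tree):

#include <stdint.h>
#include <stdio.h>

// Same conversion as used by AudioStreamOutALSA::latency().
#define USEC_TO_MSEC(x) ((x + 999) / 1000)

int main()
{
    uint32_t latencyUsec = 200000;                    // playback default from _defaults[]
    uint32_t latencyMsec = USEC_TO_MSEC(latencyUsec); // 200 ms

    printf("hardware output latency: %u ms\n", latencyMsec);
    return 0;
}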
Now let's go back: what happens if the output descriptor is found in AudioSystem::getOutputLatency()?
The output descriptor is created in the constructor of AudioPolicyManagerBase.
Its latency is obtained by calling mpClientInterface->openOutput():
mHardwareOutput = mpClientInterface->openOutput(&outputDesc->mDevice,
                                                &outputDesc->mSamplingRate,
                                                &outputDesc->mFormat,
                                                &outputDesc->mChannels,
                                                &outputDesc->mLatency,
                                                outputDesc->mFlags);
This actually calls AudioFlinger::openOutput(),
which assigns the latency:
if (pLatencyMs) *pLatencyMs = thread->latency();
At this point the two paths converge: the latency again comes from the same PlaybackThread::latency() we traced above.
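Putting it all together, on this platform the value reported by AudioTrack::latency() works out roughly as below. This is only a sketch: the 200000 usec hardware default comes from _defaults[] above, while the buffer parameters are assumed example values:

#include <stdint.h>
#include <stdio.h>

#define USEC_TO_MSEC(x) ((x + 999) / 1000)

int main()
{
    // Hardware driver latency: the ALSA playback default (in usec).
    uint32_t afLatency = USEC_TO_MSEC(200000);                 // 200 ms

    // AudioTrack buffer latency: assumed example buffer, not from the code above.
    uint32_t frameCount = 4096;
    uint32_t sampleRate = 44100;
    uint32_t bufferLatency = (1000 * frameCount) / sampleRate; // ~92 ms

    // Same sum as AudioTrack::createTrack():
    //   mLatency = afLatency + (1000 * mCblk->frameCount) / sampleRate;
    uint32_t totalLatency = afLatency + bufferLatency;

    printf("AudioTrack::latency() would report about %u ms\n", totalLatency); // ~292 ms
    return 0;
}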