When I first read this function, I didn't really understand how the minimum frame count was derived. After going through it again today, it finally makes some sense.
```cpp
status_t AudioTrack::getMinFrameCount(
        int* frameCount,
        int streamType,
        uint32_t sampleRate)
{
    int afSampleRate;
    if (AudioSystem::getOutputSamplingRate(&afSampleRate, streamType) != NO_ERROR) {
        return NO_INIT;
    }
    int afFrameCount;
    if (AudioSystem::getOutputFrameCount(&afFrameCount, streamType) != NO_ERROR) {
        return NO_INIT;
    }
    uint32_t afLatency;
    if (AudioSystem::getOutputLatency(&afLatency, streamType) != NO_ERROR) {
        return NO_INIT;
    }

    // Ensure that buffer depth covers at least audio hardware latency
    uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
    if (minBufCount < 2) minBufCount = 2;

    *frameCount = (sampleRate == 0) ?
            afFrameCount * minBufCount :
            afFrameCount * minBufCount * sampleRate / afSampleRate;
    return NO_ERROR;
}
```
Start with this snippet:
```cpp
int afSampleRate;
if (AudioSystem::getOutputSamplingRate(&afSampleRate, streamType) != NO_ERROR) {
    return NO_INIT;
}
```
As the name suggests, this fetches the sampling rate of the output device. How is it actually obtained? Let's trace it step by step.
AudioSystem::getOutputSamplingRate first looks up the output for the given stream type, then tries to fetch that output's descriptor. If the descriptor is found, it takes the sample rate from it:

```cpp
*samplingRate = outputDesc->samplingRate;
```

Otherwise it falls back to AudioFlinger's sample rate:

```cpp
*samplingRate = af->sampleRate(output);
```
Start with how AudioFlinger's sample rate is obtained. AudioFlinger::sampleRate finds the thread that owns the output and returns that thread's sample rate. AudioFlinger::ThreadBase::sampleRate simply returns the member variable mSampleRate, which is assigned in AudioFlinger::PlaybackThread::readOutputParameters:

```cpp
mSampleRate = mOutput->sampleRate();
```
mOutput->sampleRate() actually dispatches to an AudioStreamOutALSA object; the function is defined in its base class ALSAStreamOps:

```cpp
uint32_t ALSAStreamOps::sampleRate() const
{
    return mHandle->sampleRate;
}
```

mHandle is assigned in the ALSAStreamOps constructor, from the constructor parameter handle.
The AudioStreamOutALSA object is created in AudioHardwareALSA::openOutputStream:

```cpp
out = new AudioStreamOutALSA(this, &(*it));
```

Here it is what becomes the constructor parameter handle, and it is assigned as:

```cpp
ALSAHandleList::iterator it = mDeviceList.begin();
```
mDeviceList is populated in the AudioHardwareALSA constructor:

```cpp
mALSADevice->init(mALSADevice, mDeviceList);
```
The init function is really s_init:

```cpp
static status_t s_init(alsa_device_t *module, ALSAHandleList &list)
{
    LOGD("Initializing devices for IMX51 ALSA module");

    list.clear();

    for (size_t i = 0; i < ARRAY_SIZE(_defaults); i++) {
        _defaults[i].module = module;
        list.push_back(_defaults[i]);
    }

    return NO_ERROR;
}
```
And _defaults is defined as:

```cpp
static alsa_handle_t _defaults[] = {
    {
        module     : 0,
        devices    : IMX51_OUT_DEFAULT,
        curDev     : 0,
        curMode    : 0,
        handle     : 0,
        format     : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
        channels   : 2,
        sampleRate : DEFAULT_SAMPLE_RATE,
        latency    : 200000,                // Desired Delay in usec
        bufferSize : 6144,                  // Desired Number of samples
        modPrivate : (void *)&setDefaultControls,
    },
    {
        module     : 0,
        devices    : IMX51_IN_DEFAULT,
        curDev     : 0,
        curMode    : 0,
        handle     : 0,
        format     : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
        channels   : 2,
        sampleRate : DEFAULT_SAMPLE_RATE,
        latency    : 250000,                // Desired Delay in usec
        bufferSize : 6144,                  // Desired Number of samples
        modPrivate : (void *)&setDefaultControls,
    },
};
```
So this is where sampleRate gets specified:

```cpp
sampleRate : DEFAULT_SAMPLE_RATE,
```

DEFAULT_SAMPLE_RATE is defined as 44100, so afSampleRate is simply 44100.
Now back up: what happens when AudioSystem::getOutputSamplingRate does find the output descriptor? The descriptor is created in the AudioPolicyManagerBase constructor, where its fields (mSamplingRate among them) are filled in by calling mpClientInterface->openOutput:

```cpp
mHardwareOutput = mpClientInterface->openOutput(&outputDesc->mDevice,
                                                &outputDesc->mSamplingRate,
                                                &outputDesc->mFormat,
                                                &outputDesc->mChannels,
                                                &outputDesc->mLatency,
                                                outputDesc->mFlags);
```

This in turn calls AudioFlinger::openOutput, which assigns the sampling rate as:

```cpp
if (pSamplingRate) *pSamplingRate = samplingRate;
```
Where samplingRate itself comes from:

```cpp
AudioStreamOut *output = mAudioHardware->openOutputStream(*pDevices,
                                                          (int *)&format,
                                                          &channels,
                                                          &samplingRate,
                                                          &status);
```

In AudioHardwareALSA::openOutputStream it is set via:

```cpp
err = out->set(format, channels, sampleRate);
```

and ALSAStreamOps::set handles the rate like this:

```cpp
if (rate && *rate > 0) {
    if (mHandle->sampleRate != *rate)
        return BAD_VALUE;
} else if (rate)
    *rate = mHandle->sampleRate;
```

So this path merges back into the one we traced before.
The frame count follows a flow similar to the sample rate, so below only the differences are covered. In AudioFlinger::PlaybackThread::readOutputParameters:

```cpp
mFrameSize = (uint16_t)mOutput->frameSize();
mFrameCount = mOutput->bufferSize() / mFrameSize;
```
frameSize comes from the AudioStreamOut class:

```cpp
/**
 * return the frame size (number of bytes per sample).
 */
uint32_t frameSize() const
{
    return AudioSystem::popCount(channels()) *
           ((format() == AudioSystem::PCM_16_BIT) ? sizeof(int16_t) : sizeof(int8_t));
}
```
ALSAStreamOps::channels is implemented as:

```cpp
uint32_t ALSAStreamOps::channels() const
{
    unsigned int count = mHandle->channels;
    uint32_t channels = 0;

    if (mHandle->curDev & AudioSystem::DEVICE_OUT_ALL)
        switch (count) {
            case 4:
                channels |= AudioSystem::CHANNEL_OUT_BACK_LEFT;
                channels |= AudioSystem::CHANNEL_OUT_BACK_RIGHT;
                // Fall through...
            default:
            case 2:
                channels |= AudioSystem::CHANNEL_OUT_FRONT_RIGHT;
                // Fall through...
            case 1:
                channels |= AudioSystem::CHANNEL_OUT_FRONT_LEFT;
                break;
        }
    else
        switch (count) {
            default:
            case 2:
                channels |= AudioSystem::CHANNEL_IN_RIGHT;
                // Fall through...
            case 1:
                channels |= AudioSystem::CHANNEL_IN_LEFT;
                break;
        }

    return channels;
}
```
In _defaults, channels is defined as:

```cpp
channels : 2,
```
ALSAStreamOps::format is implemented as:

```cpp
int ALSAStreamOps::format() const
{
    int pcmFormatBitWidth;
    int audioSystemFormat;

    snd_pcm_format_t ALSAFormat = mHandle->format;

    pcmFormatBitWidth = snd_pcm_format_physical_width(ALSAFormat);
    switch (pcmFormatBitWidth) {
        case 8:
            audioSystemFormat = AudioSystem::PCM_8_BIT;
            break;
        default:
            LOG_FATAL("Unknown AudioSystem bit width %i!", pcmFormatBitWidth);
        case 16:
            audioSystemFormat = AudioSystem::PCM_16_BIT;
            break;
    }

    return audioSystemFormat;
}
```
In _defaults, format is:

```cpp
format : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
```

PCM_8_BIT and PCM_16_BIT are defined as:

```cpp
// Audio sub formats (see AudioSystem::audio_format).
enum pcm_sub_format {
    PCM_SUB_16_BIT = 0x1, // must be 1 for backward compatibility
    PCM_SUB_8_BIT  = 0x2, // must be 2 for backward compatibility
};
```
So

```cpp
mFrameSize = (uint16_t)mOutput->frameSize();
```

evaluates to 4: two channel bits set in the mask, times sizeof(int16_t).
ALSAStreamOps::bufferSize is implemented as:

```cpp
size_t ALSAStreamOps::bufferSize() const
{
    snd_pcm_uframes_t bufferSize = mHandle->bufferSize;
    snd_pcm_uframes_t periodSize;

    // This descends into the thorny alsa-lib, so set it aside for now.
    snd_pcm_get_params(mHandle->handle, &bufferSize, &periodSize);

    size_t bytes = static_cast<size_t>(snd_pcm_frames_to_bytes(mHandle->handle, bufferSize));

    // Not sure when this happened, but unfortunately it now
    // appears that the bufferSize must be reported as a
    // power of 2. This might be for OSS compatibility.
    for (size_t i = 1; (bytes & ~i) != 0; i <<= 1)
        bytes &= ~i;

    return bytes;
}
```
In _defaults, bufferSize is:

```cpp
bufferSize : 6144, // Desired Number of samples
```

So, leaving the alsa-lib call aside,

```cpp
mFrameCount = mOutput->bufferSize() / mFrameSize;
```

works out to 6144 / 4 = 1536, i.e. afFrameCount is 1536.
The latency flow won't be traced again here. In _defaults, latency is:

```cpp
latency : 200000, // Desired Delay in usec
```

and with the conversion macro

```cpp
#define USEC_TO_MSEC(x) ((x + 999) / 1000)
```

afLatency works out to 200.
With all the inputs known, minBufCount can now be computed:

```cpp
// Ensure that buffer depth covers at least audio hardware latency
uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
```

minBufCount = 200 / ((1000 * 1536) / 44100) = 200 / 34 = 5.
Let's digest that. afFrameCount is how many frames the hardware buffer holds, so afFrameCount / afSampleRate is how long it takes to play one hardware buffer's worth of data, in seconds; multiplying by 1000 converts that to milliseconds, matching the unit of afLatency. The resulting quotient tells us how many times larger than the hardware buffer the software-side buffer must be to cover the hardware latency.
That still isn't fully digested, though. What exactly is hardware latency, and why does the software buffer need to be that multiple? Hardware latency means the hardware side may stall for up to that long, i.e. it may go that long without pulling any data from the software side. Meanwhile the software side keeps writing; to guarantee no data is lost, its buffer must be big enough. How big is big enough? Big enough that even if the hardware goes the maximum allowed time without fetching, the software-side buffer still doesn't overflow. Now it's thoroughly digested.
The final frameCount depends on the sampleRate argument:

```cpp
*frameCount = (sampleRate == 0) ?
        afFrameCount * minBufCount :
        afFrameCount * minBufCount * sampleRate / afSampleRate;
```

If sampleRate is 0, frameCount is 1536 * 5 = 7680; otherwise it is 7680 * sampleRate / 44100.
One last digestion pass. We already worked out how many times larger the software buffer must be than the hardware one, and we know the hardware buffer holds afFrameCount frames, so counting the frames in the software buffer is straightforward. If the client's sampling rate is unspecified, or matches the hardware's, the software buffer holds afFrameCount * minBufCount frames; if it differs, that result is simply scaled by sampleRate / afSampleRate.