Android Audio延遲(latency)


I was recently looking into playback latency on Android. Reading the code, I found this comment on the AudioTrack class's latency function:

    /* Returns this track's latency in milliseconds.
     * This includes the latency due to AudioTrack buffer size, AudioMixer (if any)
     * and audio hardware driver.
     */

Impressive. Just a few days ago I was laboriously computing the latency from the buffer size myself, when a single function call would have done it.

Let's look at the implementation of AudioTrack::latency():

uint32_t AudioTrack::latency() const
{
    return mLatency;
}

Nothing to it: it simply returns a member variable.
So where does mLatency get assigned?

mLatency is assigned in AudioTrack::createTrack:

    mLatency = afLatency + (1000*mCblk->frameCount) / sampleRate;

Here afLatency is the latency of the hardware path.
The second term, (1000*mCblk->frameCount) / sampleRate, converts the length of the audio_track_cblk_t buffer (in frames) into milliseconds; this is the latency contributed by the AudioTrack buffer itself.
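As a standalone illustration of the formula (the function name and the sample values below are mine, not from the Android sources), the buffer term is just a frames-to-milliseconds conversion:

```cpp
#include <cstdint>

// Sketch of the mLatency formula above; a standalone illustration,
// not actual Android code.
uint32_t trackLatencyMs(uint32_t afLatencyMs, uint32_t frameCount, uint32_t sampleRate)
{
    // 1000 * frames / rate is the buffer length in milliseconds
    // (integer division truncates the fractional millisecond).
    return afLatencyMs + (1000u * frameCount) / sampleRate;
}
```

For example, with 200 ms of hardware latency and a 4096-frame buffer at 44100 Hz, the buffer adds 1000*4096/44100 = 92 ms, for 292 ms total.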

afLatency also originates in AudioTrack::createTrack:

    uint32_t afLatency;
    if (AudioSystem::getOutputLatency(&afLatency, streamType) != NO_ERROR) {
        return NO_INIT;
    }

AudioSystem::getOutputLatency first resolves the output for the given stream type, then tries to fetch that output's descriptor.
If the descriptor is found, the latency stored in it is used; otherwise it falls back to AudioFlinger and queries the latency there.
The code:

status_t AudioSystem::getOutputLatency(uint32_t* latency, int streamType)
{
    OutputDescriptor *outputDesc;
    audio_io_handle_t output;

    if (streamType == DEFAULT) {
        streamType = MUSIC;
    }

    output = getOutput((stream_type)streamType);
    if (output == 0) {
        return PERMISSION_DENIED;
    }

    gLock.lock();
    outputDesc = AudioSystem::gOutputs.valueFor(output);
    if (outputDesc == 0) {
        gLock.unlock();
        const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
        if (af == 0) return PERMISSION_DENIED;
        *latency = af->latency(output);
    } else {
        *latency = outputDesc->latency;
        gLock.unlock();
    }

    LOGV("getOutputLatency() streamType %d, output %d, latency %d", streamType, output, *latency);

    return NO_ERROR;
}

First, the AudioFlinger side.
AudioFlinger::latency looks up the PlaybackThread for the given output, then returns that thread's latency:

    return thread->latency();

And AudioFlinger::PlaybackThread::latency():

uint32_t AudioFlinger::PlaybackThread::latency() const
{
    if (mOutput) {
        return mOutput->latency();
    }
    else {
        return 0;
    }
}

In the project I was working on, mOutput is an AudioStreamOutALSA.
Its latency() function:

#define USEC_TO_MSEC(x) ((x + 999) / 1000)

uint32_t AudioStreamOutALSA::latency() const
{
    // Convert microseconds to milliseconds.
    // Android wants latency in milliseconds.
    return USEC_TO_MSEC (mHandle->latency);
}
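Note that the macro rounds up, so any partial millisecond still counts as a whole one. A minimal standalone equivalent (the function name is mine):

```cpp
#include <cstdint>

// Round-up microsecond-to-millisecond conversion, equivalent to the
// USEC_TO_MSEC macro above (the function name is illustrative).
inline uint32_t usecToMsec(uint32_t us)
{
    // Adding 999 before the integer division rounds up instead of truncating.
    return (us + 999) / 1000;
}
```

So the 200000 us configured in the HAL comes out as exactly 200 ms, while 1001 us would report as 2 ms rather than 1.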

mHandle is assigned in the constructor of the parent class ALSAStreamOps, using the handle parameter passed to the AudioStreamOutALSA constructor.

The AudioStreamOutALSA object is created in AudioHardwareALSA::openOutputStream:

            out = new AudioStreamOutALSA(this, &(*it));

where it is assigned as:

ALSAHandleList::iterator it = mDeviceList.begin();

mDeviceList is populated in the AudioHardwareALSA constructor:

            mALSADevice->init(mALSADevice, mDeviceList);

init is really the s_init function:

static status_t s_init(alsa_device_t *module, ALSAHandleList &list)
{
    LOGD("Initializing devices for IMX51 ALSA module");

    list.clear();

    for (size_t i = 0; i < ARRAY_SIZE(_defaults); i++) {
        _defaults[i].module = module;
        list.push_back(_defaults[i]);
    }

    return NO_ERROR;
}

And _defaults is defined as:

static alsa_handle_t _defaults[] = {
    {
        module      : 0,
        devices     : IMX51_OUT_DEFAULT,
        curDev      : 0,
        curMode     : 0,
        handle      : 0,
        format      : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
        channels    : 2,
        sampleRate  : DEFAULT_SAMPLE_RATE,
        latency     : 200000, // Desired Delay in usec
        bufferSize  : 6144, // Desired Number of samples
        modPrivate  : (void *)&setDefaultControls,
    },
    {
        module      : 0,
        devices     : IMX51_IN_DEFAULT,
        curDev      : 0,
        curMode     : 0,
        handle      : 0,
        format      : SND_PCM_FORMAT_S16_LE, // AudioSystem::PCM_16_BIT
        channels    : 2,
        sampleRate  : DEFAULT_SAMPLE_RATE,
        latency     : 250000, // Desired Delay in usec
        bufferSize  : 6144, // Desired Number of samples
        modPrivate  : (void *)&setDefaultControls,
    },
};

So this is where the latency is actually specified:

        latency     : 200000, // Desired Delay in usec

Now let's back up: what happens when AudioSystem::getOutputLatency does find the output's descriptor?

The output descriptor is created in the AudioPolicyManagerBase constructor, where the latency is obtained by calling mpClientInterface->openOutput:

    mHardwareOutput = mpClientInterface->openOutput(&outputDesc->mDevice,
                                    &outputDesc->mSamplingRate,
                                    &outputDesc->mFormat,
                                    &outputDesc->mChannels,
                                    &outputDesc->mLatency,
                                    outputDesc->mFlags);

This in turn calls AudioFlinger::openOutput, where the latency is assigned:

        if (pLatencyMs) *pLatencyMs = thread->latency();

And here this stream merges back into the river we followed earlier.
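To summarize both paths in one sketch: the descriptor cache is consulted first, the AudioFlinger query is the fallback, and either way the value ultimately traces back to the latency field in the ALSA handle table. A toy model (all of these names are mine, none are real Android APIs):

```cpp
#include <cstdint>
#include <map>

// Toy model of the two lookup paths in AudioSystem::getOutputLatency.
struct OutputDesc { uint32_t latencyMs; };

// Stands in for af->latency(output): the value that ultimately comes
// from the ALSA handle table (200000 us -> 200 ms on this hardware).
uint32_t queryFlingerLatency(int /*output*/)
{
    return 200;
}

uint32_t getOutputLatencyMs(const std::map<int, OutputDesc>& cache, int output)
{
    auto it = cache.find(output);
    if (it != cache.end())
        return it->second.latencyMs;    // descriptor found: use cached value
    return queryFlingerLatency(output); // fall back to the AudioFlinger query
}
```

Both branches report the same number in the steady state, since the cached descriptor's latency was itself filled in via AudioFlinger::openOutput.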
