Android java-layer audio analysis and understanding (3) Call-related



Many applications on Android need to play audio. When multiple applications want to output audio at the same time, should all of them be heard, or only one of them? If only one, which one, and by what criteria? To manage these relationships, Android 2.3 introduced the AudioFocus mechanism, and it has been in use ever since.

 

1 Introduction to AudioFocus

 

AudioFocus is a preemptive mechanism with essentially no notion of priority. In general, the application that most recently requested and obtained AudioFocus causes the previous holder to be suspended. There is one special case, however: phone calls. Because calling is a core function of the phone, Telephony holds AudioFocus from the moment the phone starts ringing until the call ends. No application can take the audio focus away from Telephony, but Telephony can take AudioFocus away from any application.

 

The AudioFocus states in Android 6.0 (the original figure is omitted here) are the constants defined in AudioManager: AUDIOFOCUS_GAIN, AUDIOFOCUS_GAIN_TRANSIENT, AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK and AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE on the Gain side, and AUDIOFOCUS_LOSS, AUDIOFOCUS_LOSS_TRANSIENT and AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK on the Loss side.

As these states show, Gain and Loss come in matching pairs: when a later application gains AudioFocus, the earlier application receives the corresponding Loss notification. What an application actually does when it receives that Loss notification is entirely up to the application itself, because AudioFocus is a cooperative convention rather than a mandatory mechanism. Following it only improves the user experience; ignoring it does not break other applications or the system.

When an application plays audio and uses the AudioFocus mechanism, it can pause its output when another application obtains AudioFocus. However, an application can also play audio without using AudioFocus at all; in that case all of its audio output logic is controlled by the application itself and is not affected by the AudioFocus mechanism.
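As a concrete illustration of the client side, below is a minimal sketch of how an ordinary application typically requests AudioFocus through AudioManager and reacts to the Loss notifications in its listener. The AudioManager calls are the standard SDK API (the pre-API-26 form that Android 6.0 uses); the MusicPlayer type and its start()/stop()/pause()/resume()/duck() methods are hypothetical placeholders.

    import android.content.Context;
    import android.media.AudioManager;

    /** Minimal sketch of a client using AudioFocus; MusicPlayer is a hypothetical placeholder. */
    class FocusAwarePlayer implements AudioManager.OnAudioFocusChangeListener {
        /** Hypothetical player interface, just for this sketch. */
        interface MusicPlayer { void start(); void stop(); void pause(); void resume(); void duck(); }

        private final AudioManager mAudioManager;
        private final MusicPlayer mPlayer;

        FocusAwarePlayer(Context context, MusicPlayer player) {
            mAudioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            mPlayer = player;
        }

        void play() {
            // ask for permanent focus on the music stream
            int result = mAudioManager.requestAudioFocus(
                    this, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
            if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
                mPlayer.start();                                 // only start once focus is granted
            }
        }

        @Override
        public void onAudioFocusChange(int focusChange) {
            switch (focusChange) {
                case AudioManager.AUDIOFOCUS_LOSS:               // lost for good, e.g. another player
                    mPlayer.stop();
                    break;
                case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:     // temporary, e.g. an incoming call
                    mPlayer.pause();
                    break;
                case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
                    mPlayer.duck();                              // keep playing at a lower volume
                    break;
                case AudioManager.AUDIOFOCUS_GAIN:               // focus came back
                    mPlayer.resume();
                    break;
            }
        }
    }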

2 AudioFocus implementation

 

The AudioFocus mechanism is actually equivalent to a stack.

 


As the (omitted) diagram illustrates, the AudioFocus mechanism is implemented as a stack. The application at the top of the stack holds AudioFocus and can output audio, while applications below the top temporarily lose AudioFocus until the applications above them exit or give up AudioFocus.
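To make the stack model concrete, here is a deliberately simplified sketch of how such a focus stack behaves; this is only an illustration of the idea, not the real MediaFocusControl code. Pushing a new owner notifies the previous top of a loss, and removing the top restores focus to the entry underneath.

    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Toy illustration of the focus-stack idea; not the real MediaFocusControl. */
    class ToyFocusStack {
        interface Client { void onFocusChange(boolean hasFocus); }

        private final Deque<Client> mStack = new ArrayDeque<>();

        /** The caller preempts whoever is currently on top of the stack. */
        synchronized void request(Client client) {
            Client previousTop = mStack.peek();
            if (previousTop != null && previousTop != client) {
                previousTop.onFocusChange(false);   // previous owner loses focus
            }
            mStack.remove(client);                  // a client appears at most once
            mStack.push(client);
            client.onFocusChange(true);             // new top of stack gains focus
        }

        /** The caller gives focus up; whatever is underneath regains it. */
        synchronized void abandon(Client client) {
            boolean wasTop = mStack.peek() == client;
            mStack.remove(client);
            Client newTop = mStack.peek();
            if (wasTop && newTop != null) {
                newTop.onFocusChange(true);
            }
        }
    }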

The implementation of AudioFocus starts in AudioManager.java.

Generally, an application calls AudioManager's requestAudioFocus() method to obtain AudioFocus. Here, AudioManager does not simply grab a handle to AudioService and forward the parameters. Instead, after a series of checks and wrapping (the method is overloaded twice), it saves the information related to the requesting application and then calls requestAudioFocus() in AudioService, which processes the AudioFocus request further.


 

So let's take a look at the main content of AudioManager's requestAudioFocus():

    public int requestAudioFocus(OnAudioFocusChangeListener l,
            @NonNull AudioAttributes requestAttributes,
            int durationHint,
            int flags,
            AudioPolicy ap) throws IllegalArgumentException {
        ......  // validate the parameters
        int status = AUDIOFOCUS_REQUEST_FAILED;
        // store the listener of the application requesting AudioFocus in mAudioFocusIdListenerMap
        registerAudioFocusListener(l);
        IAudioService service = getService();
        try {
            // call AudioService's requestAudioFocus()
            status = service.requestAudioFocus(requestAttributes, durationHint, mICallBack,
                    mAudioFocusDispatcher,
                    getIdForAudioFocusListener(l),
                    getContext().getOpPackageName() /* package name */,
                    flags,
                    ap != null ? ap.cb() : null);
        } catch (RemoteException e) {
            Log.e(TAG, "Can't call requestAudioFocus() on AudioService:", e);
        }
        return status;
    }

In requestAudioFocus(), the input parameters are checked and processed step by step, and the application's listener is stored in mAudioFocusIdListenerMap. Among the parameters forwarded to AudioService are mAudioFocusDispatcher and getIdForAudioFocusListener(l). getIdForAudioFocusListener(l) is clearly just an identifier string for the listener, but what exactly is mAudioFocusDispatcher for?
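As for the listener bookkeeping itself, on the AudioManager side it looks roughly like the following; this is a simplified sketch based on AOSP, so details such as the exact id format may differ. The listener is stored in a map keyed by a string id, and that same id string is what travels across Binder to AudioService and back.

    // Roughly how AudioManager keeps track of listeners (simplified; details may differ).
    private final HashMap<String, OnAudioFocusChangeListener> mAudioFocusIdListenerMap =
            new HashMap<String, OnAudioFocusChangeListener>();

    private String getIdForAudioFocusListener(OnAudioFocusChangeListener l) {
        // the id only has to be unique per listener instance within this AudioManager
        return (l == null) ? this.toString() : this.toString() + l.toString();
    }

    private void registerAudioFocusListener(OnAudioFocusChangeListener l) {
        synchronized (mFocusListenerLock) {
            mAudioFocusIdListenerMap.put(getIdForAudioFocusListener(l), l);
        }
    }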

 

Next, let's take a look at mAudioFocusDispatcher:

 

    private final IAudioFocusDispatcher mAudioFocusDispatcher = new IAudioFocusDispatcher.Stub() {
        public void dispatchAudioFocusChange(int focusChange, String id) {
            Message m = mAudioFocusEventHandlerDelegate.getHandler().obtainMessage(focusChange, id);
            mAudioFocusEventHandlerDelegate.getHandler().sendMessage(m);
        }
    };

 

As shown above, mAudioFocusDispatcher is an implementation of IAudioFocusDispatcher.Stub. Its only method takes the focus-change event and hands it to mAudioFocusEventHandlerDelegate as a message. So what is mAudioFocusEventHandlerDelegate? Let's continue.

    private class FocusEventHandlerDelegate {
        private final Handler mHandler;

        FocusEventHandlerDelegate() {
            Looper looper;
            if ((looper = Looper.myLooper()) == null) {
                looper = Looper.getMainLooper();
            }
            if (looper != null) {
                // implement the event handler delegate to receive audio focus events
                mHandler = new Handler(looper) {
                    @Override
                    public void handleMessage(Message msg) {
                        OnAudioFocusChangeListener listener = null;
                        synchronized(mFocusListenerLock) {
                            listener = findFocusListener((String)msg.obj);
                        }
                        if (listener != null) {
                            Log.d(TAG, "AudioManager dispatching onAudioFocusChange("
                                    + msg.what + ") for " + msg.obj);
                            listener.onAudioFocusChange(msg.what);
                        }
                    }
                };
            } else {
                mHandler = null;
            }
        }

        Handler getHandler() {
            return mHandler;
        }
    }

From the code above we can see that the job of mAudioFocusEventHandlerDelegate is simply to call back the onAudioFocusChange() method of the listener that was registered earlier. So why not call l.onAudioFocusChange(focusChange) directly instead of going through a Handler? The reason is that the dispatch arrives on a Binder thread: if the listener code supplied by the application throws an exception, blocks, or even delays maliciously, the other end of the Binder call could be blocked or crash as a result. By merely posting a message, AudioService fulfills its notification obligation and hands the actual callback to another thread, keeping itself as far away as possible from whatever the callback does.
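The same defensive pattern can be written down in isolation: the method invoked over Binder does nothing but post a message, and the potentially misbehaving user callback runs later on a Looper thread. A minimal generic sketch (not the framework's code):

    import android.os.Handler;
    import android.os.Looper;
    import android.os.Message;

    /** Generic sketch: decouple a Binder callback from the caller's Binder thread. */
    class SafeDispatcher {
        interface Callback { void onEvent(int what); }

        private final Callback mUserCallback;   // may block or throw; we do not trust it
        private final Handler mHandler;

        SafeDispatcher(Callback cb, Looper looper) {
            mUserCallback = cb;
            mHandler = new Handler(looper) {
                @Override
                public void handleMessage(Message msg) {
                    // runs on the Looper thread, far away from the Binder thread
                    mUserCallback.onEvent(msg.what);
                }
            };
        }

        /** Called on a Binder thread: just enqueue the event and return immediately. */
        void dispatchFromBinder(int what) {
            mHandler.sendMessage(mHandler.obtainMessage(what));
        }
    }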

 

So now we can go to AudioService and see how the request is processed there.

AudioService.requestAudioFocus() is as follows:

    public int requestAudioFocus(AudioAttributes aa, int durationHint, IBinder cb,
            IAudioFocusDispatcher fd, String clientId, String callingPackageName,
            int flags, IAudioPolicyCallback pcb) {
        // permission checks
        if ((flags & AudioManager.AUDIOFOCUS_FLAG_LOCK) == AudioManager.AUDIOFOCUS_FLAG_LOCK) {
            // strange place 1
            if (AudioSystem.IN_VOICE_COMM_FOCUS_ID.equals(clientId)) {
                if (PackageManager.PERMISSION_GRANTED != mContext.checkCallingOrSelfPermission(
                        android.Manifest.permission.MODIFY_PHONE_STATE)) {
                    Log.e(TAG, "Invalid permission to (un)lock audio focus", new Exception());
                    return AudioManager.AUDIOFOCUS_REQUEST_FAILED;
                }
            } else {
                // the caller must have registered an AudioPolicy to lock AudioFocus
                synchronized (mAudioPolicies) {
                    // strange place 2
                    if (!mAudioPolicies.containsKey(pcb.asBinder())) {
                        Log.e(TAG, "Invalid unregistered AudioPolicy to (un)lock audio focus");
                        return AudioManager.AUDIOFOCUS_REQUEST_FAILED;
                    }
                }
            }
        }

        return mMediaFocusControl.requestAudioFocus(aa, durationHint, cb, fd,
                clientId, callingPackageName, flags);
    }

If you look closely at the code above, two places may seem strange: first AudioManager.AUDIOFOCUS_FLAG_LOCK, and second mAudioPolicies. AudioManager.AUDIOFOCUS_FLAG_LOCK means that the caller wants to lock AudioFocus. To obtain AudioFocus in this locked form, an application (other than Telephony's voice-call client) must first have registered an AudioPolicy via registerAudioPolicy(); this is where AudioPolicy comes in.

 

Note that this java-layer AudioPolicy has nothing to do with AudioPolicyService: the former only provides simple audio routing and audio-focus management at the java layer, while the latter handles the audio routing policy at the C++ layer. AudioPolicy and AudioManager.AUDIOFOCUS_FLAG_LOCK are rarely used in practice, so they are only discussed briefly in section 3 below.

 

Back in AudioService.requestAudioFocus(), we can see that the real handling of AudioFocus is done by MediaFocusControl.


So now let's take a look at how MediaFocusControl works.

 

    protected int requestAudioFocus(AudioAttributes aa, int focusChangeHint, IBinder cb,
            IAudioFocusDispatcher fd, String clientId, String callingPackageName, int flags) {
        ......  // skip various checks
        synchronized (mAudioFocusLock) {
            ......
            // use an AudioFocusDeathHandler to be notified if the requesting client dies
            AudioFocusDeathHandler afdh = new AudioFocusDeathHandler(cb);
            try {
                cb.linkToDeath(afdh, 0);
            } catch (RemoteException e) {
                ......
                return AudioManager.AUDIOFOCUS_REQUEST_FAILED;
            }

            // mFocusStack is the stack used by the AudioFocus mechanism.
            // if the application requesting AudioFocus already holds it, do nothing
            if (!mFocusStack.empty() && mFocusStack.peek().hasSameClient(clientId)) {
                ......
            }

            // if a request from this client is already in the stack, remove it first
            removeFocusStackEntry(clientId, false /* signal */, false /* notifyFocusFollowers */);

            // create a new FocusRequester object
            final FocusRequester nfr = new FocusRequester(aa, focusChangeHint, flags, fd, cb,
                    clientId, afdh, callingPackageName, Binder.getCallingUid(), this);
            if (focusGrantDelayed) {
                // focus can only be granted with a delay (e.g. during a CALL):
                // put the request into the stack below the locked focus owners
                final int requestResult = pushBelowLockedFocusOwners(nfr);
                if (requestResult != AudioManager.AUDIOFOCUS_REQUEST_FAILED) {
                    notifyExtPolicyFocusGrant_syncAf(nfr.toAudioFocusInfo(), requestResult);
                }
                return requestResult;
            } else {
                if (!mFocusStack.empty()) {
                    // make the previous holder of AudioFocus lose it
                    propagateFocusLossFromGain_syncAf(focusChangeHint);
                }
                // put the current request on the top of the stack
                mFocusStack.push(nfr);
            }
            // notify any external policy that AudioFocus was granted
            notifyExtPolicyFocusGrant_syncAf(nfr.toAudioFocusInfo(),
                    AudioManager.AUDIOFOCUS_REQUEST_GRANTED);
        } // synchronized(mAudioFocusLock)

        return AudioManager.AUDIOFOCUS_REQUEST_GRANTED;
    }

 

Here we can see the real requestAudioFocus() processing. The main steps are:

1) Determine whether AudioFocus can only be granted with a delay (i.e. during a call) and whether the application already holds AudioFocus.

2) Make the previous holder of AudioFocus lose it, and call back its onAudioFocusChange() method through the chain of operations described above (the process is straightforward and not detailed here; see the sketch after this list for the gist).

3) Place the current application's FocusRequester on top of the stack and notify the system that the current application has obtained AudioFocus.
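Step 2 corresponds to propagateFocusLossFromGain_syncAf(): the entries already in the stack are told which kind of Loss matches the newcomer's Gain, and that loss code is then delivered to each client through its IAudioFocusDispatcher, which is exactly the Handler path shown earlier in AudioManager. A simplified sketch of the Gain-to-Loss mapping is below; the real FocusRequester logic also takes the flags and the holder's current state into account.

    // Simplified sketch of how a Gain hint maps to the Loss sent to previous holders
    // (the real FocusRequester logic is more involved than this).
    private static int focusLossForGainRequest(int gainRequest) {
        switch (gainRequest) {
            case AudioManager.AUDIOFOCUS_GAIN:
                return AudioManager.AUDIOFOCUS_LOSS;                      // permanent loss
            case AudioManager.AUDIOFOCUS_GAIN_TRANSIENT:
            case AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE:
                return AudioManager.AUDIOFOCUS_LOSS_TRANSIENT;            // temporary loss
            case AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK:
                return AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK;   // may keep playing quietly
            default:
                throw new IllegalArgumentException("Unknown gain request: " + gainRequest);
        }
    }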

 

The operation for giving up AudioFocus, abandonAudioFocus(), follows a path very similar to requestAudioFocus(), so it is not detailed here.
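On the client side, giving up focus is just one call, paired with the same listener that was used to request it. Continuing the hypothetical FocusAwarePlayer sketch from section 1:

    // Continuing the hypothetical FocusAwarePlayer sketch: release focus when playback ends.
    void stop() {
        mPlayer.stop();
        // the same listener instance passed to requestAudioFocus() identifies this client
        mAudioManager.abandonAudioFocus(this);
    }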

 

3 Use of AudioPolicy

Finally, a brief introduction to the java-layer AudioPolicy. AudioService holds a HashMap, mAudioPolicies, which stores information about every application that has called registerAudioPolicy(). Through registerAudioPolicy() together with AUDIOFOCUS_FLAG_LOCK, AudioFocus can be locked so that applications that have not registered an AudioPolicy cannot take it away. However, this capability is rarely used.
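For completeness, registering an AudioPolicy looks roughly like the sketch below. Note that android.media.audiopolicy.AudioPolicy and AudioManager.registerAudioPolicy() are a hidden @SystemApi surface in Android 6.0, gated by the MODIFY_AUDIO_ROUTING permission, so an ordinary SDK application cannot call them; treat the exact signatures as an approximation, and the context/audioManager variables as assumed.

    // Rough sketch only: AudioPolicy and AudioManager.registerAudioPolicy() are hidden
    // @SystemApi in Android 6.0 and require android.permission.MODIFY_AUDIO_ROUTING,
    // so a normal application cannot call them.
    AudioPolicy policy = new AudioPolicy.Builder(context).build();
    int status = audioManager.registerAudioPolicy(policy);   // returns a success/error code
    // once registered, the policy's callback Binder ends up in AudioService.mAudioPolicies,
    // which is exactly the map consulted by the AUDIOFOCUS_FLAG_LOCK check shown earlier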

This concludes this simple analysis of the call-related audio focus handling. Again, AudioFocus is a recommended, non-mandatory mechanism: an application that does not use AudioFocus can still output and capture audio without restriction.
 

 


 
