iOS Audio Playback (2): AudioSession


Preface

This is the second article in the iOS audio playback series.

Before implementing the seven steps described in the previous article, you must first deal with a troublesome problem: AudioSession.

Introduction to AudioSession

The main functions of AudioSession include:

  1. Declaring how your app intends to use audio (playback? recording?)
  2. Selecting appropriate input/output devices for your app (for example, input from the microphone; output to headphones, the phone's speaker, or AirPlay)
  3. Coordinating your app's audio behavior with the system and other apps (for example, pausing when interrupted by a phone call, resuming when the call ends, and muting the song when the mute switch is flipped)

There are two classes related to AudioSession:

1. AudioSession in AudioToolbox
2. AVAudioSession in AVFoundation

The AudioSession class has been marked deprecated in the iOS 7 SDK, while the AVAudioSession class has existed since iOS 3; however, many of its methods and properties only became available in iOS 6 or even iOS 7. You can therefore choose according to the following criteria:

• If your minimum supported version is iOS 5, you can use AudioSession, or you can use AVAudioSession;
• If your minimum supported version is iOS 6 or later, use AVAudioSession.

The following uses the AudioSession class as the example to describe how to use AudioSession-related features (unfortunately, I need to support iOS 5... T-T). If you use AVAudioSession, you can find the corresponding methods in its header file; I will point out the things that need attention as we go.

Note: When AVAudioPlayer/AVPlayer is used, you do not need to worry about most AudioSession-related issues, since Apple has encapsulated the AudioSession handling. However, you still need to respond after the music is interrupted (for example, when playback is paused, the UI state also needs to change). This should be doable through KVO... I have never tried it, just guessing >_<.

Initialize AudioSession

To use the AudioSession class, you first need to call the initialization method:

extern OSStatus AudioSessionInitialize(CFRunLoopRef inRunLoop,
                                       CFStringRef inRunLoopMode,
                                       AudioSessionInterruptionListener inInterruptionListener,
                                       void *inClientData);

The first two parameters are generally set to NULL, which indicates that the AudioSession runs on the main thread (this does not mean that audio-related processing runs on the main thread, only the AudioSession callbacks). The third parameter takes a function of type AudioSessionInterruptionListener, which is used as the callback when the AudioSession is interrupted. The fourth parameter is the object passed back to the callback when an interruption occurs (it comes back as inClientData in the function below; it can be understood like the context parameter in UIView animations).

typedef void (*AudioSessionInterruptionListener)(void *inClientData, UInt32 inInterruptionState);

This is just the beginning. There are two problems:

First, although AudioSessionInitialize can be called multiple times, the AudioSessionInterruptionListener can only be set once: the interruption callback is effectively a static method. Once initialization succeeds, all interruptions will be delivered to that method, even if you call AudioSessionInitialize again and pass in another static method as the parameter; the method set the first time is still the one called back on interruption.

This scenario is not uncommon. For example, your app needs to both play songs and record. You cannot know which feature the user will use first, so you would have to call AudioSessionInitialize to register an interruption method in both the playback module and the recording module, yet the interruption callback would only ever reach the module that registered first. Painful... So the best way to use AudioSession is to create a separate class to manage it, receive the interruption callbacks in one place, and post custom interruption notifications; modules that need the AudioSession then observe those notifications and react accordingly.

Apple also noticed this, so in AVAudioSession the initialize method was dropped in favor of the singleton method sharedInstance. On iOS 5, all interruptions still have to be handled by setting an id<AVAudioSessionDelegate> and implementing its callback methods, which has the same problem described above, so with AVAudioSession on iOS 5 a dedicated class still needs to manage the AudioSession. From iOS 6 on, Apple finally changed interruptions to notifications... now that is sensible.

Second, consider the fourth parameter of AudioSessionInitialize, inClientData, which comes back as the first parameter of the callback. As mentioned above, the interruption callback is a static method, and the purpose of this parameter is to provide context during the callback. Therefore inClientData must be an object with a sufficiently long lifecycle (assuming you actually need this parameter); if the object is dealloc'ed, the inClientData received in the callback is a dangling pointer. This is another reason to build a class that manages the AudioSession on its own: its lifetime is as long as the AudioSession's, so the context can be stored safely in that class. A minimal sketch of such a manager follows.
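For illustration only, here is one way such a manager could look (all names here, MyAudioSession, MyInterruptionListener, MyAudioInterruptionNotification, MyAudioInterruptionStateKey, are hypothetical; error handling is omitted):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

NSString *const MyAudioInterruptionNotification = @"MyAudioInterruptionNotification";
NSString *const MyAudioInterruptionStateKey = @"MyAudioInterruptionStateKey";

@interface MyAudioSession : NSObject
+ (instancetype)sharedInstance;
@end

// Static callback: rebroadcasts every interruption as a notification
// so that any module (player, recorder, ...) can observe it.
static void MyInterruptionListener(void *inClientData, UInt32 inInterruptionState)
{
    [[NSNotificationCenter defaultCenter] postNotificationName:MyAudioInterruptionNotification
                                                        object:nil
                                                      userInfo:@{MyAudioInterruptionStateKey: @(inInterruptionState)}];
}

@implementation MyAudioSession
+ (instancetype)sharedInstance
{
    static MyAudioSession *instance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        instance = [[MyAudioSession alloc] init];
        // Initialized exactly once; inClientData is the singleton, so its
        // lifetime matches the AudioSession and never becomes a dangling pointer.
        AudioSessionInitialize(NULL, NULL, MyInterruptionListener, (__bridge void *)instance);
    });
    return instance;
}
@end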

Listen for RouteChange events

If you want to implement behavior like "pause the song when the headset is unplugged", you need to listen for the RouteChange event:

extern OSStatus AudioSessionAddPropertyListener(AudioSessionPropertyID inID,
                                                AudioSessionPropertyListener inProc,
                                                void *inClientData);

typedef void (*AudioSessionPropertyListener)(void *inClientData,
                                             AudioSessionPropertyID inID,
                                             UInt32 inDataSize,
                                             const void *inData);

Call the method above with kAudioSessionProperty_AudioRouteChange as the AudioSessionPropertyID parameter and the corresponding callback as the AudioSessionPropertyListener parameter. The inClientData parameter works the same as in AudioSessionInitialize.

Since this is again a static callback, it should also be managed centrally. When the callback fires, cast the first parameter inData to CFDictionaryRef and read the value for the kAudioSession_AudioRouteChangeKey_Reason key (it should be a CFNumberRef). With that reason in hand you can post a custom notification to the other modules (for example, kAudioSessionRouteChangeReason_OldDeviceUnavailable can be used to implement "pause the song when the headset is unplugged").
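As a sketch of what that can look like (the callback and notification names are hypothetical; the dictionary handling follows the pattern in Apple's AudioSession sample code):

// Register once, e.g. from the managing class:
// AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
//                                 MyAudioRouteChangeListener, NULL);
static void MyAudioRouteChangeListener(void *inClientData,
                                       AudioSessionPropertyID inID,
                                       UInt32 inDataSize,
                                       const void *inData)
{
    if (inID != kAudioSessionProperty_AudioRouteChange)
    {
        return;
    }

    // inData is a CFDictionaryRef describing the route change.
    CFDictionaryRef routeChangeDictionary = (CFDictionaryRef)inData;
    CFNumberRef reasonRef = CFDictionaryGetValue(routeChangeDictionary,
                                                 CFSTR(kAudioSession_AudioRouteChangeKey_Reason));
    SInt32 reason = 0;
    CFNumberGetValue(reasonRef, kCFNumberSInt32Type, &reason);

    if (reason == kAudioSessionRouteChangeReason_OldDeviceUnavailable)
    {
        // Headset unplugged: broadcast so the player module can pause.
        [[NSNotificationCenter defaultCenter] postNotificationName:@"MyAudioRouteChangeNotification"
                                                            object:nil];
    }
}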

// AudioSession's AudioRouteChangeReason enumeration
enum {
    kAudioSessionRouteChangeReason_Unknown = 0,
    kAudioSessionRouteChangeReason_NewDeviceAvailable = 1,
    kAudioSessionRouteChangeReason_OldDeviceUnavailable = 2,
    kAudioSessionRouteChangeReason_CategoryChange = 3,
    kAudioSessionRouteChangeReason_Override = 4,
    kAudioSessionRouteChangeReason_WakeFromSleep = 6,
    kAudioSessionRouteChangeReason_NoSuitableRouteForCategory = 7,
    kAudioSessionRouteChangeReason_RouteConfigurationChange = 8
};

// AVAudioSession's AudioRouteChangeReason enumeration
typedef NS_ENUM(NSUInteger, AVAudioSessionRouteChangeReason) {
    AVAudioSessionRouteChangeReasonUnknown = 0,
    AVAudioSessionRouteChangeReasonNewDeviceAvailable = 1,
    AVAudioSessionRouteChangeReasonOldDeviceUnavailable = 2,
    AVAudioSessionRouteChangeReasonCategoryChange = 3,
    AVAudioSessionRouteChangeReasonOverride = 4,
    AVAudioSessionRouteChangeReasonWakeFromSleep = 6,
    AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory = 7,
    AVAudioSessionRouteChangeReasonRouteConfigurationChange NS_ENUM_AVAILABLE_IOS(7_0) = 8
};

Note: For AVAudioSession on iOS 5, AVAudioSessionDelegate does not define route-change methods, so you still need the C API above to listen for them. On iOS 6 and later, you can observe AVAudioSession's notification directly.
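On iOS 6 and later the equivalent is simply observing the notification (a sketch; the selector name routeChanged: is hypothetical):

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(routeChanged:)
                                             name:AVAudioSessionRouteChangeNotification
                                           object:nil];

- (void)routeChanged:(NSNotification *)notification
{
    AVAudioSessionRouteChangeReason reason =
        [notification.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable)
    {
        // e.g. headset unplugged: pause playback
    }
}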

Below are two route-related helper methods implemented with the AudioSession class (AVAudioSession usage is analogous).

1. Determine whether headphones are plugged in:

+ (BOOL)usingHeadset
{
#if TARGET_IPHONE_SIMULATOR
    return NO;
#endif
    CFStringRef route;
    UInt32 propertySize = sizeof(CFStringRef);
    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &propertySize, &route);

    BOOL hasHeadset = NO;
    if ((route == NULL) || (CFStringGetLength(route) == 0))
    {
        // Silent Mode
    }
    else
    {
        /* Known values of route:
         * Headset
         * Headphone
         * Speaker
         * SpeakerAndMicrophone
         * HeadphonesAndMicrophone
         * HeadsetInOut
         * ReceiverAndMicrophone
         * Lineout
         */
        NSString *routeStr = (__bridge NSString *)route;
        NSRange headphoneRange = [routeStr rangeOfString:@"Headphone"];
        NSRange headsetRange = [routeStr rangeOfString:@"Headset"];
        if (headphoneRange.location != NSNotFound)
        {
            hasHeadset = YES;
        }
        else if (headsetRange.location != NSNotFound)
        {
            hasHeadset = YES;
        }
    }

    if (route)
    {
        CFRelease(route);
    }
    return hasHeadset;
}

2. Determine whether AirPlay is active (from StackOverflow):

+ (BOOL)isAirplayActived
{
    CFDictionaryRef currentRouteDescriptionDictionary = nil;
    UInt32 dataSize = sizeof(currentRouteDescriptionDictionary);
    AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription, &dataSize, &currentRouteDescriptionDictionary);

    BOOL airplayActived = NO;
    if (currentRouteDescriptionDictionary)
    {
        CFArrayRef outputs = CFDictionaryGetValue(currentRouteDescriptionDictionary, kAudioSession_AudioRouteKey_Outputs);
        if (outputs != NULL && CFArrayGetCount(outputs) > 0)
        {
            CFDictionaryRef currentOutput = CFArrayGetValueAtIndex(outputs, 0);
            // Get the output type (will show airplay / hdmi etc.)
            CFStringRef outputType = CFDictionaryGetValue(currentOutput, kAudioSession_AudioRouteKey_Type);
            airplayActived = (CFStringCompare(outputType, kAudioSessionOutputRoute_AirPlay, 0) == kCFCompareEqualTo);
        }
        CFRelease(currentRouteDescriptionDictionary);
    }
    return airplayActived;
}
Set Category

Next, set the AudioSession's category. With the AudioSession class, call the following interface:

extern OSStatus AudioSessionSetProperty(AudioSessionPropertyID inID,
                                        UInt32 inDataSize,
                                        const void *inData);

If you want to do playback, for example, write:

UInt32 sessionCategory = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(sessionCategory),
                        &sessionCategory);

With AVAudioSession, call the following interface:

/* set session category */
- (BOOL)setCategory:(NSString *)category error:(NSError **)outError;

/* set session category with options */
- (BOOL)setCategory:(NSString *)category
        withOptions:(AVAudioSessionCategoryOptions)options
              error:(NSError **)outError NS_AVAILABLE_IOS(6_0);

As for which Category to choose, each one is described in the official documentation, so I will not go into detail here; set the Category according to the features you need. A usage example follows the listings below.

// AudioSession's AudioSessionCategory enumeration
enum {
    kAudioSessionCategory_AmbientSound     = 'ambi',
    kAudioSessionCategory_SoloAmbientSound = 'solo',
    kAudioSessionCategory_MediaPlayback    = 'medi',
    kAudioSessionCategory_RecordAudio      = 'reca',
    kAudioSessionCategory_PlayAndRecord    = 'plar',
    kAudioSessionCategory_AudioProcessing  = 'proc'
};

// AVAudioSession's category strings
/* Use this category for background sounds such as rain, car engine noise, etc. Mixes with other music. */
AVF_EXPORT NSString * const AVAudioSessionCategoryAmbient;
/* Use this category for background sounds. Other music will stop playing. */
AVF_EXPORT NSString * const AVAudioSessionCategorySoloAmbient;
/* Use this category for music tracks. */
AVF_EXPORT NSString * const AVAudioSessionCategoryPlayback;
/* Use this category when recording audio. */
AVF_EXPORT NSString * const AVAudioSessionCategoryRecord;
/* Use this category when recording and playing back audio. */
AVF_EXPORT NSString * const AVAudioSessionCategoryPlayAndRecord;
/* Use this category when using a hardware codec or signal processor while not playing or recording audio. */
AVF_EXPORT NSString * const AVAudioSessionCategoryAudioProcessing;
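For example, a playback-only app might configure itself like this (an AVAudioSession sketch; error handling abbreviated):

NSError *error = nil;
if (![[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&error])
{
    NSLog(@"Failed to set category: %@", error);
}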
Activate

With the Category set, you can activate the AudioSession. The activation methods are as follows:

// AudioSession activation methods
extern OSStatus AudioSessionSetActive(Boolean active);
extern OSStatus AudioSessionSetActiveWithFlags(Boolean active, UInt32 inFlags);

// AVAudioSession activation methods
- (BOOL)setActive:(BOOL)active error:(NSError **)outError;
- (BOOL)setActive:(BOOL)active withFlags:(NSInteger)flags error:(NSError **)outError NS_DEPRECATED_IOS(4_0, 6_0);
- (BOOL)setActive:(BOOL)active withOptions:(AVAudioSessionSetActiveOptions)options error:(NSError **)outError NS_AVAILABLE_IOS(6_0);

After calling the activation method, you must check whether activation actually succeeded; failure is common. For example, if another app is playing in the foreground and your app tries to activate its AudioSession from the background, the call will fail.
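For example (a quick sketch using the C API):

OSStatus status = AudioSessionSetActive(true);
if (status != noErr)
{
    // Activation failed (for example, a foreground app currently owns audio);
    // update the UI instead of starting playback.
}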

Generally, we can use the first method to activate and deactivate the AudioSession. However, if you are building an instant voice-messaging app (such as Yixin), note that when deactivating the AudioSession you must use the second method and pass kAudioSessionSetActiveFlag_NotifyOthersOnDeactivation (for AVAudioSession, pass AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation in the options parameter). When your app deactivates its AudioSession, the system notifies the previously interrupted playback app that the interruption has ended (via the interruption callback mentioned above). If your app passed NotifyOthersOnDeactivation when deactivating, the other app receives an extra parameter in its interruption-ended callback, kAudioSessionInterruptionType_ShouldResume; otherwise it receives ShouldNotResume (AVAudioSessionInterruptionOptionShouldResume in the AVAudioSession world). The other app can use that value to decide whether to resume playback.

The general flow is as follows (see the sketch after this list):

1. A music app, A, is playing;
2. The user opens your app to play a voice message, and your AudioSession becomes active;
3. Music app A is interrupted and receives the InterruptBegin event;
4. After the voice message finishes playing, your AudioSession is deactivated with the NotifyOthersOnDeactivation parameter;
5. Music app A receives the InterruptEnd event and checks the Resume parameter: if it is ShouldResume, it resumes playback; if it is ShouldNotResume, it stays silent.
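A minimal sketch of step 4 under both APIs (error handling abbreviated):

// C API:
AudioSessionSetActiveWithFlags(false, kAudioSessionSetActiveFlag_NotifyOthersOnDeactivation);

// AVAudioSession (iOS 6 and later):
NSError *error = nil;
[[AVAudioSession sharedInstance] setActive:NO
                               withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                     error:&error];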

The official documentation illustrates this flow with a diagram.

However, some voice-communication apps and some music apps ignore NotifyOthersOnDeactivation and ShouldResume, so we often receive user feedback like this:

"Your app won't resume playing after I listen to a clip in the XX voice app, but the XX music app resumes fine."

OK, I'm just venting; ignore me.

Supplement (updated 7/19):

Even if you have already called the AudioSessionInitialize method, in certain cases the AudioSession can become invalid after an interruption, and you need to call AudioSessionInitialize again to regenerate the AudioSession. Otherwise AudioSessionSetActive returns 560557673 (the same applies to the other AudioSession methods; the AudioSession must be initialized before any of them is called), which converted to a string is "!ini", i.e. kAudioSessionNotInitialized. This happens readily on iOS 5.1.x, and occasionally on iOS 6.x and 7.x (the exact cause is unknown; it seems related to calling AudioOutputUnitStop directly at the moment of interruption).

Therefore the return code of AudioSessionSetActive should be checked; if it is the error above, the AudioSession needs to be re-initialized.
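For illustration, the retry logic might look like this (MyInterruptionListener is the hypothetical listener from the manager sketch earlier):

OSStatus status = AudioSessionSetActive(true);
if (status == kAudioSessionNotInitialized) // '!ini', i.e. 560557673
{
    // The session became invalid during the interruption; re-initialize and retry.
    AudioSessionInitialize(NULL, NULL, MyInterruptionListener, NULL);
    status = AudioSessionSetActive(true);
}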

Attached is a helper that converts an OSStatus to a string:

#import <Foundation/Foundation.h>

NSString *OSStatusToString(OSStatus status)
{
    size_t len = sizeof(UInt32);
    long addr = (unsigned long)&status;
    char cstring[5];

    // Trim leading zero bytes so short codes print cleanly.
    len = (status >> 24) == 0 ? len - 1 : len;
    len = (status >> 16) == 0 ? len - 1 : len;
    len = (status >>  8) == 0 ? len - 1 : len;
    len = (status >>  0) == 0 ? len - 1 : len;
    addr += (4 - len);

    status = EndianU32_NtoB(status); // strings are big endian
    strncpy(cstring, (char *)addr, len);
    cstring[len] = 0;

    return [NSString stringWithCString:(char *)cstring encoding:NSMacOSRomanStringEncoding];
}
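For example, OSStatusToString(560557673) returns @"!ini", matching the kAudioSessionNotInitialized error mentioned above.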
                        
Interruption handling

Once the AudioSession is active, audio can be played. The following describes how to handle interruptions. As mentioned earlier, on iOS 5 interruption callbacks should be managed centrally, posting custom notifications when an interruption begins and ends.

With AudioSession, the interruption callback should first read kAudioSessionProperty_InterruptionType and then post a custom notification carrying the corresponding parameters.

static void MyAudioSessionInterruptionListener(void *inClientData, UInt32 inInterruptionState)
{
    AudioSessionInterruptionType interruptionType = kAudioSessionInterruptionType_ShouldNotResume;
    UInt32 interruptionTypeSize = sizeof(interruptionType);
    AudioSessionGetProperty(kAudioSessionProperty_InterruptionType,
                            &interruptionTypeSize,
                            &interruptionType);

    NSDictionary *userInfo = @{MyAudioInterruptionStateKey: @(inInterruptionState),
                               MyAudioInterruptionTypeKey: @(interruptionType)};
    [[NSNotificationCenter defaultCenter] postNotificationName:MyAudioInterruptionNotification
                                                        object:nil
                                                      userInfo:userInfo];
}

A handler for that notification might look like this (note the ShouldResume parameter):

- (void)interruptionNotificationReceived:(NSNotification *)notification
{
    UInt32 interruptionState = [notification.userInfo[MyAudioInterruptionStateKey] unsignedIntValue];
    AudioSessionInterruptionType interruptionType = [notification.userInfo[MyAudioInterruptionTypeKey] unsignedIntValue];
    [self handleAudioSessionInterruptionWithState:interruptionState type:interruptionType];
}

- (void)handleAudioSessionInterruptionWithState:(UInt32)interruptionState type:(AudioSessionInterruptionType)interruptionType
{
    if (interruptionState == kAudioSessionBeginInterruption)
    {
        // control UI, pause playback
    }
    else if (interruptionState == kAudioSessionEndInterruption)
    {
        if (interruptionType == kAudioSessionInterruptionType_ShouldResume)
        {
            OSStatus status = AudioSessionSetActive(true);
            if (status == noErr)
            {
                // control UI, resume playback
            }
        }
    }
}
Summary

That wraps up the topic of AudioSession (writing all this is really tiring...). To summarize:

• If your minimum supported version is iOS 5, you can use AudioSession or consider AVAudioSession; either way you need one class to manage all AudioSession callbacks centrally and post corresponding custom notifications when the callbacks arrive;
• If your minimum supported version is iOS 6 or later, use AVAudioSession; no central management is needed, just observe AVAudioSession's notifications;
• Choose the Category appropriate to your application scenario;
• When deactivating, decide based on your app's scenario whether to pass the NotifyOthersOnDeactivation parameter;
• Pay attention to ShouldResume in the InterruptEnd event.

Sample Code

I have put the AudioSession management class I wrote here; if you need to support iOS 5, you can use it.

Next article

The next article describes how to use AudioFileStreamer to extract audio file format information and separate audio frames.

References

AudioSession
