If your minimum deployment target is iOS 6 or later, use AVAudioSession directly. The following uses the C AudioSession API as the example to describe AudioSession-related functionality (unfortunately I still need to support iOS 5.. T-T). If you use AVAudioSession instead, you can find the corresponding methods in its header file; the points you need to pay attention to are the same.
Note: when AVAudioPlayer/AVPlayer is used, you do not need to worry about AudioSession at all; Apple has encapsulated the AudioSession handling. You still need to respond after the music is interrupted, though (for example, after playback pauses, the UI state also needs to change); this should be doable via KVO (I have never tried it, just a guess >_<).
Initializing the AudioSession
To use the C AudioSession API, you first need to call the initialization method:
extern OSStatus AudioSessionInitialize(CFRunLoopRef inRunLoop,
                                       CFStringRef inRunLoopMode,
                                       AudioSessionInterruptionListener inInterruptionListener,
                                       void *inClientData);
The first two parameters are generally set to NULL, which means the AudioSession runs on the main run loop (this does not mean audio processing runs on the main thread — only the AudioSession does). The third parameter takes a function of type AudioSessionInterruptionListener, which serves as the callback when the AudioSession is interrupted. The fourth parameter is the context object to be passed back when the interruption callback fires (i.e. the inClientData parameter of the callback below; it can be understood like the context parameter in UIView animations).
typedef void (*AudioSessionInterruptionListener)(void *inClientData, UInt32 inInterruptionState);
This is just the beginning. There are two problems:
First, AudioSessionInitialize can be executed multiple times, but the AudioSessionInterruptionListener can only be set once. This means the interruption callback must be a static function: once initialization succeeds, all interruptions will call back into that function, and even if you call AudioSessionInitialize again with a different static function as the parameter, interruptions will still call back into the function set the first time.

This scenario is not uncommon. For example, your app needs both song playback and recording; you cannot know which feature the user will use first, so both the playback module and the recording module must call AudioSessionInitialize to register an interruption handler — but only the handler registered first will ever be called. Painful... Therefore, the best way to use AudioSession is to create a dedicated class to manage it: receive the interruption callback in one place, repost it as a custom notification, and let every module that uses the AudioSession observe that notification and react accordingly.
Apple noticed this too, so in AVAudioSession the Initialize method was dropped in favor of the singleton method sharedInstance. On iOS 5, all interruptions still need to be handled by setting a delegate and implementing its callback methods, which has the same problem as above, so when using AVAudioSession on iOS 5, a dedicated class is still needed to manage the AudioSession. From iOS 6 on, Apple finally changed interruptions to notifications.. now that is sensible.
Second, consider the fourth parameter of AudioSessionInitialize, inClientData, which becomes the first parameter of the callback. As mentioned above, the interruption callback is a static function, and the purpose of this parameter is to recover context in the callback. Therefore inClientData must be an object with a long enough lifecycle (assuming you actually need this parameter); if the object has been dealloced, the inClientData received in the callback is a dangling pointer. This again argues for a dedicated class to manage the AudioSession: that class lives as long as the AudioSession itself, so the context can be stored safely in it.
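A minimal sketch of such a dedicated manager class, with made-up names (MyAudioSessionManager, MyInterruptionListener); since the singleton lives for the whole app lifetime, it is safe to use as inClientData:

```objc
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Illustrative static callback; a real implementation would repost a
// custom notification from here (see the interruption handling section).
static void MyInterruptionListener(void *inClientData, UInt32 inInterruptionState)
{
    // forward to the manager instance recovered from inClientData
}

@interface MyAudioSessionManager : NSObject
+ (instancetype)sharedInstance;
@end

@implementation MyAudioSessionManager
+ (instancetype)sharedInstance
{
    static MyAudioSessionManager *instance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        instance = [[MyAudioSessionManager alloc] init];
        // Initialize exactly once; the singleton never dies, so the
        // inClientData pointer can never dangle.
        AudioSessionInitialize(NULL, NULL, MyInterruptionListener,
                               (__bridge void *)instance);
    });
    return instance;
}
@end
```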
Listening for RouteChange events
If you want to implement something like "pause the song when the headphones are unplugged", you need to listen for the RouteChange event:
extern OSStatus AudioSessionAddPropertyListener(AudioSessionPropertyID inID,
                                                AudioSessionPropertyListener inProc,
                                                void *inClientData);

typedef void (*AudioSessionPropertyListener)(void *inClientData,
                                             AudioSessionPropertyID inID,
                                             UInt32 inDataSize,
                                             const void *inData);
Call the method above with kAudioSessionProperty_AudioRouteChange as the AudioSessionPropertyID parameter and the corresponding callback as the AudioSessionPropertyListener parameter; the inClientData parameter works the same as in AudioSessionInitialize.

The callback is again a static function that should be managed centrally. When it fires, cast the inData parameter to CFDictionaryRef and read the value for the kAudioSession_AudioRouteChangeKey_Reason key (it should be a CFNumberRef). Once you have the reason, you can send a custom notification carrying it to other modules (kAudioSessionRouteChangeReason_OldDeviceUnavailable is the reason to use for "headphones unplugged, pause the song").
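A sketch of such a listener, reposting the reason as a custom notification; the notification and key names here are made up for illustration:

```objc
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Made-up names; replace with your own constants.
static NSString *const MyAudioRouteChangeNotification = @"MyAudioRouteChangeNotification";
static NSString *const MyAudioRouteChangeReasonKey = @"MyAudioRouteChangeReasonKey";

static void MyAudioRouteChangeListener(void *inClientData,
                                       AudioSessionPropertyID inID,
                                       UInt32 inDataSize,
                                       const void *inData)
{
    if (inID != kAudioSessionProperty_AudioRouteChange) return;

    // inData is a CFDictionaryRef describing the route change.
    CFDictionaryRef routeChangeDict = (CFDictionaryRef)inData;
    CFNumberRef reasonNumber =
        CFDictionaryGetValue(routeChangeDict,
                             CFSTR(kAudioSession_AudioRouteChangeKey_Reason));
    SInt32 reason = 0;
    CFNumberGetValue(reasonNumber, kCFNumberSInt32Type, &reason);

    [[NSNotificationCenter defaultCenter]
        postNotificationName:MyAudioRouteChangeNotification
                      object:nil
                    userInfo:@{MyAudioRouteChangeReasonKey: @(reason)}];
}

// Registration (e.g. in the manager's init):
// AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
//                                 MyAudioRouteChangeListener, NULL);
```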
// AudioRouteChangeReason enumeration
enum {
    kAudioSessionRouteChangeReason_Unknown                    = 0,
    kAudioSessionRouteChangeReason_NewDeviceAvailable         = 1,
    kAudioSessionRouteChangeReason_OldDeviceUnavailable       = 2,
    kAudioSessionRouteChangeReason_CategoryChange             = 3,
    kAudioSessionRouteChangeReason_Override                   = 4,
    kAudioSessionRouteChangeReason_WakeFromSleep              = 6,
    kAudioSessionRouteChangeReason_NoSuitableRouteForCategory = 7,
    kAudioSessionRouteChangeReason_RouteConfigurationChange   = 8
};
// AVAudioSession AudioRouteChangeReason enumeration
typedef NS_ENUM(NSUInteger, AVAudioSessionRouteChangeReason) {
    AVAudioSessionRouteChangeReasonUnknown                    = 0,
    AVAudioSessionRouteChangeReasonNewDeviceAvailable         = 1,
    AVAudioSessionRouteChangeReasonOldDeviceUnavailable       = 2,
    AVAudioSessionRouteChangeReasonCategoryChange             = 3,
    AVAudioSessionRouteChangeReasonOverride                   = 4,
    AVAudioSessionRouteChangeReasonWakeFromSleep              = 6,
    AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory = 7,
    AVAudioSessionRouteChangeReasonRouteConfigurationChange NS_ENUM_AVAILABLE_IOS(7_0) = 8
};
Note: on iOS 5, AVAudioSessionDelegate does not define any route-change methods, so even if you use AVAudioSession you still need the C listener above to implement this. On iOS 6 you can directly observe AVAudioSession's notification.
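For iOS 6+, a sketch of the notification-based approach (the routeChanged: selector name is my own):

```objc
#import <AVFoundation/AVFoundation.h>

// Registration (e.g. in init):
//   [[NSNotificationCenter defaultCenter] addObserver:self
//                                            selector:@selector(routeChanged:)
//                                                name:AVAudioSessionRouteChangeNotification
//                                              object:nil];

- (void)routeChanged:(NSNotification *)notification
{
    AVAudioSessionRouteChangeReason reason =
        [notification.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable)
    {
        // Headphones were unplugged (or another output disappeared):
        // pause playback here.
    }
}
```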
Here are implementations of two related methods, both based on the C AudioSession API (AVAudioSession users can look up the equivalents).

1. Determine whether headphones are plugged in:
+ (BOOL)usingHeadset
{
#if TARGET_IPHONE_SIMULATOR
    return NO;
#endif
    CFStringRef route;
    UInt32 propertySize = sizeof(CFStringRef);
    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &propertySize, &route);

    BOOL hasHeadset = NO;
    if ((route == NULL) || (CFStringGetLength(route) == 0))
    {
        // Silent Mode
    }
    else
    {
        /* Known values of route:
         * Headset
         * Headphone
         * Speaker
         * SpeakerAndMicrophone
         * HeadphonesAndMicrophone
         * HeadsetInOut
         * ReceiverAndMicrophone
         * Lineout
         */
        NSString *routeStr = (__bridge NSString *)route;
        NSRange headphoneRange = [routeStr rangeOfString:@"Headphone"];
        NSRange headsetRange = [routeStr rangeOfString:@"Headset"];
        if (headphoneRange.location != NSNotFound)
        {
            hasHeadset = YES;
        }
        else if (headsetRange.location != NSNotFound)
        {
            hasHeadset = YES;
        }
    }

    if (route)
    {
        CFRelease(route);
    }
    return hasHeadset;
}
2. Determine whether AirPlay is active (from StackOverflow):
+ (BOOL)isAirplayActived
{
    CFDictionaryRef currentRouteDescriptionDictionary = nil;
    UInt32 dataSize = sizeof(currentRouteDescriptionDictionary);
    AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription,
                            &dataSize,
                            &currentRouteDescriptionDictionary);

    BOOL airplayActived = NO;
    if (currentRouteDescriptionDictionary)
    {
        CFArrayRef outputs = CFDictionaryGetValue(currentRouteDescriptionDictionary,
                                                  kAudioSession_AudioRouteKey_Outputs);
        if (outputs != NULL && CFArrayGetCount(outputs) > 0)
        {
            CFDictionaryRef currentOutput = CFArrayGetValueAtIndex(outputs, 0);
            // Get the output type (will show airplay / hdmi etc.)
            CFStringRef outputType = CFDictionaryGetValue(currentOutput,
                                                          kAudioSession_AudioRouteKey_Type);
            airplayActived = (CFStringCompare(outputType,
                                              kAudioSessionOutputRoute_AirPlay,
                                              0) == kCFCompareEqualTo);
        }
        CFRelease(currentRouteDescriptionDictionary);
    }
    return airplayActived;
}
Setting the Category
The next step is to set the AudioSession Category. With the C AudioSession API, call the following interface:
extern OSStatus AudioSessionSetProperty(AudioSessionPropertyID inID,
                                        UInt32 inDataSize,
                                        const void *inData);
For music playback, for example, run the following code:
UInt32 sessionCategory = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(sessionCategory),
                        &sessionCategory);
With AVAudioSession, call the following interface:
/* set session category */
- (BOOL)setCategory:(NSString *)category error:(NSError **)outError;

/* set session category with options */
- (BOOL)setCategory:(NSString *)category withOptions:(AVAudioSessionCategoryOptions)options error:(NSError **)outError NS_AVAILABLE_IOS(6_0);
As for which Category to choose, each one is described in the official documentation, so I will not go into detail here; set the Category according to the features your app needs.
// AudioSession category enumeration
enum {
    kAudioSessionCategory_AmbientSound     = 'ambi',
    kAudioSessionCategory_SoloAmbientSound = 'solo',
    kAudioSessionCategory_MediaPlayback    = 'medi',
    kAudioSessionCategory_RecordAudio      = 'reca',
    kAudioSessionCategory_PlayAndRecord    = 'plar',
    kAudioSessionCategory_AudioProcessing  = 'proc'
};
// AVAudioSession category strings
/* Use this category for background sounds such as rain, car engine noise, etc.
   Mixes with other music. */
AVF_EXPORT NSString *const AVAudioSessionCategoryAmbient;

/* Use this category for background sounds. Other music will stop playing. */
AVF_EXPORT NSString *const AVAudioSessionCategorySoloAmbient;

/* Use this category for music tracks. */
AVF_EXPORT NSString *const AVAudioSessionCategoryPlayback;

/* Use this category when recording audio. */
AVF_EXPORT NSString *const AVAudioSessionCategoryRecord;

/* Use this category when recording and playing back audio. */
AVF_EXPORT NSString *const AVAudioSessionCategoryPlayAndRecord;

/* Use this category when using a hardware codec or signal processor while
   not playing or recording audio. */
AVF_EXPORT NSString *const AVAudioSessionCategoryAudioProcessing;
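As a sketch, the AVAudioSession equivalent of the MediaPlayback setting above, with the returned error actually checked (how you log or surface the error is up to the app):

```objc
#import <AVFoundation/AVFoundation.h>

// Set the Playback category on the shared session and check for failure.
NSError *error = nil;
if (![[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                            error:&error])
{
    NSLog(@"setCategory failed: %@", error);
}
```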
Activation
With the Category set, you can activate the AudioSession. The activation methods are as follows:
// AudioSession activation
extern OSStatus AudioSessionSetActive(Boolean active);
extern OSStatus AudioSessionSetActiveWithFlags(Boolean active, UInt32 inFlags);

// AVAudioSession activation
- (BOOL)setActive:(BOOL)active error:(NSError **)outError;
- (BOOL)setActive:(BOOL)active withFlags:(NSInteger)flags error:(NSError **)outError NS_DEPRECATED_IOS(4_0, 6_0);
- (BOOL)setActive:(BOOL)active withOptions:(AVAudioSessionSetActiveOptions)options error:(NSError **)outError NS_AVAILABLE_IOS(6_0);
After calling an activation method, you must check whether it succeeded — activation failures are common. For example, if a foreground app is playing audio and your app is in the background and tries to activate its AudioSession, the call will fail.
Generally we can just use the first method to activate and deactivate the AudioSession. But if you are building an instant voice-messaging app (Yixin-style), note that when deactivating the AudioSession you must use the second method and pass kAudioSessionSetActiveFlag_NotifyOthersOnDeactivation (for AVAudioSession, pass AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation in the options parameter). When your app deactivates its AudioSession, the system notifies the previously interrupted playback app that the interruption has ended (via the interruption callback mentioned above). If your app passed NotifyOthersOnDeactivation when deactivating, the other app additionally receives kAudioSessionInterruptionType_ShouldResume with the interruption-ended callback; otherwise it receives ShouldNotResume (AVAudioSessionInterruptionOptionShouldResume on the AVAudioSession side). Based on that value, the other app can decide whether to resume playback.
The general process is as follows:
- Music app A is playing;
- The user opens your app and plays a voice message; your AudioSession becomes active;
- Music app A is interrupted and receives the InterruptBegin callback;
- When the voice message finishes, your AudioSession is deactivated with the NotifyOthersOnDeactivation parameter;
- Music app A receives the InterruptEnd callback and checks the resume parameter: ShouldResume means it may resume playback, ShouldNotResume means it stays paused;
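A sketch of the deactivation step in that flow, using the C API (the AVAudioSession variant is shown as a comment):

```objc
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Deactivate after the voice message ends, telling the interrupted app
// that it may resume playback.
OSStatus status = AudioSessionSetActiveWithFlags(false,
    kAudioSessionSetActiveFlag_NotifyOthersOnDeactivation);
if (status != noErr)
{
    // handle/log the failure
}

// AVAudioSession (iOS 6+) equivalent:
// NSError *error = nil;
// [[AVAudioSession sharedInstance] setActive:NO
//                               withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
//                                     error:&error];
```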
The official document provides an image to illustrate this phenomenon:
However, some voice-communication apps ignore NotifyOthersOnDeactivation, and some music apps ignore ShouldResume. As a result, we often receive user feedback like this:

"After I listen to a voice message in xx app, your app does not resume playing, but xx music app resumes fine."

Well, I have said my piece; whether anyone listens is another matter.
Supplement (7.19 update):
Even if you have already called AudioSessionInitialize, in some cases the AudioSession can become invalid after an interruption, and you need to call AudioSessionInitialize again to regenerate it. Otherwise AudioSessionSetActive will return 560557673 (the same applies to the other AudioSession methods — they all require an initialized AudioSession), which converted to a string is '!ini', i.e. kAudioSessionNotInitialized. iOS 5.1.x is prone to this, and it occasionally occurs on iOS 6.x and 7.x (the exact cause is unknown; it seems related to calling AudioOutputUnitStop directly during the interruption).
Therefore the error code of AudioSessionSetActive should be checked; if it is the error above, the AudioSession must be re-initialized.
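A minimal sketch of that check, assuming the interruption listener shown later in this article:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Activate; if the session reports '!ini' (kAudioSessionNotInitialized),
// re-initialize and retry once.
OSStatus status = AudioSessionSetActive(true);
if (status == kAudioSessionNotInitialized)
{
    AudioSessionInitialize(NULL, NULL, MyAudioSessionInterruptionListener, NULL);
    status = AudioSessionSetActive(true);
}
```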
Finally, here is a helper that converts an OSStatus to a string:
#import

NSString *OSStatusToString(OSStatus status)
{
    size_t len = sizeof(UInt32);
    long addr = (unsigned long)&status;
    char cstring[5];
    len = (status >> 24) == 0 ? len - 1 : len;
    len = (status >> 16) == 0 ? len - 1 : len;
    len = (status >> 8) == 0 ? len - 1 : len;
    len = (status >> 0) == 0 ? len - 1 : len;
    addr += (4 - len);
    status = EndianU32_NtoB(status); // strings are big endian
    strncpy(cstring, (char *)addr, len);
    cstring[len] = 0;
    return [NSString stringWithCString:(char *)cstring encoding:NSMacOSRomanStringEncoding];
}
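For reference, the same FourCC idea in portable C (the function name is mine): decode a 32-bit OSStatus whose bytes form a readable four-character code, falling back to the decimal number otherwise. The 560557673 from above decodes to '!ini':

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Illustrative sketch: render an OSStatus-style code as a string.
 * If all four bytes are printable ASCII, print them as a quoted FourCC
 * (e.g. 560557673 -> '!ini'); otherwise print the decimal value. */
void fourcc_to_string(int32_t status, char out[16])
{
    uint32_t u = (uint32_t)status;
    unsigned char c[4] = {
        (unsigned char)((u >> 24) & 0xFF),
        (unsigned char)((u >> 16) & 0xFF),
        (unsigned char)((u >>  8) & 0xFF),
        (unsigned char)( u        & 0xFF),
    };
    int printable = 1;
    for (int i = 0; i < 4; i++) {
        if (c[i] < 0x20 || c[i] > 0x7E) { printable = 0; break; }
    }
    if (printable) {
        out[0] = '\'';
        memcpy(out + 1, c, 4);
        out[5] = '\'';
        out[6] = 0;
    } else {
        snprintf(out, 16, "%d", (int)status);
    }
}
```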
Interruption handling
Once the AudioSession is activated, audio can play normally. Now let's look at interruption handling. As mentioned earlier, on iOS 5 interruption callbacks should be managed centrally, and custom notifications should be posted when an interruption begins and ends.

With the C AudioSession API, the interruption callback should first read kAudioSessionProperty_InterruptionType and then post a custom notification carrying the corresponding parameters:
static void MyAudioSessionInterruptionListener(void *inClientData, UInt32 inInterruptionState)
{
    AudioSessionInterruptionType interruptionType = kAudioSessionInterruptionType_ShouldNotResume;
    UInt32 interruptionTypeSize = sizeof(interruptionType);
    AudioSessionGetProperty(kAudioSessionProperty_InterruptionType,
                            &interruptionTypeSize,
                            &interruptionType);

    NSDictionary *userInfo = @{MyAudioInterruptionStateKey: @(inInterruptionState),
                               MyAudioInterruptionTypeKey: @(interruptionType)};

    [[NSNotificationCenter defaultCenter] postNotificationName:MyAudioInterruptionNotification
                                                        object:nil
                                                      userInfo:userInfo];
}
A handler for that notification looks like this (note the ShouldResume parameter):
- (void)interruptionNotificationReceived:(NSNotification *)notification
{
    UInt32 interruptionState =
        [notification.userInfo[MyAudioInterruptionStateKey] unsignedIntValue];
    AudioSessionInterruptionType interruptionType =
        [notification.userInfo[MyAudioInterruptionTypeKey] unsignedIntValue];
    [self handleInterruptionWithState:interruptionState type:interruptionType];
}

- (void)handleInterruptionWithState:(UInt32)interruptionState
                               type:(AudioSessionInterruptionType)interruptionType
{
    if (interruptionState == kAudioSessionBeginInterruption)
    {
        // control UI, pause playback
    }
    else if (interruptionState == kAudioSessionEndInterruption)
    {
        if (interruptionType == kAudioSessionInterruptionType_ShouldResume)
        {
            OSStatus status = AudioSessionSetActive(true);
            if (status == noErr)
            {
                // control UI, resume playback
            }
        }
    }
}
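On iOS 6+, the equivalent can be done by observing AVAudioSessionInterruptionNotification directly; a sketch (the interrupted: selector name is my own):

```objc
#import <AVFoundation/AVFoundation.h>

// Registration (e.g. in init):
//   [[NSNotificationCenter defaultCenter] addObserver:self
//                                            selector:@selector(interrupted:)
//                                                name:AVAudioSessionInterruptionNotification
//                                              object:nil];

- (void)interrupted:(NSNotification *)notification
{
    AVAudioSessionInterruptionType type =
        [notification.userInfo[AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (type == AVAudioSessionInterruptionTypeBegan)
    {
        // control UI, pause playback
    }
    else if (type == AVAudioSessionInterruptionTypeEnded)
    {
        AVAudioSessionInterruptionOptions options =
            [notification.userInfo[AVAudioSessionInterruptionOptionKey] unsignedIntegerValue];
        if (options & AVAudioSessionInterruptionOptionShouldResume)
        {
            // control UI, resume playback
        }
    }
}
```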
Summary
That wraps up the AudioSession topic (writing all this was really tiring..). To summarize:
- If your minimum target is iOS 5, you can use the C AudioSession API or AVAudioSession; either way you need one class that manages all AudioSession callbacks centrally and posts corresponding custom notifications after receiving them;
- If your minimum target is iOS 6 or later, use AVAudioSession; no central management is needed — just observe AVAudioSession's notifications;
- Choose a reasonable Category for your application scenario;
- When deactivating, decide based on your app's scenario whether to pass the NotifyOthersOnDeactivation parameter;
- On the InterruptEnd event, pay attention to ShouldResume.

Sample Code
I have written an AudioSession wrapper class along these lines; if you need to support iOS 5 you can use it.
Coming up
The next article describes how to use AudioFileStreamer to extract audio file format information and separate audio frames, and how to play them with AudioQueue.
References
AudioSession