Return to Step 3 and continue until playback ends.

As the preceding steps show, AudioQueue playback is a typical producer-consumer problem. The producer, AudioFileStream or AudioFile, fills audio data into the Buffers in AudioQueue's buffer queue, where a filled Buffer waits to be consumed. AudioQueue, as the consumer, drains the Buffers in the queue, and its callback on another thread notifies the producer that a Buffer has been consumed so that production can continue. Consequently, while playing audio you will inevitably run into multi-thread synchronization, semaphore usage, and deadlock avoidance.
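The producer-consumer pattern just described can be sketched with plain pthreads. This is an illustrative model only, not actual AudioQueue code: the buffer counts, names, and the two thread functions are invented for the sketch, with the consumer's hand-back of a buffer playing the role of AudioQueueOutputCallback.

```c
#include <pthread.h>
#include <assert.h>

#define BUFFER_COUNT 3   /* reusable buffer pool, like AudioQueue's Buffers */
#define TOTAL_CHUNKS 20  /* number of "audio chunks" to play */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int free_buffers   = BUFFER_COUNT; /* empty buffers the producer may fill */
static int filled_buffers = 0;            /* enqueued buffers awaiting playback */
static int played = 0;

/* Producer (the AudioFileStream/AudioFile side): fill and enqueue buffers,
 * blocking while every buffer is already enqueued. */
static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < TOTAL_CHUNKS; i++) {
        pthread_mutex_lock(&lock);
        while (free_buffers == 0)          /* all buffers in flight: wait */
            pthread_cond_wait(&cond, &lock);
        free_buffers--;
        filled_buffers++;                  /* models AudioQueueEnqueueBuffer */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Consumer (the AudioQueue side): play a buffer, then hand it back to the
 * pool, as the real AudioQueueOutputCallback would. */
static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < TOTAL_CHUNKS; i++) {
        pthread_mutex_lock(&lock);
        while (filled_buffers == 0)        /* nothing enqueued yet: wait */
            pthread_cond_wait(&cond, &lock);
        filled_buffers--;
        free_buffers++;                    /* buffer is reusable again */
        played++;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}
```

Because `free_buffers + filled_buffers` is always `BUFFER_COUNT`, the producer and consumer can never both be waiting at once, so a single condition variable with `pthread_cond_signal` is sufficient here.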
Now that the workflow is clear, let's look back at AudioQueue's methods. Most of them are very easy to understand, but a few need explanation.
Create AudioQueue

Use one of the following methods to create an AudioQueue instance:
OSStatus AudioQueueNewOutput(const AudioStreamBasicDescription * inFormat,
                             AudioQueueOutputCallback inCallbackProc,
                             void * inUserData,
                             CFRunLoopRef inCallbackRunLoop,
                             CFStringRef inCallbackRunLoopMode,
                             UInt32 inFlags,
                             AudioQueueRef * outAQ);

OSStatus AudioQueueNewOutputWithDispatchQueue(AudioQueueRef * outAQ,
                                              const AudioStreamBasicDescription * inFormat,
                                              UInt32 inFlags,
                                              dispatch_queue_t inCallbackDispatchQueue,
                                              AudioQueueOutputCallbackBlock inCallbackBlock);
First, let's look at the first method:
The first parameter, inFormat, indicates the format of the audio data to be played. It is an AudioStreamBasicDescription, i.e. the format information parsed out by AudioFileStream or AudioFile;
The second parameter, inCallbackProc, is the AudioQueueOutputCallback invoked after a Buffer has been used (played);
The third parameter, inUserData, is the context object;
The fourth parameter, inCallbackRunLoop, is the RunLoop on which the callback is invoked. If NULL is passed, the callback is invoked on AudioQueue's internal RunLoop, so passing NULL is generally fine;
The fifth parameter, inCallbackRunLoopMode, is the RunLoop mode. Passing NULL is equivalent to kCFRunLoopCommonModes, so NULL is generally fine here too;
The sixth parameter, inFlags, is a reserved field with no effect at present; pass 0;
The seventh parameter, outAQ, returns the created AudioQueue instance;
The return value indicates whether creation succeeded (OSStatus == noErr).
The second method replaces the RunLoop with a dispatch queue; the other parameters are the same.
Buffer-related methods

1. Create a Buffer
OSStatus AudioQueueAllocateBuffer(AudioQueueRef inAQ,
                                  UInt32 inBufferByteSize,
                                  AudioQueueBufferRef * outBuffer);

OSStatus AudioQueueAllocateBufferWithPacketDescriptions(AudioQueueRef inAQ,
                                                        UInt32 inBufferByteSize,
                                                        UInt32 inNumberPacketDescriptions,
                                                        AudioQueueBufferRef * outBuffer);
The first method takes the AudioQueue instance and the Buffer size, and returns the created Buffer;
The second method can specify the number of PacketDescriptions in the generated Buffer;
2. Destroy Buffer
OSStatus AudioQueueFreeBuffer(AudioQueueRef inAQ, AudioQueueBufferRef inBuffer);
Note that this method is needed only when a specific Buffer must be destroyed (the Dispose method automatically destroys all Buffers), and it can be used only while the AudioQueue is not processing data. As a result, this method is rarely used.
3. Insert Buffer
OSStatus AudioQueueEnqueueBuffer(AudioQueueRef inAQ,
                                 AudioQueueBufferRef inBuffer,
                                 UInt32 inNumPacketDescs,
                                 const AudioStreamPacketDescription * inPacketDescs);
There are two Enqueue methods. The first is given above; the second, AudioQueueEnqueueBufferWithParameters, allows more control over the enqueued Buffer. I have not studied the second method in detail, and the first generally meets the requirements, so only the first is described here:
This Enqueue method takes the AudioQueue instance and the Buffer to enqueue. For inNumPacketDescs and inPacketDescs, choose the arguments according to your data. The documentation says these two parameters are mainly used for VBR data, but as mentioned before, AudioFileStream and AudioFile provide PacketDescriptions even for CBR data, so that statement cannot be taken as a rule. Simply put: if you have PacketDescriptions, pass them; if not, pass NULL and 0. You don't have to worry about whether the data is VBR.
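To make that rule concrete, here is a minimal C sketch. The PacketDescription struct and the packet_desc_count helper are hypothetical stand-ins for illustration, not AudioToolbox types: the point is only that the count you pass should follow whether the parser actually produced descriptions.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for AudioStreamPacketDescription (illustration only). */
typedef struct {
    long long mStartOffset;
    unsigned  mVariableFramesInPacket;
    unsigned  mDataByteSize;
} PacketDescription;

/* Pick the inNumPacketDescs value for an Enqueue call: pass the parsed count
 * whenever the parser produced descriptions, otherwise 0 (with NULL descs). */
static unsigned packet_desc_count(const PacketDescription *descs,
                                  unsigned parsedCount) {
    return descs != NULL ? parsedCount : 0;
}
```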
Playback Control

1. Start Playing
OSStatus AudioQueueStart(AudioQueueRef inAQ, const AudioTimeStamp * inStartTime);
The second parameter can be used to control when playback starts; to start playing immediately, just pass NULL.
2. Decode data
OSStatus AudioQueuePrime(AudioQueueRef inAQ,
                         UInt32 inNumberOfFramesToPrepare,
                         UInt32 * outNumberOfFramesPrepared);
This method is not commonly used, because calling AudioQueueStart starts decoding automatically (if needed). The parameters specify the number of frames to decode and return the number of frames actually decoded;
3. Pause playback
OSStatus AudioQueuePause(AudioQueueRef inAQ);
Note that playback pauses immediately once this method is called, which means the AudioQueueOutputCallback pauses as well. Pay special attention to thread scheduling here, to prevent a thread from waiting indefinitely.
4. Stop playing
OSStatus AudioQueueStop(AudioQueueRef inAQ, Boolean inImmediate);
If the second parameter is true, playback stops immediately (synchronous); if it is false, the AudioQueue plays all enqueued Buffers and then stops (asynchronous). Choose the parameter appropriately for your use case.
5. Flush
OSStatus AudioQueueFlush(AudioQueueRef inAQ);
After this call, all enqueued Buffers are played and then the decoder state is reset, so that the current decoder state cannot affect the next piece of audio (for example when switching songs). Calling it alongside AudioQueueStop(AQ, false) has no additional effect, because Stop with false already does the same thing.
6. Reset
OSStatus AudioQueueReset(AudioQueueRef inAQ);
Resetting the AudioQueue clears all enqueued Buffers and triggers the AudioQueueOutputCallback. Calling AudioQueueStop triggers this method as well. Calling it directly is generally done during a seek, to clear residual Buffers (in that case AudioQueueStop is not needed).
7. Obtain the playback time
OSStatus AudioQueueGetCurrentTime(AudioQueueRef inAQ,
                                  AudioQueueTimelineRef inTimeline,
                                  AudioTimeStamp * outTimeStamp,
                                  Boolean * outTimelineDiscontinuity);
Among the parameters, the second and fourth are related to AudioQueueTimeline, which we do not use here, so pass NULL for them. After the call, the playback time can be obtained from the mSampleTime field of the returned AudioTimeStamp structure. The calculation is as follows:
AudioTimeStamp time = ...; // obtained via the AudioQueueGetCurrentTime method
NSTimeInterval playedTime = time.mSampleTime / _format.mSampleRate;
Note the following when using this method to obtain the time:
1. First, the playback time here means the actual playback time, which is different from playback progress in the usual sense. For example: you play for 8 seconds, then drag the slider to seek to the 20-second mark, then play for another 3 seconds. The playback progress people usually mean is 23 seconds, but GetCurrentTime reports the actual playback time, which is 11 seconds. Therefore the timingOffset must be saved on every seek:
AudioTimeStamp time = ...; // obtained via the AudioQueueGetCurrentTime method
NSTimeInterval playedTime = time.mSampleTime / _format.mSampleRate; // playback time at the moment of seek
NSTimeInterval seekTime = ...; // the position to seek to
NSTimeInterval timingOffset = seekTime - playedTime;
The playback progress after seek needs to be calculated based on timingOffset and playedTime:
NSTimeInterval progress = timingOffset + playedTime;
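The bookkeeping above can be condensed into a small, self-contained C sketch. SeekState, on_seek, and progress are illustrative names invented here, and plain doubles stand in for AudioTimeStamp/NSTimeInterval values:

```c
#include <assert.h>

/* Illustrative seek bookkeeping: timing_offset is updated on each seek,
 * and progress is always timing_offset + the actual played time reported
 * by GetCurrentTime. */
typedef struct {
    double timing_offset; /* seekTime - playedTime at the last seek */
} SeekState;

/* Call at the moment of a seek, with the target position and the actual
 * playback time at that instant. */
static void on_seek(SeekState *s, double seek_time, double played_time) {
    s->timing_offset = seek_time - played_time;
}

/* Playback progress to show in the UI. */
static double progress(const SeekState *s, double played_time) {
    return s->timing_offset + played_time;
}
```

Walking through the article's example: after 8 seconds of playback, progress is 8; seeking to 20 sets timing_offset to 12; after 3 more seconds the actual playback time is 11, so progress is 12 + 11 = 23.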
2. Note that GetCurrentTime sometimes fails, so it is best to save the last successfully obtained playback time and return that saved value when a call fails.
Destroy AudioQueue
OSStatus AudioQueueDispose(AudioQueueRef inAQ, Boolean inImmediate);
Destroying the AudioQueue clears all Buffers as well. The meaning and usage of the second parameter are the same as in the AudioQueueStop method.
One thing to watch out for when using this method: there is actually a short gap between the moment AudioQueueStart is called and the moment the AudioQueue actually starts running. If AudioQueueDispose is called within that gap, the program hangs. I discovered this problem while using AudioStreamer; it occurs on iOS 6 (iOS 7 had not been released when I found the problem, so I did not test it there). The cause is that when the audio reaches EOF, AudioStreamer enters its cleanup phase, which flushes all data and then calls the Dispose method. So when an audio file contains very little data, EOF (and therefore cleanup) may be reached right as AudioQueueStart is called, and the problem above occurs.
To avoid this problem, the first approach is to schedule threads so that the Dispose call always happens after at least one RunLoop pass (that is, after at least one Buffer has been played successfully). The second approach is to listen to the kAudioQueueProperty_IsRunning property: it becomes 1 after the AudioQueue's Start method is called and 0 after the queue stops, so after calling Start, Dispose should only be called once IsRunning has become 1.
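The second workaround can be modeled with plain pthreads. This is an illustrative sketch, not AudioToolbox code: is_running_listener stands in for the kAudioQueueProperty_IsRunning property-listener callback, and the short sleep stands in for the gap between Start and the queue actually running.

```c
#include <pthread.h>
#include <unistd.h>
#include <assert.h>

static pthread_mutex_t run_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  run_cond = PTHREAD_COND_INITIALIZER;
static int is_running = 0;

/* Models the kAudioQueueProperty_IsRunning listener callback: the queue
 * has actually started, so wake anyone waiting to Dispose. */
static void is_running_listener(void) {
    pthread_mutex_lock(&run_lock);
    is_running = 1;
    pthread_cond_broadcast(&run_cond);
    pthread_mutex_unlock(&run_lock);
}

/* Block until IsRunning == 1; only then is it safe to call Dispose. */
static void wait_until_running(void) {
    pthread_mutex_lock(&run_lock);
    while (!is_running)
        pthread_cond_wait(&run_cond, &run_lock);
    pthread_mutex_unlock(&run_lock);
}

/* The queue "starts" asynchronously a moment after Start is called. */
static void *queue_start_thread(void *arg) {
    (void)arg;
    usleep(10000); /* the short gap between Start and actually running */
    is_running_listener();
    return NULL;
}
```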
Properties and Parameters

Like other AudioToolbox classes, AudioQueue has many parameters and properties that can be set, retrieved, and listened to. The related methods are listed below; they need no further explanation here:
// parameter-related methods
AudioQueueGetParameter
AudioQueueSetParameter

// property-related methods
AudioQueueGetProperty
AudioQueueSetProperty

// methods for listening to property changes
AudioQueueAddPropertyListener
AudioQueueRemovePropertyListener
Attribute and parameter list:
// property list
enum { // typedef UInt32 AudioQueuePropertyID
    kAudioQueueProperty_IsRunning               = 'aqrn', // value is UInt32
    kAudioQueueDeviceProperty_SampleRate        = 'aqsr', // value is Float64
    kAudioQueueDeviceProperty_NumberChannels    = 'aqdc', // value is UInt32
    kAudioQueueProperty_CurrentDevice           = 'aqcd', // value is CFStringRef
    kAudioQueueProperty_MagicCookie             = 'aqmc', // value is void*
    kAudioQueueProperty_MaximumOutputPacketSize = 'xops', // value is UInt32
    kAudioQueueProperty_StreamDescription       = 'aqft', // value is AudioStreamBasicDescription
    kAudioQueueProperty_ChannelLayout           = 'aqcl', // value is AudioChannelLayout
    kAudioQueueProperty_EnableLevelMetering     = 'aqme', // value is UInt32
    kAudioQueueProperty_CurrentLevelMeter       = 'aqmv', // value is array of AudioQueueLevelMeterState, 1 per channel
    kAudioQueueProperty_CurrentLevelMeterDB     = 'aqmd', // value is array of AudioQueueLevelMeterState, 1 per channel
    kAudioQueueProperty_DecodeBufferSizeFrames  = 'dcbf', // value is UInt32
    kAudioQueueProperty_ConverterError          = 'qcve', // value is UInt32
    kAudioQueueProperty_EnableTimePitch         = 'q_tp', // value is UInt32, 0/1
    kAudioQueueProperty_TimePitchAlgorithm      = 'qtpa', // value is UInt32, see values below
    kAudioQueueProperty_TimePitchBypass         = 'qtpb', // value is UInt32, 1 = bypassed
};

// parameter list
enum { // typedef UInt32 AudioQueueParameterID
    kAudioQueueParam_Volume         = 1,
    kAudioQueueParam_PlayRate       = 2,
    kAudioQueueParam_Pitch          = 3,
    kAudioQueueParam_VolumeRampTime = 4,
    kAudioQueueParam_Pan            = 13,
};
Among them, valuable attributes include: