iOS Audio Playback with AudioQueue (Part 1): Playing Local Music

  • AudioQueue Brief Introduction
  • AudioStreamer Description
  • AudioQueue in Detail
    • How AudioQueue Works
    • AudioQueue Main Interfaces
      • AudioQueueNewOutput
      • AudioQueueAllocateBuffer
      • AudioQueueEnqueueBuffer
      • AudioQueueStart / Pause / Stop / Flush / Reset / Dispose
      • AudioQueueFreeBuffer
      • AudioQueueGetProperty / AudioQueueSetProperty
  • Audio Playback (LocalAudioPlayer)
    • Initialization of the player
    • Play Audio
      • LocalAudioPlayer related properties
      • Read and start parsing audio
      • Parsing Audio Information
        • kAudioFileStreamProperty_DataFormat
        • kAudioFileStreamProperty_FileFormat
        • kAudioFileStreamProperty_AudioDataByteCount
        • kAudioFileStreamProperty_BitRate
        • kAudioFileStreamProperty_DataOffset
        • kAudioFileStreamProperty_AudioDataPacketCount
        • kAudioFileStreamProperty_ReadyToProducePackets
      • Parsing Audio Frames
      • Play audio data
      • Clean up Related resources
  • End

iOS gives you a number of ways to play local music, such as AVAudioPlayer, and these are perfectly competent. Some people may find it strange, then: why take the long way around and use the more complex AudioQueue to play local music? Please keep reading.

AudioQueue Brief Introduction

Here is how Apple's developer documentation describes AudioQueue:

"Audio Queue Services provides a straightforward, low overhead way to record and play audio in iOS and Mac OS X."

AudioQueue official documentation

That is, Audio Queue Services provides a straightforward, low-overhead way to record and play audio on iOS and Mac OS X.

The advantage of playing music with AudioQueue is that it is very lightweight and supports streaming (playing while still downloading). The disadvantage is that developing against it is quite difficult, which is why the network audio library AudioStreamer exists. There is a lot of discussion of AudioQueue online, but few code examples. A project at my company had audio requirements, and although that project's audio playback feature was not written by me, I wanted to study afterwards this, to me, more wonderful and more interesting AudioQueue.

AudioStreamer Description

One of the better-known streaming audio players on iOS is AudioStreamer, which is built on AudioQueue. That library, however, does not support playing local music, and I have always found it strange that the author never added it. While using it I also found that the library still has a few problems. I do not know much about audio yet and cannot discuss it on equal terms with the experts, but I hope that through my own study I can eventually finish a network music library similar to AudioStreamer. Right now that may be just a vision, but at least I will have tried, and I will know whether I am up to it. Work has been busy lately and my knowledge is still lacking, so I do not know when I will get there. For now, let's fill the gap that AudioStreamer leaves and use AudioQueue to play local music.

AudioQueue in Detail

How AudioQueue Works

The following diagram is taken from Apple's official documentation:

This diagram illustrates how AudioQueue works:
1. The user calls the appropriate method to read audio data from disk into an AudioQueue buffer and enqueues the buffer into the audio queue.
2. Through the AudioQueue interface, the app tells the playback device that a buffer already contains data and can be taken and played.
3. When the audio data in a buffer has finished playing, AudioQueue notifies the user that there is now an empty buffer that can be refilled with data.
4. The above steps repeat until all of the data has been played.

At this point many readers will have noticed that AudioQueue is really a textbook application of the producer-consumer model.

AudioQueue Main Interfaces

AudioQueueNewOutput
OSStatus AudioQueueNewOutput(const AudioStreamBasicDescription *inFormat, AudioQueueOutputCallback inCallbackProc, void *inUserData, CFRunLoopRef inCallbackRunLoop, CFStringRef inCallbackRunLoopMode, UInt32 inFlags, AudioQueueRef  _Nullable *outAQ);

This method creates an AudioQueue used for audio output.

The parameters and return value are as follows:
1. inFormat: the data format of the audio that will be played
2. inCallbackProc: the callback used to notify the user when AudioQueue has finished with a buffer, so the user can refill it with audio data
3. inUserData: a user-supplied data pointer that is passed to the callback function
4. inCallbackRunLoop: the run loop on which the callback is invoked; if NULL is passed, the callback runs on the audio queue's own internal thread. Normally NULL is passed.
5. inCallbackRunLoopMode: the run loop mode for the callback; passing NULL is equivalent to kCFRunLoopCommonModes, and NULL is normally passed.
6. outAQ: the reference to the newly created AudioQueue instance

The return value is an OSStatus; if it is noErr, there was no error and the AudioQueue was created successfully.
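As a rough sketch (not the original article's code), a call could look like the following; the callback name MyAQOutputCallback and the helper CreateOutputQueue are illustrative, and #import <AudioToolbox/AudioToolbox.h> is assumed here and in the later sketches:

#import <AudioToolbox/AudioToolbox.h>

// Sketch: the output callback is invoked when a buffer has finished playing and can be refilled
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    // hand the empty buffer back to whoever fills buffers
}

// Sketch: create an output AudioQueue for an already-parsed audio format
static AudioQueueRef CreateOutputQueue(const AudioStreamBasicDescription *asbd, void *userData) {
    AudioQueueRef queue = NULL;
    OSStatus status = AudioQueueNewOutput(asbd,               // inFormat
                                          MyAQOutputCallback, // inCallbackProc
                                          userData,           // inUserData, handed back to the callback
                                          NULL,               // inCallbackRunLoop: NULL = internal thread
                                          NULL,               // inCallbackRunLoopMode: NULL = common modes
                                          0,                  // inFlags, reserved, must be 0
                                          &queue);
    return (status == noErr) ? queue : NULL;
}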

AudioQueueAllocateBuffer
OSStatus AudioQueueAllocateBuffer(AudioQueueRef inAQ, UInt32 inBufferByteSize, AudioQueueBufferRef  _Nullable *outBuffer);

The function of this method is to allocate space for a buffer that will hold audio data.

The parameters and return value are as follows:
1. inAQ: the AudioQueue instance
2. inBufferByteSize: the size of the buffer to allocate
3. outBuffer: the reference to the allocated buffer

The return value is an OSStatus; noErr indicates that the buffer was allocated successfully.
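Continuing the same sketch, allocating a small set of buffers for a queue created as above might look like this (the three-buffer, 128 KB sizing mirrors the macros defined later in this article; the queue variable is assumed to hold the AudioQueueRef):

// Sketch: allocate a fixed set of reusable buffers for the queue
enum { kBufferCount = 3, kBufferSize = 128 * 1024 };
AudioQueueBufferRef buffers[kBufferCount];
for (int i = 0; i < kBufferCount; i++) {
    OSStatus status = AudioQueueAllocateBuffer(queue, kBufferSize, &buffers[i]);
    if (status != noErr) { /* handle the error */ }
}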

AudioQueueEnqueueBuffer
OSStatus AudioQueueEnqueueBuffer(AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, UInt32 inNumPacketDescs, const AudioStreamPacketDescription *inPacketDescs);

This method enqueues an AudioQueueBuffer that has already been filled with data into the AudioQueue.

The parameters and return value are as follows:
1. inAQ: the AudioQueue instance
2. inBuffer: the buffer instance to enqueue
3. inNumPacketDescs: how many packets (frames) of audio data are in the buffer
4. inPacketDescs: information about each packet in the buffer; the user must indicate each packet's offset within the buffer via the mStartOffset field

The return value is an OSStatus; noErr means the buffer was successfully enqueued and is waiting to be played.
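A sketch of filling and enqueuing one buffer might look like this; packetData, packetDataSize, packetCount and packetDescs stand in for values produced by the parser and are not from the original code:

// Sketch: copy parsed audio data into a buffer, then hand it to the queue
AudioQueueBufferRef buffer = buffers[0];
memcpy(buffer->mAudioData, packetData, packetDataSize);   // raw packet bytes from the parser
buffer->mAudioDataByteSize = packetDataSize;              // tell the queue how much of the buffer is valid
OSStatus status = AudioQueueEnqueueBuffer(queue,
                                          buffer,
                                          packetCount,    // number of packets described
                                          packetDescs);   // one AudioStreamPacketDescription per packet
if (status != noErr) { /* handle the error */ }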

AudioQueueStart / Pause / Stop / Flush / Reset / Dispose
OSStatus AudioQueueStart(AudioQueueRef inAQ, const AudioTimeStamp *inStartTime);
OSStatus AudioQueuePause(AudioQueueRef inAQ);
OSStatus AudioQueueStop(AudioQueueRef inAQ, Boolean inImmediate);
OSStatus AudioQueueFlush(AudioQueueRef inAQ);
OSStatus AudioQueueReset(AudioQueueRef inAQ);
OSStatus AudioQueueDispose(AudioQueueRef inAQ, Boolean inImmediate);

As the names imply, the first three methods start, pause, and stop audio playback.

The next two, Flush and Reset, are used to flush and reset the audio queue; flushing ensures that the data already in the queue is played out completely. AudioQueueDispose is used to release all of the AudioQueue's resources.

The parameters and return value are as follows:
1. inAQ: the AudioQueue instance
2. inStartTime: the time at which to start playing audio; pass NULL to start immediately
3. inImmediate: whether to stop audio playback immediately; if so, pass true

Each of these returns an OSStatus indicating whether the operation succeeded.
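Purely to show the call shapes (this is not a sensible sequence to run as-is), a sketch:

// Sketch: the playback-control calls and their arguments
AudioQueueStart(queue, NULL);     // NULL start time = start as soon as possible
AudioQueuePause(queue);           // pause; already-enqueued buffers are kept
AudioQueueStart(queue, NULL);     // resume
AudioQueueFlush(queue);           // make sure everything already enqueued is played out
AudioQueueStop(queue, false);     // false = stop after the queued data finishes playing
AudioQueueReset(queue);           // discard any data still queued
AudioQueueDispose(queue, true);   // true = dispose immediately, releasing all resources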

AudioQueueFreeBuffer
OSStatus AudioQueueFreeBuffer(AudioQueueRef inAQ, AudioQueueBufferRef inBuffer);

This method is used at the end of playback, when cleaning up and releasing the buffers.

AudioQueueGetProperty / AudioQueueSetProperty
OSStatus AudioQueueGetProperty(AudioQueueRef inAQ, AudioQueuePropertyID inID, void *outData, UInt32 *ioDataSize);
OSStatus AudioQueueSetProperty(AudioQueueRef inAQ, AudioQueuePropertyID inID, const void *inData, UInt32 inDataSize);

These are the get/set methods used to read and set an AudioQueue's properties; refer to the notes in the AudioQueue.h header file for the available property IDs.
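For example, a sketch of reading the kAudioQueueProperty_IsRunning property and writing a magic cookie (cookieData and cookieSize are assumed to come from AudioFileStream, as described later in this article):

// Sketch: read a property from the queue, and write one to it
UInt32 isRunning = 0;
UInt32 size = sizeof(isRunning);
AudioQueueGetProperty(queue, kAudioQueueProperty_IsRunning, &isRunning, &size);

AudioQueueSetProperty(queue, kAudioQueueProperty_MagicCookie, cookieData, cookieSize);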

Audio Playback (LocalAudioPlayer)

To play music with AudioQueue you generally need to pair it with AudioFileStream: AudioFileStream is responsible for parsing the audio data, and AudioQueue is responsible for playing the parsed audio data.

This time only the core local playback functionality is implemented, as groundwork for later work; state handling (pause, stop, seek, and so on), error handling, and the like are not dealt with.

Playback is similar to streaming, except that the audio data comes from a local file rather than the network. The following steps are required:
1. Keep reading chunks of data from the file until all of the data has been read
2. Hand the data read from the file to AudioFileStream for parsing
3. When data has been parsed, create the AudioQueue and put the data into AudioQueueBuffers
4. Enqueue the buffers into the AudioQueue and start playing the audio
5. When playback ends, clean up the related resources

Initialization of the player

The player's init method is mainly used to specify which audio file to play, as shown below:

The file is read with the NSFileHandle class. audioInUseLock is an NSLock* used to coordinate with the moment when AudioQueue notifies us that an empty buffer is available for reuse.

We initialize the player when the user taps the play button, and then call the play method to start playback.

[Figure: initializing the player instance]
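Since the original screenshot is unavailable, here is a hedged sketch of what the init method might look like, based only on the description above; the initWithFilePath: name and the property names are assumptions:

// Sketch of the initializer: remember the file to play, prepare the file handle and the lock
- (instancetype)initWithFilePath:(NSString *)filePath {
    self = [super init];
    if (self) {
        _fileHandle = [NSFileHandle fileHandleForReadingAtPath:filePath];   // reads the local audio file
        _audioInUseLock = [[NSLock alloc] init];                            // coordinates buffer reuse
        _isPlaying = NO;
    }
    return self;
}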

Play Audio

Playing audio is divided into the following steps:
1. Read and start parsing audio
2. Parsing Audio Information
3. Parsing Audio Frames
4. Play Audio data
5. Cleanup of related resources

Let's first define a few macros to specify the sizes of some buffers:

#define kNumberOfBuffers 3              // Number of AudioQueueBuffers; 3 is typical
#define kAQBufSize 128 * 1024           // Size of each AudioQueueBuffer
#define kAudioFileBufferSize 2048       // Size of the buffer used when reading the file
#define kMaxPacketDesc 512              // Maximum number of AudioStreamPacketDescriptions
LocalAudioPlayer related properties

The properties defined in LocalAudioPlayer are as follows:
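The original post showed these in a screenshot; here is a hedged sketch of what the class extension might contain, inferred from how the values are used later in this article (all names are assumptions):

@interface LocalAudioPlayer () {
    AudioQueueBufferRef audioQueueBuffers[kNumberOfBuffers];            // the reusable playback buffers
    bool inuse[kNumberOfBuffers];                                       // marks which buffers are enqueued/playing
    AudioStreamPacketDescription packetDescs[kMaxPacketDesc];           // descriptions of the packets in the current buffer
}

@property (nonatomic, strong) NSFileHandle *fileHandle;                 // reads the local audio file
@property (nonatomic, strong) NSLock *audioInUseLock;                   // guards the inuse flags
@property (nonatomic, assign) AudioFileStreamID audioFileStreamID;      // parser instance
@property (nonatomic, assign) AudioQueueRef audioQueue;                 // playback queue
@property (nonatomic, assign) AudioStreamBasicDescription audioFormat;  // format parsed from the file
@property (nonatomic, assign) NSUInteger audioQueueCurrentBufferIndex;      // buffer currently being filled
@property (nonatomic, assign) UInt32 audioQueueCurrentBufferFilledBytes;    // bytes already copied into it
@property (nonatomic, assign) UInt32 audioQueueCurrentBufferPacketCount;    // packets already copied into it
@property (nonatomic, assign) BOOL isPlaying;                           // whether AudioQueueStart has been called

@end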

Read and start parsing audio

We use AudioFileStream to parse the audio information. After the user calls the play method, we first call AudioFileStreamOpen to open the AudioFileStream, as shown below:

[Figure: opening the AudioFileStream]

extern OSStatus AudioFileStreamOpen(void *inClientData, AudioFileStream_PropertyListenerProc inPropertyListenerProc, AudioFileStream_PacketsProc inPacketsProc, AudioFileTypeID inFileTypeHint, AudioFileStreamID *outAudioFileStream);

The parameters of AudioFileStreamOpen are as follows:
1. inClientData: user-specified data passed to the callback functions; here we pass (__bridge LocalAudioPlayer*)self
2. inPropertyListenerProc: the callback invoked when an audio property is parsed
3. inPacketsProc: the callback invoked when audio packets (frames) are parsed
4. inFileTypeHint: a hint about the format of the audio data; if you do not know the format, you can pass 0
5. outAudioFileStream: the AudioFileStreamID instance, which must be saved for later use
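A sketch of that call inside the play method (MyPropertyListenerProc and MyPacketsProc are illustrative names for the two callbacks, not the original code):

// Sketch: open the parser and keep its ID for later calls
AudioFileStreamID streamID = NULL;
OSStatus status = AudioFileStreamOpen((__bridge void *)self,    // handed back to both callbacks
                                      MyPropertyListenerProc,   // invoked when a property is parsed
                                      MyPacketsProc,            // invoked when packets are parsed
                                      0,                        // no file type hint
                                      &streamID);
if (status == noErr) {
    self.audioFileStreamID = streamID;
}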

After a chunk of data has been read, call AudioFileStreamParseBytes to parse it. The prototype is as follows:

OSStatus AudioFileStreamParseBytes(AudioFileStreamID inAudioFileStream, UInt32 inDataByteSize, const void *inData, AudioFileStreamParseFlags inFlags);

The parameters are as follows:
1. inAudioFileStream: the AudioFileStreamID instance returned by AudioFileStreamOpen
2. inDataByteSize: the size in bytes of the data being parsed
3. inData: the data to parse in this call
4. inFlags: the parse flags; the only defined value is kAudioFileStreamParseFlag_Discontinuity = 1, which indicates that the data is discontinuous. We can pass 0 here.

When the file data has all been read, the file can be closed.
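Putting the reading and parsing together, a sketch of the loop might look like this (kAudioFileBufferSize is the macro defined earlier; the property names follow the earlier sketch):

// Sketch: feed the file to AudioFileStream in kAudioFileBufferSize chunks, then close it
NSData *chunk = nil;
do {
    chunk = [self.fileHandle readDataOfLength:kAudioFileBufferSize];   // next chunk from disk
    if (chunk.length > 0) {
        AudioFileStreamParseBytes(self.audioFileStreamID,
                                  (UInt32)chunk.length,
                                  chunk.bytes,
                                  0);                                  // 0 = data is continuous
    }
} while (chunk.length > 0);
[self.fileHandle closeFile];                                           // all data has been read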

Parsing Audio Information

When audio information is parsed, the property listener callback we specified earlier is called, as shown below:

[Figure: parsing audio information]

For each property, AudioFileStreamGetProperty can be called to obtain the corresponding value. The prototype is as follows:

OSStatus AudioFileStreamGetProperty(AudioFileStreamID inAudioFileStream, AudioFileStreamPropertyID inPropertyID, UInt32 *ioPropertyDataSize, void *outPropertyData);

The parameters are as follows:
1. inAudioFileStream: the AudioFileStreamID instance returned by AudioFileStreamOpen
2. inPropertyID: the ID of the property to get; see AudioFileStream.h
3. ioPropertyDataSize: the size of the property value
4. outPropertyData: the memory in which to store the property value

kAudioFileStreamProperty_DataFormat

This property describes the format of the audio data; the value returned is an AudioStreamBasicDescription structure, which needs to be saved for use when creating the AudioQueue.

kAudioFileStreamProperty_FileFormat

This property indicates the format of the audio file, such as MPEG.

kAudioFileStreamProperty_AudioDataByteCount

This property gives the length of the audio data, which can be used together with the bit rate to calculate the audio duration:
Duration = (audio data byte count * 8) / bit rate

kAudioFileStreamProperty_BitRate

This property gives the bit rate of the audio, which can be used to calculate the audio duration as shown above.

kAudioFileStreamProperty_DataOffset

This property indicates the offset of the audio data within the whole audio file:
Total audio file size = offset + audio data byte count

kAudioFileStreamProperty_AudioDataPacketCount

This property indicates how many packets (frames) of audio data the file contains.

kAudioFileStreamProperty_ReadyToProducePackets

This property tells us that all of the audio information has been parsed and the stream is ready to produce audio packets; from here on the other callback (the packets callback) will be invoked. At this point we create the audio queue (AudioQueue). If the audio data contains magic cookie data, we first call AudioFileStreamGetPropertyInfo to check whether that property is available, and if so we read it out and write it to the AudioQueue. After that the audio data packets are parsed.
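A sketch of the property listener putting the pieces above together; createAudioQueue is a hypothetical helper that creates the queue and its buffers, and the property names follow the class-extension sketch earlier:

// Sketch: pick up the data format, then create the queue and copy the cookie when packets are ready
static void MyPropertyListenerProc(void *inClientData,
                                   AudioFileStreamID inAudioFileStream,
                                   AudioFileStreamPropertyID inPropertyID,
                                   AudioFileStreamPropertyFlags *ioFlags) {
    LocalAudioPlayer *player = (__bridge LocalAudioPlayer *)inClientData;

    if (inPropertyID == kAudioFileStreamProperty_DataFormat) {
        AudioStreamBasicDescription asbd;
        UInt32 size = sizeof(asbd);
        AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_DataFormat, &size, &asbd);
        player.audioFormat = asbd;                                 // saved for AudioQueueNewOutput
    } else if (inPropertyID == kAudioFileStreamProperty_ReadyToProducePackets) {
        [player createAudioQueue];                                 // hypothetical helper: queue + buffers

        // Copy the magic cookie to the queue, if the stream has one
        UInt32 cookieSize = 0;
        Boolean writable = false;
        OSStatus status = AudioFileStreamGetPropertyInfo(inAudioFileStream,
                                                         kAudioFileStreamProperty_MagicCookieData,
                                                         &cookieSize, &writable);
        if (status == noErr && cookieSize > 0) {
            void *cookie = malloc(cookieSize);
            AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_MagicCookieData,
                                       &cookieSize, cookie);
            AudioQueueSetProperty(player.audioQueue, kAudioQueueProperty_MagicCookie, cookie, cookieSize);
            free(cookie);
        }
    }
}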

Parsing Audio Frames

After the audio information has been parsed, the audio data packets are parsed next. In the packets callback we use the inClientData we set earlier to get from the C callback back to the Objective-C instance, and the code that processes the parsed audio data looks like the following:

[Figure: handling parsed audio data packets]

After the audio data has been parsed we want to write it into an AudioQueueBuffer. First, the prototype of the packets callback looks like this:

typedef void (*AudioFileStream_PacketsProc)(void *inClientData, UInt32 inNumberBytes, UInt32 inNumberPackets, const void *inInputData, AudioStreamPacketDescription *inPacketDescriptions);

The parameters are as follows:
1. inClientData: the user data set in AudioFileStreamOpen
2. inNumberBytes: the number of bytes of audio data
3. inNumberPackets: the number of audio packets (frames) parsed
4. inInputData: the data containing these audio packets
5. inPacketDescriptions: an array of AudioStreamPacketDescription; each element contains mStartOffset, which indicates where the packet's data begins, and mDataByteSize, which indicates the size of the packet's data.

At this point we first create the audio queue that will play the audio, and allocate space for each buffer, as shown below:

[Figure: creating the audio queue and allocating the buffers]

Then we iterate over each packet to get its offset and data size. If the current packet's data cannot fit in the current buffer, we enqueue the current buffer, switch to the next buffer, and reset the fill position and the packet count recorded for it:

self.audioQueueCurrentBufferIndex = (self.audioQueueCurrentBufferIndex + 1) % kNumberOfBuffers;
// Reset the fill offset and the packet count for the newly selected buffer
self.audioQueueCurrentBufferFilledBytes = 0;
self.audioQueueCurrentBufferPacketCount = 0;

If playback has not started yet at this point, we can start playing the music:

if (self.isPlaying == NO) {
    AudioQueueStart(self.audioQueue, NULL);
    self.isPlaying = YES;
}

If the next buffer is still in use, that is, all buffers are full and already enqueued, we have to wait:

// Wait until AudioQueue reports the buffer as no longer in use
while (inuse[self.audioQueueCurrentBufferIndex]);

Finally, if the buffer has room for the packet, we memcpy the packet's data into the right position in the buffer, save the per-packet information, set each packet's offset within the buffer (mStartOffset), and update the number of bytes stored in the current buffer and the number of packets it already contains.
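A sketch of the packet-handling loop described above; enqueueCurrentBuffer is a hypothetical helper that performs the enqueue/rotate/wait steps shown in the fragments, and the ivar and property names follow the earlier sketches:

// Sketch: the C packets callback forwards to the Objective-C method below
static void MyPacketsProc(void *inClientData, UInt32 inNumberBytes, UInt32 inNumberPackets,
                          const void *inInputData, AudioStreamPacketDescription *inPacketDescriptions) {
    LocalAudioPlayer *player = (__bridge LocalAudioPlayer *)inClientData;
    [player handleAudioPackets:inInputData numberBytes:inNumberBytes
                 numberPackets:inNumberPackets packetDescs:inPacketDescriptions];
}

// Sketch: copy each parsed packet into the current AudioQueueBuffer
- (void)handleAudioPackets:(const void *)inInputData
               numberBytes:(UInt32)inNumberBytes
             numberPackets:(UInt32)inNumberPackets
               packetDescs:(AudioStreamPacketDescription *)inPacketDescriptions {
    for (UInt32 i = 0; i < inNumberPackets; i++) {
        SInt64 packetOffset = inPacketDescriptions[i].mStartOffset;
        UInt32 packetSize   = inPacketDescriptions[i].mDataByteSize;

        // Not enough room left in the current buffer: enqueue it and move on to the next one
        if (kAQBufSize - self.audioQueueCurrentBufferFilledBytes < packetSize) {
            [self enqueueCurrentBuffer];   // hypothetical helper: enqueue, rotate, wait, reset counters
        }

        // Copy this packet into the current buffer and remember where it starts
        AudioQueueBufferRef buffer = audioQueueBuffers[self.audioQueueCurrentBufferIndex];
        memcpy((char *)buffer->mAudioData + self.audioQueueCurrentBufferFilledBytes,
               (const char *)inInputData + packetOffset,
               packetSize);
        packetDescs[self.audioQueueCurrentBufferPacketCount] = inPacketDescriptions[i];
        packetDescs[self.audioQueueCurrentBufferPacketCount].mStartOffset = self.audioQueueCurrentBufferFilledBytes;

        self.audioQueueCurrentBufferFilledBytes += packetSize;
        self.audioQueueCurrentBufferPacketCount += 1;
    }
}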

When a buffer has finished playing, AudioQueue invokes the callback that was set in AudioQueueNewOutput, as shown below:


In this callback we iterate over the buffers, find the one that has just been emptied, and mark it as unused.
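A sketch of that callback, the one passed to AudioQueueNewOutput (called MyAQOutputCallback in the earlier sketch), and the Objective-C method it forwards to; the names are assumptions consistent with the earlier sketches:

// Sketch: mark the finished buffer as reusable
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    LocalAudioPlayer *player = (__bridge LocalAudioPlayer *)inUserData;
    [player handleBufferCompleted:inBuffer];
}

- (void)handleBufferCompleted:(AudioQueueBufferRef)buffer {
    [self.audioInUseLock lock];
    for (int i = 0; i < kNumberOfBuffers; i++) {
        if (audioQueueBuffers[i] == buffer) {
            inuse[i] = false;            // this buffer can now be refilled
            break;
        }
    }
    [self.audioInUseLock unlock];
}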

Play audio data

While processing the packets, once a buffer is full (that is, there is not enough space left to hold the next packet), we can start playing the audio:

if (self.isPlaying == NO) {
    AudioQueueStart(self.audioQueue, NULL);
    self.isPlaying = YES;
}

We can also call the related methods, such as AudioQueuePause, to pause or stop audio playback, for example:
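For instance, thin wrappers might look like this sketch (the method names are assumptions):

// Sketch: pausing and stopping playback from the player
- (void)pause {
    AudioQueuePause(self.audioQueue);
}

- (void)stop {
    AudioQueueStop(self.audioQueue, true);   // true = stop immediately
    self.isPlaying = NO;
}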

Clean up Related resources

Finally, we clean up the resources. Earlier we used AudioQueueAddPropertyListener to register a listener for the kAudioQueueProperty_IsRunning property, so that a callback function is invoked whenever the AudioQueue starts or stops running.

In that callback, once playback has stopped, we use AudioQueueReset to reset the play queue, call AudioQueueFreeBuffer to free the buffer space, release all of the AudioQueue's resources, and close the AudioFileStream.
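A sketch of the listener and the cleanup it triggers (MyIsRunningProc and cleanUp are illustrative names, not the original code):

// Registered earlier with:
// AudioQueueAddPropertyListener(self.audioQueue, kAudioQueueProperty_IsRunning, MyIsRunningProc, (__bridge void *)self);

// Sketch: invoked when the queue starts or stops running
static void MyIsRunningProc(void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID) {
    LocalAudioPlayer *player = (__bridge LocalAudioPlayer *)inUserData;
    UInt32 isRunning = 0;
    UInt32 size = sizeof(isRunning);
    AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
    if (isRunning == 0) {
        [player cleanUp];               // the queue has stopped: release everything
    }
}

- (void)cleanUp {
    AudioQueueReset(self.audioQueue);                                   // reset the play queue
    for (int i = 0; i < kNumberOfBuffers; i++) {
        AudioQueueFreeBuffer(self.audioQueue, audioQueueBuffers[i]);    // free each buffer
    }
    AudioQueueDispose(self.audioQueue, true);                           // release all AudioQueue resources
    AudioFileStreamClose(self.audioFileStreamID);                       // close the AudioFileStream
    self.isPlaying = NO;
}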

In actual use, however, I found that when the data runs out the AudioQueue does not stop on its own, that is, the isRunning callback is never invoked. So I think we should track the current playback progress and call AudioQueueStop ourselves when playback finishes; that problem is left for later study.

End

Finally, all the code is done, and the user only needs to tap play to hear the wonderful music of "Far Away". Since the interface contains nothing but a play button, I will not attach a demo. Put on your headphones and quietly enjoy the fruits of your work along with this wonderful music.
