iOS Audio Playback (4): AudioFile

Source: http://msching.github.io/blog/2014/07/19/audio-in-ios-4/preface
Author: User

Following the third article, which covered AudioFileStream, this article talks about AudioFile. Like AudioFileStream, AudioFile belongs to the AudioToolbox framework. It can likewise perform Step 1 described in the first article, reading audio format information and performing frame separation, but its capabilities go far beyond that.

Introduction to AudioFile

As described in the official document:

a C programming interface that enables you to read or write a wide variety of audio data to or from disk or a memory buffer. With Audio File Services you can:

  • Create, initialize, open, and close audio files
  • Read and write audio files
  • Optimize audio files
  • Work with user data and global information

This class can be used to create and initialize audio files, read and write audio data, optimize audio files, and read and write audio format information. It is very powerful: it can be used not only to support audio playback but also to generate audio files. This article, of course, only covers the parts related to audio playback (opening audio files, reading format information, and reading audio data; in fact, these are the only methods I know well, as I have never used the other features... >_<).

Opening an AudioFile

AudioFile provides two methods to open a file:

1. AudioFileOpenURL

enum {
  kAudioFileReadPermission      = 0x01,
  kAudioFileWritePermission     = 0x02,
  kAudioFileReadWritePermission = 0x03
};

extern OSStatus AudioFileOpenURL (CFURLRef inFileRef,
                                  SInt8 inPermissions,
                                  AudioFileTypeID inFileTypeHint,
                                  AudioFileID * outAudioFile);

This method is used to read local files. Its parameters:

The first parameter is the file path;

The second parameter indicates the permission with which the file may be used: read, write, or read/write. If you open the file and then perform an operation outside the granted permission, you will get the kAudioFilePermissionsError error code (for example, declaring kAudioFileReadPermission but then calling AudioFileWriteBytes);

The third parameter, like the corresponding parameter of AudioFileStream's open method, is a hint that helps AudioFile determine the type of the file to be parsed. If the file type is known, it should be passed in;

The fourth parameter returns the AudioFileID; this ID must be saved for use as a parameter in subsequent method calls;

The return value indicates whether the file was opened successfully (OSStatus == noErr).

2. AudioFileOpenWithCallbacks

extern OSStatus AudioFileOpenWithCallbacks (void * inClientData,
                                            AudioFile_ReadProc inReadFunc,
                                            AudioFile_WriteProc inWriteFunc,
                                            AudioFile_GetSizeProc inGetSizeFunc,
                                            AudioFile_SetSizeProc inSetSizeFunc,
                                            AudioFileTypeID inFileTypeHint,
                                            AudioFileID * outAudioFile);

After the first Open method, this one may seem puzzling: without a URL parameter, how do I tell AudioFile which file to open? Let's look at the parameter descriptions:

The first parameter is the context object;

The second parameter is the callback invoked when AudioFile needs to read audio data (called synchronously during the Open and Read methods);

The third parameter is the callback invoked when AudioFile needs to write audio data (used when writing audio files);

The fourth parameter is the callback invoked when AudioFile needs the total file size (called synchronously during the Open and Read methods);

The fifth parameter is the callback invoked when AudioFile needs to set the file size (used when writing audio files);

The sixth and seventh parameters, and the return value, are the same as in AudioFileOpenURL;

The key here is the AudioFile_ReadProc callback. Viewed from another angle, this method offers more freedom than the first: AudioFile only needs a data source, whether it is a file on disk, data in memory, or even a network stream. As long as the AudioFile_ReadProc callback supplies the proper data whenever AudioFile requires it (during Open and Read), AudioFile works. That is, this method can be used not only to read local files but also, together with AudioFileStream, to read data arriving as a stream.

Next, let's look at the two callbacks related to reading, AudioFile_GetSizeProc and AudioFile_ReadProc:

typedef SInt64 (*AudioFile_GetSizeProc)(void * inClientData);

typedef OSStatus (*AudioFile_ReadProc)(void * inClientData,
                                       SInt64 inPosition,
                                       UInt32 requestCount,
                                       void * buffer,
                                       UInt32 * actualCount);

First, the AudioFile_GetSizeProc callback. This one is easy to understand: return the total length of the file, which can be obtained from the file system or from an HTTP response.

Next, the AudioFile_ReadProc callback:

The first parameter is the context object, as before;

The second parameter is the byte offset at which to start reading;

The third parameter is the length of the data to be read;

The fourth parameter is the output buffer pointer, whose space has already been allocated; what we need to do is memcpy the data into this buffer;

The fifth parameter returns the actual data length, i.e., the number of bytes memcpy'd into the buffer;

The return value: if nothing went wrong, return noErr; if an exception occurred, choose the appropriate error constant for the exception type (generally, returning noErr is enough if no other return value is needed);

Here we need to explain how this callback works: AudioFile calls it whenever it needs data, which happens at the following points in time:

1. When the Open method is called. AudioFile parses the audio format information during Open; only a file in a valid audio format can be opened successfully, otherwise Open returns an error code. (In other words, once Open succeeds, it is equivalent to AudioFileStream returning ReadyToProducePackets after Parse: as soon as Open succeeds, you can start reading audio data; see the third article for details.) So during the Open call you must supply some audio data for parsing;

2. When Read-related methods are called, which is easy to understand;

When providing data through the callback, pay attention to the inPosition and requestCount parameters: together they mean that this callback requires requestCount bytes of data starting at inPosition. There are two situations:

1. Ample data: copy the data in that range into the buffer, assign requestCount to actualCount, and return noErr;

2. Insufficient data: copy whatever data is at hand into the buffer, noting that the copied bytes must be continuous data starting at inPosition. After copying, assign the length actually copied into the buffer to actualCount and return noErr. This process can be expressed as the following code:

static OSStatus MyAudioFileReadCallBack(void *inClientData,
                                        SInt64 inPosition,
                                        UInt32 requestCount,
                                        void *buffer,
                                        UInt32 *actualCount)
{
    __unsafe_unretained MyContext *context = (__bridge MyContext *)inClientData;
    *actualCount = [context availableDataLengthAtOffset:inPosition maxLength:requestCount];
    if (*actualCount > 0)
    {
        NSData *data = [context dataAtOffset:inPosition length:*actualCount];
        memcpy(buffer, [data bytes], [data length]);
    }
    return noErr;
}

Insufficient data itself covers two situations:

2.1. Insufficient callback data while Open is being called: AudioFile's Open method reads data in several steps, according to the file format type, to determine whether the format is valid, and each step uses different inPosition and requestCount values. If one step fails, the next is tried; if all steps fail, Open fails. Put simply, before calling Open you must ensure that the audio file's format information is complete, which means AudioFile cannot be used on its own to read an audio stream; for stream playback it must be preceded by AudioFileStream, using the ReadyToProducePackets flag to guarantee that the format information is complete;

2.2. Insufficient callback data while a Read method is being called: in this case inPosition and requestCount depend on the parameters passed to the Read method. Insufficient data does not affect the Read call itself; as long as the callback returns noErr, the Read succeeds, but the data actually handed to the caller of Read falls short, so the problem must be handled by the caller of the Read operation;

Read audio format information

After opening the audio file, you can read its format information. The methods used are:

extern OSStatus AudioFileGetPropertyInfo(AudioFileID inAudioFile,
                                         AudioFilePropertyID inPropertyID,
                                         UInt32 * outDataSize,
                                         UInt32 * isWritable);

extern OSStatus AudioFileGetProperty(AudioFileID inAudioFile,
                                     AudioFilePropertyID inPropertyID,
                                     UInt32 * ioDataSize,
                                     void * outPropertyData);

AudioFileGetPropertyInfo is used to obtain the data size of a property (outDataSize) and whether the property is writable (isWritable); AudioFileGetProperty is used to obtain the property's data itself. For variable-size properties (formatList, for example), you must first call AudioFileGetPropertyInfo to get the data size; for fixed-size properties of certain types you can call AudioFileGetProperty directly, BitRate for example:

AudioFileID fileID; // AudioFileID returned by the Open method

// obtain the format information
UInt32 formatListSize = 0;
OSStatus status = AudioFileGetPropertyInfo(fileID, kAudioFilePropertyFormatList, &formatListSize, NULL);
if (status == noErr)
{
    AudioFormatListItem *formatList = (AudioFormatListItem *)malloc(formatListSize);
    status = AudioFileGetProperty(fileID, kAudioFilePropertyFormatList, &formatListSize, formatList);
    if (status == noErr)
    {
        UInt32 count = formatListSize / sizeof(AudioFormatListItem);
        for (UInt32 i = 0; i < count; ++i)
        {
            AudioStreamBasicDescription pasbd = formatList[i].mASBD;
            // select the desired format..
        }
    }
    free(formatList);
}

// get the bitRate
UInt32 bitRate;
UInt32 bitRateSize = sizeof(bitRate);
status = AudioFileGetProperty(fileID, kAudioFilePropertyBitRate, &bitRateSize, &bitRate);
if (status != noErr)
{
    // handle errors
}

The following properties can be obtained; refer to the documentation for the information you need (note kAudioFilePropertyEstimatedDuration, which lets you obtain the duration):

enum {
  kAudioFilePropertyFileFormat             =    'ffmt',
  kAudioFilePropertyDataFormat             =    'dfmt',
  kAudioFilePropertyIsOptimized            =    'optm',
  kAudioFilePropertyMagicCookieData        =    'mgic',
  kAudioFilePropertyAudioDataByteCount     =    'bcnt',
  kAudioFilePropertyAudioDataPacketCount   =    'pcnt',
  kAudioFilePropertyMaximumPacketSize      =    'psze',
  kAudioFilePropertyDataOffset             =    'doff',
  kAudioFilePropertyChannelLayout          =    'cmap',
  kAudioFilePropertyDeferSizeUpdates       =    'dszu',
  kAudioFilePropertyMarkerList             =    'mkls',
  kAudioFilePropertyRegionList             =    'rgls',
  kAudioFilePropertyChunkIDs               =    'chid',
  kAudioFilePropertyInfoDictionary         =    'info',
  kAudioFilePropertyPacketTableInfo        =    'pnfo',
  kAudioFilePropertyFormatList             =    'flst',
  kAudioFilePropertyPacketSizeUpperBound   =    'pkub',
  kAudioFilePropertyReserveDuration        =    'rsrv',
  kAudioFilePropertyEstimatedDuration      =    'edur',
  kAudioFilePropertyBitRate                =    'brat',
  kAudioFilePropertyID3Tag                 =    'id3t',
  kAudioFilePropertySourceBitDepth         =    'sbtd',
  kAudioFilePropertyAlbumArtwork           =    'aart',
  kAudioFilePropertyAudioTrackCount        =    'atct',
  kAudioFilePropertyUseAudioTrack          =    'uatk'
};

Read audio data

There are two ways to read audio data:

1. Read audio data directly:

extern OSStatus AudioFileReadBytes (AudioFileID inAudioFile,
                                    Boolean inUseCache,
                                    SInt64 inStartingByte,
                                    UInt32 * ioNumBytes,
                                    void * outBuffer);

The first parameter is the AudioFileID;

The second parameter indicates whether caching is required; generally false is passed;

The third parameter is the byte offset at which to start reading;

The fourth parameter, on input, indicates how much data to read; after the call completes, on output, it indicates how much data was actually read (i.e., the requestCount and actualCount of the Read callback);

The fifth parameter is the buffer pointer, for which enough memory (ioNumBytes large) must be allocated in advance (it corresponds to the buffer in the Read callback, which is why no memory needs to be allocated inside the callback);

The return value indicates whether the read succeeded; on EOF, kAudioFileEndOfFileError is returned;

All data obtained with this method has not been frame-separated; to play or decode it, you must pass it through AudioFileStream for frame separation;

2. Read audio data by frame (packet):

extern OSStatus AudioFileReadPacketData (AudioFileID inAudioFile,
                                         Boolean inUseCache,
                                         UInt32 * ioNumBytes,
                                         AudioStreamPacketDescription * outPacketDescriptions,
                                         SInt64 inStartingPacket,
                                         UInt32 * ioNumPackets,
                                         void * outBuffer);

extern OSStatus AudioFileReadPackets (AudioFileID inAudioFile,
                                      Boolean inUseCache,
                                      UInt32 * outNumBytes,
                                      AudioStreamPacketDescription * outPacketDescriptions,
                                      SInt64 inStartingPacket,
                                      UInt32 * ioNumPackets,
                                      void * outBuffer);

There are two methods for reading by frame. They look similar, and even their parameters are almost identical, but their usage scenarios and efficiency differ. The official documentation describes them as follows:

• AudioFileReadPacketData is memory efficient when reading variable bit-rate (VBR) audio data;

• AudioFileReadPacketData is more efficient than AudioFileReadPackets when reading compressed file formats that do not have packet tables, such as MP3 or ADTS. This function is a good choice for reading either CBR (constant bit-rate) or VBR data if you do not need to read a fixed duration of audio;

• Use AudioFileReadPackets only when you need to read a fixed duration of audio data, or when you are reading only uncompressed audio.

In short: use AudioFileReadPackets only when you must read a fixed duration of audio or uncompressed audio; otherwise AudioFileReadPacketData is more efficient and saves memory;

Let's look at the parameters:

The first and second parameters are the same as in AudioFileReadBytes;

The third parameter: in AudioFileReadPacketData, ioNumBytes is used for both input and output, giving the size of outBuffer on input and the size of the data actually read on output; in AudioFileReadPackets, outNumBytes is output only, giving the size of the data actually read;

The fourth parameter is the packet-description array pointer; memory must be allocated before the call, large enough for ioNumPackets descriptions (ioNumPackets * sizeof(AudioStreamPacketDescription));

The fifth parameter, on input, is the number of frames to read; on output, the number of frames actually read;

The sixth parameter, the outBuffer data pointer, must have space allocated before the call. It looks the same in both methods, but it is not: for AudioFileReadPacketData you only need to allocate approximate-frame-size * number-of-frames, and the method determines the final number of frames output based on the memory provided, reducing the frame count if space is insufficient; for AudioFileReadPackets you must allocate maximum-frame-size (or the frame-size upper bound) * number-of-frames (the maximum packet size and the packet-size upper bound are the kAudioFilePropertyMaximumPacketSize and kAudioFilePropertyPacketSizeUpperBound properties, shown in the second code sample below). This is also why the third parameter is input/output in one method but output only in the other, and why the former method saves more memory than the latter;

The return value is the same as for AudioFileReadBytes;

The data read by these two methods has already been frame-separated and can be used directly for playback or decoding.

The following code demonstrates the two methods (taking MP3 as an example):

AudioFileID fileID;                  // AudioFileID returned by the Open method
UInt32 ioNumPackets = ...;           // how many packets to read
SInt64 inStartingPacket = ...;       // which packet to start reading from

UInt32 bitRate = ...;                // AudioFileGetProperty reads kAudioFilePropertyBitRate
UInt32 sampleRate = ...;             // AudioFileGetProperty reads kAudioFilePropertyDataFormat or kAudioFilePropertyFormatList
UInt32 byteCountPerPacket = 144 * bitRate / sampleRate; // the approximate size of each packet for MP3 data

UInt32 descSize = sizeof(AudioStreamPacketDescription) * ioNumPackets;
AudioStreamPacketDescription *outPacketDescriptions = (AudioStreamPacketDescription *)malloc(descSize);

UInt32 ioNumBytes = byteCountPerPacket * ioNumPackets;
void *outBuffer = (void *)malloc(ioNumBytes);

OSStatus status = AudioFileReadPacketData(fileID,
                                          false,
                                          &ioNumBytes,
                                          outPacketDescriptions,
                                          inStartingPacket,
                                          &ioNumPackets,
                                          outBuffer);
AudioFileID fileID;                  // AudioFileID returned by the Open method
UInt32 ioNumPackets = ...;           // how many packets to read
SInt64 inStartingPacket = ...;       // which packet to start reading from

UInt32 maxByteCountPerPacket = ...;  // AudioFileGetProperty reads kAudioFilePropertyMaximumPacketSize, the maximum packet size
// you can also use:
// UInt32 byteCountUpperBoundPerPacket = ...; // AudioFileGetProperty reads kAudioFilePropertyPacketSizeUpperBound, the packet-size upper bound (obtained without scanning the entire file)

UInt32 descSize = sizeof(AudioStreamPacketDescription) * ioNumPackets;
AudioStreamPacketDescription *outPacketDescriptions = (AudioStreamPacketDescription *)malloc(descSize);

UInt32 outNumBytes = 0;
UInt32 ioNumBytes = maxByteCountPerPacket * ioNumPackets;
void *outBuffer = (void *)malloc(ioNumBytes);

OSStatus status = AudioFileReadPackets(fileID,
                                       false,
                                       &outNumBytes,
                                       outPacketDescriptions,
                                       inStartingPacket,
                                       &ioNumPackets,
                                       outBuffer);
Seek

The idea behind seeking is the same as previously discussed for AudioFileStream; the difference is that AudioFile itself provides no facility for correcting the seek offset or seek time:

• With AudioFileReadBytes, you need to compute the approximate byte offset, approximateSeekOffset;
• With AudioFileReadPacketData or AudioFileReadPackets, you need to compute the target packet index, seekToPacket;

For how to calculate approximateSeekOffset and seekToPacket, see the third article.

Closing AudioFile

To close an AudioFile, call AudioFileClose.

extern OSStatus AudioFileClose (AudioFileID inAudioFile);
Summary

This article introduced AudioFile's audio-reading functionality. To summarize:

• AudioFile has two Open methods; choose the one that fits your use scenario;

• AudioFileOpenURL is used to read local files;

• AudioFileOpenWithCallbacks is applicable in more scenarios than the former; its AudioFile_ReadProc callback is invoked synchronously during both the Open method itself and the Read methods;

• You must ensure the audio file's format information is complete before calling Open, so AudioFile cannot be used on its own to read an audio stream; for stream reading it must be combined with AudioFileStream (use AudioFileStream to confirm that the file's format information has been fully parsed, and only then call the Open method);

• When reading format information with AudioFileGetProperty, first determine whether the property requires a prior call to AudioFileGetPropertyInfo to obtain its data size;

• Choose the audio-data reading method according to the use scenario; the variables that must be computed for seeking also differ between reading methods;

• When finished with an AudioFile, call AudioFileClose to close it;

Sample Code

No demo for reading local files with AudioFile is provided here. For the use of AudioFile in stream playback, I recommend reading DOUAudioStreamer, Douban's open-source player code.

Next Up

The next article will describe how to use AudioQueue.
