Introduction to AudioFile

AudioFile can be used to create and initialize audio files, read and write audio data, optimize audio files, and read and write audio format information. It is quite powerful: it supports not only audio playback but also audio file generation. This article only covers the parts related to audio playback (opening audio files, reading format information, and reading audio data); in fact, these are the only methods I know anything about, I have never used the others... > _ <
Open the "posture" of AudioFile"AudioFile
Two methods to open a file are provided:
1. AudioFileOpenURL
```c
enum {
    kAudioFileReadPermission      = 0x01,
    kAudioFileWritePermission     = 0x02,
    kAudioFileReadWritePermission = 0x03
};

extern OSStatus AudioFileOpenURL (CFURLRef inFileRef,
                                  SInt8 inPermissions,
                                  AudioFileTypeID inFileTypeHint,
                                  AudioFileID *outAudioFile);
```
This method is used to open local files. Its parameters:
The first parameter is the file path;
The second parameter specifies the permissions with which the file is opened: read, write, or read/write. If you open the file with one permission and then perform an operation it does not allow, you will get a kAudioFilePermissionsError error code (for example, opening with kAudioFileReadPermission and then calling AudioFileWriteBytes);
The third parameter, like the one in the AudioFileStream open method, is a hint that tells AudioFile the type of the file to be parsed. If the file type is known, it should be passed in;
The fourth parameter returns the AudioFileID, which must be saved and passed to subsequent method calls;
The return value indicates whether the file was opened successfully (OSStatus == noErr).
2. AudioFileOpenWithCallbacks
```c
extern OSStatus AudioFileOpenWithCallbacks (void *inClientData,
                                            AudioFile_ReadProc inReadFunc,
                                            AudioFile_WriteProc inWriteFunc,
                                            AudioFile_GetSizeProc inGetSizeFunc,
                                            AudioFile_SetSizeProc inSetSizeFunc,
                                            AudioFileTypeID inFileTypeHint,
                                            AudioFileID *outAudioFile);
```
After the first Open method, this one seems a bit puzzling: without a URL parameter, how do we tell AudioFile which file to open? Let's look at the parameters:
The first parameter is the context object, which needs no further explanation;
The second parameter is the callback invoked when AudioFile needs to read audio data (called synchronously during the Open and Read methods);
The third parameter is the callback invoked when AudioFile needs to write audio data (used when writing audio files);
The fourth parameter is the callback invoked when AudioFile needs the total file size (called synchronously during the Open and Read methods);
The fifth parameter is the callback invoked when AudioFile needs to set the file size (used when writing audio files);
The sixth and seventh parameters and the return value are the same as in AudioFileOpenURL;
The key to this method is the AudioFile_ReadProc callback. Looked at another way, this method offers more freedom than the first one: AudioFile only needs a data source. Whether the data comes from a file on disk, from memory, or even from a network stream, as long as the AudioFile_ReadProc callback can feed proper data to AudioFile when it is needed (during Open and Read), AudioFile can be used. In other words, this method can be used not only to read local files but also, together with AudioFileStream, to read data in the form of a stream.
Next, let's look at the two read-related callbacks, AudioFile_GetSizeProc and AudioFile_ReadProc:
```c
typedef SInt64 (*AudioFile_GetSizeProc)(void *inClientData);

typedef OSStatus (*AudioFile_ReadProc)(void *inClientData,
                                       SInt64 inPosition,
                                       UInt32 requestCount,
                                       void *buffer,
                                       UInt32 *actualCount);
```
First, the AudioFile_GetSizeProc callback. It is easy to understand: return the total length of the file, which can be obtained from the file system or from an httpResponse.
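For a local file, the size callback is straightforward. Below is a minimal, self-contained sketch in plain C: `MyGetSizeProc` is a hypothetical name, and the context is assumed to be just a file path rather than the real Core Audio client data object:

```c
#include <stdio.h>
#include <stdint.h>

/* Models AudioFile_GetSizeProc for a local file: the context is simply a path.
   Returns the total file length in bytes, or -1 on error. */
static int64_t MyGetSizeProc(void *inClientData)
{
    const char *path = (const char *)inClientData;
    FILE *f = fopen(path, "rb");
    if (f == NULL) return -1;
    fseek(f, 0, SEEK_END);   /* seek to the end ...       */
    long size = ftell(f);    /* ... and read the position */
    fclose(f);
    return (int64_t)size;
}
```

For the network case, the same value would instead come from the Content-Length of the httpResponse.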
Next, the AudioFile_ReadProc callback:
The first parameter is the context object, which needs no repeating;
The second parameter is the byte position from which to start reading;
The third parameter is the length of the data to be read;
The fourth parameter is the output buffer pointer; its space has already been allocated. What we need to do is memcpy the data into this buffer;
The fifth parameter returns the actual data length, i.e. the number of bytes memcpy'd into the buffer;
The return value: return noErr if nothing went wrong; if an exception occurs, pick the appropriate error constant for the exception type (generally, returning noErr is enough if you have no other return values to use);
It is worth explaining how this callback works. AudioFile calls it whenever it needs data, which happens at these points:

1. When the Open method is called. Opening an AudioFile parses the audio format information, and only a compliant audio format can be opened successfully; otherwise the Open method returns an error code. (In other words, a successful Open is equivalent to AudioFileStream returning ReadyToProducePackets after parsing. Likewise, audio data can be read as soon as Open succeeds; for details see the third article of this series.) So some audio data must be available for parsing during the Open call;

2. When the Read-related methods are called, which is easy to understand;
When providing data in the callback, pay attention to the inPosition and requestCount parameters. Together they mean: this callback asks for requestCount bytes of data starting at inPosition. There are two cases:

1. Ample data: copy the data in that range into buffer, assign requestCount to actualCount, and return noErr;

2. Insufficient data: copy whatever data you have into buffer, making sure the copied data is continuous and starts at inPosition. After copying, set actualCount to the number of bytes actually copied into buffer and return noErr. This process can be expressed with the following code:
```objc
static OSStatus MyAudioFileReadCallBack(void *inClientData,
                                        SInt64 inPosition,
                                        UInt32 requestCount,
                                        void *buffer,
                                        UInt32 *actualCount)
{
    __unsafe_unretained MyContext *context = (__bridge MyContext *)inClientData;
    *actualCount = [context availableDataLengthAtOffset:inPosition maxLength:requestCount];
    if (*actualCount > 0)
    {
        NSData *data = [context dataAtOffset:inPosition length:*actualCount];
        memcpy(buffer, [data bytes], [data length]);
    }
    return noErr;
}
```
What happens when data is insufficient differs between the two call scenarios:
2.1. Insufficient callback data while Open is being called: the Open method of AudioFile reads data in several steps, depending on the file format type, to determine whether the file format is legal; the inPosition and requestCount of each step are different. If one step fails, the next one is tried; if several steps fail, the Open method fails. Simply put, you must make sure the format information of the audio file is complete before calling Open. This means AudioFile cannot be used on its own to read an audio stream: for stream playback you must first use AudioFileStream to obtain the ReadyToProducePackets flag, which guarantees the format information is complete;
2.2. Insufficient callback data while Read is being called: in this case inPosition and requestCount depend on the parameters passed to the Read method. Insufficient data does not affect the Read call itself; as long as the callback returns noErr, the Read succeeds, but the data actually handed to the caller of Read will be less than requested, so the problem is left to the caller of the Read operation to handle;
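The copy-and-clamp behavior described above can be modeled in plain, self-contained C. Everything below is a hypothetical stand-in (`MyContext`, `MyReadProc`, plain integer types) rather than the real Core Audio API, but the actualCount logic is the same:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for the Core Audio types. */
typedef struct {
    const uint8_t *bytes;   /* the data currently on hand */
    int64_t        length;  /* how much of it is available */
} MyContext;

/* Models AudioFile_ReadProc: copy up to requestCount bytes starting at inPosition,
   continuous and starting exactly at inPosition, and report what was copied. */
static int MyReadProc(MyContext *ctx, int64_t inPosition, uint32_t requestCount,
                      void *buffer, uint32_t *actualCount)
{
    int64_t available = ctx->length - inPosition;   /* bytes past inPosition */
    if (available < 0) available = 0;
    uint32_t toCopy = (available < (int64_t)requestCount) ? (uint32_t)available
                                                          : requestCount;
    if (toCopy > 0)
        memcpy(buffer, ctx->bytes + inPosition, toCopy);
    *actualCount = toCopy;   /* ample: == requestCount; insufficient: what we had */
    return 0;                /* always "noErr" */
}
```

With ample data, `*actualCount` equals `requestCount`; with insufficient data it is clamped to whatever was available, which is exactly the situation the caller of Read must then handle.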
Reading audio format information

Once the audio file is open, its format information can be read. The methods are:
```c
extern OSStatus AudioFileGetPropertyInfo(AudioFileID inAudioFile,
                                         AudioFilePropertyID inPropertyID,
                                         UInt32 *outDataSize,
                                         UInt32 *isWritable);

extern OSStatus AudioFileGetProperty(AudioFileID inAudioFile,
                                     AudioFilePropertyID inPropertyID,
                                     UInt32 *ioDataSize,
                                     void *outPropertyData);
```
AudioFileGetPropertyInfo is used to obtain the data size of a property (outDataSize) and whether the property is writable (isWritable); AudioFileGetProperty is then used to obtain the property's data itself. For properties of variable size (formatList, for example), the data size can only be obtained via AudioFileGetPropertyInfo, while certain fixed-size scalar properties can be read by calling AudioFileGetProperty directly, without calling AudioFileGetPropertyInfo first, BitRate for example:
```objc
AudioFileID fileID;   // the AudioFileID returned by the Open method

// get the format list
UInt32 formatListSize = 0;
OSStatus status = AudioFileGetPropertyInfo(fileID,
                                           kAudioFilePropertyFormatList,
                                           &formatListSize,
                                           NULL);
if (status == noErr)
{
    AudioFormatListItem *formatList = (AudioFormatListItem *)malloc(formatListSize);
    status = AudioFileGetProperty(fileID,
                                  kAudioFilePropertyFormatList,
                                  &formatListSize,
                                  formatList);
    if (status == noErr)
    {
        UInt32 itemCount = formatListSize / sizeof(AudioFormatListItem);
        for (UInt32 i = 0; i < itemCount; ++i)
        {
            AudioStreamBasicDescription asbd = formatList[i].mASBD;
            // select the desired format..
        }
    }
    free(formatList);
}

// get the bit rate
UInt32 bitRate = 0;
UInt32 bitRateSize = sizeof(bitRate);
status = AudioFileGetProperty(fileID,
                              kAudioFilePropertyBitRate,
                              &bitRateSize,
                              &bitRate);
if (status != noErr)
{
    // handle the error
}
```
The available properties are listed below; consult the documentation for the ones you need (note that EstimatedDuration is available here, so you can get the duration):
```c
enum
{
    kAudioFilePropertyFileFormat             = 'ffmt',
    kAudioFilePropertyDataFormat             = 'dfmt',
    kAudioFilePropertyIsOptimized            = 'optm',
    kAudioFilePropertyMagicCookieData        = 'mgic',
    kAudioFilePropertyAudioDataByteCount     = 'bcnt',
    kAudioFilePropertyAudioDataPacketCount   = 'pcnt',
    kAudioFilePropertyMaximumPacketSize      = 'psze',
    kAudioFilePropertyDataOffset             = 'doff',
    kAudioFilePropertyChannelLayout          = 'cmap',
    kAudioFilePropertyDeferSizeUpdates       = 'dszu',
    kAudioFilePropertyMarkerList             = 'mkls',
    kAudioFilePropertyRegionList             = 'rgls',
    kAudioFilePropertyChunkIDs               = 'chid',
    kAudioFilePropertyInfoDictionary         = 'info',
    kAudioFilePropertyPacketTableInfo        = 'pnfo',
    kAudioFilePropertyFormatList             = 'flst',
    kAudioFilePropertyPacketSizeUpperBound   = 'pkub',
    kAudioFilePropertyReserveDuration        = 'rsrv',
    kAudioFilePropertyEstimatedDuration      = 'edur',
    kAudioFilePropertyBitRate                = 'brat',
    kAudioFilePropertyID3Tag                 = 'id3t',
    kAudioFilePropertySourceBitDepth         = 'sbtd',
    kAudioFilePropertyAlbumArtwork           = 'aart',
    kAudioFilePropertyAudioTrackCount        = 'atct',
    kAudioFilePropertyUseAudioTrack          = 'uatk'
};
```
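As an aside, each property ID above is a four-character code ('ffmt', 'brat', ...) packed into a UInt32. A small plain-C sketch of how such a code is formed (the `FourCC` helper is hypothetical, not part of AudioToolbox):

```c
#include <stdint.h>

/* Pack a four-character code such as 'brat' into a big-endian UInt32,
   the way multi-character constants behave on Apple platforms. */
static uint32_t FourCC(char a, char b, char c, char d)
{
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) |
           ((uint32_t)c << 8)  |  (uint32_t)d;
}
```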
Reading audio data

There are two ways to read audio data:
1. Directly read audio data:
```c
extern OSStatus AudioFileReadBytes (AudioFileID inAudioFile,
                                    Boolean inUseCache,
                                    SInt64 inStartingByte,
                                    UInt32 *ioNumBytes,
                                    void *outBuffer);
```
The first parameter is the AudioFileID;
The second parameter indicates whether caching is needed; generally false;
The third parameter is the byte offset from which to start reading;
The fourth parameter: on input, the amount of data to read; on return, the amount of data actually read (i.e. the requestCount and actualCount of the Read callback);
The fifth parameter is the buffer pointer; enough memory (ioNumBytes large) must be allocated in advance (this is the buffer of the Read callback, which is why no memory needs to be allocated inside the Read callback);
The return value indicates whether the read succeeded; at EOF it returns kAudioFileEndOfFileError;
Note that the data obtained with this method has not been split into frames. If you want to play or decode it, you must pass it through AudioFileStream to split it into frames;
2. Read audio data by frame (packet):
```c
extern OSStatus AudioFileReadPacketData (AudioFileID inAudioFile,
                                         Boolean inUseCache,
                                         UInt32 *ioNumBytes,
                                         AudioStreamPacketDescription *outPacketDescriptions,
                                         SInt64 inStartingPacket,
                                         UInt32 *ioNumPackets,
                                         void *outBuffer);

extern OSStatus AudioFileReadPackets (AudioFileID inAudioFile,
                                      Boolean inUseCache,
                                      UInt32 *outNumBytes,
                                      AudioStreamPacketDescription *outPacketDescriptions,
                                      SInt64 inStartingPacket,
                                      UInt32 *ioNumPackets,
                                      void *outBuffer);
```
These two methods read by frame. They look similar and even their parameters are almost the same, but their use scenarios and efficiency differ. The official documentation describes them as follows:
- AudioFileReadPacketData is memory efficient when reading variable bit-rate (VBR) audio data, and is more efficient than AudioFileReadPackets when reading compressed file formats that do not have packet tables, such as MP3 or ADTS. This function is a good choice for reading either CBR (constant bit-rate) or VBR data if you do not need to read a fixed duration of audio.

- Use AudioFileReadPackets only when you need to read a fixed duration of audio data, or when you are reading only uncompressed audio.

That is, AudioFileReadPackets should be used only when you need to read a fixed duration of audio or uncompressed audio; in all other cases AudioFileReadPacketData is more efficient and saves memory;
Let's take a look at these parameters:
The first and second parameters are the same as in AudioFileReadBytes;
The third parameter: in AudioFileReadPacketData, ioNumBytes is used for both input and output, the size of outBuffer on input and the size of the data actually read on output; in AudioFileReadPackets, outNumBytes is output only, the size of the data actually read;
The fourth parameter is a pointer to an array of packet descriptions. Memory must be allocated before the call, enough for ioNumPackets entries (ioNumPackets * sizeof(AudioStreamPacketDescription));
The fifth parameter is the index of the packet from which to start reading;
The sixth parameter: on input, the number of packets to read; on output, the number of packets actually read;
The seventh parameter, the outBuffer data pointer, must have its space allocated before the call. This parameter looks identical in the two methods, but it is not. For AudioFileReadPacketData you only need to allocate approximate packet size * packet count: the method determines how many packets to finally output based on the given memory space, and reduces the packet count if the space is insufficient. For AudioFileReadPackets you must allocate maximum packet size (or the packet-size upper bound) * packet count. (The maximum packet size is exact but may require scanning the entire file, while the upper bound can be obtained without scanning it.) This is also why the third parameter is input/output in one method but output-only in the other, and it is why the former method uses less memory than the latter;
The return value is the same as for AudioFileReadBytes;
The data read by these two methods has already been split into packets and can be used directly for playback or decoding.
The following code demonstrates the use of the two methods (taking MP3 as an example):
```objc
AudioFileID fileID;               // the AudioFileID returned by the Open method
UInt32 ioNumPackets = ...;        // how many packets to read
SInt64 inStartingPacket = ...;    // the packet index to start reading from

UInt32 bitRate = ...;             // AudioFileGetProperty, kAudioFilePropertyBitRate
UInt32 sampleRate = ...;          // AudioFileGetProperty, kAudioFilePropertyDataFormat or kAudioFilePropertyFormatList
UInt32 byteCountPerPacket = 144 * bitRate / sampleRate;   // approximate packet size for MP3 data

UInt32 descSize = sizeof(AudioStreamPacketDescription) * ioNumPackets;
AudioStreamPacketDescription *outPacketDescriptions = (AudioStreamPacketDescription *)malloc(descSize);

UInt32 ioNumBytes = byteCountPerPacket * ioNumPackets;
void *outBuffer = (void *)malloc(ioNumBytes);

OSStatus status = AudioFileReadPacketData(fileID,
                                          false,
                                          &ioNumBytes,
                                          outPacketDescriptions,
                                          inStartingPacket,
                                          &ioNumPackets,
                                          outBuffer);
```
```objc
AudioFileID fileID;               // the AudioFileID returned by the Open method
UInt32 ioNumPackets = ...;        // how many packets to read
SInt64 inStartingPacket = ...;    // the packet index to start reading from

UInt32 maxByteCountPerPacket = ...;   // AudioFileGetProperty, kAudioFilePropertyMaximumPacketSize, the maximum packet size
// or:
// UInt32 byteCountUpperBoundPerPacket = ...;   // AudioFileGetProperty, kAudioFilePropertyPacketSizeUpperBound,
//                                              // the upper bound of the packet size (without scanning the entire file)

UInt32 descSize = sizeof(AudioStreamPacketDescription) * ioNumPackets;
AudioStreamPacketDescription *outPacketDescriptions = (AudioStreamPacketDescription *)malloc(descSize);

UInt32 outNumBytes = 0;
UInt32 ioNumBytes = maxByteCountPerPacket * ioNumPackets;
void *outBuffer = (void *)malloc(ioNumBytes);

OSStatus status = AudioFileReadPackets(fileID,
                                       false,
                                       &outNumBytes,
                                       outPacketDescriptions,
                                       inStartingPacket,
                                       &ioNumPackets,
                                       outBuffer);
```
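A note on the `144 * bitRate / sampleRate` estimate used above: for MPEG-1 Layer III, each frame carries 1152 samples and sizes are measured in bytes, which yields the factor 144 (1152 / 8). A tiny self-contained check of the arithmetic:

```c
#include <stdint.h>

/* Approximate MPEG-1 Layer III (MP3) frame size in bytes, ignoring padding:
   1152 samples per frame / 8 bits per byte = 144. */
static uint32_t ApproximateMP3PacketSize(uint32_t bitRate, uint32_t sampleRate)
{
    return 144 * bitRate / sampleRate;
}
```

For example, a 128 kbps stream at 44.1 kHz works out to roughly 417 bytes per packet, which is why AudioFileReadPacketData gets by with this much smaller allocation than the maximum packet size.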
Seek

The approach to seeking is the same as the one discussed earlier for AudioFileStream; the difference is that AudioFile has no way to correct the seek offset or the seek time:

- With AudioFileReadBytes, you need to compute approximateSeekOffset yourself;

- With AudioFileReadPacketData or AudioFileReadPackets, you need to compute seekToPacket yourself.

For how to compute approximateSeekOffset and seekToPacket, see the third article of this series.
Closing the AudioFile

When you are done with the AudioFile, close it by calling AudioFileClose.
```c
extern OSStatus AudioFileClose (AudioFileID inAudioFile);
```
Summary

This article introduced the audio-reading functionality of AudioFile. To sum up:
- AudioFile provides two Open methods; choose the one that fits your use scenario:

  - AudioFileOpenURL is used to read local files;

  - AudioFileOpenWithCallbacks has a wider range of uses than the former; its AudioFile_ReadProc callback is invoked synchronously both by the Open method itself and by the Read methods;

- You must make sure the audio file's format information is readable before opening the AudioFile. AudioFile cannot be used on its own to read an audio stream; it must be combined with AudioFileStream for stream reading (use AudioFileStream to determine that the file's format information is readable, then call the Open method);

- When reading format information with AudioFileGetProperty, determine whether the property in question requires a prior call to AudioFileGetPropertyInfo to obtain its data size before reading it;

- Choose the audio-reading method according to the use scenario; the variables that must be computed for seeking also differ between reading methods;

- When you are done with the AudioFile, close it by calling AudioFileClose;
Sample code

No demo of reading local files with AudioFile is provided here. For the use of AudioFile in stream playback, I recommend reading DOUAudioStreamer, Douban's open-source player code.
Coming next

The next article describes how to use AudioQueue.