Recently, while reading some Android audio code, I came to a new understanding of getMinBufferSize, and the code that used to puzzle me finally made sense.
I would also like to thank ldh_123456 for his reply, which helped me understand this.
I won't go through the detailed call flow here. Simply put, once you understand the following few lines of code, everything falls into place.
// Ensure that buffer depth covers at least audio hardware latency
// The English comment above describes what the line below does. afFrameCount is the size of the hardware buffer, in frames.
// Why frames? The sample rate is the number of sample points per second; one sample point (across all channels) is a frame, and its size is: number of channels × sample depth.
// For example, for stereo data with a 16-bit sample depth, the frame size is 2 × 2 = 4 bytes.
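// (A quick sanity check with hypothetical numbers: at a 44100 Hz sample rate, one second of stereo 16-bit audio is 44100 frames × 4 bytes = 176400 bytes.)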
// afSampleRate is the actual sample rate used by the hardware during playback.
// With those definitions, the meaning of the following line of code becomes clear.
// afFrameCount / afSampleRate is how long one hardware buffer takes to play. The unit is frames / (frames/s), that is, seconds.
// afLatency is the latency required by the hardware, in ms. So to make afFrameCount / afSampleRate consistent with the unit of afLatency, it is multiplied by 1000.
// (1000 * afFrameCount) / afSampleRate would be much easier to understand written as 1000 × (afFrameCount / afSampleRate); multiplying first just avoids truncating the intermediate result in integer arithmetic.
// The result of this line is the minimum number of buffers the software layer must create to cover the hardware latency. Note that only the count is calculated here; the size of each buffer is not involved yet.
uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
// The next line simply guarantees at least two buffers at the software layer.
if (minBufCount < 2) minBufCount = 2;
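// A worked example with made-up values: suppose afFrameCount = 1024, afSampleRate = 44100 and afLatency = 75 ms.
// One hardware buffer then lasts (1000 * 1024) / 44100 ≈ 23 ms, so minBufCount = 75 / 23 = 3 (integer division).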
// The number of buffers is calculated above; the size of each buffer is calculated below.
// How big should each buffer be? It should correspond to the hardware buffer, so that one software buffer's worth of data fills exactly one hardware buffer.
// In that case, can the software buffer simply be the same size as the hardware buffer, i.e. afFrameCount frames?
// Of course it is not that simple, because sample rate conversion happens in between. The hardware sample rate is fixed (generally speaking), while the sample rates of the music being played vary, so sample rate conversion is needed.
// The software buffer holds the data before conversion; the hardware buffer holds the converted data.
// For a software buffer and a hardware buffer to represent the same playback duration, you need: (single software bufferSize / sampleRate) = (afFrameCount / afSampleRate).
// That is, a single software bufferSize = (afFrameCount * sampleRate) / afSampleRate.
// So the total software buffer size (in frames) is: minBufCount * (afFrameCount * sampleRate) / afSampleRate.
// Did Google intentionally write it in such a confusing way???
*frameCount = (sampleRate == 0) ? afFrameCount * minBufCount :
        afFrameCount * minBufCount * sampleRate / afSampleRate;
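// Continuing the made-up example: playing 48000 Hz content on the 44100 Hz hardware gives
// *frameCount = 1024 * 3 * 48000 / 44100 = 3343 frames (integer division).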
// Knowing the buffer size in frames and the definition of a frame, the buffer size in bytes is not hard to work out.
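To make that last step concrete, here is a minimal sketch of my own (not the actual framework code) for converting the frame count into bytes; framesToBytes and its parameters are hypothetical names:

#include <cstdint>

// Sketch only: one frame = channelCount * bytesPerSample bytes,
// so the minimum buffer size in bytes is frameCount * frame size.
static uint32_t framesToBytes(uint32_t frameCount,
                              uint32_t channelCount,   // e.g. 2 for stereo
                              uint32_t bytesPerSample) // e.g. 2 for 16-bit PCM
{
    return frameCount * channelCount * bytesPerSample;
}

// Continuing the example above: framesToBytes(3343, 2, 2) == 13372 bytes.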