The set of all voices, with their contained effects and their interconnections, is referred to as the audio processing graph. The graph takes a set of audio streams from the client as input, processes them, and delivers the final result to an audio device. All audio processing takes place in a separate thread with a periodicity defined by the graph's quantum (currently 10 milliseconds on Microsoft Windows, and 5 1/3 milliseconds on Xbox 360). Every quantum, the thread wakes up and processes one quantum's worth of audio data through the entire graph. For an example of building a basic audio graph, see How to: Build a Basic Audio Processing Graph.
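The amount of audio moved per pass follows directly from the quantum and the sample rate. The helper below is purely illustrative arithmetic, not part of the XAudio2 API; the quantum lengths are the ones cited above (5 1/3 ms is expressed as 16/3 ms to keep the math in integers):

```cpp
#include <cstdint>

// Frames of audio each voice must produce per processing pass, for a quantum
// of (quantumNumeratorMs / quantumDenominator) milliseconds.
constexpr uint32_t FramesPerQuantum(uint32_t sampleRateHz,
                                    uint32_t quantumNumeratorMs,
                                    uint32_t quantumDenominator)
{
    return sampleRateHz * quantumNumeratorMs / (1000 * quantumDenominator);
}

// At 48 kHz, a 10 ms quantum (Windows) moves 480 frames per channel per pass.
static_assert(FramesPerQuantum(48000, 10, 1) == 480, "Windows quantum");
// The Xbox 360's 5 1/3 ms quantum (16/3 ms) at 48 kHz moves 256 frames.
static_assert(FramesPerQuantum(48000, 16, 3) == 256, "Xbox 360 quantum");
```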
A Simple Audio Graph
The client can control the graph's state dynamically while it is running. Control actions might include adding and removing inputs and outputs, changing the internal effects and interconnections, setting parameters on the effects, enabling and disabling parts of the graph, and so on. For an example of dynamically changing an audio graph, see How to: Dynamically Add or Remove Voices from an Audio Graph.
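A minimal graph of this kind takes only a few calls to assemble. The sketch below is a hedged outline, not the full How-to: error handling is trimmed, COM is assumed to be initialized already (CoInitializeEx), and the guard keeps the Windows-only header from breaking non-Windows builds:

```cpp
#ifdef _WIN32  // XAudio2 is a Windows-only API; guard for other build targets.
#include <xaudio2.h>

// Sketch: engine -> mastering voice -> source voice. With no explicit send
// list, the source voice sends its audio to the mastering voice by default.
HRESULT BuildMinimalGraph(IXAudio2** ppEngine,
                          IXAudio2MasteringVoice** ppMaster,
                          IXAudio2SourceVoice** ppSource,
                          const WAVEFORMATEX* pFormat)
{
    HRESULT hr = XAudio2Create(ppEngine, 0, XAUDIO2_DEFAULT_PROCESSOR);
    if (FAILED(hr)) return hr;

    // The mastering voice delivers the graph's final output to the device.
    hr = (*ppEngine)->CreateMasteringVoice(ppMaster);
    if (FAILED(hr)) return hr;

    // The source voice is the graph's input; pFormat describes the client's
    // audio stream. Adding a voice to a running graph is just another call.
    return (*ppEngine)->CreateSourceVoice(ppSource, pFormat);
}
#endif
```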
Processing the Graph
Any method call that affects any object in the graph is considered to effect a graph state change. Graph state changes include the following:
- Creating and destroying voices
- Starting or stopping voices
- Changing the destinations of a voice
- Modifying effect chains
- Enabling or disabling effects
- Setting parameters on the effects or on the built-in SRCs, filters, volumes, and mixers
Any set of graph state changes can be combined and performed as an atomic transaction. These atomic operations are known as operation sets. They are discussed in the XAudio2 Operation Sets overview.
Internal Data Representation
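A minimal sketch of such an atomic transaction, assuming two existing source voices; the operation-set ID is an arbitrary value chosen by the application (guarded because the API is Windows-only):

```cpp
#ifdef _WIN32
#include <xaudio2.h>

// Start two voices on exactly the same audio processing pass by deferring
// both state changes into one operation set.
void StartVoicesTogether(IXAudio2* pEngine,
                         IXAudio2SourceVoice* pVoiceA,
                         IXAudio2SourceVoice* pVoiceB)
{
    const UINT32 kOperationSet = 1234;  // any app-chosen tag

    // Neither Start takes effect yet; both are tagged with the same set.
    pVoiceA->Start(0, kOperationSet);
    pVoiceB->Start(0, kOperationSet);

    // All changes tagged with kOperationSet are applied atomically.
    pEngine->CommitChanges(kOperationSet);
}
#endif
```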
Audio data within the XAudio2 graph is stored and processed in 32-bit floating-point PCM form. However, the channel count and sample rate can vary within the graph. The format in which a given voice processes audio is determined by the voice type and the parameters used to create the voice.
| Voice Type | Parameters |
|---|---|
| IXAudio2SourceVoice | The channel count and sample rate of the voices to which the source voice sends audio. |
| IXAudio2SubmixVoice and IXAudio2MasteringVoice | The InputChannels and InputSampleRate arguments used to create the submix/mastering voice. |
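For example, a submix voice's processing format is pinned by those creation-time arguments. A sketch (the 2-channel, 44.1 kHz values are illustrative; guarded because the API is Windows-only):

```cpp
#ifdef _WIN32
#include <xaudio2.h>

// This submix voice will mix and process 2-channel, 44.1 kHz floating-point
// PCM, regardless of the formats of the source voices feeding it.
IXAudio2SubmixVoice* CreateStereoSubmix(IXAudio2* pEngine)
{
    IXAudio2SubmixVoice* pSubmix = nullptr;
    if (FAILED(pEngine->CreateSubmixVoice(&pSubmix,
                                          2,        // InputChannels
                                          44100)))  // InputSampleRate
        return nullptr;
    return pSubmix;
}
#endif
```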
Format Conversion
XAudio2 handles any sample rate or channel conversions that are required as audio travels from one voice to another, with the following limitations:
- All destination voices for a particular voice must be running at the same sample rate
- Effects in an effect chain can change the audio's channel count, but not its sample rate
- An effect chain's output channel count must match that of the voices to which it sends audio
- No dynamic graph change can be made that would break the rules above
On the input side, source voices can read data in any valid PCM format, or in any of the compressed formats supported by XAudio2. If the input data is compressed, it is decoded to floating-point PCM before any further processing is done.
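The input format is described with a standard WAVEFORMATEX structure. The sketch below fills one out for 16-bit mono 22.05 kHz PCM; XAudio2 converts such data to 32-bit float internally and performs sample-rate conversion to the destination voice's rate. The structure fields are standard, but the specific values are illustrative:

```cpp
#ifdef _WIN32  // WAVEFORMATEX comes from the Windows headers.
#include <windows.h>

// Describe 16-bit, mono, 22.05 kHz integer PCM input for a source voice.
WAVEFORMATEX MakeMono22kPcmFormat()
{
    WAVEFORMATEX wfx = {};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;
    wfx.nSamplesPerSec  = 22050;
    wfx.wBitsPerSample  = 16;
    // Derived fields: bytes per frame, then bytes per second.
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;
    return wfx;
}
#endif
```

A pointer to such a structure is what CreateSourceVoice takes as its format argument.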
On the output side, mastering voices can only produce PCM data. This data always satisfies the same restrictions described above for input PCM data.
Related Topics
- XAudio2 Programming Guide
- How to: Build a Basic Audio Processing Graph
- How to: Dynamically Add or Remove Voices from an Audio Graph
- How to: Use Submix Voices
- How to: Create an Effect Chain