Audio Units are created when a Component with the appropriate type, subType and manufacturer ID is opened. In the underlying implementation this creates a C++ object - the open component calls can be viewed as the basic allocation of the unit's resources.
In the reciprocal operation, the close component calls are expected to delete and remove any resources that were in use by the Audio Unit. In the underlying implementation this deletes the C++ object.
When an Audio Unit is created (i.e. the Component is opened), the underlying code is expected to do only a minimal amount of work - as little as possible, in fact. There are a number of properties of a unit that a host application may query before deciding to use the unit, so the act of opening an Audio Unit is expected to be a relatively inexpensive operation.
Contents

    Initialization
    Resetting State
    Output Units - Start and Stop State
    IDs, Scope and Elements - Properties and Parameters

Functions

    AudioUnitInitialize - Initialize an Audio Unit.
    AudioUnitUninitialize - Uninitialize an Audio Unit.
    AudioUnitReset - Reset the state of an Audio Unit.
    AudioOutputUnitStart - Start an output unit.
    AudioOutputUnitStop - Stop an output unit.

Defined Types

    AudioUnitElement

Enumerations

    AudioUnitScope
In order to discriminate between an opened state and a state where the Audio Unit can be expected to do work, there are additional functions:
ComponentResult AudioUnitInitialize( AudioUnit ci );

After opening an Audio Unit component, it must be initialized with this function.
Initialization of an Audio Unit can be an expensive operation, as it can involve the acquisition of assets (e.g. a sound bank for a MusicDevice), allocation of memory buffers required for the processing involved within the unit, and so forth. Once a unit is initialized, it is in a state in which it can be expected to do work.
Some properties can only be queried and set when an Audio Unit is initialized, though many can and should be set before the unit is initialized.
ComponentResult AudioUnitUninitialize( AudioUnit ci );

This function may be used to return an Audio Unit to its uninitialized state, causing it to release resources without closing the component. AudioUnitInitialize must be called again before the unit can be expected to do work again.
Uninitialization should release as many of the resources that have been acquired in initialization as possible.
ComponentResult AudioUnitReset( AudioUnit ci, AudioUnitScope inScope, AudioUnitElement inElement );

This is typically used when an Audio Unit has been initialized, but the hosting program wants to restore the unit to its initialized state without going through the process of releasing resources and re-acquiring them. Reset can be applied to specific scopes of the unit as appropriate. This call is not expected to reset the values of parameters to an initial state, but will, for example, clear the state of a filter, clear delay lines, and so forth. Normally inScope should be kAudioUnitScope_Global and inElement should be 0.
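As a sketch of the lifecycle described above, the following uses the Component Manager calls of this API generation; the choice of the default output unit is illustrative only, and error handling is minimal.

```c
#include <AudioUnit/AudioUnit.h>

/* Sketch of the Audio Unit lifecycle: find and open the component
 * (cheap), initialize it (potentially expensive), reset it, then
 * tear everything down in the reverse order. */
static OSStatus RunAudioUnitLifecycle(void)
{
    ComponentDescription desc = {
        kAudioUnitType_Output,           /* type                         */
        kAudioUnitSubType_DefaultOutput, /* subType (illustrative)       */
        kAudioUnitManufacturer_Apple,    /* manufacturer ID              */
        0, 0
    };

    Component comp = FindNextComponent(NULL, &desc);
    if (comp == NULL) return -1;

    AudioUnit unit;
    OSStatus err = OpenAComponent(comp, &unit); /* create: minimal work */
    if (err != noErr) return err;

    err = AudioUnitInitialize(unit);            /* acquire resources    */
    if (err == noErr) {
        /* ... set properties, render, etc. ... */

        /* Restore render state (clear delay lines, filter state)
         * without releasing and re-acquiring resources. */
        AudioUnitReset(unit, kAudioUnitScope_Global, 0);

        AudioUnitUninitialize(unit);            /* release resources    */
    }
    CloseComponent(unit);                       /* destroy the unit     */
    return err;
}
```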
Output Audio Units implement two additional functions: AudioOutputUnitStart and AudioOutputUnitStop. These functions can be used, for example, to start an AudioDevice for one of the device output units. In this case, the IOProc of the device will be started, which will in turn call the Render function of the output unit, which in turn obtains input data from its attached callbacks or connections. The output unit is also used as the sole head of an AUGraph, where starting the graph will call the start method of its output unit.
The output units support a kAudioUnitProperty_IsRunning property that, when queried, returns the running status of the unit. This property can also generate notifications when the unit is started or stopped.
Calls to all of these functions are not cumulative or reference counted; calling them has the expected immediate effect, where units are started, stopped, initialized and so forth. Once an Audio Unit is closed, any subsequent call that uses the Audio Unit identifier will return a badComponentInstance result.
ComponentResult AudioOutputUnitStart( AudioUnit ci );
ComponentResult AudioOutputUnitStop( AudioUnit ci );
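Starting an already-initialized output unit and polling its running status might be sketched as follows; the unit passed in is assumed to have been opened and initialized by the host.

```c
#include <AudioUnit/AudioUnit.h>

/* Start an output unit, check its running state via the
 * kAudioUnitProperty_IsRunning property, then stop it.
 * 'outputUnit' is assumed to be opened and initialized. */
static OSStatus StartAndStop(AudioUnit outputUnit)
{
    OSStatus err = AudioOutputUnitStart(outputUnit);
    if (err != noErr) return err;

    UInt32 isRunning = 0;
    UInt32 size = sizeof(isRunning);
    err = AudioUnitGetProperty(outputUnit,
                               kAudioUnitProperty_IsRunning,
                               kAudioUnitScope_Global, 0,
                               &isRunning, &size);
    /* isRunning is nonzero while the device's IOProc is active. */

    return AudioOutputUnitStop(outputUnit);
}
```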
Whilst the above relate to a more generic concept of the Audio Unit's state, there are also two other ways that the state of an Audio Unit can be managed.
Properties are defined to represent those states that establish a particular unit's operational state. For instance, an effect unit will require some input data on which it can operate. An Audio Unit can receive input data from one of two sources: an AURenderCallback, where the host has supplied a callback function, or a connection, where the Audio Unit will receive input data from another Audio Unit. These input sources are established through the Audio Unit property mechanism.
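For example, a host-supplied input callback can be attached via the kAudioUnitProperty_SetRenderCallback property; the callback body below is a placeholder that renders silence, and the choice of input element 0 is illustrative.

```c
#include <AudioUnit/AudioUnit.h>
#include <string.h>

/* A host-supplied input callback; this placeholder fills the buffers
 * with silence. A real host would copy or generate audio here. */
static OSStatus MyInputProc(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
        memset(ioData->mBuffers[i].mData, 0,
               ioData->mBuffers[i].mDataByteSize);
    return noErr;
}

/* Attach the callback as the input source for element 0 of 'unit'. */
static OSStatus AttachInput(AudioUnit unit)
{
    AURenderCallbackStruct input = { MyInputProc, NULL };
    return AudioUnitSetProperty(unit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input, 0,
                                &input, sizeof(input));
}
```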
Parameters represent state that directly affects the rendering activity of an Audio Unit. For instance, the delay time of an Audio Unit that is a delay effect can be changed as often as desired, and this directly affects the results of the render processing that the unit applies to its source data.
How properties and parameters should be applied to an Audio Unit is described by three numbers: an ID, a scope and an element. Property IDs are defined in AudioUnitProperties.h. Parameter IDs are found through one of two mechanisms: constants published in AudioUnitParameters.h, or the unit's kAudioUnitProperty_ParameterList property, which returns the parameter IDs that the unit supports. The host can then use the kAudioUnitProperty_ParameterInfo property to retrieve specific information about a given parameter ID.
A parameter is applied within a scope (Global, Input, Output or Group) that is applicable to that class of Audio Unit. For example, a Mixer unit will generally have parameters, such as volume, that are defined in either the Input or Output scope. A mixer may take several inputs and mix them to one output, so in this case the host will want to apply a different volume (or pan) to each of the inputs. The volume parameter would be specified with:
AudioUnitSetParameter( myMixerUnit, kStereoMixerParam_Volume, kAudioUnitScope_Input, 2, 0.5, mySampleOffset );

Here, we are applying a volume of 0.5 to the input that is on elementID == 2. The mySampleOffset is a sample-frame offset into the next buffer that the mixer will render, describing when this parameter value should be applied.
AudioUnitSetParameter( myMixerUnit, kStereoMixerParam_Volume, kAudioUnitScope_Output, 0, 0.8, 0 );

Here, we are applying the same parameter (volume), but this time in the output scope, which in this case represents the entire mixed output of the mixer's inputs.
In this case (and in many cases with Audio Units), the elementID can be considered like a bus on a mixing board. It generally represents a bus of audio data that is being passed around together, and is expected to be acted upon at the same time with a single call to the Audio Unit's render method. This should not be confused with channels: a bus can, and often does, contain more than a single channel of audio data.
Many properties on an Audio Unit are defined to act in the global scope, and the elementID for the global scope is always zero.
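A connection between two units illustrates elements as buses: an output element of the source unit feeds a particular input element of the destination. A sketch using the kAudioUnitProperty_MakeConnection property (the AudioUnitConnection structure is declared in AudioUnitProperties.h); the bus indices are illustrative.

```c
#include <AudioUnit/AudioUnit.h>

/* Connect output bus 0 of 'sourceUnit' to input bus 2 of 'mixerUnit'.
 * The connection is set as a property on the destination unit, in the
 * input scope, at the element (bus) that will receive the data. */
static OSStatus ConnectUnits(AudioUnit sourceUnit, AudioUnit mixerUnit)
{
    AudioUnitConnection conn;
    conn.sourceAudioUnit    = sourceUnit;
    conn.sourceOutputNumber = 0;   /* source's output element (bus)      */
    conn.destInputNumber    = 2;   /* destination's input element (bus)  */

    return AudioUnitSetProperty(mixerUnit,
                                kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input,
                                conn.destInputNumber,
                                &conn, sizeof(conn));
}
```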
typedef UInt32 AudioUnitScope;

enum {
    kAudioUnitScope_Global = 0,
    kAudioUnitScope_Input  = 1,
    kAudioUnitScope_Output = 2,
    kAudioUnitScope_Group  = 3
};

These constants are used to specify the scope of a parameter or property. See the discussion of AudioUnitElement, below.
typedef UInt32 AudioUnitElement;

An AudioUnitElement specifies an element within a scope. In the input and output scopes, the element corresponds to an I/O bus of the unit. For example, in an effect unit, input and output elements 0 can be thought of as the main input and output, and any additional elements can be thought of as "side-chain" inputs or outputs. A mixer unit's multiple inputs are represented as elements in the input scope. The global scope always contains only one element, with index 0. Group scope is used in more specialized situations specific to the Audio Unit.
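The number of elements in a given scope can be discovered with the kAudioUnitProperty_ElementCount property (historically also named kAudioUnitProperty_BusCount); a sketch:

```c
#include <AudioUnit/AudioUnit.h>

/* Query how many input elements (buses) a unit has; elements are
 * numbered 0 .. count-1 within the scope. */
static OSStatus CountInputElements(AudioUnit unit, UInt32 *outCount)
{
    UInt32 size = sizeof(*outCount);
    return AudioUnitGetProperty(unit,
                                kAudioUnitProperty_ElementCount,
                                kAudioUnitScope_Input, 0,
                                outCount, &size);
}
```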