Audio Unit Properties

Header file: AudioUnit/AudioUnitProperties.h

This section describes the different properties that apply to Audio Units. These are organized in functional groups as listed below. The AudioUnitPropertyID is listed with the struct or type that represents the property's value. These values are declared in AudioUnit/AudioUnitProperties.h.

Important - these property values are always passed by reference to both the Get and Set property calls (i.e. you pass a pointer to the type specified).


Contents
 Connection Management
 Format Negotiation
 Parameters
 Buffer Management
 Rendering Properties
 Performance Properties
 Audio Unit View and Host Properties
 Host Callbacks - Musical Time
 Audio Unit Presets and Persistence
 Internal Algorithm Configuration
 MusicDevice Properties
 AudioDeviceID Property
 OutputUnit Properties
 3D and Spatialization Properties
Callbacks
 AudioUnitGetParameterProc
 AudioUnitSetParameterProc
 AudioUnitRenderProc
 HostCallback_GetBeatAndTempo
 HostCallback_GetMusicalTimeLocation
 HostCallback_GetTransportState
Defined Types
 AudioUnitPropertyID
Structs
 AudioUnitConnection
 AURenderCallbackStruct
 AUChannelInfo
 AudioUnitMIDIControlMapping
 AudioUnitExternalBuffer
 HostCallbackInfo
 AUPreset
Constants
 kAudioUnitProperty_MakeConnection
 kAudioUnitProperty_BusCount
 kAudioUnitProperty_SetRenderCallback
 kAudioUnitProperty_SetInputCallback
 kAudioUnitProperty_StreamFormat
 kAudioUnitProperty_SampleRate
 kAudioUnitProperty_SupportedNumChannels
 kAudioUnitProperty_ParameterList
 kAudioUnitProperty_ParameterInfo
 kAudioUnitProperty_ParameterValueStrings
 kAudioUnitProperty_MIDIControlMapping
 kAudioUnitProperty_MaximumFramesPerSlice
 kAudioUnitProperty_SetExternalBuffer
 kAudioUnitProperty_Latency
 kAudioUnitProperty_TailTime
 kAudioUnitProperty_BypassEffect
 kAudioUnitProperty_LastRenderError
 kAudioUnitProperty_FastDispatch
 kAudioUnitProperty_CPULoad
 kAudioUnitProperty_RenderQuality
 kAudioUnitProperty_GetUIComponentList
 kAudioUnitProperty_ContextName
 kAudioUnitProperty_IconLocation
 kAudioUnitProperty_ElementName
 kAudioUnitProperty_HostCallbacks
 kAudioUnitProperty_ClassInfo
 kAudioUnitProperty_CurrentPreset
 kAudioUnitProperty_PresentPreset
 kAudioUnitProperty_FactoryPresets
 kAudioUnitProperty_ReverbRoomType
 kAudioUnitProperty_UsesInternalReverb
 kAudioUnitProperty_SRCAlgorithm
 kMusicDeviceProperty_InstrumentCount
 kMusicDeviceProperty_InstrumentName
 kMusicDeviceProperty_InstrumentNumber
 kMusicDeviceProperty_SoundBankFSSpec
 kMusicDeviceProperty_BankName
 kMusicDeviceProperty_GroupOutputBus
 kMusicDeviceProperty_MIDIXMLNames
 kAudioOutputUnitProperty_CurrentDevice
 kAudioOutputUnitProperty_IsRunning
 kAudioUnitProperty_SpeakerConfiguration
 kAudioUnitProperty_SpatializationAlgorithm
 kAudioUnitProperty_DopplerShift

AudioUnitPropertyID

typedef UInt32 AudioUnitPropertyID;
The constants declared in this header file are represented using this type.

Connection Management

kAudioUnitProperty_MakeConnection

Type: AudioUnitConnection

Use this property with AudioUnitSetProperty to establish a connection between the destination unit (which is the Audio Unit that you make the call on) and the source unit that is specified in the provided AudioUnitConnection struct. In AudioUnitSetProperty you specify kAudioUnitScope_Input for the AudioUnitScope parameter. The elementID is the input number upon which the connection will be made (this is also redundantly stored in the AudioUnitConnection).

AudioUnitConnection

struct AudioUnitConnection {
  AudioUnit  sourceAudioUnit;
  UInt32     sourceOutputNumber;
  UInt32     destInputNumber;
};

kAudioUnitProperty_BusCount

Type: UInt32

The scope is either kAudioUnitScope_Input or kAudioUnitScope_Output, to both get and set the number of input or output busses (elements). By default many Audio Units will create a single input or output bus (element), so this call is generally used to create additional busses. A typical example would be the interleaver or deinterleaver units, whose behaviour is determined by the number of input or output busses respectively. Other units, such as Mixer units, may have already allocated the necessary state to accept a number of inputs, so this call can be used to determine that limit.

kAudioUnitProperty_SetRenderCallback

Type: AURenderCallbackStruct

This is used with an AudioUnitSetProperty call on a V2 Audio Unit (i.e. where the Component's type is not kAudioUnitComponentType). This (and the corresponding kAudioUnitProperty_SetInputCallback for V1 Audio Units) is used to register a callback with an Audio Unit to provide audio data on the specified elementID (bus) of the input scope. When the Audio Unit calls the render callback (or input callback for V1), it provides a buffer that the callback should fill with data. When setting this property, the caller should also set the stream format property for that elementID (bus) of the input scope, to tell the Audio Unit the format of the data it will be providing (see kAudioUnitProperty_StreamFormat).

AURenderCallbackStruct

struct AURenderCallbackStruct {
  AURenderCallback  inputProc;
  void *            inputProcRefCon;
};

kAudioUnitProperty_SetInputCallback

Type: AudioUnitInputCallback

This is used with an AudioUnitSetProperty call on a V1 Audio Unit (i.e. where the Component's type is kAudioUnitComponentType).


Format Negotiation

kAudioUnitProperty_StreamFormat

Type: AudioStreamBasicDescription

Typically kAudioUnitScope_Input or kAudioUnitScope_Output are passed in for the AudioUnitScope and the bus number (zero based) is specified in the elementID. This completely specifies the format that exists on the specified scope. Some units can take this property on the kAudioUnitScope_Global, which will generally mean either (the common case) that the formats are the same on both input and output, or that the audio unit can internally process data in a different format than its in and out formats (less typical, but possible). See kAudioUnitProperty_SpeakerConfiguration for more complex rendering processes involving audio spatialization.

There is a subtlety about the usage of the format flags with the AudioStreamBasicDescription and the Audio Unit V2 format that should be discussed.

When an AudioStreamBasicDescription has the kAudioFormatFlagIsNonInterleaved flag, which is the case with the canonical format for V2 units, the AudioBufferList has a different structure and semantic. In this case, the AudioStreamBasicDescription fields describe the format of ONE of the AudioBuffers that are contained in the list, AND each AudioBuffer in the list holds a single (mono) channel of audio data. The AudioStreamBasicDescription's mChannelsPerFrame then indicates the total number of AudioBuffers that are contained within the AudioBufferList - where each buffer contains one channel. This is used primarily with the AudioUnit (and AudioConverter) representation of this list - and typically won't be found in the AudioHardware.h usage of this structure.
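The non-interleaved layout described above can be sketched in C. The struct definitions here are minimal stand-ins for the real CoreAudioTypes.h declarations, for illustration only:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-ins for the CoreAudio types (the real definitions
   live in CoreAudioTypes.h). */
typedef struct AudioBuffer {
    unsigned int mNumberChannels; /* 1 per buffer in the non-interleaved case */
    unsigned int mDataByteSize;
    void        *mData;
} AudioBuffer;

typedef struct AudioBufferList {
    unsigned int mNumberBuffers;  /* equals mChannelsPerFrame when non-interleaved */
    AudioBuffer  mBuffers[1];     /* variable-length in practice */
} AudioBufferList;

/* Allocate a buffer list of 'channels' mono buffers of 'frames' Float32
   samples each, mirroring the canonical non-interleaved V2 layout. */
AudioBufferList *AllocateNonInterleavedABL(unsigned int channels, unsigned int frames)
{
    AudioBufferList *abl = malloc(sizeof(AudioBufferList) +
                                  (channels - 1) * sizeof(AudioBuffer));
    abl->mNumberBuffers = channels;
    for (unsigned int i = 0; i < channels; ++i) {
        abl->mBuffers[i].mNumberChannels = 1; /* one mono channel per buffer */
        abl->mBuffers[i].mDataByteSize   = frames * sizeof(float);
        abl->mBuffers[i].mData           = calloc(frames, sizeof(float));
    }
    return abl;
}
```

Note that for a stereo non-interleaved stream, mChannelsPerFrame is 2 and the list therefore carries two mono buffers, not one two-channel buffer.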

kAudioUnitProperty_SampleRate

Type: Float64

This is a convenience property covering part of the complete kAudioUnitProperty_StreamFormat above. It is particularly useful in those cases where an application wishes to track the sample rate of an Audio Unit - for example, an AudioDeviceID unit whose sample rate the user may change independently of the application. It can of course also be used to set the sample rate of an input or output element.

kAudioUnitProperty_SupportedNumChannels

Type: AUChannelInfo

If not implemented, the Audio Unit may be agnostic about the number of channels, and only a format setting can validate whether the channels are accepted. Generally, this will mean (particularly with Effect Units) that any number of channels is usable as long as there is the same number of channels on both the input and output scopes. Other units can accept a mismatch in the channelization of their busses; this property is provided to allow those units to publish the allowable channel configurations that can be accepted on input and output.

Returns pairs of numbers of channels (e.g. 1 in / 1 out, 1 in / 2 out, 2 in / 2 out, etc.). If a value of -1 is seen, then this can be interpreted as "any" number of channels for that scope. So, the default setting for an Effect Unit would be -1/-1, and for these types of units it is not expected that they publish this property if this value (same number of channels in and out, with no restriction on the number of channels) is supported.

AUChannelInfo

struct AUChannelInfo {
  SInt16  inChannels;
  SInt16  outChannels;
};
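A host's validation of a channel configuration against this property can be sketched as follows. The struct stand-in mirrors AUChannelInfo above, and the helper function ConfigIsSupported is hypothetical, showing only the "-1 means any" convention described in the text:

```c
#include <assert.h>

/* Stand-in for AUChannelInfo (SInt16 fields in the real header). */
typedef struct AUChannelInfo {
    short inChannels;
    short outChannels;
} AUChannelInfo;

/* Does a published list of channel pairs accept a given in/out
   configuration? A value of -1 is a wildcard meaning "any" number
   of channels on that side. */
int ConfigIsSupported(const AUChannelInfo *info, int count, int ins, int outs)
{
    for (int i = 0; i < count; ++i) {
        int inOK  = (info[i].inChannels  == -1) || (info[i].inChannels  == ins);
        int outOK = (info[i].outChannels == -1) || (info[i].outChannels == outs);
        if (inOK && outOK)
            return 1;
    }
    return 0;
}
```

With the default Effect Unit setting of -1/-1, every matched configuration is accepted, which is why such units need not publish the property at all.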

Parameters

kAudioUnitProperty_ParameterList

Type: AudioUnitParameterID array

The caller specifies the AudioUnitScope to be queried for its parameters. Most effect units will define parameters in the global scope (as the unit itself applies the parameters to the work it does). A mixer unit will typically define parameters in both the input (apply different volumes to each input) and output scopes (the overall volume of the mix). The call will return a list of AudioUnitParameterIDs, which can then be used with kAudioUnitProperty_ParameterInfo to obtain information about the parameter.

Some parameters' ranges may change depending on characteristics of the formats the Audio Unit is operating in. For instance, a common case is a Hz parameter in an effect, where the real limitation (maximum value) of this parameter will vary based on the sample rate that the unit is operating at. In this case, if the sample rate of an audio unit is changed, a notification can be sent for this property change, and the application can then re-present the new maximum value of the Hz parameter at this new sample rate.

kAudioUnitProperty_ParameterInfo

Type: AudioUnitParameterInfo

The caller passes in the desired AudioUnitScope, AudioUnitElement for the AudioUnitParameterID in the AudioUnitGetProperty call to obtain information about a particular parameter.

kAudioUnitProperty_ParameterValueStrings

Type: CFArrayRef

The caller passes in the desired AudioUnitScope, AudioUnitElement for the AudioUnitParameterID and receives an array of CFStrings corresponding to the discrete integral values of the parameter. Only valid for parameters which have a unit of kAudioUnitParameterUnit_Indexed. The caller is responsible for releasing the array. (Releasing the array will in turn automatically release the contained CFStrings.)

kAudioUnitProperty_MIDIControlMapping

Type: AudioUnitMIDIControlMapping

The caller passes in the global scope; the elementID is ignored. It returns an array of AudioUnitMIDIControlMapping structs, specifying a default mapping of MIDI controls and/or NRPNs to Audio Unit scopes/elements/parameters. For more detailed information on these properties see the section on Parameter Types and Information.

AudioUnitMIDIControlMapping

struct AudioUnitMIDIControlMapping {
  UInt16                midiNRPN;
  UInt8                 midiControl;
  UInt8                 scope;
  AudioUnitElement      element;
  AudioUnitParameterID  parameter;
};

Fields

midiNRPN
0xFFFF if none, MSB, LSB are in low 14 bits
midiControl
0xFF if none, must not use controls:
scope
element
parameter

Buffer Management

kAudioUnitProperty_MaximumFramesPerSlice

Type: UInt32

This property describes the maximum number of frames an AudioUnit will be asked to render. Where possible, it is recommended that this is also the number of frames an AudioUnit is always asked to render. When asking for input, an AudioUnit also cannot ask for more input than its maximum number of frames. If it needs more input than allowed by its maximum frames setting, it should slice its input request into multiple requests. (This can be the case with both Converter AUs and Offline AUs, as they can process more or less input for any given output request - for instance, a sample rate conversion.)
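The slicing behaviour described above can be sketched as a simple loop. This is an illustrative fragment, not Apple code; the comment marks where a real unit would invoke its input's render callback:

```c
#include <assert.h>

/* Split a request for 'totalFrames' of input into chunks no larger than
   maxFramesPerSlice, as an AU needing more input than its maximum frames
   setting would have to do. Returns the number of pull requests issued. */
unsigned int PullInputInSlices(unsigned int totalFrames, unsigned int maxFramesPerSlice)
{
    unsigned int requests = 0;
    while (totalFrames > 0) {
        unsigned int thisSlice = totalFrames < maxFramesPerSlice
                               ? totalFrames : maxFramesPerSlice;
        /* ...a real AU would call its input's render callback here
           for 'thisSlice' frames... */
        totalFrames -= thisSlice;
        ++requests;
    }
    return requests;
}
```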

kAudioUnitProperty_SetExternalBuffer

Type: AudioUnitExternalBuffer

A new property for the V2 Audio Unit; it should be set on the global scope (### This may be incorrect -- see the SDK source for a definitive answer ###). Sophisticated hosts of audio units can use this property to better manage the memory usage and performance of a graph of audio units, for instance allowing for the reuse of buffers in a chain.

If this property is set, an Audio Unit can and should use the supplied buffer when pulling its inputs (a V2 AU MUST provide a buffer when calling the render callback on its inputs; it would use this buffer instead of an internally created one).

AudioUnitExternalBuffer

struct AudioUnitExternalBuffer {
  Byte *  buffer;
  UInt32  size;
};

Fields

buffer
size

Rendering Properties

kAudioUnitProperty_Latency

Type: Float64

The input to output latency in seconds. This figure should be as accurate as possible, as it represents how long it takes for a sample presented at the input to appear at the output of an audio unit.

This property should report this value as accurately as a single value can represent it. The unit is also free to change this value if any changes a user makes to its parameters would affect the overall processing latency of the unit.

For example, a look ahead limiter would require a certain number of samples of input before it can represent those samples in the output (as it has to be able to estimate the sample values and its slope to determine how to limit the signal without clipping it), and thus the output of a particular input sample would occur some time after the input sample was received.

For a host that is doing sample accurate processing of two or more audio inputs (or synchronizing its audio output to some other timeline), it is extremely important that it can determine the processing latency that might be introduced by a particular unit. If a host is passing audio through a number of audio units, then the host can query each audio unit in succession, adding the reported latency values to arrive at the overall latency.

This overall latency can then be used by the host in the following manner. The host would feed the number of samples that correspond to the latency amount through the processing chain, getting that number of samples on the output side. The host then throws those output samples away (they will typically have sample values of zero anyway). When feeding the audio units input in this scenario, the input is the actual input that you are going to want to render. After having thrown the latency number of samples away on the output, the next sample that you get back will be the first valid sample that has been processed from the input.

When pre-rolling a unit (or set of units) in this manner, AudioUnitReset should be called to clear any processing state that an audio unit may have retained from its previous render operations.

This property is often used in conjunction with the kAudioUnitProperty_TailTime property, where there is more discussion of these issues.

kAudioUnitProperty_TailTime

Type: Float64

This value represents any additional input (or output that must be captured) to pass a signal completely through an effect. It can be thought of as the length of time taken for the output signal to die down to silence (defined here as less than -120dB of full scale). For example, a reverb or delay effect would take a certain amount of time (say 2 seconds) for its output to go to silence after its input signal goes to silence. Many effects use filters having an impulse response which, similar to a reverb, will introduce an output signal that extends past the last input signal.

An Audio Unit is expected to publish its tail time, even in the case of a filter where the tail is a minimal value (say around 1 msec).

How does a host then use this property?

Let us take an example of a host that allows for arbitrary start of playback within a sound file. The host wants to be able to produce the same output for that first sample of playback from both this "in the middle" start and from starting playback at the beginning of the file.

When starting "in the middle" the host needs to determine two things for a given signal chain. Let's take the example of a delay effect. The output of the first samples would contain the delays from the previous samples (which in this case aren't actually being played). So, the host can ask the delay unit for its tail (let's say it's 2 seconds). Then, it can pre-roll the preceding 2 seconds of input data, which the unit will of course mix into the first 2 seconds of the output that it produces. This is the normal result when you start playback from the beginning.

The host will throw away the output that gets generated when it prepares the audio unit in this manner (because that is output that actually precedes the output that it wants). But it has now primed the audio unit, so when it pulls for the data it does want, the preceding 2 seconds of data are in the delay (in this case), and will thus be present in the output.

When pre-rolling a unit (or set of units) in this manner, AudioUnitReset should be called to clear any processing state that an audio unit may have retained from its previous render operations.

This property's value will also be used at the end of rendering a piece of audio. Let's say that you are applying a reverb effect to some source and outputting a file with that processing applied. Obviously, when the input audio data is finished you still want to render some additional audio to capture the tail of the effect (e.g. the reverb dying down to silence). The tail property tells you how much additional output you'd need to get (in this case the input would be silence) in order to push the entire audible effect of the unit through to the output.

This property is also closely related to kAudioUnitProperty_Latency and typically a host that is dealing with these issues will need to know both the latency an effect introduces as well as its tail.

An important difference between these is that the tail can be an approximate estimate and should be biased on the conservative side. The latency, however, should be as accurate as possible, because of the offset between input and output placement of a sample that the latency property indicates. Note that effects specifically designed to introduce delays (such as a reverb or delay audio unit) should not report that delay as latency, since it is part of the desired effect.

Taken together, the latency and tail properties enable a host to determine how much priming an audio unit requires (and, for that matter, an entire processing graph of several audio units), and how much additional output should be captured after the end of the input is reached, to accurately preserve the entire contents of the rendering.

kAudioUnitProperty_BypassEffect

Type: UInt32

Can be used to have an effect unit not apply its processing on its input, but just pass it through to the output without processing it.

kAudioUnitProperty_LastRenderError

Type: OSStatus

This is a read only property that returns the last error code returned by AudioUnitRender, and clears it. Rather than polling this property, it's best that interested clients install a property listener on it.


Performance Properties

kAudioUnitProperty_FastDispatch

Type: function pointer

The inElement value is the component selector that describes to the unit what the function pointer corresponds to. Dispatching through the Component API calls has some overhead that can and should be avoided in the rendering and parameter setting calls where a real-time context is normally required.

The inComponentStorage argument that is passed to each of these callbacks when user code calls them is not the AudioUnit (the ComponentInstance) itself. It is the value returned by the following call:

	myCompStorage = GetComponentInstanceStorage (anAudioUnit);

The following fast dispatch function pointers are declared in AUComponent.h

AudioUnitGetParameterProc

typedef ComponentResult (*AudioUnitGetParameterProc)(
  void *                inComponentStorage,
  AudioUnitParameterID  inID,
  AudioUnitScope        inScope,
  AudioUnitElement      inElement,
  Float32 *             outValue);


AudioUnitSetParameterProc

typedef ComponentResult (*AudioUnitSetParameterProc)(
  void *                inComponentStorage,
  AudioUnitParameterID  inID,
  AudioUnitScope        inScope,
  AudioUnitElement      inElement,
  Float32               inValue,
  UInt32                inBufferOffsetInFrames);


AudioUnitRenderProc

typedef ComponentResult (*AudioUnitRenderProc)(
  void *                        inComponentStorage,
  AudioUnitRenderActionFlags *  ioActionFlags,
  const AudioTimeStamp *        inTimeStamp,
  UInt32                        inOutputBusNumber,
  UInt32                        inNumberFrames,
  AudioBufferList *             ioData);


kAudioUnitProperty_CPULoad

Type: Float32

Used to specify the desired maximum CPU load to which the Audio Unit should limit its rendering. The property is specified with a range of 0 to 1. A value of zero means no limitation, and represents a way to turn this limitation off - desirable, for instance, when doing off-line rendering.

kAudioUnitProperty_RenderQuality

Type: UInt32

Provides a quality range (0-127) that an audio unit can use to decide how high a quality it uses when doing its rendering (which generally trades off the amount of CPU that is consumed). Currently both the DLS Synth and the Reverb use this to scale back the quality of their rendering. Generally the kRenderQuality enum settings should be used, however some units may respond to intermediate values. In those that don't, the quality is rounded to the nearest value as represented by this enum.


Audio Unit View and Host Properties

kAudioUnitProperty_GetUIComponentList

Type: ComponentDescription array

Returns an array of ComponentDescriptions specifying AudioUnitCarbonView components designed to present custom user interfaces for editing this Audio Unit (as distinct from the generic user interface supplied by Apple).

kAudioUnitProperty_ContextName

Type: CFStringRef

Allows an application to provide a name that can be presented to the user that specifies the context of a specific unit. Whilst the string supplied by the host is by and large dependent on the Host App and the context within which a given AU is being used, in general it is recommended that this string not contain the name of the Audio Unit.

Host applications typically provide context information in how they present an Audio Unit to the user, and thus should not rely on the AU to incorporate contextual information in its normal UI. For example, a host might set the title of the AU's View Window:

- - - - - - - My Synth Track::(4) AUMatrixReverb  - - - - - - -
(Where My Synth Track is the user-supplied name of the track, (4) is the index of the effect in that track, and AUMatrixReverb is the name of the AU.)

In this case, the host might provide a context string to the Audio Unit such as: "My Synth Track::Effect (4)". An Audio Unit can use this string in situations where it needs to provide some visual feedback to the user based on the state of a particular instance of itself. It is recommended that the AU provide some additional explanation around the context string:

- - - - - - -
AU Matrix Reverb
Being used within context: My Synth Track::Effect (4)
Can't find resource file...
- - - - - - -

kAudioUnitProperty_IconLocation

Type: CFURLRef

Allows an AudioUnit to specify an associated icon. Returns a CFURLRef containing the full Posix-style path of the icon file. The caller is responsible for releasing the CFURLRef, and for instantiating the image.

To facilitate support for these icons in Carbon as well as Cocoa UI, this property is required to point to a ".icns" file. These files can be created using /Developer/Applications/Utilities/Icon\ Composer.app.

kAudioUnitProperty_ElementName

Type: CFStringRef

Allows an AudioUnit to provide names for the individual elements that are contained in a scope. A typical usage is to provide names for the input and output elements (or buses).

For example, Apple's DLSMusicDevice implements this property for the two outputs it provides. The names of each of these two elements are different based on whether the AU's internal reverb is on or off. If on, then the first output is "Stereo Mix", the second "Unused". If off, the first output is "Wet Mix" and the second is "Dry Mix".

Typically this property will be read only, but in some cases an AU might provide the ability to set the name to any arbitrary string.


Host Callbacks - Musical Time

kAudioUnitProperty_HostCallbacks

Type: HostCallbackInfo

Used by the host to provide callbacks that an Audio Unit can use to obtain information from the host.

Currently, this property provides for two callbacks that are based around the concept of musical time. That is, when an Audio Unit is asked to render, it can query the host for information about the host's musical time, and then use that information to match its DSP to a musical context. For example, a delay unit can time the delays to the beats, based on the song's time signature as well as tracking tempo changes.

How does this work?

When the host opens an Audio Unit and connects it, it should also set this property.

    HostCallbackInfo info;
    memset (&info, 0, sizeof (HostCallbackInfo));
    info.hostUserData = this;
    info.beatAndTempoProc = DispatchGetBeatAndTempo;
		
        //ignore result of this - don't care if the property isn't supported
    AudioUnitSetProperty (mAudioUnit, 
                        kAudioUnitProperty_HostCallbacks, 
                        kAudioUnitScope_Global, 
                        0, //elementID 
                        &info,
                        sizeof (HostCallbackInfo));

In this example, the host is only supporting the HostCallback_GetBeatAndTempo callback, so in that case this is the only information that the host can provide to the Audio Unit. Any unsupported callbacks from a host should of course be set to NULL.

Once the host has set this property, the Audio Unit now has callbacks that it can make to the host.

The "info.hostUserData = this" line shows the host setting this user data field to the value of the this pointer of a C++ object. The Audio Unit is required to always pass this hostUserData field back to the host when it makes the callback.

In this case, how does the host implement this callback?

OSStatus AUNodeSequenceDest::DispatchGetBeatAndTempo (
                                     void*                inHostUserData,
                                     Float64*             outCurrentBeat, 
                                     Float64*             outCurrentTempo)
{
    AUNodeSequenceDest* This = (AUNodeSequenceDest*)inHostUserData;
    if (This)
        return This->GetBeatAndTempo (outCurrentBeat, outCurrentTempo);
    
    return paramErr;
}

When the Audio Unit calls the beatAndTempoProc it will call this dispatch. The inHostUserData is treated as it was passed in, i.e. as the this pointer of the object in question. The dispatch call here is defined as a static (or class) member function, which then dispatches to the instance method GetBeatAndTempo.
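The same registration-and-dispatch pattern can be modelled in plain C with stand-in types (the ToyHost struct and helper names below are hypothetical, standing in for the C++ host object in the example above):

```c
#include <assert.h>
#include <stddef.h>

typedef int Status;                 /* stand-in for OSStatus */
enum { kNoErr = 0, kParamErr = -50 };

/* Stand-ins mirroring the HostCallbackInfo pattern described above. */
typedef Status (*GetBeatAndTempoProc)(void *inHostUserData,
                                      double *outCurrentBeat,
                                      double *outCurrentTempo);
typedef struct {
    void               *hostUserData;
    GetBeatAndTempoProc beatAndTempoProc;
} HostCallbacks;

/* A toy host object standing in for the host's C++ instance. */
typedef struct { double beat; double tempo; } ToyHost;

static Status DispatchGetBeatAndTempo(void *inHostUserData,
                                      double *outCurrentBeat,
                                      double *outCurrentTempo)
{
    ToyHost *host = (ToyHost *)inHostUserData;
    if (!host) return kParamErr;
    /* Any of the out... parameters may be NULL; check before writing. */
    if (outCurrentBeat)  *outCurrentBeat  = host->beat;
    if (outCurrentTempo) *outCurrentTempo = host->tempo;
    return kNoErr;
}

/* What the AU side does during render: call through the registered proc,
   always passing hostUserData back exactly as it was given. */
Status QueryHostBeat(const HostCallbacks *cb, double *beat)
{
    if (cb->beatAndTempoProc == NULL)
        return kParamErr;  /* host did not register this callback */
    return cb->beatAndTempoProc(cb->hostUserData, beat, NULL);
}
```

Passing NULL for outCurrentTempo here shows the AU declining information it does not need, as described for these callbacks below.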

If the host is unable to provide the requested information (for instance the audio is coming from a live situation with no beat or tempo context) then it can return the kAudioUnitErr_CannotDoInCurrentContext error code.

The Audio Unit will only make this call in response to the host calling the unit's AudioUnitRender call. Thus, any of the values that the host provides to the Audio Unit in these callbacks relate to the particular buffer that it has asked the Audio Unit to render.

An Audio Unit may also decide to publish parameters where the unit types of those parameters are based on tempo and beat information. See Tempo Parameters for more detail in the parameter section of this documentation.

HostCallbackInfo

struct HostCallbackInfo {
  void *                               hostUserData;
  HostCallback_GetBeatAndTempo         beatAndTempoProc;
  HostCallback_GetMusicalTimeLocation  musicalTimeLocationProc;
  HostCallback_GetTransportState       transportStateProc;
};

HostCallback_GetBeatAndTempo

This callback is provided to obtain basic information from the host of its current musical location, namely its current beat and current tempo. By convention, it is assumed that the first beat of a sequence starts at beat zero.
typedef OSStatus (*HostCallback_GetBeatAndTempo)(
  void *     inHostUserData,
  Float64 *  outCurrentBeat,
  Float64 *  outCurrentTempo
);
When the Audio Unit makes this call any of the out... parameters can be NULL. This indicates to the host that the Audio Unit does not need information about that particular value. Thus, the host should always check the out... pointers to ensure they are valid.

Parameters

inHostUserData
This is the value that was passed in by the host for the hostUserData field of the HostCallbackInfo structure. The Audio Unit, when calling this callback, must supply this value as it was given.
outCurrentBeat
This should represent the exact beat value that applies to the start of the current buffer that the Audio Unit has been asked to render. This can, of course, be a fractional beat value.
outCurrentTempo
This represents the current tempo at the time of the first sample of the current buffer. If there is a tempo change within the buffer itself, then this cannot be communicated by the host to the Audio Unit, except of course that the next buffer's tempo value would be different. Tempo is defined as the number of whole-number (integer) beat values (as indicated by the outCurrentBeat field) per minute.
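Since tempo is beats per minute, one beat lasts 60/tempo seconds; converting that to samples is how, for instance, a tempo-synced delay could size its delay line. A minimal sketch (the function name is hypothetical):

```c
#include <assert.h>

/* One beat lasts (60 / tempoBPM) seconds; at the given sample rate that
   is (60 / tempoBPM) * sampleRate samples. */
double SamplesPerBeat(double tempoBPM, double sampleRate)
{
    return (60.0 / tempoBPM) * sampleRate;
}
```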
Result:

noErr, or kAudioUnitErr_CannotDoInCurrentContext if unable to provide requested information

HostCallback_GetMusicalTimeLocation

This callback is provided to obtain more detailed information from the host concerning its current musical location.
typedef OSStatus (*HostCallback_GetMusicalTimeLocation)(
  void *     inHostUserData,
  UInt32 *   outDeltaSampleOffsetToNextBeat,
  Float32 *  outTimeSig_Numerator,
  UInt32 *   outTimeSig_Denominator,
  Float64 *  outCurrentMeasureDownBeat);

How does this work? Let's take an example of a score, and the beats etc. that a host would be expected to provide to the Audio Unit. In the following, the first beat is 0.

Score Time Sig: |3/4     |       |4/8    |6/8    |5/8    |3/4    |4/4    |
Down Beats:      0        3       6       8       11      13.5    16.5    20.5

For a change from 3/4 to 4/8 the value of the beat does not change (thus, a 4/8 time signature will still have 2 beat values for that measure - the value of the beat unit does not change as the time signature changes). In common practice a beat is generally associated with a quarter note.
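Since the beat unit stays fixed at a quarter note, one measure of num/denom spans num * (4/denom) beats, which reproduces the downbeat table above. A small sketch (the function name is hypothetical):

```c
#include <assert.h>

/* Beats per measure when a whole beat is a quarter note:
   e.g. 3/4 -> 3 beats, 4/8 -> 2 beats, 5/8 -> 2.5 beats. */
double BeatsPerMeasure(int numerator, int denominator)
{
    return numerator * (4.0 / denominator);
}
```

Summing these measure lengths from beat 0 yields the downbeat sequence 0, 3, 6, 8, 11, 13.5, 16.5, 20.5 shown in the score above.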

Typically of course, if the Audio Unit is using this callback it will also need to use the HostCallback_GetBeatAndTempo callback as well. Thus a host is required to support that callback if it supports this one.

When the Audio Unit makes this call any of the out... parameters can be NULL. This indicates to the host that the Audio Unit does not need information about that particular value. Thus, the host should always check the out... pointers to ensure they are valid.

Parameters

inHostUserData
This is the value that was passed in by the host for the hostUserData field of the HostCallbackInfo structure. The Audio Unit, when calling this callback, must supply this value as it was given.
outDeltaSampleOffsetToNextBeat
Will contain the number of samples until the next whole beat from the start sample of the current rendering buffer.
outTimeSig_Numerator
The number of beats of the denominator value that are contained in the current measure.
outTimeSig_Denominator
Uses music notational conventions (4 is a quarter note, 8 an eighth note, etc). A whole (integer) beat in any of the beat values is generally considered to be a quarter note.
outCurrentMeasureDownBeat
The beat that corresponds to the downbeat (first beat) of the current measure that is being rendered.
Result:

noErr, or kAudioUnitErr_CannotDoInCurrentContext if unable to provide requested information

HostCallback_GetTransportState

This callback is provided to obtain information from the host about its current transport state.
typedef OSStatus (*HostCallback_GetTransportState)(
  void 		*inHostUserData,
  Boolean 	*outIsPlaying,
  Boolean 	*outTransportStateChanged,
  Float64 	*outCurrentSampleInTimeLine,
  Boolean 	*outIsCycling,
  Float64 	*outCycleStartBeat,
  Float64 	*outCycleEndBeat);
When the Audio Unit makes this call, any of the out... parameters can be NULL. This indicates to the host that the Audio Unit does not need that particular value. Thus, the host should always check that the out... pointers are valid before writing to them.

Parameters

inHostUserData
This is the value that was passed in by the host for the hostUserData field of the HostCallbackInfo structure. The Audio Unit, when calling this callback, must supply this value as it was given.
outIsPlaying
The time line of the host's transport is advancing (true), or not (false). If false, hosts may only be able to provide limited information for the other values in this (and the other) host callbacks.
outTransportStateChanged
Indicates that some state of the host's transport has changed. For instance, the time line has started or stopped, or the position within the time line has changed (e.g. the song position locator has been moved to a new location, or the transport is in cycle mode and has jumped from the end back to the start of the cycle).
outCurrentSampleInTimeLine
Represents the sample position, counted from the start of the song (in the sample rate of the AU), at which the AU's current render cycle starts.
outIsCycling
If false, there is no valid value for either the cycling start or end beats. Thus, these values are only valid if the host's transport is actually cycling at the time the Audio Unit makes this call.
outCycleStartBeat
If the host is cycling, this value represents the beat of the start of the cycle.
outCycleEndBeat
If the host is cycling, this value represents the beat of the end of the cycle.
Result:

noErr, or kAudioUnitErr_CannotDoInCurrentContext if unable to provide requested information


Audio Unit Presets and Persistence

kAudioUnitProperty_ClassInfo

Type: CFPropertyListRef dictionary

CFPropertyListRef dictionary is a constrained subset of a CFDictionary that uses CFStrings as keys, and whose values can only be CFPropertyListRefs (which includes CFStrings, CFNumbers, CFData, or arrays/dictionaries whose values and keys are constrained in this same way).

There are essentially two types of preset dictionaries: the global preset, which specifies the entire state of an Audio Unit, and the part preset, which contains the additional part key and specifies the preset state of a single part. See the discussion of multi-timbral MusicDevice units for more information.

The dictionary contains several key/value pairs:

name
a CFString that is the name associated with the current preset
version
a CFNumber that represents the version of the class data
type
a CFNumber that represents the componentType of the Audio Unit as defined by its ComponentDescription
subtype
a CFNumber that represents the componentSubType of the Audio Unit as defined by its ComponentDescription
manufacturer
a CFNumber that represents the componentManufacturerID of the Audio Unit as defined by its ComponentDescription
data
Audio Unit-specific internal state, contained in a CFDataRef - currently this is the value of each of the parameters, on each element of each scope.
vstdata
the data delivered from the GetChunk call of the VST interface. It is provided so that AUs that have VST equivalents can be instantiated from the preset state of their VST equivalent.
part
is used to describe that this preset belongs to a single part of a multi-timbral MusicDevice. The value for this key is specific to the particular Audio Unit.
render-quality
If the Audio Unit supports kAudioUnitProperty_RenderQuality, the state of this property will be saved and restored
cpu-load
If the Audio Unit supports kAudioUnitProperty_CPULoad, the state of this property will be saved and restored
On exit from GetProperty, the client owns a reference to the CFPropertyListRef. SetProperty does not consume (release) a reference to the CFPropertyListRef.

The name field is filled in from the last preset that was set on the unit (whether a factory preset or ClassInfo). The name will be "Untitled" if the unit has no presets and the ClassInfo has never been set.

The dictionary can be parsed using the appropriate CoreFoundation functions. The class data contains enough information to establish a ComponentDescription that can then be used to find the appropriate Audio Unit, open it, and re-establish the state as saved in the dictionary. As this currently contains only the parameter values (for Apple's Audio Units as shipped in 10.2), it may not be complete for some units. For example, the name of the SoundBank for the DLSMusicDevice is not currently saved in the class data. It is anticipated that the properties needed to re-establish the complete state of an Audio Unit will be saved in future releases (and consequently, the version number of the class data will be revised).

Developers can add custom property keys that are unique to their Audio Units. In this case we recommend that developers begin their keys with their unique manufacturer ID to avoid possible conflicts with future keys that might be defined (e.g. ACME-my-custom-key). Apple will continue to define (and thus reserves) properties that are not qualified with a manufacturer ID.

kAudioUnitProperty_CurrentPreset

Deprecated; see kAudioUnitProperty_PresentPreset.

kAudioUnitProperty_PresentPreset

Type: AUPreset

This property has the same logistics as the pre-existing Current Preset property. However, that property had undefined behaviour when it came to the CFString contained within the AUPreset. Thus, it was decided to deprecate kAudioUnitProperty_CurrentPreset and replace it with this property, where the semantics of the CFString usage are both clear and consistent with other properties that use CF objects. See the CF_AU_Properties document in the SDK.

Read: This can be used by the caller to identify the current preset of the unit. The behaviour differs between factory presets and user states (ClassInfo). If the last state set is a factory preset (i.e. no call to set ClassInfo has been made), then the AUPreset contains both a valid number (greater than or equal to zero) and name (the number and name of the appropriate factory preset). If the unit has factory presets, then the first time this property is queried it returns the default preset.

If a set ClassInfo property was the last call made, then the AUPreset will contain a number of -1 (signifying a user preset) and the name contained within the class info. If the name has not been set, a default name such as "Untitled" is returned. When returned, the CFString in the AUPreset (as with all other CF objects retrieved from Get Property) is owned by the client and should be released. Code in PublicUtility (CAAudioUnit) shows how a client can deal with migration from units that do not yet implement the _PresentPreset property.

Write: The number in AUPreset is used to select the preset.

If presetNumber is greater than or equal to zero (signifying a factory preset): the state of the unit is set to one of the factory presets. The caller provides an AUPreset (from kAudioUnitProperty_FactoryPresets), and this becomes the current state of the unit. kAudioUnitErr_InvalidPropertyValue is returned if the preset number is not recognised by the Audio Unit.

If presetNumber is less than zero (signifying a user preset): the current preset for the unit is set, including the name supplied in presetName. This name will then be saved into the unit's data when getting the current state of the ClassInfo property. This allows the name of a state to be saved along with the state, so it can be shown to the user when that state is re-established.

AUPreset

struct AUPreset {
  SInt32       presetNumber;
  CFStringRef  presetName;
};

Fields

presetNumber
The preset's number: greater than or equal to zero for a factory preset, -1 for a user preset (one set via ClassInfo).
presetName
A CFString containing the name of the preset.

kAudioUnitProperty_FactoryPresets

Type: CFArrayRef containing AUPresets

Returns an array of AUPresets, each containing a number and name for a preset. The number of each preset must be greater than or equal to zero, and the numbers need not be ordered or contiguous. The name of each preset can be presented to the user as a means of identifying each preset. The CFArrayRef should be released by the caller.


Internal Algorithm Configuration

kAudioUnitProperty_ReverbRoomType

Type: UInt32

The caller should pass in one of the kReverbRoomType enum values. This property is supported by those units that implement the kAudioUnitProperty_UsesInternalReverb (DLSMusicDevice, 3DMixer) as well as the MatrixReverb unit.

kAudioUnitProperty_UsesInternalReverb

Type: UInt32

Some audio units can use an internal reverb. The 3DMixer and the DLSMusicDevice both have this property on by default (value==1). To turn this off, set the value of this property to zero.

kAudioUnitProperty_SRCAlgorithm

Type: OSType

The value is an identifier for the sample rate converter algorithm to use. This is currently supported by the AUConverter unit and the OutputDevice units.


MusicDevice Properties

kMusicDeviceProperty_InstrumentCount

Type: UInt32

Returns the number of instruments available for use by a MusicDevice Audio Unit. For the DLSMusicDevice, this is the number of instruments in the DLS or SoundFont collection currently set on the unit.

kMusicDeviceProperty_InstrumentName

Type: char array

The MusicDeviceInstrumentID is passed in for the inElement argument, and the call returns the name for that instrumentID.

kMusicDeviceProperty_InstrumentNumber

Type: MusicDeviceInstrumentID

The caller passes in the instrument "index" in the inElement argument. This "index" is zero-based and must be less than the number of instruments (determined using the kMusicDeviceProperty_InstrumentCount property).

The value passed back will be a MusicDeviceInstrumentID. This MusicDeviceInstrumentID may then be used with the kMusicDeviceProperty_InstrumentName property, or in any of the MusicDevice calls which take a MusicDeviceInstrumentID argument.

This value is further expected to be formatted in a particular manner relating to the bank and patch number values of MIDI. The number is formatted as 0xMMLLPP, where the lowest byte is the patch number of the instrument, the second byte the LSB of the instrument's bank select, and the 3rd byte, the MSB of the instrument's bank select.

kMusicDeviceProperty_SoundBankFSSpec

Type: FSSpec

This property is used with a MusicDevice that requires sample data to be used as a source for its rendering. The DLSMusicDevice accepts both Downloadable Sounds (DLS) files and SoundFonts as the sample data for its instruments.

kMusicDeviceProperty_BankName

Type: CFStringRef

Returns the name of the currently loaded sound bank of the DLSMusicDevice. The CFStringRef should be released by the caller.

kMusicDeviceProperty_GroupOutputBus

Type: UInt32

The caller passes in a MusicDeviceGroupID for the AudioUnitElement and kAudioUnitScope_Group for the AudioUnitScope. The caller should pre-assign the number of busses that are going to be assigned using this call. Then, when this property is set, any notes produced on a particular group (which can be considered equivalent to a MIDI channel for the moment) will be produced on the assigned bus. This property is implemented by the DLSMusicDevice.

kMusicDeviceProperty_MIDIXMLNames

Type: CFURLRef

Returns a URL to a MIDINameDocument describing the MusicDevice's patch, note and control names.


AudioDeviceID Property

kAudioOutputUnitProperty_CurrentDevice

Type: AudioDeviceID

Returns the AudioDeviceID of any Audio Unit that is set to (or will track) an AudioDevice. The property can be set on some Audio Units, but not on others.

OutputUnit Properties

kAudioOutputUnitProperty_IsRunning

Type: UInt32

This value is initially 0 (false). When AudioOutputUnitStart is called, the value of this property becomes 1 (true); when AudioOutputUnitStop is subsequently called, the value is again zero. Audio Units do not count the number of times their start or stop methods are called.


3D and Spatialization Properties

kAudioUnitProperty_SpeakerConfiguration

Type: UInt32

This property is typically supported by Audio Units that generate content corresponding to common multi-channel formats. Currently the following values are defined for this property. When this property is supported, it should be used in preference to kAudioUnitProperty_StreamFormat, as the stream format does not convey sufficient information to the renderer when applying spatialization techniques.

kSpeakerConfiguration_HeadPhones
Used to signify that the rendering should be based on the user listening with headphones
kSpeakerConfiguration_Stereo
Used to signify that stereo speakers will be used. The channel ordering is Left/Right. Stereo speakers are generally expected to be 30 degrees to the left and right of the listener, respectively.
kSpeakerConfiguration_Quad
Used to signify that quad speakers will be used. The channel ordering is Left/Right/Rear Left/Rear Right. Generally these speakers are placed in a square around the listener, with the listener located in the center of the square.
kSpeakerConfiguration_5_1
Used to signify a 5.1 speaker configuration. The channel ordering is Left/Right/Rear Left/Rear Right/Center/Sub. Often, an Audio Unit generating content for this configuration will only generate 5 channels of data (not 6). This channel ordering is expected to be observed for this value (even though other channel orderings for 5.1 content are also in common usage).

kAudioUnitProperty_SpatializationAlgorithm

Type: UInt32

The caller passes in one of the kSpatializationAlgorithm enum values to specify which particular algorithm should be applied on the specified scope (input for the 3DMixer) and elementID (bus number). This allows different inputs to the 3DMixer to have different spatialization algorithms applied to each input.

kAudioUnitProperty_DopplerShift

Type: UInt32

A value of 1 will enable the application of a Doppler shift effect to a moving source in the 3DMixer unit.