Header file: AudioUnit/AudioUnitProperties.h
This section describes the different properties that apply to Audio Units.
These are organized in functional groups as listed below. The
AudioUnitPropertyID is listed with the struct or type that represents the
property's value. These values are declared in
AudioUnit/AudioUnitProperties.h
.
Important - these
property values are always passed by reference to both the Get and Set property
calls (i.e. you pass a pointer to the type specified).
typedef UInt32 AudioUnitPropertyID;

The constants declared in this header file are represented using this type.
Use this property with AudioUnitSetProperty to establish a connection between the destination unit (which is the Audio Unit that you make the call on) and the source unit that is specified in the provided AudioUnitConnection struct. In AudioUnitSetProperty you specify kAudioUnitScope_Input for the AudioUnitScope parameter. The elementID is the input number upon which the connection will be made (this is also redundantly stored in the AudioUnitConnection).
struct AudioUnitConnection {
    AudioUnit   sourceAudioUnit;
    UInt32      sourceOutputNumber;
    UInt32      destInputNumber;
};
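For example, a host might connect output 0 of a source unit to input 0 of a destination unit as in the following sketch (the sourceUnit and destUnit variables are assumed to be opened AudioUnits):

AudioUnitConnection conn;
conn.sourceAudioUnit    = sourceUnit;
conn.sourceOutputNumber = 0;
conn.destInputNumber    = 0;    // redundantly repeats the elementID passed below

OSStatus result = AudioUnitSetProperty (destUnit,
                        kAudioUnitProperty_MakeConnection,
                        kAudioUnitScope_Input,
                        0,      // elementID: the input bus being connected
                        &conn, sizeof (conn));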
The scope is either kAudioUnitScope_Input or kAudioUnitScope_Output; this property is used to both get and set the number of input or output busses (elements). By default many Audio Units will create a single input or output bus (element), so this call is generally used to create additional busses. A typical example would be the interleaver or deinterleaver units, where their behaviour is determined by the number of output or input busses respectively. Other units, such as Mixer units, may have already allocated the necessary state to accept a number of inputs, so this call can be used to determine that limit.
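As a sketch of such a configuration call (assuming deinterleaverUnit is an opened deinterleaver unit, and that this description corresponds to the kAudioUnitProperty_BusCount property):

UInt32 numBusses = 4;
OSStatus result = AudioUnitSetProperty (deinterleaverUnit,
                        kAudioUnitProperty_BusCount,   // named kAudioUnitProperty_ElementCount in later headers
                        kAudioUnitScope_Output,
                        0,      // elementID is ignored here
                        &numBusses, sizeof (numBusses));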
This is used with an AudioUnitSetProperty call on a V2 Audio Unit (i.e. where the Component's type is not kAudioUnitComponentType). This property (and the corresponding kAudioUnitProperty_SetInputCallback for the V1 Audio Unit) is used to register a callback with an Audio Unit to provide audio data on the specified elementID (bus) of the input scope. When the Audio Unit calls the render callback (or input callback for V1), it will provide a buffer that the callback should fill with data. When this property is set, the caller should also set the stream format property for that elementID (bus) of the input scope, to tell the Audio Unit the format of the data it will be providing (see kAudioUnitProperty_StreamFormat).
struct AURenderCallbackStruct {
    AURenderCallback    inputProc;
    void *              inputProcRefCon;
};
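A minimal sketch of registering such a callback on input bus 0 of a V2 unit (the destUnit variable is assumed; this callback simply provides silence):

static OSStatus MyInputProc (void                       *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp       *inTimeStamp,
                             UInt32                      inBusNumber,
                             UInt32                      inNumberFrames,
                             AudioBufferList            *ioData)
{
    // fill the buffers the unit provides with inNumberFrames frames of audio
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
        memset (ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    return noErr;
}

AURenderCallbackStruct callback;
callback.inputProc       = MyInputProc;
callback.inputProcRefCon = NULL;    // or a pointer to your own state

AudioUnitSetProperty (destUnit, kAudioUnitProperty_SetRenderCallback,
                      kAudioUnitScope_Input, 0 /* elementID (bus) */,
                      &callback, sizeof (callback));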
This is used with an AudioUnitSetProperty call on a V1 Audio Unit (i.e. where the Component's type is kAudioUnitComponentType).
Typically kAudioUnitScope_Input or kAudioUnitScope_Output is passed in for the AudioUnitScope, and the bus number (zero based) is specified in the elementID. This completely specifies the format that exists on the specified scope. Some units accept this property on kAudioUnitScope_Global, which generally means either (the common case) that the formats are the same on both input and output, or that the audio unit can internally process data in a different format than its input and output formats (less typical, but possible). See kAudioUnitProperty_SpeakerConfiguration for more complex rendering processes involving audio spatialization.
There is a subtlety about the usage of the format flags with the AudioStreamBasicDescription and the Audio Unit V2 format that should be discussed.
When an AudioStreamBasicDescription has the kAudioFormatFlagIsNonInterleaved flag, which is the case with the canonical format for V2 units, the AudioBufferList has a different structure and semantic. In this case, the AudioStreamBasicDescription fields describe the format of ONE of the AudioBuffers contained in the list, AND each AudioBuffer in the list holds a single (mono) channel of audio data. The AudioStreamBasicDescription's mChannelsPerFrame then indicates the total number of AudioBuffers contained within the AudioBufferList, where each buffer contains one channel. This is used primarily with the AudioUnit (and AudioConverter) representation of this list, and typically won't be found in the AudioHardware.h usage of this structure.
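For illustration, this is how the canonical V2 format (stereo, deinterleaved, Float32) might be described and set on input bus 0 of a unit (a sketch; the unit variable and the 44.1 kHz rate are assumptions):

AudioStreamBasicDescription fmt;
memset (&fmt, 0, sizeof (fmt));
fmt.mSampleRate       = 44100.0;
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
fmt.mChannelsPerFrame = 2;    // the AudioBufferList will carry two mono AudioBuffers
fmt.mFramesPerPacket  = 1;
fmt.mBitsPerChannel   = 32;
fmt.mBytesPerPacket   = 4;    // bytes per packet in EACH buffer, i.e. one Float32
fmt.mBytesPerFrame    = 4;    // likewise, per buffer

AudioUnitSetProperty (unit, kAudioUnitProperty_StreamFormat,
                      kAudioUnitScope_Input, 0, &fmt, sizeof (fmt));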
This is a convenience property over the complete kAudioUnitProperty_StreamFormat above. However, it is particularly useful in those cases where an application wishes to track the sample rate of an Audio Unit, for example in the case of an output unit attached to an AudioDeviceID, where the user may change its sample rate independently of the application. It can, of course, also be used to set the sample rate of an input or output element.
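A sketch of tracking such a rate change with a property listener (the outputUnit variable is assumed):

static void MySampleRateListener (void *inRefCon, AudioUnit inUnit,
                                  AudioUnitPropertyID inID,
                                  AudioUnitScope inScope, AudioUnitElement inElement)
{
    Float64 sampleRate = 0;
    UInt32  size = sizeof (sampleRate);
    if (AudioUnitGetProperty (inUnit, kAudioUnitProperty_SampleRate,
                              inScope, inElement, &sampleRate, &size) == noErr) {
        // react to the new sample rate here
    }
}

AudioUnitAddPropertyListener (outputUnit, kAudioUnitProperty_SampleRate,
                              MySampleRateListener, NULL /* inRefCon */);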
If not implemented, the Audio Unit may be agnostic about the number of channels, and only a format setting can validate whether a given channel count is accepted. Generally, this will mean (particularly with Effect Units) that any number of channels is usable as long as the number of channels is the same on both the input and output scopes. Other units can accept a mismatch in the channelization of their busses; this property is provided to allow those units to publish the allowable channel configurations that can be accepted on input and output.
Returns pairs of numbers of channels (e.g. 1 in / 1 out, 1 in / 2 out, 2 in / 2 out, etc.). A value of -1 can be interpreted as "any" number of channels for that scope. So the default setting for an Effect Unit would be -1 / -1 (the same number of channels in and out, with no restriction on the number of channels), and units that support only this configuration are not expected to publish this property.
struct AUChannelInfo {
    SInt16      inChannels;
    SInt16      outChannels;
};
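A sketch of querying and walking this list (the unit variable is assumed):

UInt32  size = 0;
Boolean writable;
if (AudioUnitGetPropertyInfo (unit, kAudioUnitProperty_SupportedNumChannels,
                              kAudioUnitScope_Global, 0, &size, &writable) == noErr)
{
    UInt32 numInfos = size / sizeof (AUChannelInfo);
    AUChannelInfo *infos = (AUChannelInfo *) malloc (size);
    AudioUnitGetProperty (unit, kAudioUnitProperty_SupportedNumChannels,
                          kAudioUnitScope_Global, 0, infos, &size);
    for (UInt32 i = 0; i < numInfos; ++i)
        printf ("%d in / %d out\n", infos[i].inChannels, infos[i].outChannels);
    free (infos);
}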
The caller specifies the AudioUnitScope to be queried for its parameters. Most effect units define parameters in the global scope (as the unit itself applies the parameters to the work it does). A mixer unit will typically define parameters in both the input scope (to apply different volumes to each input) and the output scope (the overall volume of the mix). The call returns a list of AudioUnitParameterIDs, which can then be used with kAudioUnitProperty_ParameterInfo to obtain information about each parameter.
Some parameter ranges may change depending on characteristics of the formats the Audio Unit is operating in. For instance, a common case is a Hz parameter in an effect, where the real limitation (maximum value) of the parameter varies with the sample rate the unit is operating at. In this case, if the sample rate of an audio unit is changed, a notification can be sent for this property change, and the application can then re-present the new maximum value of the Hz parameter at the new sample rate.
The caller passes in the desired AudioUnitScope, with the AudioUnitParameterID in the AudioUnitElement field of the AudioUnitGetProperty call, to obtain information about a particular parameter.
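Putting the two properties together, a host might enumerate a unit's global parameters as in this sketch (the unit variable is assumed; error handling abbreviated):

UInt32  size = 0;
Boolean writable;
AudioUnitGetPropertyInfo (unit, kAudioUnitProperty_ParameterList,
                          kAudioUnitScope_Global, 0, &size, &writable);
UInt32 numParams = size / sizeof (AudioUnitParameterID);
AudioUnitParameterID *paramIDs = (AudioUnitParameterID *) malloc (size);
AudioUnitGetProperty (unit, kAudioUnitProperty_ParameterList,
                      kAudioUnitScope_Global, 0, paramIDs, &size);

for (UInt32 i = 0; i < numParams; ++i) {
    AudioUnitParameterInfo info;
    UInt32 infoSize = sizeof (info);
    if (AudioUnitGetProperty (unit, kAudioUnitProperty_ParameterInfo,
                              kAudioUnitScope_Global, paramIDs[i],
                              &info, &infoSize) == noErr) {
        // inspect info.minValue, info.maxValue, info.defaultValue, info.unit, etc.
    }
}
free (paramIDs);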
The caller passes in the desired AudioUnitScope, with the AudioUnitParameterID in the AudioUnitElement field, and receives an array of CFStrings corresponding to the discrete integral values of the parameter. Only valid for parameters which have a unit of kAudioUnitParameterUnit_Indexed. The caller is responsible for releasing the array. (Releasing the array will in turn automatically release the contained CFStrings.)
The caller passes in the global scope; the elementID is ignored. It returns an array of AudioUnitMIDIControlMapping structures, specifying a default mapping of MIDI controls and/or NRPNs to Audio Unit scopes/elements/parameters. For more detailed information on these properties see the section on Parameter Types and Information.
struct AudioUnitMIDIControlMapping {
    UInt16                  midiNRPN;
    UInt8                   midiControl;
    UInt8                   scope;
    AudioUnitElement        element;
    AudioUnitParameterID    parameter;
};
This property describes the maximum number of frames an AudioUnit will be asked to render. Where possible, it is also recommended that an AudioUnit always be asked to render this number of frames. When asking for input, an AudioUnit also cannot ask for more input than its maximum number of frames. If it needs more input than allowed by its Max Frames setting, then it should slice its input request into multiple requests. (This can be the case with both Converter AUs and Offline AUs, as they can process more or less input for any given output request, for instance in a sample rate conversion.)
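A sketch of a host establishing this limit before initializing a unit (the unit variable and the 4096 value are assumptions):

UInt32 maxFrames = 4096;    // e.g. the host's largest render slice
AudioUnitSetProperty (unit, kAudioUnitProperty_MaximumFramesPerSlice,
                      kAudioUnitScope_Global, 0,
                      &maxFrames, sizeof (maxFrames));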
This is a new property for the V2 Audio Unit, and should be set on the global scope (### This may be incorrect -- see the SDK source for a definitive answer ###). Sophisticated hosts of audio units can use this property to better manage the memory usage and performance of a graph of audio units, for instance allowing for the reuse of buffers in a chain.
Basically, if this property is set, an Audio Unit can and should use the supplied buffer when pulling its inputs. (As a V2 AU must provide a buffer when calling the render callback on its inputs, it would use this buffer instead of an internally created one.)
struct AudioUnitExternalBuffer {
    Byte *  buffer;
    UInt32  size;
};
The input to output latency in seconds. This figure should be as accurate as possible, as it represents how much time it takes for a sample presented at the input to appear in the output of an audio unit.
This property should report this value as accurately as it can be represented in a single value. The unit is also free to change this value if any changes a user makes to its parameters would affect the overall processing latency of the unit.
For example, a look-ahead limiter requires a certain number of samples of input before it can represent those samples in the output (as it has to be able to estimate the sample values and their slope to determine how to limit the signal without clipping it), and thus the output of a particular input sample occurs some time after the input sample was received.
For a host that is doing sample accurate processing of two or more audio inputs (or synchronizing its audio output to some other timeline), it is extremely important that it can determine the processing latency that might be introduced by a particular unit. If a host is passing audio through a number of audio units, then the host can query each audio unit in succession, adding the reported latency values to arrive at the overall latency.
This overall latency can then be used by the host in the following manner. The host would feed the number of samples that correspond to the latency amount through the processing chain, getting that number of samples on the output side. The host then throws those output samples away (they will typically have sample values of zero anyway). When feeding the audio unit's input in this scenario, the input is the actual input that you are going to want to render. After having thrown the latency number of samples away on the output, the next sample that you get back will be the first valid sample that has been processed from the input.
When pre-rolling a unit (or set of units) in this manner, AudioUnitReset should be called to clear any processing state that an audio unit may have retained from its previous render operations.
This property is often used in conjunction with the kAudioUnitProperty_TailTime property, where there is more discussion of these issues.
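A sketch of the latency summation described above (the units[], unitCount and sampleRate variables are assumptions of the host):

Float64 totalLatency = 0;
for (UInt32 i = 0; i < unitCount; ++i) {
    Float64 latency = 0;
    UInt32  size = sizeof (latency);
    if (AudioUnitGetProperty (units[i], kAudioUnitProperty_Latency,
                              kAudioUnitScope_Global, 0, &latency, &size) == noErr)
        totalLatency += latency;    // in seconds
}
// the number of output samples to pull and discard when pre-rolling:
UInt32 samplesToDiscard = (UInt32)(totalLatency * sampleRate + 0.5);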
This value represents any additional input (or output that must be captured) needed to pass a signal completely through an effect. It can be thought of as the length of time taken for the output signal to die down to silence (defined here as less than -120dB of full scale). For example, a reverb or delay effect would take a certain amount of time (say 2 seconds) for its output to go to silence after its input signal goes to silence. Many effects use filters having an impulse response which, similar to a reverb, will introduce an output signal that goes past the last input signal.
An Audio Unit is expected to publish its tail time, even in the case of a filter where the tail is a minimal value (say around 1 msec).
How does a host then use this property?
Let us take an example of a host that allows for arbitrary start of playback within a sound file. The host wants to be able to produce the same output for that first sample of playback from both this "in the middle" start and from starting playback at the beginning of the file.
When starting "in the middle" the host needs to determine for a given signal chain, two things. Firstly, lets take an example of a delay effect. The output of the first samples would contain the delays from the previous samples (that in this case aren't actually being played). So, the host can ask the delay unit for its tail (lets say its 2 seconds). Then, it can pre-roll the proceeding 2 seconds of input data, which the unit itself will of course mix that into the first 2 seconds of the output that it produces. This is of course the normal result when you start playback from the beginning.
The host will throw away the output that gets generated when it prepares the audio unit in this manner (because that is output that actually comes before the output it wants). But it has now primed the audio unit, so when it pulls for the data it does want, the preceding 2 seconds of data is in the delay (in this case), and will thus be present in the output.
When pre-rolling a unit (or set of units) in this manner, AudioUnitReset should be called to clear any processing state that an audio unit may have retained from its previous render operations.
This property's value will also be used at the end of rendering a piece of audio. Let's say that you are applying a reverb effect to some source and outputting a file with that processing applied. Obviously, when the input audio data is finished you still want to render some additional audio to capture the tail of the effect (e.g. the reverb dying down to silence). The tail property tells you how much additional output you'd need to get (in this case the input would be silence) in order to push the entire audible effect of the unit through to the output.
This property is also closely related to kAudioUnitProperty_Latency, and typically a host that is dealing with these issues will need to know both the latency an effect introduces and its tail.
An important difference between these is that the tail can be an approximate estimate, and should be biased to the conservative side. The latency, however, should be as accurate as possible, because of the offset between input and output placement of a sample that the latency property indicates. It is important to note that effects specifically designed to introduce delays (like a reverb or a delay audio unit) should not report that delay as latency, since it is part of the desired effect.
Taken together, the latency and tail properties enable a host to determine how much priming an audio unit (or, for that matter, an entire processing graph of several audio units) requires, and how much additional output should be captured after the end of the input is reached, in order to accurately preserve the entire contents of the rendering.
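As a rough sketch of that determination for a single unit (the unit and sampleRate variables are assumed; the exact accounting a host uses may differ):

Float64 latency = 0, tailTime = 0;
UInt32  size = sizeof (Float64);
AudioUnitGetProperty (unit, kAudioUnitProperty_Latency,
                      kAudioUnitScope_Global, 0, &latency, &size);
size = sizeof (Float64);
AudioUnitGetProperty (unit, kAudioUnitProperty_TailTime,
                      kAudioUnitScope_Global, 0, &tailTime, &size);

// frames of prior input to pre-roll when starting "in the middle":
UInt32 primeFrames = (UInt32)(tailTime * sampleRate + 0.5);
// frames of extra (silent-input) output to render past the end of the input:
UInt32 flushFrames = (UInt32)((latency + tailTime) * sampleRate + 0.5);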
Can be used to have an effect unit not apply its processing on its input, but just pass it through to the output without processing it.
This is a read only property that returns the last error code returned by AudioUnitRender, and clears it. Rather than polling this property, it's best that interested clients install a property listener on it.
The inElement value is the component selector that describes to the unit what the function pointer corresponds to. Dispatching through the Component API calls has some overhead that can and should be avoided in the rendering and parameter setting calls where a real-time context is normally required.
The inComponentStorage argument that is passed to each of these callbacks when user code calls them is not the AudioUnit (the ComponentInstance). It is the value returned by the following call:
myCompStorage = GetComponentInstanceStorage (anAudioUnit);
The following fast dispatch function pointers are declared in AUComponent.h:

typedef ComponentResult (*AudioUnitGetParameterProc)(
    void *                  inComponentStorage,
    AudioUnitParameterID    inID,
    AudioUnitScope          inScope,
    AudioUnitElement        inElement,
    Float32 *               outValue);

typedef ComponentResult (*AudioUnitSetParameterProc)(
    void *                  inComponentStorage,
    AudioUnitParameterID    inID,
    AudioUnitScope          inScope,
    AudioUnitElement        inElement,
    Float32                 inValue,
    UInt32                  inBufferOffsetInFrames);

typedef ComponentResult (*AudioUnitRenderProc)(
    void *                          inComponentStorage,
    AudioUnitRenderActionFlags *    ioActionFlags,
    const AudioTimeStamp *          inTimeStamp,
    UInt32                          inOutputBusNumber,
    UInt32                          inNumberFrames,
    AudioBufferList *               ioData);
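A sketch of how a host might obtain and use the render fast-dispatch pointer, falling back to the normal call if the property isn't supported (the rendering arguments are assumed to be set up elsewhere):

AudioUnitRenderProc renderProc = NULL;
UInt32 size = sizeof (renderProc);
OSStatus err = AudioUnitGetProperty (anAudioUnit, kAudioUnitProperty_FastDispatch,
                                     kAudioUnitScope_Global, kAudioUnitRenderSelect,
                                     &renderProc, &size);
void *myCompStorage = GetComponentInstanceStorage (anAudioUnit);

if (err == noErr && renderProc != NULL)
    // bypass the Component Manager dispatching on each render call
    result = (*renderProc) (myCompStorage, &actionFlags, &timeStamp,
                            busNumber, numFrames, bufferList);
else
    result = AudioUnitRender (anAudioUnit, &actionFlags, &timeStamp,
                              busNumber, numFrames, bufferList);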
Used to specify to the Audio Unit the desired CPU load, so that it limits its rendering times to that load. The property is specified in a range of 0 to 1. A value of zero means no limitation, and represents a way to turn this limitation off, desirable for instance when doing offline rendering.
Provides a quality range (0-127) that an audio unit can use to decide how high a quality it uses when doing its rendering (which generally trades off the amount of CPU that is consumed). Currently both the DLS Synth and the Reverb use this to scale back the quality of their rendering. Generally the kRenderQuality enum settings should be used, however some units may respond to intermediate values. In those that don't, the quality is rounded to the nearest value as represented by this enum.
Returns an array of ComponentDescriptions specifying AudioUnitCarbonView components designed to present custom user interfaces for editing this Audio Unit (as distinct from the generic user interface supplied by Apple).
Allows an application to provide a name that can be presented to the user that specifies the context of a specific unit. Whilst the string supplied by the host is by and large dependent on the Host App and the context within which a given AU is being used, in general it is recommended that this string not contain the name of the Audio Unit.
Host applications typically provide context information within how they present an Audio Unit to the user, and thus should not rely on the AU to incorporate contextual information in its normal UI. For example, a host might set the title of the AU's View Window:
My Synth Track::(4) AUMatrixReverb

(where "My Synth Track" is the user-supplied name of the track, (4) is the index of the effect in that track, followed by the name of the AU)
In this case, the host might provide a context string to the Audio Unit such as: "My Synth Track::Effect (4)". An Audio Unit can then use this string in situations where it needs to provide some visual feedback to the user based on the state of a particular instance of itself. It is recommended that the AU provide some additional context around the context string:
AU Matrix Reverb
Being used within context: My Synth Track::Effect (4)
Can't find resource file...
Allows an AudioUnit to specify an associated icon. Returns a CFURLRef containing the full Posix-style path of the icon file. The caller is responsible for releasing the CFURLRef, and for instantiating the image.
To facilitate support for these icons in Carbon as well as Cocoa UI, we require that this property point to a ".icns" file. These files can be created using /Developer/Applications/Utilities/Icon Composer.app.
Allows an AudioUnit to provide names for the individual elements that are contained in a scope. A typical usage is to provide names for the input and outputs elements (or buses).
For example, Apple's DLSMusicDevice implements this property for the two outputs it provides. The names of each of these two elements are different based on whether the AU's internal reverb is on or off. If on, then the first output is "Stereo Mix", the second "Unused". If off, the first output is "Wet Mix" and the second is "Dry Mix".
Typically this property will be read only, but in some cases an AU might provide the ability to set the name to any arbitrary string.
Used by the host to provide callbacks that an Audio Unit can use to obtain information from the host.
Currently, this property provides for two callbacks that are based around the concept of musical time. That is, when an Audio Unit is asked to render, it can query the host for information about the host's musical time, and then use that information to match its DSP to a musical context. For example, a delay unit can time the delays to the beats, based on the song's time signature as well as tracking tempo changes.
How does this work?
When the host opens an Audio Unit and connects it, it should also set this property.
HostCallbackInfo info;
memset (&info, 0, sizeof (HostCallbackInfo));
info.hostUserData = this;
info.beatAndTempoProc = DispatchGetBeatAndTempo;

// ignore result of this - don't care if the property isn't supported
AudioUnitSetProperty (mAudioUnit, kAudioUnitProperty_HostCallbacks,
                      kAudioUnitScope_Global, 0 /* elementID */,
                      &info, sizeof (HostCallbackInfo));
In this example, the host is only supporting the HostCallback_GetBeatAndTempo callback, so in that case this is the only information that the host can provide to the Audio Unit. Any unsupported callbacks from a host should of course be set to NULL.
Once the host has set this property, the Audio Unit now has callbacks that it can make to the host.
The "info.hostUserData = this" line shows the host seeting this user data field to the value of the this pointer of a C++ object. The Audio Unit is required to always pass this hostUserData field back to the host when it makes the callback.
In this case, how does the host implement this callback?
OSStatus AUNodeSequenceDest::DispatchGetBeatAndTempo (void    *inHostUserData,
                                                      Float64 *outCurrentBeat,
                                                      Float64 *outCurrentTempo)
{
    AUNodeSequenceDest *This = (AUNodeSequenceDest *)inHostUserData;
    if (This)
        return This->GetBeatAndTempo (outCurrentBeat, outCurrentTempo);
    return paramErr;
}
When the Audio Unit calls the beatAndTempoProc it will call this dispatch. The inHostUserData is treated as it was passed in, i.e. the this pointer of the object in question. The dispatch call here is defined as a static (or class) member function, which then dispatches to the instance method GetBeatAndTempo.
If the host is unable to provide the requested information (for instance the audio is coming from a live situation with no beat or tempo context) then it can return the kAudioUnitErr_CannotDoInCurrentContext error code.
The Audio Unit will only make this call in response to the host calling the unit's AudioUnitRender call. Thus, any of the values that the host provides to the Audio Unit in these callbacks relate to the particular buffer that it has asked the Audio Unit to render.
An Audio Unit may also decide to publish parameters where the unit types of those parameters are based on tempo and beat information. See Tempo Parameters for more detail in the parameter section of this documentation.
struct HostCallbackInfo {
    void *                              hostUserData;
    HostCallback_GetBeatAndTempo        beatAndTempoProc;
    HostCallback_GetMusicalTimeLocation musicalTimeLocationProc;
    HostCallback_GetTransportState      transportStateProc;
};
typedef OSStatus (*HostCallback_GetBeatAndTempo)(
    void *      inHostUserData,
    Float64 *   outCurrentBeat,
    Float64 *   outCurrentTempo);

When the Audio Unit makes this call, any of the out... parameters can be NULL. This indicates to the host that the Audio Unit does not need information about that particular value. Thus, the host should always check the out... pointers to ensure they are valid.
noErr, or kAudioUnitErr_CannotDoInCurrentContext if unable to provide requested information
typedef OSStatus (*HostCallback_GetMusicalTimeLocation)(
    void *      inHostUserData,
    UInt32 *    outDeltaSampleOffsetToNextBeat,
    Float32 *   outTimeSig_Numerator,
    UInt32 *    outTimeSig_Denominator,
    Float64 *   outCurrentMeasureDownBeat);
How does this work? Let's take an example of a score, and the beats, etc., that a host would be expected to provide to the Audio Unit. In the following, the first beat is 0.
Time Sig:   |3/4   |      |4/8   |6/8   |5/8   |3/4   |4/4   |
Down Beats:  0      3      6      8      11     13.5   16.5   20.5
For a change from 3/4 to 4/8 the value of the beat does not change (thus, a 4/8 time signature will still have 2 beat values for that measure; the value of the beat unit does not change as the time signature changes). In common practice, a beat is generally associated with a quarter note.
Typically of course, if the Audio Unit is using this callback it will also need to use the HostCallback_GetBeatAndTempo callback as well. Thus a host is required to support that callback if it supports this one.
When the Audio Unit makes this call, any of the out... parameters can be NULL. This indicates to the host that the Audio Unit does not need information about that particular value. Thus, the host should always check the out... pointers to ensure they are valid.
noErr, or kAudioUnitErr_CannotDoInCurrentContext if unable to provide requested information
typedef OSStatus (*HostCallback_GetTransportState)(
    void *      inHostUserData,
    Boolean *   outIsPlaying,
    Boolean *   outTransportStateChanged,
    Float64 *   outCurrentSampleInTimeLine,
    Boolean *   outIsCycling,
    Float64 *   outCycleStartBeat,
    Float64 *   outCycleEndBeat);

When the Audio Unit makes this call, any of the out... parameters can be NULL. This indicates to the host that the Audio Unit does not need information about that particular value. Thus, the host should always check the out... pointers to ensure they are valid.
noErr, or kAudioUnitErr_CannotDoInCurrentContext if unable to provide requested information
The CFPropertyListRef dictionary is a constrained subset of a CFDictionary: it uses CFStrings as keys, and its values can only be CFPropertyListRefs (which include CFStrings, CFNumbers, CFData, or arrays/dictionaries whose keys and values are constrained in this same way).
There are essentially two types of preset dictionaries: the global preset, which specifies the entire state of an AudioUnit, and the part preset, which contains an additional part key and specifies the preset state of a part. See the discussion on multitimbral MusicDevice units for more information.
The dictionary contains several key/value pairs. The name field is filled in by finding the last preset that was set on the unit (whether factory or ClassInfo). The name will be "Untitled" if the unit has no presets and the ClassInfo has never been set.
The dictionary can be parsed using the appropriate CoreFoundation functions. The class data contains enough information to establish a ComponentDescription that can then be used to find and open the appropriate Audio Unit, and then re-establish the state as saved in the dictionary. As this currently only contains the parameter values (for Apple's Audio Units as shipped in 10.2), it may not be complete for some units. For example, the name of the SoundBank for the DLSMusicDevice is not currently saved in the class data. It is anticipated that properties needed to re-establish the complete state of an Audio Unit will be saved in a future release (and consequently, the version number of the class data will be revised).

Developers can add custom property keys that are unique to their Audio Units. In this case we recommend that developers begin their keys with their unique manufacturer ID to avoid possible conflicts with future keys that might be defined, e.g. "ACME-my-custom-key". Apple will continue to define (and thus reserves) properties that are not qualified with a manufacturer ID.
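A sketch of the save/restore round trip using CoreFoundation (the unit variable is assumed; error handling omitted):

// save: get the class info and flatten it to XML for storage
CFPropertyListRef classInfo = NULL;
UInt32 size = sizeof (classInfo);
AudioUnitGetProperty (unit, kAudioUnitProperty_ClassInfo,
                      kAudioUnitScope_Global, 0, &classInfo, &size);
CFDataRef xmlData = CFPropertyListCreateXMLData (kCFAllocatorDefault, classInfo);
CFRelease (classInfo);
// ... store xmlData's bytes in the host's document, and later read them back ...

// restore: reconstitute the dictionary and set it on a matching unit
CFPropertyListRef restored = CFPropertyListCreateFromXMLData (kCFAllocatorDefault,
                                 xmlData, kCFPropertyListImmutable, NULL);
CFRelease (xmlData);
AudioUnitSetProperty (unit, kAudioUnitProperty_ClassInfo,
                      kAudioUnitScope_Global, 0, &restored, sizeof (restored));
CFRelease (restored);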
This property has the same logistics as the pre-existing Current Preset property. However, that property had undefined behaviour when it came to the CFString contained within the AUPreset. Thus, it was decided to deprecate the kAudioUnitProperty_CurrentPreset property and replace it with this one, where the semantics of the CFString usage are both clear and consistent with other properties that use CF objects. See the CF_AU_Properties document in the SDK.

Read:

This can be used by the caller to identify the current preset of the unit. It behaves differently for Factory Presets and User states (ClassInfo). If the last state set is a factory preset (i.e. no call to set ClassInfo has been made), then the AUPreset contains both a valid number (greater than or equal to zero) and name (the number and name of the appropriate factory preset). If the unit has factory presets, then the first time this property is queried it returns the default preset.
If a set ClassInfo property was the last call made, then the AUPreset will contain a number of -1 (signifying a user preset), and the name contained within the class info. If the name has not been set, you get a default name, such as "Untitled". When returned, the CFString in the AUPreset (as with all other CF objects retrieved from Get Property) is owned by the client and should be released. Code in PublicUtility (CAAudioUnit) shows how a client can deal with the migration of usage from units that do not yet implement the _CurrentPreset property.
Write:
The number in AUPreset is used to select the preset.
If presetNumber is equal to or greater than zero (factory preset):
Set the state of the unit to one of the factory presets. The caller
provides an AUPreset (from kAudioUnitProperty_FactoryPresets), and this
becomes the current state of the unit.
kAudioUnitErr_InvalidPropertyValue is returned if the preset number is
not recognised by the Audio Unit.
If presetNumber is less than zero: (signifying a user preset):
Sets the current preset for the unit (including the name supplied in
presetName). This name will then be saved into the unit's data when
getting the current state of the ClassInfo property. This allows the
name of a state to be saved along with the state so it can be shown to
the user when that state is re-established.
struct AUPreset {
    SInt32          presetNumber;
    CFStringRef     presetName;
};
Returns an array of AUPresets containing a number and name for each preset. The number of each preset must be greater than (or equal to) zero, and the numbers need not be ordered or contiguous. The name of each preset can be presented to the user as a means of identifying each preset. The CFArrayRef should be released by the caller.
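For example, a host might fetch the factory presets and make the first one current (a sketch; the unit variable is assumed):

CFArrayRef presets = NULL;
UInt32 size = sizeof (presets);
if (AudioUnitGetProperty (unit, kAudioUnitProperty_FactoryPresets,
                          kAudioUnitScope_Global, 0, &presets, &size) == noErr)
{
    // select the first factory preset
    AUPreset *preset = (AUPreset *) CFArrayGetValueAtIndex (presets, 0);
    AudioUnitSetProperty (unit, kAudioUnitProperty_PresentPreset,
                          kAudioUnitScope_Global, 0, preset, sizeof (AUPreset));
    CFRelease (presets);
}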
The caller should pass in one of the kReverbRoomType enum values. This
property is supported by those units that implement the
kAudioUnitProperty_UsesInternalReverb
(DLSMusicDevice,
3DMixer) as well as the MatrixReverb unit.
Some audio units can use an internal reverb. The 3DMixer and the DLSMusicDevice both have this property on by default (value==1). To turn this off, set the value of this property to zero.
The value is an identifier for the sample rate converter algorithm to use. This is currently supported by the AUConverter unit and the OutputDevice units.
This returns the number of instruments that are able to be used by a MusicDevice Audio Unit. In the DLSMusicDevice this returns the number of instruments that are in the DLS or SoundFont collection that is currently set on this unit.
The MusicDeviceInstrumentID is passed in for the inElement argument, and the call returns the name for that instrumentID.
The caller passes in the instrument "index" in the inElement argument. This "index" is zero-based and must be less than the number of instruments (determined using the kMusicDeviceProperty_InstrumentCount property).
The value passed back will be a MusicDeviceInstrumentID. This MusicDeviceInstrumentID may then be used with the kMusicDeviceProperty_InstrumentName property, or in any of the MusicDevice calls which take a MusicDeviceInstrumentID argument.
This value is further expected to be formatted in a particular manner relating to the bank and patch number values of MIDI. The number is formatted as 0xMMLLPP, where the lowest byte (PP) is the patch number of the instrument, the second byte (LL) is the LSB of the instrument's bank select, and the third byte (MM) is the MSB of the instrument's bank select.
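A sketch of packing and unpacking an ID in this 0xMMLLPP layout (the bank and patch variables are assumptions):

// pack bank-select MSB/LSB and patch number into an instrument ID
UInt32 instrumentID = ((UInt32)bankMSB << 16) | ((UInt32)bankLSB << 8) | patchNumber;

// and unpack:
UInt8 patch = instrumentID         & 0xFF;   // PP: patch number
UInt8 lsb   = (instrumentID >> 8)  & 0xFF;   // LL: bank select LSB
UInt8 msb   = (instrumentID >> 16) & 0xFF;   // MM: bank select MSB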
This property is used with a MusicDevice that requires sample data to be used as a source for its rendering. The DLSMusicDevice will accept both DownLoadable Sound (DLS) files and SoundFonts as the sample data for its instruments.
Returns the name of the currently loaded sound bank of the DLSMusicDevice. The CFStringRef should be released by the caller.
Returns a URL to a MIDINameDocument describing the MusicDevice's patch, note and control names.
This value is initially set to 0 (false). When AudioOutputUnitStart is called the value of this property becomes 1 (true), and when AudioOutputUnitStop is subsequently called the value is again zero. Audio Units do not count the number of times that their start or stop methods are called.
This is a property that is typically supported by Audio Units that generate content corresponding to common multi-channel formats. Currently the following values are defined for this property. When this property is supported, it should be used in preference to kAudioUnitProperty_StreamFormat, as the stream format doesn't describe sufficient information to the renderer when applying spatialization techniques.
kSpeakerConfiguration_HeadPhones
kSpeakerConfiguration_Stereo
kSpeakerConfiguration_Quad
kSpeakerConfiguration_5_1
The caller passes in one of the kSpatializationAlgorithm enum values to specify which particular algorithm should be applied on the specified scope (input for the 3DMixer) and elementID (bus number). This allows different inputs to the 3DMixer to have different spatialization algorithms applied to each input.
A value of 1 will enable the application of a Doppler shift effect to a moving source in the 3DMixer unit.