Audio Unit parameters are used to modify the behaviour of an Audio Unit's rendering process. Parameters can often be applied in real time, though some parameters will cause glitching when changed because of the nature of the underlying DSP or the nature of the parameter itself. Parameters can be applied very frequently as a particular value is changed over a short period of time. A parameter's value is defined to be a Float32 and, like a property, a parameter is specified through its parameterID, its scope, and the element (or bus) within that scope.
Contents
    Getting and Setting Parameter Values
    Parameter Types and Information
    Tempo Parameters
    ParameterIDs for Apple Audio Units
Functions
    AudioUnitGetParameter - Get a parameter's value.
    AudioUnitSetParameter - Set a parameter's value.
    AudioUnitScheduleParameters - Schedule one or more parameter changes to happen within the current render cycle.

Defined Types
    AudioUnitParameterID

Structs
    AudioUnitParameter
    AudioUnitParameterEvent
    AudioUnitParameterInfo

Enumerations
    AUParameterEventType
    AudioUnitParameterUnit
    AudioUnitParameterInfo flags
typedef UInt32 AudioUnitParameterID;
struct AudioUnitParameter {
    AudioUnit               mAudioUnit;
    AudioUnitParameterID    mParameterID;
    AudioUnitScope          mScope;
    AudioUnitElement        mElement;
};

This structure is used in the Audio Unit Utilities to specify an individual Audio Unit parameter with a single function argument.
ComponentResult AudioUnitGetParameter(
    AudioUnit               ci,
    AudioUnitParameterID    inID,
    AudioUnitScope          inScope,
    AudioUnitElement        inElement,
    Float32 *               outValue
);

This function retrieves the current value of the parameter as specified by its ID, scope and element.
ComponentResult AudioUnitSetParameter(
    AudioUnit               ci,
    AudioUnitParameterID    inID,
    AudioUnitScope          inScope,
    AudioUnitElement        inElement,
    Float32                 inValue,
    UInt32                  inBufferOffsetInFrames
);

This function sets the current value of the parameter as specified by its ID, scope and element.
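For instance, a minimal sketch of reading and changing a parameter value. The particular unit and parameter (a stereo mixer's output volume, addressed with kStereoMixerParam_Volume from AudioUnit/AudioUnitParameters.h) are assumptions for illustration:

#include <AudioUnit/AudioUnit.h>
#include <AudioUnit/AudioUnitParameters.h>

// Sketch: halve the output volume of an (assumed) stereo mixer unit.
void HalveMixerVolume(AudioUnit mixer)
{
    Float32 volume = 0;
    ComponentResult err = AudioUnitGetParameter(mixer,
                                kStereoMixerParam_Volume,
                                kAudioUnitScope_Output,
                                0,          // element (bus) 0
                                &volume);
    if (err) return;

    // A buffer offset of zero applies the change as soon as possible.
    AudioUnitSetParameter(mixer,
                          kStereoMixerParam_Volume,
                          kAudioUnitScope_Output,
                          0,
                          volume * 0.5,
                          0);               // inBufferOffsetInFrames
}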
The AUParameterListener, described in the Audio Unit Utilities section, provides an alternative function, AUParameterSet, that can be used to set the value of a parameter. This service can also notify other portions of your program that the value of an Audio Unit parameter has changed. Internally it uses AudioUnitSetParameter to change the parameter value.
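A minimal sketch of this, assuming the AUParameterSet signature from AudioToolbox/AudioUnitUtilities.h (passing NULL for the sending listener and object so that all registered listeners are notified):

#include <AudioToolbox/AudioUnitUtilities.h>

// Sketch: set a (global scope) parameter via the listener mechanism,
// notifying any registered listeners of the change.
void SetAndNotify(AudioUnit unit, AudioUnitParameterID paramID, Float32 value)
{
    AudioUnitParameter param;
    param.mAudioUnit   = unit;
    param.mParameterID = paramID;
    param.mScope       = kAudioUnitScope_Global;
    param.mElement     = 0;

    AUParameterSet(NULL, NULL, &param, value, 0);
}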
Parameter changes are ideally scheduled just before the AudioUnitRender call is made. To this end, AudioUnitSetParameter also takes an inBufferOffsetInFrames argument; this value tells the Audio Unit that the parameter value should take effect inBufferOffsetInFrames sample frames into the buffer that is next rendered. A value of zero will apply the new parameter value as soon as possible. Note, however, that not all Audio Units will be able to make intra-buffer adjustments to parameter values, though the API gives units the opportunity to do so.
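For example (with the unit and parameterID assumed), a change timed to land 256 sample frames into the next rendered buffer:

// Apply the new value 256 sample frames into the next rendered buffer.
AudioUnitSetParameter(unit, paramID, kAudioUnitScope_Global, 0,
                      0.25, 256);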
In many digital music environments there are two rates that are of interest. The first is the sample rate of the audio data itself; this is a well understood concept, as is how this rate affects the frequencies that can be represented by a particular digitally generated audio signal. The second is often known as the k-rate, or control rate. This rate determines the granularity of control or (in Audio Unit parlance) parameter value changes. For Audio Unit processing, the control period is generally the size of the audio buffer (in sample frames) divided by the sample rate.

For instance, given a sample rate of 48 kHz and a buffer size of 512 samples, the k-rate period is 10.666 msecs, whereas a buffer size of 128 samples gives a period of 2.666 msecs. So parameter changes will have a granularity of 10.666 msecs (or 2.666 msecs for 128 frames). The provision of the buffer offset in AudioUnitSetParameter allows for an effective k-rate equal to the sample rate (i.e. parameter changes can be applied on a sample-by-sample basis).
ComponentResult AudioUnitScheduleParameters(
    AudioUnit                       ci,
    const AudioUnitParameterEvent * inParameterEvent,
    UInt32                          inNumParamEvents
);

The limitations of AudioUnitSetParameter (see above) led to an enhanced method of setting parameter values with the V2 Audio Unit, with the addition of this function.
This allows multiple parameter events to be scheduled simultaneously, making it explicit that the Audio Unit may need to apply several parameter value changes within the same render buffer. This call introduces a new kind of parameter event, the ramp parameter. This is used when the host wishes the Audio Unit to ramp a parameter from a start value to an end value, over a specified number of frames. Ramp parameters should be rescheduled for each buffer.

For instance, suppose you have a parameter that you wish to ramp from a value of 0.5 to 0.8. You want this ramp to last for 2000 samples, and at the time you want this to start, it should begin 400 samples into the current buffer. We'll also assume a render buffer size of 512 samples. This situation would be scheduled like this:
                   startOffset   durInFrames   startValue   endValue
First Schedule:        400           2000          0.5         0.8
    - Render first buffer
Second Schedule:      -112           2000          0.5         0.8
    - Render second buffer
Third Schedule:       -624           2000          0.5         0.8
    - Render third buffer
Fourth Schedule:     -1136           2000          0.5         0.8
    - Render fourth buffer
Fifth Schedule:      -1648           2000          0.5         0.8
    - Render fifth (and last) buffer

The first schedule indicates where the ramp event should start. The second and subsequent schedule events (done before each subsequent buffer) indicate, with the negative startOffset, how far into the ramp we are at the start of the next render cycle.
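A sketch of this per-buffer rescheduling, using AudioUnitScheduleParameters and the AudioUnitParameterEvent struct defined below (the unit, parameterID and render loop are assumptions for illustration):

#include <AudioUnit/AudioUnit.h>

// Sketch: schedule the ramp described above, rescheduling it with an
// adjusted startBufferOffset before each of the five render cycles.
void ScheduleRampAcrossBuffers(AudioUnit unit, AudioUnitParameterID paramID)
{
    const SInt32 kBufferSize = 512;
    SInt32 startOffset = 400;   // ramp begins 400 frames into buffer 1

    for (int i = 0; i < 5; ++i) {
        AudioUnitParameterEvent event;
        event.scope     = kAudioUnitScope_Global;
        event.element   = 0;
        event.parameter = paramID;
        event.eventType = kParameterEvent_Ramped;
        event.eventValues.ramp.startBufferOffset = startOffset;
        event.eventValues.ramp.durationInFrames  = 2000;
        event.eventValues.ramp.startValue        = 0.5;
        event.eventValues.ramp.endValue          = 0.8;

        AudioUnitScheduleParameters(unit, &event, 1);

        // ... render the next buffer here (AudioUnitRender) ...

        // Each schedule is expressed relative to the upcoming buffer, so
        // the offset recedes by one buffer: 400, -112, -624, -1136, -1648.
        startOffset -= kBufferSize;
    }
}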
The curve that is applied to a ramped parameter is determined by the Audio Unit itself. This allows the host to just specify the progression of the curve in a linear manner (as described above). If a host wishes to apply a non-linear curve, that can currently be approximated through smaller line segments. This may change in the future with the addition of standard properties to allow an Audio Unit to have particular curves applied to ramped parameter values.

If the timeline is shifted by the host to some place within a ramped parameter event, the host need only schedule the remaining portion of the ramp. For instance, in the example above, imagine playback was stopped and the play head was moved so that it fell 402 samples before the end of the ramp event. The host need only schedule the last slice:
                   startOffset   durInFrames   startValue   endValue
First Schedule:      -1598           2000          0.5         0.8
    - Render first (and last) buffer

If parameter values are to be set outside of the rendering process and there is no external timing information available that would allow a parameter value to be scheduled within a buffer, then the buffer offset can and should be set to zero. This allows the Audio Unit to apply the parameter value as soon as possible. This is, for instance, how the AUParameterSet call works, and also how the GenericAUView component sends parameter value changes. In this situation the AudioUnitScheduleParameters call can't be used, so ramped parameter events cannot be scheduled without knowledge of where in the Audio Unit's sample-based timeline the next render cycle will fall.
typedef UInt32 AUParameterEventType;
enum {
    kParameterEvent_Immediate   = 1,
    kParameterEvent_Ramped      = 2
};

The AudioUnitParameterEvent struct can contain either a single parameter value, or information for a ramped parameter event.
struct AudioUnitParameterEvent {
    AudioUnitScope          scope;
    AudioUnitElement        element;
    AudioUnitParameterID    parameter;
    AUParameterEventType    eventType;
    union {
        struct {
            SInt32      startBufferOffset;
            UInt32      durationInFrames;
            Float32     startValue;
            Float32     endValue;
        } ramp;
        struct {
            UInt32      bufferOffset;
            Float32     value;
        } immediate;
    } eventValues;
};
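For comparison with the ramp sketch above, an immediate event (again with the unit and parameterID assumed) behaves like AudioUnitSetParameter with a buffer offset:

// Sketch: an immediate parameter event, taking effect 128 sample
// frames into the next rendered buffer.
void ScheduleImmediateChange(AudioUnit unit, AudioUnitParameterID paramID)
{
    AudioUnitParameterEvent event;
    event.scope     = kAudioUnitScope_Global;
    event.element   = 0;
    event.parameter = paramID;
    event.eventType = kParameterEvent_Immediate;
    event.eventValues.immediate.bufferOffset = 128;
    event.eventValues.immediate.value        = 0.75;

    AudioUnitScheduleParameters(unit, &event, 1);
}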
Parameters can be of various types, with differing ranges and characteristics. It is expected that Audio Units will publish their parameters in the format that is most natural to the parameter itself.
struct AudioUnitParameterInfo {
    char                    name[60];
    CFStringRef             cfNameString;
    AudioUnitParameterUnit  unit;
    Float32                 minValue;
    Float32                 maxValue;
    Float32                 defaultValue;
    UInt32                  flags;
};

Contains information about a particular parameter as defined by an Audio Unit. This information includes the name of the parameter; its minimum, maximum and default values; its flags; and the parameter's unit or format, as described below.
The kAudioUnitProperty_ParameterInfo property returns information about an Audio Unit parameter in this structure.
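A sketch of retrieving this structure (the unit and parameterID are assumed); note that the parameter ID is passed in the element argument of AudioUnitGetProperty:

#include <stdio.h>
#include <AudioUnit/AudioUnit.h>

// Sketch: fetch and print the published range of one parameter.
void PrintParameterRange(AudioUnit unit, AudioUnitParameterID paramID)
{
    AudioUnitParameterInfo info;
    UInt32 size = sizeof(info);
    ComponentResult err = AudioUnitGetProperty(unit,
                                kAudioUnitProperty_ParameterInfo,
                                kAudioUnitScope_Global,
                                paramID,    // parameter ID goes here
                                &info,
                                &size);
    if (err == noErr)
        printf("%s: min %f, max %f, default %f\n",
               info.name, info.minValue, info.maxValue, info.defaultValue);
}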
typedef UInt32 AudioUnitParameterUnit;
enum {
    kAudioUnitParameterUnit_Generic             = 0,
    kAudioUnitParameterUnit_Indexed             = 1,
    kAudioUnitParameterUnit_Boolean             = 2,
    kAudioUnitParameterUnit_Percent             = 3,
    kAudioUnitParameterUnit_Seconds             = 4,
    kAudioUnitParameterUnit_SampleFrames        = 5,
    kAudioUnitParameterUnit_Phase               = 6,
    kAudioUnitParameterUnit_Rate                = 7,
    kAudioUnitParameterUnit_Hertz               = 8,
    kAudioUnitParameterUnit_Cents               = 9,
    kAudioUnitParameterUnit_RelativeSemiTones   = 10,
    kAudioUnitParameterUnit_MIDINoteNumber      = 11,
    kAudioUnitParameterUnit_MIDIController      = 12,
    kAudioUnitParameterUnit_Decibels            = 13,
    kAudioUnitParameterUnit_LinearGain          = 14,
    kAudioUnitParameterUnit_Degrees             = 15,
    kAudioUnitParameterUnit_EqualPowerCrossfade = 16,
    kAudioUnitParameterUnit_MixerFaderCurve1    = 17,
    kAudioUnitParameterUnit_Pan                 = 18,
    kAudioUnitParameterUnit_Meters              = 19,
    kAudioUnitParameterUnit_AbsoluteCents       = 20
};

This enumeration defines a parameter's value units, as returned in the unit field of the AudioUnitParameterInfo structure.
These ranges are recommended ranges only and some Audio Units may report
different ranges. Audio Units publish their parameters using the
kAudioUnitProperty_ParameterInfo
property. The Audio
Unit Properties section contains general information about those
properties that relate to parameter management.
For kAudioUnitParameterUnit_AbsoluteCents, a parameter value is related to a frequency f (in Hertz) by:

absoluteCents = 1200 * log2(f / 440) + 6900
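A minimal sketch of this conversion and its inverse (the function names are illustrative); note that A440 maps to 6900 absolute cents, which is MIDI note 69 * 100:

#include <math.h>
#include <AudioUnit/AudioUnit.h>    // for Float32

// Convert a frequency in Hertz to absolute cents (440 Hz -> 6900).
static Float32 HertzToAbsoluteCents(Float32 f)
{
    return 1200.0f * log2f(f / 440.0f) + 6900.0f;
}

// The inverse: absolute cents back to Hertz (6900 -> 440 Hz).
static Float32 AbsoluteCentsToHertz(Float32 cents)
{
    return 440.0f * powf(2.0f, (cents - 6900.0f) / 1200.0f);
}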
Some parameters may publish their parameter types as kAudioUnitParameterUnit_Beats or kAudioUnitParameterUnit_BPM. It is worth noting some points here about how these can relate to the kAudioUnitProperty_HostCallbacks property, which gives an Audio Unit the ability to get information from the hosting application about the current tempo.

Firstly, a parameter unit type of kAudioUnitParameterUnit_BPM means that the parameter value will be timed as beats per minute. So far so good, but we still haven't told the audio unit what to do, though without any further information we can reasonably presume that this parameter will be timed to beat at that rate.
But we can have a finer degree of control here, and this is represented by the parameter unit called kAudioUnitParameterUnit_Beats.
This describes how that parameter will relate its timing to the tempo. Here, we're expecting that a value of "1" means that the parameter is timed to a beat (as expressed by the _BPM value, whether this is maintained by the host tempo or set by the user), a value of "2" means the parameter is timed at 2 beats, a value of "0.5" means it is timed at half a beat, and so on.

Either of these parameter units can be used independently, i.e. an explicit beat tempo can be specified using the kAudioUnitParameterUnit_BPM parameter unit, or a scaled relationship to a tempo can be represented using the kAudioUnitParameterUnit_Beats parameter unit. However, the kAudioUnitParameterUnit_Beats parameter unit is only useful if the audio unit itself can follow the tempo of the host; this parameter then allows a scaled or relative relationship to be drawn to that tempo.

There may be some situations, however, when the Audio Unit is not able to get tempo information from the host: either the host doesn't support that property (kAudioUnitProperty_HostCallbacks), or the host is being used in a live situation where no tempo can be ascertained by the host.

In this case an Audio Unit may decide to publish parameters of both of these unit types, thus allowing the user to specify both a default tempo (kAudioUnitParameterUnit_BPM) and a relationship of other parameters to that tempo, with parameters based on the kAudioUnitParameterUnit_Beats parameter type.
Let's take an example. We have a multi-tap delay unit where each tap can be delayed by a different time.
Delay Tempo Parameter (kAudioUnitParameterUnit_BPM parameter units)
    -> Range from 20 to 400 BPM
Delay Tap 1 Time (kAudioUnitParameterUnit_Beats parameter units)
    -> Range from 0 to 4
Delay Tap 2 Time (kAudioUnitParameterUnit_Beats parameter units)
    -> Range from 0 to 4
Delay Tap 3 Time (kAudioUnitParameterUnit_Beats parameter units)
    -> Range from 0 to 4
We decide that we want these three taps to have a musically interesting relationship to each other. So, we decide to set "Delay Tap 1 Time" to 1, "Delay Tap 2 Time" to 2 and "Delay Tap 3 Time" to 0.5. Now, as we set the tempo of the delay unit, these taps will change their actual delay time in that relationship.
Tempo     Tap 1 (1)     Tap 2 (2)     Tap 3 (0.5)
60        60 a min      30 a min      120 a min
120       120 a min     60 a min      240 a min
Thus, by just changing the Delay Tempo Parameter, the times of each of the taps (in this example) change relative to that tempo setting. An Audio Unit may link these tempos to beats, or may not - this is just an example.
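A sketch of the underlying arithmetic (the function name is illustrative): a kAudioUnitParameterUnit_Beats value converts to a time in seconds as beats * 60 / BPM.

// Sketch: convert a beats-unit parameter value to a delay time in
// seconds for a given tempo. At 60 BPM, 1 beat == 1 second.
static Float32 BeatsToSeconds(Float32 beats, Float32 bpm)
{
    return beats * 60.0f / bpm;
}

// e.g. at 120 BPM: tap 1 (1 beat) = 0.5 sec, tap 2 (2 beats) = 1.0 sec,
// tap 3 (0.5 beats) = 0.25 sec.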
You can also see that one way to view the beat parameter is as a musical note. Let's assume that the BPM is in terms of quarter notes (crotchets), typical for much music where the time signature is 4/4. So a beat value of 1 means that the parameter beats at the rate of a quarter note. If the beat value is 2, then this parameter will beat at the value of a half note (minim), i.e. 2 beats per note. If the beat value is 0.5, then this parameter will beat at the value of an eighth note (quaver), i.e. half a beat per note. Of course, we don't know, nor do we need to know in this case, how the music is being notated. All we care about here is how many beats per minute, and then what the relationship of a beat parameter would be to that tempo.
Now, a complication can come into play here when we relate this to the kAudioUnitProperty_HostCallbacks property and its support for deriving the tempo from the host. This is obviously a highly desirable capability, but if an Audio Unit also publishes a tempo based parameter, which tempo wins?
It is suggested that, if the audio unit can support both, it present the user with a boolean parameter that allows the user to choose whether the tempo should be tracked from the host, or whether the tempo set by this tempo-based parameter is to be the tempo used. Thus:
Delay Tracks Host Tempo (kAudioUnitParameterUnit_Boolean)
    -> if true, ignores user setting of the "Delay Tempo Parameter" above
    -> if false, ignores host tempo
Where "Delay Tracks Host Tempo" is set to true, the Audio Unit would then treat the "Delay Tempo Parameter" as a read only parameter, and attempts by the user to explicitly set this might fail.
enum {
    kAudioUnitParameterFlag_IsHighResolution  = (1L << 23),
    kAudioUnitParameterFlag_NonRealTime       = (1L << 24),
    kAudioUnitParameterFlag_CanRamp           = (1L << 25),
    kAudioUnitParameterFlag_ExpertMode        = (1L << 26),
    kAudioUnitParameterFlag_HasCFNameString   = (1L << 27),
    kAudioUnitParameterFlag_IsGlobalMeta      = (1L << 28),
    kAudioUnitParameterFlag_IsElementMeta     = (1L << 29),
    kAudioUnitParameterFlag_IsReadable        = (1L << 30),
    kAudioUnitParameterFlag_IsWritable        = (1L << 31)
};

The flags specified in the flags field of the AudioUnitParameterInfo struct contain additional information about the parameter.
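A sketch of testing these flags on an AudioUnitParameterInfo retrieved as shown earlier:

// Sketch: inspect the flags of a previously fetched
// AudioUnitParameterInfo ("info").
if (info.flags & kAudioUnitParameterFlag_IsWritable) {
    // the parameter's value may be set
}
if (info.flags & kAudioUnitParameterFlag_CanRamp) {
    // the unit accepts kParameterEvent_Ramped events for this parameter
}
if (info.flags & kAudioUnitParameterFlag_HasCFNameString) {
    // info.cfNameString contains a CFString version of the name
}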
The discovery mechanism for parameters is the most reliable means of obtaining information about a parameter. However, this can be somewhat onerous if an Audio Unit is being used programmatically. Thus, in AudioUnit/AudioUnitParameters.h, complete definitions of the parameterIDs for all of Apple's Audio Units are provided, with comments about their respective scope, range and default values. Third party Audio Units that may also be used in such a manner should publish a similar definition of their parameters to make this process easier.