bindings/webaudio library
Web Audio API
Classes
- AnalyserNode
- The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information. It is an AudioNode that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations. An AnalyserNode has exactly one input and one output. The node works even if the output is not connected.
- AnalyserOptions
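A minimal usage sketch for AnalyserNode, assuming a browser environment (the names `bins` and `draw` are illustrative):

```typescript
// Sketch: real-time frequency analysis of an oscillator.
const ctx = new AudioContext();
const source = new OscillatorNode(ctx);       // any AudioNode works as a source
const analyser = new AnalyserNode(ctx, { fftSize: 2048 });

source.connect(analyser).connect(ctx.destination); // audio passes through unchanged
source.start();

const bins = new Uint8Array(analyser.frequencyBinCount); // fftSize / 2 bins

function draw(): void {
  analyser.getByteFrequencyData(bins);        // snapshot of the current spectrum
  // ...render `bins` to a canvas here...
  requestAnimationFrame(draw);
}
draw();
```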
- AudioBuffer
- The AudioBuffer interface represents a short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or from raw data using AudioContext.createBuffer(). Once put into an AudioBuffer, the audio can then be played by being passed into an AudioBufferSourceNode. Objects of these types are designed to hold small audio snippets, typically less than 45 s. For longer sounds, objects implementing the MediaElementAudioSourceNode are more suitable. The buffer contains data in the following format: non-interleaved IEEE 754 32-bit linear PCM with a nominal range between -1 and +1, that is, a 32-bit floating-point buffer, with each sample between -1.0 and 1.0. If the AudioBuffer has multiple channels, they are stored in separate buffers.
- AudioBufferOptions
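A minimal sketch of building an AudioBuffer from raw PCM, assuming a browser environment:

```typescript
// Sketch: fill a one-second mono AudioBuffer with a 440 Hz sine wave.
const ctx = new AudioContext();
const buffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate); // channels, length, rate

const samples = buffer.getChannelData(0);     // Float32Array, samples in [-1.0, 1.0]
for (let i = 0; i < samples.length; i++) {
  samples[i] = Math.sin((2 * Math.PI * 440 * i) / ctx.sampleRate);
}
// Play it with an AudioBufferSourceNode (see the next sketch).
```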
- AudioBufferSourceNode
- The AudioBufferSourceNode interface is an AudioScheduledSourceNode which represents an audio source consisting of in-memory audio data, stored in an AudioBuffer. This interface is especially useful for playing back audio which has particularly stringent timing accuracy requirements, such as for sounds that must match a specific rhythm and can be kept in memory rather than being played from disk or the network. To play sounds which require accurate timing but must be streamed from the network or played from disk, use an AudioWorkletNode to implement its playback.
- AudioBufferSourceOptions
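A sketch of the usual decode-and-play flow, assuming a browser module context; the URL `clip.ogg` is illustrative:

```typescript
// Sketch: decode a file into an AudioBuffer and play it once.
const ctx = new AudioContext();

async function playClip(url: string): Promise<void> {
  const bytes = await (await fetch(url)).arrayBuffer();
  const buffer = await ctx.decodeAudioData(bytes);

  // An AudioBufferSourceNode is single-use: create one per playback.
  const source = new AudioBufferSourceNode(ctx, { buffer });
  source.connect(ctx.destination);
  source.start();                             // or source.start(ctx.currentTime + 0.5)
}

playClip("clip.ogg");
```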
- AudioContext
- The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode. An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context. It's recommended to create one AudioContext and reuse it instead of initializing a new one each time, and it's OK to use a single AudioContext for several different audio sources and pipelines concurrently.
- AudioContextOptions
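A minimal sketch of creating and resuming a context, assuming a browser environment where autoplay policy requires a user gesture before audio starts:

```typescript
// Sketch: one shared AudioContext, resumed on first user interaction.
const ctx = new AudioContext({ latencyHint: "interactive" });

document.addEventListener("click", async () => {
  if (ctx.state === "suspended") {
    await ctx.resume();                       // contexts often start suspended
  }
}, { once: true });
```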
- AudioDestinationNode
- The AudioDestinationNode interface represents the end destination of an audio graph in a given context — usually the speakers of your device. It can also be the node that will "record" the audio data when used with an OfflineAudioContext. An AudioDestinationNode has no output (as it is the output, no more AudioNode can be linked after it in the audio graph) and one input. The number of channels in the input must be between 0 and the maxChannelCount value or an exception is raised. The AudioDestinationNode of a given AudioContext can be retrieved using the AudioContext.destination property.
- AudioListener
- The AudioListener interface represents the position and orientation of the unique person listening to the audio scene, and is used in audio spatialization. All PannerNodes spatialize in relation to the AudioListener stored in the BaseAudioContext.listener attribute. It is important to note that there is only one listener per context and that it isn't an AudioNode.
- AudioNode
- The AudioNode interface is a generic interface for representing an audio processing module. Examples include an audio source (such as an OscillatorNode or MediaElementAudioSourceNode), the audio destination (AudioDestinationNode), and intermediate processing modules (such as a BiquadFilterNode or GainNode).
- AudioNodeOptions
- AudioParam
- The Web Audio API's AudioParam interface represents an audio-related parameter, usually a parameter of an AudioNode (such as GainNode.gain). An AudioParam can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and following a specific pattern. Each AudioParam has a list of events, initially empty, that define when and how values change. When this list is not empty, changes using the AudioParam.value attribute are ignored. This list of events allows us to schedule changes that have to happen at very precise times, using arbitrary timeline-based automation curves. The time used is the one defined in AudioContext.currentTime.
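A sketch of timeline-based automation, assuming a browser environment:

```typescript
// Sketch: schedule a frequency sweep on an OscillatorNode's AudioParam.
const ctx = new AudioContext();
const osc = new OscillatorNode(ctx, { frequency: 220 });
osc.connect(ctx.destination);

const now = ctx.currentTime;                  // all event times are context times
osc.frequency.setValueAtTime(220, now);       // anchor the automation timeline
osc.frequency.exponentialRampToValueAtTime(880, now + 2); // two-octave sweep over 2 s

osc.start(now);
osc.stop(now + 2);
```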
- AudioParamDescriptor
- The AudioParamDescriptor dictionary of the Web Audio API specifies properties for AudioParam objects. It is used to create custom AudioParams on an AudioWorkletNode. If the underlying AudioWorkletProcessor has a parameterDescriptors static getter, then the returned array of objects based on this dictionary is used internally by the AudioWorkletNode constructor to populate its parameters property accordingly.
- AudioParamMap
- The Web Audio API's AudioParamMap interface represents a set of multiple audio parameters, each described as a mapping of a String identifying the parameter to the AudioParam object representing its value.
- AudioProcessingEvent
- Deprecated: This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; be aware that this feature may cease to work at any time. The Web Audio API AudioProcessingEvent interface represents events that occur when a ScriptProcessorNode input buffer is ready to be processed.
- AudioProcessingEventInit
- AudioRenderCapacity
- AudioRenderCapacityEvent
- AudioRenderCapacityEventInit
- AudioRenderCapacityOptions
- AudioScheduledSourceNode
- The AudioScheduledSourceNode interface—part of the Web Audio API—is a parent interface for several types of audio source node interfaces which share the ability to be started and stopped, optionally at specified times. Specifically, this interface defines the start() and stop() methods, as well as the onended event handler.
- AudioTimestamp
- AudioWorklet
- Secure context: This feature is available only in secure contexts (HTTPS), in some or all supporting browsers. The AudioWorklet interface of the Web Audio API is used to supply custom audio processing scripts that execute in a separate thread to provide very low latency audio processing. The worklet's code is run in the AudioWorkletGlobalScope global execution context, using a separate Web Audio thread which is shared by the worklet and other audio nodes. Access the audio context's instance of AudioWorklet through the BaseAudioContext.audioWorklet property.
- AudioWorkletGlobalScope
- The AudioWorkletGlobalScope interface of the Web Audio API represents a global execution context for user-supplied code, which defines custom AudioWorkletProcessor-derived classes. Each BaseAudioContext has a single AudioWorklet available under the audioWorklet property, which runs its code in a single AudioWorkletGlobalScope. As the global execution context is shared across the current BaseAudioContext, it's possible to define any other variables and perform any actions allowed in worklets — apart from defining AudioWorkletProcessor-derived classes.
- AudioWorkletNode
- Note: Although the AudioWorkletNode interface is available outside secure contexts, the BaseAudioContext.audioWorklet property is not, thus custom AudioWorkletProcessors cannot be defined outside them.
- AudioWorkletNodeOptions
- AudioWorkletProcessor
- The AudioWorkletProcessor interface of the Web Audio API represents the audio processing code behind a custom AudioWorkletNode. It lives in the AudioWorkletGlobalScope and runs on the Web Audio rendering thread. In turn, an AudioWorkletNode based on it runs on the main thread.
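A sketch of the two halves of a worklet, assuming a browser module context; the module path `white-noise.js` and processor name `white-noise` are illustrative, and in TypeScript the worklet-scope globals may need separate declarations (e.g. the @types/audioworklet package):

```typescript
// white-noise.js: runs in the AudioWorkletGlobalScope on the rendering thread.
class WhiteNoiseProcessor extends AudioWorkletProcessor {
  process(_inputs: Float32Array[][], outputs: Float32Array[][]): boolean {
    for (const channel of outputs[0]) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1;   // uniform noise in [-1, 1)
      }
    }
    return true;                              // keep the processor alive
  }
}
registerProcessor("white-noise", WhiteNoiseProcessor);
```

On the main thread, the module is loaded once and the node instantiated by name:

```typescript
const ctx = new AudioContext();
await ctx.audioWorklet.addModule("white-noise.js");
const noise = new AudioWorkletNode(ctx, "white-noise");
noise.connect(ctx.destination);
```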
- BaseAudioContext
- The BaseAudioContext interface of the Web Audio API acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly — you'd use its features via one of these two inheriting interfaces. A BaseAudioContext can be a target of events, therefore it implements the EventTarget interface.
- BiquadFilterNode
- The BiquadFilterNode interface represents a simple low-order filter, and is created using the BaseAudioContext.createBiquadFilter() method. It is an AudioNode that can represent different kinds of filters, tone control devices, and graphic equalizers. A BiquadFilterNode always has exactly one input and one output.
- BiquadFilterOptions
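A minimal BiquadFilterNode sketch, assuming a browser environment:

```typescript
// Sketch: run a sawtooth oscillator through a resonant low-pass filter.
const ctx = new AudioContext();
const osc = new OscillatorNode(ctx, { type: "sawtooth", frequency: 110 });
const filter = new BiquadFilterNode(ctx, {
  type: "lowpass",
  frequency: 800,                             // cutoff in Hz
  Q: 5,                                       // resonance at the cutoff
});

osc.connect(filter).connect(ctx.destination);
osc.start();
```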
- ChannelMergerNode
- The ChannelMergerNode interface, often used in conjunction with its opposite, ChannelSplitterNode, reunites different mono inputs into a single output. Each input is used to fill a channel of the output. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel.
- ChannelMergerOptions
- ChannelSplitterNode
- The ChannelSplitterNode interface, often used in conjunction with its opposite, ChannelMergerNode, separates the different channels of an audio source into a set of mono outputs. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel, as sketched below.
- ChannelSplitterOptions
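A sketch of the split-process-merge pattern with ChannelSplitterNode and ChannelMergerNode, assuming a browser environment; the stereo noise buffer exists only to make the example self-contained:

```typescript
// Sketch: independent per-channel gain via split -> gain -> merge.
const ctx = new AudioContext();

// A stereo source; any two-channel node would do. Here, a 2-channel noise buffer.
const buffer = ctx.createBuffer(2, ctx.sampleRate, ctx.sampleRate);
for (let ch = 0; ch < 2; ch++) {
  const data = buffer.getChannelData(ch);
  for (let i = 0; i < data.length; i++) data[i] = Math.random() * 2 - 1;
}
const source = new AudioBufferSourceNode(ctx, { buffer, loop: true });

const splitter = new ChannelSplitterNode(ctx, { numberOfOutputs: 2 });
const merger = new ChannelMergerNode(ctx, { numberOfInputs: 2 });
const leftGain = new GainNode(ctx, { gain: 1.0 });
const rightGain = new GainNode(ctx, { gain: 0.25 }); // attenuate the right channel only

source.connect(splitter);
splitter.connect(leftGain, 0);                // output 0 = left channel
splitter.connect(rightGain, 1);               // output 1 = right channel
leftGain.connect(merger, 0, 0);               // back into merger input 0
rightGain.connect(merger, 0, 1);              // and input 1
merger.connect(ctx.destination);
source.start();
```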
- ConstantSourceNode
- The ConstantSourceNode interface—part of the Web Audio API—represents an audio source (based upon AudioScheduledSourceNode) whose output is a single, unchanging value. This makes it useful for cases in which you need a constant value coming in from an audio source. In addition, it can be used like a constructible AudioParam by automating the value of its offset or by connecting another node to it; see Controlling multiple parameters with ConstantSourceNode. A ConstantSourceNode has no inputs and exactly one monaural (one-channel) output. The output's value is always the same as the value of the offset parameter.
- ConstantSourceOptions
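A sketch of driving several AudioParams from one ConstantSourceNode, assuming a browser environment:

```typescript
// Sketch: drive two GainNodes from one ConstantSourceNode's offset.
const ctx = new AudioContext();
const gainA = new GainNode(ctx, { gain: 0 }); // start silent; the source drives them
const gainB = new GainNode(ctx, { gain: 0 });

const control = new ConstantSourceNode(ctx, { offset: 0.5 });
control.connect(gainA.gain);                  // connect to the AudioParams themselves
control.connect(gainB.gain);
control.start();

control.offset.value = 0.8;                   // one write updates both gains
```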
- ConvolverNode
- The ConvolverNode interface is an AudioNode that performs a Linear Convolution on a given AudioBuffer, often used to achieve a reverb effect. A ConvolverNode always has exactly one input and one output.
- ConvolverOptions
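A sketch of convolution reverb, assuming a browser module context; `impulse.wav` is an illustrative impulse-response URL:

```typescript
// Sketch: reverb via a ConvolverNode loaded with an impulse response.
const ctx = new AudioContext();

async function makeReverb(url: string): Promise<ConvolverNode> {
  const bytes = await (await fetch(url)).arrayBuffer();
  const impulse = await ctx.decodeAudioData(bytes);
  return new ConvolverNode(ctx, { buffer: impulse });
}

const reverb = await makeReverb("impulse.wav");
const osc = new OscillatorNode(ctx);
osc.connect(reverb).connect(ctx.destination);
osc.start();
```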
- DelayNode
- The DelayNode interface represents a delay-line; an AudioNode audio-processing module that causes a delay between the arrival of the input data and its propagation to the output. A DelayNode always has exactly one input and one output, both with the same number of channels.
- DelayOptions
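A sketch of a feedback echo around a DelayNode, assuming a browser environment (cycles in the graph are permitted when they pass through a DelayNode):

```typescript
// Sketch: a feedback echo built from a DelayNode and a GainNode.
const ctx = new AudioContext();
const source = new OscillatorNode(ctx);
const delay = new DelayNode(ctx, { delayTime: 0.3, maxDelayTime: 1 });
const feedback = new GainNode(ctx, { gain: 0.4 }); // < 1 so echoes die out

source.connect(ctx.destination);              // dry signal
source.connect(delay);
delay.connect(feedback);
feedback.connect(delay);                      // the feedback loop
delay.connect(ctx.destination);               // wet signal

source.start();
source.stop(ctx.currentTime + 0.2);           // short blip, echoed afterwards
```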
- DynamicsCompressorNode
- The DynamicsCompressorNode interface provides a compression effect, which lowers the volume of the loudest parts of the signal in order to help prevent clipping and distortion that can occur when multiple sounds are played and multiplexed together at once. This is often used in musical production and game audio. A DynamicsCompressorNode is an AudioNode that has exactly one input and one output.
- DynamicsCompressorOptions
- GainNode
- The GainNode interface represents a change in volume. It is an AudioNode audio-processing module that causes a given gain to be applied to the input data before its propagation to the output. A GainNode always has exactly one input and one output, both with the same number of channels. The gain is a unitless value, changing with time, that is multiplied to each corresponding sample of all input channels. If modified, the new gain is instantly applied, causing unaesthetic 'clicks' in the resulting audio. To prevent this from happening, never change the value directly but use the exponential interpolation methods on the AudioParam interface.
- GainOptions
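A sketch of a click-free fade using scheduled automation on GainNode.gain, assuming a browser environment:

```typescript
// Sketch: fade out over one second instead of setting gain.value directly.
const ctx = new AudioContext();
const osc = new OscillatorNode(ctx);
const gain = new GainNode(ctx, { gain: 1 });

osc.connect(gain).connect(ctx.destination);
osc.start();

const now = ctx.currentTime;
gain.gain.setValueAtTime(1, now);
// Exponential ramps cannot reach 0, so target a very small positive value.
gain.gain.exponentialRampToValueAtTime(0.001, now + 1);
osc.stop(now + 1);
```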
- IIRFilterNode
- The IIRFilterNode interface of the Web Audio API is an AudioNode processor which implements a general infinite impulse response (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers as well. It lets the parameters of the filter response be specified, so that it can be tuned as needed.
- IIRFilterOptions
- MediaElementAudioSourceNode
- The MediaElementAudioSourceNode interface represents an audio source consisting of an HTML5 <audio> or <video> element. It is an AudioNode that acts as an audio source. A MediaElementAudioSourceNode has no inputs and exactly one output, and is created using the AudioContext.createMediaElementSource() method. The number of channels in the output equals the number of channels of the audio referenced by the HTMLMediaElement used in the creation of the node, or is 1 if the HTMLMediaElement has no audio.
- MediaElementAudioSourceOptions
- MediaStreamAudioDestinationNode
- The MediaStreamAudioDestinationNode interface represents an audio destination consisting of a WebRTC MediaStream with a single AudioMediaStreamTrack, which can be used in a similar way to a MediaStream obtained from Navigator.getUserMedia(). It is an AudioNode that acts as an audio destination, created using the AudioContext.createMediaStreamDestination() method.
- MediaStreamAudioSourceNode
- The MediaStreamAudioSourceNode interface is a type of AudioNode which operates as an audio source whose media is received from a MediaStream obtained using the WebRTC or Media Capture and Streams APIs. This media could be from a microphone (through getUserMedia()) or from a remote peer on a WebRTC call (using the RTCPeerConnection's audio tracks). A MediaStreamAudioSourceNode has no inputs and exactly one output, and is created using the AudioContext.createMediaStreamSource() method. The MediaStreamAudioSourceNode takes the audio from the first MediaStreamTrack whose kind attribute's value is audio. See Track ordering for more information about the order of tracks. The number of channels output by the node matches the number of tracks found in the selected audio track.
- MediaStreamAudioSourceOptions
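A sketch of routing microphone input through the graph, assuming a browser module context with microphone permission granted:

```typescript
// Sketch: microphone -> analyser, via a MediaStreamAudioSourceNode.
const ctx = new AudioContext();
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const mic = ctx.createMediaStreamSource(stream);

const analyser = new AnalyserNode(ctx);
mic.connect(analyser);                        // avoid connecting the mic straight to
                                              // ctx.destination, which causes feedback
```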
- MediaStreamTrackAudioSourceNode
- The MediaStreamTrackAudioSourceNode interface is a type of AudioNode which represents a source of audio data taken from a specific MediaStreamTrack obtained through the WebRTC or Media Capture and Streams APIs. The audio itself might be input from a microphone or other audio sampling device, or might be received through an RTCPeerConnection, among other possible options. A MediaStreamTrackAudioSourceNode has no inputs and exactly one output, and is created using the AudioContext.createMediaStreamTrackSource() method. This interface is similar to MediaStreamAudioSourceNode, except it lets you specifically state the track to use, rather than assuming the first audio track on a stream.
- MediaStreamTrackAudioSourceOptions
- OfflineAudioCompletionEvent
- The Web Audio API OfflineAudioCompletionEvent interface represents events that occur when the processing of an OfflineAudioContext is terminated. The complete event uses this interface.
- OfflineAudioCompletionEventInit
- OfflineAudioContext
- The OfflineAudioContext interface is an AudioContext interface representing an audio-processing graph built from linked-together AudioNodes. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
- OfflineAudioContextOptions
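A minimal offline rendering sketch, assuming a browser module context:

```typescript
// Sketch: render one second of a 440 Hz tone offline into an AudioBuffer.
const offline = new OfflineAudioContext({
  numberOfChannels: 1,
  length: 44100,                              // frames: 1 s at 44.1 kHz
  sampleRate: 44100,
});

const osc = new OscillatorNode(offline, { frequency: 440 });
osc.connect(offline.destination);
osc.start();

const rendered: AudioBuffer = await offline.startRendering();
console.log(rendered.duration);               // approximately 1.0
```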
- OscillatorNode
- The OscillatorNode interface represents a periodic waveform, such as a sine wave. It is an AudioScheduledSourceNode audio-processing module that causes a specified frequency of a given wave to be created—in effect, a constant tone.
- OscillatorOptions
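A minimal OscillatorNode sketch, also showing the start()/stop() scheduling inherited from AudioScheduledSourceNode, assuming a browser environment:

```typescript
// Sketch: a one-second 440 Hz square-wave beep.
const ctx = new AudioContext();
const osc = new OscillatorNode(ctx, { type: "square", frequency: 440 });
osc.connect(ctx.destination);

osc.start();                                  // start() / stop() come from
osc.stop(ctx.currentTime + 1);                // AudioScheduledSourceNode
```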
- PannerNode
- The PannerNode interface represents the position and behavior of an audio source signal in space. It is an AudioNode audio-processing module describing its position with right-hand Cartesian coordinates, its movement using a velocity vector, and its directionality using a directionality cone. A PannerNode always has exactly one input and one output: the input can be mono or stereo but the output is always stereo (2 channels); you can't have panning effects without at least two audio channels!
- PannerOptions
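A sketch of positioning a source in 3D space with a PannerNode, assuming a browser environment:

```typescript
// Sketch: place a source to the listener's right using a PannerNode.
const ctx = new AudioContext();
const osc = new OscillatorNode(ctx);
const panner = new PannerNode(ctx, {
  panningModel: "HRTF",                       // head-related transfer function
  positionX: 3,                               // to the right of the listener
  positionY: 0,
  positionZ: -1,                              // slightly in front
});

osc.connect(panner).connect(ctx.destination);
osc.start();
```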
- PeriodicWave
- The PeriodicWave interface defines a periodic waveform that can be used to shape the output of an OscillatorNode. PeriodicWave has no inputs or outputs; it is used to define custom oscillators when calling OscillatorNode.setPeriodicWave(). The PeriodicWave itself is created/returned by BaseAudioContext.createPeriodicWave().
- PeriodicWaveConstraints
- PeriodicWaveOptions
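A sketch of a custom waveform built from Fourier coefficients, assuming a browser environment:

```typescript
// Sketch: a custom wave from Fourier coefficients (fundamental + one overtone).
const ctx = new AudioContext();
const real = new Float32Array([0, 0, 0]);     // cosine terms (index 0 is DC, ignored)
const imag = new Float32Array([0, 1, 0.5]);   // sine terms: harmonics 1 and 2

const wave = ctx.createPeriodicWave(real, imag);
const osc = new OscillatorNode(ctx, { frequency: 220 });
osc.setPeriodicWave(wave);                    // replaces the built-in wave types
osc.connect(ctx.destination);
osc.start();
```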
- ScriptProcessorNode
- Deprecated: This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; be aware that this feature may cease to work at any time. The ScriptProcessorNode interface allows the generation, processing, or analyzing of audio using JavaScript.
- StereoPannerNode
- The StereoPannerNode interface of the Web Audio API represents a simple stereo panner node that can be used to pan an audio stream left or right. It is an AudioNode audio-processing module that positions an incoming audio stream in a stereo image using a low-cost equal-power panning algorithm. The pan property takes a unitless value between -1 (full left pan) and 1 (full right pan). This interface was introduced as a much simpler way to apply a simple panning effect than having to use a full PannerNode.
- StereoPannerOptions
- WaveShaperNode
- The WaveShaperNode interface represents a non-linear distorter. It is an AudioNode that uses a curve to apply a wave-shaping distortion to the signal. Besides obvious distortion effects, it is often used to add a warm feeling to the signal. A WaveShaperNode always has exactly one input and one output.
- WaveShaperOptions
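A sketch of a WaveShaperNode distortion curve, assuming a browser environment; the tanh transfer function is one common choice, not the only one:

```typescript
// Sketch: soft-clipping distortion via a WaveShaperNode transfer curve.
const ctx = new AudioContext();

// Map input samples in [-1, 1] through tanh for gentle saturation.
const curve = new Float32Array(1024);
for (let i = 0; i < curve.length; i++) {
  const x = (i / (curve.length - 1)) * 2 - 1;
  curve[i] = Math.tanh(3 * x);
}

const osc = new OscillatorNode(ctx, { frequency: 110 });
const shaper = new WaveShaperNode(ctx, { curve, oversample: "4x" });
osc.connect(shaper).connect(ctx.destination);
osc.start();
```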
Enums
Extensions
- PropsAnalyserNode on AnalyserNode
- PropsAnalyserOptions on AnalyserOptions
- PropsAudioBuffer on AudioBuffer
- PropsAudioBufferOptions on AudioBufferOptions
- PropsAudioBufferSourceNode on AudioBufferSourceNode
- PropsAudioBufferSourceOptions on AudioBufferSourceOptions
- PropsAudioContext on AudioContext
- PropsAudioContextOptions on AudioContextOptions
- PropsAudioDestinationNode on AudioDestinationNode
- PropsAudioListener on AudioListener
- PropsAudioNode on AudioNode
- PropsAudioNodeOptions on AudioNodeOptions
- PropsAudioParam on AudioParam
- PropsAudioParamDescriptor on AudioParamDescriptor
- PropsAudioProcessingEvent on AudioProcessingEvent
- PropsAudioProcessingEventInit on AudioProcessingEventInit
- PropsAudioRenderCapacity on AudioRenderCapacity
- PropsAudioRenderCapacityEvent on AudioRenderCapacityEvent
- PropsAudioRenderCapacityEventInit on AudioRenderCapacityEventInit
- PropsAudioRenderCapacityOptions on AudioRenderCapacityOptions
- PropsAudioScheduledSourceNode on AudioScheduledSourceNode
- PropsAudioTimestamp on AudioTimestamp
- PropsAudioWorkletGlobalScope on AudioWorkletGlobalScope
- PropsAudioWorkletNode on AudioWorkletNode
- PropsAudioWorkletNodeOptions on AudioWorkletNodeOptions
- PropsAudioWorkletProcessor on AudioWorkletProcessor
- PropsBaseAudioContext on BaseAudioContext
- PropsBiquadFilterNode on BiquadFilterNode
- PropsBiquadFilterOptions on BiquadFilterOptions
- PropsChannelMergerOptions on ChannelMergerOptions
- PropsChannelSplitterOptions on ChannelSplitterOptions
- PropsConstantSourceNode on ConstantSourceNode
- PropsConstantSourceOptions on ConstantSourceOptions
- PropsConvolverNode on ConvolverNode
- PropsConvolverOptions on ConvolverOptions
- PropsDelayNode on DelayNode
- PropsDelayOptions on DelayOptions
- PropsDynamicsCompressorNode on DynamicsCompressorNode
- PropsDynamicsCompressorOptions on DynamicsCompressorOptions
- PropsGainNode on GainNode
- PropsGainOptions on GainOptions
- PropsIIRFilterNode on IIRFilterNode
- PropsIIRFilterOptions on IIRFilterOptions
- PropsMediaElementAudioSourceNode on MediaElementAudioSourceNode
- PropsMediaElementAudioSourceOptions on MediaElementAudioSourceOptions
- PropsMediaStreamAudioDestinationNode on MediaStreamAudioDestinationNode
- PropsMediaStreamAudioSourceNode on MediaStreamAudioSourceNode
- PropsMediaStreamAudioSourceOptions on MediaStreamAudioSourceOptions
- PropsMediaStreamTrackAudioSourceOptions on MediaStreamTrackAudioSourceOptions
- PropsOfflineAudioCompletionEvent on OfflineAudioCompletionEvent
- PropsOfflineAudioCompletionEventInit on OfflineAudioCompletionEventInit
- PropsOfflineAudioContext on OfflineAudioContext
- PropsOfflineAudioContextOptions on OfflineAudioContextOptions
- PropsOscillatorNode on OscillatorNode
- PropsOscillatorOptions on OscillatorOptions
- PropsPannerNode on PannerNode
- PropsPannerOptions on PannerOptions
- PropsPeriodicWaveConstraints on PeriodicWaveConstraints
- PropsPeriodicWaveOptions on PeriodicWaveOptions
- PropsScriptProcessorNode on ScriptProcessorNode
- PropsStereoPannerNode on StereoPannerNode
- PropsStereoPannerOptions on StereoPannerOptions
- PropsWaveShaperNode on WaveShaperNode
- PropsWaveShaperOptions on WaveShaperOptions