webaudio library Null safety

Web Audio API

https://webaudio.github.io/web-audio-api/

Classes

AnalyserNode
The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information. It is an AudioNode that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations. [...]
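As a hedged illustration of how the analysis data maps back to frequencies: each bin in the array filled by AnalyserNode.getByteFrequencyData() covers sampleRate / fftSize Hz, so a frequency can be mapped to its bin index with a small helper (frequencyToBin is a hypothetical name, and the sketch is plain JavaScript since the underlying API is the browser's Web Audio API):

```javascript
// Hypothetical helper: map a frequency in Hz to the index of the matching
// bin in an AnalyserNode's frequency-data array. The analyser exposes
// frequencyBinCount (fftSize / 2) bins spanning 0 .. sampleRate / 2.
function frequencyToBin(frequency, sampleRate, fftSize) {
  const binWidth = sampleRate / fftSize; // Hz covered by each bin
  return Math.round(frequency / binWidth);
}
```

For example, with the common defaults of a 44100 Hz context and an fftSize of 2048, a 440 Hz tone lands in bin 20.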
AnalyserOptions
AudioBuffer
The AudioBuffer interface represents a short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or from raw data using AudioContext.createBuffer(). Once put into an AudioBuffer, the audio can then be played by being passed into an AudioBufferSourceNode. [...]
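A minimal JavaScript sketch of producing raw data for such a buffer: it synthesizes one channel of sine-wave samples (sineSamples is a hypothetical helper). In a browser you would write the result into a channel of an AudioBuffer, e.g. with buffer.copyToChannel(samples, 0), and play it through an AudioBufferSourceNode:

```javascript
// Hypothetical helper: generate `length` samples of a sine wave at the
// given frequency, suitable for copying into one AudioBuffer channel.
function sineSamples(frequency, sampleRate, length) {
  const samples = new Float32Array(length); // audio samples are float32
  for (let i = 0; i < length; i++) {
    // phase at sample i is 2*pi * frequency * time, with time = i / sampleRate
    samples[i] = Math.sin(2 * Math.PI * frequency * (i / sampleRate));
  }
  return samples;
}
```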
AudioBufferOptions
AudioBufferSourceNode
The AudioBufferSourceNode interface is an AudioScheduledSourceNode which represents an audio source consisting of in-memory audio data, stored in an AudioBuffer. It's especially useful for playing back audio which has particularly stringent timing accuracy requirements, such as for sounds that must match a specific rhythm and can be kept in memory rather than being played from disk or the network. To play sounds which require accurate timing but must be streamed from the network or played from disk, use an AudioWorkletNode to implement its playback. [...]
AudioBufferSourceOptions
AudioContext
The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode. An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context. It's recommended to create one AudioContext and reuse it instead of initializing a new one each time, and it's OK to use a single AudioContext for several different audio sources and pipelines concurrently.
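To make the graph idea concrete, here is a hedged JavaScript sketch that wires an oscillator through a gain node to the context's destination. buildBeep is a hypothetical helper; it assumes only the standard BaseAudioContext factory methods (createOscillator(), createGain()) and the destination property:

```javascript
// Hypothetical helper: build a minimal graph  osc -> gain -> destination.
// The caller starts playback with osc.start() and stops it with osc.stop().
function buildBeep(ctx, frequency = 440, volume = 0.2) {
  const osc = ctx.createOscillator(); // periodic-wave source node
  const gain = ctx.createGain();      // volume-control node
  osc.frequency.value = frequency;    // AudioParam: oscillator pitch in Hz
  gain.gain.value = volume;           // AudioParam: linear gain factor
  osc.connect(gain);                  // source feeds the gain stage
  gain.connect(ctx.destination);      // gain stage feeds the speakers
  return { osc, gain };
}
```

In a page you would call it as `const { osc } = buildBeep(new AudioContext()); osc.start();` (subject to the browser's autoplay policy, which may require a user gesture first).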
AudioContextOptions
The AudioContextOptions dictionary is used to specify configuration options when constructing a new AudioContext object to represent a graph of web audio nodes. It is only used when calling the AudioContext() constructor.
AudioDestinationNode
The AudioDestinationNode interface represents the end destination of an audio graph in a given context — usually the speakers of your device. It can also be the node that will "record" the audio data when used with an OfflineAudioContext. [...]
AudioListener
The AudioListener interface represents the position and orientation of the unique person listening to the audio scene, and is used in audio spatialization. All PannerNodes spatialize in relation to the AudioListener stored in the BaseAudioContext.listener attribute. [...]
AudioNode
The AudioNode interface is a generic interface for representing an audio processing module. [...]
AudioNodeOptions
The AudioNodeOptions dictionary of the Web Audio API specifies options that can be used when creating new AudioNode objects. [...]
AudioParam
The Web Audio API's AudioParam interface represents an audio-related parameter, usually a parameter of an AudioNode (such as GainNode.gain). An AudioParam can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and following a specific pattern. [...]
AudioParamDescriptor
The AudioParamDescriptor dictionary of the Web Audio API specifies properties for AudioParam objects. It is used to create custom AudioParams on an AudioWorkletNode. If the underlying AudioWorkletProcessor has a parameterDescriptors static getter, then the returned array of objects based on this dictionary is used internally by the AudioWorkletNode constructor to populate its parameters property accordingly.
AudioParamMap
Draft: this page is not complete. [...]
AudioProcessingEvent
Deprecated: this feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; be aware that this feature may cease to work at any time. [...]
AudioProcessingEventInit
AudioScheduledSourceNode
The AudioScheduledSourceNode interface—part of the Web Audio API—is a parent interface for several types of audio source node interfaces which share the ability to be started and stopped, optionally at specified times. Specifically, this interface defines the start() and stop() methods, as well as the onended event handler. You can't create an AudioScheduledSourceNode object directly. Instead, use one of the interfaces which extend it, such as AudioBufferSourceNode, OscillatorNode, or ConstantSourceNode. Unless stated otherwise, nodes based upon AudioScheduledSourceNode output silence when not playing (that is, before start() is called and after stop() is called). Silence is represented, as always, by a stream of samples with the value zero (0).
AudioTimestamp
AudioWorklet
Secure context: this feature is available only in secure contexts (HTTPS), in some or all supporting browsers. [...]
AudioWorkletGlobalScope
The AudioWorkletGlobalScope interface of the Web Audio API represents a global execution context for user-supplied code, which defines custom AudioWorkletProcessor-derived classes. Each BaseAudioContext has a single AudioWorklet available under the audioWorklet property, which runs its code in a single AudioWorkletGlobalScope. [...]
AudioWorkletNode
Although the AudioWorkletNode interface is available outside secure contexts, the BaseAudioContext.audioWorklet property is not, thus custom AudioWorkletProcessors cannot be defined outside them. [...]
AudioWorkletNodeOptions
The AudioWorkletNodeOptions dictionary of the Web Audio API is used to specify configuration options when constructing a new AudioWorkletNode object for custom audio processing. It is only used when calling the AudioWorkletNode() constructor. During internal instantiation of the underlying AudioWorkletProcessor, the structured clone algorithm is applied to the options object and the result is passed into AudioWorkletProcessor's constructor.
AudioWorkletProcessor
The AudioWorkletProcessor interface of the Web Audio API represents the audio processing code behind a custom AudioWorkletNode. It lives in the AudioWorkletGlobalScope and runs on the Web Audio rendering thread. In turn, an AudioWorkletNode based on it runs on the main thread.
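A hedged sketch of the render callback for a white-noise generator. In a real worklet this logic would live in the process() method of a class extending AudioWorkletProcessor, registered with registerProcessor() inside the AudioWorkletGlobalScope; it is shown here as a standalone JavaScript function (processNoise is a hypothetical name) so only the callback's contract is assumed:

```javascript
// Hypothetical render callback mirroring AudioWorkletProcessor.process():
// inputs/outputs are arrays of ports, each port an array of Float32Array
// channels (128 samples per render quantum in the current spec).
function processNoise(inputs, outputs, parameters) {
  for (const channel of outputs[0]) {       // every channel of the first output
    for (let i = 0; i < channel.length; i++) {
      channel[i] = Math.random() * 2 - 1;   // uniform noise in [-1, 1)
    }
  }
  return true; // returning true keeps the processor alive
}
```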
BaseAudioContext
The BaseAudioContext interface of the Web Audio API acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly — you'd use its features via one of these two inheriting interfaces. [...]
BiquadFilterNode
The BiquadFilterNode interface represents a simple low-order filter, and is created using the BaseAudioContext.createBiquadFilter() method. It is an AudioNode that can represent different kinds of filters, tone control devices, and graphic equalizers. A BiquadFilterNode always has exactly one input and one output. [...]
BiquadFilterOptions
ChannelMergerNode
The ChannelMergerNode interface, often used in conjunction with its opposite, ChannelSplitterNode, reunites different mono inputs into a single output. Each input is used to fill a channel of the output. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel. [...]
ChannelMergerOptions
ChannelSplitterNode
The ChannelSplitterNode interface, often used in conjunction with its opposite, ChannelMergerNode, separates the different channels of an audio source into a set of mono outputs. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel. [...]
ChannelSplitterOptions
ConstantSourceNode
The ConstantSourceNode interface—part of the Web Audio API—represents an audio source (based upon AudioScheduledSourceNode) whose output is a single unchanging value. This makes it useful for cases in which you need a constant value coming in from an audio source. In addition, it can be used like a constructible AudioParam by automating the value of its offset or by connecting another node to it; see Controlling multiple parameters with ConstantSourceNode. [...]
ConstantSourceOptions
ConvolverNode
The ConvolverNode interface is an AudioNode that performs a Linear Convolution on a given AudioBuffer, often used to achieve a reverb effect. A ConvolverNode always has exactly one input and one output. Note: for more information on the theory behind Linear Convolution, see the Convolution article on Wikipedia. [...]
ConvolverOptions
DelayNode
The DelayNode interface represents a delay-line; an AudioNode audio-processing module that causes a delay between the arrival of input data and its propagation to the output. A DelayNode always has exactly one input and one output, both with the same number of channels. [...]
DelayOptions
DynamicsCompressorNode
The DynamicsCompressorNode interface provides a compression effect, which lowers the volume of the loudest parts of the signal in order to help prevent clipping and distortion that can occur when multiple sounds are played and multiplexed together at once. This is often used in musical production and game audio. DynamicsCompressorNode is an AudioNode that has exactly one input and one output; it is created using the BaseAudioContext.createDynamicsCompressor() method. [...]
DynamicsCompressorOptions
GainNode
The GainNode interface represents a change in volume. It is an AudioNode audio-processing module that causes a given gain to be applied to the input data before its propagation to the output. A GainNode always has exactly one input and one output, both with the same number of channels. [...]
GainOptions
IIRFilterNode
The IIRFilterNode interface of the Web Audio API is an AudioNode processor which implements a general infinite impulse response (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers as well. It lets the parameters of the filter response be specified, so that it can be tuned as needed. [...]
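Such a node is created with BaseAudioContext.createIIRFilter(feedforward, feedback). As a hedged JavaScript sketch of producing those coefficient arrays, here is a second-order lowpass computed from the well-known Audio-EQ-Cookbook (Robert Bristow-Johnson) formulas; lowpassCoefficients is a hypothetical helper name:

```javascript
// Hypothetical helper: RBJ-cookbook coefficients for a 2nd-order lowpass,
// in the array shapes expected by createIIRFilter(feedforward, feedback).
function lowpassCoefficients(cutoff, sampleRate, q = Math.SQRT1_2) {
  const w0 = 2 * Math.PI * cutoff / sampleRate; // normalized angular frequency
  const alpha = Math.sin(w0) / (2 * q);
  const cosW0 = Math.cos(w0);
  const feedforward = [(1 - cosW0) / 2, 1 - cosW0, (1 - cosW0) / 2]; // b0, b1, b2
  const feedback = [1 + alpha, -2 * cosW0, 1 - alpha];               // a0, a1, a2
  return { feedforward, feedback };
}
```

A quick sanity check on these formulas: the gain at DC, (b0 + b1 + b2) / (a0 + a1 + a2), works out to exactly 1, as a lowpass should pass DC unchanged.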
IIRFilterOptions
MediaElementAudioSourceNode
The MediaElementAudioSourceNode interface represents an audio source consisting of an HTML5 <audio> or <video> element. It is an AudioNode that acts as an audio source. A MediaElementAudioSourceNode has no inputs and exactly one output, and is created using the AudioContext.createMediaElementSource() method. The number of channels in the output equals the number of channels of the audio referenced by the HTMLMediaElement used in the creation of the node, or is 1 if the HTMLMediaElement has no audio. [...]
MediaElementAudioSourceOptions
MediaStreamAudioDestinationNode
The MediaStreamAudioDestinationNode interface represents an audio destination consisting of a WebRTC MediaStream with a single AudioMediaStreamTrack, which can be used in a similar way to a MediaStream obtained from Navigator.getUserMedia(). [...]
MediaStreamAudioSourceNode
The MediaStreamAudioSourceNode interface is a type of AudioNode which operates as an audio source whose media is received from a MediaStream obtained using the WebRTC or Media Capture and Streams APIs. This media could be from a microphone (through getUserMedia()) or from a remote peer on a WebRTC call (using the RTCPeerConnection's audio tracks). [...]
MediaStreamAudioSourceOptions
The MediaStreamAudioSourceOptions dictionary provides configuration options used when creating a MediaStreamAudioSourceNode using its constructor. It is not needed when using the AudioContext.createMediaStreamSource() method.
MediaStreamTrackAudioSourceNode
The MediaStreamTrackAudioSourceNode interface is a type of AudioNode which represents a source of audio data taken from a specific MediaStreamTrack obtained through the WebRTC or Media Capture and Streams APIs. The audio itself might be input from a microphone or other audio sampling device, or might be received through an RTCPeerConnection, among other possible options. [...]
MediaStreamTrackAudioSourceOptions
The MediaStreamTrackAudioSourceOptions dictionary is used when specifying options to the MediaStreamTrackAudioSourceNode() constructor. It isn't needed when using the AudioContext.createMediaStreamTrackSource() method.
OfflineAudioCompletionEvent
The Web Audio API OfflineAudioCompletionEvent interface represents events that occur when the processing of an OfflineAudioContext is terminated. The complete event implements this interface. Note: this interface is marked as deprecated; it is still supported for legacy reasons, but it will soon be superseded once the promise-based version of OfflineAudioContext.startRendering is supported in browsers, which will no longer need it.
OfflineAudioCompletionEventInit
OfflineAudioContext
The OfflineAudioContext interface is an AudioContext interface representing an audio-processing graph built from AudioNodes linked together. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
OfflineAudioContextOptions
OscillatorNode
The OscillatorNode interface represents a periodic waveform, such as a sine wave. It is an AudioScheduledSourceNode audio-processing module that causes a specified frequency of a given wave to be created—in effect, a constant tone. [...]
OscillatorOptions
PannerNode
The PannerNode interface represents the position and behavior of an audio source signal in space. It is an AudioNode audio-processing module describing its position with right-hand Cartesian coordinates, its movement using a velocity vector and its directionality using a directionality cone. A PannerNode always has exactly one input and one output: the input can be mono or stereo but the output is always stereo (2 channels); you can't have panning effects without at least two audio channels! [...]
PannerOptions
PeriodicWave
The PeriodicWave interface defines a periodic waveform that can be used to shape the output of an OscillatorNode. A PeriodicWave has no inputs or outputs; it is used to define custom oscillators when calling OscillatorNode.setPeriodicWave(). The PeriodicWave itself is created/returned by BaseAudioContext.createPeriodicWave().
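createPeriodicWave(real, imag) takes arrays of cosine and sine Fourier coefficients. As a hedged JavaScript sketch, here is how the coefficients for a band-limited square-wave approximation could be built (squareWaveCoefficients is a hypothetical helper); a square wave contains only odd sine harmonics, the n-th with amplitude 4 / (n * pi):

```javascript
// Hypothetical helper: Fourier coefficients for a square-wave approximation,
// in the (real, imag) form accepted by BaseAudioContext.createPeriodicWave().
// Index 0 is the DC term and is left at zero.
function squareWaveCoefficients(harmonics) {
  const real = new Float32Array(harmonics + 1); // cosine terms: all zero
  const imag = new Float32Array(harmonics + 1); // sine terms
  for (let n = 1; n <= harmonics; n += 2) {     // odd harmonics only
    imag[n] = 4 / (n * Math.PI);
  }
  return { real, imag };
}
```

In a browser the result would be used as `osc.setPeriodicWave(ctx.createPeriodicWave(real, imag))`.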
PeriodicWaveConstraints
PeriodicWaveOptions
ScriptProcessorNode
Deprecated: this feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; be aware that this feature may cease to work at any time. [...]
StereoPannerNode
The StereoPannerNode interface of the Web Audio API represents a simple stereo panner node that can be used to pan an audio stream left or right. It is an AudioNode audio-processing module that positions an incoming audio stream in a stereo image using a low-cost equal-power panning algorithm. The pan property takes a unitless value between -1 (full left pan) and 1 (full right pan). This interface was introduced as a much simpler way to apply a simple panning effect than having to use a full PannerNode. [...]
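A hedged JavaScript sketch of that equal-power law for a mono input, as the spec describes it: the pan value in [-1, 1] is normalized to [0, 1] and mapped to an angle in [0, pi/2], with the left and right gains being its cosine and sine (equalPowerGains is a hypothetical helper):

```javascript
// Hypothetical helper: equal-power stereo pan gains for a mono input.
// left^2 + right^2 === 1 for every pan value, so perceived loudness
// stays constant as the sound moves across the stereo image.
function equalPowerGains(pan) {
  const x = (pan + 1) / 2; // normalize pan from [-1, 1] to [0, 1]
  return {
    left: Math.cos(x * Math.PI / 2),
    right: Math.sin(x * Math.PI / 2),
  };
}
```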
StereoPannerOptions
WaveShaperNode
The WaveShaperNode interface represents a non-linear distorter. It is an AudioNode that uses a curve to apply a wave shaping distortion to the signal. Besides obvious distortion effects, it is often used to add a warm feeling to the signal. [...]
WaveShaperOptions

Enums

AudioContextLatencyCategory
AudioContextState
AutomationRate
BiquadFilterType
ChannelCountMode
ChannelInterpretation
DistanceModelType
OscillatorType
OverSampleType
PanningModelType