
Web Audio API

Last Updated : 04 Dec, 2023

The Web Audio API lets developers create and manipulate audio content directly in the web browser. It provides features for processing, analyzing, and synthesizing audio data, so audio functionality can be integrated into a project with little effort. In this article, we will see the interfaces, the basic syntax, and two practical examples of the Web Audio API.

Web Audio API Concepts and Usage

  • The Web Audio API provides modular routing: individual audio operations are performed inside audio nodes, which can be combined to build complex audio functions.
  • The API can instantiate audio sources such as oscillators and media streams, along with effect nodes such as filters and reverb.
  • Sources, effects, and the final destination are connected together to form an audio processing chain (a minimal chain is sketched after this list).
  • The Web Audio API also offers low-latency, precisely timed control, which enables applications such as drum machines and sequencers to trigger sounds at exact moments.
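
For instance, the sketch below builds a minimal processing chain, connecting an oscillator source through a gain (effect) node to the speakers. It is only an illustration; the variable names and values are arbitrary, and most browsers require a user gesture before the context will actually produce sound.

Javascript

// Minimal routing chain: source -> effect -> destination
const context = new (window.AudioContext || window.webkitAudioContext)();

const source = context.createOscillator(); // audio source node
const volume = context.createGain();       // effect node controlling loudness

source.connect(volume);                    // source feeds the effect
volume.connect(context.destination);       // effect feeds the speakers

volume.gain.value = 0.25;                  // keep the tone quiet
source.start();                            // begin producing sound
source.stop(context.currentTime + 1);      // stop after one second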

Web Audio API Interfaces

  • AudioContext: This interface represents an audio processing graph and is the main entry point to the Web Audio API; a context is created with the AudioContext() constructor.
  • AudioNode: This is the base interface for all nodes in the audio graph; it defines the properties and methods common to every type of audio node, whether it is a source, an effect, or a destination.
  • AudioParam: This interface represents an individual parameter that controls some aspect of an audio node, such as volume or frequency, and lets value changes be scheduled over time.
  • AudioParamMap: This interface represents a map-like collection of AudioParam objects, which allows multiple parameters to be accessed and controlled dynamically.
  • BaseAudioContext: This interface is the base interface that is extended by AudioContext, which is used for audio processing contexts.
  • AudioScheduledSourceNode: This interface is used for audio nodes with scheduled start and stop times.
  • OscillatorNode: This is used to generate the periodic waveform which can further be used for the sound synthesis processes.
  • AudioBuffer: This is a short audio asset that can be used to play back through an AudioBufferSourceNode.
  • AudioBufferSourceNode: This is used as an audio source that can play back audio data from AudioBuffer. We can also schedule it to start and stop.
  • MediaElementAudioSourceNode: This is used to describe the audio source that can be connected in HTML elements like <audio> and <video>.
  • MediaStreamAudioSourceNode: This interface is used for audio sources that are connected to a MediaStream.
  • MediaStreamTrackAudioSourceNode: This interface is used to represent an audio source that is connected to a specific audio track within a MediaStream.
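
Several of these interfaces work together in practice. The sketch below, for example, uses an OscillatorNode (which is an AudioScheduledSourceNode) and its AudioParam properties to schedule a short pitch sweep; the frequencies and timings are purely illustrative.

Javascript

const ctx = new (window.AudioContext || window.webkitAudioContext)();

// OscillatorNode is an AudioScheduledSourceNode, so its start and stop
// times can be scheduled against the context's clock
const osc = ctx.createOscillator();
osc.connect(ctx.destination);

// frequency is an AudioParam: schedule a sweep from 220 Hz to 880 Hz
const now = ctx.currentTime;
osc.frequency.setValueAtTime(220, now);
osc.frequency.linearRampToValueAtTime(880, now + 2);

osc.start(now);     // begin immediately
osc.stop(now + 2);  // stop two seconds later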

Syntax:

const audioContext = new (window.AudioContext || window.webkitAudioContext)();

Parameters:

  • audioContext: This is the variable that stores the audio processing context.
  • new (window.AudioContext || window.webkitAudioContext)(): This creates an instance of the audio context, using the standard AudioContext where available and falling back to webkitAudioContext otherwise.
  • webkitAudioContext: This is the vendor-prefixed version of AudioContext supported by older WebKit-based browsers, such as earlier versions of Safari.
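
Note that most browsers create an AudioContext in the 'suspended' state until the page receives a user gesture. A common pattern, sketched below with an assumed button element (the id startAudio is hypothetical), is to create or resume the context inside a click handler.

Javascript

// Assumed: a button with id="startAudio" exists on the page
let audioContext = null;

document.getElementById('startAudio').addEventListener('click', async () => {
    // Create the context lazily, inside a user gesture
    if (!audioContext) {
        audioContext = new (window.AudioContext || window.webkitAudioContext)();
    }
    // Resume it if the browser suspended it due to autoplay policy
    if (audioContext.state === 'suspended') {
        await audioContext.resume();
    }
    console.log('AudioContext state:', audioContext.state);
});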

Example 1: In this example, we create an audio synthesizer using the Web Audio API, where the user can set the frequency, detune, and gain values and start or stop the tone with a play button. We use the setValueAtTime() method of the AudioParam interface to control these parameters.

Javascript




document.addEventListener('DOMContentLoaded', () => {
    // Create the audio context (with a fallback for older WebKit browsers)
    const Acontext = new (window.AudioContext ||
        window.webkitAudioContext)();

    // A single gain node routes the oscillator's output to the speakers
    const gain = Acontext.createGain();
    gain.connect(Acontext.destination);
    gain.gain.value = 0.5;

    // The current oscillator; an OscillatorNode can only be started once,
    // so a fresh node is created every time playback begins
    let osci = null;

    const playBtn = document.getElementById('playButton');
    const freqIp = document.getElementById('frequency');
    const detuneIp = document.getElementById('detune');
    const gainIp = document.getElementById('gain');

    playBtn.addEventListener('click', () => {
        // Resume the context if the browser suspended it (autoplay policy)
        if (Acontext.state === 'suspended') {
            Acontext.resume();
        }
        if (playBtn.textContent === 'Play') {
            osci = Acontext.createOscillator();
            osci.type = 'sine';
            osci.connect(gain);
            osci.frequency
                .setValueAtTime(freqIp.value, Acontext.currentTime);
            osci.detune
                .setValueAtTime(detuneIp.value, Acontext.currentTime);
            gain.gain
                .setValueAtTime(gainIp.value, Acontext.currentTime);
            osci.start();
            playBtn.textContent = 'Stop';
        } else {
            osci.stop();
            osci.disconnect();
            osci = null;
            playBtn.textContent = 'Play';
        }
    });

    // Update the parameters live while the tone is playing
    freqIp.addEventListener('input', () => {
        if (osci) {
            osci.frequency
                .setValueAtTime(freqIp.value, Acontext.currentTime);
        }
    });
    detuneIp.addEventListener('input', () => {
        if (osci) {
            osci.detune
                .setValueAtTime(detuneIp.value, Acontext.currentTime);
        }
    });
    gainIp.addEventListener('input', () => {
        gain.gain
            .setValueAtTime(gainIp.value, Acontext.currentTime);
    });
});
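
Note: The script above assumes the page provides a button with the id playButton and input controls with the ids frequency, detune, and gain; the accompanying HTML markup is not shown here.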


Example 2: In this example, we create an audio analysis feature that shows the peak frequency and average volume of an audio file uploaded from the device. A bar visualization is also drawn on the screen while the audio plays. The example uses interfaces such as AudioContext, AnalyserNode, and MediaElementAudioSourceNode.

Javascript




// Create the audio context (with a fallback for older WebKit browsers)
const aObj =
    new (window.AudioContext ||
        window.webkitAudioContext)();

// Analyser node used to read frequency data for the visualization
const analyseObj = aObj.createAnalyser();
const canvas =
    document.getElementById('visualizer');
const ctx = canvas.getContext('2d');
analyseObj.fftSize = 256;
const leng = analyseObj.frequencyBinCount;
const dArr = new Uint8Array(leng);

// Route the <audio> element through the analyser to the speakers
const src =
    aObj.createMediaElementSource(document
        .getElementById('audioPlayer'));
src.connect(analyseObj);
analyseObj.connect(aObj.destination);
// Load the selected file into the <audio> element via an object URL
document.getElementById('audioFileInput')
    .addEventListener('change', function (e) {
        const file = e.target.files[0];
        if (file) {
            const objectURL = URL.createObjectURL(file);
            document.getElementById('audioPlayer').src = objectURL;
        }
    });
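
// Added for robustness: the context may start in the 'suspended' state under
// autoplay policies, so resume it when the user starts playback
document.getElementById('audioPlayer')
    .addEventListener('play', function () {
        if (aObj.state === 'suspended') {
            aObj.resume();
        }
    });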
// Draw the frequency bars on the canvas on every animation frame
function draw() {
    analyseObj.getByteFrequencyData(dArr);
    ctx.fillStyle = '#2c3e50';
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    const barWidth = (canvas.width / leng) * 2.5;
    let x = 0;
    dArr.forEach(function (data) {
        const barHeight = data;
 
        ctx.fillStyle = `rgb(0, ${barHeight + 100}, 0)`;
        ctx.fillRect(x, canvas.height - barHeight / 2,
            barWidth, barHeight / 2);
 
        x += barWidth + 1;
    });
    requestAnimationFrame(draw);
}
function currTimeFunction() {
    const ap = document.getElementById('audioPlayer');
    const curr = document.getElementById('currentTime');
    curr.textContent =
        `Current Time: ${ap.currentTime.toFixed(2)}s`;
}
// Report the loudest frequency bin, converted from bin index to Hz
// (each bin spans sampleRate / fftSize Hz)
function peakFreqFunction() {
    analyseObj.getByteFrequencyData(dArr);
    const idx = dArr.indexOf(Math.max(...dArr));
    const freq = (idx / leng) * aObj.sampleRate / 2;
    const freqEle = document.getElementById('peakFrequency');
    freqEle.textContent = `Peak Frequency: ${freq.toFixed(2)} Hz`;
}
// Report the average magnitude (0-255) across all frequency bins
function avgVolumeFunction() {
    analyseObj.getByteFrequencyData(dArr);
    const avgVol = dArr.reduce((acc, val) => acc + val, 0) / dArr.length;
    const avgVolEle = document.getElementById('averageVolume');
    avgVolEle.textContent = `Average Volume: ${avgVol.toFixed(2)}`;
}
document.getElementById('audioPlayer')
    .addEventListener('timeupdate', function () {
    currTimeFunction();
    peakFreqFunction();
    avgVolumeFunction();
});
draw();
currTimeFunction();
peakFreqFunction();
avgVolumeFunction();
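
Note: This script assumes the page provides an <audio> element with the id audioPlayer, a file input with the id audioFileInput, a canvas with the id visualizer, and elements with the ids currentTime, peakFrequency, and averageVolume for the readouts; the accompanying HTML markup is not shown here.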



Browser Compatibility:

  • Chrome
  • Edge
  • Firefox
  • Opera
  • Safari

