/r/webaudio



A subreddit dedicated to anything related to the Web Audio API. Demos, creations, programming problems, articles: read and post them here.

Interesting links:

Tutorials

Demos and examples

Currently supported browsers:

  • Chrome
  • iOS 6 beta
  • Safari 6 Developer Preview


1,003 Subscribers

2

new OscillatorNode or update existing?

If I'm creating a sequencer or an arpeggiator, should every note be a newly constructed node (e.g. "new OscillatorNode()" / "new GainNode()"), rather than continuously updating the frequency of an existing oscillator and its associated GainNode?

I'm asking for rules of thumb rather than a black-and-white answer, because I know there are exceptions to any rule.
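For what it's worth, the common rule of thumb is that OscillatorNodes are one-shot: once stopped they cannot be restarted, so sequencers usually build a fresh oscillator/gain pair per note and let them be garbage-collected. A minimal sketch, assuming a running AudioContext; the envelope values and the `midiToFreq` helper are illustrative, not from the original post:

```javascript
// Pure helper: MIDI note number -> frequency in Hz (equal temperament, A4 = 440).
function midiToFreq(midi) {
  return 440 * 2 ** ((midi - 69) / 12);
}

// One fresh oscillator + gain pair per note; cheap to create, freed after stop().
function playNote(ctx, destination, midi, when, duration) {
  const osc = new OscillatorNode(ctx, { frequency: midiToFreq(midi) });
  const gain = new GainNode(ctx, { gain: 0 });
  osc.connect(gain).connect(destination);

  // Short envelope to avoid clicks at note boundaries.
  gain.gain.setValueAtTime(0, when);
  gain.gain.linearRampToValueAtTime(0.8, when + 0.01);
  gain.gain.setTargetAtTime(0, when + duration, 0.05);

  osc.start(when);
  osc.stop(when + duration + 0.5); // node becomes collectible after it stops
}
```

Updating a single long-lived oscillator's frequency is the other idiom, and it makes sense for portamento/legato; for discrete notes with envelopes, fresh nodes are simpler and just as cheap.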


2 Comments
2024/12/02
04:30 UTC

2

Profiling Methods

I'm building an audio UI, and I want to measure the average time between a UI trigger and actual audio playback.

I'm using Tone.js for audio and PixiJS for the UI.

What sort of strategies are people using to test such things?
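One low-effort starting point is to combine the event-handler delay with the latency the AudioContext reports about itself. A sketch only: `estimateLatencyMs`, `instrument`, and `triggerSound` are made-up names, and I believe recent Tone.js exposes the raw context via `Tone.getContext().rawContext`:

```javascript
// Pure helper: estimate total UI-to-speaker delay in milliseconds.
function estimateLatencyMs(handlerDelayMs, baseLatencySec, outputLatencySec) {
  // outputLatency is not implemented in every browser; fall back to baseLatency.
  const hwLatency = outputLatencySec ?? baseLatencySec ?? 0;
  return handlerDelayMs + hwLatency * 1000;
}

// Wire the estimate into a UI control. `triggerSound` stands in for whatever
// Tone.js call the UI fires; `ctx` is the underlying AudioContext.
function instrument(button, ctx, triggerSound) {
  button.addEventListener('pointerdown', () => {
    const t0 = performance.now();
    triggerSound();
    const handlerDelay = performance.now() - t0;
    console.log('estimated latency:',
      estimateLatencyMs(handlerDelay, ctx.baseLatency, ctx.outputLatency), 'ms');
  });
}
```

For ground truth, a loopback recording (play a click, record it with a microphone, compare timestamps) is the only method that captures the whole chain, since `outputLatency` is an estimate.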

1 Comment
2024/12/01
22:21 UTC

5

what is the utility of getFrequencyResponse() for Biquad Filter/IIRFilter?

I'm researching the Web Audio API pretty heavily, but I'm coming at this from a creative standpoint rather than a math or electrical-engineering one, learning the fundamentals as I go...

https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode/getFrequencyResponse

how does someone _use_ the frequency response data? I'm trying to wrap my head around what utility that information has for audio processing, and there isn't much written about it online (or I don't know where to look!)

does anyone have any perspective on this?
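The main practical use is visualization and sanity-checking: you pass in an array of frequencies and get back the filter's magnitude and phase at each one, which is exactly what you need to draw an EQ curve in a UI or to verify a filter does what you intended, without running audio through it. A sketch; `logSpacedFrequencies` and `sampleResponse` are illustrative names:

```javascript
// Pure helper: n log-spaced frequencies between fMin and fMax (Hz),
// matching how humans (and EQ plots) perceive pitch.
function logSpacedFrequencies(n, fMin, fMax) {
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    out[i] = fMin * (fMax / fMin) ** (i / (n - 1));
  }
  return out;
}

// Sample a BiquadFilterNode's response so it can be drawn as a curve.
function sampleResponse(filter, n = 256) {
  const freqs = logSpacedFrequencies(n, 20, 20000);
  const mag = new Float32Array(n);   // linear gain at each frequency
  const phase = new Float32Array(n); // phase shift in radians
  filter.getFrequencyResponse(freqs, mag, phase);
  // Convert gain to dB for plotting: 20 * log10(linear gain).
  return Array.from(freqs, (f, i) => ({ f, db: 20 * Math.log10(mag[i]) }));
}
```

Feed the `{f, db}` pairs to a canvas or SVG path and you have the familiar filter-curve display from DAW EQs.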

3 Comments
2024/11/30
16:48 UTC

10

Introducing pd4web: Run PureData in Your Web Browser

Hey everyone! I wanted to share a tool for anyone working with web audio or exploring interactive sound design: pd4web.

What is PureData (Pd)? 🤔

PureData (often abbreviated as Pd) is an open-source visual programming language used for creating interactive computer music and multimedia works. It's widely used by musicians, sound designers, and artists for live performances, sound synthesis, and more. Pd works by connecting blocks (called objects) to create sound processing flows, allowing users to build complex audio systems without needing to write traditional code. You can think of it as a canvas for interactive sound.

What is pd4web? 🚀

pd4web automates the creation of an Emscripten environment and processes the Pd patch; the output is a functional website with visual objects (sliders, knobs, keyboards, etc.). You can also combine it with tools like p5.js and VexFlow, among others.

Key Features 🌟

  1. Streamlined Development: Build full audio applications online using PureData’s visual programming interface. You don’t need to worry about complex setups or installations, pd4web will handle the emscripten configuration and build.

  2. Easy Access for Performers/Users: Performers and users can load and interact with the audio app in the browser, without the hassle of setting up PureData or managing libraries. Simply load a page, and start performing!

  3. Live Electronic Music Preservation: pd4web automatically creates a repository for all the code and assets you need to run your compositions, preserving your live electronic works for future use or sharing.

Check out pd4web: https://charlesneimog.github.io/pd4web/

2 Comments
2024/11/27
17:22 UTC

0

Is there any library that exposes implementations of plain AudioNodes?

I'm not sure if something like this exists, but I imagine a library similar to Tone.js without needing to adopt the entire framework. Something like Lodash for Web Audio, where I can pick plain AudioNodes or tools that help me build my own audio system.

1 Comment
2024/11/16
16:53 UTC


5

Multiple AudioContext vs Global AudioContext

Hello all.

Audio noob here.

I am building a website with embedded audio chat (Jitsi). There are many other audio sources on the website (videos that can play, buttons that play sounds).

I am having echo/feedback problems. I suspect this is because I have a separate AudioContext for each element, and therefore the AEC cannot work properly.

Is it best practice to share a single AudioContext? This is a bit tricky, as some things I use (Jitsi) hide their AudioContext inside an iframe, and security limitations prevent me from accessing it. I am working on a lower-level integration of Jitsi now.
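For the page's own sources, the usual pattern is a single lazily created context that everything shares. A sketch (this cannot reach inside the Jitsi iframe; also note each media element can be wrapped in a MediaElementAudioSourceNode only once):

```javascript
// One shared AudioContext for the whole page, created on first use
// (browsers require a user gesture before a context can start anyway).
let sharedCtx = null;

function getAudioContext() {
  sharedCtx ??= new AudioContext();
  return sharedCtx;
}

// Route an <audio>/<video> element through the shared context.
function routeMediaElement(el) {
  const ctx = getAudioContext();
  const source = ctx.createMediaElementSource(el); // only valid once per element
  source.connect(ctx.destination);
  return source;
}
```

Note that the browser's AEC operates on getUserMedia/WebRTC tracks, not on AudioContexts per se; echo usually comes from remote audio being played through paths the canceller cannot reference, so consolidating your playback is a reasonable first step but may not fix it alone.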

Thanks

3 Comments
2024/10/22
04:38 UTC

5

I created a channel based sound player

Maybe someone finds it useful: https://www.npmjs.com/package/@mediamonks/channels

One specific use case I initially created it for was having a layer of background ambient/music loops that can easily be switched (and crossfaded).

const channel = channelsInstance.createChannel('background-music', {
    type: 'monophonic',
    defaultStartStopProps: { fadeInTime: 2, fadeOutTime: 2, loop: true },
});
// start a loop
channel.play('loop1');

// starting the 2nd loop some time later: loop1 will fade out, loop2 will fade in
channel.play('loop2');

0 Comments
2024/10/18
16:37 UTC

2

[Help] How can I toggle spatialization on an audio source?

Basically the title. I have a spatialized panner node, but I want the option to temporarily disable spatialization and hear the audio source directly. What's the best approach to this?
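One common approach (a sketch, not the only way) is to keep the graph wired and crossfade between a PannerNode branch and a plain dry branch, which avoids the clicks you'd get from connecting and disconnecting nodes:

```javascript
// Two parallel branches from the same source: spatialized and dry.
// Toggling is a gain crossfade, so the graph never changes shape.
function createToggleablePanner(ctx, source, pannerOptions = {}) {
  const panner = new PannerNode(ctx, pannerOptions);
  const spatialGain = new GainNode(ctx, { gain: 1 }); // spatialized path on
  const dryGain = new GainNode(ctx, { gain: 0 });     // direct path off

  source.connect(panner).connect(spatialGain).connect(ctx.destination);
  source.connect(dryGain).connect(ctx.destination);

  return {
    panner,
    setSpatialized(on, fadeTime = 0.05) {
      // Short ramps avoid clicks when switching.
      const t = ctx.currentTime;
      spatialGain.gain.setTargetAtTime(on ? 1 : 0, t, fadeTime);
      dryGain.gain.setTargetAtTime(on ? 0 : 1, t, fadeTime);
    },
  };
}
```

`createToggleablePanner` and its option names are illustrative; the underlying trick is just that gains of 0 effectively mute a branch without tearing it down.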

2 Comments
2024/10/12
20:04 UTC

7

My old work Web Audio API + WebGL

4 Comments
2024/10/08
07:34 UTC

1

Can you please help me to find a website where I can upload sounds from my library and then play/mix them?

I remember that I used one, but unfortunately, I forgot its name. Thank you.

0 Comments
2024/10/02
18:56 UTC

2

Playing a simple dynamical generated audio buffer in Tone.js

Sorry for the trivial question, but I'm struggling to find the correct method (and/or a clear example) for this simple task in Tone.js:

  • connect a "stereo audio stream" (I don't know the correct name in Tone.js; an AudioWorklet?) to the default output
  • set a callback that is called whenever the stream needs more audio data, so I can fill two float buffers (left and right) with my own samples (each buffer an array of 2048 floats)

I found createScriptProcessor, but it is deprecated and isn't part of the Tone.js framework.
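The modern replacement for createScriptProcessor is an AudioWorklet, which hands you fixed 128-frame blocks rather than a configurable 2048-float buffer (you'd do your own buffering for larger granularity). A plain Web Audio sketch follows; with Tone.js, I believe you can reach the underlying context via `Tone.getContext().rawContext` and do the same thing. The processor name and fill logic here are illustrative:

```javascript
// Processor source as a string so the sketch is a single file; normally this
// would live in its own module loaded with addModule('processor.js').
const processorCode = `
class FillProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const [left, right] = outputs[0];
    for (let i = 0; i < left.length; i++) {
      // Replace with your own sample generation (this is white noise):
      left[i] = Math.random() * 2 - 1;
      right[i] = Math.random() * 2 - 1;
    }
    return true; // keep the processor alive
  }
}
registerProcessor('fill-processor', FillProcessor);
`;

async function start(ctx) {
  const url = URL.createObjectURL(
    new Blob([processorCode], { type: 'application/javascript' }));
  await ctx.audioWorklet.addModule(url);
  const node = new AudioWorkletNode(ctx, 'fill-processor',
    { outputChannelCount: [2] }); // stereo output
  node.connect(ctx.destination);
}
```

The `process` callback is the "needs more data" hook you're describing; it just fires per 128-frame block instead of per 2048 samples.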

Thank you in advance.

0 Comments
2024/09/19
15:24 UTC

1

AudioWorklet Pitchshifter (SoundTouchJS or Phaze) working with React

Hi all. I seem to be going in circles trying to implement a realtime pitch/key and playback-rate pitchshifer using either phaze or soundtouchjs libraries. I want to implement this in a React JS app.

Does anyone have experience with this? Thank you very much in advance

4 Comments
2024/08/26
13:21 UTC

1

Occasional Skipped Audio Chunks When Playing Real-Time Stream in VueJS App

I'm working on a VueJS web application that receives audio data through a WebSocket and plays it in real-time using the Web Audio API. The audio data is sent as base64-encoded chunks which I decode and append to a SourceBuffer in a MediaSource. The problem I'm facing is that occasionally, when the duration of audio is shorter, the audio chunks are received but not played immediately. When the next set of audio chunks is received, the previously skipped audio starts playing, followed by the new audio chunks. Here is the code I am using to set up the audio playback in my component:

initAudioSetup() {
  this.mediaSource = new MediaSource();
  const audioElement = document.getElementById("audio");
  audioElement.src = URL.createObjectURL(this.mediaSource);

  this.mediaSource.addEventListener("sourceopen", () => {
    this.sourceBuffer = this.mediaSource.addSourceBuffer("audio/mpeg");

    let queue = [];
    let isUpdating = false;

    const processQueue = () => {
      if (queue.length > 0 && !isUpdating) {
        console.log("PROCESSING QUEUE");
        isUpdating = true;
        this.sourceBuffer.appendBuffer(queue.shift());
      }
    };

    this.sourceBuffer.addEventListener("updateend", () => {
      isUpdating = false;
      processQueue();
    });

    // Listen for new audio chunks
    window.addEventListener("newAudioChunk", (event) => {
      const chunk = event.detail;
      const binaryString = atob(chunk);
      const len = binaryString.length;
      const bytes = new Uint8Array(len);
      for (let i = 0; i < len; i++) {
        bytes[i] = binaryString.charCodeAt(i);
      }
      queue.push(bytes);
      processQueue();
    });

    window.addEventListener("endOfAudio", () => {
      console.log("end of audio");
      console.log(this.mediaSource.sourceBuffers);
    });
  });

  audioElement.play();
}

Audio data is received through a WebSocket and dispatched as newAudioChunk events. Each chunk is base64-decoded and converted to a Uint8Array before being appended to the SourceBuffer. Occasionally, received audio chunks are not played immediately; instead, they play only after new chunks are received. What could be causing these chunks to be skipped at first and then played later?

1 Comment
2024/06/26
07:17 UTC

8

Implement a Data-Driven Web Audio Engine

Hello all. Over the past while I've been writing a series of posts about how to implement a data-driven Web Audio engine from scratch. I have written the first 4 parts so far, and I want to continue as I have the energy for it. The idea for these posts came from my first implementation of an engine like this, Blibliki.
If anyone is interested, I'm happy to hear comments here or on my blog.

https://mikezaby.com/posts/web-audio-engine-part1

https://mikezaby.com/posts/web-audio-engine-part2

https://mikezaby.com/posts/web-audio-engine-part3

https://mikezaby.com/posts/web-audio-engine-part4

0 Comments
2024/05/31
13:14 UTC

1

Start oscillators at random phase?

I'm working on an art thing using the Web Audio API. The programming is really simple: a few oscillators at fixed frequencies, their amplitude modulated by some other oscillators, also at fixed (but much lower) frequencies.

Some of these LFOs are very slow, down in the thousandths-of-a-Hz range. I would love to have them start at a random point in their cycle, rather than at the consistent point where they currently start. Is this possible?

I can do this per oscillator, but ideally every oscillator in the JavaScript would independently start at a random phase... is THAT possible?
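One way to get a random starting phase (a sketch, and shown here for sine waves only) is to bake the phase into a PeriodicWave, using the identity sin(ωt + φ) = sin(φ)·cos(ωt) + cos(φ)·sin(ωt); createPeriodicWave(real, imag) builds exactly that cosine/sine sum, with `real` holding cosine coefficients and `imag` holding sine coefficients:

```javascript
// Pure helper: Fourier coefficients for a sine with phase offset phi.
// Index 0 is the DC term and must be 0; index 1 is the fundamental.
function phaseCoefficients(phi) {
  return {
    real: new Float32Array([0, Math.sin(phi)]), // cosine coefficients
    imag: new Float32Array([0, Math.cos(phi)]), // sine coefficients
  };
}

// Each call picks its own random phase, so every oscillator built this way
// starts at an independent random point in its cycle.
function randomPhaseSine(ctx, frequency) {
  const { real, imag } = phaseCoefficients(Math.random() * 2 * Math.PI);
  const osc = new OscillatorNode(ctx, {
    frequency,
    periodicWave: ctx.createPeriodicWave(real, imag,
      { disableNormalization: true }),
  });
  return osc;
}
```

For non-sine waves the same idea works, but you'd rotate each harmonic n's coefficient pair by n·φ rather than just the fundamental.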

5 Comments
2024/05/15
09:40 UTC

5

Web Audio API issues on MacOS?

Has anyone experienced issues on macOS? I have set the frequency to exactly 562 Hz with a detune of exactly -700 cents, which should result in a perfectly steady sine wave. It's a software issue: the Windows version, running in a VM, has no problems. The waveform seems to flip every other frame and I don't know why.

This is the visualizer on Windows:

Windows 11

And this is the visualizer on macos:

MacOS Sonoma

3 Comments
2024/02/13
11:39 UTC

2

[Earwurm] Small scope TypeScript package for UI sounds

Just in case anyone finds this useful in their own projects… I wanted to promote a package I’ve published called earwurm:

I know there are already competent alternatives in this space, so to quickly summarize the purpose of this specific package:

Earwurm is an opinionated and minimal-scope solution for loading short audio files via the Web Audio API. Intended for playback of UI sound effects in modern browsers.

Minimal React example:

import {Earwurm, type LibraryEntry} from 'earwurm';

const entries: LibraryEntry[] = [
  {id: 'beep', path: 'assets/beep.webm'},
  {id: 'zap', path: 'assets/zap.mp3'},
];

const manager = new Earwurm();
manager.add(...entries);

// Optional: pre-fetch/decode each asset ahead of time
// so the browser is ready for immediate playback.
entries.forEach(({id}) => manager.get(id)?.prepare());

async function handlePlaySound(id = '') {
  const stack = manager.get(id);
  const sound = await stack?.prepare();

  sound?.play();
}

function Page() {
  return (
    <div>
      <button onClick={() => handlePlaySound('beep')}>Play beep</button>
      <button onClick={() => handlePlaySound('zap')}>Play zap</button>
    </div>
  );
}

An example of the above code can be tinkered with in this CodeSandbox. Better yet, the source code for the Demo site is included in the repo.

Earwurm doesn’t try to solve for every use case, and is instead limited to what I believe is an expected set of patterns for UI sound effects.

That being said, I do consider this an MVP. There are other features I intend to add in the future, such as the ability to reverse playback and to adjust pitch. All of this is building towards a tool that empowers me to build richer user experiences.

So, just in case anyone else finds this interesting, please give it a shot and feel free to report any issues you may encounter! 😁

0 Comments
2024/01/28
20:42 UTC

6

Brushed the dust off a music-making web app and gave it an overhaul with a friendlier UI for jam sessions

1 Comment
2023/12/03
18:36 UTC

1

Please Internet geniuses I'm desperate for help and ready to give up

Please forgive me I know this is going to be long and hopefully not too redundant but I have been battling this issue for almost two years and I swear there is no solution or it's the easiest thing that I keep stepping over. I would be forever in your debt for a miracle.

So I'm a piano player, and aside from Instagram and YouTube, I love going on Omegle because it's face to face. For close to two years I have been having an audio issue that I can't figure out, and I'm fairly technically savvy. If I stream through OBS to a streaming site like YouTube or Instagram, my sound is great, aside from the usual hums and buzzes I'm trying to get rid of between my Yamaha electric keyboard and computer. On Omegle you aren't really streaming: you use OBS as a virtual camera and virtual audio device, except you go through an internet browser and tell the browser which virtual devices to use. That's pretty much how most streamers do it, from what I understand.

Here's the issue. My live setup (keyboard and microphone) works correctly outside a browser, but when I go through the browser my sound slowly cuts off after a few seconds, like it's being muffled. If I stop for a while, it's as if the noise suppression releases: I can play a few more notes, then it slowly dies out again into a weird muffle where only every 3rd or 4th note is heard. When I stop, I can literally hear the usual static return, so I know I'm back at full volume.

When I say I have tried everything, I mean everything: a fresh install of Windows, a Sound Blaster sound card, all the usual Windows sound settings (exclusive control on and off, the Communications tab set to "do nothing"). Because it works when I'm not using a browser, I'm 99% sure it's not a hardware issue; I also bought a brand-new computer with nothing on it but my streaming setup. I've gone through the Realtek audio control panel, made sure Steam wasn't doing anything, and confirmed my NVIDIA RTX 4070 isn't suppressing anything. But the moment I go directly through a browser, my sound gets suppressed. I watch Harry Mack and other streamers, so I know it can be done.

I have tried the top three browsers, gone into the experimental flags and shut off any kind of audio processing, and even tried an extension. Here's what physically happens: as soon as the browser takes control of my microphone and camera, my microphone volume slowly climbs all the way up to 100 the moment I play a note. My conclusion is that the browser doesn't think I'm loud enough, so it raises my volume, which in turn makes it suppress my sound.

Because I have two computers, I can test by going on Omegle with myself, but then I thought: I'm on the same internet line, maybe it's bouncing between the two machines. Except it also happens in an online microphone test; I can watch the bars get lower and lower on the little equalizer. I'm at the point where I wonder if it's my AT&T fiber, because there are only so many common denominators left now that I'm using a new computer. I would literally pay somebody $100 to help me, and that's a lot of money to me. I could also go on Omegle with someone to test it (we'd put in an off-the-wall keyword like "submarine" and time it), so at least you would hear what I'm talking about. I hear other people play piano, synthesizers, and violins online, so I know it can be done, but something is choking my sound and I am on my hands and knees begging for help. Oh wise internet gods, hear me and send me the magic wand I am so desperate for. Barry.
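Not a guaranteed fix, but the symptoms described (mic volume creeping to 100, sustained notes being ducked like background noise) match the browser's default microphone processing: echo cancellation, noise suppression, and auto gain control are all enabled by default on getUserMedia audio tracks, and they treat sustained piano tones as noise. The page making the call has to request the track with them off; a sketch of what that request looks like:

```javascript
// Ask for a raw, unprocessed microphone track.
const constraints = {
  audio: {
    echoCancellation: false,
    noiseSuppression: false,
    autoGainControl: false,
  },
  video: true,
};

async function getRawStream() {
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  // getSettings() reports what the browser actually applied, so you can
  // verify whether the processing is really off.
  console.log(stream.getAudioTracks()[0].getSettings());
  return stream;
}
```

Since Omegle's own page makes the getUserMedia call, applying this yourself would take a browser extension or a virtual-audio device the browser treats as a plain line input; but inspecting `getSettings()` during a mic test would at least confirm whether this processing is the culprit.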

0 Comments
2023/10/16
02:54 UTC
