/r/webaudio

A subreddit dedicated to anything related to the Web Audio API. Demos, creations, programming problems, articles: read and post them here.

Interesting links:

Tutorials

Demos and examples

Currently supported browsers:

  • Chrome
  • iOS 6 beta
  • Safari 6 Developer Preview

/r/webaudio

993 Subscribers

1 Comment
2024/11/02
15:38 UTC

4

Multiple AudioContext vs Global AudioContext

Hello all.

Audio noob here.

I am building a website with embedded audio chat (Jitsi). There are many other audio sources on the website (videos that can play, buttons that play sounds).

I am having echo/feedback problems. I suspect this is because I have a separate AudioContext for each element, and therefore the AEC cannot work properly.

Is it best practice to share a single AudioContext? This is a bit tricky, as some things I use (Jitsi) hide their AudioContext within an iframe and security limitations prevent me from accessing it. I am working on a lower-level implementation of Jitsi now.
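
By "share a single AudioContext" I mean roughly the pattern sketched below (module and function names are just placeholders, not Jitsi's API):

// audio-context.js: one AudioContext shared by the whole app (rough sketch).
export const sharedCtx = new AudioContext();

// Route an <audio>/<video> element through the shared graph so all
// app-controlled sources live in the same context.
export function attachMediaElement(el) {
  const source = sharedCtx.createMediaElementSource(el);
  source.connect(sharedCtx.destination);
  return source;
}

// UI sounds can use the same context too.
export async function playClip(arrayBuffer) {
  const buffer = await sharedCtx.decodeAudioData(arrayBuffer);
  const src = sharedCtx.createBufferSource();
  src.buffer = buffer;
  src.connect(sharedCtx.destination);
  src.start();
}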

Thanks

3 Comments
2024/10/22
04:38 UTC

6

I created a channel-based sound player

Maybe someone finds it useful: https://www.npmjs.com/package/@mediamonks/channels

One specific use case for which I initially created it was to have a layer of background ambient/music loops that can easily be switched (and crossfaded).

const channel = channelsInstance.createChannel('background-music', {
    type: 'monophonic',
    defaultStartStopProps: { fadeInTime: 2, fadeOutTime: 2, loop: true },
});
// start a loop
channel.play('loop1');

// starting 2nd loop some time later, loop1 will fade out, loop2 will fade in 
channel.play('loop2');

0 Comments
2024/10/18
16:37 UTC

2

[Help] How can I toggle spatialization on an audio source?

Basically the title. I have a spatialized panner node, but I want the option to temporarily disable spatialization and hear the audio source directly. What's the best approach to this?
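
One possible approach (a rough sketch with placeholder node names): crossfade between the panner path and a direct bypass path, so spatialization can be toggled without clicks.

// Sketch: two parallel paths from the same source, with gains crossfaded
// to toggle spatialization. "source" here is just a stand-in oscillator.
const ctx = new AudioContext();
const source = ctx.createOscillator();   // replace with the real source node
const panner = ctx.createPanner();
const spatialGain = ctx.createGain();    // level of the spatialized path
const directGain = ctx.createGain();     // level of the bypass path

source.connect(panner).connect(spatialGain).connect(ctx.destination);
source.connect(directGain).connect(ctx.destination);
directGain.gain.value = 0;               // start fully spatialized
source.start();

function setSpatialized(enabled, fade = 0.05) {
  const t = ctx.currentTime;
  spatialGain.gain.setTargetAtTime(enabled ? 1 : 0, t, fade);
  directGain.gain.setTargetAtTime(enabled ? 0 : 1, t, fade);
}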

2 Comments
2024/10/12
20:04 UTC

7

My old work Web Audio API + WebGL

4 Comments
2024/10/08
07:34 UTC

1

Can you please help me to find a website where I can upload sounds from my library and then play/mix them?

I remember that I used one, but unfortunately, I forgot its name. Thank you.

0 Comments
2024/10/02
18:56 UTC

2

Playing a simple dynamically generated audio buffer in Tone.js

Sorry for the trivial question, but I'm struggling to find the correct method to achieve this simple task using Tone.js (and/or a clear example):

  • connect a "stereo audio stream" (I don't know the correct name in Tone.js; an AudioWorklet?) to the default output
  • set a callback function that is called whenever the audio stream needs more audio data, so I can fill two float buffers (left and right) with my own data (each buffer should be an array of 2048 floats)

I found "createScriptProcessor" but it seems deprecated and not in the Tone.js framework.
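
From what I can tell, the modern replacement is an AudioWorklet. Below is a rough plain-Web-Audio sketch of a processor that fills the output channels (the white-noise fill and all names are placeholders; note the render quantum is 128 frames, so a 2048-sample buffer would need its own ring buffer on top of this). Tone's underlying context should be reachable via Tone.getContext().rawContext, as far as I know.

// callback-processor.js: sketch of an AudioWorkletProcessor that fills the
// left/right output channels on every render quantum (128 frames).
class CallbackProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const output = outputs[0];
    const left = output[0];
    const right = output[1];
    for (let i = 0; i < left.length; i++) {
      const sample = Math.random() * 2 - 1; // placeholder: white noise
      left[i] = sample;
      right[i] = sample;
    }
    return true; // keep the processor alive
  }
}
registerProcessor('callback-processor', CallbackProcessor);

// Main thread (sketch):
// const ctx = Tone.getContext().rawContext; // or a plain AudioContext
// await ctx.audioWorklet.addModule('callback-processor.js');
// const node = new AudioWorkletNode(ctx, 'callback-processor', {
//   numberOfInputs: 0,
//   outputChannelCount: [2],
// });
// node.connect(ctx.destination);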

Thank you in advance.

0 Comments
2024/09/19
15:24 UTC

1

AudioWorklet Pitchshifter (SoundTouchJS or Phaze) working with React

Hi all. I seem to be going in circles trying to implement a real-time pitch/key and playback-rate pitch shifter using either the phaze or soundtouchjs library. I want to implement this in a React app.

Does anyone have experience with this? Thank you very much in advance

4 Comments
2024/08/26
13:21 UTC

1

Occasional Skipped Audio Chunks When Playing Real-Time Stream in VueJS App

I'm working on a VueJS web application that receives audio data through a WebSocket and plays it in real time using the Web Audio API. The audio data is sent as base64-encoded chunks, which I decode and append to a SourceBuffer in a MediaSource. The problem I'm facing is that occasionally, when the audio is shorter, the chunks are received but not played immediately. When the next set of audio chunks is received, the previously skipped audio starts playing, followed by the new audio chunks. Here is the code I am using to set up the audio playback in my component:

initAudioSetup() {
  // Create a MediaSource and point the <audio> element at it.
  this.mediaSource = new MediaSource();
  const audioElement = document.getElementById("audio");
  audioElement.src = URL.createObjectURL(this.mediaSource);

  this.mediaSource.addEventListener("sourceopen", () => {
    this.sourceBuffer = this.mediaSource.addSourceBuffer("audio/mpeg");

    let queue = [];
    let isUpdating = false;

    // Append the next queued chunk when the SourceBuffer is idle.
    const processQueue = () => {
      if (queue.length > 0 && !isUpdating) {
        console.log("PROCESSING QUEUE");
        isUpdating = true;
        this.sourceBuffer.appendBuffer(queue.shift());
      }
    };

    this.sourceBuffer.addEventListener("updateend", () => {
      isUpdating = false;
      processQueue();
    });

    // Listen for new audio chunks: base64-decode each one into a
    // Uint8Array and queue it for appending.
    window.addEventListener("newAudioChunk", (event) => {
      const chunk = event.detail;
      const binaryString = atob(chunk);
      const len = binaryString.length;
      const bytes = new Uint8Array(len);
      for (let i = 0; i < len; i++) {
        bytes[i] = binaryString.charCodeAt(i);
      }
      queue.push(bytes);
      processQueue();
    });

    window.addEventListener("endOfAudio", () => {
      console.log("end of audio");
      console.log(this.mediaSource.sourceBuffers);
    });
  });

  audioElement.play();
}

Audio data is received through a WebSocket and dispatched as newAudioChunk events. Each chunk is base64-decoded and converted to a Uint8Array before being appended to the SourceBuffer. Occasionally, received audio chunks are not played immediately; instead, they play only after new chunks are received. What could be causing these audio chunks to be skipped initially and then played later?

1 Comment
2024/06/26
07:17 UTC

7

Implement a Data-Driven Web Audio Engine

Hello all. Over the last while I've been writing a series of posts about how to implement a data-driven Web Audio engine from scratch. So far I have written the first 4 parts, and I want to continue as I have the energy for it. The idea for these posts came from my first implementation of an engine like this, Blibliki.
If anyone is interested, I'm happy to hear comments here or on my blog.

https://mikezaby.com/posts/web-audio-engine-part1

https://mikezaby.com/posts/web-audio-engine-part2

https://mikezaby.com/posts/web-audio-engine-part3

https://mikezaby.com/posts/web-audio-engine-part4

0 Comments
2024/05/31
13:14 UTC

1

Start oscillators at random phase?

I'm working on an art thing using the Web Audio API. The programming is really simple: a few oscillators at fixed frequencies, their amplitude modulated by some other oscillators, also at fixed (but much lower) frequencies.

Some of these LFOs are very slow, down in the thousandths-of-a-Hz range. I would love to have them start at a random point in their cycle, rather than at the consistent point where they currently start. Is this possible?

I can do this per oscillator, but ideally all the oscillators in the JavaScript would independently start at a random phase... is THAT possible?
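
One possible trick (a sketch, assuming sine-type LFOs): an OscillatorNode has no phase parameter, but since sin(wt + p) = cos(p)·sin(wt) + sin(p)·cos(wt), a single-harmonic PeriodicWave can bake an arbitrary starting phase into the waveform.

// Sketch: start a sine oscillator at a random phase by encoding the phase
// in a one-harmonic PeriodicWave (real = cosine terms, imag = sine terms).
function startSineAtRandomPhase(ctx, frequency) {
  const phase = Math.random() * 2 * Math.PI;
  const real = new Float32Array([0, Math.sin(phase)]); // cos coefficient
  const imag = new Float32Array([0, Math.cos(phase)]); // sin coefficient
  const wave = ctx.createPeriodicWave(real, imag, { disableNormalization: true });

  const osc = ctx.createOscillator();
  osc.setPeriodicWave(wave);
  osc.frequency.value = frequency;
  osc.start();
  return osc;
}

// e.g. a very slow LFO starting somewhere random in its cycle:
// const lfo = startSineAtRandomPhase(new AudioContext(), 0.003);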

5 Comments
2024/05/15
09:40 UTC

5

Web Audio API issues on MacOS?

Has anyone experienced any issues on macOS? I have set the frequency to exactly 562 Hz with a detune of exactly -700 cents, which should result in a perfectly steady sine wave. It seems to be a software issue, as the Windows version running in a VM has no problems. The waveform seems to flip every other frame and I don't know why.

This is the visualizer on Windows:

Windows 11

And this is the visualizer on macOS:

macOS Sonoma

3 Comments
2024/02/13
11:39 UTC

2

[Earwurm] Small scope TypeScript package for UI sounds

Just in case anyone finds this useful in their own projects… I wanted to promote a package I've published called earwurm:

I know there are already competent alternatives in this space, so to quickly summarize the purpose of this specific package:

Earwurm is an opinionated and minimal-scope solution for loading short audio files via the Web Audio API. Intended for playback of UI sound effects in modern browsers.

Minimal React example:

import {Earwurm, type LibraryEntry} from 'earwurm';

const entries: LibraryEntry[] = [
  {id: 'beep', path: 'assets/beep.webm'},
  {id: 'zap', path: 'assets/zap.mp3'},
];

const manager = new Earwurm();
manager.add(...entries);

// Optional: pre-fetch/decode each asset ahead of time
// so the browser is ready for immediate playback.
entries.forEach(({id}) => manager.get(id)?.prepare());

async function handlePlaySound(id = '') {
  const stack = manager.get(id);
  const sound = await stack?.prepare();

  sound?.play();
}

function Page() {
  return (
    <div>
      <button onClick={() => handlePlaySound('beep')}>Play beep</button>
      <button onClick={() => handlePlaySound('zap')}>Play zap</button>
    </div>
  );
}

An example of the above code can be tinkered with in this CodeSandbox. Better yet, the source code for the Demo site is included in the repo.

Earwurm doesn’t try to solve for every use case, and is instead limited to what I believe is an expected set of patterns for UI sound effects.

That being said, I do consider this an MVP. There are other features I intend to add in the future, such as the ability to reverse playback, as well as adjust pitch. All of this is building towards having a tool that empowers me to build richer user experiences.

So, just in case anyone else finds this interesting, please give it a shot and feel free to report any issues you may encounter! 😁

0 Comments
2024/01/28
20:42 UTC

6

Brushed the dust off a music-making web app and gave it an overhaul with a friendlier UI for jam sessions

1 Comment
2023/12/03
18:36 UTC

1

Please Internet geniuses I'm desperate for help and ready to give up

Please forgive me, I know this is going to be long and hopefully not too redundant, but I have been battling this issue for almost two years and I swear there is either no solution or it's the easiest thing that I keep stepping over. I would be forever in your debt for a miracle.

So I'm a piano player, and aside from Instagram, YouTube, the usual, I love going on Omegle because it's face to face. For close to two years (and I swear I'm not kidding) I have been having an audio issue that for the life of me I can't figure out, and I am pretty technically savvy. If I stream through OBS to a streaming site like YouTube or Instagram, my sound is great, aside from the usual hums and buzzes that I'm trying to get rid of between my Yamaha electric keyboard and my computer.

When you stream on Omegle you aren't really streaming on Omegle; you're using OBS as a virtual camera and virtual audio device, except you're going through a web browser and you just tell the browser which virtual devices to use, and everything should be good. That's pretty much how most streamers do it, from what I understand. The issue: when I use my live setup, my keyboard and microphone, which I know works correctly because it works when I stream outside of a browser, my sound just cuts out after a few seconds. It slowly gets muffled, and if I stop for a while it's like the noise suppression releases; I can play a few more notes on the piano and then it slowly dies out again into a weird muffle where only every third or fourth note is heard, until I stop and you can literally hear the usual static return, so you know you're back at full volume, if you understand what I'm saying.

When I say I have tried everything, I mean everything: a fresh install of Windows, installing a Sound Blaster sound card, all of the usual Windows sound settings (letting apps take exclusive control, not letting them, setting the Communications tab to do nothing). Ultimately, because it works when I am not using a browser, I'm 99% sure it's not a hardware issue; I also bought a brand new computer with nothing on it but my streaming stuff. I have gone into the Realtek audio control panel, made sure my Steam account wasn't doing anything, made sure my NVIDIA RTX 4070 isn't suppressing anything. But again, if I try to go directly through a browser, my sound gets suppressed. Because I watch Harry Mack and other streamers so much, I keep saying to myself: he can do it, other people do it, so I know it can be done.

I have tried the top three browsers, gone into the experimental flags and shut off any kind of audio processing, and I even tried an extension, because here's what physically happens with my setup. After getting the new computer going for the first time, praying that it was going to work, I noticed that as soon as the browser took control of my microphone and camera (because it gets permission), my microphone volume would slowly increase all the way up to 100 as soon as I played a note on my piano. So my conclusion is that it decides I'm not loud enough, starts suppressing me, and then tries to raise my volume, which in turn makes it suppress my sound again. Forgive me if I sound like a crazed maniac, but I have tried so many things, and this has literally been the number one thorn in my side for almost two years.

Because I have two computers I can test it by going on Omegle with myself using an obscure keyword, but then I started thinking: I'm on the same internet line, maybe it's bouncing back and forth between my two computers. But it literally happens if I do a microphone test online; I can see all of the bars on the little equalizer thing get lower and lower. My God, if you are reading all of this I'm already grateful, because I just feel so alone in this. I'm at the point where I think it's my AT&T fiber doing something, because there are only so many common denominators now that I'm using a new computer, but something is suppressing my sound and I would literally pay somebody $100 to help me, and that's a lot of money to me, I promise. I know you guys know everything. I guess I have the option of just giving up, but maybe somebody would go on Omegle with me and test it; we would just have to put in a keyword like "submarine" or something off the wall, and we would have to time it. Sorry to be a pain, but at least you would hear what I'm talking about. I hear other people play music, piano, synthesizers, violins, LOL, I can go on and on; I know other people do it, and I am a pretty smart dude, but something is choking my sound and I am on my hands and knees begging for help. Oh wise Internet gods, hear me, please send me the magic wand that I am so desperate for. Barry.

0 Comments
2023/10/16
02:54 UTC

2

Looking for an experienced developer to help with a web audio related task. Paid, of course, don't want to leech.

4 Comments
2023/09/03
19:27 UTC

2

Limiting Buffering times

I’m writing a radio app. The hardware is on a local network. I have a socketio server in python that collects raw audio PCM data which is sent via socket to a client along with power spectrum data to be rendered as sound and real-time power spectrum display. The client is an electron app using electron-forge. To render sound I use BufferSource and AudioBuffers. All this works great except the Audio API buffers. It slowly builds up a cache of PCM data. It’s a small but annoying effect. After several minutes I typically have several seconds of buffer. First off the power spectrum display is out of sync with the sound which I could likely fix by buffering that data as well. That aside, how can I limit the buffering of sound to be less than say 0.2 seconds. Anything less than a second would be great.

1 Comment
2023/07/14
18:58 UTC

4

Starting from scratch with a Web Audio app: advice/recommendations on Libraries?

I'm a startup founder building a prototype generative music app using Web Audio. I would love to hear your advice about where to start from a programming standpoint: pure JavaScript? Libraries like Tone.js + Tuna.js, Howler.js, Wad.js, or XSound? I'd prefer well-supported libraries, and I'm happy to trade some functionality or performance for stability.

The app we are building needs to support sample-loop playback, SoundFont libraries, simple subtractive synthesis (not Serum-quality), and real-time audio processing (Tuna.js appears to have everything we need). Ideally, it would be efficient enough to play back 8 stereo 44.1 kHz samples simultaneously with Tuna effects in Safari/Firefox on macOS, or better yet, Safari on iOS on a <3-year-old iPhone.

I would very much appreciate your advice. And, DM me if you are interested in freelance work.

Thanks in advance.

11 Comments
2023/06/06
00:37 UTC

1

Seamless seeking while playing audio

It seems to be a rather lame question after looking at the posts here, but I'm trying to build cue-point functionality in my web app so I can seamlessly jump to another point in a track.

I've found this post on stackoverflow: https://stackoverflow.com/questions/59815825/how-to-skip-ahead-n-seconds-while-playing-track-using-web-audio-api

But it looks complicated to me. I use HLS, and it looks like that makes things even more complex... Can you give any advice on this?
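
For reference, that Stack Overflow approach boils down to recreating the buffer source at the new offset, since AudioBufferSourceNodes are single-use. A minimal sketch with placeholder names, assuming the whole track is already decoded into a buffer:

// Sketch: "seek" by stopping the current source and starting a new one
// at the desired offset into the decoded AudioBuffer.
let current = null;

function playFrom(ctx, buffer, offsetSeconds) {
  if (current) current.stop();
  current = ctx.createBufferSource();
  current.buffer = buffer;
  current.connect(ctx.destination);
  current.start(0, offsetSeconds); // start now, from offsetSeconds into the buffer
}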

Update: it seems HLS was the actual problem in iOS Safari.

0 Comments
2023/05/28
17:02 UTC

1

Who wants to make a team?

Hello
I'm looking for partners to create an online service for musicians, for example a system to extract the a cappella from a track.

Motivated people?

If so, leave a message with your experience.

Lots of experience in audio processing with Ableton Live.

3 years of experience in JavaScript.

2 Comments
2023/04/10
19:50 UTC
