/r/webaudio
A subreddit dedicated to anything related to the Web Audio API. Demos, creations, programming problems, articles: read and post them here.
If I'm creating a sequencer or an arpeggiator... should every note be played on newly constructed nodes (e.g. "new OscillatorNode() / new GainNode()"), rather than continuously updating the frequency of a single oscillator and its associated GainNode?
I'm asking for rules of thumb rather than for this to be a black-and-white answer, because I know there are exceptions to any rule
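For reference, the usual rule of thumb is to treat oscillators and per-note gains as cheap, throwaway objects: construct a fresh pair for every note, schedule it, and let it be garbage-collected after stop(). A minimal sketch of that per-note pattern (the envelope times are placeholder values):
// Plays one note on a fresh oscillator + gain pair.
function playNote(ctx, frequency, startTime, duration) {
  const osc = new OscillatorNode(ctx, { type: 'sawtooth', frequency });
  const env = new GainNode(ctx, { gain: 0 });

  // Simple attack/release envelope on the per-note gain.
  env.gain.setValueAtTime(0, startTime);
  env.gain.linearRampToValueAtTime(1, startTime + 0.01);
  env.gain.setTargetAtTime(0, startTime + duration, 0.05);

  osc.connect(env).connect(ctx.destination);
  osc.start(startTime);
  osc.stop(startTime + duration + 0.5); // both nodes can be garbage-collected after stop
}
Reusing one oscillator and automating its frequency and gain also works, but per-note envelopes are harder to keep click-free, and new nodes are cheap enough that per-note construction is the common default.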
I'm building an audio UI, and I want to assess the average time between UI trigger and actual audio playback
I'm using tone.js for audio and pixijs for UI
What sort of strategies are people using to test such things?
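For reference, one rough way to estimate this: timestamp the UI event and add the context's reported output latency to the JS-side delay. A sketch in plain Web Audio (with Tone.js the underlying context is exposed, e.g. via Tone.getContext().rawContext; outputLatency isn't implemented in every browser, hence the baseLatency fallback):
// Rough estimate of the time from UI event to audible output.
// `ctx` is assumed to be the AudioContext actually used for playback.
function estimateTriggerLatency(ctx, uiEventTimeMs) {
  const jsDelayMs = performance.now() - uiEventTimeMs;       // event -> handler
  const outputDelaySec = ctx.outputLatency || ctx.baseLatency || 0;
  return jsDelayMs + outputDelaySec * 1000;                   // handler -> speakers
}

// usage with a DOM pointer event (Pixi's interaction events wrap these):
// element.addEventListener('pointerdown', (e) => {
//   playSound();
//   console.log(estimateTriggerLatency(ctx, e.timeStamp));
// });
For ground truth you would need to loop the machine's output back into an input and measure the round trip, but an estimate like this is usually enough for comparing triggering strategies.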
I'm researching the WebAudio APIs pretty heavily but coming at this from a creative standpoint rather than a math or electrical standpoint, and then learning the fundamentals as I go...
https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode/getFrequencyResponse
how does someone _use_ the frequency response data? I'm trying to wrap my head around what utility that information has for audio processing, and there isn't much written about it on the internet (or I don't know where to look!)
does anyone have any perspective on this?
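For what it's worth, getFrequencyResponse doesn't change the audio at all; it tells you how much the filter would boost or cut (and phase-shift) whatever frequencies you ask about, which is mainly useful for drawing an EQ/filter curve in a UI or sanity-checking filter settings. A small sketch (the frequency range and bin count are arbitrary):
// Ask a BiquadFilterNode how it treats a range of frequencies,
// e.g. to draw a filter-response curve in a UI.
const filter = new BiquadFilterNode(ctx, { type: 'lowpass', frequency: 1000, Q: 1 });

const bins = 128;
const freqs = new Float32Array(bins);
const magnitude = new Float32Array(bins); // linear gain per frequency
const phase = new Float32Array(bins);     // phase shift in radians

for (let i = 0; i < bins; i++) {
  freqs[i] = 20 * Math.pow(1000, i / (bins - 1)); // log-spaced, 20 Hz .. 20 kHz
}

filter.getFrequencyResponse(freqs, magnitude, phase);

// magnitude[i] close to 1 means that frequency passes unchanged, < 1 means it's attenuated.
// Convert to dB for a familiar-looking EQ plot:
const curveDb = Array.from(magnitude, (m) => 20 * Math.log10(m));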
Hey everyone! I wanted to share a tool for anyone working with web audio or exploring interactive sound design: pd4web.
PureData (often abbreviated as Pd) is an open-source visual programming language used for creating interactive computer music and multimedia works. It's widely used by musicians, sound designers, and artists for live performances, sound synthesis, and more. Pd works by connecting blocks (called objects) to create sound processing flows, allowing users to build complex audio systems without needing to write traditional code. You can think of it as a canvas for interactive sound.
pd4web automates the creation of an emscripten environment and processes the Pd patch; its output is a functional website with visual objects (such as sliders, knobs, keyboards, etc.). Of course, you can also use it with a lot of tools like p5js, vexflow, and others.
Streamlined Development: Build full audio applications online using PureData's visual programming interface. You don't need to worry about complex setups or installations; pd4web handles the emscripten configuration and build.
Easy Access for Performers/Users: Performers and users can load and interact with the audio app in the browser, without the hassle of setting up PureData or managing libraries. Simply load a page, and start performing!
Live Electronic Music Preservation: pd4web automatically creates a repository for all the code and assets you need to run your compositions, preserving your live electronic works for future use or sharing.
Check pd4web: https://charlesneimog.github.io/pd4web/
I'm not sure if something like this exists, but I imagine a library similar to ToneJS, without needing to adopt the entire framework. Something like Lodash for Web Audio, where I can pick just the plain AudioNodes or utilities I need to build my own audio system.
Hello all.
Audio noob here.
I am building a website with embedded audio chat (Jitsi). There are many other audio sources in the website (videos that can play, buttons that play sounds)
I am having echo / feedback problems. I suspect this is because I have a separate AudioContext for each element, and therefore the AEC (acoustic echo cancellation) cannot work properly.
Is it best practice to share a single AudioContext? This is a bit tricky, as some things I use (Jitsi) hide their AudioContext within an iframe, and security limitations prevent me from accessing it. I am working on a lower-level implementation of Jitsi now.
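For reference, sharing one AudioContext across your own code is straightforward; a minimal sketch of that pattern (it can't reach inside the Jitsi iframe, which necessarily owns its own context):
// audio-context.js — one AudioContext for the whole app.
// Anything you control (button sounds, video routing, etc.) imports this
// instead of constructing its own context.
let sharedCtx = null;

export function getAudioContext() {
  if (!sharedCtx) {
    sharedCtx = new AudioContext();
  }
  return sharedCtx;
}

// elsewhere:
// import { getAudioContext } from './audio-context.js';
// const ctx = getAudioContext();
// ctx.createMediaElementSource(videoEl).connect(ctx.destination);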
Thanks
Maybe someone finds it useful: https://www.npmjs.com/package/@mediamonks/channels
One specific use case for which I initially created it was to have a layer of background ambient/music loops that can easily be switched (and crossfaded).
const channel = channelsInstance.createChannel('background-music', {
  type: 'monophonic',
  defaultStartStopProps: { fadeInTime: 2, fadeOutTime: 2, loop: true },
});
// start a loop
channel.play('loop1');
// starting 2nd loop some time later, loop1 will fade out, loop2 will fade in
channel.play('loop2');
Basically the title. I have a spatialized panner node, but I want the option to temporarily disable spatialization and hear the audio source directly. What's the best approach to this?
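One common approach: keep two parallel paths, one through the PannerNode and one through a plain GainNode, and crossfade between them so the toggle doesn't click. A sketch (node names are illustrative):
// source -> panner -> spatialGain --\
//                                     --> destination
// source -----------> dryGain ------/
const spatialGain = new GainNode(ctx, { gain: 1 });
const dryGain = new GainNode(ctx, { gain: 0 });

source.connect(panner).connect(spatialGain).connect(ctx.destination);
source.connect(dryGain).connect(ctx.destination);

// Toggle spatialization with a short ramp to avoid clicks.
function setSpatialized(enabled, timeConstant = 0.05) {
  const now = ctx.currentTime;
  spatialGain.gain.setTargetAtTime(enabled ? 1 : 0, now, timeConstant);
  dryGain.gain.setTargetAtTime(enabled ? 0 : 1, now, timeConstant);
}
Disconnecting the panner and wiring the source straight to the destination also works, but a hard switch mid-signal can click.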
I remember that I used one, but unfortunately, I forgot its name. Thank you.
Sorry for the trivial question, but I'm struggling to find the correct method that I should use to achieve this simple task using Tone.js (and/or a clear example):
I found "createScriptProcessor", but it seems deprecated and isn't part of the Tone.js framework.
Thank you in advance.
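In case it helps: createScriptProcessor is indeed deprecated, and the modern replacement for custom per-sample processing is an AudioWorklet. A minimal sketch in plain Web Audio (how you wire it into Tone.js depends on your Tone version and context setup, so treat this as the general shape rather than Tone-specific code):
// processor.js — runs on the audio rendering thread.
class PassthroughProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let ch = 0; ch < input.length; ch++) {
      output[ch].set(input[ch]); // per-sample processing goes here
    }
    return true; // keep the processor alive
  }
}
registerProcessor('passthrough-processor', PassthroughProcessor);

// main thread
async function setupWorklet(ctx, sourceNode) {
  await ctx.audioWorklet.addModule('processor.js');
  const worklet = new AudioWorkletNode(ctx, 'passthrough-processor');
  sourceNode.connect(worklet).connect(ctx.destination);
}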
Hi all. I seem to be going in circles trying to implement a real-time pitch/key and playback-rate pitch shifter using either the phaze or soundtouchjs libraries. I want to implement this in a React JS app.
Does anyone have experience with this? Thank you very much in advance
I'm working on a VueJS web application that receives audio data through a WebSocket and plays it in real-time using the Web Audio API. The audio data is sent as base64-encoded chunks which I decode and append to a SourceBuffer in a MediaSource. The problem I'm facing is that occasionally, when the duration of audio is shorter, the audio chunks are received but not played immediately. When the next set of audio chunks is received, the previously skipped audio starts playing, followed by the new audio chunks. Here is the code I am using to set up the audio playback in my component:
initAudioSetup() {
  this.mediaSource = new MediaSource();
  const audioElement = document.getElementById("audio");
  audioElement.src = URL.createObjectURL(this.mediaSource);

  this.mediaSource.addEventListener("sourceopen", () => {
    this.sourceBuffer = this.mediaSource.addSourceBuffer("audio/mpeg");

    let queue = [];
    let isUpdating = false;

    const processQueue = () => {
      if (queue.length > 0 && !isUpdating) {
        console.log("PROCESSING QUEUE");
        isUpdating = true;
        this.sourceBuffer.appendBuffer(queue.shift());
      }
    };

    this.sourceBuffer.addEventListener("updateend", () => {
      isUpdating = false;
      processQueue();
    });

    // Listen for new audio chunks
    window.addEventListener("newAudioChunk", (event) => {
      const chunk = event.detail;
      const binaryString = atob(chunk);
      const len = binaryString.length;
      const bytes = new Uint8Array(len);
      for (let i = 0; i < len; i++) {
        bytes[i] = binaryString.charCodeAt(i);
      }
      queue.push(bytes);
      processQueue();
    });

    window.addEventListener("endOfAudio", () => {
      console.log("end of audio");
      console.log(this.mediaSource.sourceBuffers);
    });
  });

  audioElement.play();
}
Audio data is received through a WebSocket and dispatched as newAudioChunk events. Each chunk is base64-decoded and converted to a Uint8Array before being appended to the SourceBuffer. Occasionally, received audio chunks are not played immediately; instead, they play only after new chunks are received. What could be causing these audio chunks to be skipped initially and then played later?
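One thing worth checking (a guess, since it depends on the browser's buffering heuristics): while the MediaSource is still open and end-of-stream hasn't been signalled, the element may stall near the end of the buffered range waiting for more data, which would match short audio only playing once the next chunks arrive. Signalling end-of-stream when the server is done lets the element play out whatever is buffered. A sketch of what the existing endOfAudio handler could do instead (it assumes it lives inside the same sourceopen callback, so queue is in scope):
window.addEventListener("endOfAudio", () => {
  const finish = () => {
    // Only signal end-of-stream once all queued chunks have been appended.
    if (queue.length === 0 && !this.sourceBuffer.updating &&
        this.mediaSource.readyState === "open") {
      this.mediaSource.endOfStream();
    }
  };
  this.sourceBuffer.addEventListener("updateend", finish);
  finish();
});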
Hello all, over the last while I've been writing a series of posts about how to implement a data-driven Web Audio engine from scratch. Currently I have written the first 4 parts, and I want to continue as I have energy to give to this. The idea for these posts came from my first implementation of an engine like this, Blibliki.
If anyone is interested, I'm happy to hear comments here or in my blog.
https://mikezaby.com/posts/web-audio-engine-part1
https://mikezaby.com/posts/web-audio-engine-part2
I'm working on an art thing using web audio API. The programming is really simple - a few oscillators at fixed frequencies, their amplitude being modulated by some other oscillators, also at fixed (but much lower) frequencies.
Some of these LFOs are very slow, down in the thousandths-of-a-Hz range. I would love to have them start at a random point in their cycle, rather than at the consistent point they currently start. Is this possible?
I can do this per oscillator, but ideally all of the oscillators in the JavaScript would independently start at a random phase... is THAT possible?
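For what it's worth, OscillatorNode has no phase parameter, but for a sine you can bake an arbitrary starting phase into a custom PeriodicWave, since sin(ωt + φ) = cos(φ)·sin(ωt) + sin(φ)·cos(ωt) and createPeriodicWave takes exactly those sine (imag) and cosine (real) coefficients. A sketch that gives each LFO its own random phase:
// Create a sine oscillator that starts at a random point in its cycle.
function createRandomPhaseSine(ctx, frequency) {
  const phase = Math.random() * 2 * Math.PI;
  const real = new Float32Array([0, Math.sin(phase)]); // cosine terms
  const imag = new Float32Array([0, Math.cos(phase)]); // sine terms
  const wave = ctx.createPeriodicWave(real, imag);

  const osc = new OscillatorNode(ctx, { frequency });
  osc.setPeriodicWave(wave);
  return osc;
}

// Each call picks its own random phase, so every LFO starts somewhere different:
const lfo = createRandomPhaseSine(ctx, 0.004);  // a thousandths-of-a-Hz LFO
lfo.connect(someGainNode.gain);                 // whatever parameter it modulates
lfo.start();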
Has anyone experienced any issues on macOS? I have set the frequency to exactly 562 Hz with a detune of exactly -700 cents, which should result in a perfectly steady sine wave. It must be a software issue, as the Windows version is running in a VM and has no problems. The waveform seems to flip every other frame and I don't know why.
This is the visualizer on Windows:
And this is the visualizer on macOS:
Just in case anyone will find this useful in their own projects… I wanted to promote a package I've published called earwurm:
I know there are already competent alternatives in this space, so to quickly summarize the purpose of this specific package:
Earwurm is an opinionated and minimal-scope solution for loading short audio files via the Web Audio API. Intended for playback of UI sound effects in modern browsers.
Minimal React example:
import {Earwurm, type LibraryEntry} from 'earwurm';

const entries: LibraryEntry[] = [
  {id: 'beep', path: 'assets/beep.webm'},
  {id: 'zap', path: 'assets/zap.mp3'},
];

const manager = new Earwurm();
manager.add(...entries);

// Optional: pre-fetch/decode each asset ahead of time
// so the browser is ready for immediate playback.
entries.forEach(({id}) => manager.get(id)?.prepare());

async function handlePlaySound(id = '') {
  const stack = manager.get(id);
  const sound = await stack?.prepare();
  sound?.play();
}

function Page() {
  return (
    <div>
      <button onClick={() => handlePlaySound('beep')}>Play beep</button>
      <button onClick={() => handlePlaySound('zap')}>Play zap</button>
    </div>
  );
}
An example of the above code can be tinkered with in this CodeSandbox. Better yet, the source code for the Demo site is included in the repo.
Earwurm doesn't try to solve for every use case, and is instead limited to what I believe is an expected set of patterns for UI sound effects.
That being said, I do consider this an MVP. There are other features I intend to add in the future, such as the ability to reverse playback, as well as adjust pitch. All of this is building towards having a tool that empowers me to build richer user experiences.
So, just in case anyone else finds this interesting, please give it a shot and feel free to report any issues you may encounter! 😁
Please forgive me, I know this is going to be long and hopefully not too redundant, but I have been battling this issue for almost two years, and I swear either there is no solution or it's the easiest thing and I keep stepping right over it. I would be forever in your debt for a miracle.
So I'm a piano player, and aside from Instagram, YouTube, the usual, I love going on Omegle because it's face to face. For close to two years now I have been having an audio issue that for the life of me I can't figure out, and I am pretty technically savvy. If I stream through OBS to a specific streaming site like YouTube or Instagram, my sound is great, aside from the usual hums and buzzes between my Yamaha electric keyboard and computer that I'm trying to get rid of.

When you are "streaming" on Omegle you aren't really streaming: you are using OBS as a virtual camera and virtual audio device, except you are going through an internet browser and telling the browser which virtual devices to use, and that should be it. That's pretty much how most streamers do it, from what I understand. The issue is with my live setup, my keyboard and microphone, which we know works correctly because it works when I stream outside of a browser. When I go through the browser, my sound just cuts off after a few seconds, slowly, like it's being muffled. If I stop for a while it's like the noise suppression releases and I can play a few more notes on the piano, and then it slowly dies out again into a weird muffle where only every third or fourth note is heard, until I stop, and then you can literally hear the usual static return, so you know you are back at full volume, if you understand what I'm saying.

When I say I have tried everything, I mean everything, all the way to a fresh install of Windows and installing a Sound Blaster sound card. All of the usual Windows sound settings: letting apps take exclusive control, not letting them take exclusive control, setting the Communications tab to do nothing. Ultimately, because it works when I am not using a browser, I know it's not a hardware issue, 99%, because I also bought a brand new computer and have nothing on it but my streaming stuff. I have gone into the Realtek audio control panel, made sure my Steam account wasn't doing anything, made sure my NVIDIA RTX 4070 isn't suppressing anything. But again, if I try to go directly through an internet browser, my sound gets suppressed. Because I watch Harry Mack and other streamers so much, I keep saying to myself: he can do it, so I know other people do it, so I know it can be done.

I have tried the top three browsers, I have gone into the experimental flag settings and shut off any kind of audio interference, and I even found an extension that I tried, because here's roughly what physically happens with my setup. After getting the new computer going for the first time, praying that it was going to work, I noticed that as soon as I went on the browser and the browser took control of my microphone and camera (because it gets permission), my microphone volume would automatically, slowly increase all the way up to 100 as soon as I played a note on my piano. So this leads me to some kind of conclusion: when I'm going online, it doesn't think I'm loud enough, so it starts suppressing me, and then after it suppresses my sound it tries to make my volume higher, which in turn makes it suppress my sound again. Forgive me if I sound like a crazed maniac, but I have tried so many things, and this has literally been the number one thorn in my side for almost two years.
Because I have two computers, I can test it by going on Omegle with myself using a crazy keyword. But then I started thinking: I'm on the same internet line, maybe it's bouncing back and forth between my two computers. Yet it literally happens even if I just do a microphone test online; I can see all of the bars get lower and lower and lower on the little equalizer thing. My God, if you are reading all of this, I'm already grateful, because I just feel so alone in this.

I'm to the point where I think it's my AT&T fiber doing something, because there are only so many common denominators now that I'm using a new computer, but something is suppressing my sound, and I would literally pay somebody $100 to help me, and that's a lot of money to me, I promise. I know you guys know everything. I guess I have the option of just giving up, but maybe somebody would go on Omegle with me and test it; we would just have to put in a keyword like "submarine" or something off the wall, and we would have to time it. Sorry to be a pain, but at least you would hear what I'm talking about. I hear other people play music and piano and synthesizers and violins, LOL, I can go on and on. I know other people do it, and I am a pretty smart dude, but something is choking my sound and I am on my hands and knees begging for help. Oh wise internet gods, hear me, please send me the magic wand I am so desperate for. Barry.