Fragmented Notes

This is where I write about learning to code

Looking Into Web-audio - Generating Sound

It looks as if it is possible to create sound using nothing but the capabilities of the browser, accessing the Web Audio API from JavaScript. I take a first look into it.

Yes, there is web-audio. What’s the point?

The reason I am writing this blog post is that I want to understand the subject myself, motivated by a course on Creative Programming for Audiovisual Art, taught by Mick Grierson at Goldsmiths, University of London. I ran into technical problems right in the first session: the web platform used for coding in the course ate all my RAM, basically freezing itself and my entire computer. So I looked into the library used in the course (Maxim, a C++ library ported to JavaScript, written by Mick Grierson), tried the examples provided with it, and got no sound, only error messages. Since I am in no hurry to get through the course (I am only accessing it on the free plan, where I don’t get graded anyway), I decided to dig a little deeper, to understand the issues I am having and to find a way to generate sound that will not freeze my computer or make my browsers complain, and that works well with the course.

So in this post I will take a basic look at what the Web Audio API is, see which libraries look promising to me, and maybe even find out what I did wrong with Maxim in the process. At the end I would like to have a couple of simple working examples, built with different technologies.

Dan Mackinlay wrote about this topic in a blog post that was one of my first finds when looking for clues about web audio.

Can I use it?

As with all modern web technologies, one of the first questions might be: does it even work in my browser? Can I use it for projects that are meant to be seen by the public, or will they run into problems with their browsers or devices?

The short answer is “Yes”.

The long answer: as of November 2016 it is supported by all major desktop browsers, on iOS, and in Chrome for Android. The only browsers that do not support Web Audio are Opera Mini and Internet Explorer (source), but who cares about IE anymore.

Some parts of it still differ between browsers, though.

The “getUserMedia/Stream API” seems to be a part of the Web Audio world that is still at least a little problematic, because the browsers that implement it handle it differently: Firefox and Chrome require vendor prefixes (Firefox uses “moz”, Chrome and Opera use “webkit”), and in Chrome it can only be called from secure origins (HTTPS).
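To make this concrete, feature detection around this time was commonly done by falling back to the prefixed names where the unprefixed ones are missing. This is my own sketch of that pattern, not code from the course:

```javascript
// Pick whichever AudioContext constructor the browser offers.
// Returns null outside a browser (or in a browser without Web Audio).
function getAudioContextConstructor() {
  if (typeof window === 'undefined') return null; // not in a browser
  return window.AudioContext || window.webkitAudioContext || null;
}

// Same idea for the older navigator.getUserMedia:
// Firefox shipped it as mozGetUserMedia, Chrome/Opera as webkitGetUserMedia.
function getUserMediaFunction() {
  if (typeof navigator === 'undefined') return null;
  return navigator.getUserMedia ||
         navigator.mozGetUserMedia ||
         navigator.webkitGetUserMedia ||
         null;
}
```

Either function simply returns null when nothing is available, so the calling code can show a friendly message instead of crashing.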

How to use it?

Basically there are two main options: using it directly with vanilla JavaScript, or using it with the help of a library. The existence of multiple libraries might indicate that there is a need for one.

Without library

The primary documentation is, as always, the specification by the W3C, but since it is a specification and not a tutorial or introduction, it is not the best read.

Surprisingly (considering that IE does not support Web Audio), Microsoft has a good, short introduction to the API. It is not really helpful for generating sound, though, because it focuses on working with audio inputs like sound files or microphone streams; the relevant information for generating sound is in the article about the OscillatorNode. The same goes for the Mozilla Developer Network pages on Web Audio and the OscillatorNode. While here I am only looking into creating sound, the AnalyserNode deserves a mention for later projects that might include audio visualization. There is a nice example of the OscillatorNode combined with some graphics on MDN, called Violent Theremin.

With this I was able to produce a simple page that plays a sine wave and adjusts the volume with two buttons:

Example 1
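The embedded example does not survive in this text, but a minimal reconstruction of its script might look like the following. The element ids and the specific frequency and volume values are my assumptions:

```javascript
// Example 1 (reconstructed sketch): play a sine wave and adjust its
// volume with two buttons. Browser-only code; nothing runs until the
// user clicks "play".
let audioCtx, oscillator, gainNode;

function startTone() {
  // Create the context lazily, after a user gesture.
  // The webkit prefix covers older Safari/Chrome versions.
  audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  oscillator = audioCtx.createOscillator();
  gainNode = audioCtx.createGain();

  oscillator.type = 'sine';
  oscillator.frequency.value = 440; // concert A
  gainNode.gain.value = 0.5;

  // Node graph: source -> gain -> destination
  oscillator.connect(gainNode);
  gainNode.connect(audioCtx.destination);
  oscillator.start();
}

// Keep the gain in the safe range [0, 1] (pure helper).
function clampGain(value) {
  return Math.min(1, Math.max(0, value));
}

function changeVolume(delta) {
  gainNode.gain.value = clampGain(gainNode.gain.value + delta);
}

// On the page, three buttons would call these functions, e.g.:
// <button onclick="startTone()">play</button>
// <button onclick="changeVolume(0.1)">louder</button>
// <button onclick="changeVolume(-0.1)">quieter</button>
```

The important part is the last three lines of startTone(): nodes are wired together with connect() and nothing sounds until the chain reaches audioCtx.destination.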

As you can see in the code or in the documentation mentioned above, the AudioContext works with different nodes that are connected to one another: beginning with one or more sources, going through nodes that process the signal, and ending at a destination. A very nice tutorial that goes deep enough to do real things is Getting Started with Web Audio API by HTML5 Rocks.

With a library

For a list of existing libraries you can go to Dan Mackinlay's blog post that I mentioned in the first paragraph. I will look into two that are of interest to me.

p5.sound

As the name suggests, this is an addition to p5.js, a JavaScript library that is a reimagining of the Processing language for creative programming.

The sound library is provided on the website of the p5.js project, and it is also included in the p5 editor. That makes using it very easy if you are already using p5 for creative programming.

To generate sounds there is also an oscillator object, but p5.Oscillator combines with it some functions that belong to other nodes in the Web Audio API. For example, the amp() method for controlling the amplitude not only changes the amplitude value, it also takes an optional second argument for smoothing the transition.

p5.js has another advantage, and that is Daniel Shiffman, a professor at NYU who makes great videos about p5 and Processing on his YouTube channel, like this long one about sound visualization.

Example 2
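The second embedded example is also missing here; a small p5.sound sketch in its spirit could look like this. It assumes the p5.js and p5.sound libraries are loaded on the page, and the frequency range is my own choice:

```javascript
// Example 2 (sketch): a sine tone with p5.sound, controlled by the mouse.
let osc;
let playing = false;

function setup() {
  createCanvas(200, 200);
  osc = new p5.Oscillator('sine');
  osc.freq(440);
  osc.amp(0); // start silent
}

// Map the horizontal mouse position to 200-800 Hz (pure helper).
function mouseXToFreq(x, width) {
  return 200 + (x / Math.max(1, width)) * 600;
}

function mousePressed() {
  if (!playing) {
    osc.start();
    playing = true;
  }
  osc.freq(mouseXToFreq(mouseX, width));
  // Ramp the amplitude up over 0.5 s instead of jumping to it --
  // this is the smoothing argument of amp() mentioned above.
  osc.amp(0.5, 0.5);
}

function mouseReleased() {
  osc.amp(0, 0.5); // fade out
}
```

Compared with Example 1 there is no manual node wiring: the oscillator object hides the gain node behind amp(), which is exactly the kind of convenience a library adds.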

Maxim

I have not looked into this one yet, which is of course a little stupid, because it is the one used in the course that prompted this excursion.

Conclusion for today

It is not that difficult to start generating sound without any library. The Web Audio API has good documentation and is not very obscure.