Last week I worked up an audio visualizer in Flash for a friend, and it yielded some nice results. The ActionScript 3.0 ByteArray and computeSpectrum are powerful and have many potential uses. Doing the same thing with canvas and JavaScript requires a more roundabout method that is less accurate, but more gratifying from a coding perspective.
My journey down this rabbit hole almost led me somewhere drastic. I got myself a copy of Minefield and read up on how to build my own audio-patched version of Firefox… yeah, not going to happen.
Needing more of a quick fix, I came up with two solutions. My first attempt was to build a Flash audio sampler that piped the -1 to 1 floating point numbers out to a file, then use them in a JS array to manipulate the coordinates of sprites on a canvas. It worked quite well, but the synchronization was off. I have not fixed that version; in the interim I got sidetracked and found another way to get some data. Quite simple really: load up an mp3 in Audacity and go Analyze -> Time -> Beat Finder. Set the tolerance to about 55 and this will generate beat labels. Export those to a text file, then clean up the numbers to get an array of milliseconds for your JS setInterval().
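The cleanup step can be scripted rather than done by hand. Audacity exports label tracks as tab-separated lines of start time, end time, and label text, with times in seconds, so a small helper can turn that text straight into a milliseconds array (the function name and sample values here are my own, just for illustration):

```javascript
// Convert an Audacity label-track export into an array of beat
// times in milliseconds. Each exported line looks like:
//   "0.465<TAB>0.465<TAB>label"
// with times in seconds; Beat Finder labels are zero-length, so
// start and end are the same and we only need the first column.
function labelsToMillis(text) {
  return text
    .trim()
    .split("\n")
    .map(function (line) {
      var startSeconds = parseFloat(line.split("\t")[0]);
      return Math.round(startSeconds * 1000);
    });
}

var sample = "0.465\t0.465\t\n0.929\t0.929\t\n1.393\t1.393\t";
var beats = labelsToMillis(sample); // → [465, 929, 1393]
```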
Granted, this is not spectrum data—it’s simpler beat info—but in a way it works better for animating sprites because it’s cleaner. Spectrum data is very granular and works nicely in widgets designed to show it (e.g. graph and histogram visuals).
This test shows Twitter searches on an HTML5 canvas and moves them around in time to the beat of the music loaded in an audio tag. It’s not very exciting at this point, but it’s got me thinking of some fun stuff to do. Every experiment like this is a step closer to moving away from Flash. I still can’t settle on Processing or Raphaël, though.
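Since the beats aren’t evenly spaced, a single fixed setInterval() won’t line up with all of them; one way I could sketch it is to compute the gap between consecutive beats and chain setTimeout() calls instead (function names here are hypothetical, and onBeat stands in for whatever redraws the sprites):

```javascript
// Turn absolute beat times (ms) into the delay between each beat
// and the previous one, e.g. [465, 929, 1393] → [465, 464, 464].
function beatDelays(beats) {
  return beats.map(function (t, i) {
    return i === 0 ? t : t - beats[i - 1];
  });
}

// Chain setTimeout calls so `onBeat` fires once per beat, roughly
// in sync with an <audio> element started at the same moment.
function scheduleBeats(beats, onBeat) {
  var delays = beatDelays(beats);
  var i = 0;
  function next() {
    if (i >= delays.length) return;
    setTimeout(function () {
      onBeat(i); // move sprites on the canvas here
      i++;
      next();
    }, delays[i]);
  }
  next();
}
```

This still drifts the same way the setInterval() approach does, since JS timers make no accuracy guarantees over the length of a song.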