HTML5 Spectral graph animation with the change of music rhythm


The HTML5 audio processing interface introduced here is not the same thing as the audio tag. The audio tag on a page is just a more semantic HTML5 element, whereas the audio API that HTML5 exposes to JavaScript gives us the ability to manipulate the raw audio stream directly in code, making it available for arbitrary processing and reworking.

The most typical visual showcase of the HTML5 Audio API is a spectrum graph that follows the rhythm of the music, also known as a visualization. This article uses such an example to show how audio data can be manipulated in JavaScript.

The code in this article is for reference only; for the actual code, the downloadable source at the end is authoritative.

Learn about the audio API
A piece of audio is intercepted before it reaches the speaker, which is how we get at the audio data. This interception is done through window.AudioContext; all of our operations on audio are based on this object.

Through an AudioContext we can create different kinds of AudioNode, i.e. audio nodes. Different nodes do different things: some add filters to the audio, for example to improve the tone (BiquadFilterNode); some split the audio, for example splitting the source by channel to obtain the left and right channels separately (ChannelSplitterNode); and some perform spectrum analysis on the audio data, which is what this article uses (AnalyserNode).





Audio API in the browser
Unified Prefix
Processing audio in JavaScript requires first instantiating an audio context, window.AudioContext. Currently Chrome and Firefox support it, but with a vendor prefix: it is window.webkitAudioContext in Chrome and window.mozAudioContext in Firefox. To make the code generic enough to work in both browsers, a single line is enough to unify the prefixes.

window.AudioContext = window.AudioContext ||
    window.webkitAudioContext ||
    window.mozAudioContext ||
    window.msAudioContext;

This is a common idiom: the OR operator '||' evaluates the chained expressions from left to right and returns the first truthy operand it encounters. In Chrome, for example, window.AudioContext is undefined, so evaluation moves on and finds window.webkitAudioContext, which is not undefined; that operand is truthy, so it is returned. After this line, window.AudioContext equals window.webkitAudioContext, and we can use window.AudioContext directly in the code without worrying about whether we are running in Chrome or Firefox.

var audioContext = new window.AudioContext();
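The same short-circuit behaviour can be demonstrated with plain objects; a minimal sketch, where `vendorWindow` is a stand-in of my own for a browser that only exposes the webkit-prefixed constructor, not the real `window`:

```javascript
// A stand-in object simulating a browser where only the webkit-prefixed
// constructor exists (as in older Chrome).
var vendorWindow = {
    webkitAudioContext: function () { this.kind = 'webkit'; }
};

// The || chain returns the first operand that is not undefined/falsy,
// so the prefixed constructor wins here.
var AudioContextCtor = vendorWindow.AudioContext ||
    vendorWindow.webkitAudioContext ||
    vendorWindow.mozAudioContext ||
    vendorWindow.msAudioContext;

console.log(AudioContextCtor === vendorWindow.webkitAudioContext); // true
```

After the assignment, code can call the unified name without caring which vendor actually provided it.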

Considering browsers that do not support it
However, this only works in browsers that support AudioContext. In IE, the instantiation above will fail, so we need to wrap it in a try-catch statement to avoid a hard error.
try {
    var audioContext = new window.AudioContext();
} catch (e) {
    console.log('!Your browser does not support AudioContext');
}

This is much safer; no more worrying about the browser throwing an error.

Organizing the Code
To keep the code tidy, we create a Visualizer object and write all the related properties and methods onto it. By convention, the object's properties go directly inside the constructor, and its methods go on the prototype. Private methods used only inside the object begin with an underscore; this is not mandatory, just a good naming habit. Some basic properties are set here for later code to use; see the source for details.

var Visualizer = function () {
    this.file = null;         // the file to be processed; how it is obtained comes later
    this.fileName = null;     // the name of the file being processed
    this.audioContext = null; // the audio-processing context, initialized below
    this.source = null;       // saves the audio source node
};
Visualizer.prototype = {
    _prepareAPI: function () {
        // unify the prefixes for easy invocation
        window.AudioContext = window.AudioContext ||
            window.webkitAudioContext ||
            window.mozAudioContext ||
            window.msAudioContext;
        // patch requestAnimationFrame the same way; it is used later for the animation
        window.requestAnimationFrame = window.requestAnimationFrame ||
            window.webkitRequestAnimationFrame ||
            window.mozRequestAnimationFrame ||
            window.msRequestAnimationFrame;
        // safely instantiate an AudioContext and assign it to the Visualizer's
        // audioContext property for later audio processing
        try {
            this.audioContext = new AudioContext();
        } catch (e) {
            console.log('!Your browser does not support AudioContext :(');
            console.log(e);
        }
    }
};
Loading audio files
Needless to say, you must get the audio file into your code before you can process it further.

File acquisition: a file can be read into JavaScript in the following three ways:
1. Fire an Ajax asynchronous request to get the file. When testing locally you need to turn off the browser's same-origin security policy for this to succeed; otherwise the page has to be served from a server.
Specifically, make an XMLHttpRequest with the file path as the requested URL, and set the request's responseType to 'arraybuffer', a format convenient for our subsequent processing. Here is an example.
loadSound("sample.mp3"); // call
// define the function for loading an audio file
function loadSound(url) {
    var request = new XMLHttpRequest();   // create a request
    request.open('GET', url, true);       // configure the request type, file path, etc.
    request.responseType = 'arraybuffer'; // configure the data return type
    // once loading completes, the audio can be further processed, e.g. decoded
    request.onload = function () {
        var arrayBuffer = request.response;
    };
    request.send();
}

2. Use a file-type input for file selection and listen to the input's onchange event; as soon as a file is selected, the code can get it and start processing. This method is convenient and needs no server.

3. Drag and drop the file onto the page. Slightly more involved than the previous method (you have to listen for 'dragenter', 'dragover', 'drop' and other events), but it also works fine in a local environment without server support.

Methods 2 and 3 are convenient for local development and testing, so we implement both, supporting file selection as well as file dragging.

(1) Getting the file via selection
Put a file-type input on the page, then listen for its onchange event in JavaScript. This event fires when the value of the input changes.
Regarding onchange there is a small difference between Chrome and Firefox. Once a file has been selected the input has a value; if you select the same file again, Chrome will not trigger onchange, whereas Firefox will. This has little bearing on our example.
<label for="uploadedFile">Drag&drop or select a file to play:</label>
<input type="file" id="uploadedFile" />

Of course, we also add the canvas we will eventually draw on here, so it needs no further mention later. The following is the final HTML; the page basically does not change after this, since most of the work is written in JavaScript.
<div id="wrapper">
    <div id="fileWrapper" class="file_wrapper">
        <div id="info">
            HTML5 Audio API showcase | An audio visualizer
        </div>
        <label for="uploadedFile">Drag&drop or select a file to play:</label>
        <input type="file" id="uploadedFile" />
    </div>
    <div id="visualizer_wrapper">
        <canvas id="canvas" width="800" height="350"></canvas>
    </div>
</div>
Add a little bit of style:
#fileWrapper {
    transition: all 0.5s ease;
}
#fileWrapper:hover {
    opacity: 1 !important;
}
#visualizer_wrapper {
    text-align: center;
}

A new method is added to the prototype of the Visualizer object to listen for file selection, i.e. the onchange event discussed earlier, and to get the selected file inside that event.

_addEventListener: function () {
    var that = this,
        audioInput = document.getElementById('uploadedFile'),
        dropContainer = document.getElementsByTagName('canvas')[0];
    // listen for a file being selected
    audioInput.onchange = function () {
        // the file list length tells us whether the user really chose a file;
        // if they clicked cancel, the length is 0
        if (audioInput.files.length !== 0) {
            that.file = audioInput.files[0]; // assign the file to the Visualizer object's property
            that.fileName = that.file.name;
            that._start(); // with the file in hand, start the program; this method is defined later
        }
    };
}

(2) Getting the file via drag and drop
We use the canvas on the page as the drop target for the file, listening for the drag events 'dragenter', 'dragover', 'drop' and so on, again inside the _addEventListener method added above. Here is the code for the event listeners.
dropContainer.addEventListener("dragenter", function () {
    that._updateInfo('Drop it on the page', true);
}, false);
dropContainer.addEventListener("dragover", function (e) {
    e.stopPropagation();
    e.preventDefault();
    e.dataTransfer.dropEffect = 'copy'; // set the file-drop mode to copy
}, false);
dropContainer.addEventListener("dragleave", function () {
    that._updateInfo(that.info, false);
}, false);
dropContainer.addEventListener("drop", function (e) {
    e.stopPropagation();
    e.preventDefault();
    that.file = e.dataTransfer.files[0]; // get the file and assign it to the Visualizer object
    that.fileName = that.file.name;
    that._start();
}, false);

Notice that in the code above we set the drop effect to 'copy' during 'dragover', so the file is received as a copy; if this is not set, the file cannot be retrieved correctly.

Then, in the 'drop' event, we get the file and move on to the next step.

Reading the file as an ArrayBuffer with FileReader

Here comes the _start() method. We now have the file, but it first needs to be converted into the ArrayBuffer format so it can be handed to the AudioContext for decoding. So the job of _start() is to instantiate a FileReader and read the file as an ArrayBuffer.

_start: function () {
    // read and decode the file into an audio array buffer
    var that = this,           // this refers to the Visualizer object here; saved as that for use elsewhere
        file = this.file,      // the file obtained earlier, taken from the Visualizer object
        fr = new FileReader(); // a FileReader instance used to read the file
    fr.onload = function (e) { // called after the file has been read
        var fileResult = e.target.result;     // the ArrayBuffer data produced by a successful read
        var audioContext = that.audioContext; // the AudioContext instantiated earlier, used to decode the ArrayBuffer
        audioContext.decodeAudioData(fileResult, function (buffer) {
            // called when decoding succeeds; the parameter buffer is the decoded result
            that._visualize(audioContext, buffer); // proceed to the next step; this method is defined later
        }, function (e) {
            // called if decoding fails
            console.log('!Fail to decode the file :(');
        });
    };
    // hand the file obtained in the previous step to the FileReader
    // and read it as an ArrayBuffer
    fr.readAsArrayBuffer(file);
}

Notice that we assign this to that, and then use that inside the AudioContext.decodeAudioData callback to refer to our Visualizer object. This is a scope issue. JavaScript cannot create block-level scopes with curly braces; the only construct that creates a scope is the function: one function, one scope. What this points to inside a function depends on how the function is called, and inside the callback above it no longer points to our Visualizer object. So to invoke a method or property of the Visualizer in that callback, it has to be carried in through another variable, here that: we assign the outer this (our Visualizer object) to the new local variable that, which the inner scope can use without conflicting with its own this. The same pattern appears elsewhere in the source code; download the source of this article and study it at leisure.
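The pattern can be demonstrated without any audio code at all; a small sketch, where the names `Counter` and `laterDouble` are made up for illustration:

```javascript
function Counter() {
    this.count = 1;
    var that = this; // capture the outer this before entering the callback
    this.laterDouble = function (runCallback) {
        runCallback(function () {
            // inside this callback, `this` no longer points to the Counter,
            // but `that` still does
            that.count = that.count * 2;
        });
    };
}

var c = new Counter();
c.laterDouble(function (cb) { cb(); }); // immediately invoke the callback
console.log(c.count); // 2
```

If the callback had written `this.count` instead, it would not have reached the Counter instance; `that` is what bridges the two scopes.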

So, in the callback of AudioContext.decodeAudioData, once decoding finishes and the AudioBuffer (the buffer parameter) is obtained, audioContext and buffer are passed to the Visualizer's _visualize() method for further processing: playing the music and drawing the spectrum graph. Of course, _visualize() does not exist yet; we implement it next.

Creating the analyser and playing the audio

The file has been decoded and the audio buffer data obtained. Next we set up the AnalyserNode of our AudioContext to get the spectral energy information. We add a _visualize method to the Visualizer object and do this work inside it.

Playing the audio
First the buffer has to be hooked up to the AudioContext. The AudioContext is only a container; to make it do real work, the actual music information has to be passed into it. That means creating a buffer source node and assigning the audio buffer data to its buffer property.

If that sounds a little dizzying, don't worry; it becomes clearer in code. Programmers are creatures that understand code better than prose.
var audioBufferSourceNode = audioContext.createBufferSource();
audioBufferSourceNode.buffer = buffer;

With these two lines, the contents of the audio file have been loaded into the AudioContext. Now it is time to start playing our audio.
audioBufferSourceNode.start(0);

The parameter here is a time indicating the moment at which playback should start. Note: older browser versions used noteOn() to start playback, with the same parameter meaning the starting moment.
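A defensive way to support both API generations is to feature-detect before calling; a sketch, where `playSource` is a helper name of my own, not part of the Web Audio API, and the mock objects stand in for real AudioBufferSourceNodes:

```javascript
// Call start() where available, falling back to the legacy noteOn().
function playSource(source, when) {
    if (typeof source.start === 'function') {
        source.start(when);
    } else {
        source.noteOn(when);
    }
}

// Mock source objects record which method was invoked.
var calls = [];
playSource({ start: function (t) { calls.push('start@' + t); } }, 0);
playSource({ noteOn: function (t) { calls.push('noteOn@' + t); } }, 0);
console.log(calls); // [ 'start@0', 'noteOn@0' ]
```

A modern source node takes the first branch; an old one that only has noteOn() takes the second, so the same call site works in both.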

But there is no sound yet, because of the missing next step: connecting the audioBufferSourceNode to audioContext.destination. This destination of the AudioContext corresponds to the speakers.
audioBufferSourceNode.connect(audioContext.destination);
audioBufferSourceNode.start(0);

Now you can hear the sound coming from the speakers.
_visualize: function (audioContext, buffer) {
    var audioBufferSourceNode = audioContext.createBufferSource();
    audioBufferSourceNode.connect(audioContext.destination);
    audioBufferSourceNode.buffer = buffer;
    audioBufferSourceNode.start(0);
}


Creating the analyser
Create an AnalyserNode to obtain the spectral energy values.
var analyser = audioContext.createAnalyser();
In the step above we connected the audioBufferSourceNode directly to audioContext.destination, so the audio went straight to the speakers and played. Now, in order to intercept the audio before it plays, we insert the analyser between the audioBufferSourceNode and audioContext.destination. Once that idea is understood, the code is simple: connect the audioBufferSourceNode to the analyser, and the analyser to the destination.
audioBufferSourceNode.connect(analyser);
analyser.connect(audioContext.destination);

Then start playback. Now all the audio data passes through the analyser; we take the spectral energy information from the analyser and draw it onto the canvas.

Suppose we have already written a method _drawSpectrum(analyser) that draws the spectrum graph:
_visualize: function (audioContext, buffer) {
    var audioBufferSourceNode = audioContext.createBufferSource(),
        analyser = audioContext.createAnalyser();
    // connect the source to the analyser
    audioBufferSourceNode.connect(analyser);
    // connect the analyser to the destination, completing the path to the speakers
    analyser.connect(audioContext.destination);
    // assign the buffer decoded in the previous step to the source
    audioBufferSourceNode.buffer = buffer;
    // play
    audioBufferSourceNode.start(0);
    // once the music starts, hand the analyser to another method to draw the
    // spectrum, since the information needed for drawing comes from the analyser
    this._drawSpectrum(analyser);
}

Drawing a beautiful spectrum graph

The next task, and the final step, is to implement the _drawSpectrum() method, which draws the columns that dance with the music onto the page.
Drawing the energy bars
This calls for a little digital signal processing background, but don't be scared off if you don't know much of it. The spectrum reflects how a sound's energy is distributed across frequencies, and it is obtained by running the input signal through a Fourier transform (university coursework finally put to use). That is about all the background needed here. Note that a true spectrogram is continuous in frequency, not evenly spaced discrete bars like the final effect we will see.

With the code below we can obtain from the analyser the energy of each frequency in the audio at this moment.
var array = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(array);

The array now holds all the data from 0Hz at the low end up to roughly half the sample rate at the high end. With frequency on the x-axis and energy on the y-axis, we can get a graph similar to the following.
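For reference, the frequency each array index corresponds to can be computed from the context's sample rate: bin i sits at roughly i * sampleRate / fftSize Hz, and frequencyBinCount is fftSize / 2. A sketch using the AnalyserNode default fftSize of 2048 and an assumed 44100Hz sample rate (real code would read audioContext.sampleRate):

```javascript
// Frequency (Hz) represented by a given bin of the getByteFrequencyData output.
function binToFrequency(i, sampleRate, fftSize) {
    return i * sampleRate / fftSize;
}

var sampleRate = 44100; // assumed for this example
var fftSize = 2048;     // the AnalyserNode default
var frequencyBinCount = fftSize / 2; // 1024 bins in the data array

console.log(binToFrequency(0, sampleRate, fftSize));                 // 0 (the low end)
console.log(binToFrequency(frequencyBinCount, sampleRate, fftSize)); // 22050, half the sample rate
```

So the "~Hz" high end of the array is the Nyquist frequency, half of whatever sample rate the context runs at.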



So, for example, if array[0] = 100 we know to draw a bar at x = 0 with a height of 100 units; if array[1] = 50, draw a bar at x = 1 with a height of 50 units; and so on. If you iterate over the whole array with a for loop and draw everything, you get a graph like the one above.
Sampling

But that is not the effect we are after; we only want to sample the data, for example with a step of 100, taking one value per step, so that a limited number of bars represents the whole spectrum.

Put differently: first decide the width of each bar and the gap between them based on the canvas size, which gives the total number of bars in the picture, and from that work out the sampling step to use. That is how this example does it. If it still sounds dizzying, just look at the simple code below:

var canvas = document.getElementById('canvas'),
    meterWidth = 10,           // width of each energy bar
    gap = 2,                   // gap between energy bars
    meterNum = 800 / (10 + 2); // how many bars can be drawn on the current canvas
var step = Math.round(array.length / meterNum); // calculate the sampling step over the analyser data

Our canvas is 800px wide, and we set the bar width to 10px with a 2px gap between bars, so meterNum is the total number of bars we can draw. Dividing the array's total length by this number gives the sampling step: when traversing the array, we take one value per step to draw, namely array[i * step]. This picks out meterNum values evenly, which correctly reflects the shape of the original spectrum.
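The arithmetic above can be checked in isolation; a sketch assuming the default frequencyBinCount of 1024 (fftSize 2048):

```javascript
var canvasWidth = 800;
var meterWidth = 10, gap = 2;
// roughly 66.7 bars fit across the canvas
var meterNum = canvasWidth / (meterWidth + gap);
// analyser.frequencyBinCount with the default fftSize of 2048
var arrayLength = 1024;
// one sample drawn for every `step` entries of the data array
var step = Math.round(arrayLength / meterNum);

console.log(Math.floor(meterNum)); // 66
console.log(step);                 // 15
```

So with these numbers the loop draws about 66 bars, reading every 15th value of the 1024-entry array.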

var canvas = document.getElementById('canvas'),
    cwidth = canvas.width,
    cheight = canvas.height - 2,
    meterWidth = 10,           // width of each energy bar
    gap = 2,                   // gap between energy bars
    capHeight = 2,
    meterNum = 800 / (10 + 2), // how many bars can be drawn on the current canvas
    ctx = canvas.getContext('2d'),
    array = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(array);
var step = Math.round(array.length / meterNum); // calculate the sampling step
ctx.clearRect(0, 0, cwidth, cheight); // clear the canvas, ready to draw
// define a gradient style for drawing
gradient = ctx.createLinearGradient(0, 0, 0, 300);
gradient.addColorStop(1, '#0f0');
gradient.addColorStop(0.5, '#ff0');
gradient.addColorStop(0, '#f00');
ctx.fillStyle = gradient;
// traverse the source array with the sampling step, drawing each spectrum bar
for (var i = 0; i < meterNum; i++) {
    var value = array[i * step];
    ctx.fillRect(i * 12 /* bar width + gap */, cheight - value + capHeight, meterWidth, cheight);
}

Using requestAnimationFrame to animate the bars

The code above only draws the spectrum for a single instant. To set the whole picture in motion we need to redraw continuously, and window.requestAnimationFrame() provides exactly this repaint-for-animation facility. A simple transformation of the code gives us the effect we want: a spectrum histogram that follows the music.

var canvas = document.getElementById('canvas'),
    cwidth = canvas.width,
    cheight = canvas.height - 2,
    meterWidth = 10,           // width of each energy bar
    gap = 2,                   // gap between energy bars
    capHeight = 2,
    meterNum = 800 / (10 + 2), // how many bars can be drawn on the current canvas
    ctx = canvas.getContext('2d');
// define a gradient style for drawing
gradient = ctx.createLinearGradient(0, 0, 0, 300);
gradient.addColorStop(1, '#0f0');
gradient.addColorStop(0.5, '#ff0');
gradient.addColorStop(0, '#f00');
ctx.fillStyle = gradient;
var drawMeter = function () {
    var array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(array);
    var step = Math.round(array.length / meterNum); // calculate the sampling step
    ctx.clearRect(0, 0, cwidth, cheight); // clear the canvas, ready to draw
    for (var i = 0; i < meterNum; i++) {
        var value = array[i * step];
        ctx.fillRect(i * 12 /* bar width + gap */, cheight - value + capHeight, meterWidth, cheight);
    }
    requestAnimationFrame(drawMeter);
};
requestAnimationFrame(drawMeter);

Drawing slowly falling caps
With the previous step, the main work is done. Finally, for aesthetics, we implement the caps that fall slowly above the bars.
The principle is simple: while drawing a bar, draw a short bar at the same x position whose top sits above the spectrum bar. The hard part is making it fall slowly.

To be clear, take a single bar as an illustration. When the bar at this instant is taller than at the previous instant, we are watching the spectrum surge upward, so the cap sits right on top of the bar; that case is easy to draw. Consider the opposite case: when the height at this instant is lower than at the previous instant, the bar below retracts immediately, but we need to remember the cap's height from the previous moment and draw the cap one unit below that previous position (y - 1). If at the next instant the spectrum bar still has not reached the cap, keep letting it fall, drawing the cap at y - 1 each frame.

From this analysis it follows that each time we draw the spectrum, we must store each cap's current y value (its vertical position) in a variable outside the loop. On the next draw we read from that variable, compare the value at this instant with the stored value from the previous instant, and draw the cap accordingly.
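The per-bar decision can be isolated into a tiny pure function; a sketch, where the name `nextCapY` is mine, not from the source:

```javascript
// Given the cap's stored position from the last frame and the bar's current
// value, return the cap position to draw this frame.
function nextCapY(previousCap, value) {
    if (value < previousCap) {
        return previousCap - 1; // bar dropped: let the cap fall by one unit
    }
    return value; // bar reached or passed the cap: stick the cap on top of the bar
}

console.log(nextCapY(100, 40)); // 99 — the cap drifts down one step
console.log(nextCapY(99, 40));  // 98 — and keeps falling each frame
console.log(nextCapY(98, 120)); // 120 — a loud frame snaps the cap back up
```

The implementation below does exactly this comparison inline, with capYPositionArray playing the role of the remembered previous positions.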

Finally, the implementation code is given:

_drawSpectrum: function (analyser) {
    var canvas = document.getElementById('canvas'),
        cwidth = canvas.width,
        cheight = canvas.height - 2,
        meterWidth = 10,           // width of each spectrum bar
        gap = 2,                   // gap between spectrum bars
        capHeight = 2,
        capStyle = '#fff',
        meterNum = 800 / (10 + 2), // number of spectrum bars
        capYPositionArray = [],    // stores each cap's position from the previous frame
        ctx = canvas.getContext('2d'),
        gradient = ctx.createLinearGradient(0, 0, 0, 300);
    gradient.addColorStop(1, '#0f0');
    gradient.addColorStop(0.5, '#ff0');
    gradient.addColorStop(0, '#f00');
    var drawMeter = function () {
        var array = new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(array);
        var step = Math.round(array.length / meterNum); // calculate the sampling step
        ctx.clearRect(0, 0, cwidth, cheight);
        for (var i = 0; i < meterNum; i++) {
            var value = array[i * step]; // the current energy value
            if (capYPositionArray.length < Math.round(meterNum)) {
                // initialize the cap-position array by pushing the first frame's data
                capYPositionArray.push(value);
            }
            ctx.fillStyle = capStyle;
            // draw the cap
            if (value < capYPositionArray[i]) {
                // the current value is lower than the previous one:
                // draw the cap with the stored, slowly falling value
                ctx.fillRect(i * 12, cheight - (--capYPositionArray[i]), meterWidth, capHeight);
            } else {
                // otherwise draw it right on top of the bar and remember the position
                ctx.fillRect(i * 12, cheight - value, meterWidth, capHeight);
                capYPositionArray[i] = value;
            }
            // draw the spectrum bar
            ctx.fillStyle = gradient;
            ctx.fillRect(i * 12, cheight - value + capHeight, meterWidth, cheight);
        }
        requestAnimationFrame(drawMeter);
    };
    requestAnimationFrame(drawMeter);
}

Reprint: http://www.108js.com/article/article7/70196.html?id=983

