At the beginning of life, a baby announces its arrival with a loud cry; before it even opens its eyes, a pair of tiny ears has already begun to listen to the world. In today's user-experience work, almost every company has visual designers, but very few pay attention to auditory interaction. As the major browser vendors' HTML5 support grows ever more complete, front-end engineers are finally able to play with all kinds of sound waves themselves, taking front-end development a step further.
Sound is a mechanical wave, and the physical properties of waves let us do many things: encode data onto audio frequencies for transmission, analyze frequency-domain or time-domain data for visualization, or combine sound with light for audio-visual effects. All of the above can be achieved through the Web Audio specification in HTML5. I have set up a project on my GitHub to collect the Web Audio related code I have written before, and you are welcome to study it together with me.
In this first post of the series, I will start with the basics: how to make the browser play Do, Re, Mi with HTML5.
Music, at its simplest, is made up of durations and pitches, and every pitch has its own frequency. In Web Audio you can generate a tone of a specified frequency with an oscillator node. The code is as follows:
    var context = new webkitAudioContext(),
        osc = context.createOscillator();
    osc.frequency.value = 440;         // oscillator frequency in Hz
    osc.connect(context.destination);  // route the oscillator to the speakers
    osc.start(0);                      // start playing immediately
The code above produces a 440 Hz tone by setting the value of the frequency property on the osc object. Run it and the browser will keep emitting a 440 Hz tone, which is the pitch a1 in twelve-tone equal temperament. With this reference we can derive the frequencies of all the other notes and let the browser produce different pitches. To explain the relationship, we first need to introduce twelve-tone equal temperament.
In music theory, each octave group consists of 12 notes, named C, C#, D, D#, E, F, F#, G, G#, A, A#, B. Adjacent notes are one semitone apart, and the frequency ratio between adjacent semitones is 2^(1/12). After B comes the C of the next group, and so on. a1 is the A of the one-line octave, whose frequency is 440 Hz. Based on the semitone ratio, the frequency of a#1 in the one-line octave is 440 * 2^(1/12) Hz.
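In other words, a note n semitones above a1 (or below, for negative n) has a frequency of 440 * 2^(n/12) Hz. Here is a minimal sketch of that relationship (the helper name noteFrequency is my own, not from the original post):

    // Frequency of the note n semitones away from a1 (440 Hz).
    // n = 1  -> a#1 ≈ 466.16 Hz
    // n = 3  -> c2  ≈ 523.25 Hz
    // n = -9 -> c1  ≈ 261.63 Hz (middle C)
    function noteFrequency(n) {
        return 440 * Math.pow(2, n / 12);
    }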
Because the pitches of these notes are fixed and their number is limited, we can calculate their frequency values in advance and store them, so they do not have to be computed again and again. Readers can decide how many pitches to initialize according to their own needs.
    var musicalAlphabet = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'],
        freqChart = {},
        freqRange = 3, // c1 - b3
        i, j, base;
    for (i = 1; i <= freqRange; i++) {
        freqChart[i] = {};
        base = (i - 1) * 12;
        for (j = 0; j < 12; j++) {
            // a1 = 440 Hz, nine semitones above c1
            freqChart[i][musicalAlphabet[j]] = 440 * Math.pow(2, (base + j - 9) / 12);
        }
    }
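To check the table, you can look a note up and feed it straight into the oscillator from the first snippet. A minimal sketch reusing the freqChart object above (the post targets the prefixed webkitAudioContext of that era; current browsers expose the same constructor as AudioContext):

    var context = new webkitAudioContext(),
        osc = context.createOscillator();
    osc.frequency.value = freqChart[1]['C'];  // c1 (middle C), roughly 261.63 Hz
    osc.connect(context.destination);
    osc.start(0);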
In the key of C major we select C, D, E, F, G, A, B to form the scale, and their solfège syllables are Do, Re, Mi, Fa, Sol, La, Si. A frequency corresponds only to a note name, not to a solfège syllable. With the frequency of every pitch at hand, we can easily initialize all kinds of scales; readers unfamiliar with temperament can look it up themselves. A sketch of such a mapping follows below.
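For example, a Do-Re-Mi mapping for C major can be pulled out of freqChart like this (a small sketch; the names cMajorNotes, solfege and cMajor are my own, not from the original post):

    // Solfège syllables of the C major scale, mapped to frequencies
    // taken from the one-line octave (group 1) of freqChart.
    var cMajorNotes = ['C', 'D', 'E', 'F', 'G', 'A', 'B'],
        solfege = ['Do', 'Re', 'Mi', 'Fa', 'Sol', 'La', 'Si'],
        cMajor = {};
    for (var k = 0; k < cMajorNotes.length; k++) {
        cMajor[solfege[k]] = freqChart[1][cMajorNotes[k]];
    }
    // cMajor.Do ≈ 261.63 Hz, cMajor.La = 440 Hz, ...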
With pitch covered, the next thing to learn is note value, that is, duration. The most straightforward way to control how long a note sounds is to call the oscillator node's start and stop methods. The code is as follows:
    var context = new webkitAudioContext();
    var osc = context.createOscillator();
    osc.frequency.value = 440;
    osc.connect(context.destination);
    var _c = context.currentTime;
    osc.start(_c);
    osc.stop(_c + 1);
It is important to note that Web Audio has its own timeline; the current time on it can be read from the currentTime property of the context object, which is a read-only attribute according to the specification. The arguments of the oscillator's start and stop methods are doubles measured in seconds, so osc.stop(_c + 1) here means "stop one second after the current time". The number on an ordinary metronome tells you how many beats there are per minute, so dividing 60 by it gives the physical duration of one beat.
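A small sketch of that arithmetic (the bpm value and the beatDuration name are my own assumptions for illustration):

    var bpm = 120,                 // 120 beats per minute, as printed on a metronome
        beatDuration = 60 / bpm;   // so one beat lasts 0.5 seconds

    // Play a1 for exactly two beats.
    var context = new webkitAudioContext(),
        osc = context.createOscillator(),
        now = context.currentTime;
    osc.frequency.value = 440;
    osc.connect(context.destination);
    osc.start(now);
    osc.stop(now + 2 * beatDuration);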
With the most basic pitch and duration in hand, we can already play the simplest sheet music. Of course, performance techniques such as vibrato and string bending, and effects such as distortion and reverb, need other Web Audio nodes to be realized; I will cover those nodes and techniques bit by bit in future blog posts.
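To wrap up, here is a rough sketch of "the simplest sheet music": Do, Re, Mi scheduled one beat each, with one oscillator per note. The playNote helper and the tempo are my own additions for illustration, built on the freqChart table above:

    var context = new webkitAudioContext(),
        bpm = 100,
        beat = 60 / bpm;

    // Schedule one note: a dedicated oscillator is started
    // and stopped on the Web Audio timeline.
    function playNote(freq, when, duration) {
        var osc = context.createOscillator();
        osc.frequency.value = freq;
        osc.connect(context.destination);
        osc.start(when);
        osc.stop(when + duration);
    }

    // Do, Re, Mi from the one-line octave, one beat each.
    var melody = [freqChart[1]['C'], freqChart[1]['D'], freqChart[1]['E']],
        start = context.currentTime;
    for (var n = 0; n < melody.length; n++) {
        playNote(melody[n], start + n * beat, beat);
    }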
Please credit the source when reposting: http://www.cnblogs.com/Arthus/p/4071049.html