OC Development Note 3: Frequency acquisition and audio-visual display while recording


The usual way to visualize sound is to draw a spectrum plot like this:


But a spectrum plot alone doesn't really show what the sound is, so I also wanted to draw a spectrogram; both convey information about the frequency content of the sound.

This article studies two sample projects from Apple's official site, aurioTouch and PitchDetector. It takes the main code behind aurioTouch's audio visualization and PitchDetector's frequency detection, integrates it into the project from the previous article, "OC Development Note 2: AUGraph simultaneous recording and playback", and draws the result with CALayer.

The audio data stream is processed in the render callback PerformThru. In the previous article this function only implemented muting; in this article it uses an FFT to obtain the level (in dB) of each frequency bin, as well as the current dominant frequency and its level. The frequency data is cached in the array fArr for plotting. The main code is as follows:

static OSStatus PerformThru(void                        *inRefCon,
                            AudioUnitRenderActionFlags  *ioActionFlags,
                            const AudioTimeStamp        *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList             *ioData)
{
    // Interface pointer, used to reach the mute switch and the FFT state.
    CDYViewController *THIS = (__bridge CDYViewController *)inRefCon;
    int bufferCapacity   = THIS->bufferCapacity;
    SInt16 index         = THIS->index;
    void *dataBuffer     = THIS->dataBuffer;
    float *outputBuffer  = THIS->outputBuffer;
    uint32_t log2n       = THIS->log2n;
    uint32_t n           = THIS->n;
    uint32_t nOver2      = THIS->nOver2;
    Float32 kAdjust0DB   = THIS->kAdjust0DB;
    COMPLEX_SPLIT A      = THIS->A;
    FFTSetup fftSetup    = THIS->fftSetup;
    static int numLevels = sizeof(colorLevels) / sizeof(GLfloat) / 5;  // leftover from aurioTouch, unused here

    // AudioUnitRender pulls the input data from the remote I/O unit. The data arrives
    // in frames; each frame contains inNumberFrames samples per channel (this relates to
    // the analog-to-digital conversion), and two channels double the amount of data.
    // The rendered samples are written into ioData.
    OSStatus renderErr = AudioUnitRender(THIS->remoteIOUnit, ioActionFlags,
                                         inTimeStamp, 1, inNumberFrames, ioData);

    // Copy the samples into dataBuffer; once a full buffer has been collected, run the FFT.
    int read = bufferCapacity - index;
    if (read > inNumberFrames) {
        memcpy((SInt16 *)dataBuffer + index, ioData->mBuffers[0].mData,
               inNumberFrames * sizeof(SInt16));
        THIS->index += inNumberFrames;
    } else {
        // NSLog(@"Buffer is full, start processing ...");
        // If we enter this branch, the buffer is filled and we should perform the FFT.
        memcpy((SInt16 *)dataBuffer + index, ioData->mBuffers[0].mData,
               read * sizeof(SInt16));
        // Reset the index.
        THIS->index = 0;

        ConvertInt16ToFloat(THIS, dataBuffer, outputBuffer, bufferCapacity);

        // If the waveform switch is turned on, compute the whole spectrum for display.
        if (THIS->isMute == YES) {
            /*************** FFT: spectrum for the spectrogram display ***************/
            float mFFTNormFactor = n;
            UInt32 maxFrames = nOver2;
            // Output buffer for the magnitudes.
            float *outFFTData = (float *)calloc(maxFrames, sizeof(float));

            // Generate a split complex vector from the real data.
            vDSP_ctoz((COMPLEX *)outputBuffer, 2, &A, 1, maxFrames);

            // Take the FFT and scale appropriately.
            vDSP_fft_zrip(fftSetup, &A, 1, log2n, kFFTDirection_Forward);
            vDSP_vsmul(A.realp, 1, &mFFTNormFactor, A.realp, 1, maxFrames);
            vDSP_vsmul(A.imagp, 1, &mFFTNormFactor, A.imagp, 1, maxFrames);

            // Zero out the Nyquist value.
            THIS->A.imagp[0] = 0.0;

            // Convert the FFT data to dB.
            vDSP_zvmags(&A, 1, outFFTData, 1, maxFrames);
            // To avoid taking log10 of zero, add an adjusting factor so the minimum
            // value equals -128 dB.
            vDSP_vsadd(outFFTData, 1, &kAdjust0DB, outFFTData, 1, maxFrames);
            Float32 one = 1;
            vDSP_vdbcon(outFFTData, 1, &one, outFFTData, 1, maxFrames, 0);
            printf("Frequency %f \n", *outFFTData);

            int y, maxY = 300;
            int fftLength = 2048 / 2;
            NSLog(@"Number of FFT bins fftLength=%d, screen height maxY=%d", fftLength, maxY);

            // Map each screen row to an FFT bin, interpolate between the two nearest
            // bins and quantize the level into one of ten color classes cached in fArr.
            for (y = 0; y < maxY; y++) {
                CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
                CGFloat fftIdx = yFract * ((CGFloat)fftLength - 1);

                double fftIdx_i, fftIdx_f;
                fftIdx_f = modf(fftIdx, &fftIdx_i);

                CGFloat fft_l_fl, fft_r_fl;
                CGFloat interpVal;

                int lowerIndex = (int)fftIdx_i;
                int upperIndex = (int)(fftIdx_i + 1);
                upperIndex = (upperIndex == fftLength) ? fftLength - 1 : upperIndex;

                fft_l_fl  = (CGFloat)(80 - outFFTData[lowerIndex]) / 64.;
                fft_r_fl  = (CGFloat)(80 - outFFTData[upperIndex]) / 64.;
                interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
                NSLog(@"fft_l_fl=%f fft_r_fl=%f interpVal=%f", fft_l_fl, fft_r_fl, interpVal);
                interpVal = sqrt(CLAMP(0., interpVal, 1.));

                int colorInd = interpVal * 10;
                printf("bin = %d amplitude = %f color class = %d \n", (int)fftIdx_i, interpVal, colorInd);
                fArr[(int)fftIdx_i] = 10 - colorInd;
                printf("%d=%d", (int)fftIdx_i, fArr[(int)fftIdx_i]);
            }
            // printf("\n");
            free(outFFTData);   // release the temporary magnitude buffer
            /*************** FFT ***************/
        // } else {
            /*************** FFT: only the dominant frequency and its level ***************/
            uint32_t stride = 1;
            vDSP_ctoz((COMPLEX *)outputBuffer, 2, &A, 1, nOver2);

            // Carry out a forward FFT transform.
            vDSP_fft_zrip(fftSetup, &A, stride, log2n, FFT_FORWARD);

            // The output signal is now in split real form. Use vDSP_ztoc to get an
            // interleaved real vector back.
            vDSP_ztoc(&A, 1, (COMPLEX *)outputBuffer, 2, nOver2);

            // Determine the dominant frequency by taking the magnitude squared and
            // saving the bin in which it resides.
            float dominantFrequency = 0;
            int bin = -1;
            for (int i = 0; i < n; i += 2) {
                // Magnitude squared of the bin (real, imaginary).
                float curFreq = MagnitudeSquared(outputBuffer[i], outputBuffer[i + 1]);
                if (curFreq > dominantFrequency) {
                    dominantFrequency = curFreq;
                    bin = (i + 1) / 2;
                }
            }
            printf("Frequency: %f  bin: %d \n", bin * (THIS->sampleRate / bufferCapacity), bin);
            /*************** FFT ***************/
        }
        // Clear the outputBuffer cache.
        memset(outputBuffer, 0, n * sizeof(float));
    }

    // The speaker is not needed, so zero all channels (mute). mNumberBuffers is the
    // channel count: indices 0~1 for stereo, only index 0 for mono.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }

    if (renderErr < 0) {
        return renderErr;
    }
    return noErr;
}
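The callback relies on two helpers that are not shown above. MagnitudeSquared is the one-line helper from the PitchDetector sample; ConvertInt16ToFloat in the official sample goes through an AudioConverter, so the simple scaling below is only my own minimal sketch of what it has to achieve, not the sample's implementation. For reference, bin * (sampleRate / bufferCapacity) converts a bin index to Hz, i.e. roughly 21.5 Hz per bin with a 44100 Hz sample rate and 2048 samples.

// Magnitude squared of one complex bin, as in the PitchDetector sample.
static float MagnitudeSquared(float x, float y)
{
    return x * x + y * y;
}

// Assumed minimal stand-in for ConvertInt16ToFloat: the official sample converts via
// an AudioConverter; here the SInt16 samples are simply scaled into [-1, 1].
static void ConvertInt16ToFloat(CDYViewController *THIS, void *buf, float *outputBuf, size_t capacity)
{
    // THIS is unused in this simplified version.
    SInt16 *samples = (SInt16 *)buf;
    for (size_t i = 0; i < capacity; i++) {
        outputBuf[i] = (float)samples[i] / 32768.0f;
    }
}

The FFT state used by the callback is initialized in viewDidLoad: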
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    isMute = NO;
    index = 0;

    UInt32 maxFrames = 2048;
    bufferCapacity = maxFrames;
    dataBuffer   = (void *)malloc(maxFrames * sizeof(SInt16));
    outputBuffer = (float *)malloc(maxFrames * sizeof(float));

    // log2n is the base-2 log of the length of the array processed by the FFT.
    log2n = log2f(maxFrames);
    n = 1 << log2n;
    assert(n == maxFrames);
    nOver2 = maxFrames / 2;
    kAdjust0DB = 1.5849e-13;
    A.realp = (float *)malloc(nOver2 * sizeof(float));
    A.imagp = (float *)malloc(nOver2 * sizeof(float));
    fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);

    sampleRate = 44100.0;

    self.soundImg = [[CDYSoundImage alloc] initWithFrame:self.view.frame];
    [self.view.layer addSublayer:self.soundImg.colorLayer];
    switchButton.layer.zPosition = 10;

    // Frequency data.
    memset(fArr, 0, sizeof(fArr));

    // Start the refresh timer. Note the interval must be 1.0/60.0; the original 1/60
    // is integer division and evaluates to 0.
    self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0/60.0
                                                  target:self
                                                selector:@selector(tick)
                                                userInfo:nil
                                                 repeats:YES];
    // Draw the initial state.
    [self tick];

    [self initRemoteIO];
}
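The tick method driven by the timer is not shown in this article; the sketch below is only my guess at its role, assuming it copies the latest column of levels from fArr into the cells array that drawLayer:inContext: reads and then marks the layer as needing display. kCellsPerRow, kRowsOnScreen and the cells property on CDYSoundImage are hypothetical names.

// Assumed sketch of the timer callback: push the newest column of frequency levels
// from fArr into the cells array and ask the layer to redraw.
- (void)tick
{
    // Build one row of grayscale levels from the cached FFT color classes in fArr.
    NSMutableArray *row = [NSMutableArray array];
    for (int j = 0; j < kCellsPerRow; j++) {          // kCellsPerRow is hypothetical
        [row addObject:@(fArr[j])];
    }

    // Scroll the spectrogram: drop the oldest row once the view is full.
    NSMutableArray *cells = self.soundImg.cells;       // assumes CDYSoundImage exposes its cells
    if ([cells count] >= kRowsOnScreen) {              // kRowsOnScreen is hypothetical
        [cells removeObjectAtIndex:0];
    }
    [cells addObject:row];

    // Trigger -drawLayer:inContext: on the next display pass.
    [self.soundImg.colorLayer setNeedsDisplay];
}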

Vocal "All-in-A-box" imaging:


The picture on the left is aurioTouch's; the one on the right is what I drew.

aurioTouch draws with OpenGL ES. Roughly, it first fills the entire screen with a triangle mesh and then computes a color value for each vertex to shade it. I have actually used Ogre to do similar things before, but this time I wanted to try a simpler drawing scheme: CALayer. The drawing function is as follows:

When the -drawLayer:inContext: method of the CALayerDelegate protocol is implemented (or -drawRect: of UIView, which is really a wrapper around the former), Core Graphics creates a drawing context for the layer. The memory this context needs is layer width x layer height x 4 bytes, with width and height in pixels. For a full-screen layer on a Retina iPad that is 2048 x 1536 x 4 bytes, roughly 12 MB of RAM, which has to be wiped and refilled every time the layer is redrawn. Software drawing is expensive, so unless absolutely necessary you should avoid redrawing your view; the secret to drawing performance is to avoid drawing. Vector drawing with CAShapeLayer is efficient, but only on the order of a few hundred such layers can be on screen at the same time.

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    for (int i = 0; i < [cells count]; i++) {
        NSMutableArray *array1 = self.cells[i];
        for (int j = 0; j < [array1 count]; j++) {
            NSNumber *val = array1[j];
            float cVal = val.intValue / 10.0;
            CGRect rectangle = CGRectMake(j * cellWidth, i * cellHeight, cellWidth, cellHeight);
            // Set the rectangle fill color (grayscale from the cached level).
            CGContextSetRGBFillColor(ctx, cVal, cVal, cVal, 1.0);
            // Fill the rectangle.
            CGContextFillRect(ctx, rectangle);
            // Optionally stroke the cell border:
            // CGContextSetRGBStrokeColor(ctx, 1.0, 0, 0, 1);
            // CGContextSetLineWidth(ctx, 1.0);
            // CGContextAddRect(ctx, rectangle);
            // CGContextStrokePath(ctx);
        }
        // printf("\n");
    }
}
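For the drawing above to be called at all, the colorLayer created in CDYSoundImage needs a delegate that implements -drawLayer:inContext:. The article does not show that setup, so the snippet below is only a guessed sketch of what the CDYSoundImage initializer might look like; apart from colorLayer and cells, which appear in the code above, the details are assumptions (the cellWidth/cellHeight ivars are omitted).

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Assumed sketch of CDYSoundImage: it owns the CALayer that the view controller adds
// as a sublayer, acts as that layer's delegate, and keeps the cells array drawn above.
@interface CDYSoundImage : NSObject <CALayerDelegate>
@property (nonatomic, strong) CALayer *colorLayer;
@property (nonatomic, strong) NSMutableArray *cells;
@end

@implementation CDYSoundImage

- (instancetype)initWithFrame:(CGRect)frame
{
    if (self = [super init]) {
        _cells = [NSMutableArray array];
        _colorLayer = [CALayer layer];
        _colorLayer.frame = frame;
        _colorLayer.contentsScale = [UIScreen mainScreen].scale; // draw at Retina resolution
        _colorLayer.delegate = self;                             // so -drawLayer:inContext: is called
    }
    return self;
}

@end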

I also want to draw one small rectangle per point of screen width x height, which is at least 50,000 points. This way of drawing is done entirely by the CPU; the result is acceptable on the simulator because the Mac's CPU is strong, but on a real device the GPU should do the drawing, so I intend to try Sprite Kit instead.

As can be seen, CALayer does its drawing entirely on the CPU:


Studying these two official sample projects, along with learning audio processing and drawing, took 12 days in total. The most painful part was working through the aurioTouch example; there is really very little documentation about it to be found. I hope this article helps anyone who wants to draw spectrograms or build audio visual effects.

Code: Download
