Converting camera-captured video data to JPEG on iOS


If you want to record video or take photos with the camera, you can use UIImagePickerController. However, UIImagePickerController presents its own interface, and sometimes that is exactly what you don't want. In that case there is another way to get at the data the camera captures.

First, import the framework header: #import <AVFoundation/AVFoundation.h>. Next, your class needs to adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. Implementing a single method of this protocol is enough to receive the frames the camera captures:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    // mData is an NSData ivar; 0.5 is the JPEG compression quality
    mData = UIImageJPEGRepresentation(image, 0.5);
}
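Once mData holds the JPEG bytes you can do whatever you like with them. As one possibility, a hypothetical helper (not part of the original article) could write each frame to the app's Documents directory:

```objc
// Hypothetical helper (not in the original article): persist the JPEG data
// produced by UIImageJPEGRepresentation to the app's Documents directory.
- (BOOL)saveJPEGData:(NSData *)jpegData {
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *path = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"frame.jpg"];
    // atomically:YES writes to a temporary file first, so a partial write
    // never leaves a corrupt frame.jpg behind
    return [jpegData writeToFile:path atomically:YES];
}
```

Keep in mind that the delegate callback fires on the dispatch queue you register later in setup, not on the main thread, so any UI work with the image must be dispatched back to the main queue.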

Below is the imageFromSampleBuffer: method. Through a series of conversions it turns the CMSampleBufferRef into a UIImage object and returns it:

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    //UIImage *image = [UIImage imageWithCGImage:quartzImage];
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0f orientation:UIImageOrientationRight];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return image;
}

To actually get the camera working, though, some setup is still required:

// Create and configure a capture session and start it running
- (void)setupCaptureSession {
    NSError *error = nil;
    // Create the session
    AVCaptureSession *session = [[[AVCaptureSession alloc] init] autorelease];
    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;
    // Find a suitable AVCaptureDevice. This defaults to the back camera;
    // you can switch to the front-facing camera instead.
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];
    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];
    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                            [NSNumber numberWithInt:320], (id)kCVPixelBufferWidthKey,
                            [NSNumber numberWithInt:240], (id)kCVPixelBufferHeightKey,
                            nil];
    // Optional: show a live preview of the camera feed
    AVCaptureVideoPreviewLayer *preLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    preLayer.frame = CGRectMake(0, 0, 320, 240);
    preLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:preLayer];
    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    output.minFrameDuration = CMTimeMake(1, 15);
    // Start the session running to start the flow of data
    [session startRunning];
    // Assign session to an ivar.
    //[self setSession:session];
}

Here preLayer is a live preview of the camera feed; whether you add it is up to you, and its position and size can be set through preLayer.frame. Note output.videoSettings in particular: it configures the output data, such as the width, height, and pixel format of the video. Call -setupCaptureSession during your controller's initialization and the camera will start working. Shutting the session down is not handled here; consult the documentation.
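For the teardown that the article leaves to the documentation, a minimal sketch might look like the following. It assumes the session and preview layer were kept in ivars named _session and _preLayer (the original code only hints at this in the commented-out [self setSession:session] line), and follows the article's pre-ARC memory management:

```objc
// Minimal teardown sketch (assumes hypothetical _session / _preLayer ivars;
// the original article does not show this part).
- (void)teardownCaptureSession {
    [_session stopRunning];           // stop the flow of frames to the delegate
    [_preLayer removeFromSuperlayer]; // take the preview off screen
    [_session release];               // balance the retain from setSession:
    _session = nil;
    _preLayer = nil;
}
```

Calling this from viewWillDisappear: (or wherever the capture is no longer needed) keeps the camera and its dispatch queue from running in the background.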

This can only be tested on a real device; it does not work in the simulator.
