Getting Started with iOS + OpenCV Development in Xcode (Repost)


Before reading this article, take a look at this first: OpenCV iOS Development (Part 1) — Installation

 

I spent all of yesterday wrestling with this, but finally got the OpenCV + iOS environment working under Xcode and built a circle-detection program based on the Hough transform. Without further ado, here is the whole painful process:

------------------------------------------------------Installing OpenCV-------------------------------------------------------------------

There is a tutorial on the official site: http://docs.opencv.org/doc/tutorials/introduction/ios_install/ios_install.html#ios-installation

If everything really were as simple as the official docs make it sound, I wouldn't need to write this post~ (replace <my_working_directory> with the path where you want to install OpenCV)

cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
cd /
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer

Everything installs fine up to this point (if you don't have git, download and install it from http://sourceforge.net/projects/git-osx-installer/).

 

cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios

The last step got stuck on its final command, probably because CMake wasn't installed. The dmg downloaded from the CMake website seemed useless =.=, so I took a different route and installed it through Homebrew. First install Homebrew itself (Ruby ships with the Mac, so no worries there):

 ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Then install CMake:

 brew install cmake

Once that succeeds, go back to the last command above and build the OpenCV library. Then comes a long wait of about half a shichen (one shichen = two hours, so roughly an hour). When it finishes, you will find an ios folder under the install path containing the hard-won OpenCV iOS framework. Catch your breath, and let's move on to configuring the Xcode project environment.

---------------------------------------------------------Configuring the Xcode OpenCV environment------------------------------------------------------------------

Installation isn't even the most painful part. For a complete beginner, using Xcode at all is a big challenge (two days ago I couldn't develop for iOS at all...), and on top of that the official tutorial at http://docs.opencv.org/doc/tutorials/ios/hello/hello.html#opencvioshelloworld targets Xcode 5.0, while on Mac OS X 10.10 Xcode is already at 6.3. The two UIs differ somewhat, so I could only try my luck... fortunately, my luck held~

Actually, most of that tutorial can be followed as written:

1、Create a new XCode project.
2、Now we need to link opencv2.framework with Xcode. Select the project Navigator in the left hand panel and click on project name.
3、Under the TARGETS click on Build Phases. Expand Link Binary With Libraries option.
4、Click on Add others and go to directory where opencv2.framework is located and click open
5、Now you can start writing your application.

In short: create a new project, select it, go to Build Phases, and add the OpenCV framework you just built under Link Binary With Libraries. But look closely at the screenshot in the tutorial: it shows three more frameworks being linked, and you should add those as well.

Next comes configuring the precompiled header:

Link your project with OpenCV as shown in previous section.
Open the file named NameOfProject-Prefix.pch ( replace NameOfProject with name of your project) and add the following lines of code.
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

This says you should declare the OpenCV precompile directive in the project's .pch file. However, from Xcode 5.0 onward new projects no longer generate this file automatically, so you have to create it yourself: choose File -> New, pick Other under iOS in the dialog, find the PCH File template, name the file after your project, and add the code above. Again, study the figure in the tutorial carefully and add the other two imports shown there as well. Once the file is written, it has to be hooked up to the project. Select the project, find Build Settings next to Build Phases, select All in the row below, and search for "prefix". Under Apple LLVM 6.1 - Language, find the Prefix Header entry and set it to $(SRCROOT)/ProjectFolder/ProjectName.pch, then set Precompile Prefix Header, the entry just above it, to Yes. The file is now part of the project's precompilation step. But we're not done yet:
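For reference, the finished .pch from the official tutorial's screenshot looks roughly like the sketch below. Treat the exact contents as an assumption; match whatever your Xcode template and the tutorial figure actually show:

```objc
//  Project-wide prefix header, precompiled for every source file.
//  The OpenCV header must be imported before any Apple header,
//  and only when compiling C++ / Objective-C++ sources.
#ifdef __cplusplus
    #import <opencv2/opencv.hpp>
#endif

#ifdef __OBJC__
    #import <UIKit/UIKit.h>
    #import <Foundation/Foundation.h>
#endif
```

The `__cplusplus` guard is what lets plain .m files in the project keep compiling: they simply skip the OpenCV import.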

With the newer XCode and iOS versions you need to watch out for some specific details

The *.m file in your project should be renamed to *.mm.
You have to manually include AssetsLibrary.framework into your project, which is not done anymore by default.

This says that every .m file that uses OpenCV must be renamed to .mm (so that it is compiled as Objective-C++), and that you must manually add AssetsLibrary.framework to the project (same steps as adding the OpenCV framework earlier).

With the environment basically set up, it's time to use it to build a HelloWorld~

 -----------------------------------------------HelloOpenCV----------------------------------------------------------------------

Step 1: open this page: http://docs.opencv.org/doc/tutorials/ios/image_manipulation/image_manipulation.html#opencviosimagemanipulation

- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
  CGFloat cols = image.size.width;
  CGFloat rows = image.size.height;

  cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

  CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to data
                                                  cols,                       // Width of bitmap
                                                  rows,                       // Height of bitmap
                                                  8,                          // Bits per component
                                                  cvMat.step[0],              // Bytes per row
                                                  colorSpace,                 // Colorspace
                                                  kCGImageAlphaNoneSkipLast |
                                                  kCGBitmapByteOrderDefault); // Bitmap info flags

  CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
  CGContextRelease(contextRef);

  return cvMat;
}

- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
  CGFloat cols = image.size.width;
  CGFloat rows = image.size.height;

  cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

  CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to data
                                                  cols,                       // Width of bitmap
                                                  rows,                       // Height of bitmap
                                                  8,                          // Bits per component
                                                  cvMat.step[0],              // Bytes per row
                                                  colorSpace,                 // Colorspace
                                                  kCGImageAlphaNoneSkipLast |
                                                  kCGBitmapByteOrderDefault); // Bitmap info flags

  CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
  CGContextRelease(contextRef);

  return cvMat;
}

- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
  NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
  CGColorSpaceRef colorSpace;

  if (cvMat.elemSize() == 1) {
      colorSpace = CGColorSpaceCreateDeviceGray();
  } else {
      colorSpace = CGColorSpaceCreateDeviceRGB();
  }

  CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

  // Creating CGImage from cv::Mat
  CGImageRef imageRef = CGImageCreate(cvMat.cols,                                 // width
                                      cvMat.rows,                                 // height
                                      8,                                          // bits per component
                                      8 * cvMat.elemSize(),                       // bits per pixel
                                      cvMat.step[0],                              // bytesPerRow
                                      colorSpace,                                 // colorspace
                                      kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info
                                      provider,                                   // CGDataProviderRef
                                      NULL,                                       // decode
                                      false,                                      // should interpolate
                                      kCGRenderingIntentDefault                   // intent
                                      );

  // Getting UIImage from CGImage
  UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
  CGImageRelease(imageRef);
  CGDataProviderRelease(provider);
  CGColorSpaceRelease(colorSpace);

  return finalImage;
}

Regardless of anything else, first create a pair of files (.h + .mm) to house these three functions. Note that at the top you must import:

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

These three functions convert a UIImage to a cv::Mat and back. Once you have a cv::Mat, you can do whatever you like with it, for example detect circles with the Hough transform:

+ (UIImage *)hough:(UIImage *)image
{
    // Note: [self cvMatFromUIImage:] inside a class (+) method assumes the
    // conversion helpers are also declared as class methods.
    cv::Mat img = [self cvMatFromUIImage:image];

    cv::Mat gray(img.size(), CV_8UC4);
    cv::Mat background(img.size(), CV_8UC4, cvScalar(255,255,255));
    cvtColor(img, gray, CV_BGR2GRAY);

    std::vector<cv::Vec3f> circles; // each circle: (center x, center y, radius)
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 2, image.size.width/8, 200, 100);

    for (size_t i = 0; i < circles.size(); i++)
    {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cv::circle(background, center, 3, cvScalar(0,0,0), -1, 8, 0);      // center dot
        cv::circle(background, center, radius, cvScalar(0,0,0), 3, 8, 0);  // circle outline
    }

    UIImage *res = [self UIImageFromCVMat:background];
    return res;
}

------------------------------------------------------This is the final divider------------------------------------------------------

With this skeleton in place, development becomes convenient. The syntax of Objective-C + Cocoa is bizarre beyond belief, and I can't get used to it no matter how I look at it... but the Storyboard interface builder is quite pleasant to use. Hopefully in the near future I can build something like a Meitu-style photo editor to play with~

 

Reposted from: http://www.cnblogs.com/tonyspotlight/p/4568305.html
