Before reading this post, please see the earlier one first. Address: OpenCV iOS Development (1) - Installation
Yesterday I spent a whole day finally getting OpenCV + iOS working in the Xcode environment and implementing a circle-detection program based on the Hough transform. Without further ado, here is the whole process:
------------------------------------------------------ Installing OpenCV ------------------------------------------------------
Official tutorial: http://docs.opencv.org/doc/tutorials/introduction/ios_install/ios_install.html#ios-installation
If everything were as simple as the official site makes it look, I wouldn't have to write this post ~ (replace <my_working_directory> with the path where you want to install OpenCV)
cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
cd /
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
Up to this point everything installs fine (if you don't have git, download the installer from http://sourceforge.net/projects/git-osx-installer/).
cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios
The last command hung on me, probably because CMake was not installed. Installing CMake from the DMG on the official site didn't seem to help =.=, so I switched to installing it through Homebrew. First install Homebrew itself (Ruby ships with the Mac, so no need to worry about that):
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Then install CMake:
brew install cmake
Once CMake is installed, go back and rerun the build_framework.py command above to build the OpenCV library. Then comes a long wait of about half an hour (one run took me two hours). When it finishes you will see an ios folder under the installation path containing the OpenCV iOS library (opencv2.framework). Take a breath; next we configure the Xcode project environment.
--------------------------------------------------------- Configuring the OpenCV Environment in Xcode ---------------------------------------------------------
Installation is not even the most frustrating part; for a newcomer, just using Xcode is already a big challenge (I had never done any iOS development before...). On top of that, the official tutorial http://docs.opencv.org/doc/tutorials/ios/hello/hello.html#opencvioshelloworld targets Xcode 5.0, while on Mac OS X 10.10 Xcode has already reached 6.3 and the interface differs somewhat, so you can only trust to luck... Fortunately, I was lucky.
In fact, most of that tutorial can be followed as written:
1. Create a new Xcode project.
2. Now we need to link opencv2.framework with Xcode. Select the project navigator in the left-hand panel and click on the project name.
3. Under TARGETS, click on Build Phases. Expand the Link Binary With Libraries option.
4. Click on Add Other, go to the directory where opencv2.framework is located, and click Open.
5. Now you can start writing your application.
This just means creating a new project, selecting it, finding Build Phases, and adding the opencv2.framework you built earlier. But look closely at the screenshot in the tutorial: it shows three additional libraries being linked, so add those as well.
After that comes configuring the macros:
Link your project with OpenCV as shown in the previous section.
Open the file named NameOfProject-Prefix.pch (replace NameOfProject with the name of your project) and add the following lines of code:
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
This means declaring the OpenCV precompile directives in the .pch file. However, since Xcode 5.0 new projects no longer generate this file automatically, so you have to create it yourself: choose File -> New -> File, in the dialog pick Other under iOS, select the PCH File template, give it the same name as the project, and add the code above (again, look closely at the screenshot in the tutorial and add the remaining two lines shown there as well). Once the file is written, you still have to associate it with the project: select the project, switch from Build Phases to Build Settings, click All in the row below, search for "prefix", find Prefix Header under Apple LLVM 6.1 - Language and set it to $(SRCROOT)/<project folder>/<project name>.pch, then set Precompile Prefix Header just above it to Yes. That adds the file to the project's precompile step. But this is still not the end:
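For reference, here is a rough sketch of what the finished prefix header could look like. The __OBJC__ block with the UIKit and Foundation imports is my assumption about the "remaining two" lines visible in the tutorial's screenshot; adjust it to whatever your project actually uses.

// NameOfProject-Prefix.pch (sketch only)
// The C++ guard should come first so that every .mm file sees the OpenCV headers
// before any Objective-C imports.
#ifdef __cplusplus
    #import <opencv2/opencv.hpp>
#endif

#ifdef __OBJC__
    #import <UIKit/UIKit.h>           // assumed extra import
    #import <Foundation/Foundation.h> // assumed extra import
#endif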
With the newer Xcode and iOS versions you need to watch out for some specific details:
The *.m files in your project should be renamed to *.mm.
You have to manually include AssetsLibrary.framework in your project; it is not added by default anymore.
In other words, rename all the .m files that use OpenCV to .mm, and add AssetsLibrary.framework to the project (the same way we added the OpenCV framework earlier).
With that, the environment is basically in place; next we use it to write a HelloWorld.
----------------------------------------------- Hello OpenCV -----------------------------------------------
First step: Open this page http://docs.opencv.org/doc/tutorials/ios/image_manipulation/image_manipulation.html#opencviosimagemanipulation
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                    // Width
                                        cvMat.rows,                                    // Height
                                        8,                                             // Bits per component
                                        8 * cvMat.elemSize(),                          // Bits per pixel
                                        cvMat.step[0],                                 // Bytes per row
                                        colorSpace,                                    // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,                                      // CGDataProviderRef
                                        NULL,                                          // Decode
                                        false,                                         // Should interpolate
                                        kCGRenderingIntentDefault);                    // Intent

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
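As a quick sanity check (my own example, not part of the tutorial), a grayscale round trip exercises the helpers end to end:

// Convert to a single-channel cv::Mat and straight back to a UIImage.
- (UIImage *)grayscaleVersionOf:(UIImage *)image
{
    cv::Mat gray = [self cvMatGrayFromUIImage:image];
    return [self UIImageFromCVMat:gray];
}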
Whatever else you do, create a pair of files (.h + .mm) holding these three functions, and remember to add these imports at the top:
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>
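For reference, the matching header could look roughly like this. The class name ImageConverter is my own placeholder, and the tutorial writes the converters as instance (-) methods; if you would rather call them from class methods, as in the Hough example further down, declare them with + instead:

// ImageConverter.h (sketch only; the class name is a placeholder)
// Any file that imports this header must be Objective-C++ (.mm), because it pulls in opencv.hpp.
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

@interface ImageConverter : NSObject

- (cv::Mat)cvMatFromUIImage:(UIImage *)image;
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image;
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat;

@end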
These three functions convert a UIImage into a cv::Mat and back again. Once you have a cv::Mat you can do whatever you like with it, for example use the Hough transform to detect circles:
+ (UIImage *)hough:(UIImage *)image
{
    // Note: because this is a class (+) method, cvMatFromUIImage: and UIImageFromCVMat:
    // must also be declared as class methods for [self ...] to resolve; if you kept the
    // tutorial's instance (-) methods, make this an instance method instead.
    cv::Mat img = [self cvMatFromUIImage:image];
    cv::Mat gray(img.size(), CV_8UC4);
    cv::Mat background(img.size(), CV_8UC4, cv::Scalar(255, 255, 255));
    cv::cvtColor(img, gray, CV_BGR2GRAY);

    std::vector<cv::Vec3f> circles;
    // The last two HoughCircles thresholds were garbled in the original post;
    // 200/100 are common starting values, tune them for your own images.
    cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, image.size.width / 8, 200, 100);

    for (size_t i = 0; i < circles.size(); i++) {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cv::circle(background, center, 3, cv::Scalar(0, 0, 0), -1, 8, 0);     // mark the center
        cv::circle(background, center, radius, cv::Scalar(0, 0, 0), 3, 8, 0); // draw the circle outline
    }

    UIImage *res = [self UIImageFromCVMat:background];
    return res;
}
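To see the result on screen, a call site like the following should work. The ImageConverter class name, the imageView outlet, and the "coins.png" resource are all my own illustrative choices, not from the original post:

// In a view controller implementation (.mm, since ImageConverter.h pulls in the OpenCV headers)
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIImage *input = [UIImage imageNamed:@"coins.png"]; // hypothetical bundled test image
    UIImage *output = [ImageConverter hough:input];     // assumes the class-method (+) variant above
    self.imageView.image = output;                      // imageView: an assumed UIImageView outlet
}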
------------------------------------------------------I'm the last dividing line------------------------------------------------------
With this scaffolding in place, future development will be much more convenient. The Objective-C + Cocoa syntax is bizarre beyond words and I just can't like it... but building interfaces with Storyboard really is handy. I hope that in the near future I can put together a nice photo-editing app like 美图秀秀 (Meitu) to play with ~
Reposted from: http://www.cnblogs.com/tonyspotlight/p/4568305.html
iOS + OpenCV Development in Xcode (repost)