Recently I was working on a visual tracking project for an IP camera (IPC). The actual product work has nothing to do with me; I only looked into it out of interest. Since the platform is mainly written in C#, Emgu was the obvious choice for the vision library: Emgu is a C# wrapper around OpenCV. Material about it online is scarce and takes a lot of time to dig up; "Introduction to Emgu" seems to be a decent article on the subject, so I will not repeat that here.
The project nominally requires real-time processing, but achieving that with Emgu seems difficult. Leaving aside that real-time delivery of video frames is hard to guarantee in the first place, Emgu's image comparison call alone took more than 100 ms on my i5 machine. Getting close to real time is not impossible, though: a faster CPU, GPU-based processing, or perhaps a better vision library would all help, and there are surely plenty of options. My actual requirement is only to make some simple judgments about the motion track of an object, so I used a rather crude approach: push the video data stream into a Queue and let a separate thread work through it at its own pace. Since I only need to know the result after the fact, keeping the raw data acquisition real-time is what matters most.
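For reference, the 100 ms figure is easy to check with a Stopwatch. This is only a minimal timing sketch, assuming two already-loaded frames of the same size and using AbsDiff as a stand-in for the comparison call (the post does not say exactly which Emgu function was measured, and the file names here are made up):

using System;
using System.Diagnostics;
using Emgu.CV;
using Emgu.CV.Structure;

// Load two frames (hypothetical file names)
Image<Bgr, Byte> frameA = new Image<Bgr, Byte>("frameA.jpg");
Image<Bgr, Byte> frameB = new Image<Bgr, Byte>("frameB.jpg");

Stopwatch sw = Stopwatch.StartNew();
Image<Bgr, Byte> diff = frameA.AbsDiff(frameB);   // per-pixel absolute difference
sw.Stop();
Debug.WriteLine("Comparison took " + sw.ElapsedMilliseconds + " ms");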
A caveat up front: for development reasons I cannot debug against a real camera in the office, so everything below is simulated. I created an ImageStream class to wrap the captured video data. Because Emgu works with the Image<TColor> type, there is some image format conversion to take care of. Converting a video data byte[] into an Image<TColor> is not hard, essentially one line of code, although you may need to watch out for the JPEG and BMP formats. Since this is a simulation, I convert jpg images on the computer into byte[] data streams, wrap them in ImageStream objects, use one thread to feed the image data in, and process the images in another thread. That is the overall flow.
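The conversion mentioned above really is a one-liner; a minimal sketch, assuming pBuf is a byte[] holding one JPEG- or BMP-encoded frame (required namespaces: Emgu.CV, Emgu.CV.Structure, System.Drawing, System.IO):

// pBuf: byte[] containing one JPEG/BMP encoded frame (assumed to exist)
Image<Bgr, byte> frame = new Image<Bgr, byte>((Bitmap)System.Drawing.Image.FromStream(new MemoryStream(pBuf)));

The same idea appears as ToEmguImage() in the ImageStream class below.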
First, the ImageStream class code is as follows:
using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Emgu.CV;
using Emgu.CV.Structure;

namespace QImageClass
{
    /// <summary>
    /// Class used to save the image data stream
    /// </summary>
    public class ImageStream
    {
        /// <summary>
        /// Time value
        /// </summary>
        public DateTime m_DateTime;

        /// <summary>
        /// Source file name
        /// </summary>
        public string m_SrcFileName;

        /// <summary>
        /// Image stream
        /// </summary>
        public MemoryStream m_ImageStream = null;

        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="dt"></param>
        /// <param name="pBuf"></param>
        public ImageStream(DateTime dt, byte[] pBuf, string srcFileName = null)
        {
            this.m_DateTime = dt;
            this.m_ImageStream = new MemoryStream(pBuf);
            this.m_SrcFileName = srcFileName;
        }

        /// <summary>
        /// Convert to an Emgu image
        /// </summary>
        /// <returns></returns>
        public Image<Bgr, byte> ToEmguImage()
        {
            Image img = Image.FromStream(this.m_ImageStream);
            return new Image<Bgr, Byte>((Bitmap)img);
        }

        /// <summary>
        /// Build a file name from the timestamp
        /// </summary>
        /// <returns></returns>
        public string ToFileName()
        {
            string file = this.m_DateTime.Year.ToString("D4") + "-"
                + this.m_DateTime.Month.ToString("D2") + "-"
                + this.m_DateTime.Day.ToString("D2") + "-"
                + this.m_DateTime.Hour.ToString("D2") + "-"
                + this.m_DateTime.Minute.ToString("D2") + "-"
                + this.m_DateTime.Second.ToString("D2") + "-"
                + this.m_DateTime.Millisecond.ToString("D3");
            return file;
        }
    }
}
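A quick usage sketch of the class (the file path here is only an example, not from the project; it also needs System.Diagnostics for Debug):

// Wrap one JPEG file from disk and convert it to an Emgu image
byte[] buf = File.ReadAllBytes(@"D:\TEST_JPG\frame001.jpg");       // hypothetical path
ImageStream stream = new ImageStream(DateTime.Now, buf, "frame001.jpg");
Image<Bgr, byte> emguImage = stream.ToEmguImage();                  // decode into Image<Bgr, byte>
Debug.WriteLine(stream.ToFileName());                               // e.g. "2013-07-15-10-30-05-123"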
m_DateTime and m_SrcFileName in the class are only used to identify the data source, which is convenient for debugging.
I encapsulated the image motion detection in an ImagePoser class. Image data is added to the processing queue with Add(ImageStream im) and is then processed in the ProcessThread thread. The ImagePoser code is as follows:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.VideoSurveillance;

namespace QImageClass
{
    /// <summary>
    /// Image sequence processing class
    /// </summary>
    public class ImagePoser
    {
        /// <summary>
        /// Queue of image stream data
        /// </summary>
        private Queue<ImageStream> _ImageStreamList = new Queue<ImageStream>();

        /// <summary>
        /// Flag telling the processing thread to exit
        /// </summary>
        private bool _QuitThreadFlag = true;

        /// <summary>
        /// Total number of images added to the queue
        /// </summary>
        private int _TotalImageCount = 0;

        /// <summary>
        /// Number of processed images
        /// </summary>
        private int _FinishedCount = 0;

        /// <summary>
        /// Mutex lock
        /// </summary>
        private Mutex _WaitMutex = new Mutex();

        /// <summary>
        /// Foreground/background detector
        /// </summary>
        private FGDetector<Bgr> _ForeGroundDetector = null;

        /// <summary>
        /// Default constructor
        /// </summary>
        public ImagePoser() { }

        /// <summary>
        /// Start image processing
        /// </summary>
        public void Start()
        {
            this._QuitThreadFlag = false;
            Thread thread = new Thread(new ThreadStart(this.ProcessThread));
            thread.Name = "ImagePoserThread";
            thread.IsBackground = true;
            thread.Start();
        }

        /// <summary>
        /// Stop the image processing thread
        /// </summary>
        public void Stop()
        {
            this._QuitThreadFlag = true;
        }

        /// <summary>
        /// Exit condition (placeholder, always false)
        /// </summary>
        /// <returns></returns>
        protected virtual bool StopCondition()
        {
            return false;
        }

        /// <summary>
        /// Add image data
        /// </summary>
        /// <param name="imageStream"></param>
        public void Add(ImageStream imageStream)
        {
            this._WaitMutex.WaitOne();
            this._ImageStreamList.Enqueue(imageStream);
            this._TotalImageCount++;
            Debug.WriteLine("Poser Add: " + this._TotalImageCount.ToString());
            this._WaitMutex.ReleaseMutex();
        }

        /// <summary>
        /// Total number of images added
        /// </summary>
        /// <returns></returns>
        public int GetTotalCount()
        {
            return this._TotalImageCount;
        }

        /// <summary>
        /// Number of processed images
        /// </summary>
        /// <returns></returns>
        public int GetBeFinishedCount()
        {
            return this._FinishedCount;
        }

        /// <summary>
        /// Number of images still waiting to be processed
        /// </summary>
        /// <returns></returns>
        public int GetUnFinishedCount()
        {
            this._WaitMutex.WaitOne();
            int nListCount = this._ImageStreamList.Count;
            this._WaitMutex.ReleaseMutex();
            return nListCount;
        }

        /// <summary>
        /// Image processing thread
        /// </summary>
        private void ProcessThread()
        {
            // Foreground detector
            if (this._ForeGroundDetector == null)
            {
                this._ForeGroundDetector = new FGDetector<Bgr>(FORGROUND_DETECTOR_TYPE.FGD);
            }

            while (!this._QuitThreadFlag)
            {
                ImageStream im = null;
                this._WaitMutex.WaitOne();
                if (this._ImageStreamList.Count == 0)
                {
                    this._WaitMutex.ReleaseMutex();
                    Thread.Sleep(1);
                    continue;
                }

                Stopwatch st = new Stopwatch();
                st.Start();

                // Take one ImageStream off the queue
                im = this._ImageStreamList.Dequeue();
                this._FinishedCount++;
                this._WaitMutex.ReleaseMutex();

                // Convert to the Image format used by OpenCV
                Image<Bgr, Byte> tagImage = (im.ToEmguImage()).Resize(0.5, INTER.CV_INTER_LINEAR);
                // tagImage.Save("E:\\" + im.ToFileName() + ".bmp");  // save the Bmp file

                // Motion detection
                // Smooth tagImage with a Gaussian filter
                tagImage._SmoothGaussian(3);

                // Update the foreground detector and get the grayscale foreground mask
                _ForeGroundDetector.Update(tagImage);
                Image<Gray, Byte> foreGroundMark = _ForeGroundDetector.ForegroundMask;
                // foreGroundMark.Save("E:\\" + im.ToFileName() + ".bmp");  // save the Bmp file

                // Contour point set of the foreground
                Contour<Point> contour = foreGroundMark.FindContours();
                if (contour != null)
                {
                    // Rectangle rect = contour.BoundingRectangle;
                    // Image<Bgr, Byte> resImg = new Image<Bgr, Byte>(foreGroundMark.Size);

                    // Draw the contour points
                    foreach (Point p in contour)
                    {
                        tagImage.Draw(new CircleF(p, 2.0f), new Bgr(Color.Red), 1);
                    }

                    // Draw the bounding rectangle
                    tagImage.Draw(contour.BoundingRectangle, new Bgr(Color.Green), 1);
                }

                // Save the processed image
                tagImage.Save("E:\\" + im.ToFileName() + ".bmp");  // save the Bmp file

                // Measure the image processing time
                st.Stop();
                Debug.WriteLine("Poser processing time: " + st.ElapsedMilliseconds.ToString() + "ms\r\n");
                Thread.Sleep(1);
            }
        }
    }
}
When using it (starting a thread should not be difficult, right?), write code similar to the following:
/// <summary>
/// Fill the data stream
/// </summary>
private void Thread1()
{
    // Read the resource files
    EmunFileReader reader = new EmunFileReader("D:\\TEST_JPG-Copy", ".jpg");
    string[] fileList = reader.GetFileList();
    int nCount = reader.GetFileCount();

    // Image processor
    ImagePoser poser = new ImagePoser();
    poser.Start();

    foreach (string s in fileList)
    {
        // Convert the image to an ImageStream
        Debug.WriteLine(s);
        Image img = Image.FromFile(s);
        ImageStream ism = new ImageStream(DateTime.Now, ImageConvert.ImageToBytes(img, ImageFormat.Jpeg), s);
        poser.Add(ism);
        Thread.Sleep(1);
    }

    while (true)
    {
        if (poser.GetTotalCount() == nCount && poser.GetBeFinishedCount() == nCount)
        {
            poser.Stop();
            break;
        }
        else
        {
            Thread.Sleep(1);
        }
    }
}
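EmunFileReader and ImageConvert are small helper classes that are not listed in this post. ImageConvert.ImageToBytes presumably just encodes a System.Drawing.Image into a byte array; here is a minimal sketch under that assumption (the real helper may differ):

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

public static class ImageConvert
{
    // Assumed behaviour: encode the image in the given format and return the raw bytes
    public static byte[] ImageToBytes(Image img, ImageFormat format)
    {
        using (MemoryStream ms = new MemoryStream())
        {
            img.Save(ms, format);
            return ms.ToArray();
        }
    }
}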
As for the source images, you can find some yourself; I used Photoshop to fake a moving object. The source images and result images are in the attachment, which you can download and play with.
This article is from the "A few traces of rain lock clear autumn" blog; please keep this source: http://joeyliu.blog.51cto.com/3647812/1243395