Today, by coincidence, I found an article whose outline is exactly the kind I need.
If only I had found it earlier. Still, "when you think it is too late, it is in fact not too late" is some consolation, though I can't keep consoling myself with that; after all, I need to get running!
Following this outline, maybe there will be unexpected gains; and looking at the problem from multiple perspectives should also help.
Well, it's useless to want too much of anything else.
I think the more important thing now is to follow this article and keep doing what I set out to do.
The original article, "Robhess SIFT Source Analysis: Summary", is at:
http://blog.csdn.net/masibuaa/article/details/9191309
The article first suggests looking at Stitcher, which I found I had already read:
my earlier brief analysis is here: http://www.cnblogs.com/letben/p/5338443.html; it is not of great significance.
The stitching sample was also introduced earlier.
And then I found I had gone in a loop. (I finally remembered why I kept this log: because I didn't know what to do at the breakpoint.)
For image stitching and the source download, see this blog post: Panoramic Image Stitching Based on SIFT Features.
For calling Robhess's source code during SIFT feature extraction, see "Robhess SIFT Source Analysis: Review":
Well, I haven't read it yet.
http://blog.csdn.net/masibuaa/article/details/9246493
The details in there should be very useful. Read it once, run it once; that should be the minimum.
So its SIFT-based panorama stitching should be a minimal working set. Perhaps his walkthrough of the source is more detailed, and there should be a lot of other material. In other words, this is just code: you need to read it, extract the ideas inside, and write them down.
Todo
Then, back to the breakpoint first, without looking at the code yet.
Back to the overview:
http://blog.csdn.net/masibuaa/article/details/9191309 From it I gathered that "Robhess" is probably a person's name.
Then http://robwhess.github.io/opensift/ led me to: https://github.com/robwhess/opensift/zipball/master
So this is evidently the author sharing his own code, which means masikkk probably just misspelled the name "robwhess" as "robhess". The blogger has quite a talent, abruptly giving Rob Whess a brand-new English name...
Let me simply write down what
http://robwhess.github.io/opensift/
says. I don't know why, but these days whether I read Chinese or English I remember nothing afterwards. Getting old, or just lazy... Maybe it's a kind of learned helplessness caused by long-term confusion.
So let me translate it; maybe that way I will have really read such an article:
Scale Invariant Feature Transform (SIFT) is a method for detecting distinctive, stable image features that can be reliably matched between images, in order to perform tasks such as object detection and recognition, or to compute geometric transformations between images. This open-source SIFT library is available here. It is written in C using the OpenCV open-source computer vision library; it computes SIFT features in images, matches SIFT features between images using kd-trees, and computes geometric transformations between sets of matched features using RANSAC (random sample consensus). The library also includes feature import compatible with David Lowe's SIFT executable and with Oxford's VGG affine covariant feature detectors (http://www.cs.ubc.ca/~lowe/keypoints/). The pictures below depict such features.
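The pipeline described above (match features, then robustly estimate a transform) can be sketched minimally. This is not OpenSIFT's actual C code; it is a toy Python illustration of the RANSAC idea, using a simple 2-D translation model instead of the full transform the library estimates, and the function name and data are my own invention:

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Robustly estimate a 2-D translation (dx, dy) from matched point
    pairs, tolerating outlier matches. matches: [((x1, y1), (x2, y2)), ...]"""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        # minimal sample for a translation model is a single pair
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # count matches consistent with this candidate model
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in matches
                   if abs((c - a) - dx) < tol and abs((d - b) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine the model by averaging over all inliers of the best sample
    n = len(best_inliers)
    dx = sum(c - a for (a, b), (c, d) in best_inliers) / n
    dy = sum(d - b for (a, b), (c, d) in best_inliers) / n
    return (dx, dy), best_inliers
```

Here one correspondence suffices as the minimal sample because a translation has only two degrees of freedom; for an affine transform or a homography, the minimal sample would be 3 or 4 matches respectively.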
First picture: SIFT features detected in the two images.
Second picture: the matches that remain after RANSAC filtering.
Dependencies:
OpenCV, version 2.1 or higher
GTK+2, version 2.9 or higher [but I don't know what GTK is; some "G toolkit"?]
References:
The open-source SIFT library, R. Hess, ACM MM 2010 [name, venue, year?]
"Distinctive Image Features from Scale-Invariant Keypoints"
Background:
Then comes a patent notice.
That's why I want to run it. Besides OpenCV, which we already have to configure, this program also needs GTK configured. Fortunately it's all free, but it feels like it will still take some effort, good grief... [I clearly can't have really read it; sure enough, English is not my mother tongue, and reading it is genuinely laborious...]
In the afternoon I discussed this with my teacher; some of the points:
1. Weighted stitching: how should it be done? We get a large number of SIFT points, but in fact we only need the small subset that lies in the overlap, which matters for speed. This is the desktop-screenshot stitching problem I mentioned before: what to do about it.
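One common way to realize the weighted stitching in point 1 is feathering: inside the overlap, each output pixel is a weighted average of the two source images, with the weight ramping linearly from one side to the other. A minimal sketch, not the blog's actual method: pure Python on a single grayscale row, with widths and pixel values invented for illustration:

```python
def feather_blend_row(left, right, overlap):
    """Blend two pixel rows whose last/first `overlap` samples cover the
    same scene region. The weight of `left` ramps 1 -> 0 across the
    overlap while the weight of `right` ramps 0 -> 1."""
    out = list(left[:-overlap])                  # left-only region
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)              # right image's weight
        out.append((1 - w) * left[len(left) - overlap + i] + w * right[i])
    out.extend(right[overlap:])                  # right-only region
    return out
```

The linear ramp avoids a visible seam: instead of a hard cutover between the two images, intensity differences are spread smoothly across the overlap.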
2. For video content, say a video containing license-plate information that is not clear: can we extract clearer plate information from these blurry frames? That is, is there a way to obtain one clearer picture from a series of individually blurred photos? The teacher's suggestion: extract images from the video, take the first frame (or some middle frame) as the baseline, keep fitting the others to it, and see whether a better result can be obtained.
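The teacher's suggestion in point 2 (pick one frame as the baseline, register the rest to it, then combine) amounts to multi-frame averaging once the frames are aligned: noise that varies from frame to frame averages out by roughly 1/sqrt(N), while the shared content survives. A toy sketch, assuming the frames are already registered (real use would align them first; the function name and sample values are hypothetical):

```python
def average_aligned_frames(frames):
    """Pixel-wise mean of several aligned grayscale frames, each a list
    of rows. Frame-to-frame noise is attenuated; shared detail remains."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]
```

Note this only helps against noise and small random blur; it depends entirely on the registration step being accurate, otherwise averaging misaligned frames makes the result blurrier, not sharper.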
Now the immediate priorities are: the image-fusion part, doing this weighting, and getting a clear image out of blurry ones.
I also just got some more important literature links: actually, I've already forgotten them...
And then there's just one item left from the original blog:
some SIFT algorithm and panorama-stitching test downloads, including the test images provided on the Oxford University website.
Then comes the main body.
Take a look at basic fusion first. In fact the core is one basic algorithm; honestly, once you can implement one stitching pipeline, a lot of the follow-up research becomes relatively easy to do. Let's stop here for now.
Next: read and take notes on "Robhess's SIFT Source Analysis: Review".