
How to use DoG Pyramid in SIFT

I am very new to image processing and pattern recognition. I am trying to implement the SIFT algorithm; I am able to create the DoG pyramid and identify the local maxima and minima in each octave. What I don't understand is how to use these local maxima/minima in each octave. How do I combine these points?
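
To make concrete what I mean by identifying the extrema, here is a rough sketch of the kind of check I mean (a simplified version, assuming the DoG levels of one octave are plain 2-D NumPy arrays):

import numpy as np

def is_extremum(dog, s, y, x):
    # dog: list of DoG images for one octave (2-D NumPy arrays of equal size)
    # Keep the point if it is the largest or smallest value in the 3x3x3
    # neighbourhood spanning scales s-1, s and s+1 (the cube includes the
    # centre pixel itself).
    cube = np.stack([dog[s - 1][y - 1:y + 2, x - 1:x + 2],
                     dog[s][y - 1:y + 2, x - 1:x + 2],
                     dog[s + 1][y - 1:y + 2, x - 1:x + 2]])
    centre = dog[s][y, x]
    return centre == cube.max() or centre == cube.min()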

My question may sound very trivial. I have read Lowe's paper, but I could not really understand what he did after building the DoG pyramid. Any help is appreciated.

Thank you


Source: (StackOverflow)

How to use the SIFT algorithm to compute how similar two images are?

I have used the SIFT implementation of Andrea Vedaldi to calculate the SIFT descriptors of two similar images (the second image is actually a zoomed-in picture of the same object from a different angle).

Now I am not able to figure out how to compare the descriptors to tell how similar the images are.

I know that this question is not answerable unless you have actually played with this sort of thing before, but I thought that somebody who has done it before might know, so I posted the question.

The little I did to generate the descriptors:

>> i=imread('p1.jpg');
>> j=imread('p2.jpg');
>> i=rgb2gray(i);
>> j=rgb2gray(j);
>> [a, b]=sift(i);  % a has the frames and b has the descriptors
>> [c, d]=sift(j);
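
To make the comparison I am after concrete, here is a rough NumPy sketch of Lowe's ratio test over the two descriptor sets (it assumes one 128-dimensional descriptor per row, so the VLFeat matrices b and d would need transposing first; the 0.8 threshold is just a common default):

import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    # desc1, desc2: N1 x 128 and N2 x 128 descriptor matrices (one row per keypoint)
    matches = 0
    for d in desc1:
        dists = np.linalg.norm(desc2 - d, axis=1)
        nearest, second = np.partition(dists, 1)[:2]
        if nearest < ratio * second:   # Lowe's ratio test
            matches += 1
    return matches

# A larger match count (relative to the number of descriptors) suggests
# more similar images.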

Source: (StackOverflow)


Sift implementation with OpenCV 2.2

Does someone know of a link to an example SIFT implementation with OpenCV 2.2? Regards,


Source: (StackOverflow)

OpenCV 3.0 - module object has no attribute 'xfeatures2d'

I have shifted from OpenCV 2.4.9 to 3.0 to make use of the drawMatches and drawMatchesKnn functions. I came to know that it does not come with non-free algorithms like SIFT and SURF, so I installed opencv_contrib from https://github.com/Itseez/opencv_contrib with the following steps:

cmake -DOPENCV_EXTRA_MODULES_PATH=/home/zealous/Downloads/opencv_contrib-master/modules /usr/local ..

make -j5

make install

I also cross-checked in the modules of OpenCV; xfeatures2d was there. Then when I tried to do

>>> import cv2
>>> help(cv2.xfeatures2d)

It gives me the following error:

Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    help(cv2.xfeatures2d)
AttributeError: 'module' object has no attribute 'xfeatures2d'

What am I doing wrong here? Just FYI, I am using the OpenCV 3.0 beta version. Has OpenCV deactivated the Python wrappers for xfeatures2d, or have I not installed it the correct way?
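
For reference, what I ultimately want to run is something along these lines (a minimal sketch, assuming the contrib build is picked up by the Python bindings; the filename is just a placeholder):

import cv2

img = cv2.imread('box.png', 0)          # placeholder image
sift = cv2.xfeatures2d.SIFT_create()    # provided by opencv_contrib
kp, des = sift.detectAndCompute(img, None)
print(len(kp), des.shape)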


Source: (StackOverflow)

SURF and SIFT Alternative Object Tracking Algorithm for Augmented Reality

After asking here and trying both SURF and SIFT, neither of them seems to be efficient enough to generate interest points fast enough to track a stream from the camera.

SURF, for example, takes around 3 seconds to generate interest points for an image; that's way too slow to track video coming from a webcam, and it will be even worse on a mobile phone.

I just need an algorithm that tracks a certain area (its scale, tilt, etc.), and I can build on top of that.

Thanks


Source: (StackOverflow)

Comparing SIFT features stored in a mysql database

I'm currently extending an image library used to categorize images, and I want to find duplicate images, transformed images, and images that contain or are contained in other images.
I have tested the SIFT implementation from OpenCV and it works very well, but it would be rather slow for multiple images. To speed it up I thought I could extract the features and save them in a database, as a lot of other image-related metadata is already being held there.

What would be the fastest way to compare the features of a new image to the features in the database?
Usually the comparison is done by calculating the Euclidean distance using kd-trees, FLANN, or the Pyramid Match Kernel that I found in another thread here on SO but haven't looked into much yet.

Since I don't know of a way to save and search a kd-tree in a database efficiently, I currently see only three options:
* Let MySQL calculate the Euclidean distance to every feature in the database, although I'm sure that will take an unreasonable amount of time for more than a few images.
* Load the entire dataset into memory at the beginning and build the kd-tree(s) (see the sketch after this list). This would probably be fast, but very memory intensive, and all the data would need to be transferred from the database.
* Saving the generated trees in the database and loading all of them would be the fastest method, but it also generates a lot of traffic, since the kd-trees have to be rebuilt and sent to the server whenever new images are added.
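
As a rough sketch of the second option, assuming a hypothetical load_descriptors_from_db() helper that returns (image_id, descriptor_matrix) pairs with 128-float rows, the in-memory index and lookup could look something like this:

import numpy as np
import cv2

def build_index(db_rows):
    # db_rows: list of (image_id, N x 128 float32 descriptor matrix) pairs
    ids, mats = zip(*db_rows)
    all_desc = np.vstack(mats).astype(np.float32)
    # Remember which database image each descriptor row belongs to
    owners = np.concatenate([np.full(len(m), i) for i, m in zip(ids, mats)])
    matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4),   # kd-tree index
                                    dict(checks=32))
    matcher.add([all_desc])
    matcher.train()
    return matcher, owners

def best_match(matcher, owners, new_desc):
    # Vote for the database image that accumulates the most ratio-test matches
    votes = {}
    for m, n in matcher.knnMatch(new_desc.astype(np.float32), k=2):
        if m.distance < 0.7 * n.distance:
            img_id = owners[m.trainIdx]
            votes[img_id] = votes.get(img_id, 0) + 1
    return max(votes, key=votes.get) if votes else None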

I'm using the SIFT implementation of OpenCV, but I'm not dead set on it. If there is a feature extractor more suitable for this task (and roughly equally robust), I'd be glad if someone could suggest one.


Source: (StackOverflow)

SURF vs SIFT, is SURF really faster?

I am testing some object detection with SURF and SIFT.

SURF is claimed to be faster and more robust than SIFT, but I found in my tests that this is not true. On medium-sized images (600x400), SIFT runs at about the same speed as SURF and recognizes objects pretty well (maybe even better than SURF).
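
For reference, the kind of timing comparison I mean is roughly this (a simplified sketch using the OpenCV 2.4-style Python API; the filename and iteration count are arbitrary):

import time
import cv2

img = cv2.imread('scene_600x400.jpg', 0)    # placeholder test image

for detector in (cv2.SIFT(), cv2.SURF(400)):
    start = time.time()
    for _ in range(10):
        kp, des = detector.detectAndCompute(img, None)
    print(detector.__class__.__name__, (time.time() - start) / 10, 'seconds/image')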

Am I doing something wrong?

Edit

Please note there is an article explaining how SURF could be made much faster with a small change to the OpenCV code: http://computer-vision-talks.com/2011/06/a-few-thoughts-about-cvround/

If you know an active OpenCV developer, please point them to it.


Source: (StackOverflow)

How to get a rectangle around the target object using the features extracted by SIFT in OpenCV

I'm doing a project in OpenCV on object detection, which consists of matching an object in a template image with a reference image. Using the SIFT algorithm the features get accurately detected and matched, but I want a rectangle around the matched features. My algorithm uses the kd-tree Best-Bin-First technique to get the matches.
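
The approach I have been considering for the rectangle is to estimate a homography from the matches and project the template corners into the scene; here is a rough sketch (it assumes the matched keypoint coordinates are already available as src_pts/dst_pts arrays, and that template and scene are the two images):

import numpy as np
import cv2

# src_pts: matched keypoint coordinates in the template image, shape (N, 1, 2), float32
# dst_pts: the corresponding coordinates in the reference/scene image, same shape
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

h, w = template.shape[:2]
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
projected = cv2.perspectiveTransform(corners, H)

# Draw the (generally non-axis-aligned) rectangle around the detected object
cv2.polylines(scene, [np.int32(projected)], True, (0, 255, 0), 2)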


Source: (StackOverflow)

OpenCV Python and SIFT features

I know there are a lot of questions about Python and OpenCV, but I didn't find help on this specific topic.

I want to extract SIFT keypoints from an image in Python with OpenCV.

I have recently installed OpenCV 2.3 and can access SURF and MSER but not SIFT. I can't see anything related to SIFT in the Python modules (cv and cv2). (Well, I'm lying a bit: there are two constants, cv2.SIFT_COMMON_PARAMS_AVERAGE_ANGLE and cv2.SIFT_COMMON_PARAMS_FIRST_ANGLE.)

This has puzzled me for a while. Is it related to the fact that some parts of OpenCV are in C and others in C++? Any ideas?

P.S.: I have also tried pyopencv (another Python binding for OpenCV <= 2.1) without success.


Source: (StackOverflow)

opencv FLANN with ORB descriptors?

I am trying to use FLANN with ORB descriptors, but OpenCV crashes with this simple code:

vector<vector<KeyPoint> > dbKeypoints;
vector<Mat> dbDescriptors;
vector<Mat> objects;   

/*
  load Descriptors from images (with OrbDescriptorExtractor())
*/

FlannBasedMatcher matcher;

matcher.add(dbDescriptors); 
matcher.train(); //> Crash!

If I use SurfDescriptorExtractor() it works well.

How can I solve this?

OpenCV says:

OpenCV Error: Unsupported format or combination of formats (type=0) in unknown function, file D:\Value\Personal\Parthenope\OpenCV\modules\flann\src\miniflann.cpp, line 299

Source: (StackOverflow)

David Lowe's SIFT -- Question about scale space and image coordinates (weird offset problem)

I realize this is a highly specialized question, but here goes. I am using an implementation of SIFT to find matches between two images. With the current implementation that I have, when I match an image with its 90 or 180 degree rotated version, I get matches that are consistently off by around half a pixel, although the offset varies within a range. So, for example, if a match is found at pixel coordinate (x, y) in im1, then the corresponding match in its 90 degree rotated image im2 is at (x, y + 0.5). If I use a 180 degree rotated image the offset appears in both the x and y coordinates, and only in x if I use a 270 degree (-90) rotated image.
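
To make the expected behaviour concrete, the keypoint mapping I am assuming for a 90 degree clockwise rotation is the following (a sketch that assumes pixel centres sit at integer coordinates; if an implementation internally places pixel centres at half-integer coordinates, a constant 0.5 offset like the one above would appear):

def rotate90_cw(x, y, w, h):
    # Where a keypoint at (x, y) in a w x h image should land after the image
    # is rotated 90 degrees clockwise, with pixel centres at integer
    # coordinates 0 .. w-1 / 0 .. h-1.
    return (h - 1 - y, x)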

1) First of all, I am assuming SIFT should give me the same matching location in a rotated image. An implicit assumption is that the rotation does not change the pixel values of the image, which I confirmed is true. (I use IrfanView to rotate and save as a .pgm, and the pixel values remain unchanged.)

2) I have other implementations which do not give this offset.

3) I am assuming this offset is programming related and possibly has to do with the conversion from scale-space keypoint coordinates to image-space keypoint coordinates.

I'm hoping someone has run across this problem or can point me to a reference on how to convert from scale-space to image-space.


Source: (StackOverflow)

OpenCV Python can't use SURF, SIFT

I'm trying a simple thing like

    detector = cv2.SIFT()

and get this error:

detector = cv2.SIFT()
AttributeError: 'module' object has no attribute 'SIFT'

I do not understand this, because cv2 is installed.

cv2.__version__ is

$Rev: 4557 $

My system is Ubuntu 12.04.

Maybe someone has had the same problem and could help me.

Thanks a lot.

EDIT:

Long story short, testypypypy.py:

import cv2

detector = cv2.SIFT()

ERROR

Traceback (most recent call last):
  File "testypypy.py", line 3, in <module>
    detector = cv2.SIFT()
AttributeError: 'module' object has no attribute 'SIFT'

If I take "SURF" it works, because SURF is in dir(cv2), but if I also take cv2.BFMatcher() I get the same error... so it's missing and I have to add it, but I don't know how.


Source: (StackOverflow)

OpenCV-Python dense SIFT

OpenCV has very good documentation on generating SIFT descriptors, but this is a version of "weak SIFT", where the keypoints are detected by the original Lowe algorithm. The OpenCV example reads something like:

img = cv2.imread('home.jpg')
gray= cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT()
kp = sift.detect(gray,None)
kp,des = sift.compute(gray,kp)

What I'm looking for is strong/dense SIFT, which does not detect keypoints but instead calculates SIFT descriptors for a set of patches (e.g. 16x16 pixels, 8 pixels padding) covering an image as a grid. As I understand it, there are two ways to do this in OpenCV:

  • I could divide the image in a grid myself, and somehow convert those patches to KeyPoints
  • I could use a grid-based feature detector

In other words, I'd have to replace the sift.detect() line with something that gives me the keypoints I require.
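
For the first option, a rough sketch of what I have in mind looks like this (the step and patch size values are just guesses to illustrate the idea):

import cv2

img = cv2.imread('home.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT()
step = 8    # grid spacing in pixels (assumed)
size = 16   # keypoint size handed to the descriptor (assumed)

# Build one KeyPoint per grid position instead of calling sift.detect()
kp = [cv2.KeyPoint(x, y, size)
      for y in range(0, gray.shape[0], step)
      for x in range(0, gray.shape[1], step)]
kp, des = sift.compute(gray, kp)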

My problem is that the rest of the OpenCV documentation, especially with respect to Python, is severely lacking, so I have no idea how to achieve either of these things. I see in the C++ documentation that there are grid-based keypoint detectors, but I don't know how to use them from Python.

The alternative is to switch to VLFeat, which has a very good DSift/PHOW implementation, but that means I'll have to switch from Python to MATLAB.

Any ideas? Thanks.


Source: (StackOverflow)

How to train and predict using bag of words?

I have a folder of images of a car taken from every angle. I want to use the bag-of-words approach to train the system to recognize the car. Once the training is done, I want the system to be able to recognize the car when it is given an image of it.

I have been trying to learn the BOW functionality in OpenCV in order to make this work and have come to a point where I do not know what to do next, and some guidance would be appreciated.

Here is my code that I used to make the bag of words:

Ptr<FeatureDetector> detector = FeatureDetector::create("SIFT");
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SIFT");
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");

// defining terms for the BOWKMeansTrainer
TermCriteria tc(TermCriteria::MAX_ITER + TermCriteria::EPS, 10, 0.001);
int dictionarySize = 1000;
int retries = 1;
int flags = KMEANS_PP_CENTERS;
BOWKMeansTrainer bowTrainer(dictionarySize, tc, retries, flags);

BOWImgDescriptorExtractor bowDE(extractor, matcher);

// training data now
Mat descriptors1, descriptors2;
Mat img = imread("c:\\1.jpg", 0);
Mat img2 = imread("c:\\2.jpg", 0);
vector<KeyPoint> keypoints, keypoints2;
detector->detect(img, keypoints);
detector->detect(img2, keypoints2);
extractor->compute(img, keypoints, descriptors1);
extractor->compute(img2, keypoints2, descriptors2);
bowTrainer.add(descriptors1);
bowTrainer.add(descriptors2);

Mat dictionary = bowTrainer.cluster();
bowDE.setVocabulary(dictionary);

This is all based on the BOW documentation.

I think at this stage my system is trained, and the next step is predicting.

This is where I don't know what to do. If I use SVM or NormalBayesClassifier, they both use the terms train and predict.

How do I train and predict after this? Any guidance would be much appreciated. How do I connect the training of the classifier to my bowDE extractor?
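
What I think the next step looks like, sketched here in Python for brevity (the equivalent OpenCV BOW and SVM APIs exist in C++; detector and bowDE are assumed to mirror the code above, and training_pairs / query_img are hypothetical names): compute a BOW histogram for every training image with bowDE, train the classifier on those histograms, then compute the histogram of a new image and call predict.

import numpy as np
import cv2

# training_pairs: hypothetical list of (grayscale image, numeric label) tuples
train_data, labels = [], []
for train_img, label in training_pairs:
    kp = detector.detect(train_img)
    hist = bowDE.compute(train_img, kp)    # 1 x dictionarySize BOW histogram
    train_data.append(hist)
    labels.append(label)

svm = cv2.SVM()
svm.train(np.vstack(train_data).astype(np.float32),
          np.array(labels, dtype=np.float32))

# Prediction on a new (hypothetical) query image
kp = detector.detect(query_img)
hist = bowDE.compute(query_img, kp)
print(svm.predict(hist.astype(np.float32)))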


Source: (StackOverflow)

Sift Extraction - opencv

I'm trying to get started with SIFT feature extraction using OpenCV in C++. I need to extract features using SIFT, match them between the original image (e.g. a book) and a scene, and after that calculate the camera pose.

So far I have found this algorithm using SURF. Does anyone know of base code from which I can get started, or maybe a way to convert the algorithm in the link from SURF to SIFT?

Thanks in advance.

EDIT: OK, I worked out a solution for the SIFT problem. Now I'm trying to figure out the camera pose. I'm trying to use solvePnP; can anyone help me with an example?
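
For the solvePnP part, the minimal usage I have pieced together so far is roughly this (a sketch with made-up corner coordinates and intrinsics; in practice the image points would come from the matched/projected template corners and the camera matrix from calibration):

import numpy as np
import cv2

# Corners of the planar object (e.g. the book cover) in its own frame, z = 0,
# in arbitrary metric units (illustrative values)
obj_pts = np.float32([[0, 0, 0], [0.20, 0, 0], [0.20, 0.28, 0], [0, 0.28, 0]])
# Where those corners were found in the scene image (dummy pixel values here)
img_pts = np.float32([[320, 120], [540, 140], [530, 400], [310, 380]])
# Camera intrinsics; fx, fy, cx, cy would normally come from calibration
K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])
dist = np.zeros(4)    # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
print(rvec.ravel(), tvec.ravel())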


Source: (StackOverflow)