EzDevInfo.com

image-processing interview questions

Top frequently asked image-processing interview questions

How to convert image to base64 encoding?

Can you please guide me on how I can convert an image from a URL to base64 encoding?
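
A minimal Python sketch, assuming the image is fetched over HTTP (the URL is a placeholder):

import base64
import urllib.request

# Fetch the raw image bytes, then base64-encode them.
data = urllib.request.urlopen("https://example.com/image.png").read()
encoded = base64.b64encode(data).decode("ascii")
print(encoded[:60])  # first characters of the base64 string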


Source: (StackOverflow)

High Quality Image Scaling Library [closed]

I want to scale an image in C# with a quality level as good as Photoshop's. Is there a C# image-processing library available to do this?
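
Quality here mostly comes down to the resampling filter. As a hedged illustration in Python/Pillow rather than C# (file names are placeholders), Lanczos resampling is the kind of high-quality filter that "Photoshop-quality" scaling usually means:

from PIL import Image

# LANCZOS is Pillow's highest-quality resampling filter.
img = Image.open("input.jpg")
scaled = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
scaled.save("output.jpg", quality=95)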


Source: (StackOverflow)


Which library should I use for server-side image manipulation on Node.js? [closed]

I found a fairly large list of available libraries on the Node.js wiki, but I'm not sure which of them are mature and offer good performance. Basically, I want to do the following:

  1. load some images to a server from external sources
  2. put them onto one big canvas
  3. crop and mask them a bit
  4. apply a filter or two
  5. resize the final image and give a link to it

Big plus if the node package works on both Linux and Windows.
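
Not Node.js, but here is a sketch of that same five-step pipeline in Python/Pillow to make the steps concrete (URL, sizes, and coordinates are placeholders):

import io
import urllib.request
from PIL import Image, ImageFilter

def fetch(url):
    # 1. load an image from an external source
    return Image.open(io.BytesIO(urllib.request.urlopen(url).read()))

canvas = Image.new("RGB", (1600, 1200), "white")
img = fetch("https://example.com/photo.jpg")
canvas.paste(img, (100, 100))                   # 2. put it onto a big canvas
cropped = canvas.crop((0, 0, 1280, 960))        # 3. crop
filtered = cropped.filter(ImageFilter.SHARPEN)  # 4. apply a filter
filtered.resize((640, 480)).save("result.jpg")  # 5. resize and publish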


Source: (StackOverflow)

Fast Bitmap Blur For Android SDK

Currently, in an Android application that I'm developing, I'm looping through the pixels of an image to blur it. This takes about 30 seconds on a 640x480 image.

While browsing apps in the Android Market I came across one that includes a blur feature, and its blur is very fast (around 5 seconds), so they must be using a different method of blurring.

Anyone know a faster way other than looping through the pixels?
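
One reason per-pixel loops are slow: a naive blur reads O(r^2) neighbours per pixel, while a separable box blur with running sums costs O(1) per pixel regardless of radius. A NumPy sketch of that trick follows; the same idea ports to Java on Android (the well-known StackBlur variants build on it):

import numpy as np

def box_blur_1d(a, r):
    # Prefix sums make every (2r+1)-wide window sum an O(1) lookup.
    c = np.cumsum(np.pad(a, ((r + 1, r), (0, 0)), mode="edge"), axis=0)
    return (c[2 * r + 1:] - c[:-(2 * r + 1)]) / (2 * r + 1)

def box_blur(img, r):
    # Blur vertically, then horizontally via a transpose.
    return box_blur_1d(box_blur_1d(img, r).T, r).T

gray = np.random.rand(480, 640)  # stand-in for the 640x480 image
blurred = box_blur(gray, 8)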


Source: (StackOverflow)

Algorithm to compare two images

Given two different image files (in whatever format I choose), I need to write a program to predict the chance of one being an illegal copy of the other. The author of the copy may do things like rotating the image, making a negative, or adding trivial details (as well as changing its dimensions).

Do you know any algorithm to do this kind of job?
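
A common starting point is a perceptual hash. This hedged Python/Pillow sketch shows the simple "average hash": shrink, grayscale, threshold on the mean, then compare bit strings. It survives scaling and small edits, but rotation and negatives would need extra normalization passes on top:

from PIL import Image

def ahash(path, size=8):
    # Shrink to size x size grayscale, then threshold on the mean.
    img = Image.open(path).convert("L").resize((size, size))
    px = list(img.getdata())
    avg = sum(px) / len(px)
    return [p > avg for p in px]

def hamming(h1, h2):
    # Few differing bits -> the images are likely near-duplicates.
    return sum(a != b for a, b in zip(h1, h2))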


Source: (StackOverflow)

How do I find Waldo with Mathematica?

This was bugging me over the weekend: What is a good way to solve those Where's Waldo? ['Wally' outside of North America] puzzles, using Mathematica (image-processing and other functionality)?

Here is what I have so far, a function which reduces the visual complexity a little bit by dimming some of the non-red colors:

whereIsWaldo[url_] := Module[{waldo, waldo2, waldoMask},
    waldo = Import[url];
    (* keep only strongly red pixels: red -> white, everything else -> black *)
    waldo2 = Image[ImageData[waldo] /. {
        {r_, g_, b_} /; Not[r > .7 && g < .3 && b < .3] :> {0, 0, 0},
        {r_, g_, b_} /; (r > .7 && g < .3 && b < .3) :> {1, 1, 1}}];
    (* morphological closing fills small gaps so the red stripes become blobs *)
    waldoMask = Closing[waldo2, 4];
    (* overlay the mask on the original at 50% opacity *)
    ImageCompose[waldo, {waldoMask, .5}]
]

And an example of a URL where this 'works':

whereIsWaldo["http://www.findwaldo.com/fankit/graphics/IntlManOfLiterature/Scenes/DepartmentStore.jpg"]

(Waldo is by the cash register):

[Mathematica graphic]


Source: (StackOverflow)

“Diff” an image using ImageMagick

How can I get the difference between two images? I have the original image, and someone has written on an exact duplicate of it. Now I need to compare the original to the written-on image and extract just the writing, in image format.

Example: I have a picture of a house. Someone took a copy and wrote “Hello!” on the copy. I want to somehow compare the two pictures, remove the house, and be left with an image of the words “Hello!”.

Is this possible with ImageMagick? I know there are ways to get the statistical difference between images, but that is not what I am looking for.
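
ImageMagick can do this (e.g. via its difference compose mode), but as a language-neutral illustration of the operation itself, here is the per-pixel difference in Python/Pillow (file names are placeholders): everything the two images share goes to black, and only the writing remains.

from PIL import Image, ImageChops

# Both images must have the same size and mode.
original = Image.open("house.png").convert("RGB")
modified = Image.open("house_hello.png").convert("RGB")

diff = ImageChops.difference(original, modified)  # identical pixels -> black
diff.save("writing_only.png")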


Source: (StackOverflow)

How can I quantify difference between two images?

Here's what I would like to do:

I'm taking pictures with a webcam at regular intervals, sort of like a time-lapse thing. However, if nothing has really changed, that is, the picture pretty much looks the same, I don't want to store the latest snapshot.

I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.

I'm looking for simplicity rather than perfection. I'm using Python.
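
In that spirit, a deliberately simple metric: the mean absolute pixel difference between consecutive grayscale frames, on a 0-255 scale. File names and the threshold are placeholders; the threshold would be tuned empirically as described above.

import numpy as np
from PIL import Image

def frame_diff(path_a, path_b):
    a = np.asarray(Image.open(path_a).convert("L"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=np.int16)
    return float(np.abs(a - b).mean())  # 0 = identical, 255 = opposite

if frame_diff("prev.png", "curr.png") > 5.0:  # empirical threshold
    print("scene changed -- keep the snapshot")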


Source: (StackOverflow)

Remove white background from an image and make it transparent

We're trying to do in Mathematica what this RMagick question asks: remove the white background from an image and make it transparent.

But with actual photos it ends up looking lousy (like having a halo around the image).

Here's what we've tried so far:

unground0[img_] := With[{mask = ChanVeseBinarize[img, TargetColor -> {1., 1., 1.}]},
  Rasterize[SetAlphaChannel[img, ImageApply[1 - # &, mask]], Background -> None]]

Here's an example of what that does.

Original image:

[image]

Image with the white background replaced with no background (or, for demonstration purposes here, a pink background):

[image with transparent background -- actually a pink background here, to make the halo problem obvious]

Any ideas for getting rid of that halo? Tweaking things like LevelPenalty, I can only get the halo to go away at the expense of losing some of the image.

EDIT: So I can compare solutions for the bounty, please structure your solution as above, namely as a self-contained function named unground-something that takes an image and returns an image with a transparent background. Thanks so much, everyone!
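
One direction for the halo itself (a hedged NumPy sketch, not a Mathematica answer): model each pixel as observed = alpha*fg + (1-alpha)*white and un-blend, so edge pixels lose the white contamination that creates the halo. The file name is a placeholder, and a pure white background is assumed:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.png").convert("RGB")).astype(np.float64) / 255.0

# Against pure white, a crude per-pixel alpha estimate is how far the
# darkest channel falls below 1.
alpha = 1.0 - img.min(axis=2)

# Un-blend: fg = (observed - (1 - alpha)) / alpha, guarding alpha ~ 0.
a = alpha[..., None]
fg = np.clip((img - (1.0 - a)) / np.maximum(a, 1e-6), 0.0, 1.0)

rgba = np.dstack([fg, alpha])
Image.fromarray((rgba * 255).astype(np.uint8), "RGBA").save("out.png")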


Source: (StackOverflow)

Image to ASCII art conversion

Prologue

This subject pops up on SO from time to time, but it is usually removed because it is a poorly written question. I have seen many such questions, followed by silence from the OP (usually low rep) when additional info is requested. From time to time, if the input is good enough, I decide to respond with an answer, and it usually gets a few up-votes per day while active, but after a few weeks the question gets removed/deleted and everything starts from the beginning. So I decided to write this Q&A so I can reference such questions directly without rewriting the answer over and over again …

Another reason is this META thread targeted at me, so if you have additional input, feel free to comment.

Question

How do I convert a bitmap image to ASCII art using C++?

Some constraints:

  • grayscale images
  • using monospaced fonts
  • keeping it simple (nothing too advanced for beginner-level programmers)

Here is a related Wiki page: ASCII art (thanks to @RogerRowland).
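
The question asks for C++, but the core algorithm fits in a few lines of any language. A hedged Python sketch of the standard approach, downsampling and mapping each cell's brightness onto a dark-to-light character ramp:

from PIL import Image

RAMP = "@#%xo;:,. "  # darkest glyph first

def to_ascii(path, cols=80):
    img = Image.open(path).convert("L")  # grayscale, per the constraints
    cell = img.width / cols
    rows = max(1, int(img.height / (cell * 2)))  # glyphs are ~2x taller than wide
    img = img.resize((cols, rows))
    px = img.load()
    return "\n".join(
        "".join(RAMP[px[x, y] * (len(RAMP) - 1) // 255] for x in range(cols))
        for y in range(rows))

print(to_ascii("input.png"))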


Source: (StackOverflow)

How would I tint an image programmatically on the iPhone?

I would like to tint an image with a color reference. The result should look like the Multiply blending mode in Photoshop, where whites would be replaced with the tint:

[image]

I will be changing the color value continuously.

Follow-up: I would put the code to do this in my ImageView's drawRect: method, right?

As always, a code snippet would greatly aid in my understanding, as opposed to a link.

Update: I am subclassing UIImageView with the code Ramin suggested.

I put this in viewDidLoad: of my view controller:

[self.lena setImage:[UIImage imageNamed:kImageName]];
[self.lena setOverlayColor:[UIColor blueColor]];
[super viewDidLoad];

I see the image, but it is not being tinted. I also tried loading other images, setting the image in IB, and calling setNeedsDisplay: in my view controller.

Update: drawRect: is not being called.

Final update: I found an old project that had an imageView set up properly so I could test Ramin's code and it works like a charm!

Final, final update:

For those of you just learning about Core Graphics, here is the simplest thing that could possibly work.

In your subclassed UIView:

- (void)drawRect:(CGRect)rect {

    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetFillColor(context, CGColorGetComponents([UIColor colorWithRed:0.5 green:0.5 blue:0 alpha:1].CGColor)); // don't make color too saturated

    CGContextFillRect(context, rect); // draw base

    [[UIImage imageNamed:@"someImage.png"] drawInRect: rect blendMode:kCGBlendModeOverlay alpha:1.0]; // draw image
}

Source: (StackOverflow)

OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection

I successfully implemented the OpenCV square-detection example in my test application, but now I need to filter the output because it's quite messy -- or is my code wrong?

I'm interested in the four corner points of the paper for skew reduction (like that) and further processing …

Input & output: [image]

Original image:

[image]

Code:

double angle( cv::Point pt1, cv::Point pt2, cv::Point pt0 ) {
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}

- (std::vector<std::vector<cv::Point> >)findSquaresInImage:(cv::Mat)_image
{
    std::vector<std::vector<cv::Point> > squares;
    cv::Mat pyr, timg, gray0(_image.size(), CV_8U), gray;
    int thresh = 50, N = 11;
    cv::pyrDown(_image, pyr, cv::Size(_image.cols/2, _image.rows/2));
    cv::pyrUp(pyr, timg, _image.size());
    std::vector<std::vector<cv::Point> > contours;
    for( int c = 0; c < 3; c++ ) {
        int ch[] = {c, 0};
        mixChannels(&timg, 1, &gray0, 1, ch, 1);
        for( int l = 0; l < N; l++ ) {
            if( l == 0 ) {
                cv::Canny(gray0, gray, 0, thresh, 5);
                cv::dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
            }
            else {
                gray = gray0 >= (l+1)*255/N;
            }
            cv::findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            std::vector<cv::Point> approx;
            for( size_t i = 0; i < contours.size(); i++ )
            {
                cv::approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
                if( approx.size() == 4 && fabs(contourArea(cv::Mat(approx))) > 1000 && cv::isContourConvex(cv::Mat(approx))) {
                    double maxCosine = 0;

                    for( int j = 2; j < 5; j++ )
                    {
                        double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }

                    if( maxCosine < 0.3 ) {
                        squares.push_back(approx);
                    }
                }
            }
        }
    }
    return squares;
}

EDIT 17/08/2012:

To draw the detected squares on the image use this code:

cv::Mat debugSquares( std::vector<std::vector<cv::Point> > squares, cv::Mat image )
{
    for ( int i = 0; i < (int)squares.size(); i++ ) {
        // draw contour
        cv::drawContours(image, squares, i, cv::Scalar(255,0,0), 1, 8, std::vector<cv::Vec4i>(), 0, cv::Point());

        // draw bounding rect
        cv::Rect rect = boundingRect(cv::Mat(squares[i]));
        cv::rectangle(image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);

        // draw rotated rect
        cv::RotatedRect minRect = minAreaRect(cv::Mat(squares[i]));
        cv::Point2f rect_points[4];
        minRect.points( rect_points );
        for ( int j = 0; j < 4; j++ ) {
            cv::line( image, rect_points[j], rect_points[(j+1)%4], cv::Scalar(0,0,255), 1, 8 ); // red (BGR order)
        }
    }

    return image;
}

Source: (StackOverflow)

Simple and fast method to compare images for similarity

I need a simple and fast way to compare two images for similarity, i.e. I want to get a high value if they contain exactly the same thing but may have a slightly different background and may be moved / resized by a few pixels.

(More concrete, if that matters: The one picture is an icon and the other picture is a subarea of a screenshot and I want to know if that subarea is exactly the icon or not.)

I have OpenCV at hand but I am still not that used to it.

One possibility I have thought about so far: divide both pictures into 10x10 cells and, for each of those 100 cells, compare the color histograms. Then I can set some made-up threshold value, and if the value I get is above that threshold, I assume that they are similar.

I haven't tried yet how well that works, but I guess it would be good enough. The images are already pretty similar (in my use case), so I can use a fairly high threshold value.

I guess there are dozens of other possible solutions that would work more or less well (the task itself is quite simple, as I only want to detect similarity if the images are really very similar). What would you suggest?
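
For reference, a hedged OpenCV-Python sketch of exactly that grid-of-histograms idea (file names and bin counts are placeholders):

import cv2
import numpy as np

def similarity(path_a, path_b, grid=10):
    a = cv2.imread(path_a)
    b = cv2.resize(cv2.imread(path_b), (a.shape[1], a.shape[0]))
    h, w = a.shape[:2]
    scores = []
    for gy in range(grid):
        for gx in range(grid):
            ys = slice(gy * h // grid, (gy + 1) * h // grid)
            xs = slice(gx * w // grid, (gx + 1) * w // grid)
            ha = cv2.calcHist([a[ys, xs]], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
            hb = cv2.calcHist([b[ys, xs]], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
            scores.append(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))
    return float(np.mean(scores))  # close to 1.0 means very similar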


There are a few very related / similar questions about obtaining a signature/fingerprint/hash from an image, and I also stumbled upon some implementations that provide such fingerprint functions.


A bit off-topic: there exist many methods to create audio fingerprints. MusicBrainz, a web service that provides fingerprint-based lookup for songs, has a good overview in their wiki. They are using AcoustID now. This is for finding exact (or mostly exact) matches. For finding similar matches (or if you only have some snippets or high noise), take a look at Echoprint. A related SO question is here. So it seems this is solved for audio. All these solutions work quite well.

A somewhat more generic question about fuzzy search in general is here. E.g. there is locality-sensitive hashing and nearest neighbor search.


Source: (StackOverflow)

How to sort my paws?

In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws:

[image]

I manually annotated the paws (RF = right front, RH = right hind, LF = left front, LH = left hind).

As you can see there's clearly a repeating pattern and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated.

My initial thought was to use heuristics to do the sorting (a rough code sketch follows the list below), like:

  • There's a ~60-40% ratio in weight bearing between the front and hind paws;
  • The hind paws are generally smaller in surface area;
  • The paws are (often) spatially divided in left and right.
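
For anyone who wants the heuristics concrete, a rough Python sketch, assuming hypothetical per-impact records with an id, a summed pressure array, and a mean lateral position (one impact per paw in a stride):

def label_paws(paws):
    # Front paws carry ~60% of the weight, so the two heaviest are front.
    by_load = sorted(paws, key=lambda p: p["pressure"].sum(), reverse=True)
    front, hind = by_load[:2], by_load[2:]
    labels = {}
    for pair, tag in ((front, "F"), (hind, "H")):
        # Within each pair, split left/right on lateral position.
        left, right = sorted(pair, key=lambda p: p["mean_x"])
        labels[left["id"]] = "L" + tag
        labels[right["id"]] = "R" + tag
    return labels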

However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encountered a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own.

Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like.

Based on the answers I received on my question about peak detection within the paw, I’m hoping there are more advanced solutions to sort the paws. Especially because the pressure distribution and the progression thereof are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence.

[image]

So I'm looking for a better way to sort the results with their corresponding paw.

For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (location on the plate and in time).

To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted.

Also note that 'false' impacts, such as where the paw is only partially measured (in space or time), can be ignored. They are only useful because they can help with recognizing a pattern, but they won't be analyzed.

And for anyone interested, I’m keeping a blog with all the updates regarding the project!


Source: (StackOverflow)

Image Segmentation using Mean Shift explained

Could anyone please help me understand how Mean Shift segmentation actually works?

Here is an 8x8 matrix that I just made up:

  103  103  103  103  103  103  106  104   
  103  147  147  153  147  156  153  104   
  107  153  153  153  153  153  153  107   
  103  153  147  96   98   153  153  104   
  107  156  153  97   96   147  153  107   
  103  153  153  147  156  153  153  101   
  103  156  153  147  147  153  153  104   
  103  103  107  104  103  106  103  107

Using the matrix above, is it possible to explain how Mean Shift segmentation would separate the 3 different levels of numbers?
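
Not a full segmenter, but a hedged 1-D sketch of the idea, run on intensities sampled from the matrix above: every value repeatedly moves to the mean of all values within a bandwidth of it, and values that settle on the same mode form one cluster.

import numpy as np

values = np.array([96, 98, 103, 104, 107, 147, 153, 156], dtype=float)

def mean_shift_1d(v, bandwidth=10.0, iters=20):
    modes = v.copy()
    for _ in range(iters):
        # Each point climbs toward the mean of its neighbourhood.
        modes = np.array([v[np.abs(v - m) <= bandwidth].mean() for m in modes])
    return np.round(modes, 1)

print(mean_shift_1d(values))  # the ~100s collapse to one mode, the ~150s to another

On intensity alone the ~96-98 center merges with the ~103-107 border; a real mean-shift segmenter works in the joint (x, y, intensity) space, which is how the spatially separated center, ring, and border come out as three segments.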


Source: (StackOverflow)