
javacv

Java interface to OpenCV and more

ProGuard removing annotations in Android application

I have included a library in my app using Gradle:

compile group: 'org.bytedeco', name: 'javacv', version: '0.11'

This builds fine, but whenever I run the app with ProGuard enabled, it apparently strips the @Platform annotation from the jars that get included.

I tried the following, based on http://proguard.sourceforge.net/manual/examples.html#annotations:

-keepattributes *Annotation*

-keep @org.bytedeco.javacpp.annotation interface * {
    *;
}

I also tried the following, based on http://proguard.sourceforge.net/manual/troubleshooting.html#notkept:

-keep @interface *

But that doesn't work either. What else can I try to prevent ProGuard from removing these annotations? I was thinking about using -injars or -libraryjars, but I believe Gradle handles that for you.


The solution:

I have included the following in my ProGuard rules:

# JavaCV
-keep @org.bytedeco.javacpp.annotation interface * {
    *;
}

-keep @org.bytedeco.javacpp.annotation.Platform public class *

-keepclasseswithmembernames class * {
    @org.bytedeco.* <fields>;
}

-keepclasseswithmembernames class * {
    @org.bytedeco.* <methods>;
}

-keepattributes EnclosingMethod
-keep @interface org.bytedeco.javacpp.annotation.*,javax.inject.*

-keepattributes *Annotation*, Exceptions, Signature, Deprecated, SourceFile, SourceDir, LineNumberTable, LocalVariableTable, LocalVariableTypeTable, Synthetic, EnclosingMethod, RuntimeVisibleAnnotations, RuntimeInvisibleAnnotations, RuntimeVisibleParameterAnnotations, RuntimeInvisibleParameterAnnotations, AnnotationDefault, InnerClasses
-keep class org.bytedeco.javacpp.** {*;}
-dontwarn java.awt.**
-dontwarn org.bytedeco.javacv.**
-dontwarn org.bytedeco.javacpp.**

# end javacv

And the following lines in my Gradle file (the most recent versions as of 7 May 2015):

compile group: 'org.bytedeco', name: 'javacv', version: '0.11'
compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '2.4.11-0.11', classifier: 'android-arm'
compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '2.4.11-0.11', classifier: 'android-x86'
compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '2.6.1-0.11', classifier: 'android-arm'
compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '2.6.1-0.11', classifier: 'android-x86'

I am quite sure some of these ProGuard rules are overkill, but I have not yet tested which ones are redundant. You may want to figure that out yourself if you run into this issue.


Source: (StackOverflow)

Display two videos together then output as a merged video on a single screen

This question may sound a little complex or ambiguous, but I'll try to make it as clear as I can. I have done lots of Googling and spent lots of time, but didn't find anything relevant for Windows.

I want to play two videos on a single screen: one full screen in the background, and one on top of it in a small window in the right corner. Then I want an output that consists of both videos playing together on a single screen.

So basically one video overlays another and then I want that streamed as output so the user can play that stream later.

I am not asking you to write the whole code; just tell me what to do, how to do it, or which tool or third-party SDK I should use to make it happen.

Update: I have tried a lot of solutions:

1. Xuggler: doesn't support Android.

2. JavaCV or JJMPEG: I was not able to find any tutorial that shows how to do this.

Now I am looking at FFmpeg. I searched for a long time but could not find any tutorial that shows how to do it in code; I only found the command-line way (a sketch of it is shown below). Can anyone point me to an FFmpeg tutorial for this, or suggest another way to do it?
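
For reference, a minimal sketch of the command-line way just mentioned (file names are placeholders): FFmpeg's overlay filter composites the second input onto the first at a given position, here the bottom-right corner with a 10-pixel margin.

ffmpeg -i background.mp4 -i small.mp4 -filter_complex "overlay=main_w-overlay_w-10:main_h-overlay_h-10" output.mp4

A programmatic port of the same idea would grab frames from both inputs (for example with JavaCV's FFmpegFrameGrabber), draw the small frame onto the large one, and feed the composited frames to an FFmpegFrameRecorder.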


Source: (StackOverflow)


OpenCV: Convert floorplan image into data model

My plan is to extract information from a floor plan drawn on paper. I have already managed to detect 70-80% of the drawn doors:

Detecting doors in a floorplan

Now I want to create a data model from the walls. I already managed to extract them as you can see here:

[image: extracted walls]

From that I created the contours:

[image: extracted wall lines]

My idea now was to get the intersections of the lines in that image and create a data model from them. However, if I use the Hough lines algorithm I get something like this:

[image]

Does somebody have a different idea of how to get the intersections, or another idea of how to build a model? That would be very nice.

PS: I am using JavaCV, but an algorithm in OpenCV would also be fine, as I could translate it.
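
For the intersection step, here is a minimal sketch (my own helper, not from the question): in its standard mode, cvHoughLines2 returns each line as a (rho, theta) pair describing x*cos(theta) + y*sin(theta) = rho, and two such lines meet where this 2x2 linear system is solvable.

static double[] intersect(double rho1, double theta1, double rho2, double theta2) {
    // solve x*cos(t) + y*sin(t) = rho simultaneously for both lines
    double det = Math.cos(theta1) * Math.sin(theta2) - Math.sin(theta1) * Math.cos(theta2);
    if (Math.abs(det) < 1e-9) {
        return null; // lines are (nearly) parallel: no usable intersection
    }
    double x = (rho1 * Math.sin(theta2) - rho2 * Math.sin(theta1)) / det;
    double y = (rho2 * Math.cos(theta1) - rho1 * Math.cos(theta2)) / det;
    return new double[] { x, y };
}

Intersections that fall outside the image, or that come from nearly parallel line pairs, can be discarded; the surviving corner points can then serve as the nodes of the wall graph.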


Source: (StackOverflow)

Android computer vision comparison: JavaCV, OpenCV, FastCV

I am working on a school project, part of which should cover the current state of computer vision libraries for Android. I went into it with great enthusiasm, because computer vision seems like a fascinating subject, but I have been searching for more than a week and have not found much. I would like to provide information about the libraries themselves and a comparison between them.

I will share what I found so far.

OpenCV

  • seems like the most advanced one and the most popular.

  • provides the largest number of functions

  • has had problems with backward compatibility

  • is fast (at least so I have heard, but I have no data on it)

  • has the largest number of books about it (at least for the C++ version)

JavaCV

  • is a wrapper for several other libraries, including OpenCV

FastCv

  • newer, with Qualcomm behind it.

Wikitude

  • this is more for augmented reality, but at its core it is still computer vision.

As you can see, I have little information, and doing my own tests for every library is far beyond my current computer vision skills.

Kind regards, Peter.


Source: (StackOverflow)

OpenCV/JavaCV face recognition - Very similar confidence values

I will explain what I am trying to do, as it seems to be relevant in order to understand my question.

I am currently trying to do face recognition of people that step in front of a camera, based on known pictures in the database.

These known pictures are collected from an identifying Smart Card (which contains only a single frontal face picture) or from a frontal face profile picture on a social network. From what I've read so far, good face recognition seems to require a good number of training images (50+). Since the images I can collect are far too few to create a reliable training set, I instead tried using my live camera frame captures (currently 150) as the training set, and the previously collected identification pictures as the test set. I'm not sure that what I'm trying here is correct, so please let me know if I'm screwing up.

So, the problem: say I have 5 identification pictures that I got from Smart Cards, and I try to recognize them using the 150 frames the camera captured of my face as the training set. The confidence values for each of the 5 test faces come out EXTREMELY similar, making the whole program useless, because I cannot accurately recognize anyone. Often, using different camera captures for training, I get higher confidence values for pictures of random people than for the picture of myself.

I would appreciate any help you can give me, because I'm at a loss here.

Thank you.

Note: I'm using the JavaCV wrapper for OpenCV to build my program, and the Haar cascades included in the package. Eigenfaces is the algorithm used.
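
Not an answer, but one hedged suggestion: Eigenfaces is known to be sensitive to illumination, and Smart Card scans are lit very differently from live camera frames, which could flatten the confidence values. A minimal preprocessing sketch (variable names are mine) that grayscales and equalizes every training and test image the same way before use:

IplImage gray = cvCreateImage(cvGetSize(face), IPL_DEPTH_8U, 1); // face: the cropped BGR face image
cvCvtColor(face, gray, CV_BGR2GRAY);
cvEqualizeHist(gray, gray); // spread the intensity histogram to normalize lighting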


Source: (StackOverflow)

Setting video stream metadata using Ffmpeg

I'm using the JavaCV FFmpegFrameRecorder class to encode Android's camera preview frames into a video.

The goal would be to replicate the result of the following command line:

ffmpeg -i input.mp4 -metadata:s:v:0 rotate="90" output.mp4

I modified the startUnsafe() method as follows, but it failed to generate the desired output:

if ((video_st = avformat_new_stream(oc, video_codec)) != null) {
        video_c = video_st.codec();
        video_c.codec_id(oformat.video_codec());
        video_c.codec_type(AVMEDIA_TYPE_VIDEO);
        ...
        AVDictionary avDictionary = new AVDictionary(null);
        av_dict_set(avDictionary, "rotate", "90", 0);
        video_st.metadata(avDictionary);
        ...
}
...
avformat_write_header(oc, (PointerPointer) null);

This still encodes the video correctly, but the added metadata never shows up in ffprobe. If it helps, the video encoding is H.264.

By the way, here's the ffprobe output:

ffprobe version 2.3.3 Copyright (c) 2007-2014 the FFmpeg developers
  built on Jan 22 2015 18:22:57 with Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/2.3.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-avresample --enable-vda --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-libxvid --enable-libfreetype --enable-libvorbis --enable-libvpx --enable-libass --enable-ffplay --enable-libfdk-aac --enable-libopus --enable-libquvi --enable-libx265
  libavutil      52. 92.100 / 52. 92.100
  libavcodec     55. 69.100 / 55. 69.100
  libavformat    55. 48.100 / 55. 48.100
  libavdevice    55. 13.102 / 55. 13.102
  libavfilter     4. 11.100 /  4. 11.100
  libavresample   1.  3.  0 /  1.  3.  0
  libswscale      2.  6.100 /  2.  6.100
  libswresample   0. 19.100 /  0. 19.100
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'abcd.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf56.15.102
  Duration: 00:00:19.48, start: 0.023220, bitrate: 572 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1280x720, 573 kb/s, 5.71 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 64 kb/s (default)
    Metadata:
      handler_name    : SoundHandler

Any suggestions on why it is failing? Thanks.
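
As a quick check (not a fix), ffprobe can be asked for the tag directly; when the rotate tag has actually been written, it appears in the video stream's Metadata block, next to handler_name in the dump above:

ffprobe -show_streams output.mp4 | grep rotate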


Source: (StackOverflow)

Capturing a single image from my webcam in Java or Python

I want to capture a single image from my webcam and save it to disk. I want to do this in Java or Python (preferably Java). I want something that will work on both 64-bit Win7 and 32-bit Linux.

EDIT: I use Python 3.x, not 2.x

Because everywhere else I've seen this question asked, people manage to get confused, I'm going to state a few things explicitly:

  • I do not want to use Processing
  • I do not want to use any language other than those stated above
  • I do want to display this image on my screen in any way, shape or form
  • I do not want to display a live video feed from my webcam on my screen, or save such a feed to my hard drive
  • The Java Media Framework is far too out of date. Do not suggest it.
  • I would rather not use JavaCV, but if I absolutely must, I want to know exactly which files from the OpenCV library I need, and how I can use these files without including the entire library (and preferably without sticking these files in any sort of PATH. Everything should be included in the one directory)
  • I can use Eclipse on the 64-bit Win7 computer if need be, but I also have to be able to compile and use it on 32-bit Linux as well
  • If you think I might or might not know something related to this subject in any way shape or form, please assume I do not know it, and tell me

EDIT2: I was able to get Froyo's pygame example working on Linux using Python 2.7 and pygame 1.9.1. The pygame.camera.camera_list() call didn't work, but it was unnecessary for the rest of the example. However, I had to call cam.set_controls() (for which you can find the documentation here: http://www.pygame.org/docs/ref/camera.html) to raise the brightness so I could actually see anything in the image I captured.

Also, I needed to call the cam.get_image() and pygame.image.save() methods three times before the image I supposedly took on the first pair of calls actually got saved. They appeared to be stuck in a weird buffer. Basically, instead of calling cam.get_image() once, I had to call it three times every single time I wanted to capture an image. Then, and only then, did I call pygame.image.save().

Unfortunately, as stated below, pygame.camera is only supported on Linux. I still don't have a solution for Windows.
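
For completeness, since JavaCV keeps coming up as the fallback: a minimal single-shot sketch against the 0.x API (the device index 0 and the output file name are assumptions):

import com.googlecode.javacv.OpenCVFrameGrabber;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import static com.googlecode.javacv.cpp.opencv_highgui.cvSaveImage;

public class GrabOne {
    public static void main(String[] args) throws Exception {
        OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0); // first webcam
        grabber.start();
        IplImage img = grabber.grab();       // capture a single frame
        if (img != null) {
            cvSaveImage("capture.jpg", img); // write it to disk
        }
        grabber.stop();
    }
}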


Source: (StackOverflow)

Creating video from a sequence of images with JavaCV

To create video from a sequence of images on Android I used the JavaCV 0.6 library, but I ran into a problem: it works normally on HTC Sensation (Android 4.0.1, ARMv7 processor) and HTC Desire (Android 2.3.3, ARMv7 processor) phones, but it does not work on the HTC Wildfire (Android 2.3.5, ARMv6 processor). In particular, it fails in this part of the code:

FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(videoFilePath,       
TalkingPhotoConstants.VIDEO_FRAME_WIDTH,TalkingPhotoConstants.VIDEO_FRAME_HEIGHT);

in the attached code.

public class MovieCreator extends AsyncTask<String, Void, Boolean> {

private opencv_core.IplImage[] iplimage;
private String audioFilePath;


private ProgressDialog progressDialog;
private Context context;
private List<TalkFrame> frames;

public MovieCreator(Context context, opencv_core.IplImage[] images, String audioFilePath,           
List<TalkFrame> frames) {
    this.context = context;
    this.iplimage = images;
    this.audioFilePath = audioFilePath;
    this.frames = frames;

}

private String createMovie() {

    String videoName = TalkingPhotoConstants.TMP_VIDEO_NAME;
    String path = TalkingPhotoConstants.RESOURCES_TMP_FOLDER;
    String videoFilePath = path + videoName;
    String finalVideoName = TalkingPhotoConstants.FINAL_VIDEO_NAME + 
    System.currentTimeMillis() + ".mp4";
    String finalVideoPath = TalkingPhotoConstants.RESOURCES_FOLDER + finalVideoName;

    try {

        FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(videoFilePath, 
        TalkingPhotoConstants.VIDEO_FRAME_WIDTH,TalkingPhotoConstants.VIDEO_FRAME_HEIGHT);


        //int frameCount = iplimage.length;
        int frameCount = frames.size();
        recorder.setAudioCodec(AV_CODEC_ID_AMR_NB);
        recorder.setVideoCodec(AV_CODEC_ID_MPEG4);

        recorder.setVideoBitrate(120000);
        recorder.setFrameRate(TalkingPhotoConstants.VIDEO_FRAME_RATE);

        recorder.setPixelFormat(AV_PIX_FMT_YUV420P);
        recorder.setFormat("mp4");

        recorder.start();


        for (int i = 0; i < frameCount; i++) {
            TalkFrame currentFrame = frames.get(i);
            long duration = currentFrame.getDuration();
            opencv_core.IplImage iplImage = cvLoadImage(currentFrame.getImageName());

            for (int j = 0; j < TalkingPhotoConstants.VIDEO_FRAME_RATE * duration; j++) {
                recorder.record(iplImage);

            }

        }

        recorder.stop();

        mergeAudioAndVideo(videoFilePath, audioFilePath, finalVideoPath);

    } catch (Exception e) {
        Log.e("problem", "problem", e);
        finalVideoName = "";
    }
    return finalVideoName;
}

private boolean mergeAudioAndVideo(String videoPath, String audioPath, String outPut)  
throws Exception {
    boolean isCreated = true;
    File file = new File(videoPath);
    if (!file.exists()) {
        return false;
    }


    FrameGrabber videoGrabber = new FFmpegFrameGrabber(videoPath);
    FrameGrabber audioGrabber = new FFmpegFrameGrabber(audioPath);

    videoGrabber.start();
    audioGrabber.start();
    FrameRecorder recorder = new FFmpegFrameRecorder(outPut,
            videoGrabber.getImageWidth(), videoGrabber.getImageHeight(),
            audioGrabber.getAudioChannels());


    recorder.setFrameRate(videoGrabber.getFrameRate());
    recorder.start();
    Frame videoFrame = null, audioFrame = null;
    while ((audioFrame = audioGrabber.grabFrame()) != null) {
        videoFrame = videoGrabber.grabFrame();
        if (videoFrame != null) {
            recorder.record(videoFrame);
        }
        recorder.record(audioFrame);

    }
    recorder.stop();
    videoGrabber.stop();
    audioGrabber.stop();
    return isCreated;
}

@Override
protected Boolean doInBackground(String... params) {
    String fileName = createMovie();
    boolean result = fileName.isEmpty();
    if (!result) {
        VideoDAO videoDAO = new VideoDAO(context);
        videoDAO.open();
        videoDAO.createVideo(fileName);
        videoDAO.close();
    }
    //Utils.cleanTmpDir();
    return result;
}

@Override
protected void onPreExecute() {
    progressDialog = new ProgressDialog(context);
    progressDialog.setTitle("Processing...");
    progressDialog.setMessage("Please wait.");
    progressDialog.setCancelable(false);
    progressDialog.setIndeterminate(true);
    progressDialog.show();
}

@Override
protected void onPostExecute(Boolean result) {
    if (progressDialog != null) {
        progressDialog.dismiss();

    }
}

}

There is no exception.

1. How can I fix it?

2. I have a theory that the problem is connected with the device's processor type. If I'm right, how can I solve it?

Thanks in advance.


Source: (StackOverflow)

How to identify a polygon using OpenCV or JavaCV?

I'm doing a project that uses image processing techniques to identify different objects and their lengths. I have gone through many examples in JavaCV as well as OpenCV, but unfortunately I was unable to identify a T-shaped polygon.

I tried to use the following rectangle-identification method, but it failed:

public static CvSeq findSquares(final IplImage src, CvMemStorage storage) {

    CvSeq squares = new CvContour();
    squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);

    IplImage pyr = null, timg = null, gray = null, tgray;
    timg = cvCloneImage(src);

    CvSize sz = cvSize(src.width() & -2, src.height() & -2);
    tgray = cvCreateImage(sz, src.depth(), 1);
    gray = cvCreateImage(sz, src.depth(), 1);
    pyr = cvCreateImage(cvSize(sz.width() / 2, sz.height() / 2), src.depth(), src.nChannels());

    // down-scale and upscale the image to filter out the noise
    cvPyrDown(timg, pyr, CV_GAUSSIAN_5x5);
    cvPyrUp(pyr, timg, CV_GAUSSIAN_5x5);
    cvSaveImage("ha.jpg", timg);

    CvSeq contours = new CvContour();
    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++) {
        IplImage channels[] = {cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1)};
        channels[c] = cvCreateImage(sz, 8, 1);
        if (src.nChannels() > 1) {
            cvSplit(timg, channels[0], channels[1], channels[2], null);
        } else {
            tgray = cvCloneImage(timg);
        }
        tgray = channels[c];

        // try several threshold levels
        for (int l = 0; l < N; l++) {
            if (l == 0) {
                // hack: use Canny instead of zero threshold level.
                // Canny helps to catch squares with gradient shading.
                // Take the upper threshold from the slider and set the
                // lower to 0 (which forces edge merging).
                cvCanny(tgray, gray, 0, thresh, 5);
                // dilate Canny output to remove potential holes between edge segments
                cvDilate(gray, gray, null, 1);
            } else {
                // apply threshold if l != 0
                cvThreshold(tgray, gray, (l + 1) * 255 / N, 255, CV_THRESH_BINARY);
            }

            // find contours and store them all as a list
            cvFindContours(gray, storage, contours, sizeof(CvContour.class), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

            CvSeq approx;
            // test each contour
            while (contours != null && !contours.isNull()) {
                if (contours.elem_size() > 0) {
                    approx = cvApproxPoly(contours, Loader.sizeof(CvContour.class), storage,
                            CV_POLY_APPROX_DP, cvContourPerimeter(contours) * 0.02, 0);
                    if (approx.total() == 4
                            && Math.abs(cvContourArea(approx, CV_WHOLE_SEQ, 0)) > 1000
                            && cvCheckContourConvexity(approx) != 0) {
                        double maxCosine = 0;
                        for (int j = 2; j < 5; j++) {
                            // find the maximum cosine of the angle between joint edges
                            double cosine = Math.abs(angle(
                                    new CvPoint(cvGetSeqElem(approx, j % 4)),
                                    new CvPoint(cvGetSeqElem(approx, j - 2)),
                                    new CvPoint(cvGetSeqElem(approx, j - 1))));
                            maxCosine = Math.max(maxCosine, cosine);
                        }
                        if (maxCosine < 0.2) {
                            CvRect x = cvBoundingRect(approx, l);
                            if ((x.width() * x.height()) < 5000) {
                                System.out.println("Width : " + x.width() + " Height : " + x.height());
                                cvSeqPush(squares, approx);
                                // System.out.println(x);
                            }
                        }
                    }
                }
                contours = contours.h_next();
            }
            contours = new CvContour();
        }
    }
    return squares;
}

Can someone please help me modify this method to identify T shapes in an image? The input image is like this:

[image]

This is the T shape that I have to identify:

[image]
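
A sketch of one possible modification (my guess, building on the method above): the square test keeps convex 4-vertex polygons, whereas a block-letter T approximates to 8 vertices and is not convex, so the corresponding test would be roughly:

approx = cvApproxPoly(contours, Loader.sizeof(CvContour.class), storage,
        CV_POLY_APPROX_DP, cvContourPerimeter(contours) * 0.02, 0);
// a T outline has 8 corners and fails the convexity check that squares pass
if (approx.total() == 8
        && Math.abs(cvContourArea(approx, CV_WHOLE_SEQ, 0)) > 1000
        && cvCheckContourConvexity(approx) == 0) {
    cvSeqPush(shapes, approx); // candidate T shape; "shapes" replaces "squares"
}

The approximation tolerance (the 0.02 * perimeter factor) may need tuning so that hand-drawn corners neither merge nor split.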


Source: (StackOverflow)

Detect orientation of a recorded video in Android

I want to make my own custom media player, and it requires the orientation info of the video (to detect whether it was recorded with the front or back camera). For JPEG images I can use ExifInterface.TAG_ORIENTATION, but how can I find this information for video?

I tried grabbing a frame from the video file and converting it to JPEG, but it always reports orientation 0 in all cases.

Please help me. Thanks in advance.
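
One route that may help (a sketch, assuming API level 17+, where this key exists): MediaMetadataRetriever reads the rotation hint stored in the MP4 container, which is where recording apps put the camera orientation. Grabbing and re-encoding a single frame discards that container-level hint, which would explain the constant 0 above.

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(videoPath); // videoPath: path to the recorded file (assumed)
String rotation = retriever.extractMetadata(
        MediaMetadataRetriever.METADATA_KEY_VIDEO_ROTATION); // e.g. "0", "90", "270"
retriever.release();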


Source: (StackOverflow)

Unstable face recognition using OpenCV

I'm developing an Android application for face recognition, using JavaCV, which is an unofficial wrapper of OpenCV. After importing com.googlecode.javacv.cpp.opencv_contrib.FaceRecognizer, I apply and test the following well-known methods:

  • LBPH using createLBPHFaceRecognizer() method
  • FisherFace using createFisherFaceRecognizer() method
  • EigenFace using createEigenFaceRecognizer() method

Before recognizing the detected face, I correct the rotated face and crop the proper zone, taking inspiration from this method.

In general, when a face that already exists in the database appears in front of the camera, the recognition is OK. But it is not always correct. Sometimes it recognizes an unknown face (one not found in the database of trained samples) with high confidence. And when the DB contains two or more faces with similar features (beard, mustache, glasses...), the recognition is often mistaken between those faces!

To predict the result using the test face image, I apply the following code:

public String predict(Mat m) {
    int n[] = new int[1];
    double p[] = new double[1];
    IplImage ipl = MatToIplImage(m, WIDTH, HEIGHT);

    faceRecognizer.predict(ipl, n, p);

    if (n[0] != -1)
        mProb = (int) p[0];
    else
        mProb = -1;

    if (n[0] != -1)
        return labelsFile.get(n[0]);
    else
        return "Unkown";
}

I can’t control the threshold of the probability p, because:

  • A small p < 50 can give a correct result.
  • A high p > 70 can give a false result.
  • A middle p can give either a correct or a false result.

Also, I don't understand why the predict() function sometimes gives a probability greater than 100 when using LBPH, and very big values (>2000) with Fisher and Eigen. Can someone help in finding a solution to these bizarre problems? Are there any suggestions for improving the robustness of the recognition, especially in the case of two different but similar-looking faces?
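
One detail that may explain the numbers (an observation about this 2.4-era contrib API, not a guaranteed fix): the value returned through p is a distance, not a probability, so it has no 0-100 scale; LBPH returns a histogram distance while Eigen/Fisher return distances in the projected subspace, which is why the latter run into the thousands. The last argument of createLBPHFaceRecognizer, already set to 200 in the class below, is a cutoff on that distance:

// radius, neighbors, gridX, gridY, threshold: predict() reports label -1
// whenever the best match's distance exceeds the threshold, so lowering it
// makes the recognizer reject weak matches instead of guessing (sketch value)
faceRecognizer = createLBPHFaceRecognizer(2, 8, 8, 8, 80.0);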

The following is the entire class using FaceRecognizer:

package org.opencv.javacv.facerecognition;

import static  com.googlecode.javacv.cpp.opencv_highgui.*;
import static  com.googlecode.javacv.cpp.opencv_core.*;

import static  com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_contrib.*;

import java.io.File;
import java.io.FileOutputStream;
import java.io.FilenameFilter;
import java.util.ArrayList;

import org.opencv.android.Utils;
import org.opencv.core.Mat;

import com.googlecode.javacv.cpp.opencv_imgproc;
import com.googlecode.javacv.cpp.opencv_contrib.FaceRecognizer;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import com.googlecode.javacv.cpp.opencv_core.MatVector;

import android.graphics.Bitmap;
import android.os.Environment;
import android.util.Log;
import android.widget.Toast;

public class PersonRecognizer {

    public final static int MAXIMG = 100;
    FaceRecognizer faceRecognizer;
    String mPath;
    int count=0;
    labels labelsFile;

    static final int WIDTH = 128;
    static final int HEIGHT = 128;
    private int mProb = 999;


    PersonRecognizer(String path) {
        faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(2, 8, 8, 8, 200);
        // path = Environment.getExternalStorageDirectory() + "/facerecog/faces/";
        mPath = path;
        labelsFile = new labels(mPath);
    }

    void changeRecognizer(int nRec)
    {
        switch(nRec) {
        case 0: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(1,8,8,8,100);
                break;
        case 1: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createFisherFaceRecognizer();
                break;
        case 2: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createEigenFaceRecognizer();
                break;
        }
        train();

    }

    void add(Mat m, String description) {
        Bitmap bmp= Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);

        Utils.matToBitmap(m,bmp);
        bmp= Bitmap.createScaledBitmap(bmp, WIDTH, HEIGHT, false);

        FileOutputStream f;
        try {
            f = new FileOutputStream(mPath+description+"-"+count+".jpg",true);
            count++;
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, f);
            f.close();

        } catch (Exception e) {
            Log.e("error",e.getCause()+" "+e.getMessage());
            e.printStackTrace();

        }
    }

    public boolean train() {

        File root = new File(mPath);
        Log.i("mPath",mPath);
        FilenameFilter pngFilter = new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.toLowerCase().endsWith(".jpg");
            }
        };

        File[] imageFiles = root.listFiles(pngFilter);

        MatVector images = new MatVector(imageFiles.length);

        int[] labels = new int[imageFiles.length];

        int counter = 0;
        int label;

        IplImage img=null;
        IplImage grayImg;

        int i1=mPath.length();


        for (File image : imageFiles) {
            String p = image.getAbsolutePath();
            img = cvLoadImage(p);

            if (img==null)
                Log.e("Error","Error cVLoadImage");
            Log.i("image",p);

            int i2=p.lastIndexOf("-");
            int i3=p.lastIndexOf(".");
            int icount=Integer.parseInt(p.substring(i2+1,i3)); 
            if (count<icount) count++;

            String description=p.substring(i1,i2);

            if (labelsFile.get(description)<0)
                labelsFile.add(description, labelsFile.max()+1);

            label = labelsFile.get(description);

            grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);

            cvCvtColor(img, grayImg, CV_BGR2GRAY);

            images.put(counter, grayImg);

            labels[counter] = label;

            counter++;
        }
        if (counter>0)
            if (labelsFile.max()>1)
                faceRecognizer.train(images, labels);
        labelsFile.Save();
    return true;
    }

    public boolean canPredict()
    {
        if (labelsFile.max()>1)
            return true;
        else
            return false;

    }

    public String predict(Mat m) {
        if (!canPredict())
            return "";
        int n[] = new int[1];
        double p[] = new double[1];
        IplImage ipl = MatToIplImage(m,WIDTH, HEIGHT);
//      IplImage ipl = MatToIplImage(m,-1, -1);

        faceRecognizer.predict(ipl, n, p);

        if (n[0]!=-1)
         mProb=(int)p[0];
        else
            mProb=-1;
    //  if ((n[0] != -1)&&(p[0]<95))
        if (n[0] != -1)
            return labelsFile.get(n[0]);
        else
            return "Unkown";
    }




    IplImage MatToIplImage(Mat m, int width, int height) {
        Bitmap bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(m, bmp);
        return BitmapToIplImage(bmp, width, height);
    }

    IplImage BitmapToIplImage(Bitmap bmp, int width, int height) {

        if ((width != -1) || (height != -1)) {
            Bitmap bmp2 = Bitmap.createScaledBitmap(bmp, width, height, false);
            bmp = bmp2;
        }

        IplImage image = IplImage.create(bmp.getWidth(), bmp.getHeight(),
                IPL_DEPTH_8U, 4);

        bmp.copyPixelsToBuffer(image.getByteBuffer());

        IplImage grayImg = IplImage.create(image.width(), image.height(),
                IPL_DEPTH_8U, 1);

        cvCvtColor(image, grayImg, opencv_imgproc.CV_BGR2GRAY);

        return grayImg;
    }



    protected void SaveBmp(Bitmap bmp,String path)
      {
            FileOutputStream file;
            try {
                file = new FileOutputStream(path , true);

            bmp.compress(Bitmap.CompressFormat.JPEG,100,file);  
            file.close();
            }
            catch (Exception e) {
                // TODO Auto-generated catch block
                Log.e("",e.getMessage()+e.getCause());
                e.printStackTrace();
            }

      }


    public void load() {
        train();

    }

    public int getProb() {
        // TODO Auto-generated method stub
        return mProb;
    }


}

Source: (StackOverflow)

Why doesn't the cvFindContours() method detect contours correctly in JavaCV?

I went through many questions on StackOverflow and was able to develop a small program that detects squares and rectangles correctly. This is my sample code:

public static CvSeq findSquares(final IplImage src, CvMemStorage storage) {
    CvSeq squares = new CvContour();
    squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);
    IplImage pyr = null, timg = null, gray = null, tgray;
    timg = cvCloneImage(src);
    CvSize sz = cvSize(src.width(), src.height());
    tgray = cvCreateImage(sz, src.depth(), 1);
    gray = cvCreateImage(sz, src.depth(), 1);
    // cvCvtColor(gray, src, 1);
    pyr = cvCreateImage(cvSize(sz.width() / 2, sz.height() / 2), src.depth(), src.nChannels());
    // down-scale and upscale the image to filter out the noise
    // cvPyrDown(timg, pyr, CV_GAUSSIAN_5x5);
    // cvPyrUp(pyr, timg, CV_GAUSSIAN_5x5);
    // cvSaveImage("ha.jpg",timg);
    CvSeq contours = new CvContour();
    // request closing of the application when the image window is closed
    // show image on window
    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++) {
        IplImage channels[] = { cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1), cvCreateImage(sz, 8, 1) };
        channels[c] = cvCreateImage(sz, 8, 1);
        if (src.nChannels() > 1) {
            cvSplit(timg, channels[0], channels[1], channels[2], null);
        } else {
            tgray = cvCloneImage(timg);
        }
        tgray = channels[c];
        // // try several threshold levels
        for (int l = 0; l < N; l++) {
            // hack: use Canny instead of zero threshold level.
            // Canny helps to catch squares with gradient shading
            if (l == 0) {
                // apply Canny. Take the upper threshold from slider
                // and set the lower to 0 (which forces edges merging)
                cvCanny(tgray, gray, 0, thresh, 5);
                // dilate canny output to remove potential
                // // holes between edge segments
                cvDilate(gray, gray, null, 1);
            } else {
                // apply threshold if l!=0:
                cvThreshold(tgray, gray, (l + 1) * 255 / N, 255,
                        CV_THRESH_BINARY);
            }
            // find contours and store them all as a list
            cvFindContours(gray, storage, contours, sizeof(CvContour.class), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            CvSeq approx;
            // test each contour
            while (contours != null && !contours.isNull()) {
                if (contours.elem_size() > 0) {
                    approx = cvApproxPoly(contours, Loader.sizeof(CvContour.class), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contours) * 0.02, 0);
                    if (approx.total() == 4 && Math.abs(cvContourArea(approx, CV_WHOLE_SEQ, 0)) > 1000 && cvCheckContourConvexity(approx) != 0) {
                        double maxCosine = 0;
                        for (int j = 2; j < 5; j++) {
                            // find the maximum cosine of the angle between
                            // joint edges
                            double cosine = Math.abs(angle(
                                            new CvPoint(cvGetSeqElem(
                                                    approx, j % 4)),
                                            new CvPoint(cvGetSeqElem(
                                                    approx, j - 2)),
                                            new CvPoint(cvGetSeqElem(
                                                    approx, j - 1))));
                            maxCosine = Math.max(maxCosine, cosine);
                        }
                        if (maxCosine < 0.2) {
                            CvRect x = cvBoundingRect(approx, l);
                            if ((x.width() * x.height()) < 50000) {
                                System.out.println("Width : " + x.width()
                                        + " Height : " + x.height());
                                cvSeqPush(squares, approx);
                            }
                        }
                    }
                }
                contours = contours.h_next();
            }
            contours = new CvContour();
        }
    }
    return squares;
}

I use this image to detect rectangles and squares:

[image]

I need to produce the following output:

[image]

and

[image]

But when I run the above code, it detects only the following rectangles, and I don't know the reason for that. Can someone please explain why?

This is the output that I got.

[image]

Please be kind enough to explain the problem in the above code and give some suggestions for detecting these squares and rectangles.
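
Two things stand out in the code above (observations, not a certain fix). First, the pyrDown/pyrUp smoothing from the original squares sample is commented out, so Canny works on unfiltered noise and can fail to close edges around some rectangles; re-enabling it is worth a try:

cvPyrDown(timg, pyr, CV_GAUSSIAN_5x5); // halve the image, then...
cvPyrUp(pyr, timg, CV_GAUSSIAN_5x5);   // ...restore its size, filtering out noise

Second, the filter if ((x.width() * x.height()) < 50000) silently discards every candidate whose bounding box is 50000 pixels or larger, which by itself can explain why only the smaller rectangles are reported.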


Source: (StackOverflow)

Filling holes inside a binary object

I have a problem filling the white holes inside black coin objects so that I end up with a 0-255 binary image in which the coins are solid. I have used a median filter to accomplish it, but in that case the bridges connecting the coins grow, and it becomes impossible to recognize them after several rounds of erosion... So I need a simple floodFill-like method in OpenCV.

Here is my image with holes:

[image]

EDIT: the floodFill-like function must fill holes in big components without requiring X,Y coordinates as a seed...

EDIT: I tried to use the cvDrawContours function, but it doesn't fill contours that lie inside bigger ones.

Here is my code:

        CvMemStorage mem = cvCreateMemStorage(0);
        CvSeq contours = new CvSeq();
        CvSeq ptr = new CvSeq();
        int sizeofCvContour = Loader.sizeof(CvContour.class);

        cvThreshold(gray, gray, 150, 255, CV_THRESH_BINARY_INV);

        int numOfContours = cvFindContours(gray, mem, contours, sizeofCvContour, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
        System.out.println("The num of contours: "+numOfContours); //prints 87, ok

        Random rand = new Random();
        for (ptr = contours; ptr != null; ptr = ptr.h_next()) {
            Color randomColor = new Color(rand.nextFloat(), rand.nextFloat(), rand.nextFloat());
            CvScalar color = CV_RGB( randomColor.getRed(), randomColor.getGreen(), randomColor.getBlue());
            cvDrawContours(gray, ptr, color, color, -1, CV_FILLED, 8);
        }
        CanvasFrame canvas6  = new CanvasFrame("drawContours");
        canvas6.showImage(gray);

Result: (you can see black holes inside each coin)

[image]
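
A minimal sketch of the border flood-fill trick (my variable names; it assumes white coins with black holes on a black background, as produced by the inverted threshold above). Seeding at the corner (0,0) means no per-coin coordinates are needed:

IplImage filled = cvCloneImage(gray);                 // gray: the binary image
cvFloodFill(filled, cvPoint(0, 0), cvScalarAll(255)); // flood the outer background white
cvNot(filled, filled);                                // now only the holes remain white
cvOr(gray, filled, gray, null);                       // OR the holes back into the coins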


Source: (StackOverflow)

Learning JavaCV in pure Java

I am trying to learn JavaCV. As you all know, the lack of educational material on this subject is a very big problem. On the JavaCV home page, they have provided a lot of ports of the C++ examples from the book "OpenCV CookBook". But the thing is, they are not in Java, they are in Scala!!! Now I have already gone crazy! I know there are a lot of examples on the web, but I want to learn it from beginner to advanced level; only then can I do it properly. "OpenCV CookBook" is a very good book, but it is all about OpenCV in C++, with nothing about Java.

Please help me find a better place to learn JavaCV. Point me to whatever you have: a URL, a book, etc. But it must be about learning JavaCV in 100% Java, not Scala, C++, C or any other language! Please help!


Source: (StackOverflow)

JavaCV FFmpegFrameRecorder properties explanation needed

I'm using the JavaCV FFmpegFrameRecorder class to get the video input from my webcam and record it into a video file. The problem is that I'm building my application from a few different demo source codes I found, and I use properties, some of which are not completely clear to me.

First, here is my code snippet:

FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(FILENAME,
        grabber.getImageWidth(), grabber.getImageHeight());

recorder.setVideoCodec(13);
recorder.setFormat("mp4");
recorder.setPixelFormat(avutil.PIX_FMT_YUV420P);
recorder.setFrameRate(30);
recorder.setVideoBitrate(10 * 1024 * 1024);

recorder.start();
  • setVideoCodec(13) - What is the meaning of this 13? How can I tell which actual codec stands behind a given number? (See the sketch after this list.)
  • setPixelFormat - I just don't get this one; I don't know what it does in general.
  • setFrameRate(30) - I think this should be pretty clear, but still, what is the logic behind choosing a frame rate (isn't higher better)?
  • setVideoBitrate(10 * 1024 * 1024) - Again, I have almost no idea what this does, and what is the logic behind the numbers?
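
On the first bullet: the integer is a raw value from FFmpeg's codec-ID enum, so the readable form is the named constant (a sketch; in the FFmpeg builds bundled with JavaCV of that era, 13 corresponds to MPEG-4, but verify against the constants shipped with your version). The pixel format names the byte layout handed to the encoder; planar YUV 4:2:0 is what most MPEG-4/H.264 encoders expect.

recorder.setVideoCodec(avcodec.AV_CODEC_ID_MPEG4); // same value as 13, self-documenting
recorder.setPixelFormat(avutil.PIX_FMT_YUV420P);   // chroma-subsampled planar YUV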

Finally, I want to mention one last problem I get when recording video like this. Say the actual length of the video is 20 seconds: when I play the file my program created, it runs significantly faster. I can't tell if it's exactly 2x faster than it should be, but in general, if I record a 20-second video, it plays back in about 10 seconds. What may cause this and how can I fix it?
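
On the fast-playback problem, a common cause is that frames are written in a tight loop carrying no timestamps, so the file plays at the nominal frame rate rather than at the rate the frames were actually captured. A hedged sketch (the loop flag and grabber are placeholders) that stamps each frame with wall-clock time:

long startTime = System.currentTimeMillis();
while (recording) {                        // hypothetical capture loop
    IplImage frame = grabber.grab();
    long t = 1000 * (System.currentTimeMillis() - startTime); // microseconds
    if (t > recorder.getTimestamp()) {
        recorder.setTimestamp(t);          // keep the recorder's clock on real time
    }
    recorder.record(frame);
}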


Source: (StackOverflow)