webcam-capture
The goal of the project is to give users the ability to access built-in webcams, webcams connected via USB, and remote IP / network cameras directly from Java code. Using the provided libraries, a user can read camera images and detect motion. The main project consists of several sub-projects - the root one, which contains the required classes, a built-in webcam driver compa…
Webcam Capture in Java
I am working on a C# program that can start streaming the webcam, then close it and capture a still image when closing.
The program works as expected on my development machine, but when I run it on another machine it doesn't work and gives me an unhandled exception: AForge.Video.DirectShow error.
I have added references to AForge.Video.dll and AForge.Video.DirectShow.dll.
Here is the exe file and the code of my project:
sendspace.com/file/4okqsi
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
//Create using directives for easier access of AForge library's methods
using AForge.Video;
using AForge.Video.DirectShow;

namespace aforgeWebcamTutorial
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        //Create webcam object
        VideoCaptureDevice videoSource;

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        void videoSource_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
        {
            //Cast the frame as Bitmap object and don't forget to use ".Clone()" otherwise
            //you'll probably get access violation exceptions
            pictureBoxVideo.BackgroundImage = (Bitmap)eventArgs.Frame.Clone();
        }

        private void Form1_FormClosed(object sender, FormClosedEventArgs e)
        {
            //Stop and free the webcam object if application is closing
            if (videoSource != null && videoSource.IsRunning)
            {
                videoSource.SignalToStop();
                videoSource = null;
            }
        }

        private void button1_Click(object sender, EventArgs e)
        {
            try
            {
                //Guard against a click before the camera was started
                if (videoSource != null && videoSource.IsRunning)
                {
                    videoSource.Stop();
                    pictureBoxVideo.BackgroundImage.Save("abc.png");
                    pictureBoxVideo.BackgroundImage = null;
                }
            }
            catch (Exception er) { }
        }

        private void button2_Click(object sender, EventArgs e)
        {
            try
            {
                //List all available video sources. (That can be webcams as well as tv cards, etc.)
                FilterInfoCollection videosources = new FilterInfoCollection(FilterCategory.VideoInputDevice);
                //Check if at least one video source is available
                if (videosources != null)
                {
                    //For example use first video device. You may check if this is your webcam.
                    videoSource = new VideoCaptureDevice(videosources[0].MonikerString);
                    try
                    {
                        //Check if the video device provides a list of supported resolutions
                        if (videoSource.VideoCapabilities.Length > 0)
                        {
                            string highestSolution = "0;0";
                            //Search for the highest resolution
                            for (int i = 0; i < videoSource.VideoCapabilities.Length; i++)
                            {
                                if (videoSource.VideoCapabilities[i].FrameSize.Width > Convert.ToInt32(highestSolution.Split(';')[0]))
                                    highestSolution = videoSource.VideoCapabilities[i].FrameSize.Width.ToString() + ";" + i.ToString();
                            }
                            //Set the highest resolution as active
                            videoSource.VideoResolution = videoSource.VideoCapabilities[Convert.ToInt32(highestSolution.Split(';')[1])];
                        }
                    }
                    catch { }
                    //Create NewFrame event handler
                    //(This one triggers every time a new frame/image is captured)
                    videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(videoSource_NewFrame);
                    //Start recording
                    videoSource.Start();
                }
            }
            catch (Exception er) { }
        }
    }
}
Source: (StackOverflow)
I am creating an app that captures the webcam at a certain point (when an event is triggered), i.e. taking a snapshot from the camera and encoding the snapshot to base64. But looking at the examples online, they first draw that snapshot to a canvas and then convert the canvas to base64. Is there a way to skip the "drawing to canvas" part?
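For reference: where the browser supports the ImageCapture API (availability varies), a still frame can be grabbed straight from the video track as a Blob, skipping the canvas entirely. A minimal sketch, assuming an already-opened MediaStream named stream:

// Grab a photo directly from the camera track; no canvas involved.
var track = stream.getVideoTracks()[0];
var imageCapture = new ImageCapture(track);
imageCapture.takePhoto().then(function (blob) {
    var reader = new FileReader();
    reader.onloadend = function () {
        console.log(reader.result); // "data:image/jpeg;base64,..."
    };
    reader.readAsDataURL(blob); // FileReader does the base64 encoding
});

Where ImageCapture is unavailable, the canvas detour remains the portable fallback.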
Source: (StackOverflow)
Situation of the problem:
The Arduino measures the length of an object. If the length falls within a pre-determined interval, the Arduino writes a '1' to a particular COM port over serial. MATLAB reads from the same COM port, so we can pick up the '1' inside MATLAB. For each '1' read by MATLAB, a photo is taken by a webcam. The following while-loop gives us the opportunity to read the '1' inside MATLAB.
clear all
clc
arduino = serial('/dev/tty.usbmodem1411','BaudRate',9600);
fopen(arduino);
Sensor = true
cam = webcam(2);
while (Sensor)
    A = fscanf(arduino,'%d')
    if A == 1
        img = snapshot(cam);
        imshow(img);
    end
end
fclose(arduino);
But the webcam doesn't take the picture we want.
We have the following problem:
The first time a '1' is read by MATLAB, no photo is taken. The second time, a photo is taken. At the third '1', the previous photo changes a bit, but MATLAB does not show the photo taken at the third '1'. Then, at the fourth '1', MATLAB shows the photo that was taken at the third.
Does anybody know how I can fix this?
Source: (StackOverflow)
Whenever I use my webcam application from within the main form, the program does not close: it still appears in Task Manager, and NetBeans (bottom right) says it is still running even after I click STOP or close my form. This only occurs when I open my webcam snapshot application from my main form's button.
I've read so many answers, but none of them gives me the solution.
My guesses about the problem are the following:
- It is still running because the JVM will not terminate unless all non-daemon threads (my webcam thread?) have finished.
- The OpenCV libraries' fault?
Please refer to my first Stack Overflow question link: MY PROBLEM.java
Thank you,
Source: (StackOverflow)
I am new to using DLLs in C++ and am trying to load a .dll file in my code. The DLL is the "Extremely Simple Capture API", escapi.dll. The site I got the DLL from did not include an import library (.lib) with it, and since I don't know how to load a DLL even with a library file, trying to do it without one is doubly hard. I just want to take a snapshot with the computer's webcam and display the image on the screen.
The functions I use from the .dll to do this are:
int setupESCAPI(int height, int width);
int initCapture(SimpleCapParams *capture);
void doCapture();
void isCaptureDone();
void deinitCapture();
If anyone can give me simple instructions on how to load this DLL without a .lib file, I would appreciate it. Thanks.
Dan
Source: (StackOverflow)
I have performance issues with capturing in DirectShow.NET. Resolutions above 920x720 result in stutter on my i5 dual core, while the Logitech software records smoothly at higher resolutions.
I use DirectShow.NET to capture a webcam and mux the video with audio input in an AVI muxer. A File Writer writes the capture to disk.
[Webcam (Logitech 920c)-> M-JPEG Compressor] + Microphone ->
-> Avi-Muxer -> File Writer
Source: (StackOverflow)
I created a UVC-based application to connect a USB external webcam [Logitech C170] to an Android device. I followed the code from this link. After building the project, I ran the native NDK build and copied the libs folder into my directory.
The program built and ran successfully, but it does not show USB connectivity with my tablet.
In the device_filter.xml file I also included the product ID and vendor ID of my webcam (vendor ID: 046D, product ID: 082B).
How can I connect my webcam to the Android device? Please guide me!
Thanks in advance!
Source: (StackOverflow)
The following pipeline fails. How can I debug this? What is going wrong?
gst-launch-1.0 -v uvch264src device=/dev/video0 name=src auto-start=true src.vidsrc ! queue ! video/x-h264 ! h264parse ! avdec_h264 ! xvimagesink sync=false
Setting pipeline to PAUSED ...
/GstPipeline:pipeline0/GstUvcH264Src:src/GstV4l2Src:v4l2src0: num-buffers = -1
/GstPipeline:pipeline0/GstUvcH264Src:src/GstV4l2Src:v4l2src0: device = /dev/video0
/GstPipeline:pipeline0/GstUvcH264Src:src/GstV4l2Src:v4l2src0: num-buffers = -1
/GstPipeline:pipeline0/GstUvcH264Src:src/GstV4l2Src:v4l2src0: device = /dev/video0
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstUvcH264Src:src/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, format=(string)YUY2, width=(int)2304, height=(int)1536, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, framerate=(fraction)2/1
/GstPipeline:pipeline0/GstUvcH264Src:src.GstGhostPad:vfsrc: caps = video/x-raw, format=(string)YUY2, width=(int)2304, height=(int)1536, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, framerate=(fraction)2/1
/GstPipeline:pipeline0/GstUvcH264Src:src.GstGhostPad:vfsrc.GstProxyPad:proxypad0: caps = video/x-raw, format=(string)YUY2, width=(int)2304, height=(int)1536, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, framerate=(fraction)2/1
ERROR: from element /GstPipeline:pipeline0/GstUvcH264Src:src/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstUvcH264Src:src/GstV4l2Src:v4l2src0:
streaming task paused, reason not-linked (-1)
Execution ended after 0:00:02.891955232
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
But vfsrc is working fine:
gst-launch-1.0 -v -e uvch264src device=/dev/video0 name=src auto-start=true src.vfsrc ! queue ! video/x-raw,format=(string)YUY2,width=320,height=240,framerate=10/1 ! textoverlay text="Capture from vfsrc 79879 " font-desc="Sans 24" ! xvimagesink sync=false
Thanks,
Sneha
Source: (StackOverflow)
I'm trying to take a snapshot through my webcam.
This is my code:
using System;
using System.Text;
using System.Drawing;
using System.Threading;
using AForge.Video.DirectShow;
using AForge.Video;

namespace WebCamShot
{
    class Program
    {
        static FilterInfoCollection WebcamColl;
        static VideoCaptureDevice Device;

        static void Main(string[] args)
        {
            WebcamColl = new FilterInfoCollection(FilterCategory.VideoInputDevice);
            Console.WriteLine("Press Any Key To Capture Photo !");
            Console.ReadKey();
            Device = new VideoCaptureDevice(WebcamColl[0].MonikerString);
            Device.NewFrame += Device_NewFrame;
            Device.Start();
            Console.ReadLine();
        }

        static void Device_NewFrame(object sender, NewFrameEventArgs e)
        {
            Bitmap Bmp = (Bitmap)e.Frame.Clone();
            Bmp.Save("D:\\Foo\\Bar.png");
            Console.WriteLine("Snapshot Saved.");
            /*
            Console.WriteLine("Stopping ...");
            Device.SignalToStop();
            Console.WriteLine("Stopped .");
            */
        }
    }
}
It works well, but now I want to use my code to take a snapshot every minute.
For this reason, I added this line of code: Thread.Sleep(1000 * 60); // 1000 milliseconds (1 second) * 60 == one minute.
Unfortunately, this line doesn't give me the wanted result - frames are still captured as before, and only the saved photo in the file is updated every minute. What I actually want is for my code to trigger the Device_NewFrame event only once every minute.
How can I do that? I would be glad to get some help. Thank you!
EDIT: As Armen Aghajanyan suggested, I added a timer to my code.
The timer initializes the Device object every minute, subscribes the new Device object to the Device_NewFrame event, and starts the device.
After that, I uncommented this code in the event's body:
Console.WriteLine("Stopping ...");
Device.SignalToStop();
Console.WriteLine("Stopped .");
Now the code takes a snapshot every minute.
Source: (StackOverflow)
I wrote a small JS helper to capture still images from the webcam on a jQuery Mobile website.
The code works perfectly on desktops, yet whenever I test it on mobile phones (Android and iOS) the video plays and then stops on the first frame!
JS
window.shutter = document.createElement('audio');
window.shutter.volume = 1;
// var v = new uploadZone($('<img data-src="http://link-to-upload/" data-multiple="true"/>'));
// v.load();
uploadZone = function(element){
var object = this,
mobileInput = $('<input type="file" accept="image/*" multiple="multiple" />'),
errBack = function(e){console.log('error',e);},
localstream,
canvas = document.createElement('canvas'),
thumb = document.createElement('canvas'),
ctx = canvas.getContext("2d"),
ctxsmall = thumb.getContext("2d"),
videoObj = {"video": true},
video = document.createElement('video'),
uz = $('<div id="uploadzone-container">'),
snap = $('<a href="#" class="uz-snap"><i class="fa fa-camera"></i></a>'),
confirm = $('<a href="#" class="uz-confirm"><i class="fa fa-check"></i></a>'),
clear = $('<a href="#" class="uz-clear"><i class="fa fa-close"></i></a>'),
collection = $('<form id="uploadzone-collection" class="scrollY"></form>'),
backdrop = $('<div class="modal-backdrop fade in uploadZone"></div>'),
choice = $('<div class="uz-choice"><h2>Upload picture</h2></div>'),
webcam = $('<a class="btn btn-success">Webcam</a>'),
files = $('<a class="btn btn-primary">Gallery</a>'),
uzVid = $('<div id="uz-video-container"></div>'),
upload = $('<a class="btn btn-warning" href="#"><i class="fa fa-upload"></i> upload </a>'),
li = '<div><img width="80px" height="80px" src=""/><input name="title" type="text" /><input type="hidden"/> </div>';
this.uploadObject = {
img:[],//array of [title]=data
cover: element.attr('data-cover'),//BOOL is this an album cover or not
gallery: element.attr('data-snap-picture'), //album name
date: element.attr('data-date') //when was this picture taken ?
};
canvas.width = 600;
canvas.height = 500;
thumb.width = 80;
thumb.height = 80;
if(element.is('[data-multiple]')){
collection.append(upload.hide());
}else{
mobileInput.removeAttr('multiple');
}
//$.post(url, $('#uploadzone-collection').serialize()).done(function(o) {
backdrop.click(function(){$(this).remove();uz.remove();object.stop()});
webcam.click(function(e){object.webcamStart();});
files.click(function(e){
mobileInput.trigger('click');
uz.append(collection);
});
mobileInput.change(function(evt){
console.log('changed');
// console.log(new FormData( this ));
var files = evt.target.files; // FileList object
// Loop through the FileList and render image files as thumbnails.
for (var i = 0, f; f = files[i]; i++) {
// Only process image files.
if (!f.type.match('image.*')) {continue;}
var reader = new FileReader();
// Closure to capture the file information.
reader.onload = (function(theFile) {
return function(e) {
if(element.is('[data-multiple]')){
var newLi = $(li);
upload.show();
collection.append(newLi);
var title = escape(theFile.name);
newLi.find('img')[0].src = e.target.result;
object.uploadObject['img'].push({'title':title,'data':e.target.result});
}else{
element[0].src = e.target.result;
backdrop.trigger('click');
object.uploadObject['img'].push({'title':element.attr('data-snap-picture'),'data':e.target.result});
object.uploadAll();
}
};
})(f);
// Read in the image file as a data URL.
reader.readAsDataURL(f);
}
if(collection.find('img').length > 0)upload.show(); else upload.hide();
});
clear.click(function(){
snap.show();
confirm.hide();
clear.hide();
video.play();
});
snap.click(function(){
window.shutter.play();video.pause();
snap.hide();
confirm.show();
clear.show();
});
confirm.click(function(){
if(element.is('[data-multiple]')){
var newLi = $(li);
upload.show();
collection.append(newLi);
var title = prompt("Picture title", "Paper "+newLi.index());
if (title != null) {
ctxsmall.drawImage(video, 0, 0, 80, 80);
ctx.drawImage(video, 0, 0, 600, 500);
newLi.find('img')[0].src = thumb.toDataURL();
newLi.find('input').val(title);
object.uploadObject['img'].push({'title':title,'data':canvas.toDataURL()});
clear.trigger('click');
}else{
newLi.remove();
}
}else{
ctx.drawImage(video, 0, 0, 600, 500);
var dataURL = canvas.toDataURL();
$(element)[0].src = dataURL;
object.uploadObject['img'].push({'title':element.attr('data-snap-picture'),'data':dataURL});
object.uploadAll();
backdrop.trigger('click');
}
});
upload.click(function(){
object.uploadAll();
backdrop.trigger('click');
});
this.uploadAll =function(){
var url = element.attr('data-src')+'/'+element.attr('data-snap-picture');
if(element.is('[data-cover]'))url= url+'/1';
if(object.uploadObject['img'].length < 1)return console.log('nothing to upload');
return $.post(url,object.uploadObject,function(){
alert('success');
}).fail(function(){alert('falied')}).then(function(){object.uploadObject['img']=[]});
}
this.load = function(){//with choice
if(!$(element).is('[data-src]'))return alert('bad attempt');
$('body').append(backdrop).append(uz.append(choice.append(webcam).append(files)));
}
this.stop = function(){
if (video.mozSrcObject) {
console.log('mox');
video.mozSrcObject.stop();
video.src = null;
}else{
video.src = "";
if(localstream)localstream.stop();
}
};
this.webcamStart = function(){
choice.slideUp()
object.start();
uz.append(uzVid.append(video)).append(collection);
uzVid.append(snap).append(confirm.hide());
}
this.start = function(){
if (navigator.webkitGetUserMedia) {// WebKit-prefixed
navigator.webkitGetUserMedia(videoObj, function(stream) {
video.src = window.webkitURL.createObjectURL(stream);
video.play();
localstream = stream;
}, errBack);
} else if (navigator.mozGetUserMedia) {// Firefox-prefixed
navigator.mozGetUserMedia(videoObj, function(stream) {
video.src = window.URL.createObjectURL(stream);
video.play();
localstream = stream;
}, errBack);
}else if (navigator.getUserMedia) {// Standard
navigator.getUserMedia(videoObj, function(stream) {
video.src = stream;
video.play();
localstream = stream;
}, errBack);
}
};
};
The code is a bit long, sorry. Now, I'm not sure whether the problem is in my this.start() function, or whether there is something else I'm not aware of when handling the webcam on mobile devices?
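As a side note, the prefixed getUserMedia variants and createObjectURL(stream) used above have since been deprecated, and mobile browsers are stricter about them. A minimal sketch of the standard promise-based API, reusing the video element, localstream and errBack from the code above (iOS additionally requires a user gesture before playback starts):

// Unprefixed getUserMedia; assigning srcObject replaces createObjectURL(stream).
video.setAttribute('playsinline', ''); // iOS Safari refuses inline playback otherwise
navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
        video.srcObject = stream;
        localstream = stream;
        return video.play();
    })
    .catch(errBack);

Stopping also changed: localstream.getTracks().forEach(function (t) { t.stop(); }) replaces the old stream.stop().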
Source: (StackOverflow)
In a Java program (using NetBeans), how can one save the image captured from a webcam with the lti-civil library, without using the Sun package com.sun.image.codec.jpeg?
When converting the project into a jar file, we faced this error:
error: package com.sun.image.codec.jpeg does not exist
import com.sun.image.codec.jpeg.JPEGCodec
How can I solve this problem, please? Thanks.
Source: (StackOverflow)
This is a direct follow-up to the last question I asked, aptly named "C++: OpenCV2.3.1(!) access to webcam parameters", where I was told to install OpenCV2.4.11 instead (OpenCV3.0 did not work)... which I did. And yes, most of this text is an exact copy & paste of the last thread, since my problem hasn't actually vanished...
Again, I've searched here and on other forums (Google, OpenCV, etc.), looked at the code of the videoInput library, the different header files, and especially OpenCV's highgui_c.h, and I still seem unable to find an answer to this very simple question:
How do I change exposure and gain (or, to be general, any webcam property) in my Logitech C310 webcam with OpenCV2.4.11 the same way I was able to with OpenCV2.1.0? (using Win7 64-bit, Visual Studio 10)
EDIT: This has been solved. I do not know how, but when I tested my code this morning it was able to report and set the exposure using VideoCapture and the set/get method.
There's the nice and easy VideoCapture get and set method, I know, similar to the videoInput's [Set/Get]VideoSetting[Camera/Filter] functions. Here's my short example in OpenCV2.4.11 that doesn't work:
EDIT: It does work now. What I don't understand is that the values of several properties are reported as -8.58993E+008 (namely hue, monochrome, gamma, temperature, zoom, focus, pan, tilt, roll and iris) and that property 6 (fourcc) is -4.66163E+008. I know I don't have these features on my webcam, but all other unimplemented features report -1.
int __stdcall WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, char* CmdArgs, int CmdShow) {
    int device0 = 0;
    VideoCapture VC(device0);
    if (!VC.isOpened()) // check if we succeeded
        return -1;
    ostringstream oss;
    double CamProp;
    for (int i = -4; i < 27; i++) {
        CamProp = VC.get(i);
        Sleep(5);
        oss << "Item " << i << ": " << CamProp << "\n";
    }
    MessageBox(NULL, oss.str().c_str(), "Webcam Values", MB_OK);
    return 0;
}
It compiles, it runs, it accesses the webcam just fine (and even shows a picture with imshow if I add that to the code), but it only opens a nice window saying this:
Item -4: 0
Item -3: 0
Item -2: 0
...
Item 2: 0
Item 3: 640
Item 4: 480
Item 5: 0
...
Item 25: 0
Item 26: 0
EDIT: See above, this works now. I get values for all supported parameters like exposure, gain, sharpness, brightness, contrast and so on. Perhaps I was still linking to the 2.3.1 libraries or whatever.
The point is: this was all perfectly settable with this camera under OpenCV 2.1.0 using videoInput. I had a running application doing its own lighting control instead of using the Logitech features (RightLight, Auto Exposure, Auto White Balance). Getting and setting the parameters has since been integrated into OpenCV's highgui, but with a strongly reduced feature list (no querying of parameter ranges - Min/Max/Step - and no setting of auto exposure, RightLight and the like), and for some reason it's incompatible with my Logitech webcam: I can report the resolution but nothing else.
EDIT: I still miss the Min, Max, Step, Auto/Manual features of videoInput. I can set a value but I don't know whether it's allowed.
The videoInput code is now merged into OpenCV in the file cap_dshow.cpp, but I can't find a header file that declares the videoInput class, and simply using my old code doesn't work. So I have a cpp file which contains all the functions I need, which I know did the job for me a while back, but which I can't access now. Any clues on how to do that? Has anyone accessed and changed camera parameters in OpenCV 2.4.11 using the videoInput/DirectShow interface?
EDIT: This seems to work now, unlike in 2.3.1; no direct interaction with videoInput appears to be needed. Still, it would be nice to have it for the aforementioned reasons.
There's also the funny problem that using e.g.
VideoCapture cam(0)
addresses exactly the same camera as
VideoCapture cam(1)
or
VideoCapture cam(any integer value)
which seems odd to me and hints in the same direction - that CV's VideoCapture does not work properly for me. A similar problem is described here but I also tried the code with a Sleep(1000) after opening the capture - without success.
EDIT: This is also working correctly now. I get my webcam with (0) and an error with (1), which is absolutely OK.
Source: (StackOverflow)
I wrote a callback function to capture a snapshot of a running video using the HTML5 video control and a canvas.
I used a for loop to iterate and call the same callback function to take a burst capture.
If I add alert('') in the callback, the video in the background re-renders while the alert message is displayed, and the burst snapshot works fine, taking different photos (frames/images of the running video). But when I remove the alert(''), the video does not advance in the background and the burst images are all the same instead of different.
The code:
for (var i = 0; i < burstcount; i++) {
    var wcam = Webcam;
    wcam.burst_snap(function (dataurl, id) {
        var arrayindex = passedName + "_" + id;
        imgid = imgid + i;
        alert(dataurl);
        burstcapturedata[arrayindex] = dataurl;
    }, i);
    var j = 0;
    while (j < 10000000000) {
        j++;
    }
}
DisplayBurstedImages();
}
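The busy-wait while loop is the likely culprit: it blocks the browser's only thread, so the video element never advances between snapshots, while alert() happens to let the page repaint, which is presumably why it "fixes" things. A sketch of a timer-based rewrite, reusing Webcam, burstcount, passedName and burstcapturedata from the code above:

// Take one snapshot every 200 ms instead of spinning the CPU;
// the page keeps rendering between shots, so each frame differs.
var taken = 0;
var timer = setInterval(function () {
    Webcam.burst_snap(function (dataurl, id) {
        burstcapturedata[passedName + "_" + id] = dataurl;
    }, taken);
    taken++;
    if (taken >= burstcount) {
        clearInterval(timer);
        DisplayBurstedImages();
    }
}, 200);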
Source: (StackOverflow)
I am trying to use WebcamJS in Chrome on a Windows tablet which has two webcams. However, it only works with the back webcam.
Any idea how to make it work with the front webcam?
Thanks for any hints.
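One hedged possibility, assuming the WebcamJS version in use forwards a constraints object to getUserMedia (and noting that facingMode support varies by browser): ask for the user-facing camera explicitly before attaching. #my_camera is a placeholder element ID.

// Request the front camera; 'environment' would request the back one.
Webcam.set('constraints', {
    width: 320,
    height: 240,
    facingMode: 'user'
});
Webcam.attach('#my_camera');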
Source: (StackOverflow)
Some background:
I'm building an AR art installation, and need to track a person as they move through a room.
To do this I've built a headpiece that carries several infrared lights (with diffusers), and a camera (a USB webcam) fitted with an optical filter that removes most/all visible light from the image, plus a few tweaks to the image that basically leave me with white dots on a black background.
Getting the webcam set up to capture the boundaries of the room was pretty easy, but I'm unsure how to process the black-and-white image to get the x,y coordinates of each dot.
Example image output: (This is a mock-up, as I don't have one on me this second; also keep in mind that the data will come from what is effectively a video.)
Tools I'm using:
- NodeJS for processing
- Logitech webcam for image capture
- Google Cardboard for visuals
- Infrared LEDs in styrofoam balls for nice diffuse light points
Any ideas?
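Since the frames already reduce to white dots on black, one approach that fits a Node pipeline is plain connected-component labeling: threshold the image, flood-fill each blob of bright pixels, and average its pixel coordinates to get a centroid. A sketch, assuming the frame has been decoded into a Uint8Array of grayscale values (the decoding step depends on the capture library in use):

// Return the centroid of every bright blob in a width*height grayscale frame.
function findDots(gray, width, height, threshold) {
    var visited = new Uint8Array(width * height);
    var dots = [];
    for (var i = 0; i < width * height; i++) {
        if (gray[i] < threshold || visited[i]) continue;
        // Flood-fill one blob, accumulating its centroid as we go
        var stack = [i];
        visited[i] = 1;
        var sumX = 0, sumY = 0, count = 0;
        while (stack.length > 0) {
            var p = stack.pop();
            var x = p % width;
            var y = (p - x) / width;
            sumX += x; sumY += y; count++;
            // 4-connected neighbours, guarding the image borders
            var neighbours = [];
            if (x > 0) neighbours.push(p - 1);
            if (x < width - 1) neighbours.push(p + 1);
            if (y > 0) neighbours.push(p - width);
            if (y < height - 1) neighbours.push(p + width);
            for (var n = 0; n < neighbours.length; n++) {
                var q = neighbours[n];
                if (!visited[q] && gray[q] >= threshold) {
                    visited[q] = 1;
                    stack.push(q);
                }
            }
        }
        if (count > 3) dots.push({ x: sumX / count, y: sumY / count }); // drop single-pixel noise
    }
    return dots;
}

Running this on each frame gives the per-frame x,y positions of the light points; matching dots between consecutive frames (e.g. nearest-neighbour) then yields the motion track.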
Source: (StackOverflow)