EzDevInfo.com

vnc interview questions

Top vnc frequently asked interview questions

How do I automatically set the $DISPLAY variable for my current session?

I see that $DISPLAY is set to localhost:0.0. If I am running over a VNC server, this may not be correct. Is there a way to set it automatically in my login script?
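
A minimal sketch of one common approach for a bash login script: pull the display from the "where" field of who am i, which many systems fill in with the X display for local and VNC logins (this is an illustration - not every distribution populates that field):

# extract the parenthesized "where" field, e.g. (:1) or (host:10.0)
d=$(who am i | sed -n 's/.*(\(.*\))/\1/p')
if [ -n "$d" ]; then
    export DISPLAY="$d"
fi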


Source: (StackOverflow)

Can you run GUI apps in a docker container?

How can you run GUI apps in a docker container?

Are there any images that set up vncserver or something so that you can - for example - add an extra speedbump sandbox around say Firefox?
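
One frequently cited approach on a Linux host is to share the host's X11 socket with the container instead of running a VNC server inside it; a rough sketch (the image name is a placeholder):

# loosen X access control so local containers can talk to the X server
xhost +local:
# run the GUI app against the host display
docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    some-firefox-image firefox

For the sandboxing use case, running a vncserver inside the container and a viewer on the host avoids exposing the host X server at all.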


Source: (StackOverflow)


Running VNC fullscreen with multiple monitors

I'm connecting to a remote system using VNC (tigervnc-1.1.0 on the client, RealVNC-4.1.2 on the server). The client system has two monitors using Nvidia TwinView, with an effective resolution of 3200x1200.

When I tell vncviewer to use fullscreen, the remote system's window (1600x1200) is centered across both monitors with large black spaces on both sides. I also tried running Xinerama instead of TwinView on the client system, but this doesn't make any difference.

Is there any way to run vncviewer in fullscreen mode, without the VNC frame, but restrict it to a single monitor?
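
One workaround sometimes suggested (untested here; assumes wmctrl is installed and that the viewer's window title contains "VNC") is to skip the viewer's own fullscreen mode and let the window manager fullscreen it on one monitor:

vncviewer remotehost:1 &
sleep 2
# move/resize onto the left 1600x1200 monitor, then ask the WM to fullscreen it there
wmctrl -r VNC -e 0,0,0,1600,1200
wmctrl -r VNC -b add,fullscreen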


Source: (StackOverflow)

Trying to log in to RDP using AS3

I am trying to log in to RDP using AS3 (AIR). I am doing OK, considering the lack of resources out there for understanding the actual process.

I have gotten past initially sending the username, received a response from the server, and I am now at the initial connection request.

I am sending all my data, and when sniffing traffic I see that Netmon correctly recognizes what kind of packet I am sending (T.125). I am not being disconnected by RDP, and they send an ack packet - but I don't receive the response that I am expecting.

I have been cross-referencing with connectoid, an open-source RDP client. In the connection code, I am stuck where they write a mixture of little- and big-endian integers.

When I look at the limited examples out there (more like packet dumps), I see that the connection length for this process is 412, but my ByteArray is more like 470.

I have converted the connectoid methods to what I believe is correct, but with the mixture of endian types I am still unsure.

I am sorry if this is garbled, but I am trying my best to help you help me. The code below shows what I have tried to do in the conversion.

public function sendMcsData(): void {
    trace("Secure.sendMcsData");
    var num_channels: int = 2;
    // was: RdpPacket_Localised dataBuffer = new RdpPacket_Localised(512);
    // using a plain ByteArray here (big-endian by default); needs
    // import flash.utils.ByteArray; import flash.utils.Endian;
    var dataBuffer: ByteArray = new ByteArray();
    var hostlen: int = 2 * "myhostaddress.ath.cx".length;
    if (hostlen > 30) {
        hostlen = 30;
    }
    var length: int = 158;
    length += 76 + 12 + 4;
    length += num_channels * 12 + 8;
    dataBuffer.writeShort(5); /* unknown */
    dataBuffer.writeShort(0x14);
    dataBuffer.writeByte(0x7c); // set8 -> writeByte; setBigEndian16 -> writeShort
    dataBuffer.writeShort(1);
    dataBuffer.writeShort(length | 0x8000); // remaining length
    dataBuffer.writeShort(8); // length?
    dataBuffer.writeShort(16);
    dataBuffer.writeByte(0);
    var b1: ByteArray = new ByteArray();
    b1.endian = Endian.LITTLE_ENDIAN;
    b1.writeShort(0xc001);
    dataBuffer.writeBytes(b1);
    dataBuffer.writeByte(0);
    var b2: ByteArray = new ByteArray();
    b2.endian = Endian.LITTLE_ENDIAN;
    b2.writeInt(0x61637544);
    dataBuffer.writeBytes(b2);
    //dataBuffer.setLittleEndian32(0x61637544); // "Duca" ?!
    dataBuffer.writeShort(length - 14 | 0x8000); // remaining length
    var b3: ByteArray = new ByteArray();
    b3.endian = Endian.LITTLE_ENDIAN;
    // Client information
    b3.writeShort(SEC_TAG_CLI_INFO);
    b3.writeShort(true ? 212 : 136); // length
    b3.writeShort(true ? 4 : 1);
    b3.writeShort(8);
    b3.writeShort(600);
    b3.writeShort(1024);
    b3.writeShort(0xca01);
    b3.writeShort(0xaa03);
    b3.writeInt(0x809); // should be Options.keyboard_layout; 0x809 is a guess
    b3.writeInt(true ? 2600 : 419); // or 0ece
    dataBuffer.writeBytes(b3);
    // client build? we are 2600 compatible :-)
    /* Unicode name of client, padded to 32 bytes */
    dataBuffer.writeMultiByte("myhost.ath.cx".toLocaleUpperCase(), "ISO");
    dataBuffer.position = dataBuffer.position + (30 - "myhost.ath.cx".toLocaleUpperCase()
        .length);
    var b4: ByteArray = new ByteArray();
    b4.endian = Endian.LITTLE_ENDIAN;
    b4.writeInt(4);
    b4.writeInt(0);
    b4.writeInt(12);
    dataBuffer.writeBytes(b4);
    dataBuffer.position = dataBuffer.position + 64; /* reserved? 4 + 12 doublewords */
    var b5: ByteArray = new ByteArray();
    b5.endian = Endian.LITTLE_ENDIAN;
    b5.writeShort(0xca01); // out_uint16_le(s, 0xca01);
    b5.writeShort(true ? 1 : 0);
    if (true) //Options.use_rdp5)
    {
        b5.writeInt(0); // out_uint32(s, 0);
        b5.writeByte(24); // out_uint8(s, g_server_bpp);
        b5.writeShort(0x0700); // out_uint16_le(s, 0x0700);
        b5.writeByte(0); // out_uint8(s, 0);
        b5.writeInt(1); // out_uint32_le(s, 1);
        b5.position = b5.position + 64;
        b5.writeShort(SEC_TAG_CLI_4); // out_uint16_le(s, SEC_TAG_CLI_4);
        b5.writeShort(12); // out_uint16_le(s, 12);
        b5.writeInt(false ? 0xb : 0xd); // out_uint32_le(s, g_console_session ? 0xb : 9);
        b5.writeInt(0); // out_uint32(s, 0);
    }
    // Client encryption settings //
    b5.writeShort(SEC_TAG_CLI_CRYPT);
    b5.writeShort(true ? 12 : 8); // length
    // if(Options.use_rdp5) dataBuffer.setLittleEndian32(Options.encryption ?
    // 0x1b : 0); // 128-bit encryption supported
    // else
    b5.writeInt(true ? (false ? 0xb : 0x3) : 0);
    if (true) b5.writeInt(0); // unknown
    if (true && (num_channels > 0)) {
        trace(("num_channels is " + num_channels));
        b5.writeShort(SEC_TAG_CLI_CHANNELS); // out_uint16_le(s, SEC_TAG_CLI_CHANNELS);
        b5.writeShort(num_channels * 12 + 8); // out_uint16_le(s, g_num_channels * 12 + 8); // length
        b5.writeInt(num_channels); // out_uint32_le(s, g_num_channels); // number of virtual channels
        dataBuffer.writeBytes(b5);
        trace("b5 is bigendin" + (b5.endian == Endian.BIG_ENDIAN));
        for (var i: int = 0; i < num_channels; i++) {
            dataBuffer.writeMultiByte("testtes" + i, "ascii"); //, 8); // out_uint8a(s,
            // g_channels[i].name,
            // 8);
            dataBuffer.writeInt(0x40000000); // out_uint32_be(s,
            // g_channels[i].flags);
        }
    }
    //socket.
    //buffer.markEnd();
    //return buffer;
}

Source: (StackOverflow)

Changing the resolution of a VNC session in linux

I use VNC to connect to a Linux workstation at work. At work I have a 20" monitor that runs at 1600x1200, while at home I use my laptop with its resolution of 1440x900. If I set the vncserver to run at 1440x900 I miss out on a lot of space on my monitor, whereas if I set it to run at 1600x1200 it doesn't fit on the laptop's screen, and I have to scroll it all the time.

Is there any good way to resize a VNC session on the fly?

My VNC server is RealVNC E4.x (I don't remember the exact version) running on SuSE64.
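
For what it's worth, the multi-geometry trick works with several Xvnc builds (whether a RealVNC E4 build supports it is something to verify locally): register both sizes when starting the server, then switch between them from inside the session with xrandr:

vncserver :1 -geometry 1600x1200 -geometry 1440x900
# inside the VNC session:
xrandr -s 1440x900   # at home
xrandr -s 1600x1200  # at work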


Source: (StackOverflow)

How To Set Up GUI On Amazon EC2 Ubuntu server

I'm using an Amazon Ubuntu EC2 instance which only has a command-line interface. I want to set up a UI for that server so I can access it using remote desktop tools. Is there any way to add a GUI to the EC2 instance?
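
A minimal sketch of the usual recipe, assuming a lightweight desktop plus a VNC server (package names vary by Ubuntu release):

sudo apt-get update
sudo apt-get install -y xfce4 tightvncserver
vncserver :1 -geometry 1280x800 -depth 24
# point the session at xfce in ~/.vnc/xstartup, e.g. a single line: startxfce4 &
# then open TCP port 5901 in the instance's security group and connect with any VNC viewer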


Source: (StackOverflow)

Web based VNC client? [closed]

I am currently developing a web app which has a part where I have to open a specific machine through VNC to monitor its desktop.

I am required to have a web-based VNC client, which means it shouldn't install a server or any other files on the client's side. The client just opens the web browser, enters the IP of the target machine, and gets a web-based VNC client.

What are good resources to get started in this field?

UPDATE 2013-10-29

Just FYI: back then I ended up using guacamole as @Dolph recommended.
It was:

  • very easy to set up
  • very easy to follow its code and reverse-engineer it (as long as you know Java)
  • it is still used at the company I used to work for and is robust
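
noVNC is another commonly mentioned option in this space: an HTML5 (WebSockets/canvas) client that needs only a small proxy next to the VNC server and nothing installed on the viewing client. A rough sketch (the launch script's name has varied across releases - utils/launch.sh in older trees, utils/novnc_proxy in newer ones):

git clone https://github.com/novnc/noVNC
cd noVNC
./utils/launch.sh --vnc targethost:5901   # serves the browser client on port 6080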

Source: (StackOverflow)

Linux: Screen desktop video capture over network, and VNC framerate

Sorry for the wall of text - TL;DR:

  • What is the framerate of a VNC connection (in frames/sec) - or rather, who determines it: the client or the server?
  • Any other suggestions for desktop screen capture - but "correctly timecoded"/with an unjittered framerate (with a stable period), and with the possibility of obtaining it as an uncompressed (or lossless) image sequence?

Briefly - I have a typical problem that I am faced with: I sometimes develop hardware, and want to record a video that shows both the commands entered on the PC ('desktop capture') and the responses of the hardware ('live video'). A chunk of an intro follows before I get to the specific details.
 

Intro/Context

My strategy, for now, is to use a video camera to record the process of hardware testing (as 'live' video) - and do a desktop capture at the same time. The video camera produces a 29.97 (30) FPS MPEG-2 .AVI video, and I want to get the desktop capture as an image sequence of PNGs at the same frame rate as the video. The idea, then, would be: if the frame rate of the two videos is the same, then I could simply

  • align the time of start of the desktop capture, with the matching point in the 'live' video
  • Set up a picture-in-picture, where a scaled down version of the desktop capture is put - as overlay - on top of the 'live' video
    • (where a portion of the screen on the 'live' video, serves as a visual sync source with the 'desktop capture' overlay)
  • Export a 'final' combined video, compressed appropriately for the Internet

In principle, I guess one could use a command line tool like ffmpeg for this process; however I would prefer to use a GUI for finding the alignment start point for the two videos.

Eventually, what I also want to achieve, is to preserve maximum quality when exporting the 'final' video: the 'live' video is already compressed when out of the camera, which means additional degradation when it passes through the Theora .ogv codec - which is why I'd like to keep the original videos, and use something like a command line to generate a 'final' video anew, if a different compression/resolution is required. This is also why I like to have the 'desktop capture' video as a PNG sequence (although I guess any uncompressed format would do): I take measures to 'adjust' the desktop, so there aren't many gradients, and lossless encoding (i.e. PNG) would be appropriate.  
 

Desktop capture options

Well, there are many troubles in this process under Ubuntu Lucid, which I currently use (and you can read about some of my ordeals in 10.04: Video overlay/composite editing with Theora ogv - Ubuntu Forums). However, one of the crucial problems is the assumption, that the frame rate of the two incoming videos is equal - in reality, usually the desktop capture is of a lower framerate; and even worse, very often frames are out of sync.

This, then, requires the hassle of sitting in front of a video editor and manually cutting and editing sub-second clips at the frame level - hours of work for what will end up a 5-minute video. On the other hand, if the two videos ('live' and 'capture') did have the same framerate and sync, then in principle you wouldn't need more than a couple of minutes to find the start sync point in a video editor - and the rest of the 'merged' video processing could be handled by a single command line. Which is why, in this post, I would like to focus on the desktop capture part.

As far as I can see, there are only a few viable (as opposed to 5 Ways to Screencast Your Linux Desktop) alternatives for desktop capture on Linux / Ubuntu (note: I typically use a laptop as the target for desktop capturing):

  1. Have your target PC (laptop) clone the desktop on its VGA output; use a VGA-to-composite or VGA-to-S-video hardware to obtain a video signal from VGA; use video capture card on a different PC to grab video
  2. Use recordMyDesktop on the target PC
  3. Set up a VNC server (vino on Ubuntu; or vncserver) on the target PC to be captured; use VNC capture software (such as vncrec) on a different PC to grab/record the VNC stream (which can, subsequently, be converted to video).
  4. Use ffmpeg with x11grab option
  5. *(use some tool on the target PC, that would do a DMA transfer of a desktop image frame directly - from the graphics card frame buffer memory, to the network adapter memory)

Please note that the usefulness of the above approaches is limited by my context of use: the target PC that I want to capture typically runs software (utilizing the tested hardware) that moves around massive amounts of data; the best you could say in describing such a system is "barely stable" :) I'd guess this is similar to the problems gamers face when wanting to capture video of a demanding game. And as soon as I start using something like recordMyDesktop, which also uses quite a bit of resources and wants to capture to the local hard disk - I immediately get severe kernel crashes (often with no vmcore generated).

So, in my context, I typically do assume involvement of a second computer - to run the capture and recording of the 'target' PC desktop. Other than that, the pros and cons I can see so far with the above options, are included below.

(Desktop preparation)

For all of the methods discussed below, I tend to "prepare" the desktop beforehand:

  • Remove desktop backgrounds and icons
  • Set the resolution down to 800x600 via System/Preferences/Monitors (gnome-desktop-properties)
  • Change color depth down to 16 bpp (using xdpyinfo | grep "of root" to check)

... in order to minimize the load on desktop capture software. Note that changing color depth on Ubuntu requires changes to xorg.conf; however, "No xorg.conf (is) found in /etc/X11 (Ubuntu 10.04)" - so you may need to run sudo Xorg -configure first.
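
For reference, a sketch of the xorg.conf fragment involved (the identifiers here are illustrative - sudo Xorg -configure generates a skeleton with the real ones); the relevant change is DefaultDepth:

Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    DefaultDepth 16
EndSection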

In order to keep graphics resource use low, I also usually had compiz disabled - or rather, I'd have 'System/Preferences/Appearance/Visual Effects' set to "None". However, after I tried enabling compiz by setting 'Visual Effects' to "Normal" (which doesn't get saved), I noticed windows on the LCD screen are redrawn much faster; so I keep it like this, also for desktop capture. I find this a bit strange: how could more effects cause a faster screen refresh? It doesn't look like it's due to a proprietary driver (the card is "Intel Corporation N10 Family Integrated Graphics Controller", and no proprietary driver option is given by Ubuntu upon switching to compiz) - although it could be that all the blurring and effects just cheat my eyes :)

Cloning VGA

Well, this is the most expensive option (as it requires the purchase of not just one but two pieces of hardware: a VGA converter and a video capture card), and it is applicable mostly to laptops (which have both a screen and an additional VGA output - for desktops one may also have to invest in an additional graphics card, or VGA cloning hardware).

However, it is also the only option that requires no additional software on the target PC whatsoever (and thus uses 0% processing power of the target CPU) - AND the only one that will give a video with a true, unjittered framerate of 30 fps (as it is performed by separate hardware - with the assumption that the clock-domain misalignment between the individual hardware pieces is negligible).

Actually, as I already own something like a capture card, I have already invested in a VGA converter - in the expectation that it will eventually allow me to produce final "merged" videos with only 5 minutes of looking for an alignment point plus a single command line; but I have yet to see whether this process works as intended. I'm also wondering how possible it will be to capture the desktop as uncompressed video @ 800x600, 30 fps.

recordMyDesktop

Well, if you run recordMyDesktop without any arguments, it starts by capturing (what looks like) raw image data into a folder like /tmp/rMD-session-7247; after you press Ctrl-C to interrupt it, it encodes this raw image data into an .ogv. Obviously, grabbing large image data onto the same hard disk as my test software (which also moves large amounts of data) is usually a cause for an instacrash :)

Hence, what I tried doing is to setup Samba to share a drive on the network; then on the target PC, I'd connect to this drive - and instruct recordMyDesktop to use this network drive (via gvfs) as its temporary files location:

recordmydesktop --workdir /home/user/.gvfs/test\ on\ 192.168.1.100/capture/ --no-sound --quick-subsampling --fps 30 --overwrite -o capture.ogv 

Note that, while this command will use the network location for temporary files (and thus makes it possible for recordMyDesktop to run in parallel with my software) - as soon as you hit Ctrl-C, it will start encoding and saving capture.ogv directly on the local hard drive of the target (though, at that point, I don't really care :) )

The first of my nags with recordMyDesktop is that you cannot instruct it to keep the temporary files and skip encoding them at the end: you can use Ctrl+Alt+p to pause - or you can hit Ctrl-C quickly after the first one, to cause it to crash, which will then leave the temporary files behind (if you don't hit Ctrl-C quickly enough the second time, the program will "Cleanning up cache..."). You can then run, say:

recordmydesktop --rescue /home/user/.gvfs/test\ on\ 192.168.1.100/capture/rMD-session-7247/

... in order to convert the raw temporary data. However, more often than not, recordMyDesktop will itself segfault in the middle of performing this "rescue". (The reason why I want to keep the temp files is to have the uncompressed source for the picture-in-picture montage.) Note that "--on-the-fly-encoding" avoids using temp files altogether - at the expense of using more CPU processing power (which, for me, is again a cause for crashes).

Then there is the framerate - obviously, you can set the requested framerate using the '--fps N' option; however, that is no guarantee that you will actually obtain that framerate; for instance, I'd get:

recordmydesktop --fps 25
...
Saved 2983 frames in a total of 6023 requests
...

... for a capture with my test software running; which means that the actually achieved rate is more like 25*2983/6023 ≈ 12.38 fps!

Obviously, frames are dropped - and mostly that shows up as video playback that is too fast. However, if I lower the requested fps to 12, then according to the saved/total reports I achieve something like 11 fps; and in this case, playback doesn't look 'sped up'. And I still haven't tried aligning such a capture with a live video - so I have no idea whether the frames that actually were saved also have accurate timestamps.

VNC capture

The VNC capture, for me, consists of running a VNC server on the 'target' PC and running vncrec (twibright edition) on the 'recorder' PC. As the VNC server, I use vino, which is "System/Preferences/Remote Desktop (Preferences)". And apparently, even if vino's configuration may not be the easiest thing to manage, vino as a server seems not too taxing to the 'target' PC, as I haven't experienced crashes when it runs in parallel with my test software.

On the other hand, when vncrec is capturing on the 'recorder' PC, it also raises a window showing you the 'target' desktop as it is seen in 'realtime'; when there are large updates (i.e. whole windows moving) on the 'target' - one can, quite visibly, see problems with the update/refresh rate on the 'recorder'. But, for only small updates (i.e. just a cursor moving on a static background), things seem OK.

This makes me wonder about one of my primary questions with this post - what is it, that sets the framerate in a VNC connection?

I haven't found a clear answer to this, but from bits and pieces of info (see refs below), I gather that:

  • The VNC server simply sends changes (screen changes + clicks etc.) as fast as it can when it receives them, limited by the max network bandwidth available to the server
  • The VNC client receives those change events delayed and jittered by the network connection, and attempts to reconstruct the desktop "video" stream, again as fast as it can

... which means, one cannot state anything in terms of a stable, periodic frame rate (as in video).

As far as vncrec as a client goes, the end videos I get usually are declared as 10 fps, although frames can be rather displaced/jittered (which then requires the cutting in video editors). Note that the vncrec-twibright/README states: "The sample rate of the movie is 10 by default or overriden by VNCREC_MOVIE_FRAMERATE environment variable, or 10 if not specified."; however, the manpage also states "VNCREC_MOVIE_FRAMERATE - Specifies frame rate of the output movie. Has an effect only in -movie mode. Defaults to 10. Try 24 when your transcoder vomits from 10.". And if one looks into "vncrec/sockets.c" source, one can see:

void print_movie_frames_up_to_time(struct timeval tv)
{
  static double framerate;
  ....
  memcpy(out, bufoutptr, buffered);
  if (appData.record)
    {
      writeLogHeader (); /* Writes the timestamp */
      fwrite (bufoutptr, 1, buffered, vncLog);
    }

... which shows that some timestamps are written - but whether those timestamps originate from the "original" 'target' PC, or the 'recorder' one, I cannot tell. EDIT: thanks to the answer by @kanaka, I checked through vncrec/sockets.c again, and can see that it is the writeLogHeader function itself calling gettimeofday; so the timestamps it writes are local - that is, they originate from the 'recorder' PC (and hence, these timestamps do not accurately describe when the frames originated on the 'target' PC).

In any case, it still seems to me, that the server sends - and vncrec as client receives - whenever; and it is only in the process of encoding a video file from the raw capture afterwards, that some form of a frame rate is set/interpolated.

I'd also like to note that on my 'target' laptop the wired network connection is broken, so wireless is my only option to get access to the router and the local network - at a far lower speed than the 100MB/s the router could handle on wired connections. However, if the jitter in captured frames is caused by wrong timestamps due to load on the 'target' PC, I don't think good network bandwidth will help much.

Finally, as far as VNC goes, there could be other alternatives to try - such as VNCast server (promising, but requires some time to build from source, and is in "early experimental version"); or MultiVNC (although, it just seems like a client/viewer, without options for recording).

ffmpeg with x11grab

I haven't played with this much, but I've tried it in combination with netcat; this:

# 'target'
ffmpeg -f x11grab -b 8000k -r 30 -s 800x600 -i :0.0 -f rawvideo - | nc 192.168.1.100 5678
# 'recorder'
nc -l 0.0.0.0 5678 > raw.video

... does capture a file, but ffplay cannot read the captured file properly; while:

# 'target'
ffmpeg -f x11grab -b 500k -r 30 -s 800x600 -i :0.0 -f yuv4mpegpipe -pix_fmt yuv444p - | nc 192.168.1.100 5678
# 'recorder'
nc -l 0.0.0.0 5678 | ffmpeg -i - /path/to/samplimg%03d.png

does produce .png images - but with compression artifacts (result of the compression involved with yuv4mpegpipe, I guess).

Thus, I'm not liking ffmpeg+x11grab too much currently - but maybe I simply don't know how to set it up for my needs.
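
That said, a raw stream can be read back if the format is restated on the receiving side, since rawvideo carries no header; a sketch along the lines of the pipelines above (flags as in the ffmpeg builds of that era - verify locally):

# 'target' - send headerless raw RGB frames
ffmpeg -f x11grab -r 30 -s 800x600 -i :0.0 -f rawvideo -pix_fmt rgb24 - | nc 192.168.1.100 5678
# 'recorder' - tell ffmpeg what the raw bytes are before decoding
nc -l 0.0.0.0 5678 | ffmpeg -f rawvideo -pix_fmt rgb24 -s 800x600 -r 30 -i - /path/to/img%03d.png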

*( graphics card -> DMA -> network )

I am, admittedly, not sure something like this exists - in fact, I would wager it doesn't :) And I'm no expert here, but I speculate:

if a DMA memory transfer can be initiated from the graphics card (or its buffer that keeps the current desktop bitmap) as the source, and the network adapter as the destination - then in principle it should be possible to obtain an uncompressed desktop capture with a correct (and decent) framerate. The point of using a DMA transfer would be, of course, to relieve the processor of the task of copying the desktop image to the network interface (and thus reduce the influence the capturing software can have on the processes running on the 'target' PC - especially those dealing with RAM or the hard disk).

A suggestion like this, of course, assumes that: there are massive amounts of network bandwidth (for 800x600 at 30 fps, at least 800*600*3*30 = 43,200,000 bytes/s ≈ 41 MiB/s, which should be OK for local networks that sustain ~100 MB/s); plenty of hard disk on the other PC that does the 'recording'; and finally, software that can afterwards read that raw data and generate image sequences or videos from it :)

The bandwidth and hard disk demands I could live with - as long as there is guarantee both for a stable framerate and uncompressed data; which is why I'd love to hear if something like this already exists.

-- -- -- -- -- 

Well, I guess that was it - as brief as I could put it :) Any suggestions for tools - or process(es) - that can produce a desktop capture

  • in uncompressed format (ultimately convertible to uncompressed/lossless PNG image sequence), and
  • with a "correctly timecoded", stable framerate

..., that will ultimately lend itself to 'easy', single command-line processing for generating 'picture-in-picture' overlay videos - will be greatly appreciated!

Thanks in advance for any comments,
Cheers!


References

  1. Experiences Producing a Screencast on Linux for CryptoTE - idlebox.net
  2. The VideoLAN Forums • View topic - VNC Client input support (like screen://)
  3. VNCServer throttles user inpt for slow client - Kyprianou, Mark - com.realvnc.vnc-list - MarkMail
  4. Linux FAQ - X Windows: How do I Display and Control a Remote Desktop using VNC
  5. How much bandwidth does VNC require? RealVNC - Frequently asked questions
  6. x11vnc: a VNC server for real X displays
  7. HowtoRecordVNC (an X11 session) - Debian Wiki
  8. Alternative To gtk-RecordMyDesktop in Ubuntu
  9. (Ffmpeg-user) How do I use pipes in ffmpeg
  10. (ffmpeg-devel) (PATCH) Fix segfault in x11grab when drawing Cursor on Xservers that don't support the XFixes extension

Source: (StackOverflow)

Controlling iOS device via mouse/keyboard [closed]

Some companies are offering manual testing of real iPhone/iPad devices. With your mouse and keyboard, you can control the device straight from your browser.

They probably use something like AirPlay to stream the device graphics to the browser. But how do they convert the mouse-clicks to touch events on iPhone/iPad? Since it's not possible to run a VNC server on the device, I'm wondering if there's another way to do this.


Source: (StackOverflow)

Key mappings in RealVNC client

I am using the RealVNC viewer on Windows. I sometimes find it very difficult to switch from VNC to Windows: I have to use F8 -> Minimize and then Alt + Tab. I wish I had more flexibility. Can the following key combinations be enabled somehow?

  1. Win + D -> I see my Windows desktop.
  2. Alt + Tab switches between VNC and Windows applications.
  3. Ctrl + Tab switches between VNC subwindows.

Source: (StackOverflow)

VNC black screen with an X cursor on Red Hat Enterprise Linux Server release 5.3 (Tikanga) [closed]

Starting the VNC server with vncserver :1, a client can connect, but it shows only a black screen with an X cursor.

The contents of .vnc/xstartup are:

#!/bin/sh

# Uncomment the following two lines for normal desktop:
 unset SESSION_MANAGER
 exec /etc/X11/xinit/xinitrc

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
twm &

Is there a problem with this file?
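
One likely culprit: exec replaces the shell, so once /etc/X11/xinit/xinitrc is exec'd, none of the later lines (window manager included) ever run - and if xinitrc brings up no session, exactly this grey/black root window with the X cursor is what remains. A minimal xstartup sketch that keeps the tail of the script alive:

#!/bin/sh
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
exec twm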


Source: (StackOverflow)

Resizing an Xvfb display

Simple question: is there a way to resize an Xvfb display?

I tried with RandR, but it seems that the RandR extension is not supported by Xvfb. Are there other ways to resize the screen?

Thanks for your help!
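
One workaround often used when RandR isn't available (assuming the server can simply be restarted): kill the Xvfb and launch a fresh one at the new size, since the -screen geometry is fixed at startup:

# stop the old virtual display, then relaunch at the desired resolution
pkill -f 'Xvfb :1'
Xvfb :1 -screen 0 1280x1024x24 &
export DISPLAY=:1
xdpyinfo | grep dimensions   # verify the new size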


Source: (StackOverflow)

How does your team work together in a remote setup? [closed]

We are a distributed team working on the object database db4o.

The way we work:

  • We try to program in pairs only.
  • We use Skype and VNC or SharedView to connect and work together.
  • In our online Tuesday meeting every week (usually about 1 hour)
    • we talk about the tasks done last week
    • we create new pairs for the next week with a random generator so knowledge and friendship distribute evenly
    • we set the priority for any new tasks or bugs that have come in
    • each team picks the tasks it likes to do from the highest prioritized ones.
  • From Tuesday to Wednesday we estimate tasks. We have a unit of work we call an "Ideal Developer Session" (IDS): maybe 2 or 3 hours of working together as a pair. It's not perfectly well defined (because we know estimation is always inaccurate), but from our past shared experience we have a common sense of what an IDS is. If we can't estimate a task because it feels too long for a week, we break it down into smaller, estimatable tasks.
  • During a short meeting on Wednesday we commit to a workload we feel is well doable in a week. We commit to complete.
  • If a team runs out of committed tasks during the week, it can pick new ones from the prioritized queue we have in Jira.

When we started working this way, some of us found that remote pair programming takes a lot of energy because you are so focussed. If you pair program for more than 5 or 6 hours per day, you get drained. On the other hand, working like this has turned out to be very efficient. The knowledge about our codebase is evenly distributed, and we have really learnt a lot from each other.

I would be very interested to hear about the experiences from other teams working in a similar way. Things like:

  • How often do you meet?
  • Have you tried different sprint lengths (one week, two weeks, longer)?
  • Which tools do you use?
  • Which issue tracker do you use?
  • What do you do about time zone differences?
  • How does it work for you to integrate new people into the team?
  • How many hours do you usually work per week?
  • How does your management interact with the way you are working?
  • Do you get put on a waterfall with hard deadlines?
  • What's your unit of work?
  • What is your normal velocity? (units of work done per week)

Programming work should be fun and for us it usually is great fun.

I would be happy about any new ideas how to make it even more fun and/or more efficient.


Source: (StackOverflow)

Using laptop as a second programming monitor

The joys of multimonitor programming are countless; I think there are about 5 blog posts on Coding Horror on the topic alone! I often code in Windows on my main machine and have my Mac laptop set up to the side. I use the Mac both to compile Mac builds and as my "reference web browser". There's no KVM or anything.

However, a casual conversation at a conference led me to the question: could I use two independent machines to share windows? Literally move some windows from one machine to another, so I could use one PC's display as "overflow" from the other.

Some googling shows that this is indeed possible in some situations: http://synergy2.sourceforge.net/ http://www.maxivista.com/

My question is whether any programmers have tried such a setup. We have unique needs, especially with multiple text windows and editors, and this kind of tool may be a huge win or a huge hassle.

This solution feels like a combination of easy KVM switching AND multiple monitors... it sounds like a programming dream! So advice, or especially reports of actual experience in a programming environment, would be very useful before I invest in the rather complex setup.

Followup: Sounds like I'm asking for something that doesn't exist! It's kind of a combination of a software KVM and VNC. But the VNC would need to break out the app windows and allow individual manipulation (like that MaxiVista commercial tool, which is Vista-only).

Thanks for all the feedback. Looks like there's demand for a cool app, if anyone has the drive to be first in this new niche!


Source: (StackOverflow)

Alternative to control+drag to connect a view element with File's Owner in Xcode Interface Builder?

I am working on an iPhone application through a TightVNC connection into a Mac Mini. The control+drag operation in Interface Builder to connect a view element to File's Owner doesn't work - I don't see the connecting line.

It does work when I connect a keyboard, mouse, and monitor to the Mini and work on it directly; however, it is a lot more convenient for me to run it through a VNC connection. It must be some quirk of the TightVNC connection that is preventing this. I tried different TightVNC settings for the cursor (let the server handle it, and so on) but no luck.

Is there an alternative to control+drag for hooking up outlets?


Source: (StackOverflow)