EzDevInfo.com

pipe interview questions

Top pipe frequently asked interview questions

How do I use sudo to redirect output to a location I don't have permission to write to?

I've been given sudo access on one of our development Red Hat Linux boxes, and I quite often find myself needing to redirect output to a location I don't normally have write access to.

The trouble is, this contrived example doesn't work:

sudo ls -hal /root/ > /root/test.out

I just receive the response:

-bash: /root/test.out: Permission denied

How can I get this to work?


Source: (StackOverflow)
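
A common workaround (sketched here, not part of the original post) is to let a root-owned process perform the write, either via tee or by running the whole pipeline under a root shell:

sudo ls -hal /root/ | sudo tee /root/test.out > /dev/null
sudo sh -c 'ls -hal /root/ > /root/test.out'

The redirection in the original command fails because it is performed by the invoking, unprivileged shell before sudo ever runs.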

Redirect stderr and stdout in a Bash script

I want to redirect both stdout and stderr of a process to a single file. How do I do that in Bash?


Source: (StackOverflow)
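
A minimal sketch of the usual redirection (order matters: redirect stdout first, then duplicate stderr onto it; output.log is a placeholder name):

cmd > output.log 2>&1

In Bash specifically, cmd &> output.log is an equivalent shorthand.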


Preserve colouring after piping grep to grep

There is a similar question in Preserve ls colouring after grep’ing, but it annoys me that if you pipe colored grep output into another grep, the coloring is not preserved.

As an example, grep --color WORD * | grep -v AVOID does not keep the color of the first output. But for me, ls | grep FILE does keep the color. Why the difference?


Source: (StackOverflow)
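
A likely explanation, with a sketch: grep's default --color=auto disables colouring when stdout is not a terminal, so the first grep in a pipeline drops its escape codes; the colour seen after ls | grep FILE most likely comes from the final grep (often aliased to grep --color=auto) highlighting its own match on the terminal. Forcing colour on the first grep usually preserves it:

grep --color=always WORD * | grep -v AVOID

Note that --color=always embeds escape sequences in the data, which can interfere with later pattern matching.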

Python subprocess command with pipe

I want to use subprocess.check_output() with ps -A | grep 'process_name'. I have tried various solutions, but so far nothing has worked. Can someone guide me on how to do it?


Source: (StackOverflow)
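
One way to express the pipeline without shell=True is to chain two Popen objects, roughly as sketched below (process_name is taken from the question):

import subprocess

ps = subprocess.Popen(["ps", "-A"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "process_name"],
                        stdin=ps.stdout, stdout=subprocess.PIPE)
ps.stdout.close()          # let ps receive SIGPIPE if grep exits first
output, _ = grep.communicate()

Alternatively, subprocess.check_output("ps -A | grep process_name", shell=True) lets the shell do the piping.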

Detect if stdin is a terminal or pipe in C/C++/Qt?

When I execute "python" from the terminal with no arguments it brings up the Python interactive shell.

When I execute "cat | python" from the terminal it doesn't launch the interactive mode. Somehow, without getting any input, it has detected that it is connected to a pipe.

How would I do a similar detection in C or C++ or Qt?


Source: (StackOverflow)
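
A minimal C sketch using isatty(), which is the usual way to make this check; the same POSIX call can be used from a Qt application on Unix-like systems:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* isatty() returns non-zero when the descriptor refers to a terminal */
    if (isatty(fileno(stdin)))
        printf("stdin is a terminal\n");
    else
        printf("stdin is a pipe or a file\n");
    return 0;
}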

Trick an application into thinking its stdout is interactive, not a pipe

I'm trying to do the opposite of

http://stackoverflow.com/questions/1312922/detect-if-stdin-is-a-terminal-or-pipe-in-c-c-qt

I'm running an application that's changing its output format because it detects a pipe on stdout, and I want it to think that it's an interactive terminal so that I get the same output when redirecting.

I was thinking that wrapping it in an expect script or using proc_open() in PHP would do it, but it doesn't.

Any ideas out there?


Source: (StackOverflow)
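
Two commonly suggested workarounds (sketches, not guaranteed for every program) are to run the application under a pseudo-terminal, for example with util-linux script or with unbuffer from the expect package; ./the_app and the trailing cat are placeholders:

script -q -c "./the_app" /dev/null | cat
unbuffer ./the_app | cat

In both cases the program's stdout is a pty rather than a pipe, so it behaves as if it were interactive.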

How to pipe the list of files returned by the find command to cat to view all the files

I am running find to get a list of files. How do I pipe that list to another utility like cat, so that cat displays the contents of all those files? Ultimately I need to grep something from these files.


Source: (StackOverflow)
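
A few common patterns, sketched with placeholder names ('*.conf' and 'pattern'):

find . -name '*.conf' -exec cat {} +
find . -name '*.conf' -exec grep -H 'pattern' {} +
find . -name '*.conf' -print0 | xargs -0 grep -H 'pattern'

The last two skip the cat step entirely and run grep directly on the files find returns; -print0 with xargs -0 keeps filenames containing spaces intact.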

Redirect stdout and stderr to a single file

I'm trying to redirect all output (stdout + stderr) of a DOS command to a single file:

C:\>dir 1> a.txt 2> a.txt
The process cannot access the file because it is being used by another process.

Is it possible, or should I just redirect to two separate files?


Source: (StackOverflow)
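
In cmd.exe the usual form is to redirect stdout to the file and then duplicate stderr onto it:

dir > a.txt 2>&1

Using the same file name twice, as in the question, fails presumably because the file is already opened for the first redirection when the second one tries to open it.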

How to pipe stdout while keeping it on screen? (and not to an output file)

I would like to pipe standard output of a program while keeping it on screen.

With a simple example (echo is used here just for illustration purposes):

$ echo 'ee' | foo
ee <- the output I would like to see

I know tee can copy stdout to a file, but that's not what I want:
$ echo 'ee' | tee output.txt | foo

I tried
$ echo 'ee' | tee /dev/stdout | foo
but it does not work, since tee's output to /dev/stdout is piped to foo.


Source: (StackOverflow)
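
One approach (assuming the command runs with a controlling terminal) is to tee to /dev/tty, which always refers to the current terminal rather than to the pipe:

echo 'ee' | tee /dev/tty | foo

The copy written to /dev/tty appears on screen, while the copy written to stdout continues down the pipe to foo.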

bash: split output of command by columns

I want to do this:

  1. run a command
  2. capture the output
  3. select a line
  4. select a column of that line

Just as an example, let's say I want to get the command name from a $PID (please note this is just an example, I'm not suggesting this is the easiest way to get a command name from a process id - my real problem is with another command whose output format I can't control).

If I run ps I get:


  PID TTY          TIME CMD
11383 pts/1    00:00:00 bash
11771 pts/1    00:00:00 ps

Now I do ps | egrep 11383 and get

11383 pts/1    00:00:00 bash

Next step: ps | egrep 11383 | cut -d" " -f 4. Output is:

<absolutely nothing/>

The problem is that cut splits the output on single spaces, and since ps adds several spaces between the 2nd and 3rd columns to keep some resemblance of a table, cut picks an empty string. Of course, I could use cut to select the 7th field rather than the 4th, but how can I know which one to pick, especially when the output is variable and unknown beforehand?


Source: (StackOverflow)
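
awk is the usual tool here, because it splits fields on runs of whitespace rather than single spaces; a sketch using the PID from the example:

ps | awk '$1 == 11383 { print $4 }'

An alternative is to squeeze the repeated spaces first, e.g. ps | egrep 11383 | tr -s ' ' | cut -d' ' -f4.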

Force line-buffering of stdout when piping to tee

Usually, stdout is line-buffered. In other words, as long as your printf argument ends with a newline, you can expect the line to be printed instantly. This does not appear to hold when using a pipe to redirect to tee.

I have a C++ program, a, that outputs strings, always \n-terminated, to stdout.

When it is run by itself (./a), everything prints correctly and at the right time, as expected. However, if I pipe it to tee (./a | tee output.txt), it doesn't print anything until it quits, which defeats the purpose of using tee.

I know that I could fix it by adding a fflush(stdout) after each printing operation in the C++ program. But is there a cleaner, easier way? Is there a command I can run, for example, that would force stdout to be line-buffered, even when using a pipe?


Source: (StackOverflow)
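
If the program relies on the C library's default stdio buffering, GNU coreutils' stdbuf can force line buffering without touching the source (a sketch, reusing output.txt from the question):

stdbuf -oL ./a | tee output.txt

unbuffer (from the expect package) achieves a similar effect by giving the program a pseudo-terminal. Neither helps if the program sets its own buffering explicitly.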

Pipe buffer size is 4k or 64k?

I read in multiple places that the default buffer size for a pipe is 4kB (for instance, here), and my ulimit -a tends to confirm that statement:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15923
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8 // 8 * 512B = 4kB
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

But when I use a little program to test the buffer size (by writing into the pipe until the write() blocks), I see a limit of 64kB!

See this program:

#include <stdio.h>
#include <unistd.h>
#include <limits.h>

int main(void)
{
    int tube[2];
    char c = 'c';
    int i;

    fprintf(stdout, "Tube Creation\n");
    fprintf(stdout, "Theoretical max size: %d\n", PIPE_BUF);
    if( pipe(tube) != 0)
    {
        perror("pipe");
        _exit(1);
    }
    fprintf(stdout, "Writing in pipe\n");
    for(i=0;; i++)
    {
        fprintf(stdout, "%d bytes written\n", i+1);
        if( write(tube[1], &c, 1) != 1)
        {
            perror("Write");
            _exit(1);
        }
    }
    return 0;
}

And its output:

$ ./test_buf_pipe 
Tube Creation
Theoretical max size: 4096
Writing in pipe
1 bytes written
2 bytes written
3 bytes written
4 bytes written
[...]
65535 bytes written
[blocks here]

It strongly suggests that the pipe buffer size is actually 64k! What is happening here??


Source: (StackOverflow)
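
The two numbers measure different things, which is likely the source of the confusion: PIPE_BUF (4096 on Linux) is the largest write guaranteed to be atomic, while the pipe's capacity on modern Linux defaults to 64 KiB; ulimit -p simply reports PIPE_BUF in 512-byte units. On Linux 2.6.35 and later the capacity can be queried (and changed) with fcntl, as in this sketch:

#define _GNU_SOURCE
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int tube[2];
    if (pipe(tube) != 0) {
        perror("pipe");
        return 1;
    }
    /* PIPE_BUF is the atomic-write limit, not the pipe's total capacity */
    printf("PIPE_BUF (atomic write limit): %d\n", PIPE_BUF);
    printf("pipe capacity: %d\n", fcntl(tube[0], F_GETPIPE_SZ));
    return 0;
}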

Why is no output shown when using grep twice?

Basically I'm wondering why this doesn't output anything:

tail --follow=name file.txt | grep something | grep something_else 

You can assume that it should produce output; I have run another line to confirm:

cat file.txt | grep something | grep something_else

It seems like you can't pipe the output of tail more than once! Does anyone know what the deal is, and is there a solution?

EDIT: To answer the questions so far, the file definitely has contents that should be displayed by the grep. As evidence, if the grep is done like so:

tail --follow=name file.txt | grep something

Output shows up correctly, but if this is used instead:

tail --follow=name file.txt | grep something | grep something

No output is shown.

If at all helpful, I am running Ubuntu 10.04.


Source: (StackOverflow)
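
A likely explanation (a sketch, not a diagnosis of this exact system): when its output goes to a pipe rather than a terminal, the first grep block-buffers, so nothing reaches the second grep until several kilobytes have accumulated or tail exits. GNU grep can be told to flush after every line:

tail --follow=name file.txt | grep --line-buffered something | grep something_else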

Pipe subprocess standard output to a variable

I want to run a command in Python, using the subprocess module, and store the output in a variable. However, I do not want the command's output to be printed to the terminal. For this code:

import subprocess

def storels():
    a = subprocess.Popen("ls", shell=True)
storels()

I get the directory listing in the terminal, instead of having it stored in a. I've also tried:

def storels():
    subprocess.Popen("ls > tmp", shell=True)
    a = open("./tmp")
    [Rest of Code]
storels()

This also prints the output of ls to my terminal. I've even tried this command with the somewhat dated os.system method, since running ls > tmp in the terminal doesn't print ls to the terminal at all, but stores it in tmp. However, the same thing happens.

Edit:

I get the following error after following marcog's advice, but only when running a more complex command, cdrecord --help. Python spits this out:

Traceback (most recent call last):
  File "./install.py", line 52, in <module>
    burntrack2("hi")
  File "./install.py", line 46, in burntrack2
    a = subprocess.Popen("cdrecord --help",stdout = subprocess.PIPE)
  File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
    errread, errwrite)
  File "/usr/lib/python2.6/subprocess.py", line 1139, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Source: (StackOverflow)
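
Regarding the traceback at the end: with the default shell=False, the first argument must be the program name or an argument list, so the single string "cdrecord --help" is treated as an executable literally named "cdrecord --help", hence the "No such file or directory". A sketch that captures the output instead of letting it reach the terminal:

import subprocess

proc = subprocess.Popen(["cdrecord", "--help"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()   # both streams end up in variables

shlex.split("cdrecord --help") is a convenient way to build such an argument list from a command string.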

How can I redirect and append both stdout and stderr to a file with Bash?

To redirect stdout to a truncated file in Bash, I know to use:

cmd > file.txt

To redirect stdout in Bash, appending to a file, I know to use:

cmd >> file.txt

To redirect both stdout and stderr to a truncated file, I know to use:

cmd &> file.txt

How do I redirect both stdout and stderr appending to a file? cmd &>> file.txt did not work for me.


Source: (StackOverflow)
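
For reference, a portable sketch is to append stdout and duplicate stderr onto it:

cmd >> file.txt 2>&1

The shorter cmd &>> file.txt does work, but only in Bash 4.0 or newer, which may explain why it failed.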