EzDevInfo.com

ulimit interview questions

Top ulimit frequently asked interview questions

Process resources not limited by setrlimit

I wrote a simple program to restrict its data segment size to 65 KB, and to verify this I allocate more than 65 KB of dummy memory. Logically, if I am doing everything correctly (as below), the malloc call should fail, shouldn't it?

#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

int main (int argc, char *argv[])
{
  struct rlimit limit;


  /* Get max data size . */
  if (getrlimit(RLIMIT_DATA, &limit) != 0) {
    printf("getrlimit() failed with errno=%d\n", errno);
    return 1;
  }

  printf("The soft limit is %lu\n", limit.rlim_cur);
  printf("The hard limit is %lu\n", limit.rlim_max);

  limit.rlim_cur = 65 * 1024;
  limit.rlim_max = 65 * 1024;

  if (setrlimit(RLIMIT_DATA, &limit) != 0) {
    printf("setrlimit() failed with errno=%d\n", errno);
    return 1;
  }

  if (getrlimit(RLIMIT_DATA, &limit) != 0) {
    printf("getrlimit() failed with errno=%d\n", errno);
    return 1;
  }

  printf("The soft limit is %lu\n", limit.rlim_cur);
  printf("The hard limit is %lu\n", limit.rlim_max);
  system("bash -c 'ulimit -a'");
  /* Try to allocate well beyond the new data-segment limit. */
  int *new2 = malloc(66666666);
  if (new2 == NULL) {
    printf("malloc failed\n");
    return 1;
  } else {
    printf("success\n");
  }

  return 0;
}

Surprisingly, the output is something like this:

The soft limit is 4294967295
The hard limit is 4294967295
The soft limit is 66560
The hard limit is 66560
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) 65
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14895
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14895
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
success

Am I doing something wrong here? Please share your input. Thanks!
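
A hedged note on why this can happen: glibc's malloc typically satisfies a request this large with mmap rather than by growing the data segment, and on kernels before Linux 4.7 mmap'd memory was not counted against RLIMIT_DATA at all. The address-space limit (RLIMIT_AS, i.e. ulimit -v) does cover it. A minimal check for comparison, assuming the program above is built as ./rlimit_test (the name is illustrative):

bash -c 'ulimit -v 32768; ./rlimit_test'      # cap the address space at 32 MB: the ~64 MB malloc should now fail
bash -c 'ulimit -v unlimited; ./rlimit_test'  # no address-space cap: "success", as in the output above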


Source: (StackOverflow)

Problems opening more than 10,000 files in Perl

I need to open more than 10,000 files in a Perl script, so I asked the system administrator to change the limit on my account to 14,000. ulimit -a now shows these settings:

core file size        (blocks, -c) unlimited
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
open files                    (-n) 14000
pipe size          (512 bytes, -p) 10
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) 29995
virtual memory        (kbytes, -v) unlimited

After the change I ran a test Perl program that opens/creates 256 files and closes all of the file handles at the end of the script. When it has created about 253 files, the program dies saying "too many open files". I don't understand why I'm getting this error.

I am working on a Solaris 10 platform. This is my code:

use strict;
use warnings;

my @list;
my $filename = "test";

for (my $i = 256; $i >= 0; $i--) {
    print "$i " . "\n";
    $filename = "test" . "$i";
    if (open my $in, ">", $filename) {
        push @list, $in;
        print $in $filename . "\n";
    }
    else {
        warn "Could not open file '$filename'. $!";
        die;
    }
}

for (my $i = 256; $i >= 0; $i--) {
    my $retVal = pop @list;
    print $retVal . "\n";
    close($retVal);
}
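
Two hedged checks before digging further into the Perl itself. First, confirm the limits the script actually inherits from the launching shell. Second, on Solaris 10 a 32-bit Perl that goes through the C stdio layer has historically been capped at roughly 256 open streams regardless of ulimit, which would match a failure around 253, so it is worth checking whether this Perl is a 32-bit build:

ulimit -Sn; ulimit -Hn               # soft and hard descriptor limits in the launching shell
perl -V:archname -V:use64bitall      # whether this perl is a 32-bit or 64-bit build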

Source: (StackOverflow)


Java and virtual memory ulimit

I am trying to use Java in an environment where the virtual memory is limited to 2 GB by ulimit -v 2000000, but I get memory errors. Running java -version in this environment gives:

$ java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

No matter how low I set -Xmx, I cannot get java to run under this environment. However, if ulimit -v is set to 2.5GB, then I can set -Xmx to 250m, but no higher.

$ java -Xmx250m -version
java version "1.7.0_19"
OpenJDK Runtime Environment (rhel-2.3.9.1.el6_4-x86_64)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)

$ java -Xmx251m -version
#
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
java version "1.7.0_19"
OpenJDK Runtime Environment (rhel-2.3.9.1.el6_4-x86_64)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
# An error report file with more information is saved as:
# ~/hs_err_pid12079.log

Is it possible to use java in an environment where ulimit is used to limit the virtual memory?
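
One thing worth keeping in mind: the JVM reserves virtual address space for much more than the Java heap (thread stacks, the code cache, PermGen on Java 7, GC bookkeeping), and ulimit -v caps all of it, which is why -Xmx alone cannot be pushed anywhere near the limit. A sketch for experimenting, with illustrative values only; the extra flags shrink some of the other reservations but are not a guaranteed fix:

bash -c 'ulimit -v 2500000; java -Xmx250m -Xss256k -XX:MaxPermSize=64m -XX:ReservedCodeCacheSize=32m -version'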


Source: (StackOverflow)

How to configure ulimit with supervisord (to start varnish)

I am migrating a server configuration to supervisord (from init.d files).

There are a few instances of varnish running. I remember having ulimit problems when I started using varnish, so the init.d/varnish scripts contain the following lines:

ulimit -n ${NFILES:-131072}
ulimit -l ${MEMLOCK:-82000}

I am configuring supervisord to run the /usr/sbin/varnishd program with arguments.

How do you configure the ulimit settings via supervisord? Do I just wrap the varnishd program in a script?
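
One hedged approach is exactly the wrapper you mention: have supervisord start a tiny script that raises the limits the way the init.d file did, then execs varnishd so no extra shell process stays around. (supervisord also has a minfds option in its [supervisord] section that may cover the nofile part, but the memlock limit still needs the wrapper.) A sketch, with the file name and values illustrative:

#!/bin/bash
# varnishd-wrapper.sh (illustrative name): pointed to by the supervisord program entry
ulimit -n "${NFILES:-131072}"
ulimit -l "${MEMLOCK:-82000}"
exec /usr/sbin/varnishd "$@"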


Source: (StackOverflow)

React Native + Jest EMFILE: too many open files error

I am trying to run Jest tests, but I'm getting the following error:

Error reading file: /Users/mike/dev/react/TestTest/node_modules/react-native/node_modules/yeoman-environment/node_modules/globby/node_modules/glob/node_modules/path-is-absolute/package.json
/Users/mike/dev/react/TestTest/node_modules/jest-cli/node_modules/node-haste/lib/loader/ResourceLoader.js:88
      throw err;
      ^

Error: EMFILE: too many open files, open '/Users/mike/dev/react/TestTest/node_modules/react-native/node_modules/yeoman-environment/node_modules/globby/node_modules/glob/node_modules/path-is-absolute/package.json'
    at Error (native)
npm ERR! Test failed.  See above for more details.

What is interesting to me is that the path listed in the error points to a file in the node_modules directory, which I expected would not be read because of the node_modules entry in testPathIgnorePatterns.

I'm running Node 4.2.1, my install of React Native is only a week old, and I installed Jest today, so I think I'm up to date with everything. I'm on a Mac.

I have run sudo ulimit -n 10240, closed all Terminal windows, and even tried a reboot. (In my .bash_profile I had previously added ulimit -n 1024, and I've tried even larger numbers.)
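
One hedged observation: ulimit is a shell builtin, so sudo ulimit -n 10240 runs in a throwaway process and cannot change the shell the tests run from, and a .bash_profile entry only affects new login shells. A quick check in the same terminal that runs the tests:

ulimit -n 10240   # no sudo: raise the soft limit for this shell
ulimit -n         # verify it took effect before running the tests
npm test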

To make sure the problem is not just in my own project, I created a new project with react-native init TestTest and made RN's suggested changes to the package.json:

{
  "name": "TestTest",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node_modules/react-native/packager/packager.sh",
    "test": "jest"
  },
  "dependencies": {
    "react-native": "^0.14.1"
  },
  "jest": {
    "scriptPreprocessor": "node_modules/react-native/jestSupport/scriptPreprocess.js",
    "setupEnvScriptFile": "node_modules/react-native/jestSupport/env.js",
    "testPathIgnorePatterns": [
      "/node_modules/",
      "packager/react-packager/src/Activity/"
    ],
    "testFileExtensions": [
      "js"
    ],
    "unmockedModulePathPatterns": [
      "promise",
      "source-map"
    ]
  },
  "devDependencies": {
    "jest-cli": "^0.7.1"
  }
}

But I'm getting the same error every time.


Source: (StackOverflow)

Linux per-process resource limits - a deep Red Hat Mystery

I have my own multithreaded C program which scales in speed smoothly with the number of CPU cores. I can run it with 1, 2, 3, etc. threads and get linear speedup, up to about 5.5x on a 6-core CPU on an Ubuntu Linux box.

I had an opportunity to run the program on a very high-end Sunfire x4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads. But it runs at the same speed as just TWO threads!

Much hair-pulling and debugging later, I see that my program really is creating all the threads and they really are running simultaneously, but the threads themselves are slower than they should be. Two threads run about 1.7x faster than one, but 3, 4, 8, 10, or 16 threads all top out at a net 1.9x! I can see all the threads are running (not stalled or sleeping); they're just slow.

To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores and they really do run at full speed and there really is enough RAM (in fact this machine has 64GB, and I only use 1GB per process).

So, my question is whether there's some OPERATING SYSTEM explanation, perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine.

Clues are:

  1. My program does not access the disk or network. It's CPU limited. Its speed scales linearly on a single CPU box in Ubuntu Linux with a hexacore i7 for 1-6 threads. 6 threads is effectively 6x speedup.
  2. My program never runs faster than 2x speedup on this 16 core Sunfire Xeon box, for any number of threads from 2-16.
  3. Running 16 copies of my program single threaded runs perfectly, all 16 running at once at full speed.
  4. top shows 1600% of CPUs allocated. /proc/cpuinfo shows all 16 cores running at full 2.9GHz speed (not low frequency idle speed of 1.6GHz)
  5. There's 48GB of RAM free, it is not swapping.

What's happening? Is there some process CPU limit policy? How could I measure it if so? What else could explain this behavior?
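
A few hedged starting points for the "how could I measure it" part: the kernel exposes both the per-process resource limits and the CPU affinity actually in effect, which would reveal an rlimit, a restricted affinity mask, or a similar policy. <pid> below is a placeholder for the running program's PID:

cat /proc/<pid>/limits                       # the rlimits the process is really running under
taskset -pc <pid>                            # the set of CPUs the scheduler is allowed to use
grep -i cpus_allowed /proc/<pid>/status      # the same information from the status file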

Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!


Source: (StackOverflow)

Need to "calculate" optimum ulimit and fs.file-max values according to my own server needs

Need to "calculate" optimum ulimit and fs.file-max values according to my own server needs. Please do not conflict with "how to set those limits in various Linux distros" questions.

I am asking:

  1. Is there a good guide that explains, in detail, the parameters used by ulimit (for 2.6-series and later kernels)?
  2. Is there a good guide that shows how to derive fs.file-max from usage metrics?

Actually, the only reference I could find on the net is old: http://www.faqs.org/docs/securing/chap6sec72.html suggests "something reasonable like 256 for every 4 MB of RAM we have: i.e. for a machine with 128 MB of RAM, set it to 8192 (128/4 = 32, 32 * 256 = 8192)".

Any up to date reference is appreciated.
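
In the absence of an authoritative guide, one hedged approach is to size fs.file-max from observed usage rather than from a RAM formula: /proc/sys/fs/file-nr reports how many file handles are allocated, how many of those are unused, and the current maximum. Watch the first number under peak load and leave generous headroom (the value below is illustrative):

cat /proc/sys/fs/file-nr        # e.g. "3392  0  814022": allocated, unused, maximum
sysctl -w fs.file-max=500000    # illustrative value; the per-process ulimit -n cap still applies on top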


Source: (StackOverflow)

bash fork error (Resource temporarily unavailable) does not stop, and keeps showing up every time I try to kill/reboot

I mistakenly used a resource-limited server as an iperf server for 5000 parallel connections (the limit is 1024 processes). Now every time I log in, I see this:

-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable

Then, I try to kill them, but when I do ps, I get this:

-bash-4.1$ ps
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable

The same happens when I run killall or similar commands. I have even tried to reboot the system, but this is what happens when I try:

-bash-4.1$ sudo reboot
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
-bash-4.1$ 

So basically I cannot do anything; every command fails with this error. I can, however, run "exit".

This is an off-site server that I do not have physical access to, so I cannot turn it off/on physically.

Any ideas how I can fix this problem? I highly appreciate any help.
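
One hedged way out, assuming the runaway iperf processes belong to your own user: kill is a bash builtin, so it does not need to fork, and exec replaces the shell instead of forking a child, so both still work when the process limit is exhausted. Note that the first command also terminates your own shell and logs you out:

kill -9 -1                        # builtin: signal every process you are allowed to signal
# or, more selectively, replace the shell with a single pkill invocation:
exec pkill -9 -u "$USER" iperf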


Source: (StackOverflow)

close on socket not releasing file descriptor

When conducting a stress test on some server code I wrote, I noticed that even though I am calling close() on the descriptor (and checking the result for errors), the descriptor is not released, which eventually causes accept() to fail with "Too many open files".

Now I understand that this is because of the ulimit; what I don't understand is why I am hitting it if I call close() after each synchronous accept/read/send cycle.

I am validating that the descriptors are in fact there by running a watch with lsof:

ctsvr  9733 mike 1017u  sock     0,7      0t0 3323579 can't identify protocol
ctsvr  9733 mike 1018u  sock     0,7      0t0 3323581 can't identify protocol
...

And sure enough there are about 1000 or so of them. Furthermore, checking with netstat I can see that there are no hanging TCP states (no WAIT or STOPPED or anything).

If I simply do a single connect/send/recv from the client, I do notice that the socket does stay listed in lsof; so this is not even a load issue.

The server is running on an Ubuntu Linux 64-bit machine.

Any thoughts?
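
A couple of hedged checks that may narrow this down: counting the descriptors the process actually holds (9733 is the PID from the lsof output above) and comparing against the kernel's socket summary shows whether the problem really is unclosed descriptors rather than lingering TCP state:

ls /proc/9733/fd | wc -l                     # number of descriptors the server currently holds
watch -n1 'ls /proc/9733/fd | wc -l'         # watch it grow (or not) during the stress test
ss -s                                        # kernel socket summary for comparison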


Source: (StackOverflow)

Duplicity doesn't like max open files setting on Mavericks

I use duplicity to back up some files. I'm now trying to restore to my Mac to test the backup, but I get the following error:

> duplicity me@backupserver.io/backup_dr ~/restored_files
Max open files of 256 is too low, should be >= 1024.
Use 'ulimit -n 1024' or higher to correct.

So I try:

sudo ulimit -n 1024

It appears to succeed, but then I run:

> ulimit -a
...
open files                      (-n) 256
...

How do you actually get the limit to change? I've Googled with no luck :(
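
A hedged note on why the setting does not stick: ulimit is a shell builtin, so sudo ulimit -n 1024 executes in a separate process and cannot change the current shell, and on OS X the per-process ceiling also comes from launchd. Something like the following, run in the same terminal as duplicity, may work (the launchctl line is only for inspection):

ulimit -n 1024                               # no sudo: raise the soft limit for this shell only
ulimit -n                                    # verify: should now print 1024
launchctl limit maxfiles                     # inspect the launchd soft/hard ceilings if the above is refused
duplicity me@backupserver.io/backup_dr ~/restored_files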


Source: (StackOverflow)

ulimit -Sc 1024 vs. ulimit -Hc 1024

What do hard and soft limits mean for the core file size?

I usually put ulimit -c unlimited in my script before running a binary.
However, I now want to limit the core file size to avoid filling the disk,
and I wonder which is the best way:

ulimit -Sc 1024  #Soft
ulimit -Hc 1024  #Hard
ulimit  -c 1024  #Both

One more question: what about the value?
ulimit -c 1024, ulimit -c 10240, or something else?
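
For orientation, and hedged to bash behaviour: the soft limit is what the kernel actually enforces, and a process may move it up or down, but only up to the hard limit; the hard limit can be lowered by an unprivileged process but never raised again, and plain ulimit -c sets both at once. A small illustration:

ulimit -Hc 1024    # hard ceiling: cannot be raised again in this shell without privilege
ulimit -Sc 1024    # soft limit: the value enforced when a core file is written
ulimit -Sc 2048    # rejected, because it exceeds the hard ceiling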


Source: (StackOverflow)

ulimit -t under ubuntu

I am running Ubuntu Linux (2.6.28-11-generic #42-Ubuntu SMP Fri Apr 17 01:57:59 UTC 2009 i686 GNU/Linux) and it seems that the command "ulimit -t" does not work properly. I ran:

ulimit -t 1; myprogram

where 'myprogram' is an endless loop. I expected the program to be interrupted after 1 second, but it did not stop. I tried the same thing on a Linux Fedora installation and it worked as expected.

Is there some configuration that has to be set for it to work properly?

-- tsf
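
For reference, ulimit -t limits CPU seconds rather than wall-clock time, and a limit set in a shell applies to the shell itself plus everything it later runs. A hedged way to confine the limit to the test program and to confirm what the child actually inherited (myprogram is the placeholder from the question):

bash -c 'ulimit -t 1; exec ./myprogram'                                  # only the test program gets the 1-second CPU limit
bash -c 'ulimit -t 1; exec cat /proc/self/limits' | grep -i "cpu time"   # what a child really inherits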


Source: (StackOverflow)

How do I set a ulimit from inside a Perl script that applies to its children?

I have a Perl script that does various installation steps to set up a development box for our company. It runs various shell scripts, some of which crash due to lower than required ulimits (specifically, stack size -s in my case).

Therefore, I'd like to set a ulimit that would apply to all scripts (children) started from within my main Perl one, but I am not sure how to achieve that - any attempts at calling ulimit from within the script only set it on that specific child shell, which immediately exits.

I am aware that I can call ulimit before I run the Perl script or use /etc/security/limits.conf but I don't want the user to know any of this - they should only know how to run the script, which should take care of all of that for them.

I can also run ulimit every time I run a command, like this ulimit -s BLA; ./cmd but I don't want to duplicate this every time and I feel like there's a better, cleaner solution out there.

Another crazy "workaround" is to make a wrapper script called BLA.sh which would set the ulimit and call BLA.pl, but again, that's a hack in my mind, and now I'd have two scripts. (I could even make BLA.pl call itself with "ulimit -s BLA; ./BLA.pl --foo" and act differently based on whether it sees --foo, but that's even hackier than before.)

Finally, apparently I could install BSD::Resource but I'd like to avoid using external dependencies.

So what is THE way to set the ulimit from within a Perl script and make it apply to all children?

Thank you.
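
For completeness, a minimal sketch of the wrapper idea dismissed above as a hack: limits set by a parent are inherited across fork/exec, so raising them once in a wrapper and exec'ing the real script makes them apply to every child without touching limits.conf. The file name and the stack value are illustrative:

#!/bin/bash
# setup-wrapper.sh (illustrative): raise the limits once, then become the Perl script
ulimit -s 65536 || exit 1
exec /path/to/setup.pl "$@"

Within pure Perl, the equivalents are BSD::Resource or having the script re-exec itself under a shell that has already set the limit, both of which the question itself mentions.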


Source: (StackOverflow)

Why ulimit can't limit resident memory successfully and how?

I start a new bash shell, and execute:

ulimit -m 102400
ulimit -a
"
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) 102400
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
"

Then I compile a huge project. Linking it uses a lot of memory, more than 2 GB, and as a result the ld process used more than 2 GB of resident memory.

Is something wrong here? How can I use ulimit, or some other program, to limit resident memory?

The reason for limiting resident memory is that the computer freezes when one process uses up almost all of it.
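
A hedged explanation of the observation above: recent Linux kernels do not enforce RLIMIT_RSS (the -m limit) at all, so the link step sails past it. What can be limited is the address space (-v, RLIMIT_AS), or actual RAM via the cgroup memory controller if the libcgroup tools and the cgroup v1 hierarchy are available; the values and the "buildbox" name below are illustrative:

bash -c 'ulimit -v 2097152; exec make'      # 2 GB address-space cap: ld fails to allocate beyond it instead of freezing the box
# or cap resident memory with a cgroup, letting the excess go to swap:
sudo cgcreate -g memory:/buildbox
echo $((2*1024*1024*1024)) | sudo tee /sys/fs/cgroup/memory/buildbox/memory.limit_in_bytes
sudo cgexec -g memory:/buildbox make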


Source: (StackOverflow)

Docker Ignores limits.conf (trying to solve "too many open files" error)

I'm running a web server that is handling many thousands of concurrent web socket connections. For this to be possible, on Debian linux (my base image is google/debian:wheezy, running on GCE), where the default number of open files is set to 1000, I usually just set the ulimit to the desired number (64,000).

This works out great, except that when I dockerized my application and deployed it, I found that Docker seems to ignore the limit definitions. I have tried the following (all on the host machine, not in the container itself):

MAX=64000

sudo bash -c "echo \"* soft nofile $MAX\" >> /etc/security/limits.conf"

sudo bash -c "echo \"* hard nofile $MAX\" >> /etc/security/limits.conf"

sudo bash -c "echo \"ulimit -c $MAX\" >>  /etc/profile"

ulimit -c $MAX

After doing some research I found that people were able to solve a similar issue by doing this:

sudo bash -c "echo \"limit nofile 262144 262144\" >> /etc/init/docker.conf"

and rebooting / restarting the docker service.

However, all of the above fail: I get the "too many open files" error when my app runs inside the container (running the same setup without Docker avoids the problem).

I have tried to run ulimit -a inside the container to check whether the ulimit setup worked, but invoking ulimit inside the container throws an error saying ulimit is not an executable on the PATH.

Has anyone run into this, and/or can you suggest a way to get Docker to recognize the limits?
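
A hedged suggestion: a container's limits come from the Docker daemon, not from the host shell or /etc/security/limits.conf, so the host-side changes above never reach the container. If the Docker version in use is 1.6 or newer, the limit can be passed per container (or set daemon-wide with --default-ulimit); the image name and values below are illustrative, and <container> stands for the running container's name or ID:

docker run --ulimit nofile=64000:64000 my-websocket-image
docker exec <container> sh -c 'ulimit -n'    # check from inside, going through sh because ulimit is a builtin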

Cheers,

Or


Source: (StackOverflow)