Memory

How to measure actual memory usage of an application or process?

How do you measure the memory usage of an application or process in Linux?

According to the blog article Understanding memory usage on Linux, "ps" is not an accurate tool for this purpose.

Why ps is "wrong"

Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely "wrong".


Source: (StackOverflow)

What is "cache-friendly" code?

Could someone possibly give an example of "cache unfriendly code" and the "cache friendly" version of that code?

How can I make sure I write cache-efficient code?


Source: (StackOverflow)

Advertisements

In Java, what is the best way to determine the size of an object?

For example, let's say I have an application that can read in a CSV file with piles of data rows. I give the user a summary of the number of rows based on types of data, but I want to make sure that I don't read in too many rows of data and cause OutOfMemoryErrors. Each row translates into an object. Is there an easy way to find out the size of that object programmatically? Is there a reference that defines how large primitive types and object references are for a VM?

Right now, I have code that says read up to 32,000 rows, but I'd also like to have code that says read as many rows as possible until I've used 32MB of memory. Maybe that is a different question, but I'd still like to know.


Source: (StackOverflow)

How do I determine the size of my array in C?

How do I determine the size of my array in C?

That is, the number of elements the array can hold?


Source: (StackOverflow)

get OS-level system information

I'm currently building a Java app that could end up being run on many different platforms, but primarily variants of Solaris, Linux and Windows.

Has anyone been able to successfully extract information such as the current disk space used, CPU utilisation and memory used in the underlying OS? What about just what the Java app itself is consuming?

Preferably, I'd like to get this information without using JNI.


Source: (StackOverflow)

"register" keyword in C?

What does the register keyword do in the C language? I have read that it is used for optimizing, but it is not clearly defined in any standard. Is it still relevant and, if so, when would you use it?


Source: (StackOverflow)

Which is faster: Stack allocation or Heap allocation

This question may sound fairly elementary, but this is a debate I had with another developer I work with.

I was taking care to stack allocate things where I could, instead of heap allocating them. He was talking to me and watching over my shoulder and commented that it wasn't necessary because they are the same performance wise.

I was always under the impression that growing the stack was constant time, and that heap allocation's performance depended on the current complexity of the heap, both for allocation (finding a hole of the proper size) and for de-allocation (collapsing holes to reduce fragmentation, which many standard library implementations take time to do during deletes, if I am not mistaken).

This strikes me as something that would probably be very compiler dependent. For this project in particular I am using a Metrowerks compiler for the PPC architecture. Insight on this combination would be most helpful, but in general, for GCC, and MSVC++, what is the case? Is heap allocation not as high performing as stack allocation? Is there no difference? Or are the differences so minute it becomes pointless micro-optimization.


Source: (StackOverflow)

How to determine CPU and memory consumption from inside a process?

I once had the task of determining the following performance parameters from inside a running application:

  • Total virtual memory available
  • Virtual memory currently used
  • Virtual memory currently used by my process
  • Total RAM available
  • RAM currently used
  • RAM currently used by my process
  • % CPU currently used
  • % CPU currently used by my process

The code had to run on Windows and Linux. Even though this seems to be a standard task, finding the necessary information in the manuals (WIN32 API, GNU docs) as well as on the Internet took me several days, because there's so much incomplete/incorrect/outdated information on this topic to be found out there.

In order to save others from going through the same trouble, I thought it would be a good idea to collect all the scattered information plus what I found by trial and error here in one place.


Source: (StackOverflow)

What are the dangers when creating a thread with a stack size of 50x the default?

I'm currently working on a very performance-critical program, and one path I decided to explore that may help reduce resource consumption was increasing my worker threads' stack size so I can move most of the data (float[]s) that I'll be accessing onto the stack (using stackalloc).

I've read that the default stack size for a thread is 1 MB, so in order to move all my float[]s I would have to expand the stack by approximately 50 times (to ~50 MB).

I understand this is generally considered "unsafe" and isn't recommended, but after benchmarking my current code against this method, I've discovered a 530% increase in processing speed! So I cannot simply pass over this option without further investigation, which leads me to my question: what are the dangers associated with increasing the stack to such a large size (what could go wrong), and what precautions should I take to minimise such dangers?

My test code:

public static unsafe void TestMethod1()
{
    float* samples = stackalloc float[12500000];

    for (var ii = 0; ii < 12500000; ii++)
    {
        samples[ii] = 32768;
    }
}

public static void TestMethod2()
{
    var samples = new float[12500000];

    for (var i = 0; i < 12500000; i++)
    {
        samples[i] = 32768;
    }
}

Source: (StackOverflow)

How do I determine the size of an object in Python?

In C, we can find the size of an int, char, etc. I want to know how to get size of objects like a string, integer, etc. in Python.

Related question: How many bytes per element are there in a Python list (tuple)?

I am using an XML file which contains size fields that specify the size of the value. I must parse this XML and do my coding. When I want to change the value of a particular field, I will check the size field of that value. Here I want to compare whether the new value that I'm going to enter is of the same size as in the XML, so I need to check the size of the new value. In the case of a string I can say it's the length, but in the case of int, float, etc. I am confused.


Source: (StackOverflow)

Why does appending "" to a String save memory?

I used a variable with a lot of data in it, say String data. I wanted to use a small part of this string in the following way:

this.smallpart = data.substring(12,18);

After some hours of debugging (with a memory visualizer) I found out that the object's field smallpart remembered all the data from data, although it only contained the substring.

When I changed the code into:

this.smallpart = data.substring(12,18)+""; 

...the problem was solved! Now my application uses very little memory!

How is that possible? Can anyone explain this? I think this.smallpart kept a reference to data, but why?

UPDATE: How can I clear the big String then? Will data = new String(data.substring(0,100)) do the job?


Source: (StackOverflow)

What happens when a computer program runs?

I know the general theory but I can't fit in the details.

I know that a program resides in the secondary memory of a computer. Once the program begins execution, it is entirely copied to the RAM. Then the processor retrieves a few instructions at a time (depending on the size of the bus), puts them in registers, and executes them.

I also know that a computer program uses two kinds of memory: stack and heap, which are also part of the primary memory of the computer. The stack is used for non-dynamic memory, and the heap for dynamic memory (for example, everything related to the new operator in C++).

What I can't understand is how those two things connect. At what point is the stack used for the execution of the instructions? Instructions go from the RAM, to the stack, to the registers?


Source: (StackOverflow)

Virtual Memory Usage from Java under Linux, too much memory used

I have a problem with a Java application running under Linux.

When I launch the application, using the default maximum heap size (64 MB), I see using the top utility that 240 MB of virtual memory are allocated to the application. This creates some issues with some other software on the computer, which is relatively resource-limited.

The reserved virtual memory will not be used anyway, as far as I understand, because once we reach the heap limit an OutOfMemoryError is thrown. I ran the same application under Windows and I see that the virtual memory size and the heap size are similar.

Is there any way that I can configure the virtual memory in use for a Java process under Linux?

Edit 1: The problem is not the heap. The problem is that if I set a heap of 128 MB, for example, Linux still allocates 210 MB of virtual memory, which is never needed.

Edit 2: Using ulimit -v allows limiting the amount of virtual memory. If the size set is below 204 MB, then the application won't run even though it doesn't need 204 MB, only 64 MB. So I want to understand why Java requires so much virtual memory. Can this be changed?

Edit 3: There are several other applications running in the system, which is embedded. And the system does have a virtual memory limit. (from comments, important detail)


Source: (StackOverflow)

How dangerous is it to access an array out of bounds?

How dangerous is accessing an array outside of its bounds (in C)? It can sometimes happen that I read from outside the array (I now understand that I then access memory used by some other parts of my program, or even beyond that) or that I try to set a value at an index outside the array. The program sometimes crashes, but sometimes just runs, only giving unexpected results.

Now what I would like to know is, how dangerous is this really? If it damages my program, it is not so bad. If on the other hand it breaks something outside my program, because I somehow managed to access some totally unrelated memory, then it is very bad, I imagine. I read a lot of 'anything can happen', 'segmentation might be the least bad problem', 'your harddisk might turn pink and unicorns might be singing under your window', which is all nice, but what is really the danger?

My questions:

  1. Can reading values from way outside the array damage anything apart from my program? I would imagine just looking at things does not change anything, or would it for instance change the 'last time opened' attribute of a file I happened to reach?
  2. Can setting values way outside of the array damage anything apart from my program? From this stackoverflow question I gather that it is possible to access any memory location, that there is no safety guarantee.
  3. I now run my small programs from within Xcode. Does that provide some extra protection around my program, so that it cannot reach outside its own memory? Can it harm Xcode?
  4. Any recommendations on how to run my inherently buggy code safely?

I use OS X 10.7 and Xcode 4.6.

This is my first Stackoverflow question. I took time reading as much as I could on the subject, but I probably missed many resources. Let me know if you feel I did not do enough research and/or you see other problems with this question.


Source: (StackOverflow)

Can't understand this way to calculate the square of a number

I have found a function that calculates square of a number:

int p(int n) {
    int a[n]; //works on C99 and above
    return (&a)[n] - a;
}

It returns the value of n². The question is, how does it do that? After a little testing, I found that the difference between (&a)[k] and (&a)[k+1] is sizeof(a)/sizeof(int). Why is that?


Source: (StackOverflow)