profiling
An interactive continuous Python profiler.
I’ve got a lot of plugins enabled when using Vim – I have collected plugins over the years. I’m a bit fed up with how long Vim takes to start now, so I’d like to profile its startup and see which of the many plugins I have are responsible.
Is there any way to profile Vim’s startup or script-running? Ideally I’d like to know how long Vim spends in each Vim script it loads.
Source: (StackOverflow)
What is the difference between the 'self' and 'total' columns in the Chrome CPU profiling of JS code?
Source: (StackOverflow)
I have a production server running with the following flag: -XX:+HeapDumpOnOutOfMemoryError
Last night it generated a java-38942.hprof file when our server encountered a heap error. It turns out that the developers of the system knew of the flag but had no way to get any useful information from the dump.
Any ideas?
Source: (StackOverflow)
My bash shell takes up to 3-4 seconds to start up, while if I start it with --norc it runs immediately.
I started "profiling" /etc/bash.bashrc and ~/.bashrc by manually inserting return statements and looking for speed improvements, but it is not a quantitative process and it is not efficient.
How can I profile my bash scripts and see which commands take the most time during startup?
Source: (StackOverflow)
I have a C++ application I'm in the process of optimizing. What tool can I use to pinpoint my slow code?
Source: (StackOverflow)
def main():
    for i in xrange(10**8):
        pass

main()
This piece of code in Python runs in
real 0m1.841s
user 0m1.828s
sys 0m0.012s
However, if the for loop isn't placed within a function,
for i in xrange(10**8):
    pass
then it runs for a much longer time:
real 0m4.543s
user 0m4.524s
sys 0m0.012s
Why is this?
Note: the timing is done with the time builtin in bash on Linux.
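For what it's worth, the difference is visible with the standard library's dis module: in CPython, a loop variable inside a function is a local and is written with the indexed STORE_FAST opcode, while at module level it is a global and every iteration goes through a dictionary write (STORE_NAME). A minimal sketch:

import dis

def main():
    for i in xrange(10**8):
        pass

# Inside a function: "i" is a local, written with STORE_FAST,
# an indexed store into a fixed-size array of locals.
dis.dis(main)

# At module level: "i" is a global, written with STORE_NAME,
# a dictionary update on every iteration.
dis.dis(compile("for i in xrange(10**8): pass", "<module>", "exec"))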
Source: (StackOverflow)
I want to know the memory usage of my Python application, and specifically which code blocks/portions or objects are consuming the most memory.
A Google search shows that a commercial option is Python Memory Validator, and open-source ones are PySizer and Heapy.
I haven't tried any of them, so I wanted to know which one is the best, considering that it:
Gives the most detail.
Requires the fewest (or no) changes to my code.
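For reference, Heapy needs only a couple of lines dropped into the program under test, which fits the requirement of few or no code changes; a minimal sketch, assuming the guppy package that ships Heapy is installed:

from guppy import hpy

hp = hpy()
hp.setrelheap()   # only count allocations made from this point on
# ... run the code you want to measure ...
print(hp.heap())  # breakdown of live objects by type, count and size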
Source: (StackOverflow)
My memory usage increases over time and restarting Django is not kind to users.
I am unsure how to go about profiling the memory usage but some tips on how to start measuring would be useful.
I have a feeling that there are some simple steps that could produce big gains. Ensuring 'debug' is set to 'False' is an obvious biggie.
Can anyone suggest others? How much improvement would caching bring on low-traffic sites?
In this case I'm running under Apache 2.x with mod_python. I've heard mod_wsgi is a bit leaner but it would be tricky to switch at this stage unless I know the gains would be significant.
Edit: Thanks for the tips so far. Any suggestions how to discover what's using up the memory? Are there any guides to Python memory profiling?
Also, as mentioned, there are a few things that will make it tricky to switch to mod_wsgi, so I'd like to have some idea of the gains I could expect before ploughing forward in that direction.
Edit: Carl posted a slightly more detailed reply here that is worth reading: http://stackoverflow.com/questions/488864/django-deployment-cutting-apaches-overhead
Edit: Graham Dumpleton's article is the best I've found on the MPM and mod_wsgi related stuff. I am rather disappointed that no-one could provide any info on debugging the memory usage in the app itself though.
Final Edit: Well I have been discussing this with Webfaction to see if they could assist with recompiling Apache and this is their word on the matter:
"I really don't think that you will get much of a benefit by switching to an MPM Worker + mod_wsgi setup. I estimate that you might be able to save around 20MB, but probably not much more than that."
So! This brings me back to my original question (which I am still none the wiser about). How does one go about identifying where the problem lies? It's a well-known maxim that you don't optimize without testing to see where you need to optimize, but there is very little in the way of tutorials on measuring Python memory usage and none at all specific to Django.
Thanks for everyone's assistance but I think this question is still open!
Another final edit ;-)
I asked this on the django-users list and got some very helpful replies
Honestly the last update ever!
This was just released. Could be the best solution yet: Profiling Django object size and memory usage with Pympler
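Since Pympler came up, here is a minimal sketch of what that looks like (assuming Pympler is installed); the muppy and summary modules can be run from a Django view or a shell session to get a per-type breakdown of live objects:

from pympler import muppy, summary

all_objects = muppy.get_objects()               # every object the garbage collector can reach
summary.print_(summary.summarize(all_objects))  # table of types, counts and total sizes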
Source: (StackOverflow)
Xdebug offers the configuration directive "xdebug.profiler_enable_trigger", which allows you to activate profiling by passing the GET or POST parameter "XDEBUG_PROFILE" when calling a script via HTTP. This is handy if you don't want profiling for ALL of your scripts but only for a few special cases, without always changing your PHP configuration.
Is there a way to achieve the same behavior for command-line PHP programs? I tried to pass "XDEBUG_PROFILE" as a command-line argument but it didn't work.
In general, profiling command line PHP works well, but I'd like to have the same per-call-flexibility as with a browser and HTTP server.
Any suggestions?
Source: (StackOverflow)
While solving some Project Euler problems to learn Haskell (so currently I'm a complete beginner) I came across Problem 12. I wrote this (naive) solution:
--Get Number of Divisors of n
numDivs :: Integer -> Integer
numDivs n = toInteger $ length [x | x <- [2..((n `quot` 2) + 1)], n `rem` x == 0] + 2

--Generate a List of Triangular Values
triaList :: [Integer]
triaList = [foldr (+) 0 [1..n] | n <- [1..]]

--The same recursive
triaList2 = go 0 1
  where go cs n = (cs + n) : go (cs + n) (n + 1)

--Finds the first triangular Value with more than n Divisors
sol :: Integer -> Integer
sol n = head $ filter (\x -> numDivs x > n) triaList2
This solution for n = 500 (sol 500) is extremely slow (it has been running for more than 2 hours now), so I wondered how to find out why it is so slow. Are there any commands that tell me where most of the computation time is spent, so I know which part of my Haskell program is slow? Something like a simple profiler.
To make it clear, I'm not asking for a faster solution but for a way to find that solution. How would you start if you had no Haskell knowledge?
I tried to write two triaList functions but found no way to test which one is faster, so that's where my problems start.
Thanks
Source: (StackOverflow)
I need to find where the bottlenecks are in my Android app.
What profiling tools or techniques can I use?
Source: (StackOverflow)
I have a piece of javascript code that I am executing using the node.js interpreter.
for (var i = 1; i < LIMIT; i++) {
    db.users.save({id: i, name: "MongoUser [" + i + "]"}, function(err, saved) {
        if (err || !saved) console.log("Error");
        else console.log("Saved");
    });
}
I want to know how to measure the time taken by these db insert operations. I could compute the difference of Date values before and after this piece of code, but that would be incorrect because of the asynchronous nature of the code.
Source: (StackOverflow)