Are Unix tools just slow?


(David Douthitt) #1

RE: sudo find / > /dev/null takes 61 seconds…

Well, find is known for being one of the biggest resource hogs in UNIX - and find / is particularly brutal. Watch your usage levels with vmstat next time you run find / …

A tip:

Instead of:

find / -name "blah"

Do this ahead of time:

find / > /.filemaster

Then your find command gets changed to this:

grep "blah" /.filemaster

…and it goes lightning fast and doesn’t hog system resources.


(Jean-Hugues ROBERT) #2

Hello,

I am optimizing some code and a profiler would help.
Unfortunately Ruby's standard profiler does not seem to work for me,
maybe because my software is multi-threaded. I just can't get
any output at all.

I tried RdProf too. I get some results, thanks to $profiler.report,
but they show weird things, like call sites that are plainly impossible.

Thanks,

Jean-Hugues


(Ned Konz) #3

Try the attached, and tell me what you get.

lineprofile.rb (1.41 KB)



Ned Konz
http://bike-nomad.com
GPG key ID: BEEA7EFE


(Jean-Hugues ROBERT) #4

Hello,

Thanks for the attached code. I did not use it directly; instead I
used it as a starting point for a profiler I added to the system I am
working on.

I enjoyed the simplicity of the scheme, where you basically measure how
long is spent per file+line using the time difference when control passes
to another file/line. Neat.

This is definitely thread safe too.
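The attachment itself was not preserved in the archive, but the time-diff scheme described above can be sketched in a few lines of Ruby using set_trace_func. The names here ($line_times, busy) are illustrative, not taken from lineprofile.rb:

```ruby
# Minimal sketch of per-line timing: whenever control reaches a new
# file:line, charge the elapsed time to the site we are leaving.
$line_times = Hash.new(0.0)
$last_site  = nil
$last_tick  = Time.now

set_trace_func proc { |event, file, line, id, binding, klass|
  now = Time.now
  $line_times[$last_site] += now - $last_tick if $last_site
  $last_site = "#{file}:#{line}"
  $last_tick = now
}

def busy
  50_000.times { |i| i * i }   # this line should accumulate most time
end
busy

set_trace_func nil             # stop tracing before reporting

# Report the "top" most expensive file:line sites.
$line_times.sort_by { |_, t| -t }.first(5).each do |site, t|
  printf "%-40s %10.6fs\n", site, t
end
```

Nothing here depends on a per-thread state beyond the last site seen, which is why the approach degrades gracefully under threading: time is always charged to whichever line was executing last.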

Thanks to some code parsing in my system I was able to consolidate the
results at the method level (adding the time spent on every line inside
that method). As a result, instead of getting a list of the “top” most
CPU-intensive lines of code, I get a list of the “top” methods.
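The post consolidates via code parsing that maps lines to methods; a simpler approximation, sketched below, keys directly on the method name set_trace_func reports (klass and id). All names here are assumptions, not from the original system:

```ruby
# Sketch of method-level consolidation: charge elapsed time to the
# method the previous event belonged to, rather than to its file:line.
$method_times = Hash.new(0.0)
$last_method  = nil
$last_tick    = Time.now

set_trace_func proc { |event, file, line, id, binding, klass|
  now = Time.now
  $method_times[$last_method] += now - $last_tick if $last_method
  $last_method = id ? "#{klass}##{id}" : nil   # id is nil at top level
  $last_tick   = now
}

def slow_square(n)
  n.times { |i| i * i }
end
def fast_noop; end

5.times { slow_square(20_000) }
5.times { fast_noop }
set_trace_func nil

# A list of "top" methods instead of "top" lines.
$method_times.sort_by { |_, t| -t }.first(3).each do |m, t|
  printf "%-30s %10.6fs\n", m, t
end
```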

The main drawback is that, of course, only the time spent inside the
method itself is available, not the time spent in the method plus any
methods it calls. This is OK for micro-optimization, but the big picture
would require timing the call flows.

The latter would require a different scheme that looks at the call
stack, much like the standard profiler does using set_trace_func(), I
guess. In that scheme one must handle thread switches carefully, or the
results are useless (for those dealing with threads, I mean).
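A call-stack scheme of that kind might look like the following sketch: frames are pushed on 'call' and popped on 'return', and each stack is keyed on Thread.current so a thread switch never pops a frame belonging to another thread. The names ($inclusive, inner, outer) are illustrative:

```ruby
# Sketch of inclusive (method + callees) timing via the call stack.
$inclusive = Hash.new(0.0)
$stacks    = Hash.new { |h, k| h[k] = [] }   # one stack per thread

set_trace_func proc { |event, file, line, id, binding, klass|
  stack = $stacks[Thread.current]
  case event
  when 'call'                       # entering a Ruby method
    stack.push(["#{klass}##{id}", Time.now])
  when 'return'                     # leaving it: charge the whole span
    name, started = stack.pop
    $inclusive[name] += Time.now - started if name
  end
}

def inner
  20_000.times { |i| i * i }
end

def outer
  3.times { inner }                 # outer's inclusive time covers inner's
end

outer
set_trace_func nil

$inclusive.sort_by { |_, t| -t }.each do |m, t|
  printf "%-30s %10.6fs\n", m, t
end
```

Unlike the per-line scheme, here outer is charged for the time spent in inner as well, which is what the "big picture" view above calls for.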

Thanks again.

Jean-Hugues



Web: http://hdl.handle.net/1030.37/1.1
Phone: +33 (0) 4 92 27 74 17