user system total real
0.010000 0.000000 0.010000 ( 0.009087)
0.010000 0.000000 0.010000 ( 0.008774)
0.000000 0.000000 0.000000 ( 0.004621)
Perhaps your machine is more deterministic than mine, but successive
runs of that benchmark (and using #bmbm to be safer about the
measurement) sometimes show 'a' faster than "a", sometimes slower.
Even with benchmarking, I wouldn't trust answers that are within a
few percent of each other. And I certainly wouldn't rush off to refactor
code because of it.
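For reference, #bmbm runs a rehearsal pass before the measured pass, which absorbs warm-up and allocation noise. A minimal sketch (the labels and the value of n here are illustrative, not from the benchmark above):

```ruby
require 'benchmark'

n = 100_000

# bmbm runs every block twice: a "Rehearsal" pass to warm things up,
# then the pass whose numbers you should actually read.
results = Benchmark.bmbm do |x|
  x.report("'a'") { n.times { 'a'.length } }
  x.report('"a"') { n.times { "a".length } }
end
```

bmbm returns one Benchmark::Tms object per report, taken from the second (measured) pass.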
To understand the difference, just think about how many strings are being created with each.
'a' creates a new string, as does 'b'.
The + operation creates a new string, as well.
So, there's a lot of new string creation happening with either of the + examples.
Change the +'s to << and you will see a difference.
'a' << x << 'b'
<< just modifies the old String in place; no new String is created for the receiver.
The "a#{x}b" example does the least work.
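A quick way to see the difference in irb (a small illustrative snippet, not from the original post):

```ruby
x = "x"

# << appends to the receiver in place: same object before and after
s = 'a'
before = s.object_id
s << x << 'b'
puts s                      # "axb"
puts s.object_id == before  # true -- still the same String

# + builds a brand-new String at every step
t = 'a' + x + 'b'
puts t                      # "axb"
puts t.equal?(s)            # false -- a different object
```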
Kirk Haines
···
On Wed, 10 Jan 2007, Gavin Kistner wrote:
From: Vincent Fourmond
The thing is, they are rigorously equivalent. As soon as the program
is parsed, they are represented as exactly the same kind of object, a String.
So they are the same; that's why you get about the same processing times.
Moreover, it is normal that interpolation is faster, because it
involves only evaluating x as a String plus a string copy, whereas
addition involves two method calls (two calls to +), which are rather expensive
(at least more expensive than a string copy for such small strings).
require 'benchmark'

# n, c and the constants A/B are not shown in the original post;
# plausible definitions are filled in so the snippet runs standalone
n = 500_000
c = 'x'
A = 'a'
B = 'b'

Benchmark.bmbm do |x|
  x.report("'a' +")   { n.times { 'a' + c + 'b' } }
  x.report('"a" +')   { n.times { "a" + c + "b" } }
  x.report('a#{')     { n.times { "a#{c}b" } }
  x.report('"" <<')   { n.times { "" << "a" << c << "b" } }
  x.report('"a" <<')  { n.times { "a" << c << "b" } }
  x.report('A +')     { n.times { A + c + B } }
  x.report('"" << A') { n.times { "" << A << c << B } }
end
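On a modern Ruby (2.2 or later; this did not exist in the interpreter of this era) you can also count allocations directly instead of timing them, which sidesteps the measurement-noise problem entirely. A sketch using GC.stat:

```ruby
c = 'x'
n = 10_000

# + path: each pass copies the 'a' and 'b' literals and builds
# intermediate result Strings, so several objects per iteration
before = GC.stat[:total_allocated_objects]
n.times { 'a' + c + 'b' }
plus_allocs = GC.stat[:total_allocated_objects] - before

# interpolation path: roughly one new String per iteration
before = GC.stat[:total_allocated_objects]
n.times { "a#{c}b" }
interp_allocs = GC.stat[:total_allocated_objects] - before

puts plus_allocs > interp_allocs  # true: + allocates noticeably more
```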
robert
···
On 09.01.2007 22:17, khaines@enigo.com wrote:
On Wed, 10 Jan 2007, Gavin Kistner wrote:
From: Vincent Fourmond
Increase n from 5000 to 500000 or 5000000.
This report shows the user CPU time, system CPU time, the sum of the
user and system CPU times, and the elapsed real time. The unit of time
is seconds.
i.e. time spent in user mode, kernel mode, user+kernel (for these only
time spent by this particular program is counted) and elapsed real
("wallclock" time)
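Those four columns map directly onto the accessors of the Benchmark::Tms object that #measure returns; a small sketch:

```ruby
require 'benchmark'

tms = Benchmark.measure { 100_000.times { "a" + "b" } }

puts tms.utime  # user CPU time, in seconds
puts tms.stime  # system (kernel) CPU time
puts tms.total  # utime + stime (plus child-process times, if any)
puts tms.real   # elapsed wall-clock time
```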
···
On 1/10/07, Jason Mayer <slamboy@gmail.com> wrote:
On 1/10/07, Robert Klemme <shortcutter@googlemail.com> wrote:
Excellent, thanks. I started to benchmark my quiz submission's file
reading and, uh, RAM usage on the Ruby process skyrocketed to 200MB. Does
Benchmark not play well with blocks of code?
···
On 1/10/07, Jan Svitok <jan.svitok@gmail.com> wrote:
On 1/10/07, Jason Mayer <slamboy@gmail.com> wrote:
> What do the different columns actually mean?
So I tried to benchmark my code, and it works for very small numbers of
tests.
2 repetitions:
user system total real
0.871000 0.010000 0.881000 ( 0.902000)
5 repetitions:
user system total real
2.323000 0.050000 2.373000 ( 3.024000)
15 repetitions, however, gives the same problem: five minutes after I
started the program, I killed it.
Can someone help me understand if this is a problem with benchmark or with
my code?
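Benchmark itself only yields to your block and records times; it does not hold onto your data, so runaway memory almost always comes from the block's own allocations. One thing worth trying (an illustrative sketch with a stand-in workload, not your actual quiz code) is forcing a GC between reports, so one test's garbage can't snowball into the next:

```ruby
require 'benchmark'

# stand-in input; substitute your real file-reading workload
lines = Array.new(10_000) { |i| "field#{i},value#{i}\n" }

results = Benchmark.bm(7) do |x|
  x.report('split') { lines.each { |l| l.split(',') } }
  GC.start  # collect the previous report's garbage before timing the next
  x.report('chomp') { lines.each { |l| l.chomp } }
end
```

If memory still climbs with GC forced between runs, the block is genuinely retaining objects (e.g. appending to an array that outlives each repetition), and Benchmark is just the messenger.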
···
On 1/10/07, Jason Mayer <slamboy@gmail.com> wrote: