Yes, that's correct. To understand what's going on, it's important to look at the resulting bytecode, as benchmarks can get a little weird. First, the non-module_eval version:
Resulting bytecode:
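The original listing and its bytecode didn't survive in this archive. A minimal sketch of what the non-module_eval version might have looked like (only the method name `aa` comes from later in the thread; the module name and body are assumptions), along with one way to dump the bytecode via RubyVM::InstructionSequence:

```ruby
# Hypothetical reconstruction: the method name `aa` comes from later in
# this thread; the module name and body are placeholders.
module DefVersion
  define_method(:aa) { 1 + 1 }
end

# Dump the bytecode for the snippet that creates the method and then
# attaches it to the module.
src = <<-RUBY
  module DefVersion
    define_method(:aa) { 1 + 1 }
  end
RUBY
disasm = RubyVM::InstructionSequence.compile(src).disasm
puts disasm
```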
As shown here, the method is created, then attached to the module. Now the module_eval version:
Resulting bytecode:
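Again the original listing is gone; a sketch of the module_eval version (only the method name `kk` comes from the thread, the rest is assumed), plus the bytecode the compiler emits at the call site:

```ruby
# Hypothetical reconstruction: only the method name `kk` comes from the
# thread; the module name and body are placeholders.
module EvalVersion
  module_eval "def kk; 1 + 1; end"
end

# At the call site the compiler only sees a string literal being pushed
# and sent to module_eval; the method body isn't compiled until runtime.
call_site = RubyVM::InstructionSequence.compile(
  %q{EvalVersion.module_eval "def kk; 1 + 1; end"}
).disasm
puts call_site
```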
Now here, instead of creating a method and attaching it to the module, it's just putting a string containing the code on the stack and sending it off to module_eval. This hits the C side of Ruby in the following manner:
So in essence, more time is spent on the C side instead of in a back-and-forth chain of bytecode calls. Note that all bytecode here was produced with MRI 1.9.3. Honestly, though, the difference between the two in the benchmark was minimal enough that I don't think it's anything to worry about. Also note that syntax highlighters will see the eval string for what it is, a literal string, which makes your code less readable!
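The benchmark itself also wasn't preserved. A sketch of the kind of comparison being discussed (module names, bodies, and the iteration count are all assumptions):

```ruby
require 'benchmark'

# Hypothetical reconstruction of the benchmark under discussion; both
# versions are redefined here so the snippet stands alone.
module DefVersion
  define_method(:aa) { 1 + 1 }
end

module EvalVersion
  module_eval "def kk; 1 + 1; end"
end

class A; include DefVersion; end
class K; include EvalVersion; end

a, k, n = A.new, K.new, 100_000

Benchmark.bmbm do |bm|
  bm.report("define_method (aa)") { n.times { a.aa } }
  bm.report("module_eval   (kk)") { n.times { k.kk } }
end
```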
Apart from showing that I have a ~much~ slower machine than yours, I
don't see anything significant in these results.
I think what you're seeing is an artefact of garbage collection.
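One way to test the garbage-collection theory (a sketch, not from the original thread) is to disable GC around the timed section and see whether the gap persists:

```ruby
require 'benchmark'

# Collect up front so we start from a clean slate, then disable GC so
# collection pauses can't skew the timing.
GC.start
GC.disable
elapsed = Benchmark.realtime { 100_000.times { 1 + 1 } }
GC.enable

puts "elapsed with GC disabled: #{elapsed}s"
```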
Okay, so I took another look at this. Since the bytecode I showed before only covered the string of code being sent to module_eval, I needed a way to get the bytecode for the result of actually running it. Fortunately, vm_eval.c has a block guarded by `if (0)` that dumps the bytecode an eval produces (with a comment saying it's for debug purposes only, which, guess what, is exactly what I'm doing!). Hooray! So I set it to 1, recompiled, and this is what I got:
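An alternative that avoids recompiling Ruby: compile the eval string with RubyVM::InstructionSequence, which runs the string through the same compiler eval uses (minus the binding), and compare against the directly written definition. The method name and body here are placeholders:

```ruby
# Bytecode for the string as eval/module_eval would compile it...
src = "def kk; 1 + 1; end"
from_string = RubyVM::InstructionSequence.compile(src).disasm

# ...versus the same definition compiled straight from source.
from_source = RubyVM::InstructionSequence.compile(<<-RUBY).disasm
  def kk
    1 + 1
  end
RUBY

puts from_string
puts from_source
```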
Line for line, it's all exactly the same (okay, with the exception of variable names). This breaks my theory that eval's behind-the-scenes generation did something unique with the code. Here are my results on Mac OS X with a 1.9.2-p290 source compile:
In this case there's even less of a difference. I'd almost go so far as to say it's just slight inaccuracies in the benchmark timings, given that the bytecode is the same.
Yes, the difference is not so critical/important, not at all. And of
course writing methods as strings to be evaluated later is a pain.
Thanks a lot.
···
2011/8/7 Chris White <cwprogram@live.com>:
This doesn't explain at all why running the method `kk` is faster, it explains why
the small snippet *creating* the method `kk` might be faster than creating the
method `aa`. You didn't show and compare the bytecode of the method `kk`
after it had been compiled by module_eval, which is what is actually run by
the loop and is measured to be faster than the bytecode inside `aa`.
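For completeness, the bytecode of `kk` itself, after module_eval has compiled it, can be dumped like this. RubyVM::InstructionSequence.disasm accepts a Method object; the module name and body are assumptions, only the method name `kk` comes from the thread:

```ruby
# Hypothetical module and body; `kk` is the method name from the thread.
module EvalVersion
  module_eval "def kk; 1 + 1; end"
end

class K
  include EvalVersion
end

# Disassemble the method itself, i.e. the bytecode that the timing loop
# actually runs, rather than the call site that created it.
kk_disasm = RubyVM::InstructionSequence.disasm(K.new.method(:kk))
puts kk_disasm
```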
Not only that, but the differences are so small that you have to
correct for inaccuracies in the internal clock's resolution.
Tenths of milliseconds are, generally speaking, too small to be
measured accurately by non-specialist (i.e. non-lab) tools.
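You can get a feel for the granularity you're fighting (a sketch, not from the thread) by measuring the smallest observable tick of Time.now:

```ruby
# Estimate the smallest time difference Time.now can observe, a rough
# lower bound on what any single timing measurement can resolve.
t1 = Time.now
t2 = Time.now
t2 = Time.now while t2 == t1
tick = t2 - t1

puts format("observable tick: ~%.6f ms", tick * 1000)
```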
···
On Tue, Aug 9, 2011 at 5:41 AM, Chris White <cwprogram@live.com> wrote: