Metaruby, BFTS, Cardinal and Rubicon - State of play?

Hi,

I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
me interested and reminded me of metaruby. I did a little reading and
am wondering about the general state of play with regard to both
metaruby and bfts. Is rubicon (rubytests on rubyforge) basically dead,
awaiting the release of bfts?

Cheers,

Chris

[1] http://on-ruby.blogspot.com/2006/09/ruby-hacker-interview-kevin-tew.html

Metaruby and BFTS are moving along, slowly (for no other reason than the sheer number of projects we have on our plate, not lack of interest). We'd happily take people interested in either one.

Rubicon is dead for all intents and purposes.

···

On Sep 13, 2006, at 1:48 AM, Chris Roos wrote:

I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
me interested and reminded me of metaruby. I did a little reading and
am wondering the general state of play with regard to both metaruby
and bfts. Is rubicon (rubytests on rubyforge) basically dead, waiting
the release of bfts?

I'd like to help but am unsure whether my time and skills apply. Is
it possible to get a look at the source out of general interest?

Chris

···

On 9/13/06, Ryan Davis <ryand-ruby@zenspider.com> wrote:

On Sep 13, 2006, at 1:48 AM, Chris Roos wrote:

> I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
> me interested and reminded me of metaruby. I did a little reading and
> am wondering the general state of play with regard to both metaruby
> and bfts. Is rubicon (rubytests on rubyforge) basically dead, waiting
> the release of bfts?

Metaruby and BFTS are moving along, slowly (for no other reason than
the sheer number of projects we have on our plate, not lack of
interest). We'd happily take people interested in either one.

Ryan Davis wrote:

I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
me interested and reminded me of metaruby. I did a little reading and
am wondering the general state of play with regard to both metaruby
and bfts. Is rubicon (rubytests on rubyforge) basically dead, waiting
the release of bfts?

Metaruby and BFTS are moving along, slowly (for no other reason than the
sheer number of projects we have on our plate, not lack of interest).
We'd happily take people interested in either one.

Rubicon is dead for all intents and purposes.

Remind me again what Metaruby is. I know what BFTS is and can't wait to
get my mitts on it.

···

On Sep 13, 2006, at 1:48 AM, Chris Roos wrote:

Did it die on the vine, or has it been replaced?

-A

···

On 9/13/06, Ryan Davis <ryand-ruby@zenspider.com> wrote:

Rubicon is dead for all intents and purposes.

Remind me again what Metaruby is. I know what BFTS is and can't wait to
get my mitts on it.

Metaruby[1] is ruby (Matz's ruby) written in ruby. Others may be able
to provide a better explanation...

[1] Ruby | zenspider.com | by ryan davis

>> I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
>> me interested and reminded me of metaruby.

I'm glad somebody read it ;-)

I'm even happier that it's drawing some attention to the various
Ruby implementations and the growing toolkit around them.

>>I did a little reading and
>> am wondering the general state of play with regard to both metaruby
>> and bfts. Is rubicon (rubytests on rubyforge) basically dead, waiting
>> the release of bfts?
>
Remind me again what Metaruby is. I know what BFTS is and can't wait to
get my mitts on it.

Metaruby is the reimplementation of Ruby in Ruby, with a translation
mechanism to convert the core to C.

···

On 9/13/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:

> On Sep 13, 2006, at 1:48 AM, Chris Roos wrote:

--
thanks,
-pate
-------------------------

> Rubicon is dead for all intents and purposes.

Did it die on the vine, or has it been replaced?

It is being replaced.

···

On 9/14/06, Alex LeDonne <aledonne.listmail@gmail.com> wrote:

On 9/13/06, Ryan Davis <ryand-ruby@zenspider.com> wrote:

-A

--
thanks,
-pate
-------------------------

It is in the process of being replaced with BFTS.

···

On Sep 14, 2006, at 8:13 AM, Alex LeDonne wrote:

On 9/13/06, Ryan Davis <ryand-ruby@zenspider.com> wrote:

Rubicon is dead for all intents and purposes.

Did it die on the vine, or has it been replaced?

I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
shootout. Microbenchmarks don't show anything useful, even if they're
run correctly -- and the shootout has never been run correctly. It
isn't even administered correctly. (I was similarly annoyed that Joel
Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)

-austin

···

On 9/13/06, pat eyler <pat.eyler@gmail.com> wrote:

On 9/13/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:
> > On Sep 13, 2006, at 1:48 AM, Chris Roos wrote:
> >> I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
> >> me interested and reminded me of metaruby.
I'm glad somebody read it ;-)

I'm even happier that it's drawing some attention to the various
Ruby implementations and the growing toolkit around them.

--
Austin Ziegler * halostatue@gmail.com * http://www.halostatue.ca/
               * austin@halostatue.ca * You are in a maze of twisty little passages, all alike.
               * austin@zieglers.ca

Austin Ziegler wrote:

I'm even happier that it's drawing some attention to the various
Ruby implementations and the growing toolkit around them.

I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
shootout. Microbenchmarks don't show anything useful, even if they're
run correctly -- which the shootout has never been run correctly. It
isn't even administered correctly. (I was similarly annoyed that Joel
Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)

-austin

Well ... as a working performance engineer, I'm going to defend
microbenchmarks as virtually (no pun intended) the *only* way to improve
overall performance of the Ruby interpreter, coupled of course with
profiling said interpreter and careful design of the data structures the
interpreter must maintain during execution.

The simple fact is that programmers are going to write nested loops,
either explicitly a la Fortran or implicitly, for example, in

a = mat1*mat2

where mat1 and mat2 are matrices defined with "Matrix." Programmers are
going to read large files and apply regular expression transformations
to every line in those files. Programmers are going to define data
structures for real-world problems just like they do for the Tower of
Hanoi benchmark. To quote Yul Brynner, "Et cetera, et cetera, et cetera."
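The workloads Ed describes could be sketched as a small Ruby micro-benchmark; the sizes, data, and labels below are invented purely for illustration, using only the stdlib benchmark and matrix libraries.

```ruby
require 'benchmark'
require 'matrix'

# An implicit nested loop: multiplying two stdlib Matrix objects,
# as in the a = mat1 * mat2 example above. The size is arbitrary.
n = 50
mat1 = Matrix.build(n, n) { rand }
mat2 = Matrix.build(n, n) { rand }

# A regex transformation applied to every line of a (synthetic) file.
lines = Array.new(1_000) { |i| "ts=#{i} level=INFO msg=ok" }

Benchmark.bm(12) do |bm|
  bm.report("matrix mult") { mat1 * mat2 }
  bm.report("regex pass")  { lines.map { |l| l.gsub(/level=\w+/, "level=DEBUG") } }
end
```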


Thank you, once again, for derailing a thread with your personal vendetta.

···

On Sep 13, 2006, at 7:59 AM, Austin Ziegler wrote:

I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
shootout. Microbenchmarks don't show anything useful, even if they're
run correctly -- which the shootout has never been run correctly. It
isn't even administered correctly. (I was similarly annoyed that Joel
Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)

And you benchmark algorithms written in the same language, BUT the shootout
benchmarks *different languages*. I have looked at the algorithms they
use just once; that was enough.

The point is: use it as a tool if you find it useful, but here it is being
used for advocacy.

Cheers
Robert

···

On 9/14/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:

Austin Ziegler wrote:
>> I'm even happier that it's drawing some attention to the various
>> Ruby implementations and the growing toolkit around them.
>
> I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
> shootout. Microbenchmarks don't show anything useful, even if they're
> run correctly -- which the shootout has never been run correctly. It
> isn't even administered correctly. (I was similarly annoyed that Joel
> Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)
>
> -austin
Well ... as a working performance engineer, I'm going to defend
microbenchmarks as virtually (no pun intended) the *only* way to improve
performance over all for the Ruby interpreter, coupled of course with
profiling said interpreter and careful design of the data structures the
interpreter must maintain during execution.

The simple fact is that programmers are going to write nested loops,
either explicitly a la Fortran or implicitly, for example, in

a = mat1*mat2

where mat1 and mat2 are matrices defined with "Matrix." Programmers are
going to read large files and apply regular expression transformations
to every line in those files. Programmers are going to define data
structures for real-world problems just like they do for the Tower of
Hanoi benchmark. To quote Yul Brynner, "Et cetera, et cetera, et cetera."


--
Deux choses sont infinies : l'univers et la bêtise humaine ; en ce qui
concerne l'univers, je n'en ai pas acquis la certitude absolue.

- Albert Einstein

Benchmarking for internal purposes is fine. What the shootout does is
something different entirely. Have you ever really *looked* at the code
they run for the various different versions? Some of it is so blatantly
tweaked to run faster on the benchmark that it's not funny. (There's a
Perl example I looked at a couple of years ago that *deliberately* had
obfuscated code because the obfuscated code took advantage of internals
that you're not supposed to use and ran faster than the other versions.)
There's no excuse for that sort of thing showing up on a benchmarking
site.

It's no different than NVidia or ATI detecting a benchmark program and
optimizing certain things for that program only.

It gets worse, Ed: the administrators behind the shootout don't care.
They never have. They continually promote their website, but when
challenged on the methodology used or technical issues, they give the
quote that television psychics use: for entertainment purposes only.

They're dishonest and run a benchmark comparison site that is so flawed
that you can't even remotely trust it.

The saying goes "lies, damned lies, and statistics". Well, any published
benchmark is even worse than statistics in that line.

I'm *not* against the concept of benchmarking. I'm definitely against
the concept of comparative benchmarking in the way that the shootout
does it. I will often benchmark the code that I write to make sure that
I'm writing efficient code.

But I won't pretend that the results are useful for comparisons.

They never are.
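Benchmarking one's own code within a single implementation, as Austin describes, might look like the sketch below; the two string-building variants are illustrative examples, not code from the thread.

```ruby
require 'benchmark'

words = Array.new(5_000) { |i| "word#{i}" }

# Benchmark.bmbm runs a rehearsal pass first, reducing warm-up and GC
# skew -- useful when comparing two of your own implementations.
Benchmark.bmbm do |bm|
  bm.report("String#+ in a loop") do
    s = ""
    words.each { |w| s = s + w }  # allocates a new string each iteration
  end
  bm.report("Array#join") do
    words.join                    # one pass with a single final allocation
  end
end
```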

-austin

···

On 9/14/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:

Austin Ziegler wrote:

I'm even happier that it's drawing some attention to the various
Ruby implementations and the growing toolkit around them.

I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
shootout. Microbenchmarks don't show anything useful, even if they're
run correctly -- which the shootout has never been run correctly. It
isn't even administered correctly. (I was similarly annoyed that Joel
Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)

Well ... as a working performance engineer, I'm going to defend
microbenchmarks as virtually (no pun intended) the *only* way to
improve performance over all for the Ruby interpreter, coupled of
course with profiling said interpreter and careful design of the data
structures the interpreter must maintain during execution.

--
Austin Ziegler * halostatue@gmail.com * http://www.halostatue.ca/
               * austin@halostatue.ca * You are in a maze of twisty little passages, all alike.
               * austin@zieglers.ca

Look. We *know* that the shootout is crap. We've known this for three
years now. But we *still* have people come in and use it for a variety
of reasons, most of which are completely bogus.

If we, as Ruby users, want to have a set of benchmarks, we should
develop our own and hold them to a rigorous standard. That is, the
exact *opposite* of what Mr. Gouy's shootout does.

I'd gladly support a Ruby benchmark suite that people could use in
talking about things. But in a variety of different threads we've seen
that not only don't the shootout people know anything about
benchmarks, most other people don't either (see the "For
performance..." thread).

Zed Shaw has had to do a lot of teaching about statistics. I'm sure
that Ed Borasky could teach us a lot about benchmarking with *good*
benchmarks to be tested under controlled or controllable situations so
we could improve the performance of Ruby in various situations.

-austin

···

On 9/14/06, Ryan Davis <ryand-ruby@zenspider.com> wrote:

On Sep 13, 2006, at 7:59 AM, Austin Ziegler wrote:
> I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
> shootout. Microbenchmarks don't show anything useful, even if they're
> run correctly -- which the shootout has never been run correctly. It
> isn't even administered correctly. (I was similarly annoyed that Joel
> Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)
Thank you, once again, for derailing a thread with your personal
vendetta.

--
Austin Ziegler * halostatue@gmail.com * http://www.halostatue.ca/
               * austin@halostatue.ca * You are in a maze of twisty little passages, all alike.
               * austin@zieglers.ca

Robert Dober wrote:

And you benchmark algorithms written in the same language, BUT the shootout
benchmarks *different languages* and I have looked at the algorithms they
use just once, that was enough.

The point is use it as a tool if you find it useful, but here it is used
for
advocacy.

Cheers
Robert

It's a perfectly natural desire to want to compare languages. To do that
requires micro-benchmarks that will execute in all of the languages. But
I think you're missing my point, which is that Ruby is slower than the
other dynamic languages on microbenchmarks because the implementation of
Ruby hasn't been performance-tuned to the extent that Perl, Python and
PHP have been tuned. As far as I can tell, nothing fundamental in the
syntax or semantics of the Ruby language prohibits that tuning.

So rather than whine about advocacy or say "I looked at the algorithms
they use just once, that was enough", why not look at the algorithms
they use and tune the Ruby interpreter so it executes those algorithms
as efficiently as Perl, PHP and Python? Benchmarketing is a fact of life
in the "computer industry". Fortunes are made and lost because one gizmo
is faster than another gizmo on some "meaningless benchmark".

In short:

1. I don't see any fundamental reason why Ruby can't be as fast as Perl,
Python, or PHP.
2. It isn't there yet.

Yes, but please change the subject line next time.

···

On Sep 14, 2006, at 12:00 PM, Austin Ziegler wrote:

On 9/14/06, Ryan Davis <ryand-ruby@zenspider.com> wrote:

On Sep 13, 2006, at 7:59 AM, Austin Ziegler wrote:
> I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
> shootout. Microbenchmarks don't show anything useful, even if they're
> run correctly -- which the shootout has never been run correctly. It
> isn't even administered correctly. (I was similarly annoyed that Joel
> Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)
Thank you, once again, for derailing a thread with your personal
vendetta.

Look. We *know* that the shootout is crap.

--
Eric Hodel - drbrain@segment7.net - http://blog.segment7.net
This implementation is HODEL-HASH-9600 compliant

http://trackmap.robotcoop.com

Austin Ziegler wrote:

> I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
> shootout. Microbenchmarks don't show anything useful, even if they're
> run correctly -- which the shootout has never been run correctly. It
> isn't even administered correctly. (I was similarly annoyed that Joel
> Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)
Thank you, once again, for derailing a thread with your personal
vendetta.

Look. We *know* that the shootout is crap. We've known this for three
years now. But we *still* have people come in and use it for a variety
of reasons, most of which are completely bogus.

Before tossing the whole shootout overboard, I'd like to get my hands on
the complete set of timings for all the benchmarks for all the
languages. I described roughly the process in another email.

If we, as Ruby users, want to have a set of benchmarks, we should
develop our own and hold them to a rigorous standard. That is, the
exact *opposite* of what Mr. Guouy's shootout does.

I'd gladly support a Ruby benchmark suite that people could use in
talking about things. But in a variety of different threads we've seen
that not only don't the shootout people know anything about
benchmarks, most other people don't either (see the "For
performance..." thread).

If the BFTS has room, why shouldn't benchmarks be part of the test
suite? I've said before, if you aren't running performance tests, you
aren't doing test-driven development. :-)
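One possible shape for such a performance test, using Minitest here purely as a stand-in framework (the actual BFTS harness may differ, and the workload and one-second budget are invented thresholds a real suite would calibrate per machine):

```ruby
require 'minitest/autorun'
require 'benchmark'

class TestSortBudget < Minitest::Test
  BUDGET = 1.0  # seconds; an invented, machine-dependent threshold

  def test_sort_stays_within_budget
    data = Array.new(100_000) { rand(1_000_000) }
    elapsed = Benchmark.realtime { data.sort }
    assert_operator elapsed, :<, BUDGET, "sort took #{elapsed}s"
  end
end
```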

There are some Ruby benchmarks -- the YARV project has a small suite
they use, and there's my MatrixBenchmark.

Zed Shaw has had to do a lot of teaching about statistics. I'm sure
that Ed Borasky could teach us a lot about benchmarking with *good*
benchmarks to be tested under controlled or controllable situations so
we could improve the performance of Ruby in various situations.

Again, see my other email ... I think there's something to be gained
from at least a rough analytical pass at the data from the shootout.
What I proposed would at least identify those benchmarks in the set that
are reasonable indicators of comparative language performance and which
are "special cases".

And despite the objections I've heard, I think it's a perfectly
reasonable idea to compare *general-purpose* dynamic scripting languages
like Perl, Python and Ruby on benchmarks. People are going to ask the
question; we might as well see what sort of confidence one can have in
the answer the shootout gives. Right now, what we have is Ruby turning
out near the bottom on the overall score.
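The rough analytical pass Ed proposes could start with something like this sketch; the timing numbers are invented placeholders for real shootout data, and the 2x-median cutoff is an arbitrary choice.

```ruby
# Per-benchmark timings in seconds, keyed by language. Invented data.
timings = {
  "nbody"    => { "perl" => 21.0, "python" => 18.0, "ruby" => 55.0 },
  "regex"    => { "perl" => 3.0,  "python" => 6.0,  "ruby" => 7.5  },
  "pidigits" => { "perl" => 10.0, "python" => 9.0,  "ruby" => 90.0 },
}

# Normalize each benchmark by its fastest language, giving Ruby's
# slowdown ratio on that benchmark.
ratios = timings.map do |name, by_lang|
  [name, by_lang["ruby"] / by_lang.values.min]
end.to_h

# Benchmarks far from the median ratio are "special cases"; the rest
# are more reasonable indicators of comparative performance.
median = ratios.values.sort[ratios.size / 2]

ratios.each do |name, r|
  label = r > 2 * median ? "special case" : "reasonable indicator"
  puts format("%-10s ruby/fastest = %5.2f  (%s)", name, r, label)
end
```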

···

On 9/14/06, Ryan Davis <ryand-ruby@zenspider.com> wrote:

On Sep 13, 2006, at 7:59 AM, Austin Ziegler wrote:

Austin Ziegler wrote:

···

On 9/14/06, Ryan Davis <ryand-ruby@zenspider.com> wrote:
> On Sep 13, 2006, at 7:59 AM, Austin Ziegler wrote:
> > I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
> > shootout. Microbenchmarks don't show anything useful, even if they're
> > run correctly -- which the shootout has never been run correctly. It
> > isn't even administered correctly. (I was similarly annoyed that Joel
> > Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)
> Thank you, once again, for derailing a thread with your personal
> vendetta.

Look. We *know* that the shootout is crap. We've known this for three
years now. But we *still* have people come in and use it for a variety
of reasons, most of which are completely bogus.

If we, as Ruby users, want to have a set of benchmarks, we should
develop our own and hold them to a rigorous standard. That is, the
exact *opposite* of what Mr. Guouy's shootout does.

I'd gladly support a Ruby benchmark suite that people could use in
talking about things. But in a variety of different threads we've seen
that not only don't the shootout people know anything about
benchmarks, most other people don't either (see the "For
performance..." thread).

Zed Shaw has had to do a lot of teaching about statistics. I'm sure
that Ed Borasky could teach us a lot about benchmarking with *good*
benchmarks to be tested under controlled or controllable situations so
we could improve the performance of Ruby in various situations.

-austin
--
Austin Ziegler * halostatue@gmail.com * http://www.halostatue.ca/
               * austin@halostatue.ca * You are in a maze of twisty little passages, all alike.
               * austin@zieglers.ca

I just read the comment you made on Pat Eyler's blog.
You do seem to be engaged in a personal vendetta.
That's a shame.

Robert Dober wrote:
> And you benchmark algorithms written in the same language, BUT the
shootout
> benchmarks *different languages* and I have looked at the algorithms
they
> use just once, that was enough.
>
> The point is use it as a tool if you find it useful, but here it is used
> for
> advocacy.
>
> Cheers
> Robert
It's a perfectly natural desire to want to compare languages. To do that
requires micro-benchmarks that will execute in all of the languages. But
I think you're missing my point,

Well, I did; it was not clear to me, sorry.

which is that Ruby is slower than the

other dynamic languages on microbenchmarks because the implementation of
Ruby hasn't been performance-tuned to the extent that Perl, Python and
PHP have been tuned.

which does not mean that I agree

It's not, as far as I can tell, because of anything

fundamental in the syntax or semantics of the Ruby language that
prohibits that tuning.

So rather than whine about advocacy or say "I looked at the algorithms
they use just once, that was enough", why not look at the algorithms
they use and tune the Ruby interpreter so it executes those algorithms
as efficiently as Perl, PHP and Python?

Did I whine? Yes, if you want to put it that way; I whine about that site
because it is irrelevant. That has nothing to do with Ruby, BTW.
Furthermore, I think it would be a bad idea to optimize Ruby that way; I
think Ruby should be optimized the Ruby way, but only the future will tell.

Benchmarketing is a fact of life

in the "computer industry". Fortunes are made and lost because one gizmo
is faster than another gizmo on some "meaningless benchmark".

Shall we not "whine" about what does not please us? I think we shall!

In short:

1. I don't see any fundamental reason why Ruby can't be as fast as Perl,
Python, or PHP.

Yes indeed

2. It isn't there yet.

Maybe, but that is not a reason to accept bad measurements, especially
since accepting them will not put it there!

But I guess we have very opposed views about benchmarking; well, that is a
good thing too :-)

Cheers
Robert

···

On 9/14/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:

--
Deux choses sont infinies : l'univers et la bêtise humaine ; en ce qui
concerne l'univers, je n'en ai pas acquis la certitude absolue.

- Albert Einstein