Legacy support (Was: Base64 not there makes Rails 2.0.2 fail to load in 1.9.0)

hemant wrote:

Eric Hodel wrote:

Just rebuilt the latest svn of 1.9.0, and base64.rb has been removed. It's
required by active_support (conversions.rb). Do you think it's worth
having all the libraries that were removed from Ruby in 1.9.0 moved into a
"legacy gem" or something? I'm sure the Rails guys will catch this
and remove the dependency, since it's not going to be supported going
forward, but this kind of thing may happen to other libraries.

I don't think a legacy gem should be created. This doesn't encourage
people to switch to the new way of doing things, whatever that is. (For
base64, use [str].pack 'm*'.)
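
For the record, a minimal sketch of that replacement (base64.rb was
itself just a thin wrapper over pack/unpack):

  # what Base64.encode64("hello") did (note the trailing newline):
  encoded = ["hello"].pack('m')        # => "aGVsbG8=\n"
  # what Base64.decode64(encoded) did:
  decoded = encoded.unpack('m').first  # => "hello"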

People have asked me to alias require_gem for RubyGems 1.x, and I will
not. Software depending on it won't get fixed if it stays around.

Yeah ... I'm with you on this one, having just discovered that two gems,
rcov and hpricot, need to be updated to work with 1.9 anyhow. Should
there be some kind of "master list" of the "commonly used gems" that
need updating so they'll work with either 1.8 or 1.9? So far I have

1. rcov
2. hpricot
3. active_support

Yes, lots. Almost all C extensions will require some kind of tinkering.
My broken-lib list is:
mysql, mongrel, eventmachine

1. Isn't there a pure Ruby mysql gem?

2. Given that the core Ruby 1.9 virtual machine is so much faster than
the core Ruby 1.8 "virtual machine", and that Koichi is still tweaking
it, and that Charlie Nutter et al. on JRuby are still tweaking the
JRuby implementations of both 1.8 and 1.9, shouldn't gem developers be
looking at porting their stuff back to pure Ruby whenever possible? I
can understand C interface code for OS-specific functionality, but is it
really necessary for raw speed any more?

Of the list I see here (rcov, hpricot, active_support, mysql,
eventmachine and mongrel) I think only eventmachine and mongrel "really
need" C-level code.

···

On Dec 26, 2007 1:01 AM, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:

On Dec 25, 2007, at 07:03 AM, Richard Kilmer wrote:

Quoth M. Edward (Ed) Borasky:

1. Isn't there a pure Ruby mysql gem?

2. Given that the core Ruby 1.9 virtual machine is so much faster than
the core Ruby 1.8 "virtual machine", and that Koichi is still tweaking
it, and that Charlie Nutter et al. on JRuby are still tweaking the
JRuby implementations of both 1.8 and 1.9, shouldn't gem developers be
looking at porting their stuff back to pure Ruby whenever possible? I
can understand C interface code for OS-specific functionality, but is it
really necessary for raw speed any more?

Of the list I see here (rcov, hpricot, active_support, mysql,
eventmachine and mongrel) I think only eventmachine and mongrel "really
need" C-level code.

IIRC much of hpricot is actually generated from a grammar (a Ragel
state machine); it's not hand-written C.

But to answer your first question: no, pure Ruby isn't fast enough yet.

···

> On Dec 26, 2007 1:01 AM, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:
>>> On Dec 25, 2007, at 07:03 AM, Richard Kilmer wrote:

--
Konrad Meyer <konrad@tylerc.org> http://konrad.sobertillnoon.com/

M. Edward (Ed) Borasky wrote:

2. Given that the core Ruby 1.9 virtual machine is so much faster than
the core Ruby 1.8 "virtual machine", and that Koichi is still tweaking
it, and that Charlie Nutter et al. on JRuby are still tweaking the
JRuby implementations of both 1.8 and 1.9, shouldn't gem developers be
looking at porting their stuff back to pure Ruby whenever possible? I
can understand C interface code for OS-specific functionality, but is it
really necessary for raw speed any more?

I've been advocating this for JRuby newcomers looking to write extensions or new libraries: I always prefer that they write a pure-Ruby version first, and only write a Java-based version later if performance is really a problem. This accomplishes two things:

1. I think they'll attack the problem better in Ruby and come up with a better solution overall.
2. Having a pure Ruby version around will push JRuby harder and ultimately help both JRuby and other implementations too.

But JRuby offers something neither Ruby 1.8 nor Ruby 1.9 does, and I'm curious what you all think will be the effect...

JRuby will allow enabling most Ruby 1.9 features selectively.

So we already offer improved performance with 1.8-compatible semantics. And there are a few limited places where we're offering 1.9 features. But in general, since 1.8 and 1.9 will be supported by basically the same binary, many of the optimizations or features in 1.9 will be configurable by a flag.

For optimizations, this is fairly non-invasive. Most of the big optimizations in 1.9 are possible because people won't miss the dynamic behavior they trade away (fast fixnum math that assumes Fixnum#+ isn't redefined, constant-time literal whens, etc.). So 1.8 users may choose to turn those optimizations on right away as we implement them.
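
To illustrate the "constant-time literal whens" point: when every branch
of a case is a literal, the compiler can dispatch through a hash/jump
table instead of calling === on each candidate in turn (assuming nobody
has redefined Fixnum#=== and friends). The method name here is just
illustrative:

  # All-literal branches: eligible for table dispatch rather
  # than N sequential === calls.
  def status_name(code)
    case code
    when 200 then "ok"
    when 301 then "moved"
    when 404 then "not found"
    else "unknown"
    end
  end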

But for features...would it be useful to be able to turn on e.g. M17N strings or Fibers but leave the rest of 1.8 semantics intact?
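
For anyone who hasn't tried 1.9 yet, a minimal sketch of those two
features as they behave in 1.9.0 itself (any JRuby switch for enabling
them selectively is hypothetical at this point):

  # coding: utf-8

  # Fibers: cooperatively scheduled coroutines
  fib = Fiber.new do
    a, b = 0, 1
    loop { Fiber.yield a; a, b = b, a + b }
  end
  5.times { print fib.resume, " " }   # => 0 1 1 2 3

  # M17N: every string carries its own encoding
  s = "résumé"
  s.encoding                          # => #<Encoding:UTF-8>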

- Charlie

Charles Oliver Nutter wrote:

<snip>

But for features...would it be useful to be able to turn on e.g. M17N
strings or Fibers but leave the rest of 1.8 semantics intact?

Well ... I think flexibility is a good thing in general. Almost every
*commercial* FORTRAN compiler I ever worked with had numerous flags and
options to control exactly which dialect of the language was accepted,
and GCC has carried on this noble tradition for all the languages it
compiles. So yes, I think such fine-grained control over which Ruby
syntactic and semantic details are translated and executed is a very
good thing.

Of course, that *does* raise the bar for John Lam (of IronRuby), doesn't
it? ;)