> Which is why they teach data structures in computer science class. It's
> all about fast search, I think. That's one of the big gripes I have with
> "lazy" interpretation. If you don't do stuff until you have to do it, it
> only pays off if you end up *never* having to do it.

I meant something different. Ruby has to bind most of the names it uses at
runtime, precisely because the language makes it possible (and extremely
useful) to do things that invalidate those bindings. Ruby does this quite
efficiently (so it's not the case that Ruby's data structures were written
incompetently, as you seem to be suggesting), but the fact is that you
simply have to do the work every time you access a method or variable.
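A contrived sketch of what I mean (the class and method names are made up):

  # A class can be reopened and a method redefined at any time, so the
  # interpreter can't resolve #price to a fixed slot ahead of time; it has
  # to look the name up when the call actually happens.
  class Order
    def price
      100
    end
  end

  o = Order.new
  puts o.price       # => 100

  class Order        # reopen the class while the program is running
    def price
      110
    end
  end

  puts o.price       # => 110 -- same object, new method body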
I've nearly given up on extensively profiling my Ruby programs. Over the
years I've gotten a pretty good feel for what generates hot spots so I can
avoid them up front. What I always end up with is long lists of "warm" spots
that are basically irreducible (things like calls to === and ...). Ruby's
basic behavior at runtime just seems to generate a lot of "background
noise." We'll see if this improves in the new runtimes.

> > Bottom line: using Ruby will always be characterized by a tradeoff
> > between performance and programmer productivity. This is not a criticism
> > of Ruby in any way, shape or form! Productivity is a fundamental
> > engineering value, and time-to-market is a fundamental quality dimension.
> > Ruby therefore has, and will continue to have, a unique value proposition.
>
> I'm not sure this is a valid tradeoff. The economics of *development* and
> the economics of *operating* a large code base are two entirely different
> subjects. People have "always" prototyped in "slow but productive"
> languages, like Lisp, Perl, PHP and Ruby, and then reached a point where
> the economics dictated a complete rewrite for speed into C, C++ or Java.
> I can think of more examples of this than I can of something that was
> developed and prototyped rapidly and then grew by "just throwing more
> hardware at inefficient software."
>
> So ... just like a startup should plan for the day when a big company
> offers them the choice of selling out or being crushed like a bug, when
> you implement a great idea in some rapid prototyping framework like Rails,
> plan for the day when you are offered the choice of rewriting completely
> in a compiled language or going bankrupt buying hardware.

It sounds like your experience has been largely with systems in which
development activities have a long "tail," extending well into the
production cycle. It's certainly been my experience that systems written in
C/C++ and Java work this way. It's almost as if the high investment in
initial development necessitates a continuing commitment to that code base,
along with the business assumptions and engineering decisions it originally
embodied.
But over the last four years of working with Ruby, I've found something like
the opposite. Ruby programs largely have a "write it and forget it" quality
to them. The very first Ruby program I ever wrote, on the day that I learned
Ruby, was a RADIUS server that worked off an LDAP datastore. It took four
hours to write (including the time spent learning the language), went into
live production the next day (the client was as adventurous as I was), and
has been running ever since. It was only modified once, to add a new
feature.
Since then, my teams have written innumerable Ruby programs that are usually
developed as small gems with full unit-test suites, and that find their way
into one (or, infrequently, more than one) much larger project.
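The test suites themselves are tiny and conventional; something along these
lines, where the gem and its API are invented just to show the shape:

  # test/test_radius_lookup.rb
  require 'test/unit'
  require 'radius_lookup'   # hypothetical gem

  class TestRadiusLookup < Test::Unit::TestCase
    def test_known_user_resolves_to_a_vlan
      assert_equal 'VLAN42', RadiusLookup.vlan_for('alice')
    end

    def test_unknown_user_is_rejected
      assert_nil RadiusLookup.vlan_for('nobody')
    end
  end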
How does it change the economics? Because of the time-to-market dimension. I
believe that a lot of the business problems that can be addressed by creating
some software (usually problems of increasing the "surface area", or timely
accessibility, of generally small bodies of information) are only worth
addressing if the development cycle is very short and the user-acceptance
period is quite nearly zero. (Meaning, the software works as expected the
first time out.)
I've found in my businesses that such fleeting opportunities come up all the
time. If you're a person who believes in proper software engineering, and
well-controlled processes, you're probably climbing the walls and getting
ready to throw things at me right now! But this is why I was talking about a
value-tradeoff. If I'm right, then there are a lot of opportunities to
create capturable business value that traditional methodologies (including
fast-prototype-followed-by-extensive-rewrite) simply can't touch.
For these cases, Ruby is uniquely valuable.
So am I the person who is creating the boatloads of crapware that no one
else can understand but that can't be gotten rid of, and eventually drives
purchases of larger hardware?
In some respects, guilty as charged. (I deny the point about
non-understandability. If you're going to develop like this, then writing
documentation and unit tests *must* dominate the development effort, perhaps
by 10-to-1. Otherwise, you end up with nothing usable.)
Larger hardware: the trend I see in every single enterprise client is toward
virtualization. We're entering a long cycle in which *fewer* hardware
resources will be available for the typical program rather than more. I'm
already expecting Ruby to suffer as a result. Java, with its enormous memory
footprint, is hardly the solution to this problem.
On 9/27/07, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote: