I am working on an article about implementing dynamically typed
languages on the .NET CLR. In .NET, types are immutable, i.e. once
defined they cannot be changed. Writing about this, I needed to cover
the problem of namespace collisions. Most modern languages, including
Ruby, have introduced namespaces to alleviate the collision problem,
but the convention of adding your own methods to existing classes
reintroduces it.
Is Ruby merely being pragmatic by allowing you to do this, because it
seldom causes any real problems? Or is it inherently wrong and should
not be permitted by the language/object model?
It’s controversial, that’s for sure, and well worth writing an article
about, as the risks are obvious but not all of the benefits are.
It is wrong to argue that it’s inherently wrong. It’s also wrong to argue
that it’s inherently right. You could say that Ruby is being pragmatic
because it seldom causes problems, but you need to emphasise the
responsibility of the programmer to make sure their code behaves nicely.
How do you feel about this feature? Would it be a big loss if Ruby
didn’t support it? My own feelings are ambivalent. I have found myself
adding features to Time, Enumerable, Module and when I do it I always
appreciate how nice it is that the feature is where it belongs, but I
always have this nagging fear of collisions that I push to the back of
my mind.
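For example, reopening a core class takes only a few lines. (This is a
made-up illustration; Time#yesterday is not part of the standard
library.)

```ruby
# Reopening Ruby's core Time class to add a hypothetical
# convenience method.
class Time
  # Return a Time exactly 24 hours earlier.
  def yesterday
    self - 86_400
  end
end
```

The feature lives right where it belongs (`Time.now.yesterday`), but
every Time instance in the program now carries it, whether other code
expected that or not.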
The nagging feeling is justifiable. Nobody wants namespace collisions,
and they should be avoided by good management, not good luck. Sometimes
it’s easy to justify the addition of some method or other, sometimes it’s
not. So you can put your extra methods in a module and extend individual
objects with that module, instead of including it in classes. That
narrows your impact and allows you to keep track of what’s going on, and
should go some way to reducing that nagging feeling.
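A sketch of that narrower approach, with illustrative names: the
method lives in a module, and only the objects you extend gain it.

```ruby
# Instead of reopening String for every string in the program,
# define the method in a module and extend individual objects.
module WordCount
  def word_count
    split(/\s+/).size
  end
end

sentence = "the quick brown fox"
sentence.extend(WordCount)   # only this one string gains #word_count
sentence.word_count          # works here
"other".respond_to?(:word_count)  # => false; the String class is untouched
```

The impact is confined to objects you can point at, which makes
collisions far easier to reason about.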
The main thing is that Ruby represents a significant step forward in
popular computing [1], and you need to embrace some danger to make that
progress. Fortunately, in one’s own programs, just embracing a tiny bit
of “danger” can make a large practical difference, e.g. modifying the way
an object represents itself just for the purpose of a unit test.
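That unit-test scenario might look like the following sketch; the
Account class and the output format are invented for illustration.

```ruby
# A singleton method changes how one particular object represents
# itself, without touching its class or any other instance.
class Account
  def initialize(balance)
    @balance = balance
  end
end

account = Account.new(100)

# Redefine #inspect on this instance only, e.g. to get readable
# test failure messages.
def account.inspect
  "#<Account balance=#{@balance}>"
end
```

Other Account instances keep the default `#<Account:0x...>`
representation, so the "danger" is scoped to the test at hand.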
Two benefits of Ruby’s approach that are probably underestimated by people
who, for whatever reason, like to cling to static typing:
- the ability of library writers to create powerful frameworks,
  thus containing the “risk” in a way that can be studied, discussed
  and tested
- the ability of everyday programmers to make small but meaningful
  improvements or bugfixes to the language or libraries, thus
  achieving independence and inspiring change based on working
  examples
At the end of the day, it’s better to solve problems pragmatically than
dogmatically.
I look forward to reading your article.
Curious Tom
Cheers,
Gavin
[1] The groundwork was done by LISP, Smalltalk, Perl et al, but that does
not diminish Ruby at all.
PS. I am launching a project that consolidates popular extensions to the
standard classes. Look at project “extensions” on RubyForge if you’re
interested. The CVS will hopefully be there within 7 days, with a
release to follow a few weeks later. This is inspired by that nagging feeling.
Instead of implementing common modifications in my code, I can rely on a
reference implementation that is documented and tested.