Question about Ruby philosophy

Hello, when I compare Ruby to Java there is something I don't understand.

For example, let's say that I create instances of JDBCFooBaseDriver in Java. The methods and their behaviors will never change unless I update the JDBCFooBaseDriver jar file. I can be sure that this part of my environment will stay the same even if new third-party libraries are added and used by my application.

Now, in Ruby I can use a new library (FancyDates) that will alter the behavior of some JDBCFooBaseDriver methods. Some methods of JDBCFooBaseDriver will be redefined by FancyDates and new methods will be added. All of this will occur without giving me a chance to be informed of what is going on behind the scenes.
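(For concreteness, here is roughly what that scenario looks like in Ruby. The class and method names are made up for illustration; no real FancyDates library is loaded here.)

```ruby
# A class the application depends on, analogous to JDBCFooBaseDriver.
class Driver
  def timestamp
    Time.at(0).utc.strftime("%Y-%m-%d")
  end
end

puts Driver.new.timestamp  # original behavior: "1970-01-01"

# A library like the hypothetical FancyDates can simply reopen the
# class and redefine the method -- no warning or error by default.
class Driver
  def timestamp
    "a fancy date"
  end
end

puts Driver.new.timestamp  # now: "a fancy date"
```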

So,

1) Is what I say right?
2) Why should I not be scared by that?
3) Why do most C#, Java, and C++ developers think that this approach is dangerous and leads to bad practices?

Thanks

If this question has been answered many times, feel free to just provide some links to references.

As a Java developer still learning the Ruby way, here's my understanding:

1) Yes
2) If you're forced to integrate with code from bad programmers, maybe
you should be worried (maybe some Rubyists will provide some way this
can be mitigated...). If not then:
3) The Ruby philosophy is that it's better to give you, the developer,
the power to do things like this, even if you COULD should yourself in
the foot; Ruby trusts you to be smart and not make changes that will
break things.

I guess the only way to protect yourself is to have automated
functional tests that give you confidence that everything works
even with all the code loaded that may make these kinds of
alterations.
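One low-tech sketch of such a check: a script run after all libraries are required, asserting the core behaviors the application relies on. (The `expect` helper and the expectations below are invented for the example.)

```ruby
# A tiny "environment sanity" check: run it after requiring all your
# libraries to detect surprising changes to behavior you rely on.
def expect(expected, actual, label)
  unless expected == actual
    raise "#{label}: expected #{expected.inspect}, got #{actual.inspect}"
  end
end

# Examples of core Ruby behavior a monkey patch could silently change:
expect 2, 5 / 2, "integer division truncates"
expect "abc", " abc ".strip, "String#strip removes outer whitespace"
expect [1, 2], [2, 1].sort, "Array#sort is ascending"

puts "core behavior intact"
```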

-Stephen

···

On 12/4/06, Zouplaz <user@domain.invalid> wrote:

Hello, when I compare Ruby to Java there is something I don't understand.

For example, let's say that I create instances of JDBCFooBaseDriver in
Java. The methods and their behaviors will never change unless I
update the JDBCFooBaseDriver jar file. I can be sure that this part of my
environment will stay the same even if new third-party libraries are
added and used by my application.

Now, in Ruby I can use a new library (FancyDates) that will alter the
behavior of some JDBCFooBaseDriver methods. Some methods of
JDBCFooBaseDriver will be redefined by FancyDates and new methods will
be added. All of this will occur without giving me a chance to be
informed of what is going on behind the scenes.

So,

1) Is what I say right?
2) Why should I not be scared by that?
3) Why do most C#, Java, and C++ developers think that this approach is
dangerous and leads to bad practices?

Thanks

If this question has been answered many times, feel free to just
provide some links to references.

--
Stephen Duncan Jr

Zouplaz wrote:

Hello, when I compare Ruby to Java there is something I don't understand.

For example, let's say that I create instances of JDBCFooBaseDriver in
Java. The methods and their behaviors will never change unless I
update the JDBCFooBaseDriver jar file. I can be sure that this part of my
environment will stay the same even if new third-party libraries are
added and used by my application.

Now, in Ruby I can use a new library (FancyDates) that will alter the
behavior of some JDBCFooBaseDriver methods. Some methods of
JDBCFooBaseDriver will be redefined by FancyDates and new methods will
be added. All of this will occur without giving me a chance to be
informed of what is going on behind the scenes.

So,

1) Is what I say right?
2) Why should I not be scared by that?
3) Why do most C#, Java, and C++ developers think that this approach is
dangerous and leads to bad practices?

You are correct and your worry is not uncommon. And yes, it does _seem_
scary. But in practice it turns out not to be such a problem. Most
C/Java programmers don't know this simply b/c they can't do it. With a
bit of good sense, open classes can be a great productivity booster.
The reasons for this are surprisingly simple. As with any library, you
don't use it unless a) you need it and b) you know what it does. Combine
that with unit testing and there's no need to be so worried.

Also, it's generally accepted practice not to alter pre-existing
methods, and instead add new ones if you need additional functionality
(though there are exceptions, of course).
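That practice can be shown in a few lines; the method names here are invented for the example:

```ruby
# Safer: add a brand-new method rather than changing an existing one.
class String
  def shout
    upcase + "!"
  end
end

# If you must change an existing method, keep the original reachable
# under an alias, so the old behavior stays available for debugging.
class String
  alias_method :strip_without_squeeze, :strip

  def strip
    strip_without_squeeze.squeeze(" ")  # also collapse inner runs of spaces
  end
end

puts "hello world".shout  # => "HELLO WORLD!"
puts "  a   b  ".strip    # => "a b"
```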

T.

Others have given good answers that I cannot answer better. However:

3) Why do most C#, Java, and C++ developers think that this approach is dangerous and leads to bad practices?

Are you sure this is true? True, people frequently are scared of Ruby's dynamic nature - or criticize it. But I would not go as far as to claim that the majority of C# / Java developers think this approach is dangerous.

If this question has been answered many times, feel free to just provide some links to references.

You will find discussions of these issues in threads dealing with "dynamic typing" - there's a ton of them. Also, IIRC there are discussions of these issues that revolve around "rational", which used to change the behavior of Fixnum#/ and could cause problems for some programs.

Kind regards

  robert

···

On 04.12.2006 13:04, Zouplaz wrote:

Hello, when I compare Ruby to Java there is something I don't understand.

For example, let's say that I create instances of JDBCFooBaseDriver in
Java. The methods and their behaviors will never change unless I
update the JDBCFooBaseDriver jar file. I can be sure that this part of my
environment will stay the same even if new third-party libraries are
added and used by my application.

Now, in Ruby I can use a new library (FancyDates) that will alter the
behavior of some JDBCFooBaseDriver methods. Some methods of
JDBCFooBaseDriver will be redefined by FancyDates and new methods will
be added. All of this will occur without giving me a chance to be
informed of what is going on behind the scenes.

Not really. Presumably if you're using a third-party library, you
have a clue as to why you're using it. You've probably looked at the
documentation, you definitely have access to the source. Not saying
you should need to examine the source to use a lib, but basically you
should at least read the documentation and stuff before blindly
throwing it into a project.

So,

1) Is what I say right?
2) Why should I not be scared by that?
3) Why do most C#, Java, and C++ developers think that this approach is
dangerous and leads to bad practices?

There's potential for things to go wrong, I guess. Just like there's
potential for mistakes in dynamic typing. Those opposed to dynamic
typing will say stuff like, "Well, what if someone calls
refrigerator.meow? It blows up at run time!" Those kinds of questions
basically boil down to "What if I'm stupid?" Well, can't help you
there...

Basically, just give Ruby a shot for a while. Use it in a real
project. If it doesn't work for you, drop it.

Pat

···

On 12/4/06, Zouplaz <user@domain.invalid> wrote:

Hello, when I compare Ruby to Java there is something I don't understand.

For example, let's say that I create instances of JDBCFooBaseDriver in
Java. The methods and their behaviors will never change unless I
update the JDBCFooBaseDriver jar file. I can be sure that this part of my
environment will stay the same even if new third-party libraries are
added and used by my application.

Now, in Ruby I can use a new library (FancyDates) that will alter the
behavior of some JDBCFooBaseDriver methods. Some methods of
JDBCFooBaseDriver will be redefined by FancyDates and new methods will
be added. All of this will occur without giving me a chance to be
informed of what is going on behind the scenes.

So,

1) Is what I say right?
2) Why should I not be scared by that?
3) Why do most C#, Java, and C++ developers think that this approach is
dangerous and leads to bad practices?

I can only speak from a Java perspective here, but if you look at what
most frameworks and (shudder) containers out there do, you'll find
there is an enormous amount of reflection going on. The more modern
frameworks also make use of runtime bytecode manipulation. Both of
these techniques introduce the same type of dynamics into a project
(and negate the "advantages" of static typing), but - unlike Ruby - it
is almost impossible to understand what is going on under the hood.
And every framework does things in a different way. At least with
Ruby, you can recognise this kind of metaprogramming easily, and if
used in libraries its use is normally well documented.

Ruby programmers are simply more aware of these features and are
therefore able to use them effectively.

Cheers,
Max

···

On 12/4/06, Zouplaz <user@domain.invalid> wrote:

Thanks

If this question has been answered many times, feel free to just
provide some links to references.

It's a serious question concerning language philosophy and design.
I believe all of us would be pleased to know the opinion of Matz on it.

Mike Shock
(Mikhail V. Shokhirev)

Zouplaz wrote:

···

2) Why should I not be scared by that?
3) Why do most C#, Java, and C++ developers think that this approach is dangerous and leads to bad practices?

Stephen Duncan wrote:

As a Java developer still learning the Ruby way, here's my understanding:

1) Yes
2) If you're forced to integrate with code from bad programmers, maybe
you should be worried (maybe some Rubyists will provide some way this
can be mitigated...). If not then:
3) The Ruby philosophy is that it's better to give you, the developer,
the power to do things like this, even if you COULD should yourself in
the foot; Ruby trusts you to be smart and not make changes that will
break things.

A typo I know, but that's a perfect way to put it: You can "should
yourself in the foot" with such worries :-)

T.

Zouplaz wrote:
> Hello, when I compare Ruby to Java there is something I don't
> understand.
>
> For example, let's say that I create instances of JDBCFooBaseDriver in
> Java. The methods and their behaviors will never change unless I
> update the JDBCFooBaseDriver jar file. I can be sure that this part of my
> environment will stay the same even if new third-party libraries are
> added and used by my application.
>
> Now, in Ruby I can use a new library (FancyDates) that will alter the
> behavior of some JDBCFooBaseDriver methods. Some methods of
> JDBCFooBaseDriver will be redefined by FancyDates and new methods will
> be added. All of this will occur without giving me a chance to be
> informed of what is going on behind the scenes.
>
> So,
>
> 1) Is what I say right?
> 2) Why should I not be scared by that?
> 3) Why do most C#, Java, and C++ developers think that this approach is
> dangerous and leads to bad practices?

You are correct and your worry is not uncommon. And yes, it does _seem_
scary.

It is scary unless you have a very good testing environment, but that is
not the most important point I want to make.

Take a look at Object#freeze; maybe you feel better now ;). I am not sure
whether it can be circumvented, but it is better than nothing.
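A quick sketch of what freeze buys you (the class and method names are invented for the example):

```ruby
class Config
  def mode
    "strict"
  end
end

Config.freeze

# Reopening a frozen class raises (FrozenError on modern Ruby,
# TypeError on 1.8) instead of silently succeeding.
begin
  class Config
    def mode
      "lax"
    end
  end
rescue StandardError => e
  warn "modification blocked: #{e.class}"
end

puts Config.new.mode  # still "strict"
```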

HTH
Robert

But in practice it turns out not to be such a problem. Most

C/Java programmers don't know this simply b/c they can't do it. With a
bit of good sense, open classes can be a great productivity booster.
The reasons for this are surprisingly simple. As with any library, you
don't use it unless a) you need it and b) you know what it does. Combine
that with unit testing and there's no need to be so worried.

Yup sorry, you said it before.

Also, it's generally accepted practice not to alter pre-existing

methods, and instead add new ones if you need additional functionality
(though there are exceptions, of course).

T.

R.

···

On 12/4/06, Trans <transfire@gmail.com> wrote:

--
"The real romance is out ahead and yet to come. The computer revolution
hasn't started yet. Don't be misled by the enormous flow of money into bad
defacto standards for unsophisticated buyers using poor adaptations of
incomplete ideas."

- Alan Kay

First I've heard of the 'used to'! When did that change?

martin

···

On 12/5/06, Robert Klemme <shortcutter@googlemail.com> wrote:

"dynamic typing" - there's a ton of them. Also, IIRC there are
discussions of these issues that revolve around "rational" which used to
change the behavior of Fixnum#/ which could cause problems for some
programs.

The bottom line seems to be that the current crop of Ruby programmers
are responsible enough not to mess with things under the hood unless
they understand the workings.

Is this an accurate assessment?

What scares me a bit is when Ruby becomes extremely popular and every
second programmer is using it, and less-disciplined developers start
providing libraries that tinker with core classes willy-nilly. At some
point you are going to get clashes between two same-named add-on methods
in (say) Array, that do different things <shudder>. That could be fun to
debug.
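That clash takes only a few lines to reproduce. Array#second here is invented, and neither "library" exists; the point is that the later definition wins without any error:

```ruby
# Imagine this method comes from library A:
class Array
  def second
    self[1]  # A's meaning: the second element
  end
end

# ...and this one from library B, which happens to load later:
class Array
  def second
    self[-2]  # B's meaning: the second element from the end
  end
end

# By default no warning is printed; code written against library A
# silently gets library B's semantics.
puts [1, 2, 3, 4].second  # => 3, where A's users expected 2
```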

Sure, you can decide not to use such a library, but you have to get
burned first before you realize it's not good to use. I'm probably
showing my age here, but I remember cases where people used to code
macros in C header files (like MIN and MAX) that would clash with other
uses of MIN and MAX and cause untold havoc. Some of these were from
mainstream companies.

It's one thing to say "What if I'm stupid?". It's quite another thing to
say, "What if lots of stupid/undisciplined people start using Ruby?"
This tends to be the price of language popularity, I fear.

Probably, evolutionary forces will get rid of the poorly designed
libraries in time. I'm more worried about the beleaguered corporate IT
developer who often does not have much choice and has to use in-house
code.

I dunno. Thoughts?

···

--
Posted via http://www.ruby-forum.com/.

Robert Klemme wrote:

Others have given good answers that I cannot answer better. However:

3) Why do most C#, Java, and C++ developers think that this approach is
dangerous and leads to bad practices?

Are you sure this is true? True, people frequently are scared of Ruby's
dynamic nature - or criticize it. But I would not go as far as to claim
that the majority of C# / Java developers thinks this approach is
dangerous.

<rant>

Aaactually, yes, they/we do. The Ruby way presumes reasonably skilled
coders, not going overboard on clobbering, sufficient documentation,
and test coverage. After having seen my odd share of undocumented,
untested messes (usually cobbled together by generations of interns)
that are *barely* maintainable -with- the security blanket of type
checks, tool support, and no possibility of out-of-sight clobbering
changes, I pray /those people/ don't discover Ruby with me having to set
eyes on their code.

(I'm not accepting that the blame be laid on the language, the problems
are usually plain newbiedom, laziness, or copy-pasting snippets of "what
everyone before used and worked" instead of cleaning up the design. I've
seen it with Java, I've seen it with PHP, I have very little doubts I'll
eventually see it with Ruby when it reaches the popularity / hype
threshold. Some of that approach I already notice in some of the newbie
posts here, though in yet negligible amounts.)

Compared with Ruby having a low barrier to entry, followed by a (IMO)
steep learning curve to mastery, and the tendency of people in their
intern phase to be infatuated by shiny sparkly objects (like, oh, witty
eval meta hacks), this scares the heck out of me. (Hopefully, the
harmful features are too advanced for the pointy-haired-boss-acting-hip
sort of programmers.)

Basically, the messiest Java codebase I can imagine isn't nearly as
scary as the messiest Ruby one. FUD? Maybe. Personal experience?
Certainly - code I have to tackle would invariably be much worse if it
abused the worst of Ruby. But as usual, just a language comparison
without context doesn't really say anything, and the above quoted point
3 is an overgeneralising blanket statement; i.e. more often than not,
patent nonsense, and at least in this case, horribly phrased. Lack Of A
Clue (tm) is what leads to bad practices, the important question is "How
bad practices can you have?". Or scratch that, the only practically
relevant question is "How bad practices DO you have right now?",
anything more general tends toward speculation, and in programming
discussions, speculation leads to a vapid cloudfest really fast.

Provided the assumptions mentioned hold for a project during its
lifetime, the dangers of the approach are a non-issue. Except they don't
necessarily, and then the security blanket helps a lot - not everyone
can afford to wrinkle his nose in disgust at ugly code and quit on a
whim. (Need just a *little* more time in the global Fortune 100 entries
spearheading the CV before I can consider moving on from "grin and bear
the soul-draining horror 75% of the time, read bash.org for the rest of
it" employment to being more picky with regards to personally rewarding
experiences.) Ruby's critical point of mass adoption is yet to come, and
if the blogosphere noise is even remotely representative, I'm halfway
sure it -will- come (the mindshare Ruby gets despite not having a single
notable sugar daddy is rather spectacular), and I expect Interesting
Things (tm) to happen if/when it does. Hopefully, of the good sort (job
openings, development funding), but the opposite (widespread
language-altering libraries becoming a tangled mess that requires effort
to get working in a nontrivial project) isn't entirely impossible.

</rant>

David Vallner
Sleep Deprived

···

On 04.12.2006 13:04, Zouplaz wrote:

Max Muermann wrote:

I can only speak from a Java perspective here, but if you look at what
most frameworks and (shudder) containers out there do, you'll find
there is an enormous amount of reflection going on. The more modern
frameworks also make use of runtime bytecode manipulation.

Though to me it seems like runtime bytecode manipulation is still rather
rare, I see load-time enhancement much more often. While still magical,
changes made in the scope of a given class don't directly propagate into
other classes. If you can mention a counterexample though, do so, so I
know what to avoid.

Both of
these techniques introduce the same type of dynamics into a project
(and negate the "advantages" of static typing)

This is not completely analogous. Reflective method calls still fail
early on type errors. I would prefer if path expressions and their host
documents were precompiled if possible instead of handled reflectively
though. JSP + JSTL + EL must be the single worst combination of
technologies to debug and generally maintain.

, but - unlike Ruby - it
is almost impossible to understand what is going on under the hood.

Depends on the definition of "understand". If you mean trace the
(possible) inner workings in your head, then you're right; however,
usually just knowing the actions and corresponding results is enough to
use the code. Which is another reason why learning programming languages
is a valuable hobby - if you've seen a mechanism in one language (and
understood it because that allowed for a clear executable notation), you
don't have much trouble using effectively the same thing in another,
even if the implementation jumps through hoops. You just don't need to
care as long as it works.

And every framework does things in a different way. At least with
Ruby, you can recognise this kind of metaprogramming easily, and if
used in libraries its use is normally well documented.

In Ruby, metaprogramming alterations aren't nearly perfectly consistent
between libraries, only the low-level implementation methods which you
usually don't need to care about are the common denominator. Valuable as
a learning resource, I maintain that it's not quite relevant in practice.

As for recognition, I don't think it's too language-specific once you're
familiar with the high-level concept being realised; same for the
documentation.

Ruby programmers are simply more aware of these features and are
therefore able to use them effectively.

This is a double-edged sword. Java runs less risk of the features being
used maliciously because of the programmers being unaware. I'm not
stating this is a linearly better state of affairs, it just happens to
result in Java metaprogramming being contained to a select few
frameworks, where it's less likely destructive conflicts will occur.
It's still possible, and might yet show up as an issue, just not in the
near future.

However, greater skill in metaprogramming does NOT follow from greater
awareness. You can recite five ways of dynamically defining a
method from heart, and still mess up someone's number-munging script
because you absolutely had to require mathn instead of just using the
other operators.
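For the curious, mathn's effect can be simulated by hand. mathn itself is not loaded here (it has since been removed from newer Rubies); this hand-rolled redefinition of Integer#/ is deliberately the kind of process-wide change being warned about:

```ruby
# Simulation of the kind of global change 'mathn' used to make:
# integer division quietly starts returning Rationals everywhere.
class Integer
  alias_method :div_before_patch, :/

  def /(other)
    other.is_a?(Integer) ? Rational(self, other) : div_before_patch(other)
  end
end

result = (10 / 4) * 4

p 1 / 2   # => (1/2) -- no longer 0
p result  # => (10/1), i.e. 10; code relying on truncation (8 before) breaks
```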

David Vallner

Hi,

It's a serious question concerning language philosophy and design.
I believe all of us would be pleased to know the opinion of Matz on it.

2) Why should I not be scared by that?

You don't have to, as long as you trust other programmers and have time
to run the test suite before deploying the software. It gives you new
possibilities: the possibility to do many good things, along with a few
dangerous things. Ruby makes you free. You are even free to shoot
your own foot.

3) Why do most C#, Java, and C++ developers think that this approach is
dangerous and leads to bad practices?

Often there are cases where we cannot trust each other for various
reasons: immature programmers, miscommunication between team members,
etc. Under such circumstances, it is dangerous and leads to bad
practices. Don't use Ruby in such cases. It's no fun.

              matz.

···

In message "Re: Question about Ruby philosophy" on Tue, 5 Dec 2006 15:35:37 +0900, Mike Shock <mshock@shadrinsk.net> writes:

The bottom line seems to be that the current crop of Ruby programmers
are responsible enough not to mess with things under the hood unless
they understand the workings.

Is this an accurate assessment?

What scares me a bit is when Ruby becomes extremely popular and every
second programmer is using it, and less-disciplined developers start
providing libraries that tinker with core classes willy-nilly. At some
point you are going to get clashes between two same-named add-on methods
in (say) Array, that do different things <shudder>. That could be fun to
debug.

Sure, you can decide not to use such a library, but you have to get
burned first before you realize it's not good to use. I'm probably
showing my age here, but I remember cases where people used to code
macros in C header files (like MIN and MAX) that would clash with other
uses of MIN and MAX and cause untold havoc. Some of these were from
mainstream companies.

   harp:~ > irb
   irb(main):001:0> Array.freeze
   => Array

   irb(main):002:0> class << Array; def each() 'ha-ha'; end; end
   TypeError: can't modify frozen object
           from (irb):4

It's one thing to say "What if I'm stupid?". It's quite another thing to
say, "What if lots of stupid/undisciplined people start using Ruby?"
This tends to be the price of language popularity, I fear.

Probably, evolutionary forces will get rid of the poorly designed
libraries in time. I'm more worried about the beleaguered corporate IT
developer who often does not have much choice and has to use in-house
code.

I dunno. Thoughts?

what if people start coding in c and leave dangling pointers lying around,
double free pointers, corrupt the heap in their lib, forget to clean up
resources in at_exit handlers, or don't prefix each and every
var/function/macro with something like my_lib_XXX?

where would we be? ;)

i can hear people thinking 'java' out there already - but those guys are
manipulating byte code to subvert their fisticuffs already! anyone know a
thing or two about boost::any? on a related note, it seems the most useful
ocaml code uses the type system in a way that makes the promise of 'safe'
programs more difficult or impossible for the compiler to ensure...

history has shown that there is exactly __one__ re-usable component of code:
the shared library. at least in the *nix world, nearly all of them are written
in c and it is plagued by issues at least two orders of magnitude, imho, worse
than clobbering Array#each! yet, the internet continues to be powered by *nix
servers running said c libraries ;)

(ducking)

-a

···

On Tue, 5 Dec 2006, Edwin Fine wrote:
--
if you want others to be happy, practice compassion.
if you want to be happy, practice compassion. -- the dalai lama

Perl has had the same issue for a long time now and it probably was used by "every second programmer" in its prime. It works when people stay conscious of what they are doing and remember to document. Other times it degrades into an unusable mess and we hope natural selection will kill off those libraries.

I'm with Ruby's natural tendency on this issue: trust the programmer.

James Edward Gray II

···

On Dec 4, 2006, at 6:42 PM, Edwin Fine wrote:

What scares me a bit is when Ruby becomes extremely popular and every
second programmer is using it, and less-disciplined developers start
providing libraries that tinker with core classes willy-nilly. At some
point you are going to get clashes between two same-named add-on methods
in (say) Array, that do different things <shudder>. That could be fun to
debug.

Edwin Fine wrote:

The bottom line seems to be that the current crop of Ruby programmers are responsible enough not to mess with things under the hood unless they understand the workings.

Ruby: You'll Shoot Your Eye Out!

http://www.cafepress.com/rubyshootout.46707105

(Yes, a shameless plug. But hey; Christmas is coming! )

···

--
James Britt

"Blanket statements are over-rated"

I was surprised myself, too:

irb(main):001:0> 1 / 2
=> 0
irb(main):002:0> 1.quo 2
=> 0.5
irb(main):003:0> require 'rational'
=> true
irb(main):004:0> 1 / 2
=> 0
irb(main):005:0> 1.quo 2
=> Rational(1, 2)
irb(main):006:0> RUBY_VERSION
=> "1.8.5"

I believe I remember that "1 / 2" used to return "Rational(1, 2)" with 'rational' loaded. Is my memory wrong here?

Kind regards

  robert

···

On 04.12.2006 23:28, Martin DeMello wrote:

On 12/5/06, Robert Klemme <shortcutter@googlemail.com> wrote:

"dynamic typing" - there's a ton of them. Also, IIRC there are
discussions of these issues that revolve around "rational" which used to
change the behavior of Fixnum#/ which could cause problems for some
programs.

First I've heard of the 'used to'! When did that change?

The bottom line seems to be that the current crop of Ruby programmers are responsible enough not to mess with things under the hood unless they understand the workings.

Is this an accurate assessment?

I guess so.

What scares me a bit is when Ruby becomes extremely popular and every second programmer is using it, and less-disciplined developers start providing libraries that tinker with core classes willy-nilly. At some point you are going to get clashes between two same-named add-on methods in (say) Array, that do different things <shudder>. That could be fun to debug.

Sure, you can decide not to use such a library, but you have to get burned first before you realize it's not good to use. I'm probably showing my age here, but I remember cases where people used to code macros in C header files (like MIN and MAX) that would clash with other uses of MIN and MAX and cause untold havoc. Some of these were from mainstream companies.

Actually there are two significant differences between this and modified Ruby core classes: C is compiled and CPP macros are gone by the time the error shows up (it may be that modern C/C++ compilers take care of this when compiling debug code, dunno). You can more easily debug a Ruby lib (provided it has no native parts) than a C lib where you might not have the source code.

Also, I believe Ruby more easily encourages the use of unit tests, and it avoids a lot of other errors (famous buffer overruns), which IMHO makes up for the potential danger of core class modifications.

It's one thing to say "What if I'm stupid?". It's quite another thing to say, "What if lots of stupid/undisciplined people start using Ruby?" This tends to be the price of language popularity, I fear.

Maybe a bit. But I think the community will take care of this.

Probably, evolutionary forces will get rid of the poorly designed libraries in time. I'm more worried about the beleaguered corporate IT developer who often does not have much choice and has to use in-house code.

I think evolution will take care pretty well. :-)

Kind regards

  robert

···

On 05.12.2006 01:42, Edwin Fine wrote:

Yukihiro Matsumoto <matz@ruby-lang.org> writes:

Often there are cases where we can not trust each other for various
reasons: immature programmers, discommunication between team members,
etc. Under such circumstance, it is dangerous and lead to bad
practices. Don't use Ruby for such cases. It's no fun.

Do you consider Ruby a good language for teaching people who haven't
programmed yet? Just wondering.

···

              matz.

--
Christian Neukirchen <chneukirchen@gmail.com> http://chneukirchen.org