The type checking in popular strongly typed languages like Java and
C++ is actually quite weak at uncovering incorrect code. It's really
about imposing constraints so that code isn't run against the wrong
data at run-time. Such constraints are necessary in languages whose
implementations decide which method to run based on a type determined
at compile time, rather than by using self-describing runtime
information to do method dispatch.
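To make the contrast concrete, here's a small illustrative Ruby sketch
(the Duck and Robot classes are invented just for this example). Which
speak method runs is decided at run-time by the object receiving the
message, not by any declared parameter type:

class Duck
  def speak
    "quack"
  end
end

class Robot
  def speak
    "beep"
  end
end

def make_it_speak(thing)
  # No declared type for thing; the object carries its own class
  # information, and the right #speak is found at run-time.
  thing.speak
end

make_it_speak(Duck.new)   # => "quack"
make_it_speak(Robot.new)  # => "beep"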
Many errors aren't found at compile time in these languages, for
example, array bounds errors. It's very hard to design a static type
system which detects the error in
function foo(array a, int i)
  a[i]    # out of bounds whenever i falls outside a's bounds
end

ary = new Array(10)
foo(ary, 12)
There are certainly languages which attempt this, but they aren't
widely used. Some consider arrays with different bounds to be of
different types, but that makes it hard to write code which works on
arrays of arbitrary size. Others use type inference, but the ones I've
seen so far tend to be hard for mere mortals to understand.
So languages like Java, despite being statically typed, defer array
bounds checking to run-time. In the C family of languages, array
bounds violations tend to be unchecked and do lead to mysterious
failures. Of course, dynamically typed languages almost invariably do
bounds checking at run-time, and some, as in the case of Ruby arrays,
provide dynamically extensible bounds.
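For example, here's roughly how Ruby handles the earlier foo example
(an irb-style sketch; the exact IndexError message varies by version):

ary = Array.new(10)   # ten elements, all nil
ary[12]               # => nil; an out-of-bounds read just returns nil
ary.fetch(12)         # raises IndexError, the strict form of indexing
ary[12] = :twelve     # writing past the end extends the array
ary.length            # => 13; slots 10 and 11 were filled with nil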
As for runaway errors being hard to debug, that's true, but from
practical experience these tend to be much worse in statically typed
languages, where the compiler got fooled by a typecast or a
pointer-aliasing bug and branched through a non-existent or wrong
virtual function table, or did a fetch or store outside the bounds of
the object because it got the type wrong.
Although you might not get an error right away when you confuse a
Giraffe with an Order, any more than you are guaranteed to in a
statically typed language under all circumstances, experience
indicates that hard-to-debug problems are actually far less common
than someone extrapolating only from experience with static languages
would expect, and that they are usually much easier to debug when they
do occur.
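For instance, here's a hypothetical sketch of the Giraffe/Order mix-up
from this thread. The failure happens at the first message the object
can't answer, and the NoMethodError names both the missing method and
the object that received it:

class Order
  def total
    42.0
  end
end

class Giraffe
  def height
    5.5
  end
end

def print_invoice(order)
  puts order.total
end

print_invoice(Order.new)    # prints 42.0
print_invoice(Giraffe.new)  # raises NoMethodError: undefined method
                            # `total' for the Giraffe instance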
If your concern is correctness, and it should be, then it's best to
use the practices that work well in dynamic languages, such as
TDD/BDD, rather than attempting to mimic techniques from statically
typed languages which are really there to cover the class of errors
caused by a statically typed implementation.
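As a rough illustration (a Test::Unit-style sketch with made-up class
names), tests like these exercise the real objects, so wiring the
wrong thing into the code fails the suite immediately, long before bad
values can spread through your data:

require 'test/unit'

# A hypothetical domain object, just for the example.
class Order
  def initialize(price)
    @price = price
  end

  def total
    @price
  end
end

class OrderTest < Test::Unit::TestCase
  def test_total_reflects_the_price
    assert_equal 42.0, Order.new(42.0).total
  end

  def test_anything_used_as_an_order_answers_total
    assert_respond_to Order.new(42.0), :total
  end
end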
http://talklikeaduck.denhaven2.com/articles/2008/05/18/on-ceremony-and-training-wheels
···
On Sun, May 18, 2008 at 12:40 PM, Roger Alsing <roger.alsing@precio.se> wrote:
> Instead, look at it like this: how likely is a Giraffe able to do what
> an Order does? If the Giraffe doesn't quack like an Order, then an
> exception is going to be thrown automatically anyway.
>
> Just program in it, and see how often it ever actually bites you. Start
> with small things, work your way up.
>
> Good advice.
>
> This kind of mindset is just asking for things to go wrong.
>
> Let's say that the Giraffe happens to have a few methods that do match
> those of an Order, so that it manages to introduce incorrect values that
> propagate through your data before the exception is eventually caught.
> (Of course a Giraffe is an extreme case, but multiple domain objects can
> have attributes that overlap, such as price, name, etc.)
>
> And once you catch the exception, you have absolutely no clue at all
> what has been affected.
>
> Incorrectness should be caught early, before it litters your data, or
> you will have a seriously hard time debugging.
>
> Just because the language is non-strict when it comes to types doesn't
> mean you should not care about correctness?
--
Rick DeNatale
My blog on Ruby
http://talklikeaduck.denhaven2.com/