Not quite getting it

The type checking in the popular strongly typed languages like Java
and C++ is actually quite weak at uncovering incorrect code.

They are really about imposing constraints so that code isn't run
against the wrong data at run-time. Such constraints are necessary in
languages whose implementations determine which method to run by a
type determined at compile time, rather than by using self-describing
runtime information to do method dispatching.
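
To make that concrete (a minimal sketch with made-up classes, not any
particular library), in Ruby the receiver itself carries the
information used to pick the method at run-time:

class Duck
  def speak; "quack"; end
end

class Dog
  def speak; "woof"; end
end

# No declared types anywhere: the call is resolved from whatever object
# actually arrives, using its own methods at run-time.
def make_noise(animal)
  animal.speak
end

make_noise(Duck.new)   # => "quack"
make_noise(Dog.new)    # => "woof"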

Many errors aren't found at compile time in these languages, for
example, array bounds errors. It's very hard to design a static type
system which detects the error in

function foo(array a, int i)
    a[i]
end

ary = new Array(10)
foo(ary, 12)

There are certainly languages which attempt this, but they aren't
widely used. Some treat arrays with different bounds as distinct
types, but this makes it hard to write many ordinary things. Others do
type inference, but the ones I've seen so far tend to be hard for mere
mortals to understand.

So languages like Java, despite being statically typed, defer array
bounds checking to run-time. In the C family of languages, array
bounds violations tend to be unchecked and do lead to mysterious
failures. Of course dynamically typed languages almost invariably do
bounds checking at run-time, sometimes, as in the case of Ruby arrays,
providing dynamically extensible bounds.
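
Concretely, standard Ruby Array behaviour shows both the run-time check
and the extensible bounds (nothing hypothetical here, this is just the
core class):

ary = Array.new(10)   # ten slots, all nil

ary[12]               # => nil; reading past the end simply answers nil
begin
  ary.fetch(12)       # explicit bounds check: raises IndexError at run-time
rescue IndexError => e
  puts e.message
end

ary[12] = :x          # assigning past the end silently grows the array
ary.length            # => 13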

As for run-away errors being hard to debug, that's true, but from
practical experience these tend to be much worse in statically typed
languages where the compiler got fooled by a typecast, or a pointer
alias bug, and branched through a non-existent or wrong virtual
function table, or did a fetch or store outside of the bounds of the
object because it got the type wrong.

Although you might not get an error right away when you confuse a
giraffe with an order (any more than you are guaranteed to in a
statically typed language under all circumstances), experience
indicates that hard-to-debug problems are actually far less common
than someone extrapolating only from experience with static languages
would expect, and that they are usually much easier to debug when they
do occur.

If your concern is correctness, which it should be, then it's best to
use best practices for writing in dynamic languages, such as TDD/BDD
rather than attempting to mimic techniques from statically typed
languages which are really there to cover the class of errors caused
by a statically typed implementation.
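
As one rough illustration of what that looks like (Test::Unit from the
standard library, with hypothetical Order and LineItem classes defined
only to make the sketch self-contained), the test pins down behaviour
without any type annotations:

require 'test/unit'

# Hypothetical domain classes, just enough for the example.
class LineItem
  attr_reader :name, :price
  def initialize(name, price)
    @name, @price = name, price
  end
end

class Order
  def initialize; @items = []; end
  def add(item);  @items << item; end
  def total;      @items.inject(0.0) { |sum, i| sum + i.price }; end
end

class OrderTest < Test::Unit::TestCase
  def test_total_sums_line_items
    order = Order.new
    order.add(LineItem.new("book", 10.00))
    order.add(LineItem.new("pen",   2.50))
    assert_equal 12.50, order.total
  end
end

If someone passes an object that doesn't behave like an Order, it is
this kind of test, not a type declaration, that flushes the problem out.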

http://talklikeaduck.denhaven2.com/articles/2008/05/18/on-ceremony-and-training-wheels

···

On Sun, May 18, 2008 at 12:40 PM, Roger Alsing <roger.alsing@precio.se> wrote:

Instead, look at it like this: how likely is it that a Giraffe is able
to do what an Order does? If the Giraffe doesn't quack like an Order,
then an exception is going to be thrown automatically anyway.

Just program in it, and see how often it ever actually bites you. Start
with small things, work your way up.

Good advice.

This kind of mindset is just asking for things to go wrong.

Let's say that the Giraffe happens to have a few methods that do match
those of an Order, so that it manages to introduce incorrect values that
propagate through your data before the exception is eventually caught.

(Of course a Giraffe is an extreme case, but multiple domain objects can
have attributes that overlap, such as price, name, etc.)

And once you catch the exception, you have absolutely no clue at all
what has been affected.

Incorrectness should be caught early before it litters your data, or you
will have a seriously hard time debugging.

Just because the language is non-strict when it comes to types doesn't
mean you should not care about correctness?

--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

The type checking in the popular strongly typed languages like Java
and C++ is actually quite weak at uncovering incorrect code.

Yes, absolutely: statically typed languages have a problem finding
logical flaws at compile time.
But so does Ruby, since it doesn't even have a compile time.
So that's a non-argument.

If C# can't find logical problems at compile time, and Ruby can't find
logical flaws at compile time either (due to the lack of one), then you
will find the problem at runtime in C#, and likewise in Ruby.

How does that make C# worse than Ruby in that aspect?

They are really about imposing constraints so that code isn't run
against the wrong data at run-time. Such constraints are necessary in
languages whose implementations determine which method to run by a
type determined at compile time, rather than by using self-describing
runtime information to do method dispatching.

I will still argue that a method name alone is not self-describing.
It only describes the signature, not the semantics.
(While I do argue that interfaces carry semantics with them, and that
you have to willingly break those semantics if you implement an
interface method incorrectly.)
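
To put that in code (hypothetical classes, just to illustrate the
point): both of these respond to the same message, yet the name alone
tells you nothing about what it means:

class Door
  def close
    @open = false              # shuts the door
  end
end

class Sale
  def close
    @finalized_at = Time.now   # finalizes the deal
  end
end

[Door.new, Sale.new].each { |thing| thing.close }  # no error, very different meanings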

Many errors aren't found at compile time in these languages, for
example, array bounds errors. It's very hard to design a static type
system which detects the error in

function foo(array a, int i)
    a[i]
end

ary = new Array(10)
foo(ary, 12)

There are certainly languages which attempt this, but they aren't
widely used. Some treat arrays with different bounds as distinct
types, but this makes it hard to write many ordinary things. Others do
type inference, but the ones I've seen so far tend to be hard for mere
mortals to understand.

So languages like Java, despite being statically typed, defer array
bounds checking to run-time.

Yes, and Ruby still does runtime checks for everything, so it is still
a non-argument.

You are comparing the compile-time checks of static languages alone to
the runtime checks of Ruby.
That's not how it works: static languages have compile-time AND runtime
checks.

(And I'm confident that you know that there are list types in other
languages if we need expansion)

As for run-away errors being hard to debug, that's true, but from
practical experience these tend to be much worse in statically typed
languages where the compiler got fooled by a typecast, or a pointer
alias bug, and branched through a non-existent or wrong virtual
function table, or did a fetch or store outside of the bounds of the
object because it got the type wrong.

Well, that's a bit of a stretch.
You are taking the worst of C and C++ and making it look like it's a
major problem in all statically typed languages.
(Yes, C# has pointers if you want them, but most people there use them
once in a lifetime.)

If your concern is correctness, which it should be, then it's best to
use best practices for writing in dynamic languages, such as TDD/BDD
rather than attempting to mimic techniques from statically typed
languages which are really there to cover the class of errors caused
by a statically typed implementation.

So you are saying that just because you do TDD you do not apply argument
validation in your API?

Argument validation has nothing to do with static typing; it has to do
with good practice and preventing the consumers of your API from
screwing up badly.
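
(A minimal sketch of the kind of validation I mean, with hypothetical
names; note that none of it depends on declared types:)

def apply_discount(order, percent)
  # Guard clauses: fail fast with a clear message instead of letting
  # bad input propagate through the data.
  raise ArgumentError, "order must respond to #total" unless order.respond_to?(:total)
  raise ArgumentError, "percent must be between 0 and 100" unless (0..100).include?(percent)

  order.total * (1 - percent / 100.0)
end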

I do practice TDD, but that doesn't make me assume that every consumer
of my code will also do TDD.

···

--
Posted via http://www.ruby-forum.com/.

Also, I'm not here to wage a war about static vs. dynamic typing.
I do know how both work.

What I was asking was how you were dealing with the potential problems
associated with dynamic typing.

And replying with exaggerations and metaphors aimed at six-year-olds
isn't exactly answering the questions in a constructive way; rather, it
makes you look defensive when someone is questioning the sanity of what
you do.

···

--
Posted via http://www.ruby-forum.com/.

Roger Alsing wrote:

Also, I'm not here to wage a war about static vs. dynamic typing.
I do know how both work.

What I was asking was how you were dealing with the potential problems
associated with dynamic typing.

In a sane way: Once the potential gets realized, the code gets adjusted.

Or do you guard against XSS when your application isn't supposed to be
on the web right now, or only when the issue actually arises?

How this adjustment is *done* depends on the code in question.

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan
Blog: http://justarubyist.blogspot.com

~ The purpose of writing is to inflate weak ideas, obscure pure
reasoning, and inhibit clarity. With a little practice, writing can be
an intimidating and impenetrable fog! -- Calvin

In a sane way: Once the potential gets realized, the code gets adjusted.

Or do you guard against XSS when your application isn't supposed to be

Absolutely, I'm all with you on that one.

BUT:

The chance that someone other than yourself will be interacting with
your code is fairly high if you do anything other than hobby coding for
yourself.

Thus, preventing others from messing up is good practice in pretty much
every case.

···

--
Posted via http://www.ruby-forum.com/.

I'm not sure what to say beyond what has been said.

Lots of people are able to deal with duck typing without particularly
finding it harmful. If you are starting a Ruby project that is going to
require multiple people working with your code, then they are going to
be used to this style of coding anyway; they won't have a problem with
it. They know about documentation. They know about unit tests. These are
things that you are probably doing in your compile-time-type-checked
languages anyway; you aren't losing a bunch of time with that.

Do you lose some safety? Sure. Is it within an acceptable margin for the
benefit in agility? Many people think so.

I think the overall point people have been trying to make is: please
don't go trying to check all of your types all of the time via is_a? or
kind_of? checks all over the place (i.e. don't write Ruby like C#). If
it bugs you that much, well, the other languages with type checking
exist for a reason!
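
Roughly the contrast being drawn, as a sketch with made-up names (Order,
each_item, and dispatch are all hypothetical):

# The style being discouraged: defensive class checks sprinkled everywhere.
def ship_with_checks(order)
  raise TypeError, "expected an Order" unless order.is_a?(Order)
  order.each_item { |item| dispatch(item) }
end

# The duck-typed alternative: just send the message. A NoMethodError at
# the call site, plus the unit tests, usually tells you all you need.
def ship(order)
  order.each_item { |item| dispatch(item) }
end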

Again, it's really a matter of trying it. If you find yourself getting
bitten by it, don't use it. And yes, weigh this before starting some
sort of gigantic expensive project.

···

--
Posted via http://www.ruby-forum.com/.

Roger Alsing wrote:

In a sane way: Once the potential gets realized, the code gets adjusted.

Or do you guard against XSS when your application isn't supposed to be

Absolutely, I'm all with you on that one.

BUT:

The chance that someone other than yourself will be interacting with
your code is fairly high if you do anything other than hobby coding for
yourself.

Call me naive, but I think that developers are able to read
documentation, unit tests, and example code.

Thus, preventing others from messing up is good practice in pretty much
every case.

Oh no, I cannot anticipate every kind of error somebody might make, nor
every environment my code will be used in. If it is a genuine bug or a
misbehaving feature, I accept patches (or fix it myself).

While some sanity checks on data are certainly a Good Thing, going
overboard doesn't help.

I mean: feeding hpricot something other than HTML or XML isn't
hpricot's problem, but the *user's* problem. Of course hpricot should
be courteous enough to throw an exception if it gets data it cannot
process.

--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan
Blog: http://justarubyist.blogspot.com

~ - You know you've been hacking too long when...
...you want to wash your hair and think: awk -F"/neck" '{ print $1 }' |
shower