Stefan Rusterholz wrote:
Christian Luginbuehl wrote:
I know that this is not a Ruby-specific problem, and I can live with it in a
lower-level language like C, where a 'float' is represented by 32 or 64
bits depending on your processor type, just as an 'int' is stored as
a 32-bit (?) value.
If you have an idea of how to do that, I'm sure the core developers
would love to hear it. The nature of the problem is not the same as with
Fixnums vs. Bignums. With non-integral values, knowledge of your
particular problem is required. Ruby provides several ways to deal with
them, but it is up to you to choose the best one. There are Rational,
BigDecimal and Float. Personally, I don't see how the system could
automatically select "the best" variant, as that depends on your
needs. As far as I can see, those needs are not algorithmically
ascertainable. What is IMHO arguable is what literals should default
to, e.g. whether 2.95 should mean Float("2.95"), BigDecimal("2.95") or
Rational(295, 100).
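(For illustration -- a quick sketch of those three choices, assuming a
reasonably recent Ruby; the printed Float digits are the usual IEEE 754
double behaviour and may vary in the last place:)

  require 'bigdecimal'

  f = Float("2.95")        # IEEE 754 double; stores the nearest binary fraction
  d = BigDecimal("2.95")   # arbitrary-precision decimal; stores 2.95 exactly
  r = Rational(295, 100)   # exact ratio of integers; reduces to 59/20

  format("%.17g", f)       # => "2.9500000000000002" -- the binary approximation shows
  d.to_s("F")              # => "2.95"
  r + Rational(1, 100)     # => (74/25), i.e. exactly 2.96; exact types stay exact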
I didn't put a lot of thought into it, so there might be many issues to
solve. But again, it would be nice to have this behaviour in a
programming language that - generally very successfully - follows the
principle of least surprise (POLS).
Yes, it would be nice. It's just that, as far as I can see, it's
impossible. But some people love to solve seemingly impossible problems,
so who knows...
Regards
Stefan
1. As far as the "principle of least surprise" is concerned, the fact
that floating point arithmetic on most systems is done in binary, and
hence numbers like 0.1 are not exactly representable, is surprising only
to those who have never been trained in the use of floating point
arithmetic. If Ruby is your *first* language, or the first one in which
you've used floating point, I can understand the surprise. But to my
knowledge, Ruby floating point behaves no more "surprisingly" than
floating point in any other language.
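(The canonical demonstration -- a quick irb sketch, and these exact
results come out of any language that uses IEEE 754 doubles:)

  0.1 + 0.2               # => 0.30000000000000004
  0.1 + 0.2 == 0.3        # => false

  # 0.1 has no finite binary expansion; what is stored is the nearest double:
  format("%.20f", 0.1)    # => "0.10000000000000000555"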
2. At the risk of angering the duck typing crowd, I've been a numerical
programmer since Fortran II, in which one got automatic "promotion" of
integers to floating point values in expressions and assignments, but
not much else in the way of conveniences. In short, I *expect* to have
to declare the types of numbers!
I *expect* to have to specify whether a number is integer (fixnum),
multi-precision integer (bignum), floating, double precision, complex,
rational or big decimal, if such a thing exists in the language. It is
surprising to *me* when I *don't* need to specify that.
For that matter, I also expect to have to declare fixed sizes for
multidimensional arrays of a uniform numerical type.
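(Ruby's behaviour sits somewhere in between: integer magnitude is
handled automatically, but the *kind* of number is still your
declaration to make, via the operands you supply. A sketch, assuming a
recent Ruby:)

  # Magnitude promotion is automatic (the Fixnum/Bignum split is invisible):
  (2 ** 100).class      # => Integer (Bignum on pre-2.4 Rubies), no overflow

  # But promotion across numeric kinds is your choice, operand by operand:
  7 / 2                 # => 3     -- integer division, no automatic promotion
  7 / 2.0               # => 3.5   -- one Float operand promotes the expression
  Rational(7, 2)        # => (7/2) -- exact, if the problem calls for it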
In return for these declarations, I expect the compiler / interpreter /
runtime to provide me with optimization. Ruby doesn't do that part of
it, and may never do it, since it's easy to offload numeric processing
from Ruby to C, where that kind of magic can happen at full warp speed.
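(The cost that offloading avoids is easy to measure from within Ruby
itself. A hedged micro-benchmark sketch using only the stdlib -- the
array names and sizes here are illustrative, and absolute numbers will
vary by machine and Ruby version:)

  require 'benchmark'

  # The kind of hot numeric loop one would hand off to C:
  a = Array.new(1_000_000) { rand }
  b = Array.new(1_000_000) { rand }

  time = Benchmark.realtime do
    sum = 0.0
    a.each_index { |i| sum += a[i] * b[i] }   # per-element Ruby dispatch
    sum
  end
  puts "pure-Ruby dot product: #{time} s"     # a C extension runs the same loop far faster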