Floating point rounding error?

1999.0.ceil
=> 1999
(12.99+7.0)*100
=> 1999.0
((12.99+7.0)*100).ceil
=> 2000

Is this expected?
Is there any BigFloat that avoids this?
Thanks!
-=r

···


> Is there any BigFloat that avoids this?

BigDecimal:

require 'bigdecimal'

((BigDecimal("12.99")+BigDecimal("7.0"))*100).ceil
=> #<BigDecimal:101f8f8c,'0.1999E4',4(12)>

((BigDecimal("12.99")+BigDecimal("7.0"))*100).to_f.ceil
=> 1999

I think you have to take care, though, that the BigDecimal isn't
converted back into a normal Float:

(BigDecimal("12.99")+7.0).class

=> Float
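
One way around that (a minimal sketch; coercion details vary between
bigdecimal versions) is to convert the Float side through a string
yourself before mixing:

require 'bigdecimal'

price    = BigDecimal("12.99")
shipping = BigDecimal(7.0.to_s)    # via a string, so no Float sneaks in
total    = (price + shipping) * 100
total.class                        # => BigDecimal all the way through
total.ceil                         # => 1999 (a BigDecimal in 1.8, an Integer later)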

Float computer math is full of those effects - and so are the archives of this list with articles similar to yours.

Kind regards

  robert

···

On 21.02.2009 19:14, Roger Pack wrote:

> 1999.0.ceil
> => 1999
> (12.99+7.0)*100
> => 1999.0
> ((12.99+7.0)*100).ceil
> => 2000
>
> Is this expected?

> Float computer math is full of those effects - and so are the archives
> of this list with articles similar to yours.

I hate those subtle rounding errors. Any thoughts on whether I should
suggest that Ruby use BigDecimal instead of Float by default? I.e.
7.0 + 7.0
is parsed as two BigDecimals, and to get Floats you write Float(7.0) or
Float.new(7.0)?

···


Yes. There are several reasons against it, IMHO:

1. existing code might be broken

2. efficiency (Float is faster than BigDecimal)

3. standards (AFAIK the Float implementation is backed by an ISO standard while I am not sure whether this is the case for BigDecimal).

4. limited use: while your particular example will work as you expect, BigDecimal still cannot represent every real number (in math terms), only a subset of the rationals, so rounding errors for other numbers will still be introduced when switching from Float to BigDecimal by default (see the sketch below).
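
A quick irb sketch of that last point: even BigDecimal cannot represent
1/3 exactly, so the rounding error just moves elsewhere:

require 'bigdecimal'

third = BigDecimal("1") / BigDecimal("3")
third * 3                     # => 0.99999...9, just under 1
third * 3 == BigDecimal("1")  # => false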

Sorry to disappoint you, but I'd say this is a learning experience everybody in programming has to undergo: floats are inherently inexact. More generally, computer representations of mathematical constructs like numbers are always imperfect, and it is your responsibility as the writer of a program to ensure you properly deal with this mismatch.

One solution which comes to mind is this: have an extension or, rather, a command line switch which changes the default behavior. But this is not without problems either: library code might suddenly start to fail when Ruby is started with the BigDecimal flag, etc.

Kind regards

  robert

···

On 21.02.2009 22:33, Roger Pack wrote:

>> Float computer math is full of those effects - and so are the archives
>> of this list with articles similar to yours.
>
> I hate those subtle rounding errors. Any thoughts on whether I should suggest that Ruby use BigDecimal instead of Float by default?

> 1. existing code might be broken

True

> 2. efficiency (Float is faster than BigDecimal)

Can't disagree with that.

> 3. standards (AFAIK the Float implementation is backed by an ISO
> standard while I am not sure whether this is the case for BigDecimal).

True.

> 4. limited use: while your particular example will work as you expect,
> BigDecimal still cannot represent every real number (in math terms),
> only a subset of the rationals, so rounding errors for other numbers
> will still be introduced when switching from Float to BigDecimal by
> default.

The only real benefit of it is that it won't bite "as often," for better
or worse. That leaves a learning curve that could hit you at the edges
[adding extremely small and extremely large numbers, for example] rather
than during normal day-to-day use :)
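
For example (a quick sketch, assuming IEEE doubles), adding a small
number to a sufficiently large one loses the small one entirely:

big   = 1.0e16      # above 2**53, where doubles are spaced 2 apart
small = 1.0
big + small == big  # => true: small vanishes without a trace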

One "almost good" answer would be to have ruby output float to strings
by default with enough precision to be able to recreate the original.
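
You can already get that by hand (a sketch; 17 significant digits are
enough to round-trip any IEEE double):

f = (12.99 + 7.0) * 100
f.to_s                     # => "1999.0" in Ruby 1.8, which hides the problem
"%.17g" % f                # => "1999.0000000000002", the real value
Float("%.17g" % f) == f    # => true: enough digits to recreate the original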

Thoughts?
-=r

···


Roger Pack wrote:

>> 1. existing code might be broken
>
> True
>
>> 2. efficiency (Float is faster than BigDecimal)
>
> Can't disagree with that.
>
>> 3. standards (AFAIK the Float implementation is backed by an ISO
>> standard while I am not sure whether this is the case for BigDecimal).
>
> True.

In this case, I think it's IEEE.

>> 4. limited use: while your particular example will work as you expect,
>> BigDecimal still cannot represent every real number (in math terms),
>> only a subset of the rationals, so rounding errors for other numbers
>> will still be introduced when switching from Float to BigDecimal by
>> default.
>
> The only real benefit of it is that it won't bite "as often," for better
> or worse. That leaves a learning curve that could hit you at the edges
> [adding extremely small and extremely large numbers, for example] rather
> than during normal day-to-day use :)

You don't even have to get into numbers that are that big...

irb(main):001:0> a = (10.11 * 100).to_i
=> 1011
irb(main):002:0> b = (10.12 * 100).to_i
=> 1011
irb(main):003:0> a == b
=> true
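
Printing more digits shows why (a sketch, assuming IEEE doubles):
10.11 * 100 happens to round to exactly 1011.0 here, while 10.12 * 100
lands just below 1012, so truncation pulls both onto the same integer:

"%.17g" % (10.11 * 100)   # => "1011"
"%.17g" % (10.12 * 100)   # => "1011.9999999999999"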

One "almost good" answer would be to have ruby output float to strings
by default with enough precision to be able to recreate the original.

Thoughts?
-=r

This behavior isn't limited to Ruby; any language which uses the IEEE
spec for floating point exhibits it. For example, equivalent C++ code
gives the same result...

#include <iostream>

using namespace std;

int main() {
  double x = 10.12;             // Ruby's Float is a C double
  int y = (int) (x * 100.0);    // truncates 1011.9999... down to 1011
  cout << x << " , " << y << endl;
  return 0;
}

gives: 10.12 , 1011

I guess the take home is to avoid floats whenever you'll be doing any
rounding or truncating.

···


> Roger Pack wrote:
>>> 3. standards (AFAIK the Float implementation is backed by an ISO
>>> standard while I am not sure whether this is the case for BigDecimal).
>>
>> True.
>
> In this case, I think it's IEEE.

Right, thanks for the correction!

> I guess the take home is to avoid floats whenever you'll be doing any rounding or truncating.

I would not go that far. IMHO the take-home should be awareness of the fact that mathematical numbers do not blend well with any computerized representation, and that you always have to be careful when dealing with them. Floats are perfectly fine for a large number of applications, so avoiding them in general seems too harsh a rule. Btw, if that were true, we could throw them out of the language without loss, and I am pretty sure that suggestion would meet vigorous objection. :)

Kind regards

  robert

···

On 22.02.2009 03:21, Raphael Clancy wrote:

Robert Klemme wrote:

> I would not go that far. IMHO the take-home should be awareness of the
> fact that mathematical numbers do not blend well with any computerized
> representation, and that you always have to be careful when dealing
> with them. Floats are perfectly fine for a large number of
> applications, so avoiding them in general seems too harsh a rule. Btw,
> if that were true, we could throw them out of the language without
> loss, and I am pretty sure that suggestion would meet vigorous
> objection. :)
>
> Kind regards
>
>   robert

I totally agree with you. I guess what I was trying to say is that, due
to rounding error, floats aren't appropriate for low-precision
computations or for places where you are likely to have to convert to
int. It's better to use BigDecimal or to handle the decimal portion in
some other way, as sketched below.
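
A minimal sketch of one such "other way" for money-like values
(hypothetical names): keep integer cents everywhere and only format at
the edges:

price_cents    = 1299          # 12.99 represented exactly as an Integer
shipping_cents = 700
total_cents    = price_cents + shipping_cents   # => 1999, no rounding anywhere

"%d.%02d" % [total_cents / 100, total_cents % 100]   # => "19.99"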

R.

···
