0.0 sign

Hello!

I don’t know if this is intentional, but is there a reason
that 0.0 holds the sign?

tiny = 0.0000000000000000001
inf = 1/tiny

a = 1/inf
b = -1/inf

p a, b, (a-b), (a+b)

a = -1/inf
b = 1/inf

p a, b, (a-b), (a+b)

I can hardly imagine a use for it; OTOH it can
make the output a little uglier. :slight_smile:

I checked Python: although 0.0 carries the sign there too, it is not
propagated into the results of subtraction/addition the way it is in Ruby.
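(A note on the snippet above: it relies on the tiny literal collapsing to
zero, which depends on the parser bug discussed later in this thread. A
version-independent sketch that produces the same signed zeros builds the
infinity explicitly:)

```ruby
# Build infinity directly instead of via an underflowing literal.
inf = 1.0 / 0.0        # Infinity

a = 1 / inf            # 0.0
b = -1 / inf           # -0.0

# The -0.0 shows up in the printed output, which is the "ugliness"
# being complained about.
p a, b, a - b, a + b
```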

Regards,

···


University of Athens        I bet the human brain
Physics Department          is a kludge   --Marvin Minsky

--- /home/elathan/hacking/ruby/numeric.c.orig	2004-05-05 00:18:12.000000000 +0300
+++ /home/elathan/hacking/ruby/numeric.c	2004-05-05 00:18:41.000000000 +0300
@@ -502,6 +502,7 @@
     avalue = fabs(value);
     if (avalue == 0.0) {
 	fmt = "%.1f";
+	value = avalue;
     }
     else if (avalue < 1.0e-3) {
 	d1 = avalue;

IEEE 754 (floating point arithmetic standard) specifies negative zero
for floats. For more explanation, see
http://www.obtuse.com/resources/negative_zero.html

If a mathematical result is too small for your hardware to represent, the
sign of the returned zero tells you whether it was on the negative side or
the positive side.

def really_complex_function(num)
  num / 1_000_000_000_000_000_000.0
end
==>nil
really_complex_function( 0.000_000_000_000_000_000_1 )
==>0.0
really_complex_function( -0.000_000_000_000_000_000_1 )
==>-0.0

···

On May 5, 2004, at 10:33 AM, Elias Athanasopoulos wrote:

Hello!

I don’t know if this is intentional, but is there a reason
that 0.0 holds the sign?

What I find much more problematic than 0.0 having a sign is that ruby treats

0.0000000000000000001 == 0.0 as true.

If I’ve counted correctly, that should be 1 * 10^-19. Doubles can represent
numbers down to approximately 10^-308.

1e-300 == 0.0 is false in ruby, as it should be.

There must be a bug in the interpreter, where it reads Float literals.

Tobias

Yes, but how can you distinguish them inside your code? 0.0 == -0.0,
0.0 === -0.0, and even 0.0.eql?(-0.0) all evaluate to true.
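(One way to tell the zeros apart, as an illustrative sketch not taken from
this thread: inspect the IEEE 754 sign bit directly via Array#pack. The
helper name `negative_zero?` is made up for this example.)

```ruby
# "G" packs a double in network (big-endian) byte order, so the very
# first bit of the packed string is the IEEE 754 sign bit.
def negative_zero?(x)
  x.zero? && [x].pack("G").unpack("B1").first == "1"
end

negative_zero?(-0.0)  # => true
negative_zero?(0.0)   # => false
negative_zero?(-1.0)  # => false (not a zero at all)
```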

Regards,

···

On Thu, May 06, 2004 at 04:28:36AM +0900, Mark Hubbart wrote:

On May 5, 2004, at 10:33 AM, Elias Athanasopoulos wrote:

Hello!

I don’t know if this is intentional, but is there a reason
that 0.0 holds the sign?

IEEE 754 (floating point arithmetic standard) specifies negative zero
for floats. For more explanation, see
http://www.obtuse.com/resources/negative_zero.html

if a mathematical result too small for your hardware to compute is
returned, you can determine whether it was on the negative side, or the
positive side.

def really_complex_function(num)
  num / 1_000_000_000_000_000_000.0
end
==>nil
really_complex_function( 0.000_000_000_000_000_000_1 )
==>0.0
really_complex_function( -0.000_000_000_000_000_000_1 )
==>-0.0



For me, it cuts off after 1e-17:

0.000_000_000_000_000_01
==>1e-17
0.000_000_000_000_000_001
==>0.0
I’m not sure it’s a bug in float parsing, since you can go:
1e-18
==>1e-18
1e-300
==>1e-300
…which would be preferable to typing in all those zeros anyway. But I
would be curious to know why long float literals are truncated.

–Mark

···

On May 5, 2004, at 3:13 PM, Tobias Peters wrote:

What I find much more problematic than 0.0 having a sign is that ruby
treats

0.0000000000000000001 == 0.0 as true.

If I’ve counted correct that should be 1 * 10^-19. Doubles can
represent Numbers down to approx. 10^-300.

1e-300 == 0.0 is false in ruby, as it should be.

There must be a bug in the interpreter, where it reads Float literals.

Hi,

Elias Athanasopoulos wrote:

Yes, but how can you distinguish them inside your code? Since
0.0 == -0.0, 0.0 === -0.0 and even 0.0.eql?(-0.0) is true.

Check the XSDFloat implementation in xsd/datatypes.rb in Ruby > 1.8.1, or
http://www.ruby-lang.org/cgi-bin/cvsweb.cgi/lib/soap4r/lib/xsd/datatypes.rb?rev=1.13
I hope this helps.

Regards,
// NaHi

1/0.0 > 0
==>true
1/-0.0 > 0
==>false

0.0 and -0.0 are considered equal. They only retain the sign for the
case where you need to check it. If you divide 1 by a suspicious zero,
you’ll get either Infinity or -Infinity. Or, if you need to check a
lot:

class Numeric
  def negative?
    zero? ? (1.0 / self < 0) : (self < 0)
  end
end
==>nil
-0.0.negative?
==>true
0.0.negative?
==>false

cheers,
–Mark

···

On May 6, 2004, at 12:17 AM, Elias Athanasopoulos wrote:

On Thu, May 06, 2004 at 04:28:36AM +0900, Mark Hubbart wrote:

On May 5, 2004, at 10:33 AM, Elias Athanasopoulos wrote:

Hello!

I don’t know if this is intentional, but is there a reason
that 0.0 holds the sign?

IEEE 754 (floating point arithmetic standard) specifies negative zero
for floats. For more explanation, see
http://www.obtuse.com/resources/negative_zero.html

if a mathematical result too small for your hardware to compute is
returned, you can determine whether it was on the negative side, or the
positive side.

def really_complex_function(num)
  num / 1_000_000_000_000_000_000.0
end
==>nil
really_complex_function( 0.000_000_000_000_000_000_1 )
==>0.0
really_complex_function( -0.000_000_000_000_000_000_1 )
==>-0.0

Yes, but how can you distinguish them inside your code? Since
0.0 == -0.0, 0.0 === -0.0 and even 0.0.eql?(-0.0) is true.

I think this might be a bug in the C library, not Ruby. Don’t take this for
granted, however.

I think this because something similar seems to occur in Perl, too.

I’ve written a JAPH (http://www.perlmonks.com/index.pl?node_id=330784).
I’ve checked that it works on machines of either endianness (linux-ix86
and sun4-solaris-sparc). However, I’ve got reports saying that it won’t
work on other machines. One report, “Re: Fun with duff's device and
AUTOLOAD”, looks especially as if the last few digits of the floating
point literals in the code were ignored. That last report was from a
winnt machine, but it’s also said that it doesn’t work on Solaris.

Please tell me on what platform and OS you got the above results.

ambrus

···

On Thu, May 06, 2004 at 07:13:57AM +0900, Tobias Peters wrote:

What I find much more problematic than 0.0 having a sign is that ruby treats

0.0000000000000000001 == 0.0 as true.

If I’ve counted correct that should be 1 * 10^-19. Doubles can represent
Numbers down to approx. 10^-300.

1e-300 == 0.0 is false in ruby, as it should be.

There must be a bug in the interpreter, where it reads Float literals.

Tobias

Ok, got it! Very nice. :slight_smile: Thanx.

Regards,

···

On Thu, May 06, 2004 at 05:52:46PM +0900, Mark Hubbart wrote:

class Numeric
  def negative?
    zero? ? (1.0 / self < 0) : (self < 0)
  end
end
==>nil
-0.0.negative?
==>true
0.0.negative?
==>false



Hi,

···

In message “Re: Bug in Float literal (1e-19 == 0) Re: 0.0 sign” on 04/05/06, “Zsban Ambrus” ambrus@math.bme.hu writes:

0.0000000000000000001 == 0.0 as true.

I think this might be a bug in the C library, not ruby. Don’t take this for
sure however.

It’s a bug in strtod(), but Ruby uses its own strtod() implementation.
H.Yamamoto submitted the fix in [ruby-dev:23465].

						matz.
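(With the fixed strtod(), a long decimal literal and its scientific-notation
equivalent parse to the same nonzero double. A quick sanity check for the
bug discussed in this thread, assuming a post-fix interpreter:)

```ruby
# On a fixed interpreter, both spellings of 10^-19 parse to the same
# double, and neither collapses to zero.
raise "literal bug present" unless 0.0000000000000000001 == 1e-19
raise "literal bug present" if 1e-19 == 0.0
```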