If a mathematical result too small for your hardware to represent is
returned, you can determine whether it was on the negative side or the
positive side.
0.000_000_000_000_000_01
==>1e-17
0.000_000_000_000_000_001
==>0.0
I’m not sure it’s a bug in float parsing, since you can go:
1e-18
==>1e-18
1e-300
==>1e-300
…which would be preferable to typing in all those zeros anyway. But I
would be curious to know why they decided to truncate long float
literals?
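For what it's worth, a minimal check of the literal parser, assuming a modern interpreter where this truncation has been fixed (both the long literal and the string-conversion path should agree with scientific notation):

```ruby
# Compare a long float literal against the equivalent scientific
# notation, and against parsing the same digits from a string.
small = 0.000_000_000_000_000_001          # 1e-18 written out longhand
puts small == 1e-18                         # true on a correct parser
puts Float("0.000000000000000001") == 1e-18 # string parsing path agrees
```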
–Mark
···
On May 5, 2004, at 3:13 PM, Tobias Peters wrote:
What I find much more problematic than 0.0 having a sign is that ruby
treats 0.0000000000000000001 == 0.0 as true.
If I’ve counted correctly, that should be 1 * 10^-19. Doubles can
represent numbers down to approx. 10^-300.
1e-300 == 0.0 is false in ruby, as it should be.
There must be a bug in the interpreter where it reads Float literals.
0.0 and -0.0 are considered equal. They only retain the sign for the
case where you need to check it. If you divide 1 by a suspicious zero,
you’ll get either Infinity or -Infinity. Or, if you need to check a
lot:
class Numeric
  def negative?
    zero? ? (1.0 / self < 0) : (self < 0)
  end
end
==>nil
-0.0.negative?
==>true
0.0.negative?
==>false
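The division check described above can also be seen directly, without defining a method:

```ruby
# The sign of a floating-point zero survives arithmetic, so dividing
# into it reveals which side the underflow came from.
puts 1.0 /  0.0   #=> Infinity
puts 1.0 / -0.0   #=> -Infinity
puts 0.0 == -0.0  #=> true  (IEEE 754 comparison ignores the sign)
```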
cheers,
–Mark
···
On May 6, 2004, at 12:17 AM, Elias Athanasopoulos wrote:
On Thu, May 06, 2004 at 04:28:36AM +0900, Mark Hubbart wrote:
On May 5, 2004, at 10:33 AM, Elias Athanasopoulos wrote:
Hello!
I don’t know if this is intentional, but is there a reason
that 0.0 holds the sign?
If a mathematical result too small for your hardware to represent is
returned, you can determine whether it was on the negative side or the
positive side.
I think this might be a bug in the C library, not ruby, though don’t
take that as certain. I suspect this because something similar seems to
occur in Perl too.
I’ve written a japh “http://www.perlmonks.com/index.pl?node_id=330784”.
I’ve checked that it works on machines of either endianness (linux-ix86
and sun4-solaris-sparc). However, I’ve received reports that it doesn’t
work on some machines. One report,
“Re: Fun with duff's device and AUTOLOAD”, looks especially as if
the last few digits of the floating point literals in the code were
ignored. That report came from a winnt machine, but it’s also said not
to work on solaris.
Please tell me on what platform and OS you got the above results.
ambrus
···
On Thu, May 06, 2004 at 07:13:57AM +0900, Tobias Peters wrote:
What I find much more problematic than 0.0 having a sign is that ruby
treats 0.0000000000000000001 == 0.0 as true.
If I’ve counted correctly, that should be 1 * 10^-19. Doubles can
represent numbers down to approx. 10^-300.
1e-300 == 0.0 is false in ruby, as it should be.
There must be a bug in the interpreter where it reads Float literals.
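As a reference point for the “approx. 10^-300” figure, a quick sketch using Ruby’s standard Float constants (the smallest normal double is Float::MIN, about 2.2e-308, and subnormals reach further down still):

```ruby
# Rough bounds of IEEE 754 doubles, as seen from Ruby.
puts Float::MIN          # smallest normal double, ~2.2e-308
puts Float::MIN / 2      # still nonzero: a subnormal value
puts 1e-300 == 0.0       # false, as the post says
puts 1e-19  == 0.0       # also false when the literal parses correctly
```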