Wes Gamble wrote:
Today I discovered the difference in the meaning of the / (arithmetic
divide) method depending on whether its caller is an integer or a float.
Not exactly. The result is expressed in the type of the more precise of the
two operands.
I discovered that I had to cast the calling number explicitly to a float
and it seemed totally non-intuitive to me.
Well, that depends on what you want as a result. You aren't required to cast
one of the operands as a float. Maybe you want the default, integer result,
as in the example below.
I often break a time value into hours, minutes and seconds. I do this by
successive application of integer modulo and divide. If the division
operator were to promote my values, the algorithm would fail.
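That decomposition can be sketched like this (the variable names and sample
value are mine, not from the original post):

```ruby
# Break a total number of seconds into hours, minutes and seconds
# using integer division and modulo. Integer / Integer stays an
# Integer, which is exactly what this algorithm relies on.
total = 7385                   # 2 hours, 3 minutes, 5 seconds

hours   = total / 3600         # => 2  (integer division truncates)
minutes = (total % 3600) / 60  # => 3
seconds = total % 60           # => 5
```

If / promoted its integer operands to floats, `total / 3600` would return
2.051..., and the truncation the algorithm depends on would be lost.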
A couple of points:
1) I just discovered that this behavior exists in Java and C as well.
Which, frankly, I didn't even realize. Sigh.
There are excellent reasons for this behavior. When you divide two numbers,
the result should have the precision of the more precise type of either of
the two operands.
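A quick illustration of that rule:

```ruby
7 / 2    # => 3    both operands are Integers, so the result is an Integer
7.0 / 2  # => 3.5  one operand is a Float, so the result is a Float
7 / 2.0  # => 3.5  same rule regardless of which side is the Float
```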
2) I've read some other posts and people have mentioned the notion of
type purity with respect to this. I don't see why that's a problem.
It's not about type purity. A type-purity argument isn't supported by the
facts: the less precise type is discarded in favor of the more precise type.
When one divides two integers, one isn't asking for one of the integer
arguments to be cast to a float; one is asking for the result of
dividing two integers. Since when is it not allowable for a method to
take arguments of a given type and return a result of a different type?
It is not a question of what is allowed; it is a question of what one would
expect without explicit adjustment.
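As an aside, Ruby does provide a method with exactly the shape described
above: Integer#fdiv takes integer operands and returns a Float, no casting
required (a small sketch):

```ruby
7.fdiv(2)  # => 3.5  float result from two integer arguments
7 / 2      # => 3    the plain operator keeps the integer type
```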
A couple of questions:
1) I see that there is a divmod method in Ruby that returns the
quotient and remainder of a division. Why is the overridden / operator
necessary when there is already a way to get int1 DIV int2 by using
divmod?
Because people expect to be able to use a division operator for division.
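For comparison, divmod returns both results at once, while the operators
return them individually (a small sketch):

```ruby
quotient, remainder = 17.divmod(5)  # => [3, 2]

# the same values, one at a time, via the operators:
17 / 5  # => 3
17 % 5  # => 2
```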
2) Is the reason for the behavior of "/" that this is the behavior of
"/" in C? Why is it that way in C?
This behavior is common to all modern languages. It exists because the
languages that support it are designed to act on a set of common-sense
expectations about numeric types.
BTW, the behavior goes beyond integers and floats. It also applies to
extended-precision numerical types.
require 'bigdecimal'
x = BigDecimal("3.0", 100)  # 100 significant digits requested
y = x / 7                   # Integer operand, BigDecimal result
One of the operands is an integer, but the result is cast to the precision
of the more precise type. So much for type purity.