You're using the term "reference" in two different ways, though:
first, as a synonym for a literal constructor (the actual "ink on the
page", so to speak), and second as the thing bound to an object.
Perhaps that was just sloppiness on my part. I think of a 'reference'
as the bit pattern that some particular Ruby implementation uses to
identify an object. The point I was trying to communicate is that I
view literals such as
1
42
true
false
:blue
as syntactical constructs that map directly to the particular bit
pattern that the Ruby interpreter uses to reference the associated
objects. I think of them as 'literal references' as opposed to
'literal objects' or 'literal values'. I'm not claiming that I'm
using any sort of accepted terminology. I'm just trying to
communicate how I think about these things.
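To make that concrete: in MRI 1.8 (the interpreter current as of this
thread) the mapping is visible through object_id, since the id of an
immediate value essentially *is* its bit pattern:

1.object_id                          #=> 3 (a Fixnum n encodes as (n << 1) | 1)
42.object_id                         #=> 85
true.object_id                       #=> 2
false.object_id                      #=> 0
nil.object_id                        #=> 4
:blue.object_id == :blue.object_id   #=> true, stable for the process lifetime

The exact numbers are, of course, an implementation detail of MRI;
other implementations are free to pick different encodings.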
But consider something like:
a = "blue".to_sym
or
x = 10 + 10
The literals :blue and 20 never appear in the source, but a and x end
up in exactly the same state they'd be in if I'd done:
a = :blue
x = 20
So my variables must be bound to something other than a reference, if
"reference" means those literals.
Well, I guess my point is that the Ruby interpreter, as part of its
parsing job, must convert :blue as found in the source code to
whatever internal representation of a symbol reference it is going to
use, and similarly for 20 (and nil, false, true).
The fact that the conversion can be done at *parse* time rather than
later on at execution time points up how :blue and 20 are distinctly
different from 'hello' and [1,2,4], which can only be converted to
object references at runtime, since evaluating them means
instantiating a new object every time the code is executed.
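object_id makes the distinction easy to see:

2.times { puts :blue.object_id }     # same id both times
2.times { puts 20.object_id }        # same id both times
2.times { puts "hello".object_id }   # two different ids: a fresh String each pass
2.times { puts [1,2,4].object_id }   # two different ids as well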
I guess I'd say that:
:blue
"blue".to_sym
10 + 10
20
etc. are expressions (rather than references), and that every
expression evaluates to an object.
And I would say that they are all expressions that evaluate to object
references. Are you saying that those particular expressions are
special because the 'standard' behavior is to return an object instance
that happens to be encoded right in the value of the reference
vs. some indirection into the heap? That is an implementation detail.
Surely I could write a version of the Ruby interpreter that actually
allocated an object from the heap for Fixnums. It would be slow and
it would have to ensure that there was only one 1 and one 2 and so
on, but it could still implement the same language semantics.
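Something like this toy sketch, say (HeapFixnum and intern are names
I'm making up here, not anything in a real interpreter):

class HeapFixnum
  @@table = {}              # intern table: value => the one instance
  def self.intern(n)
    @@table[n] ||= new(n)   # heap-allocate at most once per value
  end
  attr_reader :value
  def initialize(value)
    @value = value
  end
  private_class_method :new # all creation goes through intern
end

a = HeapFixnum.intern(1)
b = HeapFixnum.intern(1)
a.equal?(b)                 #=> true -- there is still exactly one 1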
Then there's the question of what
happens with assignment. I'm not sure how "canonical" the notion of
the universal reference is (not just as a matter of implementation) --
but it probably doesn't matter too much either way as long as
(im)mutability and uniqueness, which are really object properties, are
clear.
I agree with the uniqueness point but I'm not so sure about immutability.
Fixnums can have instance variables...
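At least in the 1.8 interpreter this works (later Rubies treat
integers as frozen and raise an error instead):

x = 1
x.instance_variable_set(:@color, "red")
1.instance_variable_get(:@color)   #=> "red"

And since every 1 is the same object, the new state is visible
everywhere at once.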
My only concern is cases where the fact that something is an
immediate value might explain behavior that would otherwise seem
unclear or pointless (like the ++ operator case).
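To spell out the ++ case: x++ would have to either mutate the unique
object 1 in place or quietly rebind x, so Ruby offers only the
explicit rebinding:

x = 1
x += 1   # sugar for x = x + 1: x is rebound to 2, the object 1 is untouched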
But there's always all sorts of hand waving about assignment
semantics that includes different rules for 'regular' objects and for
'value' objects (nil, true, false, Fixnum, Symbol). If you view 1 as a value
(i.e. an object) then you have to have those different rules to explain
how everything works. If you view 1 as a reference to an object (even
if the object is a virtual object whose creation is 'optimized' away
via some creative bit-twiddling) then you don't have to have all those
different rules. I like that.
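It also means assignment is one rule everywhere -- copy the
reference, never the object:

s = "hello"
t = s
t.equal?(s)   #=> true: t got a copy of the reference, not of the String
n = 10 + 10
m = n
m.equal?(n)   #=> true: exactly the same rule, no special case needed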