I see your logic -- you're looking at how the language runtime is
implemented (and perhaps optimized) and then back-inferring how the
high-level language "should" operate. That can sometimes be a useful
technique. However, I think it's worth keeping a separation between
those issues -- implementation details versus the design of the
language itself -- even though they're intimately related. Ideally
we'd like the design to influence the implementation far more than
the other way around.
Are you familiar with the idea of a closure? The idea is that when a
code block is defined, it *somehow* captures the variables in scope at
that time. I emphasize *somehow* because that's an implementation
detail. Take a look at the code below and you'll see that a variable
defined in a method can *somehow* live on even after we've returned
from the method.
···
On May 23, 1:26 am, cbare <christopherb...@gmail.com> wrote:
Thanks Eric, that does help. But, I'm still a little puzzled.
Take this blob of Java:
void myMethod() {
  int a = 1;
  for (int i = 0; i < 10; i++) {
    int b = 2;
    System.out.println(a + b + i);
  }
}
If I understand correctly, even though b is defined inside the loop, a
and b are both stored in the same stack frame when this method is
compiled (barring optimization). At least, that's the way we did it in
compilers 101. The compiler prevents access to b outside the loop, but
the 4 bytes for b remain allocated on the stack during the whole
execution of the method.
Anyway, the execution of myMethod causes the allocation and cleanup of
one stack frame. Not two, and certainly not one plus one for each
execution of the loop. So, my guess is that a code block maintains
that behavior.
====
def make_counter
  count = 0
  p1 = lambda { count }       # peek
  p2 = lambda { count += 1 }  # increment
  p3 = lambda { count -= 1 }  # decrement
  [p1, p2, p3]                # return array of 3 Procs
end

counter1 = make_counter       # contains 3 Procs

# will the calls to Procs work?
# what value of count will they use?
3.times do counter1[1].call end
8.times do counter1[2].call end
puts counter1[0].call

# can we make another counter?
counter2 = make_counter

# will this counter's count start again at 0?
12.times do counter2[1].call end
puts counter2[0].call

# does counter1 have/keep its own value of count?
puts counter1[0].call
====
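In case you want to check your reasoning against the questions in
those comments, here's what the calls work out to if you run the code
(reusing the make_counter definition above; the values just come from
tracing the increments and decrements by hand):

  counter1 = make_counter
  3.times { counter1[1].call }    # increment three times
  8.times { counter1[2].call }    # decrement eight times
  puts counter1[0].call           # prints -5

  counter2 = make_counter         # a brand-new count, starting at 0
  12.times { counter2[1].call }
  puts counter2[0].call           # prints 12

  puts counter1[0].call           # still -5 -- counter1 keeps its own count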
Depending on your past experience, you might find the code above to be
"mind-blowing". Clearly Ruby is not implemented in terms of stack
frames as you described them. If you want to learn more about this, I
recommend _Structure and Interpretation of Computer Programs_, a book
that people seem to either love or hate (on Amazon it has 85 5-star
reviews, 53 1-star reviews, and just 19 combined 2-, 3-, and 4-star
reviews).
I'm still wondering why using the return keyword is different from
just letting the block's last expression be its value, and what a
"Local Jump" is. Maybe I'm perseverating on this point, but I'm
curious about how Ruby works under the hood.
As for *why*, that's a language design issue. I can only presume that
Matz decided that a "return" inside a block should work at the level
of the enclosing method, not the block, because he thought that would
be the most useful way for the language to operate.
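Here's a quick sketch of what that decision means in practice (the
method name is just made up for illustration):

  def first_multiple_of_three(numbers)
    numbers.each do |n|
      # "return" here returns from first_multiple_of_three itself,
      # not merely from the block passed to each
      return n if n % 3 == 0
    end
    nil
  end

  puts first_multiple_of_three([4, 7, 9, 12])   # prints 9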
As for what a Local Jump is, I don't know other than to say that I've
only encountered the term through LocalJumpError -- the exception that
gets raised when you try to return out of a block when its original
context is no longer valid. Maybe someone else will have deeper
insight.
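You can see that exception for yourself with something like the
following (a minimal sketch of the "original context no longer valid"
case; the names are made up):

  def make_block
    # Proc.new captures make_block as the context a "return" would
    # jump out of
    Proc.new { return "returning from make_block" }
  end

  blk = make_block
  blk.call   # raises LocalJumpError -- make_block has already returned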
Eric