Ruby, actors, continuations, Kernel#callcc

reading carl hewitt's seminal paper on the actor model:
http://www.lcs.mit.edu/publications/specpub.php?id=762
my current impression is that actors are basically pure-OO objects
using continuations (coroutines?) instead of stack-frame
methods/subroutines. this way objects only require "promises" (in the
form of the passing of context) rather than the final value (as with
method stack-frames). (i think this is important towards breaking free
from von neumann architecture, a la backus:
http://delivery.acm.org/10.1145/360000/359579/p613-backus.pdf?key1=359579&key2=7975779801&coll=portal&dl=ACM&CFID=11111111&CFTOKEN=2222222

but i still can't quite seem to grok continuations, at least in terms
of interpreting it for ruby, w/r/t closures and Kernel#callcc. this
after reading dan "parrot" sugalski's weblog
[http://www.sidhe.org/~dan/blog/], jim weirich's email
[http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/78288
], and rubygarden/c2.com/wikipedia articles on the subject:
http://www.rubygarden.org/ruby?action=history&id=Continuations
http://rubygarden.org/ruby?ContinuationExplanation
http://www.ruby-doc.org/docs/ProgrammingRuby/html/ref_c_continuation.html
http://c2.com/cgi/wiki?ContinuationExplanation
http://en.wikipedia.org/wiki/Subroutine
http://en.wikipedia.org/wiki/Coroutine

any further help is much appreciated.

thanks!

-z

zuzu wrote:

but i still can't quite seem to grok continuations, at least in terms
of interpreting it for ruby, w/r/t closures and Kernel#callcc. this
after reading dan "parrot" sugalski's weblog

I find that continuations are easy to work with once you've got the confusing stuff wrapped away. I've attached Continuation.create, which lets you do things like this:

cc, counter = Continuation.create(10)
puts counter
cc.call(counter - 1) if counter > 0
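The attached simplecc.rb isn't shown inline, so as a stand-in here is the same countdown written against bare Kernel#callcc (my own sketch, not Florian's attachment; note the heap-allocated Array for the loop state, since calling a continuation rewinds control flow but not heap objects):

```ruby
require 'continuation' # Kernel#callcc lives in this stdlib ext on Ruby 1.9+

counters = (0..10).to_a.reverse   # [10, 9, ..., 0] -- heap state
printed  = []
cc = callcc { |c| c }             # capture; cc.call(x) resumes here, returning x
printed << counters.shift         # Array mutation survives the jump
cc.call(cc) unless counters.empty?
p printed                         # [10, 9, ..., 1, 0]
```

Passing the continuation to itself (`cc.call(cc)`) keeps `cc` bound to the continuation on every re-entry, which is the same trick the official Continuation docs use.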

Or a custom version of Enumerable#inject: (Won't work with Enumerables containing nil values and is way less efficient than the built-in one.)

module Enumerable
   def cc_inject(initial_state = nil, &block)
     obj = self.to_a.clone
     initial_state = obj.shift if initial_state.nil?
     cc, state, item = Continuation.create(initial_state, obj.shift)
     unless item.nil?
       state = yield(state, item)
       cc.call(state, obj.shift)
     end
     return state
   end
end

Can't say anything about the whole actor thing, but I hope I was at least of some help. :)

Regards,
Florian Gross

simplecc.rb (247 Bytes)

zuzu <sean.zuzu@gmail.com> writes:

reading carl hewitt's seminal paper on the actor model:
http://www.lcs.mit.edu/publications/specpub.php?id=762
my current impression is that actors are basically pure-OO objects
using continuations (coroutines?) instead of stack-frame
methods/subroutines. this way objects only require "promises" (in the
form of the passing of context) rather than the final value (as with
method stack-frames). (i think this is important towards breaking free
from von neumann architecture, a la backus:
http://delivery.acm.org/10.1145/360000/359579/p613-backus.pdf?key1=359579&key2=7975779801&coll=portal&dl=ACM&CFID=11111111&CFTOKEN=2222222

but i still can't quite seem to grok continuations, at least in terms
of interpreting it for ruby, w/r/t closures and Kernel#callcc. this
after reading dan "parrot" sugalski's weblog
[http://www.sidhe.org/~dan/blog/], jim weirich's email
[http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/78288
], and rubygarden/c2.com/wikipedia articles on the subject:
http://www.rubygarden.org/ruby?action=history&id=Continuations
http://rubygarden.org/ruby?ContinuationExplanation
http://www.ruby-doc.org/docs/ProgrammingRuby/html/ref_c_continuation.html
http://c2.com/cgi/wiki?ContinuationExplanation
http://en.wikipedia.org/wiki/Subroutine
http://en.wikipedia.org/wiki/Coroutine

any further help is much appreciated.

For me, there wasn't really an a-ha moment, at which point I immediately
grasped the implications of CALL/CC. Rather, I kept playing around with
it, and after a while I just found myself understanding.

Nevertheless, if you want more stuff to read, you can try
http://mikael.phubuh.org/Media/Writing/Continuations/. I'm not much of
an educator, but at least it uses Ruby.

  mikael

Hi,

Here is how I think continuations and actor theory work together. This is
my interpretation, and it is possible I didn't get everything right, but I
hope it may help you.

I think the main problem in understanding continuations is that normally a
continuation is regarded as a function. However, I think it is the other
way around: a function is a special form of a continuation.

Let me try to explain a little.
Instead of looking at a program as a sequence of statements, you could
consider a program as many independent points (or actors), each of which
takes a value, does something with it, and passes it to another point. You
could then call each such point a continuation-point. (That is what I call
it; I don't know if there is an official name.) In
actor theory, this point would be an actor.

To be useful, the
continuation-point must also be aware of the current environment, and the
easiest way is to pass the environment from each point to another. In the
actor-theory this environment could be just another actor, which could
respond to queries, for example to get a value.

A function would be a special kind of continuation-point, one that takes,
in addition to the other values, a continuation. This continuation will be
saved in the environment, and when the function has finished doing what it
has to do, it will send its return value to the saved continuation.

How do closures and call/cc fit in this model? You could say a closure is
a function and a saved environment. To call a closure, you get the saved
environment, and call the function with this environment, and any
parameters. As described above, the function is a continuation-point that
accepts a continuation, and calls this continuation with the final result.

However you could also save the environment for a continuation-point,
basically creating a closure over the continuation-point. This closure is
what is normally called a continuation.

call/cc just captures the next continuation (with the environment, which
includes any functions to return to), wraps it in an object, and calls the
given block with that object. The continuation is actually that part of
the program that will receive the result from call/cc. That result can be
either the value of the block, or the value passed to the
continuation-object (using "cont.call(value)").
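That two-way flow of results can be seen directly in Ruby (a small sketch of mine; on Ruby 1.9+ callcc needs `require 'continuation'`):

```ruby
require 'continuation' # Kernel#callcc moved to this stdlib ext in Ruby 1.9

results = []
cont = nil
v = callcc do |cc|
  cont = cc       # keep the continuation for later
  :block_value    # path 1: callcc returns the block's value
end
results << v
# path 2: invoking the continuation makes callcc return a second time,
# this time with the value we pass to cont.call
cont.call(:sent_value) if results.size < 2
p results  # [:block_value, :sent_value]
```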

I think the nice thing about actor theory is that it allows lambda
calculus to be implemented using parallel objects. You could build a chip,
where each continuation-point can be programmed in the chip, and would act
on its own. This chip could then perform all the computations in parallel,
since each continuation-point can act on its own. This would be
radically different from the current CPU design, where there is only a
single point of control flow. A special sort of FPGA could be used for
this kind of chip, since they allow reprogramming on the fly.

Because lambda calculus is very basic to programming, it could be possible
to transform a traditional program into a form that makes maximum use of
parallelism in such a chip, and therefore be very fast. Also, all processes
that fit into memory would run effectively in parallel (not simulated
using task-switching), and wouldn't slow down any other process. It would
also support dynamic languages better, so that a ruby program wouldn't be
any slower than a C/C++ program.

I don't know if the design of such a chip would be possible, but it would
be a radically different approach to computing.

Regards,
KB

···

On Sat, 21 Aug 2004 07:24:31 +0900, zuzu wrote:

reading carl hewitt's seminal paper on the actor model:
http://www.lcs.mit.edu/publications/specpub.php?id=762 my current
impression is that actors are basically pure-OO objects using
continuations (coroutines?) instead of stack-frame methods/subroutines.
this way objects only require "promises" (in the form of the passing of
context) rather than the final value (as with method stack-frames). (i
think this is important towards breaking free from von neumann
architecture, a la backus:
Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs: Communications of the ACM: Vol 21, No 8

but i still can't quite seem to grok continuations, at least in terms of
interpreting it for ruby, w/r/t closures and Kernel#callcc. this after
reading dan "parrot" sugalski's weblog
[http://www.sidhe.org/~dan/blog/], jim weirich's email
[http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/78288 ],
and rubygarden/c2.com/wikipedia articles on the subject:
http://www.rubygarden.org/ruby?action=history&id=Continuations
http://rubygarden.org/ruby?ContinuationExplanation
http://www.ruby-doc.org/docs/ProgrammingRuby/html/ref_c_continuation.html
http://c2.com/cgi/wiki?ContinuationExplanation
http://en.wikipedia.org/wiki/Subroutine
http://en.wikipedia.org/wiki/Coroutine

any further help is much appreciated.

thanks!

-z

Not sure if you've done this already, but what really helped me grok
continuations was reading up on continuation passing style.
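Continuation-passing style is easy to demonstrate in Ruby itself (a sketch of the general idea, not anyone's library): every function takes an extra argument, the continuation, and hands its result to it instead of returning.

```ruby
# In CPS, "returning" means calling the continuation k with the result.
mul_cps = ->(a, b, k) { k.call(a * b) }

# Factorial in CPS: the recursion builds a chain of continuations
# instead of a chain of stack frames waiting to return.
fact_cps = nil
fact_cps = ->(n, k) {
  if n.zero?
    k.call(1)
  else
    fact_cps.call(n - 1, ->(r) { mul_cps.call(n, r, k) })
  end
}

result = nil
fact_cps.call(5, ->(r) { result = r })
p result  # 120
```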

martin

···

zuzu <sean.zuzu@gmail.com> wrote:

but i still can't quite seem to grok continuations, at least in terms
of interpreting it for ruby, w/r/t closures and Kernel#callcc. this

Mikael Brockman <mikael@phubuh.org> writes:

zuzu <sean.zuzu@gmail.com> writes:
> but i still can't quite seem to grok continuations, at least in terms
> of interpreting it for ruby, w/r/t closures and Kernel#callcc.

For me, there wasn't really an a-ha moment, at which point I immediately
grasped the implications of CALL/CC. Rather, I kept playing around with
it, and after a while I just found myself understanding.

Nevertheless, if you want more stuff to read, you can try
http://mikael.phubuh.org/Media/Writing/Continuations/. I'm not much of
an educator, but at least it uses Ruby.

By the way, that code uses a lot of catch/throw, which is a typical use
of continuations in languages that lack exception systems (e.g.,
Scheme). Maybe this implementation will help you understand what's
going on:

require 'continuation' # Kernel#callcc needs this on Ruby 1.9+

class Catcher
  def initialize
    @futures = {}
  end

  def catch(id)
    callcc do |future|
      @futures[id] = future  # save the continuation for later throws
      yield
    end
  end

  def throw(id)
    @futures[id].call  # jump back to just after the callcc in #catch
  end
end

$global_catcher = Catcher.new

def my_catch (id, &block)
  $global_catcher.catch id, &block
end

def my_throw (id)
  $global_catcher.throw id
end

You can use it just like regular catch/throw, I think:

> my_catch :foo do
> puts "hello"
> my_throw :foo
> puts "world"
> end
hello
=> nil

An interesting bug/feature is that it neglects to remove the
continuations after the block is done, so you can make an obfuscated
infinite loop like this:

my_catch :foo do
  puts "foo"
end
puts "bar"
my_throw :foo

It'll print foo followed by an infinite number of bars. So in fact,
the Catcher class actually implements something like a Common Lisp
TAGBODY or a C label/goto, except you can't go to labels that you
haven't declared yet.
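A possible fix for that bug/feature (a sketch of my own, not anything from the thread) is to discard each saved continuation after its first use, making throws one-shot:

```ruby
require 'continuation' # Kernel#callcc needs this on Ruby 1.9+

# One-shot variant: each continuation can be thrown to at most once,
# so the obfuscated infinite loop above becomes a no-op instead.
class OneShotCatcher
  def initialize
    @futures = {}
  end

  def catch(id)
    callcc do |future|
      @futures[id] = future
      yield
    end
  end

  def throw(id)
    future = @futures.delete(id)  # remove it so it can't be re-entered
    future.call if future
  end
end
```

Deleting the entry before jumping is safe because the hash lives on the heap, so the deletion survives the continuation's stack rewind.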

  mikael

> reading carl hewitt's seminal paper on the actor model:
> http://www.lcs.mit.edu/publications/specpub.php?id=762 my current
> impression is that actors are basically pure-OO objects using
> continuations (coroutines?) instead of stack-frame methods/subroutines.
> this way objects only require "promises" (in the form of the passing of
> context) rather than the final value (as with method stack-frames). (i
> think this is important towards breaking free from von neumann
> architecture, a la backus:
> http://delivery.acm.org/10.1145/360000/359579/p613-backus.pdf?key1=359579&key2=7975779801&coll=portal&dl=ACM&CFID=11111111&CFTOKEN=2222222
>
> but i still can't quite seem to grok continuations, at least in terms of
> interpreting it for ruby, w/r/t closures and Kernel#callcc. this after
> reading dan "parrot" sugalski's weblog
> [http://www.sidhe.org/~dan/blog/], jim weirich's email
> [http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/78288 ],
> and rubygarden/c2.com/wikipedia articles on the subject:
> http://www.rubygarden.org/ruby?action=history&id=Continuations
> http://rubygarden.org/ruby?ContinuationExplanation
> http://www.ruby-doc.org/docs/ProgrammingRuby/html/ref_c_continuation.html
> http://c2.com/cgi/wiki?ContinuationExplanation
> http://en.wikipedia.org/wiki/Subroutine
> http://en.wikipedia.org/wiki/Coroutine
>
>
> any further help is much appreciated.
>
> thanks!
>
> -z

Hi,

hey. let me see how well i follow what you say here...

Here is how I think continuations and actor theory work together. This is
my interpretation, and it is possible I didn't get everything right, but I
hope it may help you.

I think the main problem in understanding continuations is that normally a
continuation is regarded as a function. However, I think it is the other
way around: a function is a special form of a continuation.

yes, i've heard it described this way before; seems accurate.

Let me try to explain a little.
Instead of looking at a program as a sequence of statements,

aka "procedural programming"...

you could
consider a program as many independent points (or actors), each of which
takes a value, does something with it, and passes it to another point. You
could then call each such point a continuation-point. (That is what I call
it; I don't know if there is an official name.) In
actor theory, this point would be an actor.

or i believe one could say "object". as described both by carl hewitt
[Carl Hewitt, Peter Bishop, Richard Steiger, "A Universal Modular
Actor Formalism for Artificial Intelligence", Proceedings of the 1973
International Joint Conference on Artificial Intelligence, pp.
235-246.] and mark miller
[http://www.erights.org/elib/capability/ode/ode-submission.html] (and
in a previous thread here; Re: Functional Ruby):

Objects == Lambda Abstraction + Message Dispatch + Local Side Effects

however, objects define their inputs and outputs with _methods_
(functions/subroutines/whatever you wish to call them). so from this
perspective the "continuation points" seem more concerned with lexical
scope, yes? or "associates" as carl hewitt seemingly describes this
phenomenon.

To be useful, the
continuation-point must also be aware of the current environment,

(context / scope)

and the
easiest way is to pass the environment from each point to another. In the
actor-theory this environment could be just another actor, which could
respond to queries, for example to get a value.

A function would be a special kind of continuation-point, one that takes,
in addition to the other values, a continuation. This continuation will be
saved in the environment, and when the function has finished doing what it
has to do, it will send its return value to the saved continuation.

after re-reading that paragraph twice, ok, i think... how is this not
a COroutine rather than a SUBroutine? that it returns a _value_
rather than simply another context/environment?

How do closures and call/cc fit in this model? You could say a closure is
a function and a saved environment.

ok, so how is this not just aka a continuation?
closures retain their lexical scope, and otherwise an anonymous/lambda
function/subroutine, no?

To call a closure, you get the saved
environment, and call the function with this environment, and any
parameters. As described above, the function is a continuation-point that
accepts a continuation, and calls this continuation with the final result.

so, this is to say, subroutines/functions/methods and coroutines are
_composed_ of continuations? and a closure is composed of a function?
again, when the topic returns to a matter of "saved
environment/context/scope" i am confused by a perceived redundancy.

However you could also save the environment for a continuation-point,
basically creating a closure over the continuation-point. This closure is
what is normally called a continuation.

uh, right. a closure is a continuation; but by saying this i seem to
be muddling this "continuation-point" concept i think.

call/cc just captures the next continuation (with the environment, which
includes any functions to return to), wraps it in an object, and calls the
given block with that object.

for now i am thinking of this as not unlike the Proc object, as how
functions/blocks are defined, particularly without an object to "own"
them.

The continuation is actually that part of
the program that will receive the result from call/cc. That result can be
either the value of the block, or the value passed to the
continuation-object (using "cont.call(value)").

i'm still not so swift on understanding this either... for now it
feels like block/proc object design burp syntax. maybe that's totally
wrong.

I think the nice thing about actor theory is that it allows lambda
calculus to be implemented using parallel objects. You could build a chip,
where each continuation-point can be programmed in the chip, and would act
on its own. This chip could then perform all the computations in parallel,
since each continuation-point can act on its own. This would be
radically different from the current CPU design, where there is only a
single point of control flow. A special sort of FPGA could be used for
this kind of chip, since they allow reprogramming on the fly.

yes! think of http://www.starbridgesystems.com/
though in the short to mid-term i am thinking more about the PPC970
architecture.
but an FPGA solution would certainly reach much closer to escaping von
neumann architecture.

Because lambda calculus is very basic to programming, it could be possible
to transform a traditional program into a form that makes maximum use of
parallelism in such a chip, and therefore be very fast. Also, all processes
that fit into memory would run effectively in parallel (not simulated
using task-switching), and wouldn't slow down any other process. It would
also support dynamic languages better, so that a ruby program wouldn't be
any slower than a C/C++ program.

precisely! this has very important ramifications for my mid- to
long-term interests in cybernetics (norbert wiener) and systems theory
(which now seems to be popularly referred to as "complexity"). frankly,
much like stephen wolfram and mathematica, my current research on
bringing the actor model / flow-based programming to ruby is simply a tool
to do what i really want to do. like the graffiti at mit: "i would rather
write programs to help me write programs than write programs".

I don't know if the design of such a chip would be possible, but it would
be a radically different approach to computing.

i believe some work in this has already been done. some old, such as
"real-time actor systems" by henry baker, and some new such as the
aforementioned starbridgesystems. perhaps the hardware will seem more
relevant once the software for it actually exists.

Regards,
KB

peace,
-z

···

On Sat, 21 Aug 2004 10:26:08 +0900, Kristof Bastiaensen <kristof@vleeuwen.org> wrote:

On Sat, 21 Aug 2004 07:24:31 +0900, zuzu wrote:

In article <pan.2004.08.21.01.23.42.512887@vleeuwen.org>,

I think the nice thing about actor theory is that it allows lambda
calculus to be implemented using parallel objects. You could build a chip,
where each continuation-point can be programmed in the chip, and would act
on its own. This chip could the perform all the computations in parallel,
since each continuation point can act on it's own. This would be
radically different from the current cpu-design, where there is only a
single point of controlflow. A special sort of FPGA could be used for
this kind a chip, since they allow reprogramming on the fly.

Because lambda calculus is very basic to programming, it could be possible
to transform a traditional program into a form that makes maximum use of
parallelism in such a chip, and therefor be very fast. Also all processes
that would fit into memory would run effectively parallel (not simulated
using task-switching), and wouldn't slow down any other process. It would
also support dynamic languages better, so that a ruby program wouldn't be
any slower that a C/C++ program.

I don't know if the design of such a chip would be possible, but it would
be a radically different aproach to computing.

Sure, CPUs are not necessarily designed with this degree of parallelism
in mind; however, chip designs in general are usually modeled using HDLs,
which do allow for describing your design with all of the parallelism
that is available in hardware. Of course, this is hard to simulate on
our non-parallel computers, but usually the trick is to use something like
continuations/coroutines (that's what is done in RHDL, for example). So
I guess it seems like you're talking about a method for modeling the
parallelism that is already inherent in hardware.

BTW: Since you mentioned FPGAs (and yes, I think FPGAs represent the
future of high-performance computing, since they allow you to map
computationally intensive algorithms into hardware for maximum performance
[though we still need better tools for mapping the algorithms]), you
might be interested in the $99 Spartan III starter kit from Xilinx
(http://www.xilinx.com) - it includes a board with a 200K-gate FPGA and
software for programming it. I just got mine today and am looking
forward to playing with it (tinker toys for adults ;)). It's amazing
that you can get something like that now for $99. I recall working
on an ASIC back in '89 that only had 2K gates, and it wasn't
reprogrammable. Back then a 200K-gate ASIC would have been a very high-end
design done by a large company with tens of thousands (if not hundreds of
thousands) of dollars worth of workstations and design tools, and FPGAs
were only just becoming available with maybe 1K gates.

Phil

···

Kristof Bastiaensen <kristof@vleeuwen.org> wrote:

> reading carl hewitt's seminal paper on the actor model:
> http://www.lcs.mit.edu/publications/specpub.php?id=762 my current
> impression is that actors are basically pure-OO objects using
> continuations (coroutines?) instead of stack-frame
> methods/subroutines. this way objects only require "promises" (in the
> form of the passing of context) rather than the final value (as with
> method stack-frames). (i think this is important towards breaking free
> from von neumann architecture, a la backus:
> Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs: Communications of the ACM: Vol 21, No 8
>
> but i still can't quite seem to grok continuations, at least in terms
> of interpreting it for ruby, w/r/t closures and Kernel#callcc. this
> after reading dan "parrot" sugalski's weblog
> [http://www.sidhe.org/~dan/blog/], jim weirich's email
> [http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/78288 ],
> and rubygarden/c2.com/wikipedia articles on the subject:
> http://www.rubygarden.org/ruby?action=history&id=Continuations
> http://rubygarden.org/ruby?ContinuationExplanation
> http://www.ruby-doc.org/docs/ProgrammingRuby/html/ref_c_continuation.html
> http://c2.com/cgi/wiki?ContinuationExplanation
> http://en.wikipedia.org/wiki/Subroutine
> http://en.wikipedia.org/wiki/Coroutine
>
>
> any further help is much appreciated.
>
> thanks!
>
> -z

Hi,

hey. let me see how well i follow what you say here...

Here is how I think continuations and actor theory work together. This
is my interpretation, and it is possible I didn't get everything right,
but I hope it may help you.

I think the main problem in understanding continuations is that normally
a continuation is regarded as a function. However, I think it is the
other way around: a function is a special form of a continuation.

yes, i've heard it described this way before; seems accurate.

Let me try to explain a little.
Instead of looking at a program as a sequence of statements,

aka "procedural programming"...

you could
consider a program as many independent points (or actors), each of which
takes a value, does something with it, and passes it to another point.
You could then call each such point a continuation-point. (That is what I
call it; I don't know if there is an official name.) In
actor theory, this point would be an actor.

or i believe one could say "object". as described both by carl hewitt
[Carl Hewitt, Peter Bishop, Richard Steiger, "A Universal Modular Actor
Formalism for Artificial Intelligence", Proceedings of the 1973
International Joint Conference on Artificial Intelligence, pp. 235-246.]
and mark miller
[http://www.erights.org/elib/capability/ode/ode-submission.html] (and in a
previous thread here; Re: Functional Ruby):

Objects == Lambda Abstraction + Message Dispatch + Local Side Effects

however, objects define their inputs and outputs with _methods_
(functions/subroutines/whatever you wish to call them). so from this
perspective the "continuation points" seem more concerned with lexical
scope, yes? or "associates" as carl hewitt seemingly describes this
phenomenon.

Yes, that's right. A 'continuation point' would be a special kind
of object/actor, which receives an environment and a value. It
could also take more values, but in the pure lambda-calculus it can take
only one.

To be useful, the
continuation-point must also be aware of the current environment,

(context / scope)

and the
easiest way is to pass the environment from each point to another. In
the actor-theory this environment could be just another actor, which
could respond to queries, for example to get a value.

A function would be a special kind of continuation-point, one that takes,
in addition to the other values, a continuation. This continuation will
be saved in the environment, and when the function has finished doing
what it has to do, it will send its return value to the saved
continuation.

after re-reading that paragraph twice, ok, i think... how is this not a
COroutine rather than a SUBroutine? that it returns a _value_ rather than
simply another context/environment?

If I understand correctly, a coroutine restores its environment when it
resumes control. A function doesn't do that. It doesn't have an environment.
It accepts the environment that the caller gives it. This may be the
current environment, or a saved one in the case of a closure. Also note
that it doesn't really "return" a value. It passes the value to the given
continuation.
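One way to see that environment-restoring behaviour concretely is with a coroutine in modern Ruby (Fiber, which did not exist when this thread was written; the illustration is mine, not KB's): the coroutine's locals survive each suspension, whereas a plain method gets a fresh frame on every call.

```ruby
# A coroutine keeps its own environment (here: the locals n and total)
# alive between resumptions.
counter = Fiber.new do
  total = 0
  n = 0
  loop do
    n += 1
    total += n
    Fiber.yield [n, total]  # suspend, handing a value to the resumer
  end
end

p counter.resume  # [1, 1]
p counter.resume  # [2, 3]  -- n and total survived the suspension
p counter.resume  # [3, 6]
```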

How do closures and call/cc fit in this model? You could say a closure
is a function and a saved environment.

ok, so how is this not just aka a continuation? closures retain their
lexical scope, and otherwise an anonymous/lambda function/subroutine, no?

Exactly! A closure is just a continuation. It's a continuation that
takes another continuation (a return address) that it must pass the end
result to.

you can express that in Ruby:

require 'continuation' # Kernel#callcc needs this on Ruby 1.9+

# return a closure that adds i to its argument
def sum(i)
  params = callcc do |cc|
    return cc #exit from the method with the current-continuation
  end
  #params contains the arguments passed to the continuation
  ret = params.shift # the first argument is the return continuation
  retval = i + params.shift # add i to the second argument
  ret.call(retval) #send the result to the return continuation
end

a = sum(2)
callcc{ |cc| a.call(cc, 3) }
=> 5

To call a closure, you get the saved
environment, and call the function with this environment, and any
parameters. As described above, the function is a continuation-point
that accepts a continuation, and calls this continuation with the final
result.

so, this is to say, subroutines/functions/methods and coroutines are
_composed_ of continuations? and a closure is composed of a function?
again, when the topic returns to a matter of "saved
environment/context/scope" i am confused by a perceived redundancy.

However you could also save the environment for a continuation-point,
basically creating a closure over the continuation-point. This closure is
what is normally called a continuation.

uh, right. a closure is a continuation; but by saying this i seem to be
muddling this "continuation-point" concept i think.

Well, see the continuation-point as an actor which takes a value and
an environment. If you pack the environment and the continuation-point
together, you get a continuation like the one returned by call/cc.

call/cc just captures the next continuation (with the environment, which
includes any functions to return to), wraps it in an object, and calls
the given block with that object.

for now i am thinking of this as not unlike the Proc object, as how
functions/blocks are defined, particularly without an object to "own"
them.

The continuation is actually that part of
the program that will receive the result from call/cc. That result can
be either the value of the block, or the value passed to the
continuation-object (using "cont.call(value)").

i'm still not so swift on understanding this either... for now it feels
like block/proc object design burp syntax. maybe that's totally wrong.

I think the nice thing about actor theory is that it allows lambda
calculus to be implemented using parallel objects. You could build a
chip, where each continuation-point can be programmed in the chip, and
would act on its own. This chip could then perform all the computations
in parallel, since each continuation-point can act on its own. This
would be radically different from the current CPU design, where there is
only a single point of control flow. A special sort of FPGA could be
used for this kind of chip, since they allow reprogramming on the fly.

yes! think of http://www.starbridgesystems.com/ though in the short to
mid-term i am thinking more about the PPC970 architecture.
but an FPGA solution would certainly reach much closer to escaping von
neumann architecture.

Because lambda calculus is very basic to programming, it could be
possible to transform a traditional program into a form that makes
maximum use of parallelism in such a chip, and therefore be very fast.
Also, all processes that fit into memory would run effectively in
parallel (not simulated using task-switching), and wouldn't slow down
any other process. It would also support dynamic languages better, so
that a ruby program wouldn't be any slower than a C/C++ program.

precisely! this has very important ramifications for my mid- to long-term
interests in cybernetics (norbert wiener) and systems theory (which now
seems to be popularly referred to as "complexity"). frankly, much like
stephen wolfram and mathematica, my current research on bringing the actor
model / flow-based programming to ruby is simply a tool to do what i
really want to do. like the graffiti at mit: "i would rather write
programs to help me write programs than write programs".

I don't know if the design of such a chip would be possible, but it
would be a radically different approach to computing.

i believe some work in this has already been done. some old, such as
"real-time actor systems" by henry baker, and some new such as the
aforementioned starbridgesystems.

Nice! I didn't know this kind of computing already exists. These CPUs
from starbridgesystems are chips with a lot of FPGAs packed in them.
I am a bit worried that they say they patented this technology, and have
patents pending. They shouldn't patent universal technology!

perhaps the hardware will seem more relevant once the software for it
actually exists.

It would be nice to have a Ruby translator for them. You could then take
a Ruby program, and it would be wired into the FPGA, and as a result be
very fast. For example, each class/object could have its own method
lookup (which is one of the bottlenecks of Ruby). Dynamically adding
or changing methods is no problem. Ruby on speed!

Regards,
KB

peace,
-z

Thanks,
KB

···

On Sat, 21 Aug 2004 11:30:50 +0900, zuzu wrote:

On Sat, 21 Aug 2004 10:26:08 +0900, Kristof Bastiaensen <kristof@vleeuwen.org> wrote:

On Sat, 21 Aug 2004 07:24:31 +0900, zuzu wrote:

BTW: Since you mentioned FPGAs (and yes, I think FPGAs represent the future of high-performance computing, since they allow you to map computationally intensive algorithms into hardware for maximum performance [though we still need better tools for mapping the algorithms]), you might be interested in the $99 Spartan III starter kit from Xilinx (http://www.xilinx.com) - it includes a board with a 200K-gate FPGA and software for programming it. I just got mine today and am looking forward to playing with it (tinker toys for adults ;)). It's amazing that you can get something like that now for $99. I recall working on an ASIC back in '89 that only had 2K gates, and it wasn't reprogrammable. Back then a 200K-gate ASIC would have been a very high-end design done by a large company with tens of thousands (if not hundreds of thousands) of dollars worth of workstations and design tools, and FPGAs were only just becoming available with maybe 1K gates.

Tell me you're planning a Ruby2FPGA compiler! Talk about edge vs other langs... :slight_smile:

/Robert

or, perhaps i'm over-thinking this because of the terminology and theory.

if a closure is a continuation,

then it would seem the simpler syntax for creating actor-model
continuation passing (CPS) would simply be a matter of 'yield'ing
another closure rather than a firm value (and avoiding 'return'
completely, except perhaps to allow a "falling off the edge of a
block" return when a final value is eventually computed).

would this be correct?
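for instance, here is a toy sketch of that idea in plain ruby blocks (fact and fact_cps are made-up names, just for illustration):

```ruby
# direct style: the result comes back to the caller via the return value
def fact(n)
  n <= 1 ? 1 : n * fact(n - 1)
end

# cps style: nothing is ever "returned"; every step yields its result
# to the continuation block k instead of to a stack frame
def fact_cps(n, &k)
  if n <= 1
    k.call(1)
  else
    fact_cps(n - 1) { |r| k.call(n * r) }
  end
end

fact_cps(5) { |result| puts result }   # prints 120
```

note how the final value only ever appears by "falling off the edge" into the outermost block.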

peace,
-z

···

On Sat, 21 Aug 2004 21:25:56 +0900, Kristof Bastiaensen <kristof@vleeuwen.org> wrote:

On Sat, 21 Aug 2004 11:30:50 +0900, zuzu wrote:

> On Sat, 21 Aug 2004 10:26:08 +0900, Kristof Bastiaensen <kristof@vleeuwen.org> wrote:
>>
>>
>> On Sat, 21 Aug 2004 07:24:31 +0900, zuzu wrote:
>>
>> > reading carl hewitt's seminal paper on the actor model:
>> > http://www.lcs.mit.edu/publications/specpub.php?id=762 my current
>> > impression is that actors are basically pure-OO objects using
>> > continuations (coroutines?) instead of stack-frame
>> > methods/subroutines. this way objects only require "promises" (in the
>> > form of the passing of context) rather than the final value (as with
>> > method stack-frames). (i think this important towards breaking free
>> > from von neumann architecture, ala bachus:
>> > http://delivery.acm.org/10.1145/360000/359579/p613-backus.pdf?key1=359579&key2=7975779801&coll=portal&dl=ACM&CFID=11111111&CFTOKEN=2222222
>> >
>> > but i still can't quite seem to grok continuations, at least in terms
>> > of interpreting it for ruby, w/r/t closures and Kernel#callcc. this
>> > after reading dan "parrot" sugalski's weblog
>> > [http://www.sidhe.org/~dan/blog/], jim weirich's email
>> > [http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/78288 ],
>> > and rubygarden/c2.com/wikipedia articles on the subject:
>> > http://www.rubygarden.org/ruby?action=history&id=Continuations
>> > http://rubygarden.org/ruby?ContinuationExplanation
>> > http://www.ruby-doc.org/docs/ProgrammingRuby/html/ref_c_continuation.html
>> > http://c2.com/cgi/wiki?ContinuationExplanation
>> > http://en.wikipedia.org/wiki/Subroutine
>> > http://en.wikipedia.org/wiki/Coroutine
>> >
>> >
>> > any further help is much appreciated.
>> >
>> > thanks!
>> >
>> > -z
>>
>> Hi,
>
> hey. let me see how well i follow what say here...
>
>> Here is how I think continuations and actor theory work together. This
>> is my interpretation, and it is possible I didn't get everything right,
>> but I hope it may help you.
>>
>> I think the main problem in understanding continuations is that normally
>> a continuation is regarded as a function. However I think it is more the
>> other way around. A function is more a special form of a continuation.
>
> yes, i've heard it described this way before; seems accurate.
>
>> Let me try to explain a little.
>> Instead of looking at a program as a sequence of statements,
>
> aka "procedural programming"...
>
>> you could
>> consider a program as many independent points (or actors), each of which
>> takes a value, does something with it, and passes it to another point.
>> You could then call each such point a continuation-point. (That is how I
>> call it, I don't really know if there is an official name). In the
>> actor-theory, this point would be an actor.
>
> or i believe one could say "object". as described both by carl hewitt
> [Carl Hewitt, Peter Bishop, Richard Steiger, "A Universal Modular Actor
> Formalism for Artificial Intelligence", Proceedings of the 1973
> International Joint Conference on Artificial Intelligence, pp. 235-246.]
> and mark miller
> [http://www.erights.org/elib/capability/ode/ode-submission.html] (and in a
> previous thread here; Re: Functional Ruby):
>
> Objects == Lambda Abstraction + Message Dispatch + Local Side Effects
>
> however, objects define their inputs and outputs with _methods_
> (functions/subroutines/whatever you wish to call them). so from this
> perspective the "continuation points" seem more concerned with lexical
> scope, yes? or "associates" as carl hewitt seemingly describes this
> phenomenon.
>
>

Yes, that's right. A 'continuation point' would be a special kind
of object/actor, which receives an environment and a value. It
could also take more values, but in the pure lambda-calculus it can take
only one.
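As a toy illustration (all the names here are invented for the example), such a point can be modelled in Ruby as a lambda that receives an environment and a single value:

```ruby
# Each continuation-point is a lambda taking an environment (a plain
# Hash standing in for the actor that answers queries) and one value,
# as in the pure lambda-calculus case.
double = lambda { |env, v| env[:next].call(env, v * 2) }
add_x  = lambda { |env, v| env[:done].call(v + env[:x]) }

# The environment carries both data (:x) and the points that control
# is handed on to (:next, :done).
env = { x: 10, next: add_x, done: lambda { |v| v } }
double.call(env, 5)   # => 20
```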

>> To be useful, the
>> continuation-point must also be aware of the current environment,
>
> (context / scope)
>
>> and the
>> easiest way is to pass the environment from each point to another. In
>> the actor-theory this environment could be just another actor, which
>> could respond to queries, for example to get a value.
>>
>> A function would be a special kind of continuation-point, one that takes
>> in addition to the other values a continuation. This continuation will
>> be saved in the environment, and when the function has finished doing
>> what it has to do, it will send its return value to the saved
>> continuation.
>
> after re-reading that paragraph twice, ok, i think... how is this not a
> COroutine rather than a SUBroutine? that it returns a _value_ rather than
> simply another context/environment?
>

If I understand correctly, a coroutine restores its environment when it
resumes control. A function doesn't do that. It doesn't have an environment.
It accepts the environment that the caller gives it. This may be the
current environment, or a saved one in the case of a closure. Also note
that it doesn't really "return" a value. It passes the value to the given
continuation.
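That point can be made concrete with Kernel#callcc (a sketch; note that on Ruby 1.9 and later callcc needs require 'continuation'):

```ruby
require 'continuation'  # Kernel#callcc lives here on Ruby 1.9+

# The function never uses 'return'; it hands its result straight to
# the continuation the caller gave it.
def add(a, b, ret)
  ret.call(a + b)        # jump back to the caller with the value
  puts "never reached"   # control does not fall through to here
end

result = callcc { |k| add(2, 3, k) }
result   # => 5
```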

>> How do closures and call/cc fit in this model? You could say a closure
>> is a function and a saved environment.
>
> ok, so how is this not just aka a continuation? closures retain their
> lexical scope, and otherwise an anonymous/lambda function/subroutine, no?
>

Exactly! A closure is just a continuation. It's a continuation that
takes another continuation (a return address) that it must pass the end
result to.

you can express that in Ruby:

# return a closure that adds i to its argument
# (note: on Ruby 1.9 and later, Kernel#callcc needs `require 'continuation'`)
def sum(i)
  params = callcc do |cc|
    return cc # exit from the method with the current continuation
  end
  # params contains the arguments passed to the continuation
  ret = params.shift # the first argument is the return continuation
  retval = i + params.shift # add i to the second argument
  ret.call(retval) # send the result to the return continuation
end

a = sum(2)
callcc{ |cc| a.call(cc, 3) }
=> 5

>
>> To call a closure, you get the saved
>> environment, and call the function with this environment, and any
>> parameters. As described above, the function is a continuation-point
>> that accepts a continuation, and calls this continuation with the final
>> result.
>
> so, this is to say, subroutines/functions/methods and coroutines are
> _composed_ of continuations? and a closure is composed of a function?
> again, when the topic returns to a matter of "saved
> environment/context/scope" i am confused by a perceived redundancy.
>
>> However you could also save the environment for a continuation-point,
>> basically creating a closure over the continuation-point. This closure is
>> what is normally called a continuation.
>
> uh, right. a closure is a continuation; but by saying this i seem to be
> muddling this "continuation-point" concept i think.

Well, see the continuation-point as an actor which takes a value and
an environment. If you pack the environment and the continuation-point
together you get a continuation like it is returned by call/cc.
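A small sketch of that packaging with Kernel#callcc (again, require 'continuation' is needed on Ruby 1.9 and later):

```ruby
require 'continuation'  # for Kernel#callcc on Ruby 1.9+

log = []
saved = nil
count = callcc { |cc| saved = cc; 0 }  # cc packs the point *and* its environment
log << count
saved.call(count + 1) if count < 3     # re-enter that saved point with a new value
log   # => [0, 1, 2, 3]
```

This is the same trick as the Continuation.create counter earlier in the thread: calling the saved continuation jumps back to the callcc site, with the passed value becoming callcc's return value.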

>
>> call/cc just captures the next continuation (with the environment, which
>> includes any functions to return to), wraps it in an object, and calls
>> the given block with that object.
>
> for now i am thinking of this as not unlike the Proc object, as how
> functions/blocks are defined, particularly without an object to "own"
> them.
>
>> The continuation is actually that part of
>> the program that will receive the result from call/cc. That result can
>> be either the value of the block, or the value passed to the
>> continuation-object (using "cont.call(value)").
>
> i'm still not so swift on understanding this either... for now it feels
> like block/proc object design burp syntax. maybe that's totally wrong.
>
>> I think the nice thing about actor theory is that it allows lambda
>> calculus to be implemented using parallel objects. You could build a
>> chip, where each continuation-point can be programmed in the chip, and
>> would act on its own. This chip could then perform all the computations
>> in parallel, since each continuation point can act on its own. This
>> would be radically different from the current cpu-design, where there is
>> only a single point of control flow. A special sort of FPGA could be
>> used for this kind of chip, since they allow reprogramming on the fly.
>
> yes! think of http://www.starbridgesystems.com/ though in the short to
> mid-term i am thinking more about the PPC970 architecture.
> but an FPGA solution would certainly reach much closer to escaping von
> neumann architecture.
>
>> Because lambda calculus is very basic to programming, it could be
>> possible to transform a traditional program into a form that makes
>> maximum use of parallelism in such a chip, and therefore be very fast.
>> Also all processes that would fit into memory would run effectively
>> parallel (not simulated using task-switching), and wouldn't slow down
>> any other process. It would also support dynamic languages better, so
>> that a ruby program wouldn't be any slower than a C/C++ program.
>
> precisely! this has very important ramifications for my mid to long-term
> interests in cybernetics (norbert wiener) and systems theory (which seems
> to now be popularly referred to as "complexity"). frankly, much like stephen
> wolfram and mathematica, my current research on bringing actor model /
> flow-based programming to ruby is simply a tool to do what i really want
> to do. like graffiti at mit, "i would rather write programs to help me
> write programs, than write programs".
>
>> I don't know if the design of such a chip would be possible, but it
>> would be a radically different aproach to computing.
>
> i believe some work in this has already been done. some old, such as
> "real-time actor systems" by henry baker, and some new such as the
> aforementioned starbridgesystems.

Nice! I didn't know this kind of computing already exists. These CPUs
from starbridgesystems are chips with a lot of FPGAs packed in them.
I am a bit worried they say they patented this technology, and have
patents pending. They shouldn't patent universal technology!

> perhaps the hardware will seem more relevant once the software for it
> actually exists.

It would be nice to have a Ruby translator for them. You could then take
a Ruby program, and it would be wired into the FPGA, and as a result be
very fast. For example each class/object could have its own method
lookup (which is one of the bottlenecks of Ruby). Dynamically adding
or changing methods is no problem. Ruby on speed!

>
>> Regards,
>> KB
>
> peace,
> -z

Thanks,
KB

not likely as an advantage over other languages... such low level
design would probably end up looking a lot more like the parrot VM than
a single-language interpreter, or at least the development process
would allow other languages to follow suit in parallel.

however, lisp and smalltalk machines were developed.
programming language as the OS is not unheard of.

the more things change, the more they stay the same.

-z

···

On Sun, 22 Aug 2004 15:56:57 +0900, Robert Feldt <feldt@ce.chalmers.se> wrote:

>BTW: Since you mentioned FPGAs (and yes, I think FPGAs represent the
>future of high performance computing since they allow you to map
>computationally intensive algorithms into hardware for maximum performance
>[though, we still need better tools for mapping the algorithms]), you
>might be interested in the $99 Spartan III starter kit from Xilinx
>(http://www.xilinx.com) - it includes a board with a 200K gate FPGA and
>software for programming it. I just got mine today and am looking
>forward to playing with it (tinker toys for adults :wink:) - it's amazing
>that you can get something like that now for $99, I recall working
>on an ASIC back in '89 that only had 2K gates and it wasn't
>reprogrammable. Back then a 200K ASIC would have been a very high-end
>design done by a large company with tens of thousands (if not hundreds of
>thousands) of dollars worth of workstations and design tools and FPGAs
>were only just becoming available with maybe 1K gates.
>
>
>
Tell me you're planning a Ruby2FPGA compiler! Talk about edge vs other
langs... :slight_smile:

/Robert

In article <412843B2.5070504@ce.chalmers.se>,

···

Robert Feldt <feldt@ce.chalmers.se> wrote:

Tell me you're planning a Ruby2FPGA compiler! Talk about edge vs other
langs... :slight_smile:

Well, that would be cool.

I'm certainly hoping to use Ruby somewhere in the process. I'm working on
some blif (Berkeley logic interchange format) parsing tools in Ruby now.

Phil

zuzu wrote:

not likely as an advantage over other languages... such low level
design would probably end up looking a lot more like the parrot VM than
a single-language interpreter,

That last part was with tongue-in-cheek. I seriously doubt it would look anything like a Parrot VM though, other than at a very artificial level.

or at least the development process
would allow other languages to follow suit in parallel.

Well, what kind of research or open-source development would not?

however, lisp and smalltalk machines were developed.
programming language as the OS is not unheard of.

the more things change, the more they stay the same.

Sure, but they were trying to build custom chips implementing lisp/st VMs (at least to my knowledge); what would be more up-to-date/interesting now would be to take a certain piece of code (in Ruby or what have you) and compile that part into an FPGA config realizing the same computational process. That is a real challenge that I imagine much research is going into. For example there are the "similar" efforts to get C compilers to compile for modern GPUs etc.

/R

In article <41286EBC.6020808@ce.chalmers.se>,

···

Robert Feldt <feldt@ce.chalmers.se> wrote:

zuzu wrote:

not likely as an advantage over other languages... such low level
design would probably end up looking a lot more like the parrot VM than
a single-language interpreter,

That last part was with tongue-in-cheek. I seriously doubt it would look
anything like a Parrot VM though, other than at a very artificial level.

or at least the development process
would allow other languages to follow suit in parallel.

Well, what kind of research or open-source development would not?

however, lisp and smalltalk machines were developed.
programming language as the OS is not unheard of.

the more things change, the more they stay the same.

Sure, but they were trying to build custom chips implementing lisp/st
VM's (at least to my knowledge); what would be more
up-to-date/interesting now would be to take a certain piece of code (in
Ruby or what have you) and compile that part into an FPGA config
realizing the same computational process.

There's certainly work going into converting C to gates especially with
DSP algorithms. There is also a company that sells a tool to convert
MatLab DSP algorithms to FPGA implementations.

Phil