Ruby implementation Q's

Apologies in advance for this meaty posting:

I’m currently in the process of developing a Ruby implementation
that’s more suited to embedding.
I had a look at many languages, Smalltalk, Lisp, Scheme, Self, Python,
Java, C#, Lua, ElastiC, but figured that Ruby has the cleanest and
most appealing OO model and language syntax.

Here’s a few differences with regards to my implementation. I wonder
if anybody here would like to comment:

  1. I support the Ruby core language and object model but won’t be
    implementing all of the libraries. For example, file handling and
    regexp will be optional. Networking and net programming won’t be
    supported.
  2. I’ve written a generational garbage collector that should be much
    faster than the Ruby mark-and-sweep collector. The young generation
    is implemented using a Cheney-style copying collector, which means that
    allocations are very fast and only the ‘live’ set is visited (a toy
    sketch of the idea follows this list). There is a separate
    ‘large-chunk’ space for dealing with large binary resources.
  3. In the current Ruby implementation, everything is represented by
    linked nodes. Unless I’m mistaken, that means that even code is
    scanned for garbage collection. I have the concept of an atom or
    slot. These are 64-bit elements comprising 32 bits of flags/counts
    and a data element. Objects, even internal hash tables and arrays,
    are composed of these slots and are allocated in a unified way.
  4. Methods are represented as bytecodes. The method bytecodes are
    stored in the ‘large-chunk’ manager and so are not a burden on the
    garbage collector. I’m still working out the best opcode arrangement.
  5. I’m developing in C++.
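
Here is the toy sketch of the Cheney idea, purely illustrative (the real
collector works on raw memory in C++, not on Ruby arrays; all names here
are made up):

class CheneyHeap
  Cell = Struct.new(:payload, :refs, :forward)   # refs hold cell indices

  def initialize(size)
    @from = Array.new(size)   # from-space
    @to   = Array.new(size)   # to-space
    @free = 0                 # bump-pointer allocation: very fast
  end

  def alloc(payload, refs = [])
    @from[@free] = Cell.new(payload, refs, nil)
    @free += 1
    @free - 1                 # return the new cell's index
  end

  # Copy everything reachable from the roots; garbage is never visited.
  def collect(roots)
    @scan = @top = 0
    new_roots = roots.map { |r| evacuate(r) }
    while @scan < @top                         # Cheney's breadth-first scan
      cell = @to[@scan]
      cell.refs = cell.refs.map { |r| evacuate(r) }
      @scan += 1
    end
    @from, @to = @to, Array.new(@from.size)    # flip the semispaces
    @free = @top
    new_roots
  end

  private

  # Copy a cell once, leaving a forwarding pointer in the old cell.
  def evacuate(idx)
    cell = @from[idx]
    return cell.forward if cell.forward
    @to[@top] = Cell.new(cell.payload, cell.refs, nil)
    cell.forward = @top
    @top += 1
    cell.forward
  end
end

heap = CheneyHeap.new(16)
a = heap.alloc("a")
b = heap.alloc("b", [a])
heap.alloc("junk")        # unreachable, so never touched by collect
heap.collect([b])         # live cells end up compacted in the new space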

So far, I have the garbage collector and the Ruby class and object
code written; methods and class/instance variables can be accessed.
Mixins via include are fully supported and I have the initial Ruby
metaclasses and class hierarchy initialized. I have the beginnings of
the lexer and parser.

After having a look at the original Ruby source code, a few things
struck me:

  1. The internal symbol function is called rb_intern(). This generates
    a unique number with embedded symbol type and is used as a selector
    for method and variable lookups. But a hash must be generated from
    this number each time, which is a little time-consuming. Perhaps a
    speed improvement would be to have the intern return a pointer to a
    unique data structure which contains the precalculated hash value?

    intern_symbol *pSymbol = rb_intern( … );  /* proposed changed signature */
    int hash = pSymbol->hash;
    int type = pSymbol->type;

  2. The code implements a method cache to accelerate method lookup.
    How about a cache for object variable lookup? This would accelerate
    class variable lookup.
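
For reference, the method-cache idea as a toy Ruby sketch (helper names
are made up; the real cache lives in C and must be flushed whenever a
method is defined or removed):

METHOD_CACHE = {}

def cached_lookup(klass, selector)
  METHOD_CACHE[[klass, selector]] ||= search_hierarchy(klass, selector)
end

def search_hierarchy(klass, selector)
  klass.ancestors.each do |mod|
    if mod.instance_methods(false).include?(selector)
      return mod.instance_method(selector)
    end
  end
  nil
end

cached_lookup(Array, :map)   # walks the ancestor chain once...
cached_lookup(Array, :map)   # ...then comes straight from the cache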

As a final point, I hear a lot of talk about finalization and its
pitfalls. It seems to me that finalization happens too late to be
truly useful. What would the theoretical implications be of an
optional destroy mechanism?

a = MyClass.new
a.show       # => "I live!"
a.destroy
a.show       # => exception! 'a' does not exist!

The 'destroy' method would have the effect of broadcasting the
'destroy' message to all variables belonging to 'a'. 'a' would then be
marked as destroyed, and any pointers to 'a' would be treated like
weak pointers: if they pointed to a destroyed object they would
become pointers to nil. The garbage collector could easily be made
aware of this.
This would allow the programmer to override the ‘destroy’ method and
do useful resource deallocation, file closing and all the other
stuff that people complain about.

What issues might this raise? Not all classes should define 'destroy'
for obvious reasons…

Lastly, gratitude to matz for developing such an elegant, powerful and
simple language.

Justin Johnson
justinj@mobiusent.com

As a final point, I hear a lot of talk about finalization and its
pitfalls. It seems to me that finalization happens too late to be
truly useful. What would the theoretical implications be of an
optional destroy mechanism? [details elided]

I have heard this mentioned a lot and never understood these comments. Can
you explain in more detail? Why do you think finalization happens too late
to be useful?

I have used the existing finalisation mechanism many times, from simple uses
(e.g. ensuring the release of external resources) to quite complex classes
(e.g. a weak hashtable that removes entries when keys are garbage
collected). I have never found a limitation with the finalization system.
On the other hand, I have found it conceptually simpler than Java’s
finalisation system (just look at the complexity of the java.lang.ref
package), and less prone to dangling pointers than C++ destructors.
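
For example, the core of such a weak hashtable can be sketched with
ObjectSpace.define_finalizer (a minimal version; the class name is made
up):

class WeakTable
  def initialize
    @data = {}
  end

  def []=(key, value)
    @data[key.object_id] = value
    # The finalizer must not capture `key` itself, or it would keep it alive.
    ObjectSpace.define_finalizer(key, self.class.reaper(@data))
  end

  def [](key)
    @data[key.object_id]
  end

  def self.reaper(data)
    proc { |object_id| data.delete(object_id) }  # drop the entry when the key dies
  end
end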

Cheers,
Nat.

···

Dr. Nathaniel Pryce
B13media Ltd.
Studio 3a, Aberdeen Business Centre, 22/24 Highbury Grove, London, N5 2EA
http://www.b13media.com

Hi,

Here’s a few differences with regards to my implementation. I wonder
if anybody here would like to comment:

<snip wonderful differences, except no.5>

Great. Can I steal your code for the future Ruby? :wink:

After having a look at the original Ruby source code, a few things
struck me:

  1. The internal symbol function is called rb_intern(). This generates
    a unique number with embedded symbol type and is used as a selector
    for method and variable lookups. But a hash must be generated from
    this number each time, which is a little time-consuming. Perhaps a
    speed improvement would be to have the intern return a pointer to a
    unique data structure which contains the precalculated hash value?

I’m not sure whether symbol hash calculation is a bottleneck. I guess
it’s not, so saving a precalculated hash value is not worth the
memory it consumes.

  2. The code implements a method cache to accelerate method lookup.
    How about a cache for object variable lookup? This would accelerate
    class variable lookup.

Object variables change too often compared to methods, if I don’t
misunderstand you.

As a final point, I hear a lot of talk about finalization and its
pitfalls. It seems to me that finalization happens too late to be
truly useful. What would the theoretical implications be of an
optional destroy mechanism?

a = MyClass.new
a.show       # => "I live!"
a.destroy
a.show       # => exception! 'a' does not exist!

Ruby’s finalization is kept very primitive intentionally. It’s not
for everybody. Some classes implement optional (or explicit) destroy
mechanism, for example:

a = open("/dev/null", "w")
a.print "foo"
a.close
a.print "bar"   # => exception! "closed stream"

						matz.
···

In message “Ruby implementation Q’s” on 02/07/02, Justin Johnson justinj@mobiusent.com writes:

Apologies in advance for this meaty posting: I’m currently
in the process of developing a Ruby implementation that’s
more suited to embedding. I had a look at many languages,
Smalltalk, Lisp, Scheme, Self, Python, Java, C#, Lua,
ElastiC, but figured that Ruby has the cleanest and most
appealing OO model and language syntax.

> Here's a few differences with regards to my
> implementation. I wonder if anybody here would like to
> comment:

> 1. I support the Ruby core language and object model but
> won't be implementing all of the libraries.  For example,
> file handling and regexp will be optional.  Networking and
> net programming won't be supported.  2. I've written a
> generational garbage collector that should be much faster
> than the Ruby mark-and-sweep collector.   [...]

Then you might consider using the garbage collector in Qish, see

QISH introduction

Qish contains a generational copying GC for and in C, which also
supports finalized objects.

(If the GPL license bothers you and you prefer the LGPL for the GC, please
email me.)

···

Justin Johnson justinj@mobiusent.com

Basile STARYNKEVITCH
email: basilestarynkevitchnet
alias: basiletunesorg
8, rue de la Faïencerie, 92340 Bourg La Reine, France

I know next to nothing about GCs, yet I wonder: when is it, in one’s
code, that references are lost?

My initial answers:

  1. for all objects: when the program ends execution
  2. for local instances in a method: when the method completes execution

When else?

Is there a pattern here that can be tapped into, such that by monitoring
a type of “scope” for an object, all objects within that scope are
destroyed when the scope completes?

just some thoughts,
~transami

P.S. If you really want speed for an embedded solution, why do GC at all?
Just take the C/C++ approach.

···

On Mon, 2002-07-01 at 10:50, Justin Johnson wrote:

<snip: full original post, quoted above>


~transami

“They that can give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety.”
– Benjamin Franklin

Also note that this isn’t exception-safe, since a doesn’t get closed if
the first a.print raises an exception. The idiomatic way to do this in
Ruby is using blocks.
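
For example, a minimal sketch of the block form:

# The stream is closed on the way out, even if the block raises.
open("/dev/null", "w") do |a|
  a.print "foo"
end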

Explicit destruction is almost never a good idea, since it’s far too
easy to forget or to create a code path that happens to skip
destruction.

Paul

···

On Tue, Jul 02, 2002 at 02:54:00AM +0900, Yukihiro Matsumoto wrote:

Ruby’s finalization is kept very primitive intentionally. It’s not
for everybody. Some classes implement optional (or explicit) destroy
mechanism, for example:

a = open("/dev/null", "w")
a.print "foo"
a.close
a.print "bar"   # => exception! "closed stream"

I have heard this mentioned a lot and never understood these comments. Can
you explain in more detail? Why do you think finalization happens too late
to be useful?

Imagine a class that manages bitmaps, sounds or other
forms of large binary data; perhaps even a mixture. An instance of
the class is created and used. When the instance is no longer
referenced, it hangs around until a garbage collection is initiated,
whereby it is deleted and then possibly finalized. The problem is,
the data is hanging around until that point. Python (if I can mention
the ‘P’ word here!) gets around this by using a reference-counting
system for garbage collection: the moment an object’s reference count
hits 0, it can be safely destructed. But I don’t think
reference counting is a good garbage collection strategy, for a number
of reasons. The upshot is that you cannot predict when finalization
is going to happen.

When an object is ‘destroyed’ it would broadcast the destroy message to
all the objects it owns, creating a chain reaction.

Ruby is a wonderfully dynamic language. On the fly, you can create
variables for classes/modules/instances. Wouldn’t it be consistent to
be able to delete them too?

Justin Johnson
justinj@mobiusent.com

<snip wonderful differences, except no.5>

Great. Can I steal your code for the future Ruby? :wink:

How easy is it to convert from C++ to C? :wink:

Maybe I can contribute some bytecode work at some time - although it’s
going to take a while. There are a lot of Ruby implementation issues
that I have to go through.

I’m not sure whether symbol hash calculation is a bottleneck. I guess
it’s not, so saving a precalculated hash value is not worth the
memory it consumes.

You’re probably right; for scripting there are other bottlenecks. But I’m
trying to get an implementation that has very fast execution, so I’ll
still consider it.
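
For what it’s worth, the idea as a toy Ruby sketch (Symtab is made up;
the real thing would be C):

class Symtab
  Entry = Struct.new(:name, :hash_value, :type)

  def initialize
    @entries = {}
  end

  # The hash is computed once, at intern time, and reused ever after.
  def intern(name, type = :method)
    @entries[name] ||= Entry.new(name, name.hash, type)
  end
end

tab = Symtab.new
sym = tab.intern("each")
sym.hash_value   # precalculated; no rehashing per lookup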

Object variables change too often compared to methods, if I don’t
misunderstand you.

I don’t think I explained this very well. I’m referring to class
variables. To fetch a class variable, the runtime may have to
traverse class hierarchies in the same way that it does for methods.
Even though class variables change, their hash is calculated from
their intern id, so cache behaviour is the same as it is for methods.
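
A tiny illustration of the hierarchical lookup (class variables are
shared down the inheritance chain):

class Parent
  @@counter = 0        # defined here...
end

class Child < Parent
  def bump
    @@counter += 1     # ...but resolved from Child by walking up the
  end                  # hierarchy, much like a method lookup
end

Child.new.bump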

Ruby’s finalization is kept very primitive intentionally. It’s not
for everybody. Some classes implement optional (or explicit) destroy
mechanism, for example:

I understand. What I’m thinking about is the ability to remove a
variable from a class/module. Because the variable can be removed, it
would be useful to allow a destroy method to execute before removal,
so that file closing, system resource freeing, etc. can be performed.

BTW, this is not a proposal for inclusion in the current version of
Ruby.

class Parent
  def destroy
    p "Destructing Parent"
  end
end

class MyClassA
  def destroy
    p "Destructing A"
  end
end

class MyClassB < Parent
  def initialize
    @avar = MyClassA.new
  end

  def destroy
    p "Destructing B"
  end
end

test = MyClassB.new

other = test

test.destroy   # => "Destructing B"
               #    "Destructing A"
               #    "Destructing Parent"

p other        # => nil

‘test’ no longer exists as a variable.

When the runtime comes across a var like ‘other’, it checks to see if
the object has been destroyed. If it has, the variable is changed to
nil. Garbage collection would do exactly the same.
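
A toy sketch of that check (purely hypothetical, including the
destroyed? method; this is not how Ruby’s runtime works):

# A reference wrapper that lazily collapses to nil on access.
class Ref
  def initialize(obj)
    @obj = obj
  end

  def deref
    @obj = nil if @obj && @obj.destroyed?   # dead object? become nil
    @obj
  end
end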

This solves another problem. Some finalizers (not Ruby’s) have to be
careful about creating references that might prevent GC. Because this
scheme handles ‘floating’ pointers, that’s impossible to do.

I’m not suggesting this instead of garbage collection. If ‘test’ were
not manually destroyed, the GC would eventually call its destroy
method. That way, destroy would act similarly to finalization.

Justin Johnson
justinj@mobiusent.com

Is there a pattern here that can be tapped into, such that by monitoring
a type of “scope” for an object, all objects within that scope are
destroyed when the scope completes?

The issue of scope is different for Ruby than for a language like C/C++
because a variable is a reference to an object, not the object itself.
This means that objects travel around at runtime and behave in a fairly
scopeless way. I suppose scope in this case ends when the object is no
longer referenced. This is the problem: reference counting offers a
potential solution, but it is flawed because of cyclic references.

P.S. If you really want speed for an embedded solution, why do GC at all?
Just take the C/C++ approach.

A couple of reasons:

  1. C/C++ needs too much language scaffolding and is too error-prone for
    situations where all you care about is modelling behaviour.
  2. Dynamics. I need to be able to change classes and variables, and use
    reflection at runtime.
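
e.g. the kind of runtime dynamism I mean:

class Model; end

Model.class_eval do
  def speak; "hello"; end            # add a method at runtime
end

m = Model.new
m.instance_variable_set(:@hp, 100)   # add state at runtime
p m.instance_variables               # lists @hp
p Model.instance_methods(false)      # lists speak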
···


Justin Johnson

Explicit destruction is almost never a good idea, since it’s far too
easy to forget or to create a code path that happens to skip
destruction.

I’m suggesting that being able to remove a variable (and have it
broadcast a destroy message) is a useful feature which would
facilitate resource management. I’m not suggesting it as a
replacement for GC, more like a supplement.

I wouldn’t even be thinking about this if an object knew the moment it
was no longer referenced and could call a destroy method. Python and
Perl know this, but it can’t be 100% guaranteed because cyclic
references have to be handled with a mark-and-sweep GC.

Justin Johnson

Deleting an object from behind someone’s back would be bad. There’s
currently no easy way to know who is holding a reference to an object
without entering the mark phase of GC. So if an object were
deleted, it would be very expensive to inform everyone that it has gone
away. There’s probably a whole mess of other problems to worry about as
well (such as ensuring that bad references never get used after the
object has been destroyed).

An alternative would be to never hold references to your object, but
instead to only hold WeakRefs. It’s safe to use a WeakRef to an object
after it has been destroyed, because an exception gets raised, rather
than Ruby segfaulting. You can (usually) destroy your object (assuming
no one else is holding a reference to it), by setting your reference to
nil and calling GC.start. Unfortunately, this is not guaranteed.
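
A small sketch of that approach with the standard weakref library
(collection timing hedged, as above):

require 'weakref'

obj = [1, 2, 3]
ref = WeakRef.new(obj)
ref.length                 # => 3, delegated to the live object

obj = nil                  # drop the only strong reference
GC.start                   # only a hint; collection is not guaranteed here

if ref.weakref_alive?
  ref.length               # still usable
else
  ref.length rescue p $!   # WeakRef::RefError, not a segfault
end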

Another alternative (as Matz pointed out) is to be able to explicitly
free the resources the object is holding, but not free the object
itself. The object will continue to exist, and would hold 20 bytes or
so in the freelist, but would be a “dead” object. This requires the
library writer to do some work. If you want to make sure that everyone
gets informed that the object is now “dead”, then use the Observer
pattern.
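
For example, a sketch with the standard observer library (Bitmap and
TextureCache are made up):

require 'observer'

class Bitmap
  include Observable

  def free
    # ... release the external pixel data here ...
    changed
    notify_observers(self)          # tell everyone the object is now "dead"
  end
end

class TextureCache
  def update(bitmap)                # called by notify_observers
    puts "dropping dead bitmap #{bitmap.object_id}"
  end
end

bmp = Bitmap.new
bmp.add_observer(TextureCache.new)
bmp.free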

Paul

···

On Tue, Jul 02, 2002 at 06:51:15PM +0900, Justin Johnson wrote:

Ruby is a wonderfully dynamic language. On the fly, you can create
variables for classes/modules/instances. Wouldn’t it be consistent to
be able to delete them too?

Hi,

Great. Can I steal your code for the future Ruby? :wink:

How easy is it to convert from C++ to C? :wink:

It depends on how much you “abuse” C++.

Maybe I can contribute some bytecode work at some time - although it’s
going to take a while. There are a lot of Ruby implementation issues
that I have to go through.

I’m waiting. I’m waiting, eagerly. :wink:

Object variables change too often compared to methods, if I don’t
misunderstand you.

I don’t think I explained this very well. I’m referring to class
variables. To fetch a class variable, the runtime may have to
traverse class hierarchies in the same way that it does for methods.
Even though class variables change, their hash is calculated from
their intern id, so cache behaviour is the same as it is for methods.

Now I understand. It would reduce the cost of class variable access.
But class variable access can rarely be the bottleneck.

Ruby’s finalization is kept very primitive intentionally. It’s not
for everybody. Some classes implement optional (or explicit) destroy
mechanism, for example:

I understand. What I’m think about is the ability to remove a
variable from a class/module. Because the variable can be removed, it
would be useful to allow a destroy method to execute before removal,
so that file closing, system resource freeing etc can be performed.

Hmm, your “removing variable” means something stronger than “removing
object”. I don’t think I fully understand you. All I can say is it
should not be done by a method or method-like operation. It requires
special syntax. This idea is very interesting, but I’m not sure (yet)
whether it is worth merging. Lisp / Smalltalk / Ruby etc. have had no
problem with the current object model.

						matz.
···

In message “Re: Ruby implementation Q’s” on 02/07/02, Justin Johnson justinj@mobiusent.com writes:

Not just because of cyclic references.

Another problem is that the time to destroy a refcounted object is not
predictable or bounded; it depends on the total number of items
getting destroyed, and that depends on the object structure.

So a refcount destroy could conceivably destroy lots of objects, and
each of those could have some significant destruction behavior,
leading to long pauses.
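
A toy illustration of the cascade (refcounts simulated by hand):

Node = Struct.new(:name, :children, :rc)

def release(node)
  node.rc -= 1
  return unless node.rc.zero?
  puts "finalizing #{node.name}"                 # arbitrary destructor work
  node.children.each { |child| release(child) }  # the cascade
end

leaves = (1..3).map { |i| Node.new("leaf#{i}", [], 1) }
root = Node.new("root", leaves, 1)
release(root)   # one decrement tears down the whole graph in one pause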

Today’s GC solutions typically offer higher performance.

···

On Friday 05 July 2002 03:16 am, Justin Johnson wrote:

This is the problem: reference counting offers a
potential solution, but it is flawed because of cyclic references.


Ned Konz
http://bike-nomad.com
GPG key ID: BEEA7EFE

Justin Johnson wrote:

<snip: Justin’s destroy-as-a-supplement argument, quoted above>

Presently an object persists until all links to it are removed. If you
were to destroy an object and leave loads of references around that
think they are looking at an object but aren’t, that harks back to the
sort of reference/pointer problems that bedeviled C/C++, and avoiding
them was one of the main advantages of the new wave of Java / Python /
Ruby pointerless languages.

Programmers have spent a great deal of time trying to avoid precisely
this sort of problem, so I am at a loss as to why you think this would be
useful in any way.

Now I understand. It would reduce the cost of class variable access.
But class variable access can rarely be the bottleneck.

Of course. In Ruby’s usual environment I don’t think it would be a
bottleneck, not compared to regexp and file manipulation speed. For my own
needs, I’m trying to consider how to make the core language as fast as
possible.

Hmm, your “removing variable” means something stronger than “removing
object”. I don’t think I fully understand you. All I can say is it
should not be done by a method or method-like operation. It requires
special syntax. This idea is very interesting, but I’m not sure (yet)
whether it is worth merging. Lisp / Smalltalk / Ruby etc. have had no
problem with the current object model.

a = MyClass.new
Action: An instance of ‘MyClass’ is created and assigned to a new local
variable ‘a’. The ‘initialize’ method is called for ‘a’.

a.destroy
Action: The ‘uninitialize’ method is called for ‘a’. ‘a’ is removed from the
local variable hash. ‘a’ no longer exists as a variable.

Any pointers that used to reference ‘a’ are set to nil. This can be done as
the runtime comes across pointers to dead objects. This means that ‘destroy’
could be implemented as a method; it wouldn’t be possible to make ‘a’ live
again after ‘destroy’.

I’ll admit, it’s an unusual approach for GC languages to implement voluntary
destruction.

The preference would be for finalization to be called as soon as an object
becomes unreferenced, although I haven’t thought of a good way of doing that.

···


Justin Johnson
justinj@mobiusent.com
Technical Director
Mobius

Programmers have spent a great deal of time trying to avoid precisely
this sort of problem, so I am at a loss as to why you think this would be
useful in any way.

Ok, imagine I have a class that represents bitmap resources, 3d geometric
models or sounds. These are resources that are allocated externally to Ruby
and are not garbage collected. Imagine I have written a C/C++ extension to
allow Ruby to request/release such external resources.

Perhaps I have a var referencing the resource: myvar = bitmap(“image.tga”).
Now, when myvar becomes unreferenced, I still have the bitmap image taking
up memory. This will be the case until garbage collection, whereby
finalization can be done and I can release my external bitmap image. My
memory may therefore become full with unused resources that are waiting for
garbage collection to finalize and release them.

The problem becomes worse if the resources are allocated via a limited
number of handles, such as file handles.

The ideal would be for finalization to happen as soon as an object becomes
unreferenced, although I can’t think of a 100% sure way of achieving that.

···


Justin Johnson
justinj@mobiusent.com
Technical Director
Mobius

Hi,

a = MyClass.new
Action: An instance of ‘MyClass’ is created and assigned to a new local
variable ‘a’. The ‘initialize’ method is called for ‘a’.

No method is called for a variable; it is called for the object referred to by ‘a’.

a.destroy
Action: The ‘uninitialize’ method is called for ‘a’. ‘a’ is removed from the
local variable hash. ‘a’ no longer exists as a variable.

Still, the object referred to by ‘a’ may exist.

Any pointers that used to reference ‘a’ are set to nil. This can be done as
the runtime comes across pointers to dead objects. This means that ‘destroy’
could be implemented as a method; it wouldn’t be possible to make ‘a’ live
again after ‘destroy’.

I’ll admit, it’s an unusual approach for GC languages to implement voluntary
destruction.

The preference would be for finalization to be called as soon as an object
becomes unreferenced, although I haven’t thought of a good way of doing that.

Probably you want something like allocation-by-construction in
C++ (RAII). That strategy is effective when a variable is
equivalent to an object, but it is unavailable in Ruby, Java and
other such languages whose variables are references, without
reference counting or something similar.

You need to use ‘ensure’ (in Ruby) or ‘finally’ (in Java)
instead.

def MyClass.use
  yield(obj = new)
ensure
  obj.destroy if obj   # guard: `new` itself may have raised
end

MyClass.use do |a|
  a.do_something   # may raise exceptions,
end                # but the object is ensured to be destroyed

···

At Wed, 3 Jul 2002 21:53:18 +0900, Justin Johnson wrote:


Nobu Nakada

Hi,

···

In message “Re: Ruby implementation Q’s” on 02/07/03, “Justin Johnson” justinj@mobiusent.com writes:

a.destroy
Action: The ‘uninitialize’ method is called for ‘a’. ‘a’ is removed from the
local variable hash. ‘a’ no longer exists as a variable.

Any pointers that used to reference ‘a’ are set to nil. This can be done as
the runtime comes across pointers to dead objects. This means that ‘destroy’
could be implemented as a method; it wouldn’t be possible to make ‘a’ live
again after ‘destroy’.

I agree it is interesting. But you will not see it in (my) Ruby in
the near future. I have to consider it more deeply, since I have
some concerns about implementation / language impact of the idea.

By the way, it’s not usually called “finalization” in GC languages.

						matz.

Justin Johnson wrote:

<snip: the bitmap-resources example, quoted above>

How about a class with the following interface:

x = BitMapLoader.new

x.loadbitmap("image.tga")
x.dosomethingwiththedata
x.unloadbitmap

The unloadbitmap method would, back in the C code, deallocate the memory
used by the data and therefore free up the resources. However, the object
referenced by x still exists with a much smaller footprint and can be GC’d later.

I will admit that I do not have any experience with the Ruby-to-C
interface, but if you can allocate memory from within the C interface
then you should be able to deallocate it.

If you can’t do this then I can see your problem is somewhat greater
than I have assumed.

No method is called for a variable; it is called for the object referred to by ‘a’.

Sorry, I didn’t describe it so well. I meant that the equivalent of
‘a.initialize’ is executed. It is as you say: the object instance
receives the ‘initialize’ method call.

Still, the object referred to by ‘a’ may exist.

The intention is that calling ‘a.destroy’ would kill the object that ‘a’
referred to, after calling some ‘uninitialize’ method for the object. ‘a’
would now be a reference to nil.

I can see how this would cause problems, because some variables may still
point to the object that has been killed. It would be possible for the
runtime to recognize pointers to killed objects and treat them accordingly by
changing the pointer to nil.

class MyClass
  def uninitialize
    p "Uninitializing"
  end
end

a = MyClass.new
b = a

p a.class     # => MyClass
p b.class     # => MyClass

a.destroy     # => "Uninitializing"

p a.class     # => NilClass
p b.class     # => NilClass

Even so, I think such a feature would raise a lot of issues and I think it
would be better for objects to finalize the moment that they have no other
references to them.

···


Justin Johnson
justinj@mobiusent.com
Technical Director
Mobius