Ability to run finalizers at a given point of a program?

> Wait.. both calls to define_finalizer have a reference to foo as the
> first argument, and both have a closure as second argument which
> doesn't have a reference to foo.

Incorrect. The first argument is irrelevant to what the proc captures.
ObjectSpace::define_finalizer is written this way probably to make
things ugly. You have to pass in the object you wish to attach a
finalizer to, and that's all the first argument is for.
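For what it's worth, a minimal sketch of that signature (make_finalizer is a name of my own; the point is that the proc is built in a scope holding no reference to obj):

```ruby
# Build the finalizer proc in its own scope, so it captures no
# reference to the object it will later finalize.
def make_finalizer
  proc { |object_id| puts "collected #{object_id}" }
end

obj = Object.new
# First argument: the object to watch; second: the callable to run.
ObjectSpace.define_finalizer(obj, make_finalizer)
# The proc receives the object's id once the object is collected
# (at the latest, when the interpreter exits).
```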

I wasn't claiming it would retain a reference; I was just comparing
the two versions and pointing out that both pass foo as the first
argument.

In your version, foo is in-scope when the finalizer proc is created, so
the proc captures foo *even though it is not explicitly used in the
proc*. That's right! Just because you don't reference foo inside a
closure doesn't mean it magically disappears for you.
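This capture is easy to observe without involving the GC at all: a proc's binding keeps every local that was in scope, used or not (a sketch relying on Binding#local_variable_get, available since Ruby 2.1):

```ruby
foo = "still here"
pr = proc { :nothing }  # the body never mentions foo

# Yet the proc's binding still holds foo, keeping it reachable:
puts pr.binding.local_variable_get(:foo)  # => still here
puts pr.binding.local_variables.inspect   # lists :foo (and :pr)
```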

Thanks, I was wrong on that very point. I thought a closure only
captured the necessary parts of its environment, not the whole of it
(what is necessary being decided at closure creation time). And I
can't reproduce this Ruby behaviour in Perl :confused: Perl, in my
example, doesn't capture the whole environment when creating the
closure:

http://www.zarb.org/~gc/t/prog/closures_wholecapture/

···

--
Guillaume Cottenceau - http://zarb.org/~gc/

> In your version, foo is in-scope when the finalizer proc is created, so
> the proc captures foo *even though it is not explicitly used in the
> proc*. That's right! Just because you don't reference foo inside a
> closure doesn't mean it magically disappears for you.

On second thought, I think this greatly reduces the usefulness of
finalizers, because it means a finalizer cannot do anything related to
the instance being destroyed, or else it will retain a reference to
the object, as you explain, right? For example, if the closure,
wherever it is created, holds a reference to any attribute of the
object.

···

--
Guillaume Cottenceau - http://zarb.org/~gc/

Robert Klemme schrieb:

"Glenn Parker" <glenn.parker@comcast.net> schrieb im Newsbeitrag news:425C50C4.4070504@comcast.net...

Do you know when the last "extraction" has been done (making it safe for your finalizer to run)? Can you use that knowledge to explicitly run the finalization method, instead of waiting for the GC?

That's the exact right question. Because if one can invoke GC manually, then one can as well invoke some cleanup method. Nothing really gained.

In some unit tests it would be nice if you could force the GC to run the finalizers. For testing the finalizer code itself you can just call it explicitly, that's right. But how can you verify that, after calling a cleanup method, there are no more references to certain objects? The only way I could think of was to explicitly start the GC and check whether the finalizers had been called. Unfortunately, this didn't work, because I couldn't reliably force the GC to run the finalizers.
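One way to phrase such a check is with the stdlib's WeakRef (a sketch; note that GC.start is only a request in MRI, and conservative stack scanning means the object may survive, so the negative case can't be asserted reliably):

```ruby
require 'weakref'

obj = Object.new
ref = WeakRef.new(obj)
puts ref.weakref_alive?   # => true, a strong reference still exists

obj = nil
GC.start                  # only a hint; the object may or may not go away
# After an actual collection this would report false, but we cannot
# force that reliably, which is exactly the problem described above.
puts ref.weakref_alive? ? "maybe still alive" : "collected"
```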

For example, on my system, Guy's code isn't working either:

   C:\tmp>ruby -v r.rb
   ruby 1.8.1 (2003-12-25) [i386-mswin32]
   *** constructor
   Foo was out-scoped.
   Gc was run.
   *** pseudo-destructor

If this has changed in 1.8.2 I'd be glad to update, but I doubt it.

Regards,
Pit

> > Do you know when the last "extraction" has been done (making it safe for
> > your finalizer to run)? Can you use that knowledge to explicitly run the
> > finalization method, instead of waiting for the GC?

> That's the exact right question. Because if one can invoke GC manually,
> then one can as well invoke some cleanup method. Nothing really gained.

It's far from being as elegant as real destructors, but it's still
better, because invoking the GC manually can be done at a single point
of a program (for example, when a server request has finished),
whereas invoking a cleanup method has to be done for each object, at
each place in the program where you know it's needed.

···

--
Guillaume Cottenceau - http://zarb.org/~gc/

> On second thought, I think this greatly reduces the usefulness of
> finalizers, because it means a finalizer cannot do anything related to
> the instance being destroyed,

This is worse than you think:

svg% cat b.rb
#!/usr/local/bin/ruby
obj = Object.new
ObjectSpace.define_finalizer(obj, proc { p "before"; p obj; p "after" })
svg%

svg% b.rb
"before"
svg%

Guy Decoux

"ts" <decoux@moulon.inra.fr> schrieb im Newsbeitrag
news:200504131121.j3DBLwDA001100@moulon.inra.fr...

> On second thought, I think this greatly reduces the usefulness of
> finalizers, because it means a finalizer cannot do anything related to
> the instance being destroyed,

Not necessarily: you can keep a reference to a member instance that needs
cleanup. Also, the finalizer receives the oid which you can use for
bookkeeping purposes.
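The first suggestion can be sketched like this (Wrapper and cleanup are names of my own): the proc is built in a class-method scope, so it closes over the IO member but not over the instance itself:

```ruby
require 'tempfile'

class Wrapper
  def initialize(io)
    @io = io
    # Hand the member out to a scope where `self` is not captured:
    ObjectSpace.define_finalizer(self, self.class.cleanup(@io))
  end

  # Built in class scope: the returned proc closes over `io` only,
  # so the Wrapper instance stays collectible.
  def self.cleanup(io)
    proc { |_id| io.close unless io.closed? }
  end
end

file = Tempfile.new("demo")
Wrapper.new(file)
# Calling the proc directly shows what the finalizer will do:
Wrapper.cleanup(file).call(0)
puts file.closed?  # => true
```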

> This is worse than you think:
>
> svg% cat b.rb
> #!/usr/local/bin/ruby
> obj = Object.new
> ObjectSpace.define_finalizer(obj, proc { p "before"; p obj; p "after" })
> svg%
>
> svg% b.rb
> "before"
> svg%

Interesting. IMHO the finalizer should not be called in the first place
as it retains a ref to "obj".

    robert