Hi all,
Just ran into something very interesting with finalizers. I've found a workaround (it'll be obvious what it is from the code below), but I just thought I'd share it for discussion's sake.
Consider the code below:
$fcount = 0

class A
  def initialize
  end
end

class B
  def initialize
  end

  def bar a
    ObjectSpace.define_finalizer(a, lambda {|oid| $fcount += 1})
    a = nil # xxx
    nil
  end

  def foo
    a = A.new
    bar a
    nil
  end
end

b = B.new
for i in 1 .. 10000
  GC.start
  b.foo
  GC.start
end
$stderr.print "Program ends. #{$fcount} finalizers called.\n"
All but one of the finalizers run at the point of the trace.
Now, comment out the line marked with xxx. That shouldn't make any difference, but it does: the program reports that 0 finalizers ran at the point of the trace. You can confirm that the rest did run, but they *only* ran at program exit, after the trace. In other words, the resources are never released while the program runs when finalizers are used this way. This is a big problem in a long-running program.
If 10000 iterations isn't enough to see the effect, you can always increase the iteration count.
Note that I am using Ruby 1.9.2p136, Linux. Other versions may behave differently.
Why the code above? I was noticing in my code that finalizers were *never* being run under any circumstances. The above is a stripped-down set of code that acts similarly to mine.
From a bit of research online, I've seen comments saying that values are sometimes left in registers, which affects GC. That seems fair enough in general, but not here. There are 10000 objects here that aren't being finalized; they're not all in registers. If it were the stack, then the first run should also have failed. It's not the return value; that's nil. It's not the incoming parameter either, or the first test would have failed too.
Is it the current scope? Based on the one-line change I made, I have a feeling that the current scope is somehow being captured by the finalizer, so that if "a" remains set, the finalizer holds on to it and the object is never released. That's just my theory; I could be wrong. This situation is particularly bad if you want to set up a finalizer and then immediately return the value (say, when caching a value): the finalizer will never be called, because you can't clear the value before returning it.
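For what it's worth, that theory is consistent with how Proc objects behave in CRuby: a lambda carries the binding of the scope it was created in, and any local in that scope stays reachable through the proc. A minimal illustration (method and variable names are my own, not from the code above):

```ruby
def make_lambda
  a = "still reachable"
  # The lambda never mentions `a`, yet it captures the enclosing binding,
  # and `a` lives on through it.
  lambda { |oid| }
end

l = make_lambda
# The local from the defining scope is reachable via the proc's binding:
puts eval("a", l.binding)   # => "still reachable"
```

If a finalizer proc is created in a scope whose binding (indirectly) references the very object it's registered on, that object can never become garbage, which would explain exactly the behaviour seen here.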
If you've followed me so far, you can probably guess the workaround: call a separate method to set the finalizer, clear the parameter inside that call, and return nil to the caller. It's annoying, but not too painful.
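A sketch of that workaround, boiled down to a standalone script (the helper names are mine; the pattern is the same as the xxx line in the code above):

```ruby
$released = 0

class Resource; end

# Helper method: attach the finalizer here, then nil out the only local
# referencing the object, so the binding captured by the lambda no longer
# pins the object when this method returns.
def attach_release_hook(obj)
  ObjectSpace.define_finalizer(obj, lambda { |oid| $released += 1 })
  obj = nil
  nil
end

def use_resource
  r = Resource.new
  attach_release_hook(r)
  nil
end

1000.times do
  GC.start
  use_resource
end
GC.start

$stderr.puts "#{$released} finalizers ran before exit."
```

On my reading, this should behave like the original code with the xxx line present: most finalizers fire during the run rather than at exit, though conservative stack scanning may still hold on to a few.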
What I am incredibly curious about is why this happens in the first place, and why there doesn't seem to be much discussion of this specific problem with finalizers online. Finalizers failing to run when set up in the current scope, unless the object reference is explicitly cleared afterward, seems like the sort of problem other people should be running into more often.
It's bizarre. I'm wondering what everyone else thinks of it. Have I missed something?
Garth