Okay. I went and did a little simple hacking on a 1.8.4 instance of Ruby to simply output to stdout some debugging info for various GC/memory related actions.
Using the info I gathered from that, I tuned a real application of mine that consistently leaks a small amount of RAM despite having no object leaks. I verified the object counts by setting up a signal handler with which I can have the process dump complete per-class object counts at any time. RAM usage grows slowly, but fairly deterministically: given a certain number of units of work that I ask of the code, it increases by a predictable amount and never seems to go back down, even across very long runtimes (I recently killed and restarted some processes that had been running since sometime in 2005). The object counts, however, stay the same. So that memory use is coming from somewhere else.
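For concreteness, here is a minimal sketch of the kind of signal handler described above. The method name, signal choice (USR1), and output format are my own for illustration; the original post doesn't specify them.

```ruby
# Walk the heap and count live objects per class, writing one
# "count<TAB>class" line per class to the given IO. Returns the counts
# hash so it can also be inspected programmatically.
def dump_object_counts(io = $stderr)
  counts = Hash.new(0)
  ObjectSpace.each_object { |obj| counts[obj.class] += 1 }
  counts.sort_by { |_, n| -n }.each { |klass, n| io.puts "#{n}\t#{klass}" }
  counts
end

# Install the handler; `kill -USR1 <pid>` then dumps counts on demand.
trap('USR1') { dump_object_counts }
```

Comparing two dumps taken some units of work apart is what lets you say "object counts are flat" with confidence, independent of what the process's RSS is doing.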
Anyway, by manually calling GC.start() at a modest interval within the application, I can prevent rb_newobj() from ever encountering an empty freelist and having to call garbage_collect() itself. Done that way, the freed count stays very consistent and well above FREE_MIN, so add_heap() is never invoked from there. I also put debugging output at each of the other two locations where add_heap() can be called. It never is.
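The "modest interval" scheme above can be sketched as follows. The interval value and the work loop are placeholders, not values from this post; in practice you would tune the interval against the freed count the GC reports.

```ruby
# Force a collection after every `interval` units of work, so the
# freelist is replenished proactively and rb_newobj() never has to
# trigger garbage_collect() (and potentially add_heap()) on its own.
def run_with_periodic_gc(units, interval = 100)
  units.each_with_index do |unit, i|
    unit.to_s * 50                        # stand-in for allocation-heavy work
    GC.start if ((i + 1) % interval).zero?
  end
end

# Hypothetical usage with a stand-in work queue:
run_with_periodic_gc((1..1_000))
```

The trade-off is paying for full collections more often than strictly necessary, in exchange for keeping the freed count above FREE_MIN so the interpreter never decides it needs another heap.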
However, despite this, RAM usage of the process continues to creep upward.
So. Why? A leak somewhere else, in some usage of ruby_xmalloc()/ruby_xcalloc()/ruby_xrealloc()?