Jason DiCioccio wrote:
I have written a long-running daemon in ruby to handle dynamic DNS
updates. I have just recently moved it from ruby 1.6 to ruby 1.8 and
updated all of its libraries to their latest versions (it uses dbi and
dbd-postgres). The problem I am having now is that it appears to start
out using a sane amount of memory (around 8MB), but then by the next day,
around the same time, it will be using close to 200MB for the ruby
interpreter alone. The daemon code itself is 100% ruby, so I don’t
understand how this leak is happening. Are there any dangerous code
understand how this leak is happening. Are there any dangerous code
segments I should look for that could make it do this? The only thing I
could think of is that every object returned from a SQL query is .dup’d,
since ruby dbi passes a reference. However, these copies should be
getting swept up automatically by the garbage collector. This is
driving me nuts and I would love it if someone could point me in the
right direction…
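(A minimal sketch of the .dup pattern being described, just to make it
concrete; the connection string, table, and column names are made up and
this is not the actual daemon code:

  require 'dbi'

  dbh = DBI.connect('DBI:Pg:dyndns', 'user', 'pass')

  # Copy each DBI::Row so the daemon owns its own data rather than a
  # reference the driver may reuse.
  records = dbh.select_all('SELECT name, addr FROM hosts').collect do |row|
    row.dup
  end

  # Once 'records' goes out of scope or is reassigned, both the dup'd
  # copies and the original rows become unreachable and should be
  # reclaimed on the next GC run.

)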
If the 200MB is used by objects that are still known to the interpreter
(i.e., not garbage), then you can use ObjectSpace to find them. For
instance, just to count objects of each class:
irb(main):001:0> h = Hash.new(0); ObjectSpace.each_object {|x| h[x.class] += 1}
=> 6287
irb(main):002:0> h
=> {RubyToken::TkRBRACE=>1, IO=>3, Regexp=>253, IRB::WorkSpace=>1,
SLex::Node=>78, RubyToken::TkRBRACK=>1, RubyToken::TkINTEGER=>2,
Float=>5, NoMemoryError=>1, SLex=>1, RubyToken::TkRPAREN=>1,
RubyToken::TkBITOR=>2, RubyToken::TkIDENTIFIER=>7, RubyToken::TkNL=>1,
RubyToken::TkCONSTANT=>2, Proc=>49, IRB::Context=>1, IRB::Locale=>1,
RubyToken::TkSPACE=>7, ThreadGroup=>1, RubyToken::TkLPAREN=>1, Thread=>1,
fatal=>1, File=>10, String=>4413, Data=>1, RubyToken::TkfLBRACE=>1,
RubyToken::TkDOT=>3, IRB::ReadlineInputMethod=>1, RubyToken::TkASSIGN=>1,
Hash=>136, IRB::Irb=>1, RubyToken::TkfLBRACK=>1, Object=>6, RubyLex=>1,
RubyToken::TkSEMICOLON=>1, MatchData=>111, Tempfile=>1, Module=>23,
RubyToken::TkOPASGN=>1, SystemStackError=>1, Binding=>2, Class=>345,
Array=>806}
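(For a long-running daemon, the same census can be wrapped in a helper and
logged periodically, e.g. from a background thread; a sketch, with an
arbitrary interval and output method:

  def object_census
    counts = Hash.new(0)
    ObjectSpace.each_object {|obj| counts[obj.class] += 1}
    # return the ten most numerous classes, largest first
    counts.sort_by {|klass, n| -n}.first(10)
  end

  Thread.new do
    loop do
      GC.start            # collect first, so only live objects are counted
      p object_census
      sleep 60
    end
  end

)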
I had high hopes for this method when I first tried it out.
Unfortunately, at the moment the process is using 162MB of resident memory,
and here’s the output:
Total Objects: 8499
Detail: {EOFError=>2, DBI::Row=>81, SQLPool=>1, IO=>3, DBI::StatementHandle=>49,
fatal=>1, SystemStackError=>1, Float=>18, Binding=>1, Mutex=>4, String=>5218,
DBI::DatabaseHandle=>9, DBI::DBD::Pg::PgCoerce=>9, DBI::DBD::Pg::Tuples=>49,
TCPSocket=>7, ODS=>1, DBI::DBD::Pg::Driver=>1, NoncritError=>3, RR=>6,
Thread=>23, DBI::DBD::Pg::Statement=>49, DBI::SQL::PreparedStatement=>49,
User=>4, ConditionVariable=>2, ThreadGroup=>1, DBI::DBD::Pg::Database=>9,
Event=>22, Proc=>16, File=>1, Hash=>253, Range=>11, PGconn=>9, PGresult=>49,
CritError=>2, Errno::ECONNABORTED=>1, Object=>4, Bignum=>5, IOError=>5,
Whois=>1, TCPServer=>2, DBI::DriverHandle=>1, NoMemoryError=>1, Module=>34,
Array=>1922, Sql=>5, MatchData=>150, Class=>232, Regexp=>172}
At its peak it reaches about 20k objects. I’m guessing the drop occurs when
the garbage collector steps in. However, the memory size of the process
doesn’t seem to drop at that point… I’m running FreeBSD 4.9 with ruby
1.8.1 (2003-10-31). The problem was also occurring with the release and
stable builds of 1.8.0, but it was not occurring in 1.6.x.
Any ideas? Bug?
Thanks!
-JD-
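(One more thing worth checking here, as a sketch, since the memory may not
be held by Ruby objects at all: with only ~5000 Strings live, summing their
lengths shows whether a handful of very large strings could account for the
resident size.

  total = 0
  biggest = 0
  ObjectSpace.each_object(String) do |s|
    total += s.length
    biggest = s.length if s.length > biggest
  end
  printf("Strings: %d bytes total, largest %d bytes\n", total, biggest)

If the total is small, the 162MB is probably being held outside the Ruby
object heap, for example by the C side of the postgres driver or by heap
fragmentation the OS never reclaims, and ObjectSpace will not see it.)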
-- On Saturday, November 8, 2003 6:51 AM +0900, Joel VanderWerf <vjoel@PATH.Berkeley.EDU> wrote: