'require fcgi.rb' pulls in the C version if you have it; otherwise it loads
the Ruby version.
doesn't this always require 'fcgi.rb'? i mean, isn't only
require 'fcgi'
allowed to choose the *.so over the *.rb?
No - it is fcgi.rb which in turn tries to do require fcgi.so, and catches
the exception if that fails, in which case it builds the Ruby classes
itself.
So require 'fcgi' and require 'fcgi.rb' are identical.
- the trapping of TERM and HUP doesn’t work properly for me. What happens is
that if I send such a signal to the process, nothing happens (ps shows the
same pid) until the next HTTP request comes along, at which point it fails
and Apache returns ‘500 Internal Server Error’. The process is then
restarted and it’s fine thereafter.
i checked this out a little using strace - looks like accept (or calls before
accept) is catching everything, so there's not much to be done:
[howardat@dhcppc1 fcgi-bin]# strace -p 10511
accept(0, 0xbfffe1ac, [112]) = ? ERESTARTSYS (To be restarted)
--- SIGUSR2 (User defined signal 2) ---
sigreturn() = ? (mask now )
accept(0,
same goes for HUP, USR1, etc.
That’s what I’d expect - well, actually I’d expect EINTR. Poking around
FreeBSD header files, there’s no ERESTARTSYS but there’s an ERESTART which
is used internally by the kernel, so I guess certain types of syscall are
automatically restarted. Looks like I’ll need to play to find out whether
accept() works that way.
- with install_traps: I get the hanging behaviour as described above
- with install_traps commented out: the process dies immediately on
  receipt of TERM or HUP as expected (although if it were in the middle of
  processing a request, it would bomb out without tidying up rather than
  finish the request)
i think sending SIGHUP and having one request fail is about as good as it gets
;-(
i’m not sure what the alternative would be…
Don’t trap the SIGHUP. The child dies straight away, it gets respawned
straight away by Apache (which will keep a minimum of one child per fcgi
around unless you configure it otherwise), so the next request is handled
successfully.
FYI I am currently working on adding FCGI server support to druby (there is
already a HTTP client in samples/http0.rb). Will let you know if I get it to
work!
to what end? i mean, what would be the point of having a distributed fastcgi
process? not that there isn’t a point, i’m just wondering what you’re on
about?
I mean using the drb protocol over HTTP as an API: e.g. front-end server
talks to the world, and talks DRB-over-HTTP to the back end system, which
has a pool of database processes run under fastcgi. It requires Ruby at both
ends of course, but it should be a darned sight faster than SOAP or
YAML/OKAY, and is so easy to use because you just make object calls on the
front-end (which magically perform actions on the backend)
one thing which really needs to be addressed with fcgi is a way to run from a
tty so you can enter params and see the html (or error messages) come blasting
back out… not being able to do this is a real pain when debugging.
It can be made automatic. The C library provides a function FCGX_IsCGI()
which lets you detect whether you’re running under a fastcgi environment or
not. (Alternatively, you could write a shell which popen’s a fastcgi process
and talks fastcgi protocol to it)
Anyway, I did manage to get DRB/HTTP to work, but I’ve now been waylaid
looking at performance problems. Firstly, my server was only handling about
10 requests per second, even for plain HTML pages. I finally solved this by
setting the TCP_NODELAY socket option (see attached)
Interestingly, the Ruby MOD_FCGI/CGI module itself isn't particularly
speedy when compared with raw fcgi:
Test 1: (MOD_FCGI)
#!/usr/local/bin/ruby
require 'mod_fcgi'
MOD_FCGI.each('html3') do |cgi|
cgi.out { "Minimal\n" }
end
Test 2: (raw FCGI)
#!/usr/local/bin/ruby
require 'fcgi'
FCGI.each { |req|
req.out.print "Content-Type: text/plain\n\nMinimal\n"
req.finish
}
Request/response cycle times:
- plain HTML page 0.00486 secs
- test 1 0.0744 secs
- test 2 0.00666 secs (> 10 times faster)
So may be worth doing some profiling of the CGI module. (This doesn’t seem
to be a TCP_NODELAY problem: trussing the code shows it read()ing a FCGI
request over the socket, and then write()ing the response back 0.07 seconds
later, so it does appear to be Ruby processing)
This I’m happy with. What I’m not happy with, under Solaris, is a strange
bug where normal CGI requests can take up to 3 seconds to complete. I did
once manage to capture this with truss; it seemed to be doing
alarm(3)
sigsuspend … sleeps here
… woken by the alarm signal
However, since today, running a truss on the child which is handling the
requests makes the problem go away. In fact, a 'normal' CGI of the form
#!/bin/sh
echo "Content-Type: text/html"
echo ""
echo "Hello"
actually executes faster (0.035 secs) than the Ruby mod_fcgi (0.086 secs)
although fcgi is still faster (0.022 secs) if truss is in place.
With trussing turned off, the above shell script takes an average of 1.5
seconds per iteration. Argh!
Anyway, that’s either an Apache or a Solaris problem, and hence off-topic
for this list (although I'd be very happy to hear the solution if anyone
knows it)
FYI, with DRb over HTTP I am managing about 20 RPC exchanges per second,
with a reasonably substantial object being returned, and a DBI query thrown
in as part of the request processing. That I’m very happy with.
Regards,
Brian.
hithtml.rb (762 Bytes)
On Sun, Mar 09, 2003 at 06:40:35PM +0900, ahoward wrote:
On Sun, 9 Mar 2003, Brian Candler wrote: