you should definitely check out fastcgi and the ruby fastcgi binding. it’s
AWESOMELY fast:
~ > ab http://eli/env.cgi | grep 'per second'
Requests per second: 4.04 [#/sec] (mean)
~ > ab http://eli/env.fcgi | grep 'per second'
Requests per second: 557.10 [#/sec] (mean)
(that’s two orders of MAGNITUDE!)
~ > cat env.fcgi
#!/usr/local/bin/ruby
require 'cgi'
require 'fcgi'

FCGI.each_cgi do |cgi|
  content = ''
  env = []
  cgi.env_table.each do |k,v|
    env << [k,v]
  end
  env.sort!
  env.each do |k,v|
    content << %Q(#{k} => #{v}\n)
  end
  cgi.out{ content }
end
env.cgi is the exact same program - it runs either way, as a cgi or an fcgi,
based on extension. it’s a terrible cgi, i realize - but it demonstrates the
speed difference, which for real programs can only get better, not worse:
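fwiw the cgi-or-fcgi dispatch is pure server config - with mod_fastcgi the
apache side would look something like this (a sketch; handler setup varies
with your install):

```apache
# .cgi files run as plain cgi - a fresh interpreter per request
AddHandler cgi-script .cgi
# .fcgi files are handed to mod_fastcgi, which keeps the process alive
AddHandler fastcgi-script .fcgi
```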
here is a slightly more realistic benchmark and examples:
NORMAL CGI PROGRAM:
/usr/local/httpd/htdocs > ab http://127.0.0.1/a.cgi | grep 'Requests'
Requests per second: 10.83 [#/sec] (mean)
/usr/local/httpd/htdocs > cat a.cgi
#!/usr/bin/env ruby
require 'cgi'
require 'postgres'
require 'amrita/template'

# our template
html = <<-html
html
template = Amrita::TemplateText.new html

# this connection is made EVERY time!
pgconn = PGconn.new
tuples = pgconn.query "select * from foo"

# generate content from database
template.expand((content = ''), :tuples => tuples)

# blast out
cgi = CGI.new
cgi.out{ content }
FAST CGI PROGRAM (SAME PROGRAM PORTED TO FCGI):
/usr/local/httpd/htdocs > ab http://127.0.0.1/a.fcgi | grep 'Requests'
Requests per second: 245.58 [#/sec] (mean)
/usr/local/httpd/htdocs > cat a.fcgi
#!/usr/bin/env ruby
require 'cgi'
require 'fcgi'
require 'postgres'
require 'amrita/template'

# our template
html = <<-html
html
template = Amrita::TemplateText.new html

# this connection is made ONE time!
pgconn = PGconn.new

FCGI.each_cgi do |cgi|
  tuples = pgconn.query "select * from foo"
  # generate content from database
  template.expand((content = ''), :tuples => tuples)
  # blast out
  cgi.out{ content }
end
still MUCH faster. some of the other advantages include being able to monitor
your web processes in top, and knowing they canNOT take down the apache
process itself.
as to your webrick questions i cannot say - but it does look very cool.
-a
···
On Wed, 3 Sep 2003, Jeff Mitchell wrote:
I’ve been scoping out ruby for an upcoming server project.
For my purpose, performance will be a factor because the server will
be sending data in real-time much like a multiplayer game server such
as Quake (as opposed to waiting for user input). Quake is an extreme
example, though – it wouldn’t be nearly that demanding.
I am very impressed with WEBrick, and this would be my primary choice
were it not for performance concerns.
How does WEBrick scale? Looking through the code, it appears that new
requests are handled by a ruby Thread – as opposed to a native thread
or a fork. What happens with 40 threads, all transferring data and
doing stuff, all being handled by a single ruby process setjmp-ing and
longjmp-ing around? I’m guessing performance would be poor.
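[a minimal sketch of the thread-per-connection model in question - plain
TCPServer plus ruby Threads, with a made-up one-line upcase protocol standing
in for http; names here are invented for the example:]

```ruby
require 'socket'

# thread-per-connection: every request is served by a ruby Thread
# inside ONE process (green threads in ruby 1.8, so all of them
# multiplex on a single native thread).
server = TCPServer.new('127.0.0.1', 0)   # port 0: let the OS pick
port = server.addr[1]

Thread.new do
  loop do
    client = server.accept
    Thread.new(client) do |c|
      line = c.gets                      # read one request line
      c.write(line.to_s.upcase)          # reply, then hang up
      c.close
    end
  end
end
```

with 40 such threads, all the work still happens inside that one process -
which is exactly the scaling question being asked.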
Next option is mod_ruby. Now I don’t even understand how mod_ruby
works with Apache 2.0 since the ruby interpreter isn’t thread-safe.
So that’s all a mystery to me. Apache 1.3 + mod_ruby is probably
well-tested and perhaps the most appropriate choice.
Are the above assumptions correct, and are there other options? One
more option which comes to mind is to simply write a forking ruby
server from scratch. This would mean more development time, but how
much more I’m not sure.
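[a forking server from scratch needn’t be much code - a sketch of the shape,
assuming a unixish platform with fork; the accept loop sits in a thread only
so the snippet can drive itself, and the reversed-echo protocol is invented
for the example:]

```ruby
require 'socket'

# a from-scratch forking server: the parent only accepts; each
# connection is handled in its own child process, so one bad request
# can crash only its own process.
server = TCPServer.new('127.0.0.1', 0)  # port 0: let the OS pick
port = server.addr[1]

Thread.new do
  loop do
    client = server.accept
    pid = fork do
      line = client.gets
      client.write(line.to_s.strip.reverse + "\n")
      client.close
    end
    client.close          # parent's copy; the child keeps its own fd
    Process.detach(pid)   # reap the child so it doesn't zombie
  end
end
```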
====================================
Ara Howard
NOAA Forecast Systems Laboratory
Information and Technology Services
Data Systems Group
R/FST 325 Broadway
Boulder, CO 80305-3328
Email: ara.t.howard@noaa.gov
Phone: 303-497-7238
Fax: 303-497-7259
~ > ruby -e 'p(%.\x2d\x29..intern)'
====================================