[ANN] RFuzz HTTP Destroyer 0.2 -- Ragel HTTP Client

In my constant quest to turn evil activities into useful tools I have
started the RFuzz HTTP Destroyer project (or just RFuzz until I can find
a better name).

RFuzz will eventually become a small framework that lets people write
Ruby scripts that destroy web servers. No, not so you can be a script
kiddie. So you can harden your web applications and servers against
script kiddies.

It's based on little bits of Ruby floating around my Mongrel
(http://mongrel.rubyforge.org) source that I use to make Mongrel cry.
Mongrel is a tough little web server because of these scripts, so I'm
now turning them into something everyone can use.

== The HttpClient

The 0.2 release features a nearly complete and functioning HTTP
client that's only about 183 lines of Ruby code, and reuses parts of
the Mongrel HTTP parser. This means that it's a client that can also
validate the correctness of HTTP servers. It also means that it's tiny,
fast, and requires you to build a C extension. If you can install
Mongrel then you can install rfuzz.

Features of the HttpClient are:

1. Simple usage that lets you configure the client object once and
reduce repetition.
2. No blocks, nested exceptions, inconsistent functions, weird
parameters, or unrequested timeouts. It's bare metal and simple.
3. No threads, no timeouts, no exception handling. This is for those
who want to feel everything like an aluminum bat fighting a chain saw.
4. Functions to encode and decode much of HTTP outside of the library.
5. A Notifier plugin class so you can track the request process
easily.
6. All parameters are set with "data", meaning you could load it out of
a YAML file and replay a client request or serialize the request for
later.
7. Dynamically supports any HTTP method, even the oddball ones
that I can't anticipate because RESTafarians decided this was a
GoodThing(tm) and broke the protocol by inventing "verbs".
8. Decodes all of the HTTP protocol responses cleanly and reports
parsing errors immediately.
9. Tracks cookies between requests to emulate a client (has a reset).
10. Cookies suck, but work well enough to thrash a Rails app.
11. Body payload is supported, but no encoding done (you do this).
12. Response object is just a Hash for headers with a set of additional
attributes: http_status, http_body, http_reason, http_version. This is
wired hot inside the http11_client extension so it's fast as hell.
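To make item 12 concrete, here is a minimal pure-Ruby sketch of that response shape: a Hash of headers with the extra attributes bolted on. The class name and constructor are invented for illustration only; in RFuzz itself this is wired up inside the http11_client C extension, not written like this.

```ruby
# Illustrative stand-in for the described response object: it IS a Hash
# (header name -> value) and also carries the extra HTTP attributes.
class FakeHttpResponse < Hash
  attr_accessor :http_status, :http_reason, :http_version, :http_body

  def initialize(status, reason, version, body, headers = {})
    super()
    update(headers)           # header lookups work like a plain Hash
    @http_status = status
    @http_reason = reason
    @http_version = version
    @http_body = body
  end
end

resp = FakeHttpResponse.new("200", "OK", "HTTP/1.1", "hello",
                            "Content-Type" => "text/plain")
resp["Content-Type"]  # => "text/plain"
resp.http_status      # => "200"
```

Because the response really is a Hash, everything Hash gives you (iteration, `keys`, `has_key?`) works on the headers with no extra API to learn.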

== Information Available

  * http://www.zedshaw.com/projects/rfuzz/ -- Simple project page.
  * http://www.zedshaw.com/projects/rfuzz/coverage/ -- Source and rcov.
  * http://pastie.caboo.se/3667 -- The sample script.
  * http://pastie.caboo.se/3668 -- Sample script output.
  * http://www.zedshaw.com/projects/rfuzz/rfuzz-0.2.gem
  * http://www.zedshaw.com/projects/rfuzz/rfuzz-0.2.tgz

== !!!! WARNING !!!!

This is still kind of "works for me" code. In order for the tests to
run you'll have to fire up a Rails app on localhost:3000. It doesn't
matter what's in it, just any one. I'll be adding more to the test
suite so this won't be needed.

PS. Yes I'm working on updating my blog so STFU already.

···

--
Zed A. Shaw
http://www.zedshaw.com/
http://mongrel.rubyforge.org/
http://www.railsmachine.com/ -- Need Mongrel support?

Hi Zed,

This looks really great!

Do you plan to add streaming of HTTP request and response bodies?

···

--
Cheers,
  zimbatm

http://zimbatm.oree.ch

If by "streaming" you mean HTTP pipelining where one TCP connection is
used to process multiple HTTP requests, then you bet.

Part of the motivation for RFuzz is to show people just how evil some
dumbass parts of HTTP are. The evil I can do to Apache with horribly
pipelined requests is insane.
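For the curious, pipelining just means writing several requests down one TCP connection before reading any replies. A self-contained toy in plain Ruby sockets (the one-connection "server" thread below is a stand-in so the example runs anywhere; it is not RFuzz and not a real HTTP server):

```ruby
require "socket"

server = TCPServer.new("127.0.0.1", 0)   # port 0 = pick a free port
port = server.addr[1]

srv = Thread.new do
  conn = server.accept
  2.times do
    conn.gets("\r\n\r\n")                # read one request's headers
    conn.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
  end
  conn.close
end

sock = TCPSocket.new("127.0.0.1", port)
# Pipelining: both requests go out before either response is read.
sock.write("GET /a HTTP/1.1\r\nHost: x\r\n\r\n" \
           "GET /b HTTP/1.1\r\nHost: x\r\n\r\n")
replies = sock.read                      # read until the server closes
sock.close
srv.join
replies.scan("200 OK").size              # => 2
```

Servers that mishandle the second buffered request (or the ordering of the responses) are exactly what a fuzzer like this can flush out.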

···

On Fri, 2006-07-07 at 18:41 +0900, Jonas Pfenniger wrote:

> Hi Zed,
>
> This looks really great!
>
> Do you plan to add streaming of HTTP request and response bodies?

--
Zed A. Shaw

http://mongrel.rubyforge.org/
http://www.railsmachine.com/ -- Need Mongrel support?

> If by "streaming" you mean HTTP pipelining where one TCP connection is
> used to process multiple HTTP requests, then you bet.

I don't think it's the same, but my knowledge of the HTTP protocol is
quite limited. I was thinking of getting big files efficiently,
without loading the whole body into memory. The current implementation
doesn't seem to support that.
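For comparison, Ruby's stdlib Net::HTTP already supports this kind of incremental body read via read_body with a block, so a big file never has to sit in memory all at once. A self-contained sketch (the tiny server thread is just a stand-in so the example is runnable; any real HTTP server would do):

```ruby
require "socket"
require "net/http"

server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
body = "x" * 100_000

srv = Thread.new do
  conn = server.accept
  conn.gets("\r\n\r\n")   # consume the request headers
  conn.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n")
  conn.write(body)
  conn.close
end

received = 0
Net::HTTP.start("127.0.0.1", port) do |http|
  http.request_get("/big") do |resp|
    resp.read_body do |chunk|      # chunks arrive incrementally
      received += chunk.bytesize   # process without buffering it all
    end
  end
end
srv.join
received  # => 100000
```

Replacing the `received +=` line with a `File` write would stream a large download straight to disk.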

> Part of the motivation for RFuzz is to show people just how evil some
> dumbass parts of HTTP are. The evil I can do to Apache with horribly
> pipelined requests is insane.

Hehe, go Zed go! :)

···

--
Cheers,
  zimbatm

http://zimbatm.oree.ch