[ANN] Packet - 0.1.7, Ruby Library for Event-Driven Network Programming

Hi,

I am proud to release version 0.1.7 of Packet.

Packet is a pure Ruby library for writing network applications. It follows
the evented model of network programming and implements almost all of the
features provided by EventMachine.

It also provides really easy-to-use UNIX workers for concurrent programming.

Changes since 0.1.5:

* Uses fork and exec, rather than just fork.
* Fixed bugs with large packet transfer.
* Improved performance.
* Tons of other fixes.

It's best to start with some examples:

== Examples
=== A Simple Echo Server:
require "rubygems"
require "packet"

class Foo
  def receive_data p_data
    send_data(p_data)
  end

  def post_init
    puts "Client connected"
  end

  def connection_completed
    puts "Whoa man"
  end

  def unbind
    puts "Client Disconnected"
  end
end

Packet::Reactor.run do |t_reactor|
  t_reactor.start_server("localhost",11006,Foo)
end

Those new to network programming with events and callbacks should note that
each time a new client connects, an instance of class Foo is instantiated.
When the client writes some data to the socket, the receive_data method is invoked.
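
Since every connection gets its own instance, per-connection state is just
instance variables on the handler. As a small, hypothetical variation on Foo
(using only the callbacks already shown above), a handler that counts the
bytes each client sends might look like this:

class CountingEcho
  def post_init
    @bytes = 0                        # state is per connection
    puts "Client connected"
  end

  def receive_data p_data
    @bytes += p_data.size
    send_data("#{@bytes} bytes so far\n")
  end

  def connection_completed
  end

  def unbind
    puts "Client sent #{@bytes} bytes in total"
  end
end

You would pass CountingEcho to start_server exactly as Foo is passed above.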

Although Packet implements an API similar to EventMachine's, it differs
slightly: in a Packet app there can be more than one reactor loop running,
hence we don't use Packet.start_server(...).
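
In other words, where EventMachine hangs start_server off the module, Packet
hangs it off the reactor handle yielded to the block. A rough contrast (the
EventMachine lines are shown only for comparison):

# EventMachine style: one global reactor, module-level calls
#   EM.run { EM.start_server("localhost", 11006, Foo) }

# Packet style: each reactor handle owns its servers
Packet::Reactor.run do |t_reactor|
  t_reactor.start_server("localhost", 11006, Foo)
end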

=== A Simple Http Client
class WikiHandler
  def receive_data p_data
    p p_data
  end

  def post_init
  end

  def unbind
  end

  def connection_completed
    send_data("GET / \r\n")
  end
end

Packet::Reactor.run do |t_reactor|
  t_reactor.connect("en.wikipedia.org",80,WikiHandler)
end

=== Using Callbacks and Deferrables
Documentation to come.

=== Using Workers
  Packet enables you to write simple workers, which run in separate
  processes and give you a nice evented handle for executing various
  tasks concurrently.

  When you write a scalable networking application using the evented
  model, the processing of certain events can take a long time, and while
  it does your event loop is stuck. With green threads you don't really
  have a way of parallelising your request processing. Packet lets you
  write simple workers for executing such long-running tasks, and you can
  pass them data and callbacks as arguments; the complete case below
  shows this end to end.

  When you are going to use workers in
  your application, you need to define
  constant WORKER_ROOT,
  which is the directory location, where
  your workers are located. All the workers defined in that directory
  will be automatically, picked and forked in a
  new process when your packet app starts. So, a typical
  packet_app, that wants to use workers, will look like this:

  packet_app_root
    |-- lib
    |-- worker
    |-- config
    `-- log

  You would then define WORKER_ROOT to point at that worker directory.
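
  For instance (PACKET_APP_ROOT here is just a stand-in for whatever
  constant or path your app already uses for its root; the eval server
  below uses EVAL_APP_ROOT):

    # PACKET_APP_ROOT is hypothetical -- substitute your app's root path
    WORKER_ROOT = PACKET_APP_ROOT + "/worker"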

  All workers must inherit from Packet::Worker, and hence a general
  worker skeleton will look like:

    class FooWorker < Packet::Worker
      set_worker_name :foo_worker # this is necessary
      def receive_data p_data
      end

      def connection_completed
      end

      def unbind
      end

      def post_init
      end
    end

  All the forked workers are connected to the master via UNIX sockets,
  and hence messages passed from the master to a worker arrive in its
  receive_data method. Likewise, when you pass messages to workers or a
  worker passes a message back to the master (in a nutshell, all the
  internal communication between workers and master), it takes place
  directly with Ruby objects. The passed Ruby objects are marshalled and
  dumped across the UNIX sockets in a non-blocking manner, and the
  BinParser class parses the dumped binary objects and makes sure the
  packets received at the other end are complete. Usually you won't need
  to worry about this little detail.

  Packet provides various ways of interacting with workers. Usually,
  when a worker is instantiated, a proxy for that worker is also
  instantiated in the master process. Packet automatically provides a
  worker proxy for you (see meta_pimp.rb), but if you need to
  multiplex/demultiplex requests based on certain criteria, you may as
  well define your own worker proxies. The code would look something
  like this:

    class FooWorker < Packet::Worker
      set_worker_proxy :foo_handler
    end

  When you define :foo_handler as the proxy for this worker, Packet will
  search for a FooHandler class and instantiate it when the worker gets
  started. All worker proxies must inherit from Packet::Pimp. Have a
  look at Packet::MetaPimp, which acts as a meta pimp for all the
  workers that don't have an explicit worker proxy defined.

=== A Complete Case:

    Just for kicks, let's write a sample server which evals whatever
    clients send to it. But since this 'eval' of client data can
    potentially be time/CPU consuming (not to mention dangerous), we are
    going to ask our eval_worker to perform the eval and return the
    result to the master process, which in turn returns the result to
    the happy client.

    # APP_ROOT/bin/eval_server.rb
    EVAL_APP_ROOT = File.expand_path(File.join(File.dirname(__FILE__), ".."))
    ["bin","worker","lib"].each { |x| $LOAD_PATH.unshift(EVAL_APP_ROOT + "/#{x}") }
    WORKER_ROOT = EVAL_APP_ROOT + "/worker"

    require "packet"
    class EvalServer
      def receive_data p_data
        ask_worker(:eval_worker,:data => p_data, :type => :request)
      end

      # will be called when any worker sends data back to the master process.
      # It should be noted that you may have several instances of eval_server
      # in your master, one for each connected client, but worker_receive will
      # always be invoked on the instance which originally made the request.
      # If you need finer control over this behaviour, you can implement a
      # worker proxy along the lines of the meta_pimp class. This API will
      # perhaps change in future, as I expect better ideas to come.
      def worker_receive p_data
        send_data "#{p_data[:data]}\n"
      end

      def show_result p_data
        send_data("#{p_data[:response]}\n")
      end

      def connection_completed
      end

      def post_init
      end

      def wow
        puts "Wow"
      end
    end

    Packet::Reactor.run do |t_reactor|
      t_reactor.start_server("localhost", 11006,EvalServer) do |instance|
        instance.wow
      end
    end

    # APP_ROOT/worker/eval_worker.rb
    class EvalWorker < Packet::Worker
      set_worker_name :eval_worker

      def worker_init
        p "Starting no proxy worker"
      end

      def receive_data data_obj
        eval_data = eval(data_obj[:data])
        data_obj[:data] = eval_data
        data_obj[:type] = :response
        send_data(data_obj)
      end
    end
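
    With both files in place you can exercise the server from any TCP
    client. For example, a throwaway script (assuming the server is
    already running on port 11006) could look like this:

    # throwaway client, not part of the app itself
    require "socket"

    sock = TCPSocket.new("localhost", 11006)
    sock.write("2 + 2")
    puts sock.gets   #=> prints the eval'ed result, "4"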

=== Disable auto loading of certain workers:
  Sometimes you need to start a worker at runtime and don't want this
  pre-forking mechanism. Packet allows this: you just need to call
  "set_no_auto_load true" in your worker class, and the worker will not
  be forked automatically (although the name is perhaps a bit
  misleading).

  Now, at runtime, you can call start_worker(:foo_worker, options) to
  start the worker as usual, as sketched below. It should be noted that
  forking a worker which is already forked can be disastrous, since
  worker names are used as unique keys that identify a worker.
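
  A rough sketch of the pattern (the worker name here is made up, and
  the exact options start_worker accepts aren't covered here):

    # worker/report_worker.rb
    class ReportWorker < Packet::Worker
      set_worker_name :report_worker
      set_no_auto_load true        # don't fork this worker at app startup

      def receive_data p_data
        # long running work goes here
      end
    end

    # later, at runtime, from the master process:
    #   start_worker(:report_worker, options)   # fork it on demand, once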

== Code repo:
   http://github.com/gnufied/packet/tree/master

== Credits
  Francis, for the awesome EventMachine lib, which has constantly acted
  as an inspiration.
  Ezra, for being an early user and porting Mongrel to run on top of
  Packet.


--
Let them talk of their oriental summer climes of everlasting
conservatories; give me the privilege of making my own summer with my
own coals.

http://gnufied.org