Nifty Proxy Object for Testing

Hello Group,

I had an idea for a testing method I wanted to share. I develop a C
extension that efficiently implements a priority queue. An inefficient
pure Ruby reference implementation is easy to write, so I have both
the reference implementation and my C implementation and want to
ensure that they behave the same. I achieve this with a proxy object
in my tests that performs every action on both implementations and
asserts that the return values and raised exceptions are equal.

That allows for double testing: I verify that my normal unit tests
pass, and additionally, for every action, that the right thing is
returned. Furthermore, the teardown method now checks that the
actions have left both implementations in the same state.

Ideas, thoughts, critical voices?

Brian

See below for the implementation and usage.

---8<------8<---
class ReferenceImplementationTester
  attr_reader :__implementations__

  def initialize(testcase, reference, implementation)
    @testcase = testcase
    @reference = reference
    @implementation = implementation
    @__implementations__ = {:reference => @reference, :implementation => @implementation}
  end

  def method_missing(method, *args, &block)
    method_description = "#{method}(#{args.join(', ')})"
    method_description << " do <##{block.object_id} ...> end" if block_given?

    e1 = e2 = nil
    r1 = begin
      @reference.send(method, *args, &block)
    rescue Object => e1
    end
    r2 = begin
      @implementation.send(method, *args, &block)
    rescue Object => e2
    end
    # Identity comparison, so a custom #== cannot mask a self-return
    r1 = :___SELF_RETURNED___ if r1.equal?(@reference)
    r2 = :___SELF_RETURNED___ if r2.equal?(@implementation)
    # Compare exceptions by class and message; two separately raised
    # exception instances are rarely #== equal
    @testcase.assert_equal(e1 && [e1.class, e1.message], e2 && [e2.class, e2.message],
      "#{method_description} raised different exceptions on #{@reference.inspect} and on #{@implementation.inspect}")
    @testcase.assert_equal(r1, r2,
      "#{method_description} returned different results on #{@reference.inspect} and on #{@implementation.inspect}")
    # Return the reference's result so callers (e.g. `while @q.delete_min`)
    # see real values; map self-returns back to the proxy to stay chainable
    r1 == :___SELF_RETURNED___ ? self : r1
  end
end

class PriorityQueueReferenceTester < ReferenceImplementationTester
  def initialize(testcase)
    super(testcase, PMPriorityQueue.new, PriorityQueue.new)
  end
end

class PriorityQueueTest < Test::Unit::TestCase
  # Create a new queue with automatic tests against the reference implementation
  def setup
    @q = PriorityQueueReferenceTester.new(self)
  end

  # Test that both implementations return the same elements on delete min
  def teardown
    true while @q.delete_min
  end

  # Ensure that delete_min works
  def test_delete_min
    assert_equal(nil, @q.delete_min, "Empty queue should pop nil")
    @q["n1"] = 0
    assert_equal(["n1", 0], @q.delete_min)
    @q["n1"] = 0
    @q["n2"] = -1
    assert_equal(["n2", -1], @q.delete_min)
  end

  # Try on random values
  def test_random_actions
    100.times do
      @q[rand(10)] = rand
    end
    15.times do
      @q.min
      @q.empty?
      @q.min_value
      @q.min_key
      @q.delete_min
    end
  end
  ...
---8<------8<---
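For anyone who wants to try the idea without the C extension or Test::Unit, here is a minimal self-contained sketch of the same proxy pattern. All the names (RefStack, BuggyStack, ComparingProxy) are invented stand-ins for illustration, not part of the actual priority queue code; the deliberate bug in BuggyStack shows the proxy flagging the first diverging call.

```ruby
# Invented stand-ins: a trivially correct Array-backed stack and a
# second implementation with a deliberate bug.
class RefStack
  def initialize; @a = []; end
  def push(x); @a.push(x); self; end
  def pop; @a.pop; end
end

class BuggyStack
  def initialize; @a = []; end
  def push(x); @a.push(x); self; end
  def pop; @a.shift; end # deliberate bug: FIFO instead of LIFO
end

class ComparingProxy
  def initialize(reference, implementation)
    @reference, @implementation = reference, implementation
  end

  # Dispatch every call to both objects and fail loudly on divergence
  def method_missing(method, *args, &block)
    r1 = @reference.send(method, *args, &block)
    r2 = @implementation.send(method, *args, &block)
    r1 = :self if r1.equal?(@reference)
    r2 = :self if r2.equal?(@implementation)
    raise "#{method} diverged: #{r1.inspect} vs #{r2.inspect}" unless r1 == r2
    r1 == :self ? self : r1 # keep the proxy chainable on self-returns
  end
end

proxy = ComparingProxy.new(RefStack.new, BuggyStack.new)
proxy.push(1).push(2)
begin
  proxy.pop # reference yields 2, the buggy stack yields 1
rescue RuntimeError => e
  puts e.message # => pop diverged: 2 vs 1
end
```

The same trick as above, just with `raise` in place of `assert_equal` so it runs outside a test framework.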



--
http://www.brian-schroeder.de/lucius/ <-- My newborn son!

Brian Schröder wrote:

[...]

Maybe I didn't understand this correctly but if you have "normal" unit
tests already, why then do you need a result comparison? I mean, if
instances of both classes pass the unit test results should be the same or
at least compatible.

Alternatively you could use the Ruby implementation to yield expected
values that are compared with the C version, but that has the same problem
as a comparison only test: you won't catch algorithmic flaws that you made
(if you developed both it's likely that both exhibit the same erroneous
behavior which you won't catch with the comparison approach).

Kind regards

    robert

Hello robert,

my idea was that it is a lot easier to write a functional (bugfree)
ruby version. Of course it would be possible to write good unit test
and use them on both implementations, but I hope to get better
coverage using this approach as I'm checking the result of each and
every action. But beware, I'm relatively inexperienced in writing good
unit tests, so maybe the whole idea does not make much sense.
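To make the coverage point concrete, here is a tiny illustrative sketch (invented names, nothing from the queue code): a trivially correct reference against a deliberately broken implementation, compared on random inputs. A hand-picked test could easily miss the bug; comparing every random result exposes it immediately.

```ruby
# Illustrative only: ref_min and buggy_min are invented for this sketch.
def ref_min(a)
  a.min
end

def buggy_min(a)
  m = 0 # deliberate bug: should start from a.first, fails on all-positive input
  a.each { |x| m = x if x < m }
  m
end

srand(42) # reproducible random inputs
mismatches = 0
100.times do
  a = Array.new(10) { rand(1..100) }
  mismatches += 1 if ref_min(a) != buggy_min(a)
end
puts "#{mismatches} of 100 random runs diverged"
```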

best regards,

Brian


On 17/10/05, Robert Klemme <bob.news@gmx.net> wrote:

[...]

--
http://ruby.brian-schroeder.de/

Stringed instrument chords: http://chordlist.brian-schroeder.de/