Test::Unit Newbie Question regarding loops

With the following example:


#---------------------------------------------------------
#!/usr/bin/ruby

require "test/unit"

class TestImplArchFile < Test::Unit::TestCase

  def test_loop

    [0,1,2].each do |i|
      assert_equal(i, 6, "Duh, #{i} is not equal to 6.")
    end
  end

end
#---------------------------------------------------------
I get the following output:
#---------------------------------------------------------

Loaded suite ./t
Started
F
Finished in 0.00527 seconds.

  1) Failure:
test_loop(TestImplArchFile)
    [./t.rb:10:in `test_loop'
     ./t.rb:9:in `each'
     ./t.rb:9:in `test_loop']:
Duh, 0 is not equal to 6.
<0> expected but was
<6>.
#---------------------------------------------------------

Is there any way I can see three failures, i.e. each failed assertion
would be recorded as a failure but not exit the def? I'd like to see
all the errors in the loop, something like:
#---------------------------------------------------------

  1) Failure:
test_loop(TestImplArchFile)
    [./t.rb:10:in `test_loop'
     ./t.rb:9:in `each'
     ./t.rb:9:in `test_loop']:
Duh, 0 is not equal to 6.
<0> expected but was
<6>.

  2) Failure:
test_loop(TestImplArchFile)
    [./t.rb:10:in `test_loop'
     ./t.rb:9:in `each'
     ./t.rb:9:in `test_loop']:
Duh, 1 is not equal to 6.
<1> expected but was
<6>.

  3) Failure:
test_loop(TestImplArchFile)
    [./t.rb:10:in `test_loop'
     ./t.rb:9:in `each'
     ./t.rb:9:in `test_loop']:
Duh, 2 is not equal to 6.
<2> expected but was
<6>.
--
Posted via http://www.ruby-forum.com/.

Yotta Meter wrote:

Is there any way I can see three errors, ie the exception
would create a failure, but not exit the def?

No, I don't think so. A failed assertion raises AssertionFailedError
which terminates your method. There are no restarts in Ruby.

You can accumulate the errors yourself:

  errs = []
  [0,1,2].each do |i|
    errs << "Duh, #{i} is not equal to 6" unless i == 6
  end
  assert errs.empty?, errs.join("; ")

You can use a bit of metaprogramming to create three separate test
methods in a loop.

  require 'test/unit'
  class TestBase < Test::Unit::TestCase
    (0..2).each do |i|
      define_method("test_loop#{i}") do
        assert_equal 6, i, "Duh, #{i} is not equal to 6."
      end
    end
  end

Or for fun, even three separate test classes:

  require 'test/unit'
  class TestBase < Test::Unit::TestCase
    CHECK = 0
    def test_it
      assert_equal 6, self.class::CHECK, "Duh, #{self.class::CHECK} != 6"
    end
  end

  klasses = []
  (1..2).each do |n|
    klass = Class.new(TestBase)
    klass.const_set(:CHECK, n)
    klasses << klass # for GC
  end

That's not very pretty though.


This is really the great idea I was looking for, thanks. Obviously I'm
not comparing integers; I'm actually iterating over verilog modules in
an ASIC and running a test for each one. Most verilog modules will
pass, but there will be a few that don't, so I'd like the test to go
through all verilog modules.

Another question, is my philosophy off? I'm a little confused by the
amount of test options out there, rspec, cucumber, etc. Is there another
testing strategy I should be working with in this instance where I want
to iterate through an array of complex objects?

Thanks for not telling me to rtfm or google it.


  require 'test/unit'
  class TestBase < Test::Unit::TestCase
    (0..2).each do |i|
      define_method("test_loop#{i}") do
        assert_equal 6, i, "Duh, #{i} is not equal to 6."
      end
    end
  end


Yotta Meter wrote:

Another question, is my philosophy off? I'm a little confused by the
amount of test options out there, rspec, cucumber, etc.

Well, they're apples and oranges to some extent. rspec is similar in
concept to Test::Unit but with a very different syntax (which I
personally dislike, so I stick to Test::Unit). But as far as I know,
assertion failures in rspec also raise exceptions and so can't continue.
There are a whole bunch of alternative test frameworks out there, but I
don't know enough about any of them to say whether they would fit your
needs better.

Cucumber works at a different layer. It sits on top of either rspec or
test/unit and lets you write tests in plain language, using regexps to
match that plain language and turn it into actual test code. But again,
if one step fails within a scenario, the remaining steps are skipped.

Is there another
testing strategy I should be working with in this instance where I want
to iterate through an array of complex objects?

Apart from the ideas I gave before, you could try rescuing
Test::Unit::AssertionFailedError explicitly inside your loop, to allow
it to continue (but making a note of the failure).

Indeed, the core code for Test::Unit which handles this looks very
simple:

      # Runs the individual test method represented by this
      # instance of the fixture, collecting statistics, failures
      # and errors in result.
      def run(result)
        yield(STARTED, name)
        @_result = result
        begin
          setup
          __send__(@method_name)
        rescue AssertionFailedError => e
          add_failure(e.message, e.backtrace)
        rescue Exception
          raise if PASSTHROUGH_EXCEPTIONS.include? $!.class
          add_error($!)
        ensure
          begin
            teardown
          rescue AssertionFailedError => e
            add_failure(e.message, e.backtrace)
          rescue Exception
            raise if PASSTHROUGH_EXCEPTIONS.include? $!.class
            add_error($!)
          end
        end
        result.add_run
        yield(FINISHED, name)
      end

So it looks like you could just call add_failure yourself and continue.
This is a private method/undocumented API, but I'd happily do that if it
gets the job done.

Here's a proof-of-concept:

require "test/unit"

class TestImplArchFile < Test::Unit::TestCase

  def no_stop
    yield
  rescue Test::Unit::AssertionFailedError => e
    add_failure(e.message, e.backtrace)
  rescue Exception
    raise if PASSTHROUGH_EXCEPTIONS.include? $!.class
    add_error($!)
  end

  def test_loop
    [0,1,2].each do |i|
      no_stop do
        assert_equal(i, 6, "Duh, #{i} is not equal to 6.")
      end
    end
  end

end


If you want to metaprogram tests, you should really look into using
Dust. It's only a thin wrapper around Test::Unit, so it doesn't bring a
steep learning curve with it. The main innovation is that your tests
are not distinct methods themselves - you call the test method with a
string argument (your test name) and pass in a block, which is your
test code. Behind the scenes it constructs a complete test_foo method
for you.

The main upside to this is that it is super simple to iterate over
arrays of arguments and build up large suites of test methods while
still adhering to the principle of 'one assert per test' - which is
where you are having issues.
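To illustrate the idea (this is a hypothetical sketch, not Dust's
actual source - `MiniSuite` and the checks are made up):

```ruby
# Hypothetical sketch of a Dust-style helper: a class-level `test`
# method that takes a name plus a block and defines a test_* method
# behind the scenes.
class MiniSuite
  def self.test(name, &block)
    # "6 is even" becomes test_6_is_even, roughly as Dust-like tools do
    define_method("test_#{name.gsub(/\W+/, '_')}", &block)
  end
end

class NumberChecks < MiniSuite
  # Iterating an array of arguments to build up a suite of test methods
  [2, 4, 6].each do |n|
    test "#{n} is even" do
      n.even?
    end
  end
end

puts NumberChecks.instance_methods(false).sort.inspect
```

The loop body reads like data-driven test code, but each iteration
still produces its own separately-reported method.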

Links:
Dust:

I blogged about using Dust before; the examples aren't completely
relevant, but they do show you how you can build up tests from arrays:

Understand that RSpec, Cucumber, Coulda, Shoulda, Bacon etc. are
solving different problems in testing. There is a certain learning
curve, and conceptual hurdles to overcome.

Your problem is that you can't coerce Test::Unit to do exactly what you
want it to, but Test::Unit is perfectly capable of doing it - Dust
helps an awful lot though. I have done metaprogrammed tests in both
Test::Unit and Dust, and there is no comparison.

regards,
Richard.


On Tue, Feb 23, 2010 at 5:04 PM, Yotta Meter <spam@joeandjody.com> wrote:

Another question, is my philosophy off? I'm a little confused by the
amount of test options out there, rspec, cucumber, etc. Is there another
testing strategy I should be working with in this instance where I want
to iterate through an array of complex objects?

Thanks for not telling me to rtfm or google it.


I'm seeing some performance issues I can't seem to get around. Your
solution for creating a method was great though. I'd try the last one,
but I think I'm seeing a fundamental problem with Test::Unit.

Using the following code as the base reference:

#!/usr/bin/ruby

require "test/unit"

class TestImplArchFile < Test::Unit::TestCase
  (0..10).each do |i|
    (0..10).each do |j|
      (0..100).each do |k|
        define_method("test_i#{i}__j#{j}_k#{k}") do
          flunk
        end
      end
    end
  end
end

When 'flunk' is set to flunk:
Finished in 5.586734 seconds.
12221 tests, 12221 assertions, 12221 failures, 0 errors

When 'flunk' is replaced with true:
Finished in 0.482389 seconds.
12221 tests, 12221 assertions, 12221 failures, 0 errors

If I manually generate all 12k tests as individual tests in a new file,
with each one flunking:
Finished in 5.631014 seconds.
12221 tests, 12221 assertions, 12221 failures, 0 errors

The above would indicate that the problem is not with define_method, but
rather the ability of Test::Unit to handle a large number of test cases.

As a comparison, if I just print out results in a log file, I get the
awesome result:

Just using a class - no Test::Unit, not creating methods, just printing
the method name:
Finished in 0.102 seconds.

I couldn't get define_method going in a plain class to compare; it's a
private method of Module, and I really don't understand the rdoc
example using send.

Is there some sort of optimization in Test::Unit that I need to set?
Seems like something is really wrong, where is all this time going?


One edit to the above:

When 'flunk' is replaced with true:
Finished in 0.482389 seconds.
12221 tests, 12221 assertions, 12221 failures, 0 errors

should read

When 'flunk' is replaced with true:
Finished in 0.482389 seconds.
12221 tests, 0 assertions, 0 failures, 0 errors

I got lazy and copied only the time from the previous value.


Yotta Meter wrote:

The above would indicate that the problem is not with define_method, but
rather the ability of Test::Unit to handle a large number of test cases.

A large number of *failing* test cases. What if you do

  def flunk
  end

or

  def flunk
    assert true
  end

?

I couldn't get define_method going in a plain class to compare; it's a
private method of Module, and I really don't understand the rdoc
example using send.

  class Foo
    define_method ...
  end

should work fine. If you have an existing Class object you can also do

  Foo.class_eval { define_method ... }
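For instance, entirely outside Test::Unit (a made-up `Checker` class,
purely to show define_method working in a plain class - it's a private
method of Module, so inside the class body it's called with no explicit
receiver):

```ruby
# define_method is a private instance method of Module, so inside a
# class body it can be called directly, with no explicit receiver.
class Checker
  (0..2).each do |i|
    define_method("check_#{i}") do
      i * 2  # each generated method closes over its own copy of i
    end
  end
end

c = Checker.new
puts c.check_0  # 0
puts c.check_2  # 4
```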

Is there some sort of optimization in Test::Unit that I need to set?
Seems like something is really wrong, where is all this time going?

Rescuing exceptions perhaps. I thought you said that you expected most
of your tests to pass, and only a few to fail?


Nothing's wrong; exceptions are just slow. This is actually true of
most programming languages. 'flunk' fails by raising an exception, and
Test::Unit proceeds to rescue that exception. For some more information
on just how slow exceptions are, check out:
http://www.simonecarletti.com/blog/2010/01/how-slow-are-ruby-exceptions/

This is why an exception should almost never be the expected outcome
of running any piece of code. Likewise, failure should never be the
expected outcome of 12221 tests. If you have 12221 tests failing, you
have much bigger problems than a 5 second run-time on your test suite.
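A rough way to see the cost for yourself (numbers are machine- and
version-dependent; this just compares a loop that raises and rescues
with one that only compares):

```ruby
require "benchmark"

N = 50_000

# A plain failed comparison: no exception machinery involved
plain = Benchmark.realtime do
  N.times { |i| i == 6 }
end

# The same loop, but modelling each failure as a raised-and-rescued
# exception, the way flunk / assert_equal report failures
raising = Benchmark.realtime do
  N.times do
    begin
      raise "Duh, not equal to 6"
    rescue RuntimeError
      # swallowed, like Test::Unit rescuing AssertionFailedError
    end
  end
end

puts format("plain: %.4fs  raising: %.4fs", plain, raising)
```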

···

On 2/23/2010 12:20 PM, Yotta Meter wrote:

I'm seeing some performance issues I can't seem to get around. [...]

Is there some sort of optimization in Test::Unit that I need to set?
Seems like something is really wrong, where is all this time going?

Yes, the all passing test ran in the 0.48 sec posted above. For the most
part, I expect all to be passing, but I wanted to understand the range
of values.

The problem is I have more than 10k tests. I was just using that as a
relative benchmark.

Also, the 0.48 sec will go up when the comparison happens with complex
objects. This is sort of best case.

In response to Walton's statement: the 5 seconds isn't the focus, it's
really the half second. What it's saying is that for 10k basic-level
tests, all passing, it takes half a second. In our system we integrate
from a number of groups, and I was hoping to use Test::Unit as a
framework to verify all object relationships. It's not just the
exceptions; there is something in Test::Unit that is taking more time
than it should, as the half second doesn't come close to the tenth of a
second of the class implementation. Neither of those examples raises
exceptions; I threw in the exceptions just to see what the worst case
was. In reality, we will have tests in the millions, so I'm better off
using a plain class than Test::Unit, or creating my own Test::Unit.

Richard, thanks for the reference on Dust, I'll be sure to check it out.
My concern is that as a wrapper around Test::Unit, I'll end up with the
same performance issue as before.

That all being said, we are dealing with large object associations that
are unique to our system. When we simulate our ASIC with vendor
software it takes approximately 100G of memory on a 64-bit system. I'm
attempting to build front-end checks that would verify structure before
the large simulation occurs.

Thanks for all the help guys.


Yotta Meter wrote:

Yes, the all passing test ran in the 0.48 sec posted above. For the most
part, I expect all to be passing, but I wanted to understand the range
of values.

The problem is I have more than 10k tests. I was just using that as a
relative benchmark.

Also, the 0.48 sec will go up when the comparison happens with complex
objects. This is sort of best case.

Well, Test::Unit is doing a degree of book-keeping for you - counting
the passing tests as well as assertions, writing a dot to the screen
for each test passed - and I'm also not sure how efficient it is to
have 10k methods in one class.

In reality, we will have tests in the millions, so I'm better off
using a Class than Test::Unit, or creating my own Test::Unit.

You could be right. Test::Unit was probably not designed for the case
where you have millions of tests and each test is so fast that the
overhead of Test::Unit itself is large.

You might want to try this first:

require "test/unit"

class TestImplArchFile < Test::Unit::TestCase
  def flunk
    assert true
  end

  def no_stop
    yield
  rescue Test::Unit::AssertionFailedError => e
    add_failure(e.message, e.backtrace)
  rescue Exception
    raise if PASSTHROUGH_EXCEPTIONS.include? $!.class
    add_error($!)
  end

  def test_all
    (0..10).each do |i|
      (0..10).each do |j|
        (0..100).each do |k|
          no_stop do
            flunk
          end
        end
      end
    end
  end
end

It runs about 4 times faster on my PC, possibly due to the tests not
being dispatched as separate methods (you'll need to use the profiler if
you want to understand exactly why). If it's not fast enough, then
having your own book-keeping is probably the right way to go.
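If you do end up doing your own book-keeping, the core loop can stay
very small. A sketch (the check here is a made-up stand-in for a real
structural comparison):

```ruby
# Hand-rolled book-keeping: run every check, record failures as strings
# instead of raising, and print one summary at the end.
failures = []
checks   = (0..10).flat_map { |i| (0..10).map { |j| [i, j] } }

checks.each do |i, j|
  # Made-up check standing in for a real per-module comparison
  failures << "check #{i}/#{j}: product #{i * j} out of range" unless i * j < 90
end

puts "#{checks.size} checks, #{failures.size} failures"
failures.each { |f| puts f }
```

No exceptions are raised on failure, so the per-check overhead is just
an array push, and the output is trivially greppable.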


Handling large volumes of tests is not out of scope for Test::Unit -
there are sites out there that run far more than 10,000 tests. To make
their test runs perform well, they use chunking + virtualization
strategies, farming off subsets of the test execution to different
machines. The setup cost might make this prohibitive in your case.

Hudson is a continuous integration server that doesn't seem to get much
love in Ruby circles, even though it has basic support for master/slave
relationships (a poor man's build cloud), which might get you the bang
for buck that you need.


On Tue, Feb 23, 2010 at 8:20 PM, Yotta Meter <spam@joeandjody.com> wrote:

Richard, thanks for the reference on Dust, I'll be sure to check it out.
My concern is that as a wrapper around Test::Unit, I'll end up with the
same performance issue as before.

That all being said, we are dealing with large object association that
are unique to our system. When we simulate our ASIC with vendor software
it takes approximately 100G of memory on a 64-bit system. I'm attempting
to build front end checks that would verify structure before the large
simulation occurs.


Yes, I see the speed up as well.

I was using 1.8.5 and got the `TestImplArchFile::PASSTHROUGH_EXCEPTIONS'
undefined error, but switched over to 1.8.7 and it worked like a champ.

I passed nil in as the backtrace; when getting large numbers of errors
it's easier to limit the failure output to just the message. By adding
a qualifier to the assert_equal message I can then grep the output to
get just the messages I need to scan quickly.

The problem is we're dealing with errors that span multiple
departments, so it's not an 'always working' assumption; it's more like
'always broken'.

Thanks for the additional help.
