Yes, in case this was a feature, I suspected the thinking was something like that.
As David and David pointed out, it is, indeed, a feature. Sorry it’s bugging you. 
But how about these cases?
- the Rubicon tests for Ruby. The test methods test a number of
things in the same method. As soon as any assertion fails, the rest
of that method is skipped, so the whole test suite quickly becomes
less and less useful (maybe one could say that there should not be
any errors, but that has always been the case when I have tried to
run Rubicon …)
I don’t understand why the tests become useless to you. If we have a test
for strip:
def test_strip
  assert_equal("name", " name ".strip)
  assert_equal("name", "\tname\n".strip)
end
…and the first assertion fails, we immediately know that there’s a bug in
strip. Why is the test useless if the second assertion isn’t executed?
- I have tried to write some table-driven tests. How should I do
this? Currently I tried something in the following style:
…
@@data = [
  [2, 1, 1],
  [5, 2, 2], # should fail
  [7, 2, 5],
  [8, 3, 4], # should fail
]

def test_b
  for facit, aa, bb in @@data
    assert_equal facit, aa + bb
  end
end
…
Here I have several tests that are not “increasing”.
Perhaps a better word is “related”. All of those tests are related, aren’t
they? If the first one fails, there’s a high probability that the second
one, and the third one, etc., will fail, too. So the testing framework only
bothers you with the first one.
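If the rows really aren't related and you want each one to pass or fail on
its own, one option is to give each row its own test method. Here's a sketch
of the idea (the class name and the generated method names are mine, just
for illustration):

require 'test/unit'

class TableDrivenTest < Test::Unit::TestCase
  # Illustrative name; this mirrors Johan's @@data table.
  DATA = [
    [2, 1, 1],
    [5, 2, 2], # should fail
    [7, 2, 5],
    [8, 3, 4], # should fail
  ]

  # Generate one test method per row, so a failing row
  # doesn't hide the rows that come after it.
  DATA.each_with_index do |(facit, aa, bb), i|
    define_method("test_row_#{i}") do
      assert_equal(facit, aa + bb, "row #{i}: #{aa} + #{bb}")
    end
  end
end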
Perhaps the key is to realize that it is the method that is the test, not
the assertions. Thus it's just like a short-circuited comparison operator:
we don't bother evaluating more assertions once we know the test has
failed.
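To make the short-circuit concrete, here's a minimal sketch (not
Test::Unit's actual implementation) of why a failed assertion stops the
method: the assertion raises, so control never reaches the lines after it.

# Toy stand-in for Test::Unit's assert_equal, for illustration only.
def assert_equal(expected, actual)
  unless expected == actual
    raise "expected #{expected.inspect}, got #{actual.inspect}"
  end
end

def test_strip
  assert_equal("name", " name ".strip)   # raises here if strip is broken...
  assert_equal("name", "\tname\n".strip) # ...so this line never runs
end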
Also, I think one of the primary things unit testing does for me is force me
to throw my assumptions out the window. If we let failures cascade it would
be tempting to go through and try to fix them all before running the test
again, but that would be a large assumption. Better to not assume anything
about the rest of the test and just run it again once we’ve fixed the
current problem.
There is one case where I do believe it is better to keep evaluating
assertions even if one fails: acceptance tests. Because of the long-running
nature of acceptance tests, and the high overhead they often incur, we need
to squeeze as much information out of a run as possible. I don’t think it’s
an ideal situation - if my acceptance tests ran as quickly as my unit tests
I’d want to stop a test as soon as an assertion failed, just as when unit
testing. But reality bites, just as it does in C compilers - I’d rather have
the compiler just tell me about the first error, but compiling can be an
expensive process, so I end up wading through a bunch of errors I don’t
really care about to find the one I do. C'est la vie.
So, both for acceptance testing and for those who feel that they need it, I
plan to add the ability to keep running when an assertion fails. I’ve
actually planned on it for a while, but haven’t gotten around to it yet. For
the time being, you can use some variation of what Guy posted in
[ruby-talk:82066].
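The general shape of such a workaround (a sketch of the idea, not
necessarily Guy's exact code) is to rescue each assertion failure yourself
so the loop keeps going, then fail once at the end with everything you
collected:

def test_b
  failures = []
  @@data.each_with_index do |(facit, aa, bb), i|
    begin
      assert_equal facit, aa + bb
    rescue Test::Unit::AssertionFailedError => e
      # Record the failure and move on to the next row.
      failures << "row #{i}: #{e.message}"
    end
  end
  # Fail once, listing every row that went wrong.
  assert(failures.empty?, failures.join("\n"))
end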
HTH,
Nathaniel
<:((><
Johan Holmberg [mailto:holmberg@iar.se] wrote: