A failure, in a test unit sense, means strictly (to me) that
an assertion has failed, and an error means strictly that an
uncaught exception was detected.
In general I agree, although I’m trying to figure out how important the
‘strictly’ part is. I guess you could think of this failure as just the
default test in TestCase; if there are no other tests, a default test is
run that always fails.
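A minimal sketch of that idea in plain Ruby (hypothetical names, not the actual Test::Unit source): when a TestCase defines no test_* methods, the runner falls back to a single default test that always fails.

```ruby
# Hypothetical sketch, not the actual Test::Unit source: a runner that
# substitutes an always-failing default test when no test_* methods exist.
class MiniCase
  # Collect the test methods; fall back to the default test if there are none.
  def self.tests
    names = public_instance_methods(true).map(&:to_s).grep(/\Atest_/)
    names.empty? ? ["default_test"] : names.sort
  end

  # The "default test": it always fails, so an empty case shows up red.
  def default_test
    raise "No tests were specified"
  end
end

class EmptyCase < MiniCase; end   # no tests: default_test would run

class FullCase < MiniCase         # has a test: default_test is skipped
  def test_truth; end
end
```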
But still, no uncaught exception was detected.
I see no need to mark the lack of tests with a failure: one, it’s misleading.
Why is it misleading? The failure message seems fairly clear (although
clarity is one thing there’s always room for more of): […]
For it to be really clear, the failure needs to be identified for what
it is; hence the message “Failure!!! No tests were specified.”
and two, it’s inconvenient. I haven’t done
it myself, but I would do this: decide my tests for a certain
class are no longer valid, delete them, but leave the shell
(the TestCase) there so I can create some new ones in the future.
I both agree and disagree with the inconvenience of it. Yes, if you keep
around a lot of unused TestCases, it would be somewhat inconvenient, but
I’m not sure that’s a practice that ought to be supported. As an XP’er,
I’d say, “YAGNI!” and delete the TestCase as soon as it’s empty, knowing
that it’s cinchy to add back. If I know that I’m going to add another
test to that TestCase in a minute, then the failure is a boon, because
it keeps me from getting distracted and forgetting to add the test.
OTOH, it’s “cinchy” for the user to put a tautological fail case in
there. I actually have an editor shortcut which expands to […]
I thus track outstanding items on a method-by-method basis.
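For illustration (the actual shortcut expansion isn’t quoted in the thread), a tautological fail boils down to flunk, an assertion that always fails; here it’s modeled in plain Ruby rather than pulling in the framework:

```ruby
# Illustrative sketch only; the real shortcut expansion isn't shown above.
# flunk is modeled here as an assertion that always fails with a message.
class AssertionFailedError < StandardError; end

def flunk(message = "Flunked")
  raise AssertionFailedError, message
end

# A placeholder test method: it stays red until the real test replaces it,
# which is how outstanding items get tracked method-by-method.
def test_widget_renders
  flunk("TODO: test that the widget renders")
end
```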
One reason for not deleting a TestCase is that if you do think you
are going to use it soon-ish, then it’s a hassle to delete files from
a CVS repository and reuse them later.
Another comment: I like unit testing, but I’m not a confirmed XP’er.
In my mind, there’s nothing whatsoever wrong with an empty unit test.
Just like there’s nothing wrong with an empty class/method. It’s a
placeholder. I’d like to see Test::Unit provide for users with a
broad range of methodologies.
I guess what I’m saying is that the failure wouldn’t hurt my way of
working, and might even help it sometimes. It’s not my goal, however, to
impose my development methodology on others, so it could very well be
that this ought to go.
For those reasons, I believe it ought to go.
I suggest emitting a warning in the event of an empty TestCase.
Warnings in tests are too mushy for me; I want red or green, true or
false. I’m much more inclined to drop the failure altogether than to add
the concept of warnings to the framework.
I’m leaning towards leaving the failure, but making it more consistent
as noted above. My mind isn’t really made up yet, though, so I’d like to
hear some more on this first (if there are any other opinions). I’m also
going to try to dig up the original request for this feature to present
"the other side", as it were.
If opinion is split, perhaps it could be a command line option. I
would generally avoid option proliferation, but I think
"–fail-empty-tests" has a nice ring to it I’d probably use it
once in a while to look for gaps, but to boil it down to one sentence,
I would not use it all the time, because:
When I see failures, I want real failures, and I do not want to have
to “spoonfeed the compiler” just to avoid them.
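A sketch of how such a switch might be wired up with Ruby’s standard OptionParser (the option itself is hypothetical; it doesn’t exist in Test::Unit):

```ruby
require 'optparse'

# Hypothetical: Test::Unit has no --fail-empty-tests option; this only
# shows how such a switch could gate the empty-TestCase failure.
def parse_test_options(argv)
  options = { fail_empty_tests: false }
  OptionParser.new do |opts|
    opts.on("--fail-empty-tests", "Report an empty TestCase as a failure") do
      options[:fail_empty_tests] = true
    end
  end.parse!(argv)
  options
end
```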
Thanks for listenin’
Thanks for asking
On Thursday, January 9, 2003, 6:56:44 AM, nathaniel wrote:
Gavin Sinclair [mailto:firstname.lastname@example.org] wrote: