Testunit 0.1.6 problems

ruby 1.6.7, 1.6.8, 1.7.3
testunit 0.1.6
Linux and Solaris

I seem to get automatic failure when I try to run
testunit now. Here’s a very basic test case I used:

#tctest.rb
require "test/unit"
class TC_Foo < Test::Unit::TestCase
def setup
end

def teardown
end
end

When I try to run it, I get:

djberge-/home/djberge/programming/ruby-514> ruby tctest.rb
Loaded suite tctest
Started
.
Failure!!!
run:
No tests were run.

Finished in 0.001824 seconds.
0 tests, 0 assertions, 1 failures, 0 errors

I also tried set_up and tear_down, just in case.
Adding any sort of code into the setup and teardown
methods makes no difference.

In Linux, I tried stepping into the debugger with no
luck. In Solaris, trying to use the debugger caused a
segfault. I don’t remember the exact error now, but
it had something to do with DEBUGGER__.

Everything works fine with 0.1.4.

Any ideas?

Regards,

Dan

···


I seem to get automatic failure when I try to run
testunit now. Here’s a very basic test case I used:

#tctest.rb
require "test/unit"
class TC_Foo < Test::Unit::TestCase
def setup
end

def teardown
end
end

Any ideas?

If you tell Test::Unit to run something, and there are no tests to run
in what you told it to run, it will give you an error. If you add a test
to TC_Foo, it should start passing.
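Nathaniel's point — that adding any test method makes the suite pass — follows from how runners of this era discovered tests by naming convention. A minimal plain-Ruby sketch of that discovery step (class and method names here are illustrative; this is not the actual Test::Unit collection code):

```ruby
# Test discovery by reflection: collect the public instance methods
# whose names start with "test". setup/teardown don't match, so a
# TestCase with only those two methods yields nothing to run.
class TC_Empty_Example
  def setup; end
  def teardown; end
end

class TC_Foo_Example
  def setup; end
  def test_truth; end
end

def discovered_tests(klass)
  klass.public_instance_methods(false).map(&:to_s).grep(/\Atest/)
end

discovered_tests(TC_Empty_Example)  # => []  -- nothing to run
discovered_tests(TC_Foo_Example)    # => ["test_truth"]
```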

I’m curious to know what others think of this change/feature. Does it
make sense? Are there reasons to want a TestCase that has no test
methods? I added the error at someone’s request, but I’m willing to take
it out again if there are compelling reasons to do so.

BTW, this is why one shouldn’t wait 9+ months between releases… it
greatly reduces feedback (unless folks are using CVS), which is a very
bad thing :-(.

Nathaniel

<:((><

···

Daniel Berger [mailto:djberg96@yahoo.com] wrote:

RoleModel Software, Inc.
EQUIP VI

I'm curious to know what others think of this change/feature. Does it
make sense? Are there reasons to want a TestCase that has no test
methods?

print "\nVERSION of BDB is #{BDB::VERSION}\n"
if BDB::VERSION_MAJOR < 3
   print "\t\tno test for this version\n"
   exit
end

pigeon% ruby tests/queue.rb

VERSION of BDB is Sleepycat Software: DB 2.4.14: (6/2/98)
                no test for this version
Loaded suite tests/queue
Started

Failure!!!
run:
No tests were run.

Finished in 0.001561 seconds.
0 tests, 0 assertions, 1 failures, 0 errors
pigeon%

but I'm happy with this

Guy Decoux

A failure, in a test unit sense, means strictly (to me) that an
assertion has failed, and an error means strictly that an uncaught
exception was detected.

I see no need to mark the lack of tests with a failure: one, it’s
misleading; and two, it’s inconvenient. I haven’t done it myself, but
I would do this: decide my tests for a certain class are no longer
valid, delete them, but leave the shell (the TestCase) there so I can
create some new ones in the future.

I suggest emitting a warning in the event of an empty TestCase.
Perhaps it could be (optionally) made a failure with a method like
TestCase#assert_tests_exist=, or some other approach.

Gavin

···

On Sunday, January 5, 2003, 2:46:04 AM, nathaniel wrote:

If you tell Test::Unit to run something, and there are no tests to run
in what you told it to run, it will give you an error. If you add a test
to TC_Foo, it should start passing.

I’m curious to know what others think of this change/feature. Does it
make sense? Are there reasons to want a TestCase that has no test
methods? I added the error at someone’s request, but I’m willing to take
it out again if there are compelling reasons to do so.

A failure, in a test unit sense, means strictly (to me) that
an assertion has failed, and an error means strictly that an
uncaught exception was detected.

In general I agree, although I’m trying to figure out how important the
’strictly’ part is. I guess you could think of this failure as just the
default test in TestCase; if there are no other tests, a default test is
run that always fails.
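That "default test" idea can be sketched in plain Ruby (an illustration of the concept only, not the actual Test::Unit implementation): the base class contributes one always-failing test, and it is used only when a subclass defines no tests of its own.

```ruby
# Sketch of a "default test": when reflection finds no test methods
# defined directly on the subclass, fall back to a built-in test that
# always fails with "No tests were specified."
class SketchTestCase
  def default_test
    raise "No tests were specified."
  end

  def self.tests_to_run
    own = public_instance_methods(false).map(&:to_s).grep(/\Atest/)
    own.empty? ? ["default_test"] : own
  end
end

class TC_Blank < SketchTestCase; end

class TC_WithTest < SketchTestCase
  def test_something; end
end

TC_Blank.tests_to_run     # => ["default_test"]
TC_WithTest.tests_to_run  # => ["test_something"]
```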

I see no need to mark the lack of tests with a failure: one,
it’s misleading;

Why is it misleading? The failure message seems fairly clear (although
clarity is one thing there’s always room for more of):

Loaded suite TC
Started

Failure!!!
run:
No tests were run.

Finished in 0.0 seconds.
0 tests, 0 assertions, 1 failures, 0 errors

The one thing about it that I think IS misleading is that, if you bundle
up an empty TestCase with a non-empty TestCase and run them together,
you won’t get a failure anymore - the check is only made at the runner
level. I need to at the very least make that consistent.
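The inconsistency can be sketched in plain Ruby: model each TestCase as a name-to-test-count pair and compare a runner-level check against a per-case check (all names here are illustrative):

```ruby
# A check made only at the runner level sees the combined total, so an
# empty case bundled with a full one slips through; a per-case check
# flags each empty TestCase individually.
def runner_level_failure?(suites)
  suites.values.sum.zero?
end

def empty_cases(suites)
  suites.select { |_name, count| count.zero? }.keys
end

bundle = { "TC_Empty" => 0, "TC_Full" => 3 }

runner_level_failure?(bundle)  # => false -- TC_Empty slips through
empty_cases(bundle)            # => ["TC_Empty"]
```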

and two, it’s inconvenient. I haven’t done
it myself, but I would do this: decide my tests for a certain
class are no longer valid, delete them, but leave the shell
(the TestCase) there so I can create some new ones in the future.

I both agree and disagree with the inconvenience of it. Yes, if you keep
around a lot of unused TestCases, it would be somewhat inconvenient, but
I’m not sure that’s a practice that ought to be supported. As an XP’er,
I’d say, “YAGNI!” and delete the TestCase as soon as it’s empty, knowing
that it’s cinchy to add back. If I know that I’m going to add another
test to that TestCase in a minute, then the failure is a boon, because
it keeps me from getting distracted and forgetting to add the test.

I guess what I’m saying is that the failure wouldn’t hurt my way of
working, and might even help it sometimes. It’s not my goal, however, to
impose my development methodology on others, so it could very well be
that this ought to go.

I suggest emitting a warning in the event of an empty
TestCase.

Warnings in tests are too mushy for me; I want red or green, true or
false. I’m much more inclined to drop the failure altogether than to add
the concept of warnings to the framework.

I’m leaning towards leaving the failure, but making it more consistent
as noted above. My mind isn’t really made up yet, though, so I’d like to
hear some more on this first (if there are any other opinions). I’m also
going to try to dig up the original request for this feature to present
"the other side", as it were.

Thanks for listenin’

Nathaniel

<:((><

···

Gavin Sinclair [mailto:gsinclair@soyabean.com.au] wrote:

RoleModel Software, Inc.
EQUIP VI

Hi,

···

In message “Re: Test::Unit fails w/no tests [was: testunit 0.1.6 problems]” on 03/01/09, nathaniel@NOSPAMtalbott.ws writes:

In general I agree, although I’m trying to figure out how important the
’strictly’ part is. I guess you could think of this failure as just the
default test in TestCase; if there are no other tests, a default test is
run that always fails.

It’s not good for my mental health, since “rubicon” has many empty
test cases for unknown reason.

						matz.

A failure, in a test unit sense, means strictly (to me) that
an assertion has failed, and an error means strictly that an
uncaught exception was detected.

In general I agree, although I’m trying to figure out how important the
’strictly’ part is. I guess you could think of this failure as just the
default test in TestCase; if there are no other tests, a default test is
run that always fails.

But still, no uncaught exception was detected.

I see no need to mark the lack of tests with a failure: one,
it’s misleading;

Why is it misleading? The failure message seems fairly clear (although
clarity is one thing there’s always room for more of): […]

For it to be really clear, the failure needs to be identified by what
it is, therefore the message “Failure!!! No tests were specified.”

and two, it’s inconvenient. I haven’t done
it myself, but I would do this: decide my tests for a certain
class are no longer valid, delete them, but leave the shell
(the TestCase) there so I can create some new ones in the future.

I both agree and disagree with the inconvenience of it. Yes, if you keep
around a lot of unused TestCases, it would be somewhat inconvenient, but
I’m not sure that’s a practice that ought to be supported. As an XP’er,
I’d say, “YAGNI!” and delete the TestCase as soon as it’s empty, knowing
that it’s cinchy to add back. If I know that I’m going to add another
test to that TestCase in a minute, then the failure is a boon, because
it keeps me from getting distracted and forgetting to add the test.

OTOH, it’s “cinchy” for the user to put a tautological fail case in
there. I actually have an editor shortcut which expands to

fail("Not implemented!")

I thus track outstanding items on a method-by-method basis.
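Gavin's placeholder pattern might look like this in plain Ruby (a sketch only: in real use the class would subclass Test::Unit::TestCase, and the failure would come from flunk/fail inside the framework rather than a raw raise; all names are illustrative):

```ruby
# Each unimplemented test carries an explicit failure, so outstanding
# work is tracked method by method instead of by an empty TestCase.
class TC_ParserSketch
  def test_parse_integers
    raise "Not implemented!"
  end

  def test_parse_floats
    raise "Not implemented!"
  end
end

begin
  TC_ParserSketch.new.test_parse_integers
rescue RuntimeError => e
  e.message  # => "Not implemented!"
end
```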

One reason for not deleting a TestCase is that if you do think you
are going to use it soon-ish, then it’s a hassle to delete files from
a CVS repository and reuse them later.

Another comment: I like unit testing, but I’m not a confirmed XP’er.
In my mind, there’s nothing whatsoever wrong with an empty unit test.
Just like there’s nothing wrong with an empty class/method. It’s a
placeholder. I’d like to see Test::Unit provide for users with a
broad range of methodologies.

I guess what I’m saying is that the failure wouldn’t hurt my way of
working, and might even help it sometimes. It’s not my goal, however, to
impose my development methodology on others, so it could very well be
that this ought to go.

For those reasons, I believe it ought to go.

I suggest emitting a warning in the event of an empty
TestCase.

Warnings in tests are too mushy for me; I want red or green, true or
false. I’m much more inclined to drop the failure altogether than to add
the concept of warnings to the framework.

Fair enough.

I’m leaning towards leaving the failure, but making it more consistent
as noted above. My mind isn’t really made up yet, though, so I’d like to
hear some more on this first (if there are any other opinions). I’m also
going to try to dig up the original request for this feature to present
"the other side", as it were.

If opinion is split, perhaps it could be a command line option. I
would generally avoid option proliferation, but I think
"--fail-empty-tests" has a nice ring to it :) I’d probably use it
once in a while to look for gaps, but to boil it down to one sentence,
I would not use it all the time, because:

When I see failures, I want real failures, and I do not want to have
to “spoonfeed the compiler” just to avoid them.

Thanks for listenin’

Thanks for asking :)

Nathaniel

Gavin

···

On Thursday, January 9, 2003, 6:56:44 AM, nathaniel wrote:

Gavin Sinclair [mailto:gsinclair@soyabean.com.au] wrote:

nathaniel@NOSPAMtalbott.ws wrote:

[…] I’m not sure that’s a practice that ought to be supported. As an XP’er,
I’d say, “YAGNI!” and delete the TestCase as soon as it’s empty, knowing
that it’s cinchy to add back […]

[…] I’d like to hear some more on this first (if there are any other opinions).

I like the failure when you have an empty TestCase for the XP YAGNI reason
you stated above.

···


Bil Kleb
NASA Langley Research Center
Hampton, Virginia, USA

Hi –

···

On Thu, 9 Jan 2003 nathaniel@NOSPAMtalbott.ws wrote:

Gavin Sinclair [mailto:gsinclair@soyabean.com.au] wrote:

A failure, in a test unit sense, means strictly (to me) that
an assertion has failed, and an error means strictly that an
uncaught exception was detected.

In general I agree, although I’m trying to figure out how important the
’strictly’ part is. I guess you could think of this failure as just the
default test in TestCase; if there are no other tests, a default test is
run that always fails.

Could there be a way to hook or override the default test, so that one
could relatively easily change the behavior?

David


David Alan Black
home: dblack@candle.superlink.net
work: blackdav@shu.edu
Web: http://pirate.shu.edu/~blackdav

In my mind, there’s nothing whatsoever wrong with an empty unit test.

I tend to agree with Gavin that an empty unit test shouldn’t be flagged as a
failure. I use them for placeholders too.

Yukihiro Matsumoto wrote:

It’s not good for my mental health, since “rubicon” has many empty
test cases for unknown reason.
Think of them as an invitation to write code :)

Hi,

From: “Bil Kleb” W.L.Kleb@larc.nasa.gov
Sent: Thursday, January 09, 2003 7:56 PM

[…] I’m not sure that’s a practice that ought to be supported. As an XP’er,
I’d say, “YAGNI!” and delete the TestCase as soon as it’s empty, knowing
that it’s cinchy to add back […]

[…] I’d like to hear some more on this first (if there are any other opinions).

I like the failure when you have an empty TestCase for the XP YAGNI reason
you stated above.

+1 to failure, to keep the health of the tests.
An empty testcase is exactly a bug in the testcase.

Regards,
// NaHi

[…] I’m not sure that’s a practice that ought to be supported. As an XP’er,
I’d say, “YAGNI!” and delete the TestCase as soon as it’s empty, knowing
that it’s cinchy to add back […]

[…] I’d like to hear some more on this first (if there are any other opinions).

I like the failure when you have an empty TestCase for the XP YAGNI reason
you stated above.

can I clarify this - the claim is that a test without any assert checks is a
fail - not an empty test, yes? I assume that there is no way of determining
if the method is in fact empty? yes?

for instance - does this constitute a failure?

def test_null_object
    nullObject = NullObject.new
    nullObject.method1
    nullObject.method2
    nullObject.method3
end

to check that I have a null object that implements those methods and doesn’t
raise errors when called.

as I understand the thread, because this has no assert statements, it would
be deemed a failure - correct?

cheers
dim

dblack@candle.superlink.net wrote:

Could there be a way to hook or override the default test, so
that one could relatively easily change the behavior?

Sure, it’s trivial, and that’s really at the core of the matter: where
do you introduce the overhead? Do you introduce it for those who want a
failure whenever they have an empty TestCase? Or, do you introduce it
for those who want to keep empty TestCases around? I think it has to be
one or the other - one set of users will have a bit of extra work, and
the other group of users will be able to start working out of the box.

To be honest, I still lean towards making the default to fail on an
empty TestCase. What’s a failure? It’s an invitation to implement.
What’s an empty TestCase? It’s an invitation to implement (as Dave
pointed out in ruby-talk:60964). If you want to silence the invitation,
you just add an empty test to match your empty test case.

Nathaniel

<:((><

···

RoleModel Software, Inc.
EQUIP VI

In general I agree, although I’m trying to figure out how important
the ‘strictly’ part is. I guess you could think of this failure as
just the default test in TestCase; if there are no other tests, a
default test is run that always fails.

But still, no uncaught exception was detected.

Right. It should be a failure if it is anything.

For it to be really clear, the failure needs to be identified
by what it is, therefore the message “Failure!!! No tests
were specified.”

Agreed.

I both agree and disagree with the inconvenience of it. Yes, if you
keep around a lot of unused TestCases, it would be somewhat
inconvenient, but I’m not sure that’s a practice that ought to be
supported. As an XP’er, I’d say, “YAGNI!” and delete the TestCase as
soon as it’s empty, knowing that it’s cinchy to add back. If I know
that I’m going to add another test to that TestCase in a minute, then
the failure is a boon, because it keeps me from getting distracted and
forgetting to add the test.

OTOH, it’s “cinchy” for the user to put a tautological fail
case in there.

I absolutely agree. As I just said in ruby-talk:61064, it’s really
pretty easy either way; it’s just a matter of which class of users have
to do a bit more work: those who want a failure for an empty, or those
who don’t.

One reason for not deleting a TestCase is that if you do
think you are going to use it soon-ish, then it’s a hassle to
delete files from a CVS repository and reuse them later.

Right. But is the failure such a bad thing in that case? It’s just
reminding you to get back to it.

Another comment: I like unit testing, but I’m not a confirmed
XP’er. In my mind, there’s nothing whatsoever wrong with an
empty unit test. Just like there’s nothing wrong with an
empty class/method. It’s a placeholder. I’d like to see
Test::Unit provide for users with a broad range of methodologies.

I guess where I’m coming from is that I’d rather have a unit testing
framework make too much noise as opposed to too little. Those who want
less noise can easily filter it by adding a blank test. But perhaps it
should be the other way around.

I guess what I’m saying is that the failure wouldn’t hurt my way of
working, and might even help it sometimes. It’s not my goal, however,
to impose my development methodology on others, so it could very well
be that this ought to go.

For those reasons, I believe it ought to go.

Well, others have chimed in to say that it’s not just an XP thing. So I
think perhaps that’s not a good enough argument to get rid of it.

If opinion is split, perhaps it could be a command line
option. I would generally avoid option proliferation, but I
think “--fail-empty-tests” has a nice ring to it :) I’d
probably use it once in a while to look for gaps, but to boil
it down to one sentence, I would not use it all the time, because:

When I see failures, I want real failures, and I do not want to have
to “spoonfeed the compiler” just to avoid them.

I can certainly understand and respect that viewpoint. Not sure if I’ll
go with it in the end, but only time will tell.

BTW, thanks for all the feedback. This dialogue has been very helpful to
my thinking on the matter.

Nathaniel

<:((><

···

Gavin Sinclair [mailto:gsinclair@soyabean.com.au] wrote:

On Thursday, January 9, 2003, 6:56:44 AM, nathaniel wrote:
RoleModel Software, Inc.
EQUIP VI

In my mind, there’s nothing whatsoever wrong with an empty unit test.

I tend to agree with Gavin that an empty unit test shouldn’t be flagged as a
failure. I use them for placeholders too.

I can see that. But playing devil’s advocate, I think I see
the reasoning the other way also.

Aren’t the XPers always saying, “Write the test first. Since the code
doesn’t exist yet, the test will fail.” A new test, then, fails by
default. This was perhaps someone’s thinking. But I haven’t read this
whole thread.

I could live with it either way. I don’t test the way I should.

Hal

···

----- Original Message -----
From: “Mike Campbell” michael_s_campbell@yahoo.com
To: “ruby-talk ML” ruby-talk@ruby-lang.org
Sent: Wednesday, January 08, 2003 7:52 PM
Subject: Re: Test::Unit fails w/no tests [was: testunit 0.1.6 problems]

I’ll add that in my last Ruby project, I actually wrote my own test
running code so that each test class was run individually (instead of
all test methods being thrown into a large soup and executed
homogeneously). The output was something like:

                                      ----- TestFroboz

[normal Test::Unit output]

                                      ----- TestQuarkXP

[normal Test::Unit output]

etc.

I think I made it optionally run the default way. Sometimes I want to
see combined outputs (just for that reassuring “0 errors, 0
failures”), and other times I want to see the output separated.
Motivations:

  • I know something’s failing in one area but am ignoring that for now
  • I want to see how many tests are being performed in each class
  • I want a better breakdown of where things are happening

The second motivation specified is the point of all this. I know
that I don’t have enough test coverage; this helps me find where
coverage is low, without spitting “failures” at me. Of course, I
consider some test methods critical; they have an automatic failure
with the message “Not implemented”.

Summary: there are plenty of ways to manage your tests, and inventing
failures when there was no code to fail seems to me to be a bad fit.

Gavin

···

On Thursday, January 9, 2003, 12:52:33 PM, Mike wrote:

In my mind, there’s nothing whatsoever wrong with an empty unit test.

I tend to agree with Gavin that an empty unit test shouldn’t be flagged as a
failure. I use them for placeholders too.

Not necessarily. And not everyone practises XP as such. (I don’t like major
parts of it, and other parts are just good sense). As a counterargument, I
would like to put forth the following pieces of code:

···

On Thu, 9 Jan 2003 09:35 pm, NAKAMURA, Hiroshi wrote:

I like the failure when you have an empty TestCase for the XP YAGNI
reason you stated above.

+1 to failure, to keep the health of the tests.
An empty testcase is exactly a bug in the testcase.

===

Would fail if empty, so I add something arbitrary to prevent this.

class TC_EmptySample < Test::Unit::TestCase
  def test_nothing
    'foo'
  end
end

Will fail, just like I want.

class TC_ImplementLater < Test::Unit::TestCase
end

IMHO, this is a lot uglier than the alternative:

===

Won’t fail, just like I want.

class TC_EmptySample < Test::Unit::TestCase
end

Want it to fail, to remind me to implement.

class TC_ImplementLater < Test::Unit::TestCase
  def test
    flunk("Not implemented!")
  end
end

The second example makes it quite clear what fails, and why.

On a related note, I can’t get this failure to happen - must be exclusive to
0.1.6 (I have 0.1.4) but I can’t find the start of this thread. Can someone
please tell me if you get this failure when subclassing a test case with an
empty class? For example:

class TC_Super < Test::Unit::TestCase
  def testme
    assert_equal(2, 1 + 1, "Basic axioms of mathematics are wrong!")
  end
end

class TC_Sub < TC_Super
end

Does TC_Sub have this failure? If so, it will break my tests which is another
argument against it…

Tim Bates

tim@bates.id.au

can I clarify this - the claim is that a test without any
assert checks is a fail - not an empty test, yes? I assume
that there is no way of determining if the method is in fact
empty? yes?

for instance - does this constitute a failure?

def test_null_object
    nullObject = NullObject.new
    nullObject.method1
    nullObject.method2
    nullObject.method3
end

to check that I have a null object that implements those
methods and doesn’t raise errors when called.

as I understand the thread, because this has no assert
statements, it would be deemed a failure - correct?

No, it wouldn’t… the only thing that is (currently) an error is
running an empty TestSuite… as soon as you add an empty test, the
error will disappear.

Nathaniel

<:((><

···

Dmitri Colebatch [mailto:dim@colebatch.com] wrote:

RoleModel Software, Inc.
EQUIP VI

But it’s distracting you from real failures as well.

My main issue is that (no test -> failure) is not theoretically sound.
A failure, contrary to what I said before, is an assertion that, well,
fails. (I said it was an uncaught exception, which is an error.)

So if there is no test, and therefore no assertion, there is no
failure.

Either that, or the definition of failure changes. It is broadened,
therefore weakened. The definition as it stands makes perfect
unit-testing sense.

This is all pretty high-falutin’ theory. But I think software should
be theoretically sound, especially frameworks. We all know that dodgy
hacks can come back to haunt us. Well, dodging the theory is a small
step down that road.

Sorry for harping on about it. It’s good that lots of people have
chimed in. I’ll leave it alone, now, and won’t complain about
whatever decision you make.

Cheers,
Gavin

···

On Friday, January 10, 2003, 3:01:45 PM, nathaniel wrote:

One reason for not deleting a TestCase is that if you do
think you are going to use it soon-ish, then it’s a hassle to
delete files from a CVS repository and reuse them later.

Right. But is the failure such a bad thing in that case? It’s just
reminding you to get back to it.