This post is related to the other one I sent earlier. It is about a
requirement we have and how we might solve it elegantly using
Test::Unit. We’d like to attach several pieces of metadata to the test
collection to get a better picture of the state of the tests. At a
minimum, we’d like the following information:
- total number of test methods, cases, and suites.
- LOC per each test case and test suite, and total LOC dedicated to
testing (this will eventually be compared to total LOC of code, LOC of
comments, LOC of documentation, etc.); aside from LOC, number of
chars/bytes will also be used as another unit of measure.
- number (and, as % of LOC) of tests that are interactive (i.e. GUI tests).
- time-cost of each test case and total (i.e. we assign a number/weight
to each test case to get a feel of how long tests will run).
(Of course, some of the above, like #1 and #2, are easy to accomplish
with a couple of lines of Ruby…)
And there will also be lots of measurements to be put into the database
about the results after each run, e.g.: number of assertions, number of
test methods/cases that succeed/failed, actual run time of tests, etc.
···
–
dave
[snip]
We’d like to put several metadata into the test collection to get a
better picture of the state of the tests. At least we’d like to get the
following information:
- total number of test methods, cases, and suites.
I think you can extract info from Test::Unit.
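For example, counting test methods needs nothing more than reflection
over the test_* naming convention that Test::Unit itself relies on
(FooTest here is just a made-up example class):

```ruby
# Count test methods via Test::Unit's test_* naming convention.
class FooTest
  def test_alpha; end
  def test_beta;  end
  def helper;     end  # not a test method, should not be counted
end

test_methods = FooTest.public_instance_methods(false).map(&:to_s).grep(/\Atest_/)
puts test_methods.size  # => 2
```

Summing that over every TestCase subclass gives the totals for methods
and cases; suites would come from however the suites are organized.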
- LOC per each test case and test suite, and total LOC dedicated to
testing (this will eventually be compared to total LOC of code, LOC of
comments, LOC of documentation, etc); aside from LOC, number of
chars/bytes will also be used as another unit of measure.
LOC per test case is difficult… some parsing of Ruby code will be
necessary… maybe Ripper can be useful for extracting this info?
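As a stopgap before real parsing, a crude sketch (loc_per_test is a
hypothetical helper, not part of any library): it assumes conventional
indentation, with each test method opened by `def test_...` and closed
by an `end` at the same indent level, so it will miscount one-liners and
oddly indented code. Ripper would handle those cases properly.

```ruby
# Crude per-test-method LOC counter. Not a parser: it assumes each
# test method starts with "def test_..." and ends at the first "end"
# indented the same as the def.
def loc_per_test(source)
  loc = {}
  current = indent = nil
  source.each_line do |line|
    if current.nil? && line =~ /^(\s*)def\s+(test_\w+)/
      indent, current = $1, $2
      loc[current] = 1        # the def line itself
    elsif current
      loc[current] += 1
      current = nil if line =~ /^#{indent}end\b/
    end
  end
  loc
end
```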
- number (and, as % of LOC) of tests that are interactive (i.e. GUI tests).
coverage.rb may help identify which pieces of code haven’t been
reached during execution.
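coverage.rb aside, the underlying idea can be sketched with
set_trace_func (crude and slow, and trace_reached_lines is a
hypothetical helper name): record every file:line executed, then diff
that set against the lines belonging to the GUI-driving code.

```ruby
require 'set'

# Record every [file, line] pair executed while the block runs.
# Crude and slow; real coverage tools hook the interpreter instead.
def trace_reached_lines
  reached = Set.new
  set_trace_func proc { |event, file, line, *|
    reached << [file, line] if event == 'line'
  }
  yield
  reached
ensure
  set_trace_func(nil)
end
```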
- time-cost of each test case and total (i.e. we assign a number/weight
to each test case to get a feel of how long tests will run).
Extend the TestCase class with a #time_score method, so that you can
collect info about how much time each test will require before invoking
the tests. Something like:
# file: test_hours.rb
require 'common'

class TestScanner < Common::TestCase
  def test_longevity
    sleep(100)  # 100 seconds
  end

  def test_ping
    # ping server, get feedback… 50 seconds
  end

  time_score(:test_longevity, 100)
  time_score(:test_ping, 50)
end

TestScanner.run if $0 == __FILE__

# file: common.rb
require 'test/unit'

module Common
  class TestCase < Test::Unit::TestCase
    def self.time_score(symbol, seconds)
      (@time ||= {})[symbol] = seconds
    end
  end
end
Then the #run method would be able to report how much time is required
by each test case.
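Self-contained, that bookkeeping might look like the following sketch
(total_time_score is a hypothetical name, and a real version would
inherit from Test::Unit::TestCase rather than a bare class):

```ruby
# Minimal sketch of collecting and summing per-test time scores.
class TestCase
  def self.time_score(symbol, seconds)
    (@time ||= {})[symbol] = seconds  # one hash per subclass
  end

  def self.total_time_score
    (@time || {}).values.inject(0) { |sum, s| sum + s }
  end
end

class TestScanner < TestCase
  time_score(:test_longevity, 100)
  time_score(:test_ping, 50)
end

puts TestScanner.total_time_score  # => 150
```

A #run override could print this total up front, giving a rough ETA
before any test executes.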
···
On Sat, 28 Feb 2004 22:58:56 +0900, David Garamond wrote:
–
Simon Strandgaard
David Garamond wrote:
We’d like to put several metadata into the test collection to get a
better picture of the state of the tests.
Personally, I am usually only interested in the number of tests failing.
- total number of test methods, cases, and suites.
I guess function points would be somewhere between cases and suites?
- LOC per each test case and test suite, and total LOC dedicated to
testing (this will eventually be compared to total LOC of code, LOC of
comments, LOC of documentation, etc); aside from LOC, number of
chars/bytes will also be used as another unit of measure.
AFAIK, LOC is out as a worthwhile measure; see DeMarco’s Slack and the
Poppendiecks’ Lean Software Development.
- number (and, as % of LOC) of tests that are interactive (i.e. GUI
tests).
Sounds like a challenge for Phlip (to reduce the number of interactive GUI tests).
And there will also be lots of measurements to be put into the database
about the results after each run, e.g.: number of assertions, number of
test methods/cases that succeed/failed, actual run time of tests, etc.
Have you seen the BulkTestRunner class that is part of Rubicon? It
produces a console summary like the following:
···
========================================================================
                 Mobility Test Summary Test Results

 Name                   OK?   Tests  Asserts  Failures  Errors
 -------------------------------------------------------------
 Array2DUT                        3        6
 BoundaryProfileUT                7       31
 CartesianFaceUT                  4       16
 CaseUT                           3        7
 CellCollectionUT                 8       47
 CellUT                           3        9
 CmdLineOptsUT                    4        6
 FaceUT                           5       13
 GridLevelUT                      7       20
 GridUT                           3        4
 HyperbolicFunctionsUT            3        9
 InteriorCellUT         FAIL     13       32         1
 LimiterUT                        4       10
 LineFitUT                        1        2
 PhysicsUT                        3        8
 ScalarUT                         3        3
 VectorUT                         6        7

 All 17 files           FAIL     80      230         1       0
========================================================================
Failure Report
========================================================================
InteriorCellUT:
./InteriorCellUT.rb:49:in `testParameterLookupByReynoldsNumber'
....<1> expected but was
<2>.
========================================================================
If you search blade, you should be able to find my stand-alone version of it.
Regards,
Bil Kleb, NASA, Hampton, Virginia, USA