Recently I've been trying to use Ruby's Test::Unit framework to test a network server. When I do things a certain way, it works great: it runs all my tests and gives the expected results. However, there are a few things I can't figure out how to do.
Right now, I have setup and teardown methods that make the TCP connection to the server. If these are simple, like:
def setup
  @sock = TCPSocket.new($host, $port)
end

def teardown
  @sock.close
end
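For reference, the whole file is shaped roughly like this (TC_Network, test_connected, and the hard-coded host and port are placeholder names and values, not my real code):

require 'socket'
require 'test/unit'

$host = 'localhost'   # placeholders for now -- see my question about
$port = 4000          # command-line parameters below

class TC_Network < Test::Unit::TestCase
  def setup
    @sock = TCPSocket.new($host, $port)
  end

  def teardown
    @sock.close
  end

  # trivial placeholder; my real tests exercise the server's protocol
  def test_connected
    assert(!@sock.closed?)
  end
end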
With that arrangement, everything works. However, I'd also like to be able to keep the same connection open across all the tests instead of reconnecting for each one. Every approach I've tried fails, and I'm not quite sure why.
I first tried:
def setup
  if @sock.nil?
    @sock = TCPSocket.new($host, $port)
  end
end
and commented out the teardown method, figuring that the socket would be created only once and that the program would take care of closing it when it exited. Instead, the first test works, but for all subsequent tests @sock is nil, and because the earlier sockets are never closed, TCPSocket.new eventually fails with a connection refused error. Does anybody know why @sock would become nil?
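I haven't dug into the framework's source, but here's a diagnostic I'm thinking of adding; my guess is that each test method gets its own instance of the test case:

def setup
  # If this prints a different object_id for every test, the framework
  # is creating a fresh instance of the test case for each test method,
  # which would explain why @sock doesn't carry over between tests.
  puts "setup: instance #{object_id}"
  @sock = TCPSocket.new($host, $port) if @sock.nil?
end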
I also tried defining an 'initialize' method to set up the socket at the start of the test case, but evidently 'initialize' is used internally in some sneaky way I don't understand.
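For what it's worth, my attempt was essentially this (reconstructed, details may be off):

def initialize
  @sock = TCPSocket.new($host, $port)
end

which failed before any test ran; my guess is that the framework constructs each test case with some argument (the test method's name?) that a no-argument initialize can't accept.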
So, could someone who understands the unit test framework better than I do explain how I can:
* Pass a hostname and port as command-line parameters to the test script
* Create a socket connected to that host/port
* Run a series of tests using that socket
* Disconnect the socket
Ideally, without using ugly things like global variables, etc.?
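To make the question concrete, here's a sketch of what I'm imagining, using class-level accessors instead of globals (the names and the ARGV handling are guesswork on my part, and it still reconnects for every test, which is the part I can't solve):

require 'socket'
require 'test/unit'

class TC_Network < Test::Unit::TestCase
  # class-level settings instead of $host/$port globals
  class << self
    attr_accessor :host, :port
  end

  def setup
    @sock = TCPSocket.new(self.class.host, self.class.port)
  end

  def teardown
    @sock.close
  end

  def test_connected   # placeholder test
    assert(!@sock.closed?)
  end
end

# Pull my own arguments off ARGV before the test runner parses it.
TC_Network.host = ARGV.shift || 'localhost'
TC_Network.port = (ARGV.shift || 4000).to_i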
Also, an obviously related question: how can I prevent unit tests from running just because my file happens to define a class deriving from Test::Unit::TestCase?
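As far as I can tell, even a file as trivial as this runs its test when the script exits, apparently because require 'test/unit' installs an at_exit hook that auto-runs every TestCase subclass it finds:

require 'test/unit'

class TC_Dormant < Test::Unit::TestCase
  def test_truth
    assert(true)
  end
end

# No runner is invoked anywhere above, yet the test still executes
# when the interpreter exits.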
Thanks,
Ben